Technical report from Automatic Control at Linköpings universitet
An inexact interior-point method for
semi-definite programming, a description
and convergence proof
Janne Harju, Anders Hansson
Division of Automatic Control
E-mail: harju@isy.liu.se, hansson@isy.liu.se
14th September 2007
Report no.: LiTH-ISY-R-2819
Address:
Department of Electrical Engineering Linköpings universitet
SE-581 83 Linköping, Sweden
WWW: http://www.control.isy.liu.se
Technical reports from the Automatic Control group in Linköping are available from http://www.control.isy.liu.se/publications.
Abstract
In this report we investigate convergence for an infeasible interior-point method for semidefinite programming.
Keywords: LMI optimization; interior-point methods; iterative computation; convergence.
Inexact search directions in interior-point methods for semi-definite programming have been considered in [3], where a proof of convergence is also given. However, because the method considered there was a feasible method, the inexact search direction had to be projected onto the feasible space at a high computational cost. In this report we instead investigate convergence for an infeasible interior-point method for semidefinite programming.
1 Optimization problem
Let X be a finite-dimensional real vector space with an inner product ⟨·, ·⟩_X : X × X → ℝ and define the linear mappings
A : X → Sⁿ  (1)
A* : Sⁿ → X  (2)
where Sⁿ denotes the space of symmetric matrices of size n × n and where A* is the adjoint of A. The space Sⁿ has the inner product ⟨X, Y⟩_Sⁿ = Tr(XYᵀ). We will, with abuse of notation, use the same notation ⟨·, ·⟩ for inner products defined on different spaces when the inner product used is clear from context. Now consider the primal and dual optimization problems
min ⟨c, x⟩  (3)
s.t. A(x) + M0 = S  (4)
     S ⪰ 0  (5)

max −⟨M0, Z⟩  (6)
s.t. A*(Z) = c  (7)
     Z ⪰ 0  (8)
where c, x ∈ X and S, Z ∈ Sⁿ. Additionally define z = (x, S, Z) and the corresponding finite-dimensional vector space Z = X × Sⁿ × Sⁿ with its inner product ⟨·, ·⟩_Z. We define the corresponding 2-norm ‖·‖₂ : Z → ℝ by ‖z‖₂² = ⟨z, z⟩. We notice that the 2-norm of a matrix with this definition is the Frobenius norm and not the induced 2-norm.
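To make the norm convention concrete, the following sketch (an illustration only; it assumes X = ℝᵐ with the standard inner product, which the report leaves abstract) checks that ⟨X, Y⟩ = Tr(XYᵀ) on Sⁿ reduces to the Frobenius norm, and that ⟨z, z⟩ sums the squared norms of the three components of z = (x, S, Z).

```python
import numpy as np

rng = np.random.default_rng(0)

# Inner product on S^n: <X, Y> = Tr(X Y^T)
def inner_sym(X, Y):
    return np.trace(X @ Y.T)

# Inner product on Z = X x S^n x S^n, with X = R^m assumed for illustration
def inner_Z(z1, z2):
    x1, S1, Z1 = z1
    x2, S2, Z2 = z2
    return x1 @ x2 + inner_sym(S1, S2) + inner_sym(Z1, Z2)

n, m = 4, 3
S = rng.standard_normal((n, n))
S = S + S.T                       # symmetric
z = (rng.standard_normal(m), S, S.copy())

# For a matrix, the 2-norm of Section 1 is the Frobenius norm
assert np.isclose(inner_sym(S, S), np.linalg.norm(S, 'fro') ** 2)
# ||z||_2^2 = <z, z> sums the squared component norms
assert np.isclose(inner_Z(z, z),
                  np.linalg.norm(z[0])**2 + 2 * np.linalg.norm(S, 'fro')**2)
```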
1.1 Optimality conditions
If strong duality holds, then the Karush-Kuhn-Tucker conditions define the solution to the primal and dual optimization problems, page 244 in [1]. The Karush-Kuhn-Tucker conditions for the optimization problems defined in the previous section are
A(x) + M0= S (9)
A∗(Z) = c (10)
ZS = 0 (11)
S ⪰ 0, Z ⪰ 0  (12)
It is assumed that the mapping A has full rank.
Definition 1.1 The complementary slackness ν is defined as

ν = ⟨Z, S⟩/n  (13)
Definition 1.2 Define the central-path as the solution points for
A(x) + M0= S (14)
A∗(Z) = c (15)
ZS = νI (16)
S ⪰ 0, Z ⪰ 0  (17)
where ν ≥ 0. Note that the central-path converges to a solution of the Karush-Kuhn-Tucker conditions when ν tends to zero.
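A central-path point is easy to construct numerically for a toy instance: given any S ≻ 0 and ν > 0, choosing Z = νS⁻¹ satisfies the centrality condition (16), and its complementary slackness (13) equals ν. The sketch below is only an illustration of Definitions 1.1 and 1.2, not a solution of the full system (14)-(15).

```python
import numpy as np

rng = np.random.default_rng(1)
n, nu = 4, 0.5

# Build S > 0 (positive definite), then choose Z so that Z S = nu*I holds
A = rng.standard_normal((n, n))
S = A @ A.T + n * np.eye(n)      # positive definite
Z = nu * np.linalg.inv(S)        # Z S = nu*I by construction; Z > 0 since S > 0

assert np.allclose(Z @ S, nu * np.eye(n))
# The complementary slackness of this point equals nu: <Z, S>/n = Tr(ZS)/n = nu
assert np.isclose(np.trace(Z @ S) / n, nu)
```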
2 Interior-point method
For a thorough description of algorithms and theory in the area of interior-point methods, see [12] for linear programming, while [11] gives an extensive overview of semidefinite programming. Here follows a brief discussion of the relaxed KKT conditions to be solved when using an infeasible interior-point method. Then an inexact infeasible predictor-corrector method for semidefinite programming is described in detail.
2.1 Infeasible interior-point method
In this section an infeasible interior-point method is discussed. Such a method is initiated with a feasible or infeasible point z, and its iterates then tend toward feasibility and optimality by computing a sequence of search directions and taking steps in these directions. To derive equations for the search directions, the next iterate z⁺ = z + ∆z is introduced and inserted into (14)-(17). This gives a nonlinear system of equations for ∆z. Even after linearization, the variables ∆S and ∆Z of the solution ∆z to these equations are not guaranteed to be symmetric, since this requirement is only implicit. A remedy is to introduce the symmetry transformation H : ℝⁿˣⁿ → Sⁿ defined by
H(X) = (1/2)( R⁻¹XR + (R⁻¹XR)ᵀ )  (18)
where R ∈ ℝⁿˣⁿ is the so-called scaling matrix. For a thorough description of scaling matrices, see [11] and [13]. In [13] it is shown that the relaxed complementary slackness condition ZS = νI is equivalent to
H(ZS) = νI (19)
for any nonsingular matrix R. Hence we may replace (16) with (19). Replacing z with the next iterate z⁺ = z + ∆z in (14), (15), (17) and (19) results in
A(∆x) − ∆S = −(A(x) + M0− S) (20)
A∗(∆Z) = (c − A∗(Z)) (21)
H(∆ZS + Z∆S) = νI − H(ZS) − H(∆Z∆S) (22)
S + ∆S ≻ 0, Z + ∆Z ≻ 0  (23)
If the nonlinear term in (22) is ignored, ∆S and ∆Z will be symmetric. Several approaches to handling the nonlinear term in the complementary slackness equation have been presented in the literature. A direct solution is to ignore the higher-order term, which gives a linear system of equations. Another approach is presented in [6].
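The symmetry transformation and the equivalence underlying (19) can be checked numerically. The sketch below (an illustration; the scaling matrix R here is an arbitrary nonsingular matrix, not a Nesterov-Todd scaling) verifies that H maps any square matrix to a symmetric one and that ZS = νI implies H(ZS) = νI.

```python
import numpy as np

rng = np.random.default_rng(2)
n, nu = 4, 0.3

def H(X, R):
    """Symmetry transformation H(X) = (R^{-1} X R + (R^{-1} X R)^T) / 2."""
    Y = np.linalg.inv(R) @ X @ R
    return 0.5 * (Y + Y.T)

# Some nonsingular scaling matrix (diagonally dominant, hence invertible)
R = rng.standard_normal((n, n)) + n * np.eye(n)

# H maps any square matrix to a symmetric one
X = rng.standard_normal((n, n))
assert np.allclose(H(X, R), H(X, R).T)

# If ZS = nu*I, then H(ZS) = nu*I for any nonsingular R
A = rng.standard_normal((n, n))
S = A @ A.T + n * np.eye(n)      # positive definite
Z = nu * np.linalg.inv(S)        # forces Z S = nu*I
assert np.allclose(H(Z @ S, R), nu * np.eye(n))
```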
2.2 Inexact predictor-corrector method
Here an inexact infeasible predictor-corrector method is presented. The idea of using inexact search directions has been applied to model predictive control applications in [4] and to monotone variational inequality problems in [8]. Inexact search directions have also been applied in a potential reduction method in [10] and [3].
In a predictor-corrector method alternating steps are taken. There are two separate objectives in the strategy. The predictor step decreases the duality gap while the corrector step moves the iterate towards the central-path. To this end the parameter ν in (22) is replaced with σν. Then for small values of σ a step is taken to reduce the complementary slackness ν, and for values of σ close to 1 a step to find an iterate close to the central path is taken. Then the linear system of equations to be solved for the search directions is
A(∆x) − ∆S = −(A(x) + M0− S) (24)
A∗(∆Z) = (c − A∗(Z)) (25)
H(∆ZS + Z∆S) = σνI − H(ZS) (26)
Lemma 2.1 If the operator A has full rank, i.e. A(x) = 0 implies that x = 0, and if Z ≻ 0 and S ≻ 0, then the linear system of equations (24)-(26) has a unique solution.
Proof See Theorem 10.2.2 in [11].
For later use and to obtain a simpler notation, define F : Z → Sⁿ × Sⁿ × Sⁿ as

F(z) = (Fp(z), Fd(z), Fc(z)) = (A(x) + M0 − S, A*(Z) − c, H(ZS))  (27)
where z = (x, S, Z). Also define

r = (rp, rd, rc) = (−(A(x) + M0 − S), c − A*(Z), σνI − H(ZS))  (28)
Now define the set Ω as

Ω = {z = (x, S, Z) | S ⪰ 0, Z ⪰ 0, ‖Fp(z)‖₂ ≤ βν, ‖Fd(z)‖₂ ≤ βν, γνI ⪰ Fc(z) ⪰ ηνI}  (29)
where the scalars β, γ and η will be defined later on. Then define the set Ω+ as
Ω+ = {z ∈ Ω | S ≻ 0, Z ≻ 0}  (30)
Finally, define the set S for which the Karush-Kuhn-Tucker conditions (9)-(12) are fulfilled:

S = {z | Fp(z) = 0, Fd(z) = 0, Fc(z) = 0, S ⪰ 0, Z ⪰ 0}  (31)

2.2.1 Algorithm
Below the overall algorithm is summarized, which is taken from [8] and adapted to semidefinite programming.
0. Initialize the counter j = 1 and choose 0 < η < η_max < 1, γ ≥ n, β > 0, κ ∈ (0, 1), 0 < σ_min < σ_max < 1/2, ε > 0, 0 < χ < 1 and z0 ∈ Ω.
1. Evaluate stopping criteria. If fulfilled, terminate the algorithm.
2. Choose σ ∈ (σ_min, σ_max).
3. Compute the scaling matrix R.
4. Solve (24)-(26) for the search direction ∆z_j with a residual tolerance εσβν/2.
5. Choose a step length α_j as the first element in the sequence {1, χ, χ², . . .} such that z_{j+1} = z_j + α_j∆z_j ∈ Ω and such that ν_{j+1} ≤ (1 − α_jκ(1 − σ))ν_j.
6. Update the variables, z_{j+1} = z_j + α_j∆z_j, and the counter j := j + 1.
7. Return to step 1.
Note that any iterate generated by the algorithm is in Ω, which is a closed set, since it is defined as an intersection of closed sets, see Section 3.3 for details.
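Step 5 of the algorithm above is a plain backtracking search over {1, χ, χ², ...}. The sketch below is a hypothetical illustration of that step only: `nu_of` and `in_omega` are placeholder callables standing in for the complementary slackness ν and the membership test z ∈ Ω, and the toy call at the end uses a scalar "iterate" rather than a triple (x, S, Z).

```python
# Hypothetical sketch of step 5 (backtracking line search); not the report's code.
def choose_step(z, dz, nu_of, in_omega, chi, kappa, sigma, max_tries=60):
    """Return the first alpha in {1, chi, chi^2, ...} such that
    z + alpha*dz lies in Omega and nu decreases sufficiently."""
    nu = nu_of(z)
    alpha = 1.0
    for _ in range(max_tries):
        z_new = z + alpha * dz
        if in_omega(z_new) and nu_of(z_new) <= (1 - alpha * kappa * (1 - sigma)) * nu:
            return alpha, z_new
        alpha *= chi
    raise RuntimeError("no acceptable step length found")

# Toy illustration: a scalar "iterate" whose nu is the value itself, Omega = (0, inf)
alpha, z1 = choose_step(1.0, -2.0,
                        nu_of=lambda z: z,
                        in_omega=lambda z: z > 0,
                        chi=0.5, kappa=0.1, sigma=0.25)
# alpha = 0.25 here: alpha = 1 and 0.5 leave Omega, while 0.25 gives z1 = 0.5
```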
3 Convergence
A global proof of convergence will be presented. This proof of convergence is due to [7]. It has been extended in [8], and inexact solutions of the equations for the search direction were considered in [4] for the application to model predictive control and in [2] for variational inequalities. The convergence result is that
either the sequence generated by the algorithm terminates at a solution to the Karush-Kuhn-Tucker conditions in a finite number of iterations, or all limit points, if any exist, are solutions to the Karush-Kuhn-Tucker conditions defined in (9)-(12). Here the proof is extended to the case of semidefinite programming.
3.1 Convergence of inexact interior-point method
In order to prove convergence some preliminary results are presented in the following lemmas.
Lemma 3.1 Any iterate generated by the algorithm in Section 2.2.1 is in Ω, which is a closed set. If z /∈ S and z ∈ Ω then z ∈ Ω+.
Proof Any iterate generated by the algorithm is in Ω from the definition of the algorithm. The set Ω is closed, since it is defined as an intersection of closed sets. For a detailed proof, see Section 3.3.
The rest of the proof follows by contradiction. Assume that z /∈ S, z ∈ Ω and z /∈ Ω+. Now study the two cases ν = 0 and ν > 0 separately. Note that ν ≥ 0 by definition.
First assume that ν = 0. Since ν = 0 and z ∈ Ω it follows that Fc(z) = 0, ‖Fp(z)‖₂ = 0 and ‖Fd(z)‖₂ = 0. This implies that Fp(z) = Fd(z) = 0. Note that z ∈ Ω also implies that Z ⪰ 0 and S ⪰ 0. Combining these conclusions gives that z ∈ S, which is a contradiction.
Now assume that ν > 0. Since ν > 0 and z ∈ Ω it follows that Fc(z) = H(ZS) ⪰ ηνI ≻ 0. To complete the proof two inequalities are needed. First note that det(H(ZS)) > 0. To find the second inequality the Ostrowski-Taussky inequality, see page 56 in [5],
det( (X + Xᵀ)/2 ) ≤ |det(X)|  (32)
is applied to (18). This gives
det(H(ZS)) ≤ |det(R⁻¹ZSR)| = |det(Z)| · |det(S)| = det(Z) · det(S)

where the last equality follows from z ∈ Ω. Combining the two inequalities gives

0 < det(H(ZS)) ≤ det(Z) · det(S)  (33)

Since the determinant is the product of all eigenvalues and z ∈ Ω, (33) shows that the eigenvalues are nonzero and therefore Z ≻ 0 and S ≻ 0. This implies that z ∈ Ω+, which is a contradiction.
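The Ostrowski-Taussky inequality (32) can be checked numerically. The sketch below is an illustration under the assumption that the symmetric part of X is positive definite, which is the situation in the proof since the symmetric part of R⁻¹ZSR is H(ZS) ≻ 0.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5

# Construct X = P + K with P symmetric positive definite and K skew-symmetric,
# so that the symmetric part of X is exactly P.
B = rng.standard_normal((n, n))
P = B @ B.T + np.eye(n)                # symmetric positive definite part
K = rng.standard_normal((n, n))
K = 0.5 * (K - K.T)                    # skew-symmetric part
X = P + K

# Ostrowski-Taussky: det((X + X^T)/2) <= |det X| when the symmetric part is PD
lhs = np.linalg.det(0.5 * (X + X.T))   # equals det(P)
rhs = abs(np.linalg.det(X))
assert lhs <= rhs + 1e-9
```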
The linear system of equations in (24)-(26) for the search direction is now rewritten as

(∂vec(F(z))/∂vec(z)) vec(∆z) = vec(r)  (34)
Note that the vectorization is used only for the theoretical proof of convergence. In practice, the equations are preferably solved with a solver based on the operator formalism.
Lemma 3.2 Assume that A has full rank. Let ẑ ∈ Ω+, and let ε ∈ (0, 1). Then there exist scalars δ̂ > 0 and α̂ ∈ (0, 1] such that if

‖(∂vec(F(z))/∂vec(z)) vec(∆z) − vec(r)‖₂ ≤ (ε/2)σβν,  (35)

(∂vec(Fc(z))/∂vec(z)) vec(∆z) − vec(rc) = 0  (36)

and if the algorithm takes a step from any point in

B = {z | ‖z − ẑ‖₂ ≤ δ̂}  (37)

then the calculated step length α will satisfy α ≥ α̂.
Proof See Section 3.2.
Now the global convergence proof is presented.
Theorem 3.3 Assume that A has full rank. Then for the iterates generated by the interior-point algorithm either
- z_j ∈ S for some finite j, or
- all limit points of {z_j} belong to S.
Remark Note that nothing is said about the existence of a limit point. It is only stated that if a convergent subsequence exists, then its limit point is in S. A sufficient condition for the existence of a limit point is that {zj} is uniformly bounded, [7].
Proof Suppose that the sequence {z_j} is infinite and that it has a subsequence which converges to ẑ ∉ S. Denote the corresponding index subsequence {j_i} by K. Then for all δ₀ > 0 there exists a k such that for j ≥ k and j ∈ K it holds that

‖z_j − ẑ‖₂ ≤ δ₀  (38)
Since ẑ ∉ S it holds by Lemma 3.1 that ẑ ∈ Ω+, and hence from Lemma 3.2 and (13) that there exist a δ̂ > 0 and α̂ ∈ (0, 1] such that for all z_j, j ≥ k, with ‖z_j − ẑ‖₂ ≤ δ̂ it holds that

ν_{j+1} − ν_j ≤ −α̂κ(1 − σ_max)ν_j < −α̂κ(1 − σ_max)δ̂² < 0  (39)

Now take δ₀ = δ̂. Then for two consecutive points z_j, z_{j+i} of the subsequence with j ≥ k and j ∈ K it holds that

ν_{j+i} − ν_j = (ν_{j+i} − ν_{j+i−1}) + (ν_{j+i−1} − ν_{j+i−2}) + · · · + (ν_{j+1} − ν_j) ≤ ν_{j+1} − ν_j ≤ −α̂κ(1 − σ_max)δ̂² < 0  (40)

Since K is infinite it follows that {ν_j}_{j∈K} diverges to −∞. The assumption ẑ ∈ Ω+ gives that ν̂ > 0, which is a contradiction since ν_j ≥ 0 by definition.
Lemma 3.4 If z ∈ B, then −δ̂I ⪯ S − Ŝ ⪯ δ̂I and −δ̂I ⪯ Z − Ẑ ⪯ δ̂I.

Proof We have ‖z − ẑ‖₂ ≤ δ̂. Hence ‖z − ẑ‖₂² ≤ δ̂² and

‖S − Ŝ‖₂² + ‖Z − Ẑ‖₂² + ‖x − x̂‖₂² ≤ δ̂².  (41)

Therefore ‖S − Ŝ‖₂² ≤ δ̂² and ‖Z − Ẑ‖₂² ≤ δ̂². Hence by Lemma 3.5 below, −δ̂I ⪯ S − Ŝ ⪯ δ̂I and −δ̂I ⪯ Z − Ẑ ⪯ δ̂I.
Lemma 3.5 Assume A is symmetric. If ‖A‖₂ ≤ a, then −aI ⪯ A ⪯ aI.

Proof Denote the i:th largest eigenvalue of a matrix A by λ_i(A). Then the norm defined in Section 1 fulfills

‖A‖₂² = Σ_{i=1}^{n} λ_i²(A)  (42)

see page 647 in [1]. Then

‖A‖₂ ≤ a ⇔ Σ_{i=1}^{n} λ_i²(A) ≤ a² ⇒ λ_i²(A) ≤ a² ⇒ |λ_i(A)| ≤ a ⇔  (43)

−a ≤ λ_min(A) ≤ λ_i(A) ≤ λ_max(A) ≤ a  (44)

and hence −aI ⪯ A ⪯ aI.
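Lemma 3.5 and the eigenvalue identity (42) are easy to verify numerically; the sketch below is an illustration on a random symmetric matrix.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5

M = rng.standard_normal((n, n))
A = 0.5 * (M + M.T)                      # symmetric
a = np.linalg.norm(A, 'fro')             # the norm of Section 1 (Frobenius)

eigs = np.linalg.eigvalsh(A)
# ||A||_2 <= a forces every eigenvalue into [-a, a], i.e. -aI <= A <= aI
assert np.all(eigs >= -a - 1e-12) and np.all(eigs <= a + 1e-12)
# Identity (42): ||A||_2^2 equals the sum of squared eigenvalues
assert np.isclose(a**2, np.sum(eigs**2))
```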
3.2 Proof of Lemma 3.2
Here a proof of Lemma 3.2 is presented. What needs to be shown is that there exists an α ≥ α̂ > 0 such that z(α) = z + α∆z ∈ Ω and that
ν(α) ≤ (1 − ακ(1 − σ))ν (45)
First, a property of symmetric matrices is stated for later use in the proof:

A ⪰ B, C ⪰ D ⇒ ⟨A, C⟩ ≥ ⟨B, D⟩  (46)

where A, B, C, D ∈ Sⁿ₊, page 44, Property (18) in [5]. Define

δ̂ = (1/2) min( min_i λ_i(Ŝ), min_i λ_i(Ẑ) )  (47)

Notice that z ∈ B by Lemma 3.4 implies that S ⪰ Ŝ − δ̂I and Z ⪰ Ẑ − δ̂I. Then it holds that

ν = ⟨Z, S⟩/n ≥ ⟨Ẑ − δ̂I, Ŝ − δ̂I⟩/n ≥ ⟨δ̂I, δ̂I⟩/n = δ̂² > 0  (48)

where the first inequality follows from (46) and what was mentioned above. The second inequality is due to (47) and (46). Similarly as in the proof of Lemma 3.1, it also holds that Z, S ≻ 0.
Now since A has full rank, it follows by Lemma 2.1 that ∂vec(F(z))/∂vec(z) is invertible in an open set containing B. Furthermore, it is clear that ∂vec(F(z))/∂vec(z) is a continuous function of z, and that r is a continuous function of z and σ. Therefore, for all z ∈ B and σ ∈ (σ_min, 1/2], the solution ∆z⁰ of (∂vec(F(z))/∂vec(z)) vec(∆z⁰) = vec(r) satisfies ‖∆z⁰‖₂ ≤ C⁰ for a constant C⁰ > 0. Introduce δ∆z = ∆z − ∆z⁰. Then from the bound on the residual in the assumptions of this lemma it follows that ‖(∂vec(F(z))/∂vec(z)) vec(δ∆z)‖₂ is bounded. Hence there must be a constant δC > 0 such that ‖δ∆z‖₂ ≤ δC. Let C = C⁰ + δC > 0. It now holds that ‖∆z‖₂ ≤ C for all z ∈ B, σ ∈ (σ_min, 1/2]. Notice that it also holds that ‖∆Z‖₂ ≤ C and ‖∆S‖₂ ≤ C. Define α̂(1) = δ̂/(2C). Then for all α ∈ (0, α̂(1)] it holds that
Z(α) = Z + α∆Z ⪰ Ẑ − δ̂I + α∆Z ⪰ 2δ̂I − δ̂I − (δ̂/2)I ≻ 0  (49)

where the first inequality follows by Lemma 3.4 and z ∈ B, and where the second inequality follows from (47). The proof for S(α) ≻ 0 is analogous. Hence it is possible to take a positive step without violating the constraints S(α) ≻ 0, Z(α) ≻ 0.
Now we prove that
Fc(z(α)) = H(Z(α)S(α)) ⪰ ην(α)I  (50)

for some positive α. This follows from two inequalities that utilize the fact that
(∂vec(Fc(z))/∂vec(z)) vec(∆z) − vec(rc) = 0 ⇔ H(∆ZS + Z∆S) + H(ZS) − σνI = 0  (51)

and the fact that the previous iterate z is in the set Ω. Note that H(·) is a linear operator. Hence

H(Z(α)S(α)) = H(ZS + α(∆ZS + Z∆S) + α²∆Z∆S)  (52)
= (1 − α)H(ZS) + α(H(ZS) + H(∆ZS + Z∆S)) + α²H(∆Z∆S)
⪰ (1 − α)ηνI + ασνI + α²H(∆Z∆S)
⪰ [(1 − α)η + ασ]νI − α²|λ∆_min|I

where λ∆_min = min_i λ_i(H(∆Z∆S)). Note that λ∆_min is bounded. To show this, note that in each iterate of the algorithm Z and S are bounded, and hence R is bounded since it is calculated from Z and S. Thus H(∆Z∆S) is bounded since ‖∆Z‖₂ ≤ C and ‖∆S‖₂ ≤ C.
Moreover

nν(α) = ⟨Z(α), S(α)⟩ = ⟨H(Z(α)S(α)), I⟩  (53)
= ⟨(1 − α)H(ZS) + ασνI + α²H(∆Z∆S), I⟩
= (1 − α)⟨Z, S⟩ + nασν + α²⟨∆Z, ∆S⟩
≤ (1 − α)nν + nασν + α²C²

The last inequality follows from ⟨∆Z, ∆S⟩ ≤ ‖∆Z‖₂‖∆S‖₂ ≤ C². Rewriting (53) gives that

ην(α) ≤ [ (1 − α + ασ)ν + α²C²/n ] η  (54)
Clearly

[(1 − α)η + ασ]νI − α²|λ∆_min|I ⪰ [(1 − α)η + αση]νI + (α²C²η/n)I  (55)

implies (50), which (assuming that α > 0) is fulfilled if

α ≤ σν(1 − η) / ( C²η/n + |λ∆_min| )  (56)

Recall that σ ≥ σ_min > 0 and η < η_max < 1 by assumption, and that 0 < δ̂² ≤ ν by (48). Hence with

α̂(2) = min( α̂(1), σ_min δ̂²(1 − η_max) / ( C²η_max/n + |λ∆_min| ) )  (57)

(50) is satisfied for all α ∈ (0, α̂(2)]. We now show that
γν(α)I ⪰ H(Z(α)S(α))  (58)
First note that γ ≥ n. Then

(γ/n) Σ_i λ_i(H(Z(α)S(α))) ≥ λ_max(H(Z(α)S(α)))  ⇔  (59)

(γ/n) ( Σ_i λ_i(H(Z(α)S(α))) ) I ⪰ λ_max(H(Z(α)S(α))) I  ⇔  (60)

(γ/n) Tr(H(Z(α)S(α))) I ⪰ λ_max(H(Z(α)S(α))) I  (61)

where the first expression is fulfilled by definition. The second equivalence follows from a property of the trace of a matrix, see page 41 in [5]. From the definition of complementary slackness in (13) it now follows that (58) holds.
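The trace bound used in (59)-(61) can be illustrated numerically: for any M ⪰ 0 standing in for H(Z(α)S(α)), and any γ ≥ n, (γ/n)Tr(M) ≥ λ_max(M), which with Tr(M) = nν(α) from (53) gives γν(α)I ⪰ M. The sketch below is an illustration, not part of the report.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5
gamma = n + 2.0                          # gamma >= n as required by the algorithm

B = rng.standard_normal((n, n))
M = B @ B.T                              # stands in for H(Z(a)S(a)), PSD

# For M >= 0: Tr(M) = sum of eigenvalues >= lambda_max, so (gamma/n)*Tr >= lambda_max
lam_max = np.linalg.eigvalsh(M)[-1]      # eigvalsh sorts ascending
assert gamma / n * np.trace(M) >= lam_max - 1e-9

# Since Tr(M) = n*nu(alpha) by (53), this is exactly gamma*nu(alpha)*I >= M
nu_alpha = np.trace(M) / n
assert np.all(np.linalg.eigvalsh(gamma * nu_alpha * np.eye(n) - M) >= -1e-9)
```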
Now we prove that (45) is satisfied. Let

α̂(3) = min( α̂(2), nδ̂²(1 − κ)/(2C²) )  (62)

Then for all α ∈ [0, α̂(3)] it holds that

α²C² ≤ αn(1 − κ)δ̂²/2 ≤ αn(1 − κ)ν/2 ≤ αn(1 − κ)(1 − σ)ν  (63)

where the second inequality follows from (48) and the third inequality follows from the assumption σ ≤ 1/2. This inequality together with (53) implies that

ν(α) ≤ ((1 − α) + ασ)ν + α²C²/n  (64)
≤ ((1 − α) + ασ)ν + α(1 − κ)(1 − σ)ν = (1 − ακ(1 − σ))ν  (65)

and hence (45) is satisfied.
It now remains to prove that

‖Fp(z(α))‖₂ ≤ βν(α)  (66)

‖Fd(z(α))‖₂ ≤ βν(α)  (67)

Since the proofs are similar, it will only be proven that (66) holds. Use Taylor's theorem to write
vec(Fp(z(α))) = vec(Fp(z)) + α (∂vec(Fp(z))/∂vec(z)) vec(∆z) + αR  (68)

R = ∫₀¹ [ ∂vec(Fp(z(θα)))/∂vec(z) − ∂vec(Fp(z))/∂vec(z) ] vec(∆z) dθ  (69)

Using ∆z = ∆z⁰ + δ∆z and the fact that (∂vec(Fp(z))/∂vec(z)) vec(∆z⁰) = −vec(Fp(z)), it follows that

vec(Fp(z(α))) = (1 − α)vec(Fp(z)) + α (∂vec(Fp(z))/∂vec(z)) vec(δ∆z) + αR  (70)

Furthermore

‖R‖₂ ≤ max_{θ∈(0,1)} ‖ ∂vec(Fp(z(θα)))/∂vec(z) − ∂vec(Fp(z))/∂vec(z) ‖₂ · ‖vec(∆z)‖₂  (71)

Since ‖∆z‖₂ ≤ C and since ∂vec(Fp(z))/∂vec(z) is continuous, there exists an
α̂(4) > 0 such that for α ∈ (0, α̂(4)], ε ∈ (0, 1) and z ∈ B it holds that

‖R‖₂ < ((1 − ε)/2) σβν  (72)
Using the fact that ‖Fp(z)‖₂ ≤ βν together with the residual bound (35), it now follows that

‖Fp(z(α))‖₂ ≤ (1 − α)βν + α(σ/2)βν  (73)
for all α ∈ [0, α̂(4)]. By reducing α̂(4), if necessary, it follows that

αC² < (σ/2)nν  (74)

for all α ∈ [0, α̂(4)]. Similarly to (53), but bounding from below instead, it holds that

nν(α) ≥ (1 − α(1 − σ))nν − α²C²  ⇔  (75)
(1 − α)ν ≤ ν(α) − ασν + α²C²/n

Hence

‖Fp(z(α))‖₂ ≤ β( ν(α) − ασν + α²C²/n ) + α(σ/2)βν  (76)
= βν(α) − αβ( σν − αC²/n − σν/2 ) ≤ βν(α)  (77)

where the last inequality follows from (74). The proof for ‖Fd(z(α))‖₂ ≤ βν(α) is done analogously, which gives an α̂(5) > 0.
3.3 Proof of closed set
Definition 3.6 Let X and Y denote two metric spaces and define the mapping f : X → Y. Let C ⊆ Y. Then the inverse image f⁻¹(C) of C is the set {x | f(x) ∈ C, x ∈ X}.
Lemma 3.7 A mapping f of a metric space X into a metric space Y is continuous if and only if f⁻¹(C) is closed in X for every closed set C ⊆ Y.
Proof See page 81 in [9].
Lemma 3.8 The set {z | ‖Fp(z)‖₂ ≤ βν, z ∈ Z} is a closed set.

Proof Consider the mapping Fp(z) : Z → Sⁿ and the set C = {C | ‖C‖₂ ≤ βν, C ∈ Sⁿ}, which is closed since the norm defines a closed set, see page 634 in [1]. Now note that the mapping Fp(z) is continuous. Then the inverse image {z | Fp(z) ∈ C, z ∈ Z} = {z | ‖Fp(z)‖₂ ≤ βν, z ∈ Z} is a closed set.
Lemma 3.9 The set {z | ‖Fd(z)‖₂ ≤ βν, z ∈ Z} is a closed set.

Proof Analogous to the proof of Lemma 3.8.
Lemma 3.10 The set {z | γνI ⪰ H(ZS) ⪰ ηνI, z ∈ Z} is a closed set.

Proof Define the mapping h(z) = H(ZS) : Z → Sⁿ and the set C₁ = {C₁ | C₁ ⪰ ηνI, C₁ ∈ Sⁿ}, which is closed, page 43 in [1]. Since the mapping is continuous, the inverse image {z | H(ZS) ⪰ ηνI, z ∈ Z} is a closed set. Now define the set C₂ = {C₂ | C₂ ⪯ γνI, C₂ ∈ Sⁿ}. Using continuity again gives that {z | H(ZS) ⪯ γνI, z ∈ Z} is a closed set. Since the intersection of closed sets is a closed set, Theorem 2.24 in [9], {z | γνI ⪰ H(ZS) ⪰ ηνI, z ∈ Z} is a closed set.
Lemma 3.11 The set Ω is a closed set.
Proof Note that (Z, ρ) is a metric space with distance ρ(u, v) = ‖u − v‖₂. Hence Z is a closed set. Lemmas 3.8, 3.9 and 3.10 give that each additional constraint in Ω defines a closed set. Since the intersection of closed sets is a closed set, Theorem 2.24 in [9], Ω is a closed set.
References
[1] S. Boyd and L. Vandenberghe. Convex optimization. Cambridge University Press, 2004.
[2] M. G. Gasparo, S. Pieraccini, and A. Armellini. An infeasible interior-point method with nonmonotonic complementarity gaps. Optimization Methods and Software, 17(4):561 – 586, 2002.
[3] J. Gillberg and A. Hansson. Polynomial complexity for a Nesterov-Todd potential-reduction method with inexact search directions. In Proceedings of the 42nd IEEE Conference on Decision and Control, Maui, Hawaii, USA, December 2003.
[4] A. Hansson. A primal-dual interior-point method for robust optimal control of linear discrete-time systems. IEEE Transactions on Automatic Control, 45(9):1639–1655, 2000.
[5] H. Lütkepohl. Handbook of Matrices. John Wiley and Sons, 1996.
[6] S. Mehrotra. On the implementation of a primal-dual interior point method. SIAM Journal on Optimization, 2(4):575–601, 1992.
[7] E. Polak. Computational Methods in Optimization. Academic Press, 1971.
[8] D. Ralph and S. J. Wright. Superlinear convergence of an interior-point method for monotone variational inequalities. Complementarity and Variational Problems: State of the Art, 1997.
[9] W. Rudin. Principles of Mathematical Analysis. McGraw-Hill, 3rd edition, 1976.
[10] L. Vandenberghe and S. Boyd. A primal-dual potential reduction method for problems involving matrix inequalities. Mathematical Programming, 69:205–236, 1995.
[11] H. Wolkowicz, R. Saigal, and L. Vandenberghe, editors. Handbook of Semidefinite Programming: Theory, Algorithms, and Applications, volume 27 of International Series in Operations Research & Management Science. Kluwer, 2000.
[12] S. J. Wright. Primal-Dual Interior-Point Methods. SIAM, 1997.
[13] Y. Zhang. On extending some primal-dual interior-point algorithms from linear programming to semidefinite programming. SIAM Journal on Optimization, 8(2):365–386, 1998.
ISSN 1400-3902
LiTH-ISY-R-2819