
On the Kronecker Product

Kathrin Schäcke

August 1, 2013

Abstract

In this paper, we review basic properties of the Kronecker product, and give an overview of its history and applications. We then move on to introducing the symmetric Kronecker product, and we derive several of its properties. Furthermore, we show its application in finding search directions in semidefinite programming.

Contents

1 Introduction
  1.1 Background and Notation
  1.2 History of the Kronecker product
2 The Kronecker Product
  2.1 Properties of the Kronecker Product
    2.1.1 Basic Properties
    2.1.2 Factorizations, Eigenvalues and Singular Values
    2.1.3 The Kronecker Sum
    2.1.4 Matrix Equations and the Kronecker Product
  2.2 Applications of the Kronecker Product
3 The Symmetric Kronecker Product
  3.1 Properties of the symmetric Kronecker product
  3.2 Applications of the symmetric Kronecker product
4 Conclusion

1 Introduction

The Kronecker product of two matrices, denoted by A ⊗ B, has been researched since the nineteenth century. Many properties about its trace, determinant, eigenvalues, and other decompositions have been discovered during this time, and are now part of classical linear algebra literature. The Kronecker product has many classical applications in solving matrix equations, such as the Sylvester equation AX + XB = C, the Lyapunov equation XA + A^*X = H, the commutativity equation AX = XA, and others. In all cases, we want to know which matrices X satisfy these equations. This can easily be established using the theory of Kronecker products.

A similar product, the symmetric Kronecker product, denoted by A ⊗_s B, has been the topic of recent research in the field of semidefinite programming. Interest in the symmetric Kronecker product was stimulated by its appearance in the equations needed for the computation of search directions for semidefinite programming primal–dual interior–point algorithms. One type of search direction is the AHO direction, named after Alizadeh, Haeberly, and Overton. A generalization of this search direction is the Monteiro–Zhang family of directions. We will introduce those search directions and show where the symmetric Kronecker product appears in the derivation. Using properties of the symmetric Kronecker product, we can derive conditions for when search directions of the Monteiro–Zhang family are uniquely defined.

We now give a short overview of this paper. In Section 2, we discuss the ordinary Kronecker product, giving an overview of its history in Section 1.2. We then list many of its properties without proof in Section 2.1, and conclude with some of its applications in Section 2.2. In Section 3, we introduce the symmetric Kronecker product. We prove a number of its properties in Section 3.1, and show its application in semidefinite programming in Section 3.2. We now continue this section with some background and notation.

1.1 Background and Notation

Let M_{m,n} denote the space of m × n real (or complex) matrices and M_n the square analog. If needed, we will specify the field of the real numbers by R, and of the complex numbers by C. Real or complex matrices are denoted by M_{m,n}(R) or M_{m,n}(C). We skip the field if the matrix can be either real or complex without changing the result.
Let S^n denote the space of n × n real symmetric matrices, and let R^n denote the space of n-dimensional real vectors. The (i,j)th entry of a matrix A ∈ M_{m,n} is referred to by (A)_{ij} or by a_{ij}, and the ith entry of a vector v ∈ R^n is referred to by v_i. Upper case letters are used to denote matrices, while lower case letters are used for vectors. Scalars are usually denoted by Greek letters. The following symbols are used in this paper: ⊗ for the Kronecker product, ⊕ for the Kronecker sum, and ⊗_s for the symmetric Kronecker product.

Let A be a matrix. Then we denote by A^T its transpose, by A^* its conjugate transpose, by A^{-1} its inverse (if it exists, i.e. A nonsingular), by A^{1/2} its positive semidefinite square root (if it exists, i.e. A positive semidefinite), and by det(A) or |A| its determinant.

Furthermore, we introduce the following special vectors and matrices: I_n is the identity matrix of dimension n; the dimension is omitted if it is clear from the context. The ith unit vector is denoted by e_i. E_{ij} is the (i,j)th elementary matrix, consisting of all zeros except for a one in row i and column j.

We work with the standard inner product in a vector space,

    ⟨u, v⟩ = u^T v,   u, v ∈ R^n,

and with the trace inner product in a matrix space,

    ⟨M, N⟩ = trace(M^T N),  M, N ∈ M_n(R),   or   ⟨M, N⟩ = trace(M^* N),  M, N ∈ M_n(C),

where trace(M) = Σ_{i=1}^n m_{ii}. This definition holds in M_n as well as in S^n. The corresponding norm is the Frobenius norm, defined by ‖M‖_F = √(trace(M^T M)) for M ∈ M_n(R) (or √(trace(M^* M)) for M ∈ M_n(C)). The trace of a product of matrices has the following property:

    trace(AB) = trace(BA)   for all compatible A, B,

i.e. the factors can be commuted.

A symmetric matrix S ∈ S^n is called positive semidefinite, denoted S ⪰ 0, if p^T S p ≥ 0 for all p ∈ R^n. It is called positive definite if the inequality is strict for all nonzero p ∈ R^n.

A linear operator 𝒜 : S^n → R^m is a mapping from the space of symmetric n × n real matrices to the space of m-dimensional real vectors, which has the following two properties, known as linearity:

    𝒜(M + N) = 𝒜(M) + 𝒜(N)  and  𝒜(λM) = λ𝒜(M),   ∀ M, N ∈ S^n, λ ∈ R.

The adjoint operator of 𝒜 is another linear operator 𝒜^* : R^m → S^n, with the property

    ⟨u, 𝒜(S)⟩ = ⟨𝒜^*(u), S⟩   ∀ u ∈ R^m, S ∈ S^n.

The following factorizations of a matrix will be mentioned later. The LU factorization with partial pivoting of a matrix A ∈ M_n(R) is defined as PA = LU, where P is a permutation matrix, L is a lower triangular square matrix, and U is an upper triangular square matrix. The QR factorization of a matrix A ∈ M_{m,n}(R) is defined as A = QR, where Q ∈ M_m(R) is orthogonal and R ∈ M_{m,n}(R) is upper triangular. The Cholesky factorization of a matrix A ∈ M_n(R) is defined as A = LL^T, where L is a lower triangular square matrix; it exists if A is positive semidefinite. The Schur factorization of a matrix A ∈ M_n is defined as U^* A U = D + N =: T, where U ∈ M_n is unitary, N ∈ M_n is strictly upper triangular, and D is diagonal, containing all eigenvalues of A.

1.2 History of the Kronecker product

The following information is interpreted from the paper "On the History of the Kronecker Product" by Henderson, Pukelsheim, and Searle [10]. Apparently, the first documented work on Kronecker products was written by Johann Georg Zehfuss between 1858 and 1868. It was he who established the determinant result

    |A ⊗ B| = |A|^b |B|^a,   (1)

where A and B are square matrices of dimension a and b, respectively.
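The trace inner product and the Frobenius norm defined above are easy to exercise numerically. The following short NumPy sketch (not part of the original paper) checks the norm identity and the commutation property of the trace:

```python
# A minimal sanity check (not from the paper) of the trace inner product,
# the Frobenius norm, and trace(AB) = trace(BA).
import numpy as np

rng = np.random.default_rng(0)
M, N = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))

inner = np.trace(M.T @ N)                 # <M, N> = trace(M^T N)
fro = np.sqrt(np.trace(M.T @ M))          # ||M||_F = sqrt(trace(M^T M))

assert np.isclose(fro, np.linalg.norm(M, 'fro'))
assert np.isclose(np.trace(M @ N), np.trace(N @ M))  # factors commute under trace
```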
In the 1880's, Kronecker gave a series of lectures in Berlin, where he introduced the result (1) to his students. One of these students, Hensel, acknowledged in some of his papers that Kronecker presented (1) in his lectures. Later, in the 1890's, Hurwitz and Stéphanos developed the same determinant equality and other results involving Kronecker products, such as:

    I_m ⊗ I_n = I_{mn},
    (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD),
    (A ⊗ B)^{-1} = A^{-1} ⊗ B^{-1},
    (A ⊗ B)^T = A^T ⊗ B^T.

Furthermore, Stéphanos derived the result that the eigenvalues of A ⊗ B are the products of all eigenvalues of A with all eigenvalues of B. Hurwitz used the symbol × to denote the operation. There were other writers, such as Rados in the late 1800's, who also discovered property (1) independently. Rados even thought that he wrote the original paper on property (1) and claimed it for himself in his paper published in 1900, questioning Hensel's attributing it to Kronecker. Zehfuss was acknowledged by Muir (1881) and his followers, who called the determinant |A ⊗ B| the Zehfuss determinant of A and B. However, despite Rados' claim, the determinant result (1) continued to be associated with Kronecker, and by the 1930's even the definition of the matrix operation A ⊗ B was associated with Kronecker's name. Therefore today, we know the Kronecker product as the "Kronecker" product, and not as the "Zehfuss", "Hurwitz", "Stéphanos", or "Rados" product.

2 The Kronecker Product

The Kronecker product is defined for two matrices of arbitrary size over any ring. However, in the succeeding sections we consider only the fields of the real and complex numbers, denoted by K = R or C.

Definition 2.1 The Kronecker product of the matrix A ∈ M_{p,q} with the matrix B ∈ M_{r,s} is defined as

    A ⊗ B = [ a_{11}B  ⋯  a_{1q}B ]
            [    ⋮           ⋮    ]    (2)
            [ a_{p1}B  ⋯  a_{pq}B ]

Other names for the Kronecker product include tensor product, direct product (Section 4.2 in [9]), or left direct product (e.g. in [8]).

In order to explore the variety of applications of the Kronecker product, we introduce the notation of the vec-operator.

Definition 2.2 For any matrix A ∈ M_{m,n} the vec-operator is defined as

    vec(A) = (a_{11}, ..., a_{m1}, a_{12}, ..., a_{m2}, ..., a_{1n}, ..., a_{mn})^T,   (3)

i.e. the entries of A are stacked columnwise, forming a vector of length mn.

Note that the inner products for R^{n²} and M_n are compatible:

    trace(A^T B) = vec(A)^T vec(B).

2.1 Properties of the Kronecker Product

The Kronecker product has a lot of interesting properties; many of them are stated and proven in the basic literature about matrix analysis (e.g. [9, Chapter 4]).

2.1.1 Basic Properties

KRON 1 (4.2.3 in [9]) It does not matter where we place multiplication with a scalar, i.e.

    (αA) ⊗ B = A ⊗ (αB) = α(A ⊗ B)   ∀α ∈ K, A ∈ M_{p,q}, B ∈ M_{r,s}.

KRON 2 (4.2.4 in [9]) Taking the transpose before carrying out the Kronecker product yields the same result as doing so afterwards, i.e.

    (A ⊗ B)^T = A^T ⊗ B^T   ∀A ∈ M_{p,q}, B ∈ M_{r,s}.

KRON 3 (4.2.5 in [9]) Taking the complex conjugate before carrying out the Kronecker product yields the same result as doing so afterwards, i.e.

    (A ⊗ B)^* = A^* ⊗ B^*   ∀A ∈ M_{p,q}(C), B ∈ M_{r,s}(C).

KRON 4 (4.2.6 in [9]) The Kronecker product is associative, i.e.

    (A ⊗ B) ⊗ C = A ⊗ (B ⊗ C)   ∀A ∈ M_{m,n}, B ∈ M_{p,q}, C ∈ M_{r,s}.

KRON 5 (4.2.7 in [9]) The Kronecker product is right-distributive, i.e.

    (A + B) ⊗ C = A ⊗ C + B ⊗ C   ∀A, B ∈ M_{p,q}, C ∈ M_{r,s}.

KRON 6 (4.2.8 in [9]) The Kronecker product is left-distributive, i.e.

    A ⊗ (B + C) = A ⊗ B + A ⊗ C   ∀A ∈ M_{p,q}, B, C ∈ M_{r,s}.

KRON 7 (Lemma 4.2.10 in [9]) The product of two Kronecker products yields another Kronecker product:

    (A ⊗ B)(C ⊗ D) = AC ⊗ BD   ∀A ∈ M_{p,q}, B ∈ M_{r,s}, C ∈ M_{q,k}, D ∈ M_{s,l}.

KRON 8 (Exercise 4.2.12 in [9]) The trace of the Kronecker product of two matrices is the product of the traces of the matrices, i.e.

    trace(A ⊗ B) = trace(B ⊗ A) = trace(A) trace(B)   ∀A ∈ M_m, B ∈ M_n.

KRON 9 (Exercise 4.2.1 in [9]) The famous determinant result (1) in our notation reads:

    det(A ⊗ B) = det(B ⊗ A) = (det(A))^n (det(B))^m   ∀A ∈ M_m, B ∈ M_n.
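Two of these properties are checked numerically in the following sketch (not part of the original paper), using NumPy's np.kron:

```python
# A small numerical check (not from the paper) of the mixed product
# property KRON 7 and the determinant result KRON 9.
import numpy as np

rng = np.random.default_rng(1)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((4, 4))
C, D = rng.standard_normal((3, 2)), rng.standard_normal((4, 5))

# KRON 7: (A ⊗ B)(C ⊗ D) = AC ⊗ BD
assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))

# KRON 9: det(A ⊗ B) = det(A)^n det(B)^m, with A ∈ M_m and B ∈ M_n
m, n = A.shape[0], B.shape[0]
assert np.isclose(np.linalg.det(np.kron(A, B)),
                  np.linalg.det(A)**n * np.linalg.det(B)**m)
```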
This implies that A ⊗ B is nonsingular if and only if both A and B are nonsingular.

KRON 10 (Corollary 4.2.11 in [9]) If A ∈ M_m and B ∈ M_n are nonsingular, then

    (A ⊗ B)^{-1} = A^{-1} ⊗ B^{-1}.

This property follows directly from the mixed product property KRON 7.

The Kronecker product does not commute. Since the entries of A ⊗ B contain all possible products of entries in A with entries in B, one can derive the following relation:

KRON 11 (Section 3 in [11]) For A ∈ M_{p,q} and B ∈ M_{r,s},

    B ⊗ A = S_{p,r}^T (A ⊗ B) S_{q,s},

where

    S_{m,n} = Σ_{i=1}^m (e_i^T ⊗ I_n ⊗ e_i) = Σ_{j=1}^n (e_j ⊗ I_m ⊗ e_j^T)

is the perfect shuffle permutation matrix. It is described in full detail in [6].

2.1.2 Factorizations, Eigenvalues and Singular Values

First, let us observe that the Kronecker product of two upper (lower) triangular matrices is again upper (lower) triangular. This fact, in addition to the nonsingularity property KRON 9 and the mixed product property KRON 7, allows us to derive several results on factors of Kronecker products.

KRON 12 (Section 1 in [14]) Let A ∈ M_m, B ∈ M_n be positive (semi)definite, and let L_A, L_B be the matrices corresponding to their Cholesky factorizations. Then we can easily derive the Cholesky factorization of their Kronecker product:

    A ⊗ B = (L_A L_A^T) ⊗ (L_B L_B^T) = (L_A ⊗ L_B)(L_A ⊗ L_B)^T.

The fact that A ⊗ B is positive (semi)definite follows from the eigenvalue theorem established below.

KRON 13 (Section 1 in [14]) Let A ∈ M_m, B ∈ M_n be invertible, and let P_A, L_A, U_A, P_B, L_B, U_B be the matrices corresponding to their LU factorizations with partial pivoting. Then we can easily derive the LU factorization with partial pivoting of their Kronecker product:

    A ⊗ B = (P_A^T L_A U_A) ⊗ (P_B^T L_B U_B) = (P_A ⊗ P_B)^T (L_A ⊗ L_B)(U_A ⊗ U_B).

KRON 14 (Section 1 in [14]) Let A ∈ M_{q,r}, B ∈ M_{s,t}, 1 ≤ r ≤ q, 1 ≤ t ≤ s, be of full rank, and let Q_A, R_A, Q_B, R_B be the matrices corresponding to their QR factorizations. Then we can easily derive the QR factorization of their Kronecker product:

    A ⊗ B = (Q_A R_A) ⊗ (Q_B R_B) = (Q_A ⊗ Q_B)(R_A ⊗ R_B).

KRON 15 (in proof of Theorem 4.2.12 in [9]) Let A ∈ M_m, B ∈ M_n, and let U_A, T_A, U_B, T_B be the matrices corresponding to their Schur factorizations. Then we can easily derive the Schur factorization of their Kronecker product:

    A ⊗ B = (U_A T_A U_A^*) ⊗ (U_B T_B U_B^*) = (U_A ⊗ U_B)(T_A ⊗ T_B)(U_A ⊗ U_B)^*.

Recall that the eigenvalues of a square matrix A ∈ M_n are the scalars λ that satisfy Ax = λx for some nonzero x ∈ C^n. This vector x is then called the eigenvector corresponding to λ. The spectrum, which is the set of all eigenvalues, is denoted by σ(A).

Theorem 2.3 (Theorem 4.2.12 in [9]) Let A ∈ M_m and B ∈ M_n, let λ ∈ σ(A) with corresponding eigenvector x, and let µ ∈ σ(B) with corresponding eigenvector y. Then λµ is an eigenvalue of A ⊗ B with corresponding eigenvector x ⊗ y. Any eigenvalue of A ⊗ B arises as such a product of eigenvalues of A and B.

Corollary 2.4 It follows directly that if A ∈ M_m, B ∈ M_n are positive (semi)definite matrices, then A ⊗ B is also positive (semi)definite.
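Theorem 2.3 can be illustrated numerically. The following sketch (not part of the original paper) compares the spectrum of A ⊗ B with the pairwise products of eigenvalues; comparing the two multisets by sorting is a heuristic that works for generic random matrices:

```python
# A numerical illustration (not from the paper) of Theorem 2.3: the
# eigenvalues of A ⊗ B are exactly the pairwise products of eigenvalues.
import numpy as np

rng = np.random.default_rng(2)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((4, 4))

eigs_kron = np.linalg.eigvals(np.kron(A, B))
eigs_prod = np.array([lam * mu for lam in np.linalg.eigvals(A)
                               for mu in np.linalg.eigvals(B)])

# Compare as multisets (sort by real part, then imaginary part).
key = lambda z: (np.round(z.real, 8), np.round(z.imag, 8))
assert np.allclose(sorted(eigs_kron, key=key), sorted(eigs_prod, key=key))
```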
Recall that the singular values of a matrix A ∈ M_{m,n} are the square roots of the min(m,n) (counting multiplicities) largest eigenvalues of A^*A. It follows that the rank of A is the number of its nonzero singular values. The singular value decomposition of A is A = V Σ W^*, where V ∈ M_m and W ∈ M_n are unitary and Σ is a diagonal matrix containing the singular values, ordered by size, on the diagonal. For more information on these factorizations and decompositions see e.g. [7].

KRON 16 (Theorem 4.2.15 in [9]) Let A ∈ M_{q,r}, B ∈ M_{s,t} have rank r_A, r_B, and let V_A, Σ_A, W_A, V_B, Σ_B, W_B be the matrices corresponding to their singular value decompositions. Then we can easily derive the singular value decomposition of their Kronecker product:

    A ⊗ B = (V_A Σ_A W_A^*) ⊗ (V_B Σ_B W_B^*) = (V_A ⊗ V_B)(Σ_A ⊗ Σ_B)(W_A ⊗ W_B)^*.

It follows directly that the singular values of A ⊗ B are the r_A r_B possible positive products of singular values of A and B (counting multiplicities), and therefore

    rank(A ⊗ B) = rank(B ⊗ A) = r_A r_B.

2.1.3 The Kronecker Sum

The Kronecker sum of two square matrices A ∈ M_m, B ∈ M_n is defined as

    A ⊕ B = (I_n ⊗ A) + (B ⊗ I_m).

Choosing the first identity matrix of dimension n and the second of dimension m ensures that both terms are of dimension mn and can thus be added. Note that the definition of the Kronecker sum varies in the literature. Horn and Johnson ([9]) use the above definition, whereas Amoia et al. ([2]) as well as Graham ([6]) use

    A ⊕ B = (A ⊗ I_n) + (I_m ⊗ B).

In this paper we will work with Horn and Johnson's version of the Kronecker sum.

As for the Kronecker product, one can derive a result on the eigenvalues of the Kronecker sum.

Theorem 2.5 (Theorem 4.4.5 in [9]) Let A ∈ M_m and B ∈ M_n, let λ ∈ σ(A) with corresponding eigenvector x, and let µ ∈ σ(B) with corresponding eigenvector y. Then λ + µ is an eigenvalue of A ⊕ B with corresponding eigenvector y ⊗ x. Any eigenvalue of A ⊕ B arises as such a sum of eigenvalues of A and B.

Note that the distributive property does not hold in general for the Kronecker product and the Kronecker sum:

    (A ⊕ B) ⊗ C ≠ (A ⊗ C) ⊕ (B ⊗ C)   and   A ⊗ (B ⊕ C) ≠ (A ⊗ B) ⊕ (A ⊗ C).

The first claim can be illustrated by the following example. Let A = 1, B = 2, C = I_2. Then

    (A ⊕ B) ⊗ C = (1·1 + 2·1) ⊗ I_2 = 3I_2,

whereas the right hand side works out to

    (A ⊗ C) ⊕ (B ⊗ C) = (1I_2) ⊕ (2I_2) = I_2 ⊗ 1I_2 + 2I_2 ⊗ I_2 = I_4 + 2I_4 = 3I_4,

so the two sides do not even have the same dimension. A similar example can be used to validate the second part.

2.1.4 Matrix Equations and the Kronecker Product

The Kronecker product can be used to present linear equations in which the unknowns are matrices. Examples for such equations are:

    AX = B,          (4)
    AX + XB = C,     (5)
    AXB = C,         (6)
    AX + YB = C.     (7)

These equations are equivalent to the following systems of equations:

    (I ⊗ A) vec X = vec B                          corresponds to (4),   (8)
    ((I ⊗ A) + (B^T ⊗ I)) vec X = vec C            corresponds to (5),   (9)
    (B^T ⊗ A) vec X = vec C                        corresponds to (6),   (10)
    (I ⊗ A) vec X + (B^T ⊗ I) vec Y = vec C        corresponds to (7).   (11)

Note that, with the notation of the Kronecker sum, equation (9) can be written as

    (A ⊕ B^T) vec X = vec C.

Equation (5) is known to numerical linear algebraists as the Sylvester equation.

2.2 Applications of the Kronecker Product

The above properties of the Kronecker product have some very nice applications. In the case of all matrices being square and of the same dimension, equation (5) appears frequently in system theory (see e.g. [3]). For given A ∈ M_m, B ∈ M_n, C ∈ M_{m,n}, one wants to find all X ∈ M_{m,n} which satisfy the equation.
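The Kronecker representation (9) can be turned directly into a (dense and, for large n, inefficient) Sylvester solver. The following sketch is not from the paper and is purely illustrative; production codes use specialized Sylvester solvers instead:

```python
# An illustrative sketch (not from the paper): solve AX + XB = C via
# the Kronecker representation (9), (A ⊕ B^T) vec X = vec C.
import numpy as np

rng = np.random.default_rng(3)
m, n = 3, 4
A, B = rng.standard_normal((m, m)), rng.standard_normal((n, n))
C = rng.standard_normal((m, n))

# vec stacks columns, i.e. vec X = X.flatten(order='F').
K = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(m))   # A ⊕ B^T
x = np.linalg.solve(K, C.flatten(order='F'))
X = x.reshape((m, n), order='F')

assert np.allclose(A @ X + X @ B, C)
```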
The question is often whether there is a solution to this equation or not. In other words, one wants to know if the Kronecker sum A ⊕ B^T is nonsingular. From our knowledge about eigenvalues of the Kronecker sum, we can immediately conclude that this matrix is nonsingular if and only if the spectrum of A has no eigenvalue in common with the negative spectrum of B:

    σ(A) ∩ (−σ(B)) = ∅.

An important special case of the Sylvester equation is the Lyapunov equation:

    XA + A^*X = H,

where A, H ∈ M_n are given and H is Hermitian. This special type of matrix equation arises in the study of matrix stability, and plays a central role in control theory, Poisson equation solving, or invariant subspace computation, to name just a few applications. A solution of this equation can be found by transforming it into the equivalent system of equations

    ((A^T ⊗ I) + (I ⊗ A^*)) vec(X) = vec(H),

which is equivalent to

    (A^* ⊕ A^T) vec(X) = vec(H).

It has a unique solution X if and only if A^* and −A^T have no eigenvalues in common.

For example, consider the computation of the Nesterov–Todd search direction (see e.g. [5]). The following equation needs to be solved:

    (1/2)(D_V V + V D_V) = µI − V²,

where V is a real symmetric positive definite matrix and the right hand side is real and symmetric, therefore Hermitian. From our knowledge about eigenvalues of the Kronecker sum, we can conclude that this equation has a unique symmetric solution, since V is positive definite and therefore V and −V^T have no eigenvalues in common.

Another application of the Kronecker product is the commutativity equation. Given a matrix A ∈ M_n, we want to know all matrices X ∈ M_n that commute with A, i.e.

    {X : AX = XA}.

This can be rewritten as AX − XA = 0, and hence as

    ((I ⊗ A) − (A^T ⊗ I)) vec(X) = 0.

Now we have transformed the commutativity problem into a null space problem, which can be solved easily.

Graham ([6]) mentions another interesting application of the Kronecker product. Given A ∈ M_n and µ ∈ K, we want to know when the equation

    AX − XA = µX   (12)

has a nontrivial solution. By transforming the equation into

    ((I ⊗ A) − (A^T ⊗ I)) vec(X) = µ vec(X),

which is equivalent to

    (A ⊕ (−A^T)) vec(X) = µ vec(X),

we find that µ has to be an eigenvalue of A ⊕ (−A^T), and that all X satisfying (12) are eigenvectors of A ⊕ (−A^T) (after applying vec to X). For µ = 0, we get that 0 has to be an eigenvalue of A ⊕ (−A^T) in order for a nontrivial commuting X to exist. This also ties in with our result on the commutativity equation. From our results on the eigenvalues and eigenvectors of the Kronecker sum, we know that those X are therefore Kronecker products of eigenvectors of A^T with the eigenvectors of A.

There are many other applications of the Kronecker product in, e.g., signal processing, image processing, quantum computing and semidefinite programming. The latter will be discussed in the following sections on the symmetric Kronecker product.
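The null space formulation of the commutativity equation is easy to try out. The following sketch is not from the paper; it assumes SciPy is available for its null_space routine:

```python
# An illustrative sketch (not from the paper): find all X with AX = XA
# via the null space of (I ⊗ A) - (A^T ⊗ I).
import numpy as np
from scipy.linalg import null_space

A = np.diag([1.0, 2.0, 3.0])   # distinct eigenvalues
n = A.shape[0]

M = np.kron(np.eye(n), A) - np.kron(A.T, np.eye(n))
basis = null_space(M)          # columns span {vec X : AX = XA}

# For a diagonalizable A with n distinct eigenvalues, the commutant
# has dimension n (all polynomials in A).
assert basis.shape[1] == n
for k in range(basis.shape[1]):
    X = basis[:, k].reshape((n, n), order='F')
    assert np.allclose(A @ X - X @ A, np.zeros((n, n)), atol=1e-10)
```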
3 The Symmetric Kronecker Product

The symmetric Kronecker product has many applications in semidefinite programming software and theory. Much of the following can be found in De Klerk's book [5], or in Todd, Toh, and Tütüncü's paper [12].

Definition 3.1 For any symmetric matrix S ∈ S^n we define the vector svec(S) ∈ R^{n(n+1)/2} as

    svec(S) = (s_{11}, √2 s_{21}, ..., √2 s_{n1}, s_{22}, √2 s_{32}, ..., √2 s_{n2}, ..., s_{nn})^T.   (13)

Note that this definition yields another inner product equivalence:

    trace(ST) = svec(S)^T svec(T)   ∀ S, T ∈ S^n.

Definition 3.2 The symmetric Kronecker product can be defined for any two (not necessarily symmetric) matrices G, H ∈ M_n as a mapping on a vector svec(S), where S ∈ S^n:

    (G ⊗_s H) svec(S) = (1/2) svec(HSG^T + GSH^T).

This is an implicit definition of the symmetric Kronecker product. We can also give a direct definition if we first introduce the orthonormal matrix Q ∈ M_{n(n+1)/2 × n²}, which has the following property:

    Q vec(S) = svec(S)  and  Q^T svec(S) = vec(S)   ∀ S ∈ S^n.   (14)

Orthonormal is used in the sense of Q having orthonormal rows, i.e. QQ^T = I_{n(n+1)/2}. For every dimension n, there is only one such matrix Q. It can be characterized as follows (compare to [12]). Consider the entries of the symmetric matrix S ∈ S^n,

    S = [ s_{11} s_{12} ⋯ s_{1n} ]
        [ s_{12} s_{22} ⋯ s_{2n} ]
        [   ⋮      ⋮        ⋮   ]
        [ s_{1n} s_{2n} ⋯ s_{nn} ],

then

    vec(S) = (s_{11}, s_{21}, ..., s_{n1}, s_{12}, s_{22}, ..., s_{n2}, ..., s_{1n}, s_{2n}, ..., s_{nn})^T

and

    svec(S) = (s_{11}, √2 s_{21}, ..., √2 s_{n1}, s_{22}, √2 s_{32}, ..., √2 s_{n2}, ..., s_{nn})^T.

We can now characterize the entries of Q using the equation Q vec(S) = svec(S). Let q_{ij,kl} be the entry in the row defining element s_{ij} in svec(S), and in the column that is multiplied with the element s_{kl} in vec(S). Then

    q_{ij,kl} = 1      if i = j = k = l,
                1/√2   if i = k ≠ j = l, or i = l ≠ j = k,
                0      otherwise.

We will work out the details for dimension n = 2. With

    Q = [ 1    0     0   0 ]
        [ 0  1/√2  1/√2  0 ]
        [ 0    0     0   1 ],

we can check equations (14):

    Q vec(S) = (s_{11}, (1/√2)s_{21} + (1/√2)s_{12}, s_{22})^T = (s_{11}, √2 s_{21}, s_{22})^T = svec(S),

and

    Q^T svec(S) = (s_{11}, s_{21}, s_{21}, s_{22})^T = (s_{11}, s_{21}, s_{12}, s_{22})^T = vec(S).

Note that equations (14) imply that

    Q^T Q vec(S) = Q^T svec(S) = vec(S)   ∀ S ∈ S^n,

i.e. these equations show that Q^T Q is the orthogonal projection matrix onto the space of symmetric matrices in vector form, onto vec(S).

Let us now define the symmetric Kronecker product using the matrix introduced above.

Definition 3.3 Let Q be the unique n(n+1)/2 × n² matrix which satisfies (14). Then the symmetric Kronecker product can be defined as follows:

    G ⊗_s H = (1/2) Q (G ⊗ H + H ⊗ G) Q^T   ∀ G, H ∈ M_n.

The two definitions for the symmetric Kronecker product are equivalent. Let G, H ∈ M_n and U ∈ S^n, and Q as before. Then, by applying equation (14),

    (G ⊗_s H) svec(U) = (1/2) Q (G ⊗ H + H ⊗ G) Q^T svec(U)
                      = (1/2) Q (G ⊗ H + H ⊗ G) vec(U)               by (14)
                      = (1/2) Q ((G ⊗ H) vec(U) + (H ⊗ G) vec(U))
                      = (1/2) Q (vec(HUG^T) + vec(GUH^T))            by (10)
                      = (1/2) Q vec(HUG^T + GUH^T)
                      = (1/2) svec(HUG^T + GUH^T),

where the last equality follows from (14), since HUG^T + GUH^T = HUG^T + (HUG^T)^T, and therefore symmetric.
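Both characterizations of the symmetric Kronecker product are small enough to implement directly. The following self-contained sketch is not from the paper; the helper names build_Q and skron are my own, and the code checks Definition 3.3 against the implicit Definition 3.2:

```python
# A sketch (not from the paper) of svec, the matrix Q from (14), and the
# symmetric Kronecker product of Definition 3.3, in NumPy.
import numpy as np

def build_Q(n):
    """Rows of Q map vec(S) to svec(S); svec order: j outer, i >= j."""
    rows = []
    for j in range(n):                # column index of S
        for i in range(j, n):         # row index of S, i >= j
            r = np.zeros(n * n)
            if i == j:
                r[j * n + i] = 1.0
            else:
                r[j * n + i] = 1.0 / np.sqrt(2)   # s_ij in vec(S)
                r[i * n + j] = 1.0 / np.sqrt(2)   # s_ji in vec(S)
            rows.append(r)
    return np.array(rows)

def skron(G, H):
    """Symmetric Kronecker product (1/2) Q (G⊗H + H⊗G) Q^T."""
    Q = build_Q(G.shape[0])
    return 0.5 * Q @ (np.kron(G, H) + np.kron(H, G)) @ Q.T

# Check (G ⊗_s H) svec(S) = (1/2) svec(HSG^T + GSH^T) for random data.
rng = np.random.default_rng(4)
n = 4
G, H = rng.standard_normal((n, n)), rng.standard_normal((n, n))
S = rng.standard_normal((n, n)); S = S + S.T     # symmetric S

Q = build_Q(n)
svec = lambda M: Q @ M.flatten(order='F')
assert np.allclose(skron(G, H) @ svec(S),
                   0.5 * svec(H @ S @ G.T + G @ S @ H.T))
```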
3.1 Properties of the symmetric Kronecker product

The symmetric Kronecker product has many interesting properties. Some follow directly from the properties of the Kronecker product; others hold for the symmetric but not for the ordinary Kronecker product, and vice versa.

The symmetric Kronecker product is commutative:

    A ⊗_s B = B ⊗_s A.

This follows directly from the definition. Furthermore, we can prove properties corresponding to KRON 1–KRON 8, with the exception of KRON 4, for which we provide a counterexample.

SKRON 1

    (αA) ⊗_s B = A ⊗_s (αB) = α(A ⊗_s B)   ∀α ∈ R, A, B ∈ M_n.

Proof.

    (αA) ⊗_s B = (1/2) Q ((αA) ⊗ B + B ⊗ (αA)) Q^T
               = (1/2) Q (A ⊗ (αB) + (αB) ⊗ A) Q^T
               = A ⊗_s (αB) = α(A ⊗_s B).

SKRON 2

    (A ⊗_s B)^T = A^T ⊗_s B^T   ∀A, B ∈ M_n.

Proof.

    (A ⊗_s B)^T = ((1/2) Q (A ⊗ B + B ⊗ A) Q^T)^T
                = (1/2) Q ((A ⊗ B)^T + (B ⊗ A)^T) Q^T
                = (1/2) Q (A^T ⊗ B^T + B^T ⊗ A^T) Q^T
                = A^T ⊗_s B^T.

Corollary 3.4 An immediate consequence of this property is that A ⊗_s I is symmetric if and only if A is symmetric.

SKRON 3

    (A ⊗_s B)^* = A^* ⊗_s B^*   ∀A, B ∈ M_n(C).

Proof.

    (A ⊗_s B)^* = ((1/2) Q (A ⊗ B + B ⊗ A) Q^T)^*
                = (1/2) (Q^T)^* ((A ⊗ B)^* + (B ⊗ A)^*) Q^*
                = (1/2) Q (A^* ⊗ B^* + B^* ⊗ A^*) Q^T    (since Q is a real matrix)
                = A^* ⊗_s B^*.

SKRON 4 (A ⊗_s B) ⊗_s C = A ⊗_s (B ⊗_s C) does not hold in general.

Proof. The symmetric Kronecker product is defined for any two square matrices of equal dimension, say A, B ∈ M_n. The resulting matrix is a square matrix of dimension n(n+1)/2. Consider the left hand side, (A ⊗_s B) ⊗_s C. In order for the outer symmetric Kronecker product to be defined, we require C to be in M_{n(n+1)/2}. Now consider the right hand side, A ⊗_s (B ⊗_s C). Here, the inner symmetric Kronecker product is defined if and only if the matrices B and C are of equal dimension. This holds if and only if n = n(n+1)/2, which holds if and only if n = 0 or n = 1. In both cases the result holds trivially. However, for any bigger dimension, the left hand side and the right hand side are never simultaneously well defined.

SKRON 5

    (A + B) ⊗_s C = A ⊗_s C + B ⊗_s C   ∀A, B, C ∈ M_n.

Proof.

    (A + B) ⊗_s C = (1/2) Q ((A + B) ⊗ C + C ⊗ (A + B)) Q^T
                  = (1/2) Q (A ⊗ C + B ⊗ C + C ⊗ A + C ⊗ B) Q^T
                  = (1/2) Q (A ⊗ C + C ⊗ A) Q^T + (1/2) Q (B ⊗ C + C ⊗ B) Q^T
                  = A ⊗_s C + B ⊗_s C.

SKRON 6

    A ⊗_s (B + C) = A ⊗_s B + A ⊗_s C   ∀A, B, C ∈ M_n.

Proof.

    A ⊗_s (B + C) = (1/2) Q (A ⊗ (B + C) + (B + C) ⊗ A) Q^T
                  = (1/2) Q (A ⊗ B + A ⊗ C + B ⊗ A + C ⊗ A) Q^T
                  = (1/2) Q (A ⊗ B + B ⊗ A) Q^T + (1/2) Q (A ⊗ C + C ⊗ A) Q^T
                  = A ⊗_s B + A ⊗_s C.

SKRON 7 (see e.g. Lemma E.1.2 in [5])

    (A ⊗_s B)(C ⊗_s D) = (1/2)(AC ⊗_s BD + AD ⊗_s BC)   ∀A, B, C, D ∈ M_n.

Furthermore,

    (A ⊗_s B)(C ⊗_s C) = AC ⊗_s BC,   and   (A ⊗_s A)(B ⊗_s C) = AB ⊗_s AC.

Proof. Let S be a symmetric matrix. Then

    (A ⊗_s B)(C ⊗_s D) svec(S)
      = (1/2) (A ⊗_s B) svec(CSD^T + DSC^T)
      = (1/4) svec(ACSD^T B^T + BCSD^T A^T + ADSC^T B^T + BDSC^T A^T)
      = (1/4) svec((AC)S(BD)^T + (BC)S(AD)^T + (AD)S(BC)^T + (BD)S(AC)^T)
      = (1/2) (AC ⊗_s BD + AD ⊗_s BC) svec(S).

This proof is directly taken from [5].

SKRON 8

    trace(A ⊗_s B) = trace(AB) + (1/2) Σ_{1≤i<j≤n} (a_{ii}b_{jj} + a_{jj}b_{ii} − (a_{ij}b_{ji} + a_{ji}b_{ij}))   ∀ A, B ∈ M_n.

Proof. Note that

    e_k = svec(E_{jj})                      if k = (j−1)n + 1 − (j−2)(j−1)/2, ∀ j = 1, ..., n,
    e_k = (1/√2) svec(E_{ij} + E_{ji})      if k = (j−1)n + i − j(j−1)/2, ∀ 1 ≤ j < i ≤ n.

Now the proof follows straightforwardly:

    trace(A ⊗_s B) = Σ_{k=1}^{n(n+1)/2} (A ⊗_s B)_{kk} = Σ_{k=1}^{n(n+1)/2} e_k^T (A ⊗_s B) e_k
                   = Σ_{k=1}^n svec(E_{kk})^T (A ⊗_s B) svec(E_{kk})
                     + Σ_{1≤i<j≤n} (1/√2) svec(E_{ij} + E_{ji})^T (A ⊗_s B) (1/√2) svec(E_{ij} + E_{ji}).

Now, for any k = 1, ..., n, consider

    svec(E_{kk})^T (A ⊗_s B) svec(E_{kk}) = (1/2) svec(E_{kk})^T svec(B E_{kk} A^T + A E_{kk} B^T)
      = (1/2) trace(E_{kk} B E_{kk} A^T + E_{kk} A E_{kk} B^T)
      = trace(e_k e_k^T B e_k e_k^T A^T) = (e_k^T B e_k)(e_k^T A^T e_k) = a_{kk} b_{kk}.
Furthermore, for any 1 ≤ i < j ≤ n, we have

    svec(E_{ij} + E_{ji})^T (A ⊗_s B) svec(E_{ij} + E_{ji})
      = (1/2) svec(E_{ij} + E_{ji})^T svec(B(E_{ij} + E_{ji})A^T + A(E_{ij} + E_{ji})B^T)
      = (1/2) trace(E_{ij}BE_{ij}A^T + E_{ij}BE_{ji}A^T + E_{ij}AE_{ij}B^T + E_{ij}AE_{ji}B^T
                    + E_{ji}BE_{ij}A^T + E_{ji}BE_{ji}A^T + E_{ji}AE_{ij}B^T + E_{ji}AE_{ji}B^T)
      = trace(E_{ij}BE_{ij}A^T + E_{ij}AE_{ij}B^T + E_{ij}BE_{ji}A^T + E_{ij}AE_{ji}B^T)
      = (e_j^T B e_i)(e_j^T A^T e_i) + (e_j^T A e_i)(e_j^T B^T e_i)
        + (e_j^T B e_j)(e_i^T A^T e_i) + (e_j^T A e_j)(e_i^T B^T e_i)
      = b_{ji}a_{ij} + a_{ji}b_{ij} + b_{jj}a_{ii} + a_{jj}b_{ii}.

Putting the pieces together (each off-diagonal term carries the factor (1/√2)(1/√2) = 1/2), we get

    trace(A ⊗_s B) = Σ_{k=1}^n a_{kk}b_{kk} + (1/2) Σ_{1≤i<j≤n} (a_{ij}b_{ji} + a_{ji}b_{ij} + a_{ii}b_{jj} + a_{jj}b_{ii})
                   = trace(AB) + (1/2) Σ_{1≤i<j≤n} (a_{ii}b_{jj} + a_{jj}b_{ii} − (a_{ij}b_{ji} + a_{ji}b_{ij})).

So far, no one has discovered properties corresponding to KRON 9, but we can say something about the next property.

SKRON 10

    (A ⊗_s A)^{-1} = (A^{-1}) ⊗_s (A^{-1})   ∀ nonsingular A ∈ M_n,

but

    (A ⊗_s B)^{-1} ≠ (A^{-1}) ⊗_s (B^{-1})   in general, for nonsingular A, B ∈ M_n.

Proof. From SKRON 7 it follows that (A ⊗_s A)(B ⊗_s C) = AB ⊗_s AC. Try to find matrices B and C such that (A ⊗_s A)(B ⊗_s C) = I. Now, AB ⊗_s AC = I ⊗_s I = I holds if AB = AC = I, i.e. if B = C = A^{-1}.

For the second part of the claim consider

    A = [ 1 0 ]   and   B = [ 0 1 ]
        [ 0 1 ]             [ 1 0 ].

Both matrices are invertible. However, the sum

    A ⊗ B + B ⊗ A = [ 0 1 1 0 ]
                    [ 1 0 0 1 ]
                    [ 1 0 0 1 ]
                    [ 0 1 1 0 ]

is singular, with rank two. Using this, and the fact that multiplication with Q does not increase the rank of this matrix, we conclude that the inverse

    (A ⊗_s B)^{-1} = ((1/2) Q (A ⊗ B + B ⊗ A) Q^T)^{-1}

does not exist.
As for the Kronecker product and the Kronecker sum, we can also establish a result expressing the eigenvalues and eigenvectors of the symmetric Kronecker product of two matrices A and B in terms of the eigenvalues and eigenvectors of A and B. Let us first prove a preliminary lemma.

Lemma 3.5 (adapted from Lemma 7.1 in [1]) Let V be defined as the matrix which contains the orthonormal eigenvectors v_i, i = 1, ..., n, of the simultaneously diagonalizable matrices A, B ∈ M_n columnwise. Then the (i,j)th column, 1 ≤ j ≤ i ≤ n, of the matrix V ⊗_s V can be written in terms of the ith and jth eigenvectors of A and B as follows:

    svec(v_i v_j^T)                         if i = j,
    (1/√2) svec(v_i v_j^T + v_j v_i^T)      if i > j.

Furthermore, the matrix V ⊗_s V is orthonormal.

Proof. Recall that E_{ij} denotes the matrix containing all zeros except for a one in position (i,j). The (i,j)th column of V ⊗_s V can be written as (V ⊗_s V)e_{ij}, where e_{ij} denotes the corresponding unit vector. Then, for i ≠ j,

    (V ⊗_s V) e_{ij} = (V ⊗_s V) (1/√2) svec(E_{ij} + E_{ji})
                     = (1/√2)(1/2) svec(V(E_{ij} + E_{ji})V^T + V(E_{ij} + E_{ji})V^T)
                     = (1/√2) svec(V E_{ij} V^T + V E_{ji} V^T)
                     = (1/√2) svec(v_i v_j^T + v_j v_i^T).

For i = j, we obtain the claim in a similar fashion by writing the unit vector e_{ii} as svec(E_{ii}).

To prove orthogonality of V ⊗_s V, consider columns (i,j) ≠ (k,l), with i ≥ j and k ≥ l:

    (1/√2) svec(v_i v_j^T + v_j v_i^T)^T (1/√2) svec(v_k v_l^T + v_l v_k^T)
      = (1/2) trace((v_i v_j^T + v_j v_i^T)(v_k v_l^T + v_l v_k^T))
      = (1/2) trace(v_i v_j^T v_k v_l^T + v_i v_j^T v_l v_k^T + v_j v_i^T v_k v_l^T + v_j v_i^T v_l v_k^T)
      = 0.

The last equality holds since, in order for one of the terms, say v_i v_j^T v_k v_l^T, to have nonzero trace, we need j = k and i = l. Recall that i ≥ j and k ≥ l; this yields i ≥ j = k ≥ l = i, forcing all indices to be equal. But we excluded that option. Furthermore, the following proves normality:

    (1/√2) svec(v_i v_j^T + v_j v_i^T)^T (1/√2) svec(v_i v_j^T + v_j v_i^T)
      = (1/2) trace((v_i v_j^T + v_j v_i^T)(v_i v_j^T + v_j v_i^T))
      = (1/2) trace(v_i v_j^T v_i v_j^T + v_i v_j^T v_j v_i^T + v_j v_i^T v_i v_j^T + v_j v_i^T v_j v_i^T)
      = (1/2)(0 + 1 + 1 + 0) = 1.

Having proven this result, we can now establish the following theorem on eigenvalues and eigenvectors of the symmetric Kronecker product.

Theorem 3.6 (Lemma 7.2 in [1]) Let A, B ∈ M_n be simultaneously diagonalizable matrices, let λ_1, ..., λ_n and µ_1, ..., µ_n be their eigenvalues, and v_1, ..., v_n a common basis of orthonormal eigenvectors. Then the eigenvalues of A ⊗_s B are given by

    (1/2)(λ_i µ_j + λ_j µ_i),   1 ≤ i ≤ j ≤ n,

and their corresponding eigenvectors can be written as

    svec(v_i v_j^T)                         if i = j,
    (1/√2) svec(v_i v_j^T + v_j v_i^T)      if i < j.

Proof.

    (A ⊗_s B) (1/√2) svec(v_i v_j^T + v_j v_i^T)
      = (1/√2)(1/2) svec(B(v_i v_j^T + v_j v_i^T)A^T + A(v_i v_j^T + v_j v_i^T)B^T)
      = (1/√2)(1/2) svec(Bv_i v_j^T A^T + Bv_j v_i^T A^T + Av_i v_j^T B^T + Av_j v_i^T B^T)
      = (1/√2)(1/2) svec(µ_i λ_j v_i v_j^T + µ_j λ_i v_j v_i^T + λ_i µ_j v_i v_j^T + λ_j µ_i v_j v_i^T)
      = (1/√2)(1/2) svec((µ_i λ_j + µ_j λ_i)(v_i v_j^T + v_j v_i^T))
      = (1/2)(µ_i λ_j + µ_j λ_i) (1/√2) svec(v_i v_j^T + v_j v_i^T).

Note that these eigenvectors are exactly the columns of V ⊗_s V in the lemma above. We proved that these column vectors are orthonormal, and since there are n(n+1)/2 of them, we have shown that they span the complete eigenspace of A ⊗_s B.
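Theorem 3.6 can be verified numerically. The following sketch is not from the paper; it reuses the hypothetical build_Q/skron helpers sketched after equation (14) so that it stays self-contained:

```python
# A numerical illustration (not from the paper) of Theorem 3.6: for
# simultaneously diagonalizable A and B, the eigenvalues of A ⊗_s B
# are (λ_i µ_j + λ_j µ_i)/2, 1 ≤ i ≤ j ≤ n.
import numpy as np

def build_Q(n):
    rows = []
    for j in range(n):
        for i in range(j, n):
            r = np.zeros(n * n)
            if i == j:
                r[j * n + i] = 1.0
            else:
                r[j * n + i] = r[i * n + j] = 1.0 / np.sqrt(2)
            rows.append(r)
    return np.array(rows)

def skron(G, H):
    Q = build_Q(G.shape[0])
    return 0.5 * Q @ (np.kron(G, H) + np.kron(H, G)) @ Q.T

rng = np.random.default_rng(6)
n = 3
# A and B share the orthonormal eigenvector basis V.
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam, mu = rng.standard_normal(n), rng.standard_normal(n)
A = V @ np.diag(lam) @ V.T
B = V @ np.diag(mu) @ V.T

predicted = sorted((lam[i] * mu[j] + lam[j] * mu[i]) / 2
                   for i in range(n) for j in range(i, n))
computed = sorted(np.linalg.eigvals(skron(A, B)).real)
assert np.allclose(predicted, computed)
```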
We have seen in Section 2.1 that the Kronecker product of two positive (semi)definite matrices is positive (semi)definite as well. A similar property holds for the symmetric Kronecker product.

Theorem 3.7 (see e.g. Lemma E.1.4 in [5]) If A, B ∈ S^n are positive (semi)definite, then A ⊗_s B is positive (semi)definite (not necessarily symmetric).

Proof. We need to show, for any s ∈ R^{n(n+1)/2}, s ≠ 0, that s^T (A ⊗_s B) s > 0 (or ≥ 0 in the case of positive semidefinite). By denoting s as svec(S), where S is a symmetric matrix, we can show that

    s^T (A ⊗_s B) s = svec(S)^T (A ⊗_s B) svec(S)
      = (1/2) svec(S)^T svec(BSA + ASB)
      = (1/2) trace(SBSA + SASB)
      = trace(SASB) = trace(B^{1/2} S A^{1/2} A^{1/2} S B^{1/2})
      = trace((A^{1/2} S B^{1/2})^T (A^{1/2} S B^{1/2})) = ‖A^{1/2} S B^{1/2}‖_F² > 0

(or ≥ 0 in the case of positive semidefinite). The strict inequality for the positive definite case holds since S is nonzero and both A^{1/2} and B^{1/2} are positive definite.

The following property relates positive (semi)definiteness of the ordinary Kronecker product to positive (semi)definiteness of the symmetric Kronecker product.

Theorem 3.8 (Theorem 2.10 in [13]) Let A and B be in S^n. Then A ⊗ B is positive (semi)definite if and only if A ⊗_s B is positive (semi)definite.

Proof. Assume that A ⊗ B is positive (semi)definite. Let U be a symmetric matrix. We need to show that svec(U)^T (A ⊗_s B) svec(U) > 0 (≥ 0, respectively). Indeed,

    svec(U)^T (A ⊗_s B) svec(U) = (1/2) svec(U)^T svec(BUA + AUB)
      = (1/2) trace(UBUA + UAUB) = trace(UBUA)
      = vec(U)^T vec(BUA) = vec(U)^T (A ⊗ B) vec(U) > 0

(≥ 0, respectively).

To prove the other direction, assume first that A ⊗_s B is positive definite, and assume for contradiction that A ⊗ B is not positive definite, i.e. that there exists an eigenvalue λ ≤ 0. Since any eigenvalue of A ⊗ B is a product µη, where µ is an eigenvalue of A and η is an eigenvalue of B, we must assume that one of the matrices A and B, say A, has a nonpositive eigenvalue µ, and the other one, say B, has a nonnegative eigenvalue η. We denote the corresponding eigenvectors by a and b, so that

    b^T B b = η b^T b ≥ 0,   (15)
    a^T A a = µ a^T a ≤ 0.   (16)

Now, consider an arbitrary rank one matrix vv^T. Since A ⊗_s B is positive definite,

    svec(vv^T)^T (A ⊗_s B) svec(vv^T) = (1/2) svec(vv^T)^T svec(Bvv^T A + Avv^T B)
      = (1/2) trace(vv^T Bvv^T A + vv^T Avv^T B)
      = trace(v^T Bvv^T Av) = (v^T Bv)(v^T Av) > 0.

In particular, neither factor can vanish: for v = b it follows that b^T Bb ≠ 0, and for v = a it follows that a^T Aa ≠ 0. From this and equations (15) and (16), we conclude that b^T Bb > 0, hence b^T Ab > 0, and that a^T Aa < 0, hence a^T Ba < 0.

Now let U = ab^T + ba^T. Then

    svec(U)^T (A ⊗_s B) svec(U) = trace(UBUA)
      = trace((ab^T + ba^T) B (ab^T + ba^T) A)
      = trace(ab^T B ab^T A + ab^T B ba^T A + ba^T B ab^T A + ba^T B ba^T A)
      = ηµ(b^T a)² + (b^T Bb)(a^T Aa) + (a^T Ba)(b^T Ab) + ηµ(a^T b)²
      =: P1 + P2 + P3 + P4.

Parts one and four (P1, P4) are nonpositive since ηµ ≤ 0. Part two is nonpositive by (15) and (16), and part three satisfies P3 < 0 since a^T Ba < 0 and b^T Ab > 0. This yields

    svec(U)^T (A ⊗_s B) svec(U) = P1 + P2 + P3 + P4 < 0,

contradicting the positive definiteness of A ⊗_s B. For A ⊗_s B positive semidefinite, the result follows analogously.

With this theorem, Theorem 3.7 can be established as a corollary of Corollary 2.4 and Theorem 3.8.

3.2 Applications of the symmetric Kronecker product

A major application of the symmetric Kronecker product comes up in defining the search direction for primal–dual interior–point methods in semidefinite programming. Here, one tries to solve the following system of equations (see Section 3.1 in [12]):

    𝒜X = b,
    𝒜^*y + S = C,
    XS = νI,    (17)

where 𝒜 is a linear operator from S^n to R^m with full row rank, 𝒜^* is its adjoint, b is a vector in R^m, C is a matrix in S^n, I is the n-dimensional identity matrix, and ν is a scalar. The solutions to this system for different ν > 0, (X(ν), y(ν), S(ν)), represent the central path. We try to find approximate solutions by taking Newton steps. The search direction for a single Newton step is the solution (ΔX, Δy, ΔS) of the following system of equations:

    𝒜ΔX = b − 𝒜X,
    𝒜^*Δy + ΔS = C − S − 𝒜^*y,
    ΔXS + XΔS = νI − XS.    (18)

In order to yield useful results, system (18) needs to be changed to produce symmetric solutions ΔX and ΔS. One approach was used by Alizadeh, Haeberly, and Overton [1]. They symmetrized the third equation of (17) by writing it as

    (1/2)(XS + (XS)^T) = νI.

Note that if ΔX is a solution of (18), then so is (1/2)(ΔX + ΔX^T); the same holds for ΔS. Now the last row of system (18) reads

    (1/2)(ΔXS + XΔS + SΔX + ΔSX) = νI − (1/2)(XS + SX),

so we can use svec and ⊗_s in this context. Let A be the matrix representation of 𝒜 and let A^T be the matrix representation of 𝒜^*. The modified system of equations can now be written in block form:

    [ 0    A      0     ] [    Δy    ]   [        b − 𝒜X          ]
    [ A^T  0      I     ] [ svec(ΔX) ] = [   svec(C − S − 𝒜^*y)   ]
    [ 0   E_AHO  F_AHO  ] [ svec(ΔS) ]   [ svec(νI − ½(XS + SX))  ],

where I is the identity matrix of dimension n(n+1)/2, and E_AHO and F_AHO are defined using the symmetric Kronecker product:

    E_AHO := I ⊗_s S,   F_AHO := X ⊗_s I.

The solution to this system of equations is called the AHO direction. This search direction is a special case of the more general Monteiro–Zhang family of search directions. For this family of search directions, the product XS is symmetrized via the following linear transformation:

    H_P(XS) = (1/2)(P(XS)P^{-1} + P^{-T}(XS)^T P^T),    (19)

where P is an invertible matrix. Using the Monteiro–Zhang symmetrization (see e.g. [12]), we get the following system of equations:

    [ 0    A   0 ] [    Δy    ]   [        b − 𝒜X          ]
    [ A^T  0   I ] [ svec(ΔX) ] = [   svec(C − S − 𝒜^*y)   ]    (20)
    [ 0    E   F ] [ svec(ΔS) ]   [  svec(νI − H_P(XS))    ],

where the more general matrices E and F are defined as

    E := P ⊗_s P^{-T}S,   F := PX ⊗_s P^{-T}.

Note that for P = I, we get H_I(XS) = (1/2)(XS + SX), E = E_AHO and F = F_AHO, which yields the AHO direction.
Note that, using property SKRON 7, these matrices can be written as

    E = (I ⊗_s P^{-T}SP^{-1})(P ⊗_s P),   and   F = (PXP^T ⊗_s I)(P^{-T} ⊗_s P^{-T}),

which yields the following lemma.

Lemma 3.9 If X and S are positive definite, then E and F are nonsingular.

Proof. This can be shown by proving the nonsingularity of each factor. We observe that the matrix P^{-T}SP^{-1} is positive definite, and note that I and P^{-T}SP^{-1} are simultaneously diagonalizable. Denote the eigenvalues of P^{-T}SP^{-1} by µ_i, i = 1, ..., n. Then the eigenvalues of I ⊗_s P^{-T}SP^{-1} are (1/2)(µ_j + µ_i), 1 ≤ i ≤ j ≤ n, which are positive. Therefore, the matrix I ⊗_s P^{-T}SP^{-1} is invertible. Also, P ⊗_s P is invertible because of property SKRON 10. The result for F can be obtained similarly.

Having established nonsingularity of E and F, we can now state the following theorem. It provides a sufficient condition for the uniqueness of the solution (ΔX, Δy, ΔS) of system (20).

Theorem 3.10 (Theorem 3.1 in [12]) Let X, S and E^{-1}F be positive definite (E^{-1}F does not need to be symmetric). Then system (20) has a unique solution.

Proof. We want to show that

    [ 0    A   0 ] [    Δy    ]   [ 0 ]
    [ A^T  0   I ] [ svec(ΔX) ] = [ 0 ]    (21)
    [ 0    E   F ] [ svec(ΔS) ]   [ 0 ]

has only the trivial solution. Consider the equations

    A svec(ΔX) = 0,                (22)
    A^T Δy + svec(ΔS) = 0,         (23)
    E svec(ΔX) + F svec(ΔS) = 0.   (24)

Solving equation (23) for svec(ΔS), and plugging the result into equation (24), yields

    E svec(ΔX) − F A^T Δy = 0.     (25)

When multiplying this by E^{-1} from the left and then by A, we get

    A svec(ΔX) − A E^{-1} F A^T Δy = 0,

which is, because of equation (22),

    A E^{-1} F A^T Δy = 0.

Since A has full row rank, and since E^{-1}F is positive definite, it follows that AE^{-1}FA^T is positive definite, and therefore Δy = 0. Plugging this back into equations (23) and (25) establishes the desired result.

Now we want to know conditions under which E^{-1}F is positive definite. The following results establish several such conditions.

Lemma 3.11 (part of Theorem 3.1 in [12]) E^{-1}F is positive definite if X and S are positive definite and H_P(XS) is positive semidefinite.

Proof. Let u ∈ R^{n(n+1)/2} be a nonzero vector. Denote by k the product E^{-T}u (note that k ≠ 0 since u ≠ 0 and E is nonsingular), and define K by k = svec(K). Then we have

    u^T E^{-1} F u = k^T F E^T k = k^T (PX ⊗_s P^{-T})(P^T ⊗_s SP^{-1}) k
      = (1/2) k^T (PXP^T ⊗_s P^{-T}SP^{-1} + PXSP^{-1} ⊗_s P^{-T}P^T) k
      = (1/2) k^T (PXP^T ⊗_s P^{-T}SP^{-1}) k + (1/2) k^T (PXSP^{-1} ⊗_s I) k
      > (1/2) k^T (PXSP^{-1} ⊗_s I) k
      = (1/2) svec(K)^T (PXSP^{-1} ⊗_s I) svec(K)
      = (1/4) svec(K)^T svec(K P^{-T}SXP^T + PXSP^{-1} K)
      = (1/4) trace(KKP^{-T}SXP^T + KPXSP^{-1}K)
      = (1/4) trace(K(P^{-T}SXP^T + PXSP^{-1})K)
      = (1/2) trace(K H_P(XS) K) ≥ 0,

where the second equality follows from SKRON 7, and the strict inequality holds since PXP^T ≻ 0 and P^{-T}SP^{-1} ≻ 0, and from Theorem 3.7 it follows that PXP^T ⊗_s P^{-T}SP^{-1} ≻ 0.
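Lemma 3.9 and Lemma 3.11 lend themselves to a quick numerical experiment. The following sketch is not from the paper; it again reuses the hypothetical build_Q/skron helpers, and the choice of P is only generically invertible:

```python
# An illustrative check (not from the paper) of Lemma 3.9: for X, S pd
# and invertible P, E = P ⊗_s P^{-T}S and F = PX ⊗_s P^{-T} are nonsingular.
import numpy as np

def build_Q(n):
    rows = []
    for j in range(n):
        for i in range(j, n):
            r = np.zeros(n * n)
            if i == j:
                r[j * n + i] = 1.0
            else:
                r[j * n + i] = r[i * n + j] = 1.0 / np.sqrt(2)
            rows.append(r)
    return np.array(rows)

def skron(G, H):
    Q = build_Q(G.shape[0])
    return 0.5 * Q @ (np.kron(G, H) + np.kron(H, G)) @ Q.T

rng = np.random.default_rng(5)
n = 3
M = rng.standard_normal((n, n)); X = M @ M.T + n * np.eye(n)   # X pd
M = rng.standard_normal((n, n)); S = M @ M.T + n * np.eye(n)   # S pd
P = rng.standard_normal((n, n)) + n * np.eye(n)                # generically invertible

Pinv_T = np.linalg.inv(P).T
E = skron(P, Pinv_T @ S)
F = skron(P @ X, Pinv_T)
assert abs(np.linalg.det(E)) > 1e-10 and abs(np.linalg.det(F)) > 1e-10

# For P = I: if H_I(XS) = (XS + SX)/2 is psd, then E^{-1}F is positive
# definite in the nonsymmetric sense (Lemma 3.11), i.e. G + G^T is pd.
E0, F0 = skron(np.eye(n), S), skron(X, np.eye(n))
G = np.linalg.solve(E0, F0)
if np.all(np.linalg.eigvalsh((X @ S + S @ X) / 2) >= 0):
    assert np.all(np.linalg.eigvalsh(G + G.T) > 0)
```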
Lemma 3.12 (part of Theorem 3.2 in [12]) Let X and S be positive definite. Then the following are equivalent:

1. PXSP^{-1} is symmetric,
2. FE^T is symmetric,
3. E^{-1}F is symmetric,
4. PXP^T and P^{-T}SP^{-1} commute.

Proof. For the equivalence of the first two statements, note that

    FE^T = (PX ⊗_s P^{-T})(P^T ⊗_s SP^{-1}) = (1/2)(PXP^T ⊗_s P^{-T}SP^{-1} + PXSP^{-1} ⊗_s I),

where the last equality follows from property SKRON 7. We know that PXP^T and P^{-T}SP^{-1} are symmetric, so PXP^T ⊗_s P^{-T}SP^{-1} is symmetric; therefore FE^T is symmetric if and only if PXSP^{-1} ⊗_s I is symmetric. From Corollary 3.4, it follows that PXSP^{-1} ⊗_s I is symmetric if and only if PXSP^{-1} is symmetric, i.e. if and only if the first statement holds.

The equivalence between the second and the third statement follows from the equation

    E^{-1}F = E^{-1}(FE^T)E^{-T} = E^{-1}(EF^T)E^{-T} = F^T E^{-T} = (E^{-1}F)^T,

valid whenever FE^T = (FE^T)^T = EF^T; the converse direction is analogous.

Finally, the first and fourth statements are equivalent, since

    (PXSP^{-1})^T = P^{-T}SXP^T = (P^{-T}SP^{-1})(PXP^T)   and   PXSP^{-1} = (PXP^T)(P^{-T}SP^{-1}),

so PXSP^{-1} is symmetric if and only if PXP^T and P^{-T}SP^{-1} commute.

Theorem 3.13 (Theorem 3.2 in [12]) Let X and S be positive definite. Then any of the conditions in Lemma 3.12 implies that system (20) has a unique solution.

Proof. Assume that the second (and therefore also the first) statement in Lemma 3.12 is true. We want to show that this implies that H_P(XS) is positive semidefinite. Let u be a nonzero vector in R^n. Then

    u^T H_P(XS) u = (1/2) u^T (P^{-T}SXP^T + PXSP^{-1}) u = u^T PXSP^{-1} u = u^T (PXP^T)(P^{-T}SP^{-1}) u.

Since PXP^T and P^{-T}SP^{-1} commute and are symmetric positive definite, we can denote their eigenvalue decompositions by Q̄^T D_{PXP^T} Q̄ and Q̄^T D_{P^{-T}SP^{-1}} Q̄ respectively, where Q̄ is an orthogonal matrix containing their common eigenvectors rowwise. Now we continue the above equation:

    u^T (PXP^T)(P^{-T}SP^{-1}) u = u^T Q̄^T D_{PXP^T} Q̄ Q̄^T D_{P^{-T}SP^{-1}} Q̄ u
                                 = u^T (Q̄^T D_{PXP^T} D_{P^{-T}SP^{-1}} Q̄) u.

Since Q̄^T D_{PXP^T} D_{P^{-T}SP^{-1}} Q̄ is again a positive definite matrix, it follows that u^T H_P(XS) u > 0 for all nonzero u. We now conclude from Lemma 3.11 that E^{-1}F is positive definite. Applying this information to Theorem 3.10, we get the desired result.

4 Conclusion

We have shown that the symmetric Kronecker product has several properties corresponding to the properties of the ordinary Kronecker product. However, factorizations of the symmetric Kronecker product cannot easily be derived unless we consider special cases (e.g. A and B simultaneously diagonalizable). When trying to find search directions of the Monteiro–Zhang family, the properties of the symmetric Kronecker product lead to some nice conditions for when the search direction is unique.

References

[1] F. Alizadeh, J.-P.A. Haeberly, and M.L. Overton. "Primal–Dual Interior–Point Methods for Semidefinite Programming: Convergence Rates, Stability and Numerical Results." SIAM J. Optim., 8-3:746–768, 1998.
[2] V. Amoia, G. De Micheli, and M. Santomauro. "Computer-Oriented Formulation of Transition-Rate Matrices via Kronecker Algebra." IEEE Transactions on Reliability, 30:123–132, 1981.

[3] J.W. Brewer. "Kronecker Products and Matrix Calculus in System Theory." IEEE Transactions on Circuits and Systems, 25-9:772–781, 1978.

[4] J.W. Brewer. "Correction to: 'Kronecker Products and Matrix Calculus in System Theory' (IEEE Trans. Circuits and Systems 25 (1978), no. 9, 772–781)." IEEE Transactions on Circuits and Systems, 26-5:360, 1979.

[5] E. de Klerk. Aspects of Semidefinite Programming. Kluwer Academic Publishers, The Netherlands, 2002.

[6] A. Graham. Kronecker Products and Matrix Calculus With Applications. Ellis Horwood Limited, Chichester, 1981.

[7] G.H. Golub and C.F. Van Loan. Matrix Computations, 3rd Edition. Johns Hopkins University Press, Baltimore, MD, 1996.

[8] B. Holmquist. "The direct product permuting matrices." Linear and Multilinear Algebra, 17:117–141, 1985.

[9] R.A. Horn and C.R. Johnson. Topics in Matrix Analysis. Cambridge University Press, Cambridge, 1991.

[10] H.V. Henderson, F. Pukelsheim, and S.R. Searle. "On the History of the Kronecker Product." Linear and Multilinear Algebra, 14:113–120, 1983.

[11] H.V. Henderson and S.R. Searle. "The Vec-Permutation Matrix, the Vec Operator and Kronecker Products: A Review." Linear and Multilinear Algebra, 9:271–288, 1981.

[12] M.J. Todd, K.C. Toh, and R.H. Tütüncü. "On the Nesterov–Todd Direction in Semidefinite Programming." SIAM J. Optim., 8-3:769–796, 1998.

[13] L. Tunçel and H. Wolkowicz. "Strengthened Existence and Uniqueness Conditions for Search Directions in Semidefinite Programming." Submitted in July 2003.

[14] C.F. Van Loan and N. Pitsianis. "Approximation With Kronecker Products." In Linear Algebra for Large Scale and Real-Time Applications, pages 293–314, 1993.

[15] Y. Zhang. "On extending some primal–dual interior–point algorithms from linear programming to semidefinite programming." SIAM J. Optim., 8:365–386, 1998.