Archive

Posts Tagged ‘Linear Preserver Problems’

Isometries of the Vector p-Norms: Signed and Complex Permutation Matrices

September 17th, 2010

Recall that in linear algebra, the vector p-norm of a vector x ∈ Cn (or x ∈ Rn) is defined to be

\[ \|x\|_p := \left( \sum_{i=1}^n |x_i|^p \right)^{1/p}, \]

where xi is the ith element of x and 1 ≤ p ≤ ∞ (where the p = ∞ case is understood to mean the limit as p approaches ∞, which gives the maximum norm). By far the most well-known of these norms is the Euclidean norm, which arises when p = 2. Another well-known norm arises when p = 1, which gives the “taxicab” norm.
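
As a quick numerical aside (not part of the original post), numpy's np.linalg.norm computes these vector p-norms directly via its ord parameter:

```python
import numpy as np

x = np.array([3 - 4j, 1j, -2])          # an arbitrary complex vector

for p in [1, 2, 3, np.inf]:
    # ord=p gives (sum_i |x_i|^p)^(1/p); ord=np.inf gives max_i |x_i|
    print(p, np.linalg.norm(x, ord=p))
```

The p = 1 value is the sum of the moduli of the entries, p = 2 is the usual Euclidean length, and p = ∞ is the largest modulus.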

The problem investigated in this post is to characterize the operators that preserve the p-norms – that is, to determine their isometries. In the p = 2 case of the Euclidean norm, the answer is well-known: the isometries of the real Euclidean norm are exactly the orthogonal matrices, and the isometries of the complex Euclidean norm are exactly the unitary matrices. It turns out that if p ≠ 2 then the isometry group looks quite different. Indeed, Exercise IV.1.3 of [1] asks the reader to show that the isometries of the p = 1 and p = ∞ norms are what are known as complex permutation matrices (to be defined below). We will investigate those cases as well as the situation when p ≠ 1, 2, ∞.

p = 1: The “Taxicab” Norm

Recall that a permutation matrix is a matrix with exactly one “1” in each of its rows and columns, and a “0” in every other position. A signed permutation matrix (a special case of a generalized permutation matrix) is similar: every row and column has exactly one non-zero entry, which is either 1 or -1. Similarly, a complex permutation matrix is a matrix in which every row and column has exactly one non-zero entry, and every non-zero entry is a complex number with modulus 1.
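
To make the definitions concrete, here is a small illustrative construction (my own; the helper name random_complex_permutation is hypothetical) of a random complex permutation matrix – a signed permutation matrix is obtained the same way, with the random phases replaced by random signs:

```python
import numpy as np

def random_complex_permutation(n, rng):
    """Place one random unimodular phase in each row and column."""
    P = np.zeros((n, n), dtype=complex)
    perm = rng.permutation(n)                      # which row each column hits
    phases = np.exp(2j * np.pi * rng.random(n))    # modulus-1 entries
    P[perm, np.arange(n)] = phases
    return P

rng = np.random.default_rng(0)
P = random_complex_permutation(4, rng)
print(np.count_nonzero(P, axis=0), np.count_nonzero(P, axis=1))  # one non-zero per column and row
print(np.allclose(np.abs(P[P != 0]), 1))                          # every non-zero entry has modulus 1
```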

It is not difficult to show that if x ∈ Rn then the taxicab norm of x is preserved by signed permutation matrices, and if x ∈ Cn then the taxicab norm of x is preserved by complex permutation matrices. We will now show that the converse holds:

Theorem 1. Let P ∈ Mn be an n × n matrix. Then

\[ \|Px\|_1 = \|x\|_1 \quad \text{for all } x \in \mathbb{C}^n \ (\text{or all } x \in \mathbb{R}^n) \]

if and only if P is a complex permutation matrix (or a signed permutation matrix, respectively).

Proof. We only prove the “only if” implication, since the “if” implication is a routine computation (an exercise for the interested reader). So suppose that P is an isometry of the p = 1 vector norm. Let ei denote the ith standard basis vector, let pi denote the ith column of P, and let pij denote the (j,i)-entry of P (i.e., the jth entry of pi). Then Pei = pi for all i, so

\[ \sum_j |p_{ij}| = \|p_i\|_1 = \|Pe_i\|_1 = \|e_i\|_1 = 1 \quad \text{for all } i. \]

Similarly, P(ei + ek) = pi + pk for all i ≠ k, so

\[ \sum_j |p_{ij} + p_{kj}| = \|p_i + p_k\|_1 = \|e_i + e_k\|_1 = 2 = \|p_i\|_1 + \|p_k\|_1 = \sum_j \big( |p_{ij}| + |p_{kj}| \big). \]

However, by the equality condition in the triangle inequality for the complex absolute value, the above equality can only hold if there exist non-negative real constants cijk ≥ 0 such that pij = cijkpkj (or pkj = cijkpij). It is similarly the case that P(ei – ek) = pi – pk for all i ≠ k, so

\[ \sum_j |p_{ij} - p_{kj}| = \|p_i - p_k\|_1 = \|e_i - e_k\|_1 = 2 = \sum_j \big( |p_{ij}| + |p_{kj}| \big). \]

Using the equality condition for the complex absolute value again, we know that there exist non-negative real constants dijk ≥ 0 such that pij = -dijkpkj (or pkj = -dijkpij). Now if two entries pij and pkj in the same row were both non-zero, then pij would be both a positive and a negative multiple of pkj, which is impossible. It follows that each row contains at most one non-zero entry (and each row must indeed contain at least one non-zero entry, since the isometries of any norm are nonsingular).

Thus every row has exactly one non-zero entry. By using (again) the fact that isometries must be nonsingular, it follows that each of the non-zero entries must occur in a distinct column (otherwise there would be a zero column). The fact that each non-zero entry has modulus 1 follows from simply noting that P must preserve the p = 1 norm of each ei.
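
A short numerical check of Theorem 1 (my own illustration, not from the original post): a random complex permutation matrix preserves the 1-norm, while a unitary matrix that is not a complex permutation matrix generally does not.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# A random complex permutation matrix (one unimodular entry per row and column).
P = np.zeros((n, n), dtype=complex)
P[rng.permutation(n), np.arange(n)] = np.exp(2j * np.pi * rng.random(n))

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
print(np.linalg.norm(P @ x, 1), np.linalg.norm(x, 1))   # equal (up to rounding)

# The 2x2 Fourier/Hadamard matrix is unitary but not a complex permutation
# matrix, and it fails to preserve the 1-norm:
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
e1 = np.array([1.0, 0.0])
print(np.linalg.norm(H @ e1, 1), np.linalg.norm(e1, 1))  # ~1.414 vs 1.0
```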

p = ∞: The Maximum Norm

As with the p = 1 case, it is not difficult to show that if x ∈ Rn then the maximum norm of x is preserved by signed permutation matrices, and if x ∈ Cn then the maximum norm of x is preserved by complex permutation matrices. We will now show that the converse holds in this case as well:

Theorem 2. Let P ∈ Mn be an n × n matrix. Then

\[ \|Px\|_\infty = \|x\|_\infty \quad \text{for all } x \in \mathbb{C}^n \ (\text{or all } x \in \mathbb{R}^n) \]

if and only if P is a complex permutation matrix (or a signed permutation matrix, respectively).

Proof. Again, we only prove the “only if” implication, since the “if” implication is a routine computation. So suppose that P is an isometry of the p = ∞ vector norm. As before, let ei denote the ith standard basis vector, let pi denote the ith column of P, and let pij denote the (j,i)-entry of P (i.e., the jth entry of pi). Then Pei = pi for all i, so

\[ \max_j |p_{ij}| = \|p_i\|_\infty = \|Pe_i\|_\infty = \|e_i\|_\infty = 1 \quad \text{for all } i. \]

In other words, each entry of P has modulus at most 1, and each column has at least one entry with modulus equal to 1. Also, P(ei ± ek) = pi ± pk for all i ≠ k, so

\[ \max_j |p_{ij} \pm p_{kj}| = \|p_i \pm p_k\|_\infty = \|e_i \pm e_k\|_\infty = 1 \quad \text{for all } i \neq k. \]

It follows that if |pij| = 1, then pkj = 0 for all k ≠ i; that is, a modulus-1 entry is the only non-zero entry in its row. Each of the n columns contains at least one modulus-1 entry, and no two of these entries can share a row (that row would then contain two non-zero entries), so they occupy n distinct rows and every row contains exactly one non-zero entry, which has modulus 1. Since P therefore has exactly n non-zero entries and every column contains at least one of them, each column contains exactly one as well. Hence P is a complex (or signed) permutation matrix.
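
The same kind of sanity check works for Theorem 2 (again just an illustrative aside, not from the post): a signed permutation matrix preserves the maximum norm, while a 45-degree rotation, although orthogonal, does not.

```python
import numpy as np

S = np.array([[0, -1],
              [1,  0]])                      # a signed permutation matrix
Q = np.array([[1.0, -1.0],
              [1.0,  1.0]]) / np.sqrt(2)     # orthogonal, but not a signed permutation

x = np.array([1.0, 0.0])
print(np.linalg.norm(S @ x, np.inf), np.linalg.norm(x, np.inf))  # 1.0 and 1.0
print(np.linalg.norm(Q @ x, np.inf), np.linalg.norm(x, np.inf))  # ~0.707 and 1.0
```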

Any p ≠ 2

When p = 2, the isometries are orthogonal/unitary matrices. When p = 1 or p = ∞, the isometries are signed/complex permutation matrices, which are a very small subset of the orthogonal/unitary matrices. One might naively expect that the isometries for other values of p somehow interpolate between those two extremes. Alternatively, one might expect that the signed/complex permutation matrices are the only isometries for all other values of p as well. It turns out that the latter conjecture is correct [2,3].

Theorem 3. Let P ∈ Mn be an n × n matrix and let p ∈ [1,2) ∪ (2,∞]. Then

\[ \|Px\|_p = \|x\|_p \quad \text{for all } x \in \mathbb{C}^n \ (\text{or all } x \in \mathbb{R}^n) \]

if and only if P is a complex permutation matrix (or a signed permutation matrix, respectively).
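
As a numerical aside (my own, not a proof of anything), a generic rotation matrix – orthogonal, and hence an isometry of the 2-norm – fails to preserve the 3-norm, while a signed permutation matrix does:

```python
import numpy as np

theta = 0.3
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # orthogonal, not a signed permutation
S = np.array([[0, -1],
              [1,  0]])                            # a signed permutation matrix

x = np.array([1.0, 2.0])
print(np.linalg.norm(Q @ x, 3), np.linalg.norm(x, 3))   # differ
print(np.linalg.norm(S @ x, 3), np.linalg.norm(x, 3))   # agree
```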

References:

  1. R. Bhatia, Matrix Analysis. Graduate Texts in Mathematics 169, Springer (1997).
  2. S. Chang and C.-K. Li, Certain isometries on Rn. Linear Algebra Appl. 165, 251–265 (1992).
  3. C.-K. Li and W. So, Isometries of lp norm. Amer. Math. Monthly 101, 452–453 (1994).

Isometries of Unitarily-Invariant Complex Matrix Norms

August 15th, 2010

Recall that a unitarily-invariant matrix norm is a norm ‖·‖ on matrices X ∈ Mn such that

\[ \|UXV\| = \|X\| \quad \text{for all } X \in M_n \text{ and all unitary } U, V \in M_n. \]

One nice way to think about unitarily-invariant norms is that they are the matrix norms that depend only on the matrix’s singular values. Some unitarily-invariant norms that are particularly well-known are the operator (spectral) norm, trace norm, Frobenius (Hilbert-Schmidt) norm, Ky Fan norms, and Schatten p-norms (in fact, I would say that the induced p-norms for p ≠ 2 are the only really common matrix norms that aren’t unitarily-invariant – I will consider these norms in the future).
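
As a small illustration (my own, using numpy's built-in matrix norms), each of these common unitarily-invariant norms can indeed be computed directly from the singular values:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
s = np.linalg.svd(X, compute_uv=False)          # singular values of X

print(np.linalg.norm(X, 2),     s[0])                   # operator (spectral) norm = largest singular value
print(np.linalg.norm(X, 'nuc'), s.sum())                # trace (nuclear) norm = sum of singular values
print(np.linalg.norm(X, 'fro'), np.sqrt((s**2).sum()))  # Frobenius norm = 2-norm of the singular values
```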

The core question that I am going to consider today is what linear maps preserve singular values and unitarily-invariant matrix norms. Clearly multiplication on the left and right by unitary matrices preserves such norms (by definition). However, the matrix transpose also preserves singular values and all unitarily-invariant norms – are there other linear maps on complex matrices that preserve these norms? For a more thorough treatment of this question, the interested reader is directed to [1,2].

Linear Maps That Preserve Singular Values

We first consider the simplest of the above questions: which linear maps Φ : Mn → Mn are such that the singular values of Φ(X) are the same as the singular values of X for all X ∈ Mn? In order to answer this question, recall Theorem 1 from my previous post, which states [3] that if Φ is an invertible linear map such that Φ(X) is nonsingular whenever X is nonsingular, then there exist M, N ∈ Mn with det(MN) ≠ 0 such that

\[ \Phi(X) = MXN \ \text{ for all } X \in M_n \quad \text{or} \quad \Phi(X) = MX^TN \ \text{ for all } X \in M_n. \]

In order to make use of this result, we will first have to show that any singular-value-preserving map is invertible and sends nonsingular matrices to nonsingular matrices. To this end, recall that the operator norm of a matrix is equal to its largest singular value. Thus, any map that preserves singular values must be an isometry of the operator norm, and hence must be invertible (since every isometry of a norm is easily seen to be invertible).

Furthermore,  if we use the singular value decomposition to write X = USV for some unitaries U, V ∈ Mn and a diagonal matrix of singular values S ∈ Mn, then det(X) = det(USV) = det(U)det(S)det(V) = det(UV)det(S). Because UV is unitary, we know that |det(UV)| = 1, so we have |det(X)| = |det(S)| = det(S); that is, the product of the singular values of X equals the absolute value of its determinant. So any map that preserves singular values also preserves the absolute value of the matrix determinant. But any map that preserves the absolute value of determinants must preserve the set of nonsingular matrices because X is nonsingular if and only if det(X) ≠ 0. It follows from the above result about invertibility-preserving maps that if Φ preserves singular values then there exist M, N ∈ Mn with det(MN) ≠ 0 such that either Φ(X) = MXN or Φ(X) = MXTN.

We will now prove that M and N must each in fact be unitary. To this end, pick any unit vector x ∈ Cn and let c denote the Euclidean length of Mx:

\[ c := \|Mx\|_2. \]

By the fact that Φ must preserve singular values (and hence the operator norm, which for a rank-one matrix xy* is just ‖x‖2‖y‖2), we have that if y ∈ Cn is any other unit vector, then (treating the case Φ(X) = MXN; the transpose case is analogous)

\[ 1 = \|xy^*\| = \|\Phi(xy^*)\| = \|Mxy^*N\| = \|(Mx)(N^*y)^*\| = \|Mx\|_2 \, \|N^*y\|_2 = c\,\|N^*y\|_2. \]

It follows that ‖N*y‖2 = 1/c for every unit vector y, so cN* preserves the Euclidean norm and is therefore unitary; that is, N* = (1/c)U for some unitary matrix U ∈ Mn. It can similarly be argued that M = cV for some unitary matrix V ∈ Mn. Since the constants c and 1/c cancel one another, we have proved the following:

Theorem 1. Let Φ : Mn → Mn be a linear map. Then the singular values of Φ(X) equal the singular values of X for all X ∈ Mn if and only if there exist unitary matrices U, V ∈ Mn such that

\[ \Phi(X) = UXV \ \text{ for all } X \in M_n \quad \text{or} \quad \Phi(X) = UX^TV \ \text{ for all } X \in M_n. \]

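Here is a quick numerical check of the “if” direction of Theorem 1 (an illustration of mine; the QR-based construction of random unitaries is just one convenient choice): maps of the form X ↦ UXV and X ↦ UXTV do not change the singular values.

```python
import numpy as np

def random_unitary(n, rng):
    """The QR decomposition of a random complex Gaussian matrix gives a unitary Q."""
    Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    Q, _ = np.linalg.qr(Z)
    return Q

rng = np.random.default_rng(3)
n = 4
U, V = random_unitary(n, rng), random_unitary(n, rng)
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

svals = lambda A: np.sort(np.linalg.svd(A, compute_uv=False))
print(np.allclose(svals(U @ X @ V), svals(X)))    # True
print(np.allclose(svals(U @ X.T @ V), svals(X)))  # True
```
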
Isometries of the Frobenius Norm

We now consider the problem of characterizing the isometries of the Frobenius norm, defined for X ∈ Mn by

\[ \|X\|_F := \sqrt{\mathrm{Tr}(X^*X)} = \Big( \sum_{i,j=1}^n |x_{ij}|^2 \Big)^{1/2}. \]

That is, we want to describe the maps Φ that preserve the Frobenius norm. It is clear that the Frobenius norm of X is just the Euclidean norm of vec(X), the vectorization of X. Thus we know immediately from the standard isomorphism that sends operators to bipartite vectors and superoperators to bipartite operators (under which (A ⊗ B)vec(X) = vec(AXBT)) that Φ preserves the Frobenius norm if and only if there exist families of operators {Ai}, {Bi} such that Σi Ai ⊗ Bi is a unitary matrix and

\[ \Phi(X) = \sum_i A_i X B_i^T \quad \text{for all } X \in M_n. \]

It is clear that any map of the form described by Theorem 1 above can be written in this form, but there are also many maps of this type that are not of the form described by Theorem 1. In the next section we will see that the Frobenius norm is essentially the only unitarily-invariant complex matrix norm that admits isometries not of the form described by Theorem 1.
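
Here is one concrete example of such a map (my own illustration, not from the post): swapping two entries of X is a permutation of the entries of vec(X), so it preserves the Frobenius norm, but it changes the singular values and therefore cannot be written as X ↦ UXV or X ↦ UXTV.

```python
import numpy as np

def phi(X):
    """Swap the (0,0) and (0,1) entries of X and leave everything else alone."""
    Y = X.copy()
    Y[0, 0], Y[0, 1] = X[0, 1], X[0, 0]
    return Y

X = np.eye(2)
print(np.linalg.norm(X, 'fro'), np.linalg.norm(phi(X), 'fro'))   # both sqrt(2)
print(np.linalg.svd(X, compute_uv=False))                        # [1. 1.]
print(np.linalg.svd(phi(X), compute_uv=False))                   # [1.414..., 0.]
```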

Isometries of Other Unitarily-Invariant Norms

One way of thinking about Theorem 1 is as providing a canonical form for any map Φ that preserves all unitarily-invariant norms. However, in many cases it is enough that Φ preserves a single unitarily-invariant norm for it to be of that form. For example, it was shown by Schur in 1925 [4] that if Φ preserves the operator norm then it must be of the form described by Theorem 1. The same result was proved for the trace norm by Russo in 1969 [5]. Li and Tsing extended the same result to the remaining Schatten p-norms, Ky Fan norms, and (p,k)-norms in 1988 [6].

In fact, the following result, which completely characterizes isometries of all unitarily-invariant complex matrix norms other than the Frobenius norm, was obtained in [7]:

Theorem 2. Let Φ : Mn → Mn be a linear map and let ‖·‖ be a unitarily-invariant norm on Mn that is not a multiple of the Frobenius norm. Then ‖Φ(X)‖ = ‖X‖ for all X ∈ Mn if and only if there exist unitary matrices U, V ∈ Mn such that

\[ \Phi(X) = UXV \ \text{ for all } X \in M_n \quad \text{or} \quad \Phi(X) = UX^TV \ \text{ for all } X \in M_n. \]

References:

  1. C.-K. Li and S. Pierce, Linear preserver problems. The American Mathematical Monthly 108, 591–605 (2001).
  2. C.-K. Li, Some aspects of the theory of norms. Linear Algebra and its Applications 212–213, 71–100 (1994).
  3. J. Dieudonné, Sur une généralisation du groupe orthogonal à quatre variables. Arch. Math. 1, 282–287 (1949).
  4. I. Schur, Einige Bemerkungen zur Determinantentheorie. Sitzungsber. Preuss. Akad. Wiss. Berlin 25, 454–463 (1925).
  5. B. Russo, Trace preserving mappings of matrix algebra. Duke Math. J. 36, 297–300 (1969).
  6. C.-K. Li and N.-K. Tsing, Some isometries of rectangular complex matrices. Linear and Multilinear Algebra 23, 47–53 (1988).
  7. C.-K. Li and N.-K. Tsing, Linear operators preserving unitarily invariant norms of matrices. Linear and Multilinear Algebra 26, 119–132 (1990).

An Introduction to Linear Preserver Problems

August 5th, 2010

The theory of linear preserver problems deals with characterizing linear (complex) matrix-valued maps that preserve certain properties of the matrices they act on. For example, some of the most famous linear preserver problems ask what a map must look like if it preserves invertibility or the determinant of matrices. Today I will focus on introducing some of the basic linear preserver problems that got the field off the ground – in the near future I will explore linear preserver problems dealing with various families of norms and linear preserver problems that are actively used today in quantum information theory. In the meantime, the interested reader can find a more thorough introduction to common linear preserver problems in [1,2].

Suppose Φ : Mn → Mn (where Mn is the set of n×n complex matrices) is a linear map. It is well-known that any such map can be written in the form

\[ \Phi(X) = \sum_i A_i X B_i \quad \text{for all } X \in M_n, \]

where {Ai}, {Bi} ⊂ Mn are families of matrices – sometimes referred to as the left and right generalized Choi-Kraus operators of Φ (phew!). A concrete instance of this form is worked out in code after Theorem 1 below. But what if we make the additional restrictions that Φ is an invertible map and Φ(X) is nonsingular whenever X ∈ Mn is nonsingular? The problem of characterizing maps of this type (which are sometimes called invertibility-preserving maps) is one of the first linear preserver problems to be solved, and it turns out that if Φ is invertibility-preserving then either Φ or T ○ Φ (where T denotes the matrix transpose map) can be written with just a single pair of Choi-Kraus operators:

Theorem 1. [3] Let Φ : Mn → Mn be an invertible linear map. Then Φ(X) is nonsingular whenever X ∈ Mn is nonsingular if and only if there exist M, N ∈ Mn with det(MN) ≠ 0 such that

\[ \Phi(X) = MXN \ \text{ for all } X \in M_n \quad \text{or} \quad \Phi(X) = MX^TN \ \text{ for all } X \in M_n. \]

In addition to being interesting in its own right, Theorem 1 serves as a starting point that allows for the simple derivation of several related results.
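
As a concrete aside (my own illustration, not from the post), the transpose map that shows up in Theorem 1 genuinely needs more than a single pair of operators: it can be written in the general Σi Ai X Bi form above by taking both families to be the matrix units Eij, since Σi,j Eij X Eij = XT.

```python
import numpy as np

n = 3
rng = np.random.default_rng(4)
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Build sum_{i,j} E_ij X E_ij, where E_ij is the matrix unit with a 1 in position (i, j).
Y = np.zeros((n, n), dtype=complex)
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n))
        E[i, j] = 1
        Y += E @ X @ E

print(np.allclose(Y, X.T))   # True: the transpose map in "sum of A_i X B_i" form
```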

Determinant-Preserving Maps

For example, suppose Φ is a linear map such that det(Φ(X)) = det(X) for all X ∈ Mn. We will now use Theorem 1 to find the form that maps of this type (called determinant-preserving maps) must have. In order to use Theorem 1, though, we must first show that Φ is invertible.

We prove that Φ is invertible by contradiction. Suppose there exists X ≠ 0 such that Φ(X) = 0. Because Φ preserves determinants, det(X) = det(Φ(X)) = det(0) = 0, so X is singular. Since X is singular but non-zero, there exists a singular Y ∈ Mn such that X + Y is nonsingular (for example, using the singular value decomposition of X, take Y to be supported on the complementary singular directions). It follows that 0 ≠ det(X + Y) = det(Φ(X + Y)) = det(Φ(X) + Φ(Y)) = det(Φ(Y)) = det(Y) = 0, a contradiction. Thus no non-zero X is mapped to 0, so Φ is invertible.

Furthermore, any map that preserves determinants must preserve the set of nonsingular matrices because X is nonsingular if and only if det(X) ≠ 0. It follows from Theorem 1 that for any determinant-preserving map Φ there must exist M, N ∈ Mn with det(MN) ≠ 0 such that either Φ(X) = MXN or Φ(X) = MXTN. However, in this case we have det(X) = det(Φ(X)) = det(MXN) = det(MN)det(X) for all X ∈ Mn, so det(MN) = 1. Conversely, it is not difficult (an exercise left to the interested reader) to show that any map of this form with det(MN) = 1 must be determinant-preserving. What we have proved is the following result, originally due to Frobenius [4]:

Theorem 2. Let Φ : Mn → Mn be a linear map. Then det(Φ(X)) = det(X) for all X ∈ Mn if and only if there exist M, N ∈ Mn with det(MN) = 1 such that

\[ \Phi(X) = MXN \ \text{ for all } X \in M_n \quad \text{or} \quad \Phi(X) = MX^TN \ \text{ for all } X \in M_n. \]

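A quick numerical check of the “if” direction of Theorem 2 (an illustration; the rescaling trick below is just one assumed way to arrange det(MN) = 1):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
N = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

N = N / np.linalg.det(M @ N) ** (1 / n)   # rescale so that det(MN) = 1

print(np.linalg.det(M @ X @ N))     # all three printed values agree
print(np.linalg.det(M @ X.T @ N))   # (recall det(X^T) = det(X))
print(np.linalg.det(X))
```
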
Spectrum-Preserving Maps

The final linear preserver problem that we will consider right now is the problem of characterizing linear maps Φ such that the eigenvalues (counting multiplicities) of Φ(X) are the same as the eigenvalues of X for all X ∈ Mn (such maps are sometimes called spectrum-preserving maps). Certainly any map that is spectrum-preserving must also be determinant-preserving (since the determinant of a matrix is just the product of its eigenvalues), so by Theorem 2 there exist M, N ∈ Mn with det(MN) = 1 such that either Φ(X) = MXN or Φ(X) = MXTN.

Now note that any map that preserves eigenvalues must also preserve the trace (since the trace is just the sum of the matrix’s eigenvalues), and so we have Tr(X) = Tr(Φ(X)) = Tr(MXN) = Tr(NMX) for all X ∈ Mn. This implies that Tr((I – NM)X) = 0 for all X ∈ Mn, and since the only matrix A satisfying Tr(AX) = 0 for all X is A = 0, we have NM = I (i.e., M = N-1). Conversely, it is simple (another exercise left for the interested reader) to show that any map of this form with M = N-1 must be spectrum-preserving. What we have proved is the following characterization of maps that preserve eigenvalues:

Theorem 3. Let Φ : Mn → Mn be a linear map. Then Φ is spectrum-preserving if and only if det(Φ(X)) = det(X) and Tr(Φ(X)) = Tr(X) for all X ∈ Mn if and only if there exists a nonsingular N ∈ Mn such that

\[ \Phi(X) = N^{-1}XN \ \text{ for all } X \in M_n \quad \text{or} \quad \Phi(X) = N^{-1}X^TN \ \text{ for all } X \in M_n. \]

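And a matching numerical check of the “if” direction of Theorem 3 (again just an illustrative aside): similarity by N, with or without a transpose, leaves the spectrum unchanged.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 4
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
N = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Ninv = np.linalg.inv(N)

spec = lambda A: np.sort_complex(np.linalg.eigvals(A))
print(np.allclose(spec(Ninv @ X @ N), spec(X)))    # True
print(np.allclose(spec(Ninv @ X.T @ N), spec(X)))  # True (X^T has the same eigenvalues as X)
```
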
References:

  1. C.-K. Li and S. Pierce, Linear preserver problems. The American Mathematical Monthly 108, 591–605 (2001).
  2. C.-K. Li and N.-K. Tsing, Linear preserver problems: A brief introduction and some special techniques. Linear Algebra and its Applications 162–164, 217–235 (1992).
  3. J. Dieudonné, Sur une généralisation du groupe orthogonal à quatre variables. Arch. Math. 1, 282–287 (1949).
  4. G. Frobenius, Über die Darstellung der endlichen Gruppen durch lineare Substitutionen. Sitzungsber. Deutsch. Akad. Wiss. Berlin, 994–1015 (1897).