Archive for September, 2009

Quantum Semidefinite Programs

September 25th, 2009

In quantum information theory, semidefinite programs are often useful, since one is frequently interested in the behaviour of linear maps over convex sets. For example, they have very recently been used to compute the completely bounded norm of a linear map [1], prove that QIP = PSPACE [2], and bound a new family of norms of operators [3]. However, if you were to look at the standard form of a semidefinite program provided on the Wikipedia page linked above, you would likely see only some very superficial connections with the form of the quantum semidefinite programs in references [1-3]. This post aims to bridge that gap and show that the two forms are indeed equivalent (or at the very least outline the key steps in proving that they are).

The “Quantum” Form

Let Mn denote the space of n×n complex matrices. Assume that we are given Hermitian matrices A = A* ∈ Mn and B = B* ∈ Mm, as well as a Hermiticity-preserving linear map Φ : Mn → Mm (i.e., a map such that Φ(X) is Hermitian whenever X is Hermitian). Then we can define a “quantum” semidefinite program to be the following pair of optimization problems:

\begin{aligned}\text{Primal: }&\ \sup\big\{\mathrm{Tr}(AP) \,:\, \Phi(P) \leq B,\ P \geq 0\big\}\\ \text{Dual: }&\ \inf\big\{\mathrm{Tr}(BQ) \,:\, \Phi^*(Q) \geq A,\ Q \geq 0\big\}\end{aligned}

In the dual problem, Φ* refers to the dual map of Φ — that is, the adjoint map in the sense of the Hilbert-Schmidt inner product. It is not surprising that many problems in quantum information theory can be formulated as an optimization problem of this type — completely positive maps (a special class of Hermiticity-preserving maps) model quantum channels, positive semidefinite matrices represent quantum states, and the trace of a product of two positive semidefinite matrices represents an expectation value.
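To make the adjoint relation concrete, here is a minimal numpy sketch (all names and the random Kraus operators are hypothetical) verifying that ⟨Φ(X), Y⟩ = ⟨X, Φ*(Y)⟩ in the Hilbert-Schmidt inner product for a completely positive map Φ(X) = Σ_a K_a X K_a*, whose adjoint is Φ*(Y) = Σ_a K_a* Y K_a:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2

# A completely positive map Phi(X) = sum_a K_a X K_a^*, built from random
# (hypothetical) Kraus operators mapping C^n to C^m.
kraus = [rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
         for _ in range(4)]

def phi(X):
    return sum(K @ X @ K.conj().T for K in kraus)

def phi_dual(Y):
    # The dual (adjoint) map with respect to the Hilbert-Schmidt inner product.
    return sum(K.conj().T @ Y @ K for K in kraus)

def hs_inner(A, B):
    # Hilbert-Schmidt inner product <A, B> = Tr(A^* B).
    return np.trace(A.conj().T @ B)

X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Y = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))

# <Phi(X), Y> should equal <X, Phi^*(Y)> for all X and Y.
lhs = hs_inner(phi(X), Y)
rhs = hs_inner(X, phi_dual(Y))
assert abs(lhs - rhs) < 1e-8
```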

The Standard Form

In the more conventional setup of semidefinite programming, we are given matrices D, G_1, …, G_s ∈ Mr and a vector c ∈ Cs. The associated semidefinite program is given by the following pair of optimization problems:

\begin{aligned}\text{Primal: }&\ \sup\Big\{c^*x \,:\, x\in\mathbb{C}^s,\ \textstyle\sum_{i=1}^s x_i G_i \leq D\Big\}\\ \text{Dual: }&\ \inf\big\{\mathrm{Tr}(DZ) \,:\, Z \geq 0,\ \mathrm{Tr}(G_i^* Z) = c_i\ \ \forall\, i\big\}\end{aligned}

The interested reader can read on Wikipedia about how semidefinite programs generalize linear programs and how their duality theory works. It is also important to note that semidefinite programs can be solved efficiently to any desired accuracy by a variety of different solvers, using a number of different algorithms. Thus, once we show that quantum semidefinite programs can be put into this standard form, we will be able to solve quantum semidefinite programs efficiently as well.

Converting the Quantum Form to the Standard Form

Define a linear map Ψ : Mn → (Mm ⊕ Mn) by

\Psi(X) := \begin{bmatrix}\Phi(X) & 0 \\ 0 & -X \end{bmatrix}.

Then the requirement that $\Phi(P) \leq B$ and $P \geq 0$ is equivalent to
\Psi(P) \leq \begin{bmatrix}B & 0 \\ 0 & 0 \end{bmatrix}.

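As a sanity check on this equivalence, the sketch below (purely for illustration; Φ is taken to be the identity map, and all names are hypothetical) builds the block map Ψ(X) = Φ(X) ⊕ (−X) and confirms that a pair with Φ(P) ≤ B and P ≥ 0 also satisfies Ψ(P) ≤ B ⊕ 0:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

# For illustration only, take Phi to be the identity map on M_n (a
# Hermiticity-preserving map with m = n); any other choice works the same way.
def phi(X):
    return X

def psi(X):
    # Psi(X) = Phi(X) (+) (-X), written as a block-diagonal matrix.
    Z = np.zeros_like(X)
    return np.block([[phi(X), Z], [Z, -X]])

def is_psd(M, tol=1e-10):
    # A Hermitian matrix is positive semidefinite iff its eigenvalues are >= 0.
    return np.min(np.linalg.eigvalsh(M)) >= -tol

# A feasible pair: P positive semidefinite with Phi(P) <= B.
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
P = G @ G.conj().T            # P >= 0 by construction
B = phi(P) + np.eye(n)        # so B - Phi(P) = I >= 0

big_B = np.block([[B, np.zeros((n, n))],
                  [np.zeros((n, n)), np.zeros((n, n))]])

# Phi(P) <= B and P >= 0 together are equivalent to Psi(P) <= B (+) 0.
assert is_psd(B - phi(P)) and is_psd(P)
assert is_psd(big_B - psi(P))
```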

The dual map Ψ* (again in the sense of the Hilbert-Schmidt inner product) is given by

\Psi^*(Y \oplus Z) = \Phi^*(Y) - Z.

By putting these last few steps together, we see that our original quantum semidefinite program is of the following form:

\begin{aligned}\text{Primal: }&\ \sup\Big\{\mathrm{Tr}(AP) \,:\, \Psi(P) \leq \begin{bmatrix}B & 0 \\ 0 & 0\end{bmatrix}\Big\}\\ \text{Dual: }&\ \inf\big\{\mathrm{Tr}(BY) \,:\, \Phi^*(Y) - R = A,\ Y \geq 0,\ R \geq 0\big\}\end{aligned}

The inequality in the dual problem could be replaced by an equality because of the flexibility introduced by the arbitrary positive operator R. Now let {E_a} and {F_a} be families of left and right generalized Choi-Kraus operators for Ψ. Denote the (k,l)-entry of P by p_{kl} and the (i,j)-entry of E_a or F_a by e^{(a)}_{ij} or f^{(a)}_{ij}, respectively. Then

\Psi(P) = \sum_a E_a P F_a^* = \sum_{k,l} p_{kl}\, G_{kl}, \quad \text{where} \quad \big(G_{kl}\big)_{ij} := \sum_a e^{(a)}_{ik}\, \overline{f^{(a)}_{jl}},

so the matrices G_{kl} play the role of the matrices G_i from the standard form, with the pair (k,l) serving as the index i.
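This entrywise reduction is easy to check numerically. The sketch below uses randomly chosen (hypothetical) Choi-Kraus operators, with F_a = E_a so that the map is completely positive for simplicity, and verifies that Ψ(P) is a linear combination of fixed matrices G_{kl} weighted by the entries of P:

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 3, 4

# Hypothetical left and right generalized Choi-Kraus operators for a map
# Psi(X) = sum_a E_a X F_a^*; taking F_a = E_a makes Psi completely positive.
E = [rng.standard_normal((r, n)) for _ in range(3)]
F = list(E)

def psi(X):
    return sum(Ea @ X @ Fa.conj().T for Ea, Fa in zip(E, F))

# The matrices G_{kl}, with (G_{kl})_{ij} = sum_a e^(a)_{ik} * conj(f^(a)_{jl}),
# so that Psi(P) = sum_{k,l} p_{kl} G_{kl} is linear in the entries of P.
Gmats = {(k, l): sum(np.outer(Ea[:, k], Fa[:, l].conj())
                     for Ea, Fa in zip(E, F))
         for k in range(n) for l in range(n)}

P = rng.standard_normal((n, n))
lhs = psi(P)
rhs = sum(P[k, l] * Gmats[(k, l)] for k in range(n) for l in range(n))
assert np.allclose(lhs, rhs)
```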
Finally, defining x := vec(P) and c := vec(A) (where vec refers to the vectorization of a matrix, which stacks each of its columns on top of each other into a column vector) shows that the quantum primal problem is in the form of the standard primal problem. Some simple linear algebra can be used to show that the quantum dual form reduces to the standard dual form as well.
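The vectorization step can likewise be checked numerically. The sketch below (variable names hypothetical) verifies that Tr(AP) = vec(A)*vec(P) for Hermitian A and P, using the column-stacking convention for vec, which is exactly how the objective Tr(AP) becomes the linear objective c*x:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4

def vec(M):
    # Column-stacking vectorization: stack the columns of M into one vector.
    return M.flatten(order="F")

def rand_herm(n):
    G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (G + G.conj().T) / 2

A, P = rand_herm(n), rand_herm(n)

# For Hermitian A, Tr(AP) = vec(A)^* vec(P), i.e. c^* x with
# c = vec(A) and x = vec(P).
lhs = np.trace(A @ P)
rhs = vec(A).conj() @ vec(P)
assert np.isclose(lhs, rhs)
```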



  1. J. Watrous, Semidefinite programs for completely bounded norms. Preprint (2009). arXiv:0901.4709 [quant-ph]
  2. R. Jain, Z. Ji, S. Upadhyay, J. Watrous, QIP = PSPACE. Preprint (2009). arXiv:0907.4737 [quant-ph]
  3. N. Johnston and D. W. Kribs, A family of norms with applications in quantum information theory. Journal of Mathematical Physics 51, 082202 (2010). arXiv:0909.3907 [quant-ph]

Golly 2.1 Released (with Online Archive Support!)

September 18th, 2009

One of the things that has bothered me most about the status of Conway’s Game of Life on the internet (and the main reason that I started the LifeWiki) is the severe fragmentation of information about the game — there are tidbits of knowledge sprinkled all over the place, but it’s quite a task to find a complete collection of patterns of a specific type unless you already know where to look. Fortunately, this fragmentation problem just got knocked around quite a bit by the release of Golly 2.1.

Golly is an open-source, cross-platform application for exploring Conway’s Game of Life (and it is probably currently the most widely-used such program). Version 2.1 was just released this week, and it’s a particularly exciting update from my point of view because it introduces a feature that has been long-needed in the Game of Life world — access to online pattern collections.

The pattern collections that Golly 2.1 can access by default are as follows:

Additionally, Golly can directly download rules from the cellular automata Rule Table Repository and scripts from the Golly Scripts Database. So now all the interested Lifer has to do to find out about (for example) period 51 oscillators is open up the LifeWiki pattern archive, select “oscillators”, and either load a relevant pattern or click on the help link beside it to bring up the corresponding page at LifeWiki. Take that, fragmentation of information.

Golly 2.1's LifeWiki pattern archive

Anyway, other changes have of course been made for the new release of Golly as well — a complete list can be found here. Or just go right ahead and…

Download Golly

No, Primes with Millions of Digits Are Not Useful for Cryptography

September 11th, 2009

About once a year, the internet news fills up for a week or so with talk of how a new largest-known prime has just been found. This largest-known prime has invariably been found by GIMPS, a distributed computing project designed to find large Mersenne primes. Of course, the mainstream media doesn’t like reporting things unless it can give people the illusion of some sort of immediate practical purpose. So what to do when you can’t think of a practical use for some recently-discovered 10-million-digit prime numbers? Make one up, of course! Just say that they have applications in cryptography:

Scientists in the US and Germany have found the two largest prime numbers ever calculated in a discovery which could dramatically increase the effectiveness of cryptographic systems.

The Source of the Myth: RSA Encryption

Like all good myths, the Mersenne prime cryptography myth is so widespread because it is so close to being true. The most widely-used form of encryption on the internet is RSA encryption, which works by multiplying two huge prime numbers together to form an even larger number with exactly two prime factors. Since factoring large numbers is believed to be computationally difficult, reversing this process is hard, which is what gives RSA its reasonably strong security. The thing is, RSA typically uses primes that have a few hundred digits, not a few million digits. Some of the reasons for this are as follows:

  1. You don’t need to use million-digit primes. Considering that even cracking RSA that uses 250-digit primes is an extremely difficult problem that hasn’t been completed yet, and that the problem gets dramatically more difficult as you add more digits, even the most paranoid of people should be comfortable using primes with a couple thousand digits. You might argue that some big government agencies would want RSA to be as secure as possible for their transactions, so they might want to use million-digit primes, but any agency that is that worried about security shouldn’t be using public-key cryptography in the first place.
  2. Using primes with millions of digits actually decreases security. As of this writing, there are 26 known primes with more than one million digits, so to break RSA encryption that makes use of primes with millions of digits you can just test each one of the known million-digit primes to see if they are one of the factors. RSA only works because there are lots of primes with hundreds of digits to choose from (as in billions of billions of billions of them, and then some).
  3. Manipulating numbers with millions of digits is slow. Internet-based public key cryptography systems need to be fast if they’re to be of any practical use, so it doesn’t make much sense to try to use a cryptography system that relies on multiplying and finding residues with numbers that take several megabytes just to store. Just imagine trying to do some online banking when you have to transmit this number along with every other piece of data that you send back to the server.
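To make the contrast concrete, here is a toy RSA sketch in Python with deliberately tiny primes (the textbook values 61 and 53; real keys use primes with hundreds of digits, and padding schemes are omitted entirely):

```python
# Toy RSA with absurdly small primes, just to illustrate the mechanics the
# post describes: the public modulus is a *product* of two primes, and
# security rests on the difficulty of factoring that product.
p, q = 61, 53                  # two small primes (toy values only)
n = p * q                      # public modulus: 3233
phi = (p - 1) * (q - 1)        # Euler's totient of n
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent: e*d = 1 (mod phi)

message = 65
ciphertext = pow(message, e, n)    # encryption: m^e mod n
decrypted = pow(ciphertext, d, n)  # decryption: c^d mod n
assert decrypted == message
```

With 10-million-digit primes, every `pow` call above would involve numbers taking megabytes to store, which is one of the reasons listed above that nobody does this.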

Not all media outlets are so bad as to directly say that the primes found by GIMPS are useful for cryptography, but the vast majority of them imply it at some point throughout the story. Consider the following examples, which are taken from stories about newly-discovered GIMPS primes:

Mersenne primes are important for the theory of numbers and they may help in developing unbreakable codes and message encryptions.

BBC News

Current cryptographic systems rely on the challenge of factoring large primes.


While those tidbits of information are quite true (well, almost — see the comments), when taken in context they are entirely misleading and cause the reader to think that GIMPS primes have applications in today’s cryptography systems. It’s like running a story about a recent plane crash that includes a sentence about how it’s a good idea to wear a helmet when riding a bicycle.

So Why Do We Search for Huge Primes?

The main reason that we search for huge primes is simply for sport. It gives our idle CPU cycles something to do. Non-mathematicians seem to balk at that idea and call it a huge waste of CPU cycles/time, and they’re probably right, but so what? Have you ever played a video game? This is our version of going for a high score. If that doesn’t seem like a particularly good reason to you, perhaps one of the reasons given by GIMPS itself will satisfy you. One thing that you’ll notice though is that cryptography is not mentioned anywhere on that page.

No Similarity-Invariant Matrix Norm

September 4th, 2009

A matrix norm on Mn is said to be weakly unitarily-invariant if conjugating a matrix by a unitary U does not change the norm. That is,

\|X\|=\|UXU^*\|\ \ \forall \, X,U\in M_n \text{ with $U$ unitary.}
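This invariance is easy to verify numerically for specific norms. A small numpy sketch (variable names hypothetical) checking the operator and Frobenius norms against a random unitary:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5

X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Any unitary works; the Q factor of a QR decomposition is unitary.
U, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))

op = lambda M: np.linalg.norm(M, 2)       # operator (spectral) norm
fro = lambda M: np.linalg.norm(M, "fro")  # Frobenius norm

# Conjugating by a unitary leaves both norms unchanged.
assert np.isclose(op(X), op(U @ X @ U.conj().T))
assert np.isclose(fro(X), fro(U @ X @ U.conj().T))
```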

Many commonly-used matrix norms are weakly unitarily-invariant, including the operator norm, Frobenius norm, numerical radius, Ky Fan norms and Schatten p-norms. One might naturally wonder whether there are matrix norms that satisfy the slightly stronger property of similarity-invariance:

\|X\|=\|SXS^{-1}\|\ \ \forall\, X,S\in M_n\text{ with $S$ nonsingular.}

At first glance there doesn’t seem to be any reason why this shouldn’t be possible — you can look for simple examples that cause problems, but you’ll have trouble finding one if you restrict your attention to “nice” (i.e., normal) matrices. Nevertheless, we have the following lemma, which appeared as Exercise IV.4.1 in [1]:

Lemma (No Similarity-Invariant Norm). Let f : Mn → R be a function satisfying f(SXS^{-1}) = f(X) for all X, S ∈ Mn with S invertible. Then f is not a norm.

If you’re interested in the (very short and elementary) proof of this lemma, see the pdf attached below. I would be greatly interested in seeing a proof of this fact that relies less on the structure of the matrices themselves. It seems as though there should be a more general result that characterizes when we can and cannot find a norm on a given vector space that is invariant with respect to some given subgroup, or some such thing. Would anyone care to enlighten me?
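One concrete way to see the obstruction (essentially the standard counterexample, sketched here numerically rather than as a proof): the nilpotent matrix N = [[0,1],[0,0]] is similar to tN for every t ≠ 0 via S = diag(t,1), so a similarity-invariant f must assign the same value to N and tN, which is incompatible with the homogeneity of a norm unless the value assigned to N is 0.

```python
import numpy as np

N = np.array([[0.0, 1.0], [0.0, 0.0]])  # nilpotent, similar to t*N for all t != 0

for t in [1.0, 10.0, 1000.0]:
    S = np.diag([t, 1.0])
    conj = S @ N @ np.linalg.inv(S)
    # S N S^{-1} = t * N, so any norm of the conjugate scales by |t|,
    # yet similarity-invariance would force it to stay constant.
    assert np.allclose(conj, t * N)
    # The spectral norm of S N S^{-1} equals t, which is unbounded in t.
```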

Related Links:


  1. R. Bhatia, Matrix analysis. Volume 169 of Graduate texts in mathematics (1997).