
TQC 2009

I spent the beginning of this week at the 4th Workshop on Theory of Quantum Computation, Communication and Cryptography, and I decided that I might as well post some information and thoughts about it since it was probably the best quantum workshop that I’ve attended thus far (not least because it was only three days long and the Institute for Quantum Computing knows how to do food right). There were fifteen or so talks, and here I’ll describe two of my personal favourites.

Quantum algorithm for online memory checking

Wim van Dam and Qingqing Yuan

In his talk on online memory checking, Wim van Dam described the problem of trying to retrieve data from an unreliable and public server. Because the server is public, an adversary could tamper with its data, rendering the information that you retrieve useless. The online memory checking question of interest is then how to make sure that the requested information has not been tampered with.

One obvious (albeit largely useless) approach to making sure that the public server has not been tampered with is to simply keep a copy of its entire contents on a private server as well. When data is requested from the public server, check that it matches the data on the private server, and you’re good to go. This carries the obvious problem that you have to store two copies of all of the data.
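Just to make the setup concrete, here is a minimal sketch (in Python, with interfaces invented purely for illustration, not taken from the talk) of that trivial checker; the private copy is exactly as large as the public data, which is the whole problem:

```python
class TrivialMemoryChecker:
    """Naive online memory checker: keep a full private copy of the data (so s = n)."""

    def __init__(self, data):
        self.private_copy = list(data)   # trusted private storage, same size as the data
        self.public_server = list(data)  # public storage, which an adversary may tamper with

    def read(self, index):
        value = self.public_server[index]      # one query to the public server (t = 1)
        if value != self.private_copy[index]:  # compare against the trusted copy
            raise ValueError("Public server data has been tampered with!")
        return value
```

Perfectly secure, but pointless: you may as well ignore the public server entirely.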

The obvious improvement to this is to use some sort of hashing function on the contents of the public server, store the results on a private server, and then check to see if the hash of your retrieved data equals the hash of the stored data whenever you access the public server. The question of how large the hash (or whatever encoding of the information you use) must be was answered in 1994 by Blum et al. and in 2005 by Naor and Rothblum: s × t ∈ Ω(n), where n is the size of the data on the public server, s is the size of the data stored on the private server, and t is the number of queries made to the public server.
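As a rough illustration of how the parameters s and t show up (again in Python, again invented for this post rather than taken from the talk), here is a toy checker that privately stores one short digest per block and verifies each read against it. It keeps t = 1, but the private storage still grows linearly with the number of blocks, so it doesn’t get around the Ω(n) bound; real constructions trade the two quantities off against each other.

```python
import hashlib

class HashMemoryChecker:
    """Toy hash-based checker: store one short digest per block on the private side."""

    def __init__(self, blocks):
        self.public_server = list(blocks)  # public, untrusted storage of the n blocks
        # Private storage: a short digest for every block (small per block,
        # but there are n of them, so s still grows with n).
        self.private_digests = [self._digest(block) for block in blocks]

    @staticmethod
    def _digest(block):
        return hashlib.sha256(block.encode()).hexdigest()

    def read(self, index):
        block = self.public_server[index]                       # one query to the public server
        if self._digest(block) != self.private_digests[index]:  # verify against the stored digest
            raise ValueError("Retrieved block fails verification!")
        return block
```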

Wim van Dam and Qingqing Yuan show that if the memory checker is allowed to act in a quantum mechanical manner, then the complexity of the problem is reduced to s × t ∈ O(polylog(n)). Score 1 for quantum algorithms.

A necessary condition for approximate quantum error correction

Cédric Bény

It’s not particularly surprising that this talk spoke to me pretty directly, as it deals with the same approach to the same subarea of the same field of research that I do. That, and my name was mentioned in the talk. Score 10 for Cédric Bény.

The main result of the talk was a generalization of the Knill-Laflamme conditions to the realm of approximate quantum error correction (as opposed to perfect quantum error correction).  The idea was that the Knill-Laflamme conditions can be restated in terms of the complementary channel in a very natural way: a given subspace is correctable for a channel if and only if the complementary channel sends everything supported on that subspace to a single fixed operator (another way of saying this is that the subspace is private for the complementary channel, a connection that was explored by Kretschmann, Kribs and Spekkens in 2007).
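In symbols (the notation here is mine, not necessarily his): for a channel \(\mathcal{N}\) with Kraus operators \(E_k\) and a code subspace with projector \(P\), the Knill-Laflamme conditions and their complementary-channel restatement read

\[
P E_j^\dagger E_k P = \lambda_{jk} P \quad \text{for all } j, k
\qquad\Longleftrightarrow\qquad
\mathcal{N}^C(\rho) = \sigma_0 \quad \text{for all states } \rho \text{ supported on the code subspace,}
\]

where \(\mathcal{N}^C\) is the complementary channel (the channel to the environment) and \(\sigma_0\) is a single fixed operator that does not depend on \(\rho\).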

It turns out that there is a natural way to generalize this formulation of the Knill-Laflamme conditions. The idea is that if the complementary channel, when restricted to a given subspace, is within a small distance of a channel that maps everything to a fixed operator (where distance is measured via the diamond norm), then that subspace is approximately correctable (and he of course quantifies the “small distance” and “approximately”).
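Schematically, with his precise constants suppressed (and the notation again mine rather than from the talk), the approximate condition says

\[
\Big\| \big(\mathcal{N}^C - \mathcal{A}_{\sigma_0}\big)\big|_{\mathcal{C}} \Big\|_\diamond \le \epsilon
\quad\Longrightarrow\quad
\mathcal{C} \text{ is approximately correctable for } \mathcal{N}, \text{ up to an error that vanishes with } \epsilon,
\]

where \(\mathcal{A}_{\sigma_0}\) is the channel that maps every input to the fixed operator \(\sigma_0\), and the restriction is to inputs supported on the subspace \(\mathcal{C}\).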

He also provides a generalization of these results to approximate error correction of subsystem codes, which is a nice bonus.  Finally, it’s interesting to note that the set of operators that are approximately correctable by a given recovery operation does not necessarily form an algebra, in contrast to the perfect error correction situation.
