Statistical Mechanics, A Computational Approach
Semester-long undergraduate course on statistical mechanics, focusing on computational approaches such as Monte Carlo methods and on modern techniques such as tensor networks.
I am a Professor at The University of Sydney and a faculty member in the Quantum Physics Group.
My research interests center around quantum information theory and applications of the theory to a broad range of topics, including condensed matter theory, topologically ordered phases, tensor networks, error correction, quantum optics, precision metrology, and classical statistical inference and machine learning, specifically compressed sensing.
Quantum mechanics was born around 1900 when the loose threads at the frayed edge of classical mechanics were pulled, unveiling an even richer tapestry of beautiful physical phenomena beneath. For many years we stood in awe of the theory; as much as we admired it, we also feared it, with dreaded things like the “uncertainty principle” and “wave function collapse” leading us to believe that quantum mechanics was about limitations on what we could do.
Quantum information theory is the realization that these strange phenomena are a feature, not a bug. We can harness superposition and entanglement to perform tasks that are intractable or even impossible in a world governed by classical mechanics. By putting quantum mechanics in the context of information processing, we are able to view physics and computer science through a single unified lens that greatly enriches our understanding of both. Quantum information science has led to breakthroughs in our understanding of quantum phases of matter, precision measurement, algorithms, complexity, and cryptography, to name but a few. Our ultimate goal is to put the universe on your laptop, and there is every reason to expect more breakthroughs in this young and exciting field.
Each of the research projects below highlights a particular aspect of quantum information theory that my research group and I focus on.
Coming soon...
Currently, all of my research publications can be found on arXiv.org, and you can find the citation data for my work at Google Scholar. The versions available for download here are from the arXiv and can be considered the versions of record.
Noise is the central obstacle to building large-scale quantum computers. Quantum systems with sufficiently uncorrelated and weak noise could be used to solve computational problems that are intractable with current digital computers. There has been substantial progress towards engineering such systems. However, continued progress depends on the ability to characterize quantum noise reliably and efficiently with high precision. Here we introduce a protocol that completely and efficiently characterizes the error rates of quantum noise and we experimentally implement it on a 14-qubit superconducting quantum architecture. The method returns an estimate of the effective noise with relative precision and detects all correlated errors. We show how to construct a quantum noise correlation matrix allowing the easy visualization of all pairwise correlated errors, enabling the discovery of long-range two-qubit correlations in the 14-qubit device that had not previously been detected. These properties of the protocol make it exceptionally well suited for high-precision noise metrology in quantum information processors. Our results are the first implementation of a provably rigorous, full diagnostic protocol capable of being run on state-of-the-art devices and beyond. These results pave the way for noise metrology in next-generation quantum devices, calibration in the presence of crosstalk, bespoke quantum error-correcting codes, and customized fault-tolerance protocols that can greatly reduce the overhead in a quantum computation.
Pauli channels are ubiquitous in quantum information, both as a dominant noise source in many computing architectures and as a practical model for analyzing error correction and fault tolerance. Here we prove several results on efficiently learning Pauli channels, and more generally the Pauli projection of a quantum channel. We first derive a procedure for learning a Pauli channel on $n$ qubits with high probability to a relative precision $\epsilon$ using $O\bigl(\epsilon^{-2} n 2^n\bigr)$ measurements, which is efficient in the Hilbert space dimension. The estimate is robust to state preparation and measurement errors which, together with the relative precision, makes it especially appropriate for applications involving characterization of high-accuracy quantum gates. Next we show that the error rates for an arbitrary set of $s$ Pauli errors can be estimated to a relative precision $\epsilon$ using $O\bigl(\epsilon^{-4} \log s \log(s/\epsilon)\bigr)$ measurements. Finally, we show that when the Pauli channel is given by a Markov field with at most $k$-local correlations, we can learn an entire $n$-qubit Pauli channel to relative precision $\epsilon$ with only $O_k\bigl(\epsilon^{-2} n^2 \log n \bigr)$ measurements, which is efficient in the number of qubits. These results enable a host of applications beyond just characterizing noise in a large-scale quantum system: they pave the way to tailoring quantum codes, optimizing decoders, and customizing fault tolerance procedures to suit a particular device.
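As a minimal illustration of the object being learned above (and not the estimators from the paper), the sketch below implements the standard linear transform between the error rates of a Pauli channel and its Pauli fidelities (eigenvalues); the two are related by a Walsh-Hadamard-type matrix built from the commutation signs of the Pauli operators. All rates and indices are hypothetical toy values.

```python
import numpy as np
from functools import reduce

# Single-qubit commutation sign matrix over the Pauli labels (I, X, Y, Z):
# entry (a, b) is +1 if P_a and P_b commute and -1 if they anticommute.
W1 = np.array([[1,  1,  1,  1],
               [1,  1, -1, -1],
               [1, -1,  1, -1],
               [1, -1, -1,  1]])

def pauli_fidelities(p, n):
    """Map the 4**n Pauli error rates to the Pauli fidelities (eigenvalues)."""
    W = reduce(np.kron, [W1] * n)
    return W @ p

def error_rates(lam, n):
    """Invert the transform; W is 4**n times its own inverse."""
    W = reduce(np.kron, [W1] * n)
    return W @ lam / 4**n

# Hypothetical two-qubit Pauli channel: 5% 'XI' errors and 2% 'ZZ' errors,
# with labels ordered base-4 (I=0, X=1, Y=2, Z=3).
n = 2
p = np.zeros(4**n)
p[0], p[4], p[15] = 0.93, 0.05, 0.02
lam = pauli_fidelities(p, n)
assert np.allclose(error_rates(lam, n), p)
```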
Noise in quantum computing is countered with quantum error correction. Achieving optimal performance will require tailoring codes and decoding algorithms to account for features of realistic noise, such as the common situation where the noise is biased towards dephasing. Here we introduce an efficient high-threshold decoder for a noise-tailored surface code based on minimum-weight perfect matching. The decoder exploits the symmetries of its syndrome under the action of biased noise and generalises to the fault-tolerant regime where measurements are unreliable. Using this decoder, we obtain fault-tolerant thresholds in excess of 6% for a phenomenological noise model in the limit where dephasing dominates. These gains persist even for modest noise biases: we find a threshold of ~5% in an experimentally relevant regime where dephasing errors occur at a rate one hundred times greater than bit-flip errors.
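For readers unfamiliar with matching-based decoding, here is a toy sketch of the generic minimum-weight perfect matching step using networkx, with made-up defect coordinates and plain Manhattan distances; it does not reproduce the symmetry-exploiting weights of the bias-tailored decoder described above.

```python
import itertools
import networkx as nx

def match_defects(defects):
    """Pair up syndrome defects (lattice coordinates) so that the total
    Manhattan distance of the pairing is minimized."""
    g = nx.Graph()
    for (i, u), (j, v) in itertools.combinations(enumerate(defects), 2):
        dist = abs(u[0] - v[0]) + abs(u[1] - v[1])
        g.add_edge(i, j, weight=-dist)  # max_weight_matching maximizes
    matching = nx.max_weight_matching(g, maxcardinality=True)
    return [(defects[i], defects[j]) for i, j in matching]

# Four violated stabilizers on a toy lattice; the matching proposes
# correction paths that connect them in pairs.
print(match_defects([(0, 0), (0, 1), (3, 3), (3, 5)]))
```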
The code capacity threshold for error correction using qubits which exhibit asymmetric or biased noise channels is known to be much higher than with qubits without such structured noise. However, it is unclear how much this improvement persists when realistic circuit level noise is taken into account. This is because implementations of gates which do not commute with the dominant error un-bias the noise channel. In particular, a native bias-preserving controlled-NOT (CX) gate, which is an essential ingredient of stabilizer codes, is not possible in strictly two-level systems. Here we overcome the challenge of implementing a bias-preserving CX gate by using stabilized cat qubits in driven nonlinear oscillators. The physical noise channel of this qubit is biased towards phase-flips, which increase linearly with the size of the cat, while bit-flips are exponentially suppressed with cat size. Remarkably, the error channel of this native CX gate between two such cat qubits is also dominated by phase-flips, while bit-flips remain exponentially suppressed. This CX gate relies on the topological phase that arises from the rotation of the cat qubit in phase space. The availability of bias-preserving CX gates opens a path towards fault-tolerant codes tailored to biased-noise cat qubits with high threshold and low overhead. As an example, we analyze a scheme for concatenated error correction using cat qubits. We find that the availability of CX gates with moderately sized cat qubits, having mean photon number below 10, improves a rigorous lower bound on the fault-tolerance threshold by a factor of two and decreases the overhead in logical Clifford operations by a factor of five. We expect these estimates to improve significantly with further optimization and with direct use of other codes such as topological codes tailored to biased noise.
We show that naturally associated to a SIC (symmetric informationally complete positive operator valued measure, or SIC-POVM) in dimension $d$ there are a number of higher dimensional structures: specifically, a projector and a complex Hadamard matrix in dimension $d^2$, and a pair of ETFs (equiangular tight frames) in dimensions $d(d-1)/2$ and $d(d+1)/2$. We also show that a WH (Weyl-Heisenberg covariant) SIC in odd dimension $d$ is naturally associated to a pair of symmetric tight fusion frames in dimension $d$. We deduce two relaxations of the WH SIC existence problem. We also find a reformulation of the problem in which the number of equations is fewer than the number of variables. Finally, we show that in at least four cases the structures associated to a SIC lie on continuous manifolds of such structures. In two of these cases the manifolds are non-linear. Restricted defect calculations are consistent with this being true for the structures associated to every known SIC with $3 \le d \le 16$, suggesting it may be true for all $d > 2$.
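As background, here is a minimal numerical check of what a Weyl-Heisenberg covariant SIC is in the one dimension where it is elementary, $d=2$: the WH orbit of a known fiducial vector yields $d^2=4$ states with constant pairwise overlap $1/(d+1)$. This only illustrates the basic object; none of the higher-dimensional structures from the paper are constructed here.

```python
import itertools
import numpy as np

# Shift and clock operators generating the Weyl-Heisenberg group in d = 2.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Fiducial with Bloch vector (1,1,1)/sqrt(3), a known d = 2 SIC fiducial.
theta = np.arccos(1 / np.sqrt(3))
psi = np.array([np.cos(theta / 2),
                np.exp(1j * np.pi / 4) * np.sin(theta / 2)])

# The WH orbit of the fiducial gives the four SIC vectors (a tetrahedron).
sic = [np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b) @ psi
       for a in range(2) for b in range(2)]

# SIC condition: |<u|v>|^2 = 1/(d+1) for all distinct pairs.
for u, v in itertools.combinations(sic, 2):
    assert np.isclose(abs(np.vdot(u, v))**2, 1 / 3)
print("all pairwise overlaps |<u|v>|^2 equal 1/3")
```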
Randomized benchmarking and variants thereof, which we collectively call RB+, are widely used to characterize the performance of quantum computers because they are simple, scalable, and robust to state-preparation and measurement errors. However, experimental implementations of RB+ allocate resources suboptimally and make ad hoc assumptions that undermine the reliability of the data analysis. In this paper, we propose a simple modification of RB+ which rigorously eliminates a nuisance parameter and simplifies the experimental design. We then show that, with this modification and specific experimental choices, RB+ efficiently provides estimates of error rates with multiplicative precision. Finally, we provide a simplified rigorous method for obtaining credible regions for parameters of interest and a heuristic approximation for these intervals that performs well in currently relevant regimes.
The surface code, with a simple modification, exhibits ultrahigh error-correction thresholds when the noise is biased toward dephasing. Here, we identify features of the surface code responsible for these ultrahigh thresholds. We provide strong evidence that the threshold error rate of the surface code tracks the hashing bound exactly for all biases and show how to exploit these features to achieve significant improvement in the logical failure rate. First, we consider the infinite bias limit, meaning pure dephasing. We prove that the error threshold of the modified surface code for pure dephasing noise is 50%, i.e., that all qubits are fully dephased, and this threshold can be achieved by a polynomial-time decoding algorithm. We demonstrate that the subthreshold behavior of the code depends critically on the precise shape and boundary conditions of the code. That is, for rectangular surface codes with standard rough or smooth open boundaries, it is controlled by the parameter $g=\mathrm{gcd}(j,k)$, where $j$ and $k$ are dimensions of the surface code lattice. We demonstrate a significant improvement in the logical failure rate with pure dephasing for coprime codes that have $g = 1$ and closely related rotated codes, which have a modified boundary. The effect is dramatic: The same logical failure rate achievable with a square surface code and $n$ physical qubits can be obtained with a coprime or rotated surface code using only $O(\sqrt{n})$ physical qubits. Finally, we use approximate maximum-likelihood decoding to demonstrate that this improvement persists for a general Pauli noise biased toward dephasing. In particular, comparing with a square surface code, we observe a significant improvement in the logical failure rate against biased noise using a rotated surface code with approximately half the number of physical qubits.
Estimating the parameters governing the dynamics of a system is a prerequisite for its optimal control. We present a simple but powerful method that we call STEADY, for STochastic Estimation Algorithm for DYnamical variables, to estimate the Hamiltonian (or Lindbladian) governing a quantum system of a few qubits. STEADY makes efficient use of all measurements and it saturates the information-theoretic limits for such an estimator. Importantly, it is inherently robust to state preparation and measurement errors. It is not limited to evaluating only a fixed set of possible gates; rather, it estimates the complete Hamiltonian of the system. The estimator is applicable to any Hamiltonian that can be written as a piecewise-differentiable function and it can easily include estimators for the non-unitary parameters as well. At the heart of our approach is a stochastic gradient descent over the difference between experimental measurement and model prediction.
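The following toy sketch shows the core idea only: stochastic gradient descent over the misfit between measured data and a model prediction, here for a single hypothetical Rabi-frequency parameter rather than a full Hamiltonian. It is not the STEADY implementation, and like any such estimator it needs a reasonable initial guess because the model is periodic in the frequency.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy system: H = (omega/2) X drives Rabi oscillations with excited-state
# population p(t) = sin^2(omega * t / 2).  Simulate finite-shot data.
omega_true = 1.3
times = rng.uniform(0.1, 5.0, size=4000)
shots = 100
data = rng.binomial(shots, np.sin(omega_true * times / 2)**2) / shots

def model(omega, t):
    return np.sin(omega * t / 2)**2

def grad(omega, t, y):
    # Derivative of the single-point loss (model - y)^2 with respect to omega.
    return 2 * (model(omega, t) - y) * (t / 2) * np.sin(omega * t)

omega, eta = 1.0, 0.02   # initial guess and learning rate
for t, y in zip(times, data):
    omega -= eta * grad(omega, t, y)

print(f"estimated omega = {omega:.3f} (true value {omega_true})")
```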
We give a broad generalisation of the mapping, originally due to Dennis, Kitaev, Landahl and Preskill, from quantum error correcting codes to statistical mechanical models. We show how the mapping can be extended to arbitrary stabiliser or subsystem codes subject to correlated Pauli noise models, including models of fault tolerance. This mapping connects the error correction threshold of the quantum code to a phase transition in the statistical mechanical model. Thus, any existing method for finding phase transitions, such as Monte Carlo simulations, can be applied to approximate the threshold of any such code, without having to perform optimal decoding. By way of example, we numerically study the threshold of the surface code under mildly correlated bit-flip noise, showing that noise with bunching correlations causes the threshold to drop to $p_{\text{corr}}=10.04(6)\%$, from its known iid value of $p_{\text{iid}}=10.917(3)\%$. Complementing this, we show that the mapping also allows us to utilise any algorithm which can calculate/approximate partition functions of classical statistical mechanical models to perform optimal/approximately optimal decoding. Specifically, for 2D codes subject to locally correlated noise, we give a linear-time tensor network-based algorithm for approximate optimal decoding which extends the MPS decoder of Bravyi, Suchara and Vargo.
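To make the mapping concrete, here is a minimal Metropolis sampler for the 2D random-bond Ising model, the statistical mechanical model that arises from this correspondence for the surface code under iid bit-flip noise. The parameters are illustrative; actually locating the threshold requires disorder averaging, the Nishimori condition relating temperature to the error rate, and finite-size scaling.

```python
import numpy as np

rng = np.random.default_rng(1)

# 2D random-bond Ising model: quenched +/-J couplings with antiferromagnetic
# bond density p, sampled with single-spin-flip Metropolis updates.
L, p, beta = 32, 0.10, 1.0
spins = rng.choice([-1, 1], size=(L, L))
Jr = np.where(rng.random((L, L)) < p, -1, 1)   # bonds to the right neighbour
Jd = np.where(rng.random((L, L)) < p, -1, 1)   # bonds to the down neighbour

def local_field(i, j):
    """Sum of J * neighbour_spin around site (i, j), periodic boundaries."""
    return (Jr[i, j] * spins[i, (j + 1) % L]
            + Jr[i, (j - 1) % L] * spins[i, (j - 1) % L]
            + Jd[i, j] * spins[(i + 1) % L, j]
            + Jd[(i - 1) % L, j] * spins[(i - 1) % L, j])

for sweep in range(200):
    for i in range(L):
        for j in range(L):
            dE = 2 * spins[i, j] * local_field(i, j)  # energy cost of a flip
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                spins[i, j] *= -1

print("magnetization per site:", spins.mean())
```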
The performance requirements for fault-tolerant quantum computing are very stringent. Qubits must be manipulated, coupled, and measured with error rates well below 1%. For semiconductor implementations, silicon quantum dot spin qubits have demonstrated average single-qubit Clifford gate fidelities of 99.86% in isotopically enriched 28Si/SiGe devices. While this performance may meet the threshold for fault-tolerant quantum computing in some architectures, it will be necessary to further improve these single-qubit gate fidelities in order to reduce the overhead for quantum computing, especially if these gates are to be used in combination with lower-fidelity two-qubit gates and measurements. Here we show that pulse engineering techniques, widely used in magnetic resonance, improve average Clifford gate fidelities for silicon quantum dot spin qubits to 99.957% experimentally; that is, gate error rates have been improved by a factor of 3 from previous best results for silicon devices, from 0.14% down to 0.043%. By including tomographically complete measurements in randomised benchmarking, we infer higher-order features of the noise, which in turn allow us to theoretically predict that average gate fidelities as high as 99.974% may be achievable with further pulse improvements. These fidelities are ultimately limited by Markovian noise, which we attribute to charge noise emanating from the silicon device structure itself, or the environment.
Quantum computers will require encoding of quantum information to protect them from noise. Fault-tolerant quantum computing architectures illustrate how this might be done, but have not yet shown a conclusive practical advantage. Here we demonstrate that a small but useful error detecting code improves the fidelity of the fault-tolerant gates implemented in the code space as compared to the fidelity of physically equivalent gates implemented on physical qubits. By running a randomized benchmarking protocol in the logical code space of the [[4,2,2]] code, we observe an order of magnitude improvement in the infidelity of the gates, with the two-qubit infidelity dropping from $5.8(2)\%$ to $0.60(3)\%$. Our results are consistent with fault tolerance theory and conclusively demonstrate the benefit of carrying out computation in a code space that can detect errors. Although the fault-tolerant gates offer an impressive improvement in fidelity, the computation as a whole is not below the fault tolerance threshold because of noise associated with state preparation and measurement on this device.
We compare the performance of quantum error correcting codes when memory errors are unitary with the more familiar case of dephasing noise. For a wide range of codes we analytically compute the effective logical channel that results when the error correction steps are performed noiselessly. Our examples include the entire family of repetition codes, the 5-qubit, Steane, Shor, and surface codes. When errors are measured in terms of the diamond norm, we find that the error correction is typically much more effective for unitary errors than for dephasing. We observe this behavior for a wide range of codes after a single level of encoding, and in the thresholds of concatenated codes using hard decoders. We show that this holds with great generality by proving a bound on the performance of any stabilizer code when the noise at the physical level is unitary. By comparing the diamond norm error $D_\diamond'$ of the logical qubit with the same quantity at the physical level $D_\diamond$, we show that $D_\diamond' \le c D_\diamond^d$, where $d$ is the distance of the code and $c$ is a constant that depends on the code but not on the error. This bound compares very favorably to the performance of error correction for dephasing noise and other Pauli channels, where an error correcting code of odd distance $d$ will exhibit a scaling $D_\diamond'\sim D_\diamond^{(d+1)/2}$.
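To see the size of the gap between these two scalings, consider illustrative numbers (hypothetical, not taken from the paper): a distance $d=5$ code with physical diamond-norm error $D_\diamond = 10^{-3}$ gives

```latex
% Hypothetical numbers: d = 5, physical error D_diamond = 10^{-3}.
\begin{align*}
  \text{unitary noise:} \quad D_\diamond' \le c\, D_\diamond^{\,d}
      &= c \times (10^{-3})^{5} = c \times 10^{-15},\\
  \text{Pauli noise:} \quad D_\diamond' \sim D_\diamond^{(d+1)/2}
      &= (10^{-3})^{3} = 10^{-9},
\end{align*}
```

six orders of magnitude apart, up to the code-dependent constant $c$.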
Randomized benchmarking provides a tool for obtaining precise quantitative estimates of the average error rate of a physical quantum channel. Here we define real randomized benchmarking, which enables a separate determination of the average error rate in the real and complex parts of the channel. This provides more fine-grained information about average error rates with approximately the same cost as the standard protocol. The protocol requires only averaging over the real Clifford group, a subgroup of the full Clifford group, and makes use of the fact that it forms an orthogonal 2-design. Our results are therefore especially useful when considering quantum computations on rebits (or real encodings of complex computations), in which case the real Clifford group now plays the role of the complex Clifford group when studying stabilizer circuits.
We show that a simple modification of the surface code can exhibit an enormous gain in the error correction threshold for a noise model in which Pauli Z errors occur more frequently than X or Y errors. Such biased noise, where dephasing dominates, is ubiquitous in many quantum architectures. In the limit of pure dephasing noise we find a threshold of 43.7(1)% using a tensor network decoder proposed by Bravyi, Suchara and Vargo. The threshold remains surprisingly large in the regime of realistic noise bias ratios, for example 28.1(1)% at a bias of 10. The performance is in fact at or near the hashing bound for all values of the bias. The modified surface code still uses only weight-4 stabilizers on a square lattice, but merely requires measuring products of Y instead of Z around the faces, as this doubles the number of useful syndrome bits associated with the dominant Z errors. Our results demonstrate that large efficiency gains can be found by appropriately tailoring codes and decoders to realistic noise models, even under the locality constraints of topological codes.
Algebraic number theory relates SIC-POVMs in dimension $d > 3$ to those in dimension $d(d-2)$. We define a SIC in dimension $d(d-2)$ to be aligned to a SIC in dimension $d$ if and only if the squares of the overlap phases in dimension $d$ appear as a subset of the overlap phases in dimension $d(d-2)$ in a specified way. We give 19 (mostly numerical) examples of aligned SICs. We conjecture that given any SIC in dimension $d$ there exists an aligned SIC in dimension $d(d-2)$. In all our examples the aligned SIC has lower dimensional equiangular tight frames embedded in it. If $d$ is odd, so that a natural tensor product structure exists, we prove that the individual vectors in the aligned SIC have a very special entanglement structure, and the existence of the embedded tight frames follows as a theorem. If $d-2$ is an odd prime number, we prove that a complete set of mutually unbiased bases can be obtained by reducing an aligned SIC to this dimension.
The Kitaev honeycomb model is an approximate topological quantum error correcting code in the same phase as the toric code, but requiring only a 2-body Hamiltonian. As a frustrated spin model, it is well outside the commuting models of topological quantum codes that are typically studied, but its exact solubility makes it more amenable to analysis of effects arising in this noncommutative setting than a generic topologically ordered Hamiltonian. Here we study quantum error correction in the honeycomb model using both analytic and numerical techniques. We first prove explicit exponential bounds on the approximate degeneracy, local indistinguishability, and correctability of the code space. These bounds are tighter than can be achieved using known general properties of topological phases. Our proofs are specialized to the honeycomb model, but some of the methods may nonetheless be of broader interest. Following this, we numerically study noise caused by thermalization processes in the perturbative regime close to the toric code renormalization group fixed point. The appearance of non-topological excitations in this setting has no significant effect on the error correction properties of the honeycomb model in the regimes we study. Although the behavior of this model is found to be qualitatively similar to that of the standard toric code in most regimes, we find numerical evidence of an interesting effect in the low-temperature, finite-size regime where a preferred lattice direction emerges and anyon diffusion is geometrically constrained. We expect this effect to yield an improvement in the scaling of the lifetime with system size as compared to the standard toric code.
Encoding schemes and error-correcting codes are widely used in information technology to improve the reliability of data transmission over real-world communication channels. Quantum information protocols can further enhance the performance in data transmission by encoding a message in quantum states; however, most proposals to date have focused on the regime of a large number of uses of the noisy channel, which is infeasible with current quantum technology. We experimentally demonstrate quantum enhanced communication over an amplitude damping noisy channel with only two uses of the channel per bit and a single entangling gate at the decoder. By simulating the channel using a photonic interferometric setup, we experimentally increase the reliability of transmitting a data bit by greater than 20% for a certain damping range over classically sending the message twice. We show how our methodology can be extended to larger systems by simulating the transmission of a single bit with up to eight uses of the channel and a two-bit message with three uses of the channel, predicting a quantum enhancement in all cases.
We demonstrate that small quantum memories, realized via quantum error correction in multi-qubit devices, can benefit substantially by choosing a quantum code that is tailored to the relevant error model of the system. For a biased noise model, with independent bit and phase flips occurring at different rates, we show that a single code greatly outperforms the well-studied Steane code across the full range of parameters of the noise model, including for unbiased noise. In fact, this tailored code performs almost optimally when compared with 10,000 randomly selected stabilizer codes of comparable experimental complexity. Tailored codes can even outperform the Steane code with realistic experimental noise, and without any increase in the experimental complexity, as we demonstrate by comparison using the observed error model of a recent 7-qubit trapped-ion experiment.
Recently, several intriguing conjectures have been proposed connecting symmetric informationally complete quantum measurements (SIC POVMs, or SICs) and algebraic number theory. These conjectures relate the SICs and their minimal defining algebraic number field. Testing or sharpening these conjectures requires that the SICs are expressed exactly, rather than as numerical approximations. While many exact solutions of SICs have been constructed previously using Gröbner bases, this method has probably been taken as far as is possible with current computer technology. Here we describe a method for converting high-precision numerical solutions into exact ones using an integer relation algorithm in conjunction with the Galois symmetries of a SIC. Using this method we have calculated 69 new exact solutions, including 9 new dimensions where previously only numerical solutions were known, which more than triples the number of known exact solutions. In some cases the solutions require number fields with degrees as high as 12,288. We use these solutions to confirm that they obey the number-theoretic conjectures and we address two questions suggested by the previous work.
Extrapolating physical error rates to logical error rates requires many assumptions and thus can radically under- or overestimate the performance of an error correction implementation. We introduce logical randomized benchmarking, a characterization procedure that directly assesses the performance of a quantum error correction implementation at the logical level, and is motivated by a reduction to the well-studied case of physical randomized benchmarking. We show that our method reliably reports logical performance and can estimate the average probability of correctable and uncorrectable errors for a given code and physical channel.
We give an overview of some remarkable connections between symmetric informationally complete measurements (SIC-POVMs, or SICs) and algebraic number theory, in particular, a connection with Hilbert's 12th problem. The paper is meant to be intelligible to a physicist who has no prior knowledge of either Galois theory or algebraic number theory.
Randomized benchmarking (RB) is an efficient and robust method to characterize gate errors in quantum circuits. Averaging over random sequences of gates leads to estimates of gate errors in terms of the average fidelity that are isolated from the state preparation and measurement errors that plague other methods like channel tomography and direct fidelity estimation. A decisive factor in the feasibility of randomized benchmarking is the number of samples required to obtain rigorous confidence intervals. Previous bounds were either prohibitively loose or required the number of sampled sequences to scale exponentially with the number of qubits in order to obtain a fixed confidence interval at a fixed error rate. Here we show that the number of sampled sequences required for a fixed confidence interval is dramatically smaller than could previously be justified. In particular, we show that the number of sampled sequences required is essentially independent of the number of qubits. We also show that the number of samples required with a single qubit is substantially smaller than previous rigorous results, especially in the limit of large sequence lengths. Our results bring rigorous randomized benchmarking on systems with many qubits into the realm of experimental feasibility.
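For context, the quantity being sampled is the standard RB decay curve. The sketch below simulates finitely sampled sequence data and fits the zeroth-order model $Ap^m + B$ with scipy; all values are hypothetical, and the paper's rigorous confidence analysis is not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

def rb_decay(m, A, p, B):
    """Zeroth-order randomized benchmarking model: A * p**m + B."""
    return A * p**m + B

# Simulated survival probabilities at each sequence length, averaged over
# n_seq random sequences with n_shots measurements each.
lengths = np.array([2, 4, 8, 16, 32, 64, 128, 256])
p_true, A_true, B_true = 0.995, 0.5, 0.5
n_seq, n_shots = 30, 100
survival = np.array([
    rng.binomial(n_shots, rb_decay(m, A_true, p_true, B_true),
                 size=n_seq).mean() / n_shots
    for m in lengths])

(A, p, B), _ = curve_fit(rb_decay, lengths, survival, p0=[0.5, 0.99, 0.5])
# For a single qubit the average gate infidelity is r = (1 - p) / 2.
print(f"p = {p:.5f}, inferred infidelity r = {(1 - p) / 2:.2e}")
```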
We study the fundamental limits on the reliable storage of quantum information in lattices of qubits by deriving tradeoff bounds for approximate quantum error correcting codes. We introduce a notion of local approximate correctability and code distance, and give a number of equivalent formulations thereof, generalizing various exact error-correction criteria. Our tradeoff bounds relate the number of physical qubits $n$, the number of encoded qubits $k$, the code distance $d$, the accuracy parameter $\delta$ that quantifies how well the erasure channel can be reversed, and the locality parameter $\ell$ that specifies the length scale at which the recovery operation can be done. In a regime where the recovery is successful to accuracy $\delta$ that is exponentially small in $\ell$, which is the case for perturbations of local commuting projector codes, our bound reads $kd^{\frac{2}{D-1}} \le O\bigl(n (\log n)^{\frac{2D}{D-1}} \bigr)$ for codes on $D$-dimensional lattices of Euclidean metric. We also find that the code distance of any local approximate code cannot exceed $O\bigl(\ell n^{(D-1)/D}\bigr)$ if $\delta \le O(\ell n^{-1/D})$. As a corollary of our formulation of correctability in terms of logical operator avoidance, we show that the code distance $d$ and the size $\tilde d$ of a minimal region that can support all approximate logical operators satisfies $\tilde d d^{\frac{1}{D-1}}\le O\bigl( n \ell^{\frac{D}{D-1}} \bigr)$, where the logical operators are accurate up to $O\bigl( ( n \delta / d )^{1/2}\bigr)$ in operator norm. Finally, we prove that for two-dimensional systems if logical operators can be approximated by operators supported on constant-width flexible strings, then the dimension of the code space must be bounded. This supports one of the assumptions of algebraic anyon theories, that there exist only finitely many anyon types.
We propose a framework for the systematic and quantitative generalization of Bell's theorem using causal networks. We first consider the multi-objective optimization problem of matching observed data while minimizing the causal effect of nonlocal variables and prove an inequality for the optimal region that both strengthens and generalizes Bell's theorem. To solve the optimization problem (rather than simply bound it), we develop a novel genetic algorithm that treats causal networks as individuals. By applying our algorithm to a photonic Bell experiment, we demonstrate the trade-off between the quantitative relaxation of one or more local causality assumptions and the ability of data to match quantum correlations.
Randomized benchmarking (RB) is an important protocol for robustly characterizing the error rates of quantum gates. The technique is typically applied to the Clifford gates since they form a group that satisfies a convenient technical condition of forming a unitary 2-design, in addition to having a tight connection to fault-tolerant quantum computing and an efficient classical simulation. In order to achieve universal quantum computing one must add at least one additional gate such as the $T$ gate (also known as the $\pi/8$ gate). Here we propose and analyze a simple variation of the standard interleaved RB protocol that can accurately estimate the average fidelity of the $T$ gate while retaining the many advantages of a unitary 2-design and the fidelity guarantees that such a design delivers, as well as the efficient classical simulation property of the Clifford group. Our work complements prior methods that have succeeded in estimating $T$ gate fidelities, but only by relaxing the 2-design constraint and using a more complicated data analysis.
We explore the relationship between approximate symmetries of a gapped Hamiltonian and the structure of its ground space. We start by showing that approximate symmetry operators---unitary operators whose commutators with the Hamiltonian have norms that are sufficiently small---which possess certain mutual commutation relations can be restricted to the ground space with low distortion. We generalize the Stone-von Neumann theorem to matrices that approximately satisfy the canonical (Heisenberg-Weyl-type) commutation relations, and use this to show that approximate symmetry operators can certify the degeneracy of the ground space even though they only approximately form a group. Importantly, the notions of "approximate" and "small" are all independent of the dimension of the ambient Hilbert space, and depend only on the degeneracy in the ground space. Our analysis additionally holds for any gapped band of sufficiently small width in the excited spectrum of the Hamiltonian, and we discuss applications of these ideas to topological quantum phases of matter and topological quantum error correcting codes. Finally, in our analysis we also provide an exponential improvement upon bounds concerning the existence of shared approximate eigenvectors of approximately commuting operators which may be of independent interest.
Well-controlled quantum devices with their increasing system size face a new roadblock hindering further development of quantum technologies: The effort of quantum tomography---the characterization of processes and states within a quantum device---scales unfavorably to the point that state-of-the-art systems can no longer be treated. Quantum compressed sensing mitigates this problem by reconstructing the state from an incomplete set of observables. In this work, we present an experimental implementation of compressed tomography of a seven-qubit system---the largest-scale realization to date---and we introduce new numerical methods in order to scale the reconstruction to this dimension. Originally, compressed sensing has been advocated for density matrices with few non-zero eigenvalues. Here, we argue that the low-rank estimates provided by compressed sensing can be appropriate even in the general case. The reason is that statistical noise often allows only for the leading eigenvectors to be reliably reconstructed: We find that the remaining eigenvectors behave in a way consistent with a random matrix model that carries no information about the true state. We report a reconstruction of quantum states from a topological color code of seven qubits, prepared in a trapped ion architecture, based on tomographically incomplete data involving 127 Pauli basis measurement settings only, repeated 100 times each.
We introduce a fast and accurate heuristic for adaptive tomography that addresses many of the limitations of prior methods. Previous approaches were either too computationally intensive or tailored to handle special cases such as single qubits or pure states. By contrast, our approach combines the efficiency of online optimization with generally applicable and well-motivated data-processing techniques. We numerically demonstrate these advantages in several scenarios including mixed states, higher-dimensional systems, and restricted measurements.
Let $K$ be a real quadratic field. For certain $K$ with sufficiently small discriminant we produce explicit unit generators for specific ray class fields of $K$ using a numerical method that arose in the study of complete sets of equiangular lines in $\CC^d$ (known in quantum information as symmetric informationally complete measurements or SICs). The construction in low dimensions suggests a general recipe for producing unit generators in infinite towers of ray class fields above arbitrary $K$ and we summarise this in a conjecture. Such explicit generators are notoriously difficult to find, so this recipe may be of some interest.
In a forthcoming paper we shall publish promising results of numerical comparisons between the logarithms of these canonical units and the values of $L$-functions associated to the extensions, following the programme laid out in the Stark Conjectures.
We introduce a numerical method for identifying topological order in two-dimensional models based on one-dimensional bulk operators. The idea is to identify approximate symmetries supported on thin strips through the bulk that behave as string operators associated to an anyon model. We can express these ribbon operators in matrix product form and define a cost function that allows us to efficiently optimize over this ansatz class. We test this method on spin models with abelian topological order by finding ribbon operators for $\ZZ_d$ quantum double models with local fields and Ising-like terms. In addition, we identify ribbons in the abelian phase of Kitaev's honeycomb model which serve as the logical operators of the encoded qubit for the quantum error-correcting code. We further identify the topologically encoded qubit in the quantum compass model, and show that despite this qubit, the model does not support topological order. Finally, we discuss how the method supports generalizations for detecting nonabelian topological order.
Achieving error rates that meet or exceed the fault-tolerance threshold is a central goal for quantum computing experiments, and measuring these error rates using randomized benchmarking is now routine. However, direct comparison between measured error rates and thresholds is complicated by the fact that benchmarking estimates average error rates while thresholds reflect worst-case behavior when a gate is used as part of a large computation. These two measures of error can differ by orders of magnitude in the regime of interest. Here we facilitate comparison between the experimentally accessible average error rates and the worst-case quantities that arise in current threshold theorems by deriving relations between the two for a variety of physical noise sources. Our results indicate that it is coherent errors that lead to an enormous mismatch between average and worst case, and we quantify how well these errors must be controlled to ensure fair comparison between average error probabilities and fault-tolerance thresholds.
We use a simple real-space renormalization group approach to investigate the critical behavior of the quantum Ashkin-Teller model, a one-dimensional quantum spin chain possessing a line of criticality along which critical exponents vary continuously. This approach, which is based on exploiting the on-site symmetry of the model, has been shown to be surprisingly accurate for predicting some aspects of the critical behavior of the Ising model. Our investigation explores this approach in more generality, in a model where the critical behavior has a richer structure but which reduces to the simpler Ising case at a special point. We demonstrate that the correlation length critical exponent as predicted from this real-space renormalization group approach is in broad agreement with the corresponding results from conformal field theory along the line of criticality. Near the Ising special point, the error in the estimated critical exponent from this simple method is comparable to that of numerically intensive simulations based on much more sophisticated methods, although the accuracy decreases away from the decoupled Ising model point.
Classically simulating the dynamics of anyonic excitations in two-dimensional quantum systems is likely intractable in general because such dynamics are sufficient to implement universal quantum computation. However, processes of interest for the study of quantum error correction in anyon systems are typically drawn from a restricted class that displays significant structure over a wide range of system parameters. We exploit this structure to classically simulate, and thereby demonstrate the success of, an error-correction protocol for a quantum memory based on the universal Fibonacci anyon model. We numerically simulate a phenomenological model of the system and noise processes on lattice sizes of up to 128x128 sites, and find a lower bound on the error-correction threshold of approximately 12.5%, which is comparable to those previously known for abelian and (non-universal) nonabelian anyon models.
Randomized benchmarking is an important statistical tool for characterizing the performance of high-fidelity quantum gates. Here we study randomized benchmarking in the presence of quasi-static (i.e. low-frequency) errors. We contrast the resulting fidelity with that under the conventional assumption of Markovian errors. We treat these limiting cases analytically by mapping the averaging process to a random walk in "Pauli space". In the quasi-static case, we find a broad, highly skewed distribution of fidelities, while Markovian errors produce a narrow, approximately Gaussian distribution of fidelities. We use the filter-transfer function formalism to reveal the underlying reason for these differences in terms of effective coherent averaging of correlated errors in certain random sequences. Large skew in the distribution towards high-fidelity outcomes -- consistent with existing experimental data -- highlights potential finite-sampling pitfalls when deploying randomized benchmarking. Moreover, these results demonstrate general challenges in extracting useful single-gate fidelities from randomized benchmarking measurements with correlated errors.
Trapped atomic ion qubits feature desirable properties for use in a quantum computer, such as long coherence times, high qubit measurement fidelity, and universal logic gates. The quality of quantum logic gate operations on trapped ion qubits has been limited by the stability of the control fields at the ion location used to implement the gate operations. For this reason, the logic gates utilizing microwave fields have shown gate fidelities several orders of magnitude better than those using laser fields. Here, we demonstrate low-error single-qubit gates performed using stimulated Raman transitions on an ion qubit trapped in a microfabricated chip trap. Gate errors are measured using a randomized benchmarking protocol, where amplitude error in the control beam is compensated using various pulse sequence techniques. Using B2 compensation, we demonstrate single-qubit gates with an average error per randomized Clifford group gate of $3.6(3)\times 10^{-4}$. We also show that compact palindromic pulse compensation sequences (PDn) compensate for amplitude errors as designed.
Noise mechanisms in quantum systems can be broadly characterized as either coherent (i.e., unitary) or incoherent. For a given fixed average error rate, coherent noise mechanisms will generally lead to a larger worst-case error than incoherent noise. We show that the coherence of a noise source can be quantified by the unitarity, which we relate to the change in purity averaged over input pure states. We then show that the unitarity can be estimated using a protocol based on randomized benchmarking that is efficient and robust to state-preparation and measurement errors.
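A minimal sketch of the quantity in question, assuming the standard characterization of the unitarity as the normalized squared 2-norm of the unital block of the Pauli transfer matrix: a purely unitary error gives $u=1$, while depolarizing noise gives $u<1$. The channels below are hypothetical single-qubit examples, not the benchmarking protocol itself.

```python
import numpy as np

# Normalized single-qubit Pauli basis, B_i = P_i / sqrt(2).
paulis = [np.eye(2), np.array([[0, 1], [1, 0]]),
          np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]
basis = [P / np.sqrt(2) for P in paulis]

def ptm(kraus):
    """Pauli transfer matrix R_ij = Tr[B_i E(B_j)] of a channel E."""
    R = np.zeros((4, 4))
    for j, Bj in enumerate(basis):
        out = sum(K @ Bj @ K.conj().T for K in kraus)
        for i, Bi in enumerate(basis):
            R[i, j] = np.real(np.trace(Bi.conj().T @ out))
    return R

def unitarity(kraus):
    """u = Tr[Eu^T Eu] / (d^2 - 1), with Eu the 3x3 unital block."""
    Eu = ptm(kraus)[1:, 1:]
    return np.trace(Eu.T @ Eu) / 3

theta, p = 0.1, 0.01
rotation = [np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                      [-1j * np.sin(theta / 2), np.cos(theta / 2)]])]
depolarizing = ([np.sqrt(1 - p) * np.eye(2)]
                + [np.sqrt(p / 3) * P for P in paulis[1:]])
print(unitarity(rotation), unitarity(depolarizing))  # 1.0 vs (1 - 4p/3)^2
```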
Given a gapped Hamiltonian of a spin chain, we give a polynomial-time algorithm for finding the degenerate ground space projector. The output is an orthonormal set of matrix product states that approximate the true ground space projector up to an inverse polynomial error in any Schatten norm, with a runtime exponential in the degeneracy. Our algorithm is an extension of the recent algorithm of Landau, Vazirani, and Vidick for the nondegenerate case, and it includes the recent improvements due to Huang. The main new idea is to incorporate the local distinguishability of ground states on the half-chain to ensure that the algorithm returns a complete set of global ground states.
We show that non-exponential fidelity decays in randomized benchmarking experiments on quantum dot qubits are consistent with numerical simulations that incorporate low-frequency noise. By expanding standard randomized benchmarking analysis to this experimental regime, we find that such non-exponential decays are better modeled by multiple exponential decay rates, leading to an instantaneous control fidelity for isotopically purified silicon MOS quantum dot qubits which can be as high as 99.9% when low-frequency noise conditions and system calibrations are favorable. These advances in qubit characterization and validation methods underpin the considerable prospects for silicon as a qubit platform for fault-tolerant quantum computation.
We describe a general method for turning quantum circuits into sparse quantum subsystem codes. Using this prescription, we can map an arbitrary stabilizer code into a new subsystem code with the same distance and number of encoded qubits but where all the generators have constant weight, at the cost of adding some ancilla qubits. With an additional overhead of ancilla qubits, the new code can also be made spatially local.
Applying our construction to certain concatenated stabilizer codes yields families of subsystem codes with constant-weight generators and with minimum distance $d = n^{1-\varepsilon}$, where $\varepsilon = O(1/\sqrt{\log n})$. For spatially local codes in $D$ dimensions we nearly saturate a bound due to Bravyi and Terhal and achieve $d = n^{1-\varepsilon-1/D}$. Previously the best code distance achievable with constant-weight generators in any dimension, due to Freedman, Meyer and Luo, was $O(\sqrt{n\log n})$ for a stabilizer code.
Topological quantum computing promises error-resistant quantum computation without active error correction. However, there is a worry that during the process of executing quantum gates by braiding anyons around each other, extra anyonic excitations will be created that will disorder the encoded quantum information. Here we explore this question in detail by studying adiabatic code deformations on Hamiltonians based on topological codes, notably Kitaev's surface codes and the more recently discovered color codes. We develop protocols that enable universal quantum computing by adiabatic evolution in a way that keeps the energy gap of the system constant with respect to the computation size and introduces only simple local Hamiltonian interactions. This allows one to perform holonomic quantum computing with these topological quantum computing systems. The tools we develop allow one to go beyond numerical simulations and understand these processes analytically.
Randomized benchmarking is a promising tool for characterizing the noise in experimental implementations of quantum systems. In this paper, we prove that the estimates produced by randomized benchmarking (both standard and interleaved) for arbitrary Markovian noise sources are remarkably precise by showing that the variance due to sampling random gate sequences is small. We discuss how to choose experimental parameters, in particular the number and lengths of random sequences, in order to characterize average gate errors with rigorous confidence bounds. We also show that randomized benchmarking can be used to reliably characterize time-dependent Markovian noise (e.g., when noise is due to a magnetic field with fluctuating strength). Moreover, we identify a necessary property for time-dependent noise that is violated by some sources of non-Markovian noise, which provides a test for non-Markovianity.
Bender et al. have developed PT-symmetric quantum theory as an extension of quantum theory to non-Hermitian Hamiltonians. We show that when this model has a local PT symmetry acting on composite systems it violates the non-signaling principle of relativity. Since the case of global PT symmetry is known to reduce to standard quantum mechanics, this shows that the PT-symmetric theory is either a trivial extension or likely false as a fundamental theory.
We consider two-dimensional lattice models that support Ising anyonic excitations and are coupled to a thermal bath. We propose a phenomenological model for the resulting short-time dynamics that includes pair-creation, hopping, braiding, and fusion of anyons. By explicitly constructing topological quantum error-correcting codes for this class of system, we use our thermalization model to estimate the lifetime of the quantum information stored in the encoded spaces. To decode and correct errors in these codes, we adapt several existing topological decoders to the non-Abelian setting. We perform large-scale numerical simulations of these two-dimensional Ising anyon systems and find that the thresholds of these models range from 13% to 25%. To our knowledge, these are the first numerical threshold estimates for quantum codes without explicit additive structure.
Quantum simulation is a promising near term application for mesoscale quantum information processors, with the potential to solve computationally intractable problems at the scale of just a few dozen interacting quantum systems. Recent experiments in a range of technical platforms have demonstrated the basic functionality of quantum simulation applied to quantum magnetism, quantum phase transitions, and relativistic quantum mechanics. In all cases, the underlying hardware platforms restrict the achievable inter-particle interaction, forming a serious constraint on the ability to realize a versatile, programmable, quantum simulator. In this work, we address this problem by developing novel sequences of unitary operations that engineer desired effective Hamiltonians in the time domain. The result is a hybrid programmable analog simulator permitting a broad class of interacting spin-lattice models to be generated starting only with an arbitrary native inter-particle interaction and single-qubit addressing. Building on previous work proving that universal simulation is possible using both entangling gates and single-qubit unitaries, we show how determining the relevant hardware "program" of unitary pulses to implement an arbitrary spin Hamiltonian on such a simulator can be formulated as a linear program that runs in polynomial time and scales efficiently in hardware resources. Our analysis extends from circuit model quantum information to adiabatic quantum evolutions, where our approach allows for the creation of non-native ground state solutions to a computation.
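A toy version of the linear-programming step, with a made-up coefficient matrix standing in for the native interaction as toggled by different single-qubit pulse choices: each column lists the coupling coefficients produced by one pulse setting, and we solve for nonnegative evolution times that reproduce a target coupling vector. This sketches the formulation only, not the paper's actual construction.

```python
import numpy as np
from scipy.optimize import linprog

# Columns: coupling coefficients of the native interaction under four
# hypothetical pulse togglings.  Rows: the couplings we want to engineer.
A = np.array([[1.0,  1.0, 0.0,  0.5],
              [1.0, -1.0, 1.0,  0.0],
              [0.0,  0.0, 1.0, -0.5]])
target = np.array([0.5, 0.2, 0.3])

# Nonnegative evolution times t with A @ t = target, minimizing total time.
res = linprog(c=np.ones(A.shape[1]), A_eq=A, b_eq=target,
              bounds=[(0, None)] * A.shape[1])
print("feasible:", res.success, "evolution times:", res.x)
```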
We describe a many-body quantum system which can be made to quantum compute by the adiabatic application of a large applied field to the system. Prior to the application of the field quantum information is localized on one boundary of the device, and after the application of the field this information has propagated to the other side of the device with a quantum circuit applied to the information. The applied circuit depends on the many-body Hamiltonian of the material, and the computation takes place in a degenerate ground space with symmetry-protected topological order. Such adiabatic quantum transistors are universal adiabatic quantum computing devices which have the added benefit of being modular. Here we describe this model, provide arguments for why it is an efficient model of quantum computing, and examine these many-body systems in the presence of a noisy environment.
We provide two simple counterexamples to Kalai's Conjecture C and discuss our perspective on the implications for the prospect of large-scale fault-tolerant quantum computation.
Intuitively, if a density operator has small rank, then it should be easier to estimate from experimental data, since in this case only a few eigenvectors need to be learned. We prove two complementary results that confirm this intuition. First, we show that a low-rank density matrix can be estimated using fewer copies of the state, i.e., the sample complexity of tomography decreases with the rank. Second, we show that unknown low-rank states can be reconstructed from an incomplete set of measurements, using techniques from compressed sensing and matrix completion. These techniques use simple Pauli measurements, and their output can be certified without making any assumptions about the unknown state.
We give a new theoretical analysis of compressed tomography, based on the restricted isometry property (RIP) for low-rank matrices. Using these tools, we obtain near-optimal error bounds, for the realistic situation where the data contains noise due to finite statistics, and the density matrix is full-rank with decaying eigenvalues. We also obtain upper bounds on the sample complexity of compressed tomography, and almost-matching lower bounds on the sample complexity of any procedure using adaptive sequences of Pauli measurements.
Using numerical simulations, we compare the performance of two compressed sensing estimators -- the matrix Dantzig selector and the matrix Lasso -- with standard maximum-likelihood estimation (MLE). We find that, given comparable experimental resources, the compressed sensing estimators consistently produce higher-fidelity state reconstructions than MLE. In addition, the use of an incomplete set of measurements leads to faster classical processing with no loss of accuracy.
Finally, we show how to certify the accuracy of a low-rank estimate using direct fidelity estimation, and we describe a method for compressed quantum process tomography that works for processes with small Kraus rank and requires only Pauli eigenstate preparations and Pauli measurements.
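As a sketch of this style of estimator (not the exact estimators analyzed above), the following hypothetical two-qubit example uses cvxpy to minimize the trace, which equals the trace norm on positive semidefinite matrices, subject to consistency with a random subset of Pauli expectation values.

```python
import numpy as np
import cvxpy as cp
from functools import reduce
from itertools import product

rng = np.random.default_rng(3)

# All two-qubit Pauli operators.
s = [np.eye(2), np.array([[0, 1], [1, 0]]),
     np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]
paulis = [reduce(np.kron, pair) for pair in product(s, repeat=2)]

# Unknown rank-1 state and a random subset of noiseless Pauli expectations.
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
rho_true = np.outer(psi, psi.conj())
settings = rng.choice(len(paulis), size=8, replace=False)
y = [np.real(np.trace(paulis[k] @ rho_true)) for k in settings]

# Trace minimization over PSD matrices matching the measured data.
X = cp.Variable((4, 4), hermitian=True)
constraints = [X >> 0] + [cp.real(cp.trace(paulis[k] @ X)) == y[i]
                          for i, k in enumerate(settings)]
cp.Problem(cp.Minimize(cp.real(cp.trace(X))), constraints).solve()
print("fidelity of estimate:", np.real(psi.conj() @ X.value @ psi))
```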
We study the computational difficulty of computing the ground state degeneracy and the density of states for local Hamiltonians. We show that the difficulty of both problems is exactly captured by a class which we call #BQP, which is the counting version of the quantum complexity class QMA. We show that #BQP is not harder than its classical counting counterpart #P, which in turn implies that computing the ground state degeneracy or the density of states for classical Hamiltonians is just as hard as it is for quantum Hamiltonians.
We present a procedure to obtain the Hamiltonians of the toric code and Kitaev quantum double models as the low-energy limits of entirely two-body Hamiltonians. Our construction makes use of a new type of perturbation gadget based on error-detecting subsystem codes. The procedure is motivated by a PEPS description of the target models, and reproduces the target models' behavior using only couplings which are natural in terms of the original Hamiltonians. This allows our construction to exactly capture the symmetries of the target models.
We describe a simple method for certifying that an experimental device prepares a desired quantum state $\rho$. Our method is applicable to any pure state $\rho$, and it provides an estimate of the fidelity between $\rho$ and the actual (arbitrary) state in the lab, up to a constant additive error. The method requires measuring only a constant number of Pauli expectation values, selected at random according to an importance-weighting rule. Our method is faster than full tomography by a factor of $d$, the dimension of the state space, and extends easily and naturally to quantum channels.
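A minimal idealized sketch of the sampling rule (exact expectation values stand in for the finite-shot estimates one would use in the lab, and the two-qubit states are hypothetical): Pauli operators are drawn with probability proportional to the squared characteristic function of the target state, and the ratio estimator averages to the fidelity.

```python
import numpy as np
from functools import reduce
from itertools import product

rng = np.random.default_rng(4)

s = [np.eye(2), np.array([[0, 1], [1, 0]]),
     np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]
paulis = [reduce(np.kron, pair) for pair in product(s, repeat=2)]
d = 4

def chi(rho):
    """Characteristic function chi(k) = Tr(rho W_k) / sqrt(d)."""
    return np.array([np.real(np.trace(W @ rho)) for W in paulis]) / np.sqrt(d)

# Target: a Bell state.  Lab state: the same state with depolarizing noise.
psi = np.zeros(4); psi[0] = psi[3] = 1 / np.sqrt(2)
rho = np.outer(psi, psi)
sigma = 0.9 * rho + 0.1 * np.eye(4) / 4

chi_rho, chi_sigma = chi(rho), chi(sigma)
# Importance weighting: Pr(k) = chi_rho(k)^2, which sums to 1 for pure rho.
ks = rng.choice(len(paulis), size=200, p=chi_rho**2)
estimates = chi_sigma[ks] / chi_rho[ks]
print("DFE estimate:", estimates.mean(),
      "exact fidelity:", np.real(psi @ sigma @ psi))
```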
We provide a unified graphical calculus for all Gaussian pure states, including graph transformation rules for all local and semi-local Gaussian unitary operations, as well as local quadrature measurements. We then use this graphical calculus to analyze continuous-variable (CV) cluster states, the essential resource for one-way quantum computing with CV systems. Current graphical approaches to CV cluster states are only valid in the unphysical limit of infinite squeezing, and the associated graph transformation rules only apply when the initial and final states are of this form. Our formalism applies to all Gaussian pure states and subsumes these rules in a natural way. In addition, the term "CV graph state" currently has several inequivalent definitions in use. Using this formalism we provide a single unifying definition that encompasses all of them. We provide many examples of how the formalism may be used in the context of CV cluster states: defining the "closest" CV cluster state to a given Gaussian pure state and quantifying the error in the approximation due to finite squeezing; analyzing the optimality of certain methods of generating CV cluster states; drawing connections between this new graphical formalism and bosonic Hamiltonians with Gaussian ground states, including those useful for CV one-way quantum computing; and deriving a graphical measure of bipartite entanglement for certain classes of CV cluster states. We mention other possible applications of this formalism and conclude with a brief note on fault tolerance in CV one-way quantum computing.
Examples of symmetric informationally complete positive operator valued measures (SIC-POVMs) have been constructed in every dimension less than or equal to 67. However, it remains an open question whether they exist in all finite dimensions. A SIC-POVM is usually thought of as a highly symmetric structure in quantum state space. However, its elements can equally well be regarded as a basis for the Lie algebra $\mathrm{gl}(d,\CC)$. In this paper we examine the resulting structure constants, which are calculated from the traces of the triple products of the SIC-POVM elements and which, it turns out, characterize the SIC-POVM up to unitary equivalence. We show that the structure constants have numerous remarkable properties. In particular we show that the existence of a SIC-POVM in dimension d is equivalent to the existence of a certain structure in the adjoint representation of $\mathrm{gl}(d,\CC)$. We hope that transforming the problem in this way, from a question about quantum state space to a question about Lie algebras, may help to make the existence problem tractable.
Quantum state tomography, the ability to deduce the state of a quantum system from measured data, is the gold standard for verification and benchmarking of quantum devices. It has been realized in systems with few components, but for larger systems it becomes infeasible because the number of quantum measurements and the amount of computation required to process them grow exponentially in the system size. Here we show that we can do exponentially better than direct state tomography for a wide range of quantum states, in particular those that are well approximated by a matrix product state ansatz. We present two schemes for tomography in 1-D quantum systems and touch on generalizations. One scheme requires unitary operations on a constant number of subsystems, while the other requires only local measurements together with more elaborate post-processing. Both schemes rely only on a linear number of experimental operations and classical post-processing that is polynomial in the system size. A further strength of the methods is that the accuracy of the reconstructed states can be rigorously certified without any a priori assumptions.
We establish methods for quantum state tomography based on compressed sensing. These methods are specialized for quantum states that are fairly pure, and they offer a significant performance improvement on large quantum systems. In particular, they are able to reconstruct an unknown density matrix of dimension $d$ and rank $r$ using $O(rd \log^2 d)$ measurement settings, compared to standard methods that require $d^2$ settings. Our methods have several features that make them amenable to experimental implementation: they require only simple Pauli measurements, use fast convex optimization, are stable against noise, and can be applied to states that are only approximately low-rank. The acquired data can be used to certify that the state is indeed close to pure, so no a priori assumptions are needed. We present both theoretical bounds and numerical simulations.
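Here is a schematic cvxpy sketch of the recovery step (an illustration under simplifying assumptions, not the paper's exact estimator): we sample a modest random subset of Pauli expectation values of a rank-one state and fit a physical density matrix by constrained least squares; at this toy scale, positivity and the unit-trace constraint do the work of the trace-norm regularization.

```python
import itertools
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, d = 4, 16

# n-qubit Pauli basis
I2 = np.eye(2); X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1., -1.])
paulis = []
for combo in itertools.product([I2, X, Y, Z], repeat=n):
    P = combo[0]
    for Q in combo[1:]:
        P = np.kron(P, Q)
    paulis.append(P)

# True state: a random pure (rank-1) density matrix
v = rng.normal(size=d) + 1j * rng.normal(size=d)
v /= np.linalg.norm(v)
rho_true = np.outer(v, v.conj())

# Measure m << d^2 randomly chosen Pauli expectation values
m = 100
idx = rng.choice(len(paulis), size=m, replace=False)
b = np.array([np.trace(rho_true @ paulis[k]).real for k in idx])

rho = cp.Variable((d, d), hermitian=True)
resid = cp.sum_squares(
    cp.hstack([cp.real(cp.trace(rho @ paulis[k])) for k in idx]) - b)
prob = cp.Problem(cp.Minimize(resid),
                  [rho >> 0, cp.real(cp.trace(rho)) == 1])
prob.solve()
print("fidelity ~", np.real(v.conj() @ rho.value @ v))   # close to 1
```

The asymptotic savings over the $d^2$ settings of standard tomography only become visible at larger $d$; the point here is just the shape of the convex program.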
Models of quantum computation are important because they change the physical requirements for achieving universal quantum computation (QC). For example, one-way QC requires the preparation of an entangled "cluster" state followed by adaptive measurement on this state, a set of requirements which is different from the standard quantum circuit model. Here we introduce a model based on one-way QC but without measurements (except for the final readout), instead using adiabatic deformation of a Hamiltonian whose initial ground state is the cluster state. This opens the possibility to use the copious results from one-way QC to build more feasible adiabatic schemes.
We study the possibility of performing quantum state reconstruction from a measurement record that is obtained as a sequence of expectation values of a Hermitian operator evolving under repeated application of a single random unitary map, $U_0$. We show that while this single-parameter orbit in operator space is not informationally complete, it can be used to yield surprisingly high-fidelity reconstruction. For a $d$-dimensional Hilbert space with the initial observable in $\mathrm{su}(d)$, the measurement record lacks information about a matrix subspace of dimension $\ge d-2$ out of the total dimension $d^2-1$. We determine the conditions on $U_0$ such that the bound is saturated, and show they are achieved by almost all pseudorandom unitary matrices. When we further impose the constraint that the physical density matrix must be positive, we obtain even higher fidelity than that predicted from the missing subspace. With prior knowledge that the state is pure, the reconstruction will be perfect (in the limit of vanishing noise), and for arbitrary mixed states the fidelity is over 0.96, even for small $d$, reaching $F > 0.99$ for $d > 9$. We also study the implementation of this protocol based on the relationship between random matrices and quantum chaos. We show that the Floquet operator of the quantum kicked top provides a means of generating the required type of measurement record, with implications for the relationship between quantum chaos and information gain.
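The rank deficiency of the orbit is easy to check numerically. This sketch (a toy version of the analysis, using scipy's `unitary_group` as the pseudorandom unitary) stacks the vectorized Heisenberg-picture observables $O_t = U_0^{\dagger t} O\, U_0^t$ and computes the dimension of the subspace they span; for a generic $U_0$ the missing subspace should saturate the bound at dimension $d-2$.

```python
import numpy as np
from scipy.stats import unitary_group

rng = np.random.default_rng(2)
d = 5
U0 = unitary_group.rvs(d, random_state=rng)

# Initial observable in su(d): a random traceless Hermitian matrix
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
O = (A + A.conj().T) / 2
O -= np.trace(O) / d * np.eye(d)

# Stack vec(O_t) over the orbit O_t = U0^{t dagger} O U0^t
rows, Ot = [], O
for t in range(3 * d * d):
    rows.append(Ot.reshape(-1))
    Ot = U0.conj().T @ Ot @ U0
M = np.array(rows)

rank = np.linalg.matrix_rank(M, tol=1e-9)
print("orbit spans", rank, "of", d*d - 1, "traceless directions;",
      "missing dimension:", d*d - 1 - rank)   # expect d - 2 = 3
```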
We generalize the topological entanglement entropy to a family of topological Rényi entropies parametrized by a parameter $\alpha$, in an attempt to find new invariants for distinguishing topologically ordered phases. We show that, surprisingly, all topological Rényi entropies are the same, independent of $\alpha$, for all non-chiral topological phases. This independence shows that topologically ordered ground-state wavefunctions have reduced density matrices with a certain simple structure, and that no additional universal information can be extracted from the entanglement spectrum.
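For reference, the quantities at stake are the Rényi entropies of a reduced density matrix and (in the Kitaev–Preskill convention, with $A$, $B$, $C$ mutually adjacent regions) the combination that isolates the universal constant term:

$$ S_\alpha(\rho) = \frac{1}{1-\alpha}\,\log \mathrm{Tr}\,\rho^\alpha, \qquad S_{\mathrm{topo}} = S_A + S_B + S_C - S_{AB} - S_{BC} - S_{CA} + S_{ABC}. $$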
The difficulty in producing precisely timed and controlled quantum gates is a significant source of error in many physical implementations of quantum computers. Here we introduce a simple universal primitive, adiabatic gate teleportation, which is robust to timing errors and many control errors and maintains a constant energy gap throughout the computation above a degenerate ground state space. Notably this construction allows for geometric robustness based upon the control of two independent qubit interactions. Further, our piecewise adiabatic evolution easily relates to the quantum circuit model, enabling the use of standard methods from fault-tolerance theory for establishing thresholds.
In the one-way model of quantum computing, quantum algorithms are implemented using only measurements on an entangled initial state. Much of the hard work is done up-front when creating this universal resource, known as a cluster state, on which the measurements are made. Here we detail a new proposal for a scalable method of creating cluster states using only a single multimode optical parametric oscillator (OPO). The method generates a continuous-variable cluster state that is universal for quantum computation and encoded in the quadratures of the optical frequency comb of the OPO. This work expands on the presentation in Phys. Rev. Lett. 101, 130501 (2008).
It is often argued that entanglement is at the root of the speedup for quantum compared to classical computation, and that one needs a sufficient amount of entanglement for this speedup to be manifest. In measurement-based quantum computing (MBQC), the need for a highly entangled initial state is particularly obvious. Defying this intuition, we show that quantum states can be too entangled to be useful for the purpose of computation. We prove that this phenomenon occurs for a dramatic majority of all states: the fraction of useful $n$-qubit pure states is less than $\exp(-n^2)$. Computational universality is hence a rare property in quantum states. This work highlights a new aspect of the question concerning the role entanglement plays in quantum computational speed-ups. The statements remain true if one allows for certain forms of post-selection, and they also cover the notion of CQ-universality. We identify scale-invariant states resulting from a MERA construction as likely candidates for physically relevant states subject to this effect.
Quantum computation in the one-way model requires the preparation of certain resource states known as cluster states. We describe how the construction of continuous-variable cluster states for optical quantum computing relates to the existence of certain families of matrices. The relevant matrices are known as weighing matrices, here with a few additional constraints. We prove some results regarding the structure of these matrices and their associated graphs.
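For concreteness, a weighing matrix $W(n, k)$ has entries in $\{0, \pm 1\}$ and satisfies $W W^T = k I$. The snippet below just checks a standard $W(4, 3)$ example; the additional graph-theoretic constraints required by the cluster-state construction are not checked here.

```python
import numpy as np

# A weighing matrix W(4, 3): entries in {0, +1, -1} with W W^T = 3 I
W = np.array([[ 0,  1,  1,  1],
              [ 1,  0,  1, -1],
              [ 1, -1,  0,  1],
              [ 1,  1, -1,  0]])
assert np.array_equal(W @ W.T, 3 * np.eye(4, dtype=int))
print("W(4,3) verified: orthogonal rows, constant weight 3")
```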
A parameter whose coupling to a quantum probe of n constituents includes all two-body interactions between the constituents can be measured with an uncertainty that scales as $1/n^{3/2}$, even when the constituents are initially unentangled. We devise a protocol that achieves the $1/n^{3/2}$ scaling without generating any entanglement among the constituents, and we suggest that the protocol might be implemented in a two-component Bose-Einstein condensate.
One-way quantum computing allows any quantum algorithm to be implemented easily using just measurements. The difficult part is creating the universal resource, a cluster state, on which the measurements are made. We propose a radically new approach: a scalable method that uses a single, multimode optical parametric oscillator (OPO). The method is very efficient and generates a continuous-variable cluster state, universal for quantum computation, with quantum information encoded in the quadratures of the optical frequency comb of the OPO.
We report on our research effort to generate large-scale multipartite optical-mode entanglement using as few physical resources as possible. We have previously shown that cluster- and GHZ-type $N$-partite continuous-variable entanglement can be obtained in an optical resonator that contains a suitably designed second-order nonlinear optical medium, pumped by at most $O(N^2)$ fields. In this paper, we show that the frequency comb of such a resonator can be entangled into an arbitrary number of independent $2\times 2$ and $2\times 3$ continuous-variable cluster states by a single optical parametric oscillator pumped by just a few optical modes.
We study the performance of initial product states of $n$-body systems in generalized quantum metrology protocols that involve estimating an unknown coupling constant in a nonlinear $k$-body ($k \ll n$) Hamiltonian. We obtain the theoretical lower bound on the uncertainty in the estimate of the parameter. For arbitrary initial states, the lower bound scales as $1/n^k$, and for initial product states, it scales as $1/n^{k-1/2}$. We show that the latter scaling can be achieved using simple, separable measurements. We analyze in detail the case of a quadratic Hamiltonian ($k = 2$), implementable with Bose-Einstein condensates. We formulate a simple model, based on the evolution of angular-momentum coherent states, which explains the $O(n^{-3/2})$ scaling for $k = 2$; the model shows that the entanglement generated by the quadratic Hamiltonian does not play a role in the enhanced sensitivity scaling. We show that phase decoherence does not affect the $O(n^{-3/2})$ sensitivity scaling for initial product states.
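The essence of the scaling argument fits in two lines. As a back-of-the-envelope sketch (my shorthand, assuming each qubit of the product state is polarized so that $\langle \sigma_z \rangle = \mu \neq 0$, and writing $J_z = \frac{1}{2}\sum_j \sigma_z^{(j)}$):

$$ \langle J_z \rangle = \frac{n\mu}{2}, \qquad \Delta J_z = \frac{\sqrt{n(1-\mu^2)}}{2}, \qquad \Delta(J_z^2) \approx 2\,|\langle J_z \rangle|\,\Delta J_z \propto n^{3/2}, $$

so the quantum Cramér–Rao bound for $H = \gamma J_z^2$ after an evolution time $t$ gives $\delta\gamma \ge 1/[2 t\, \Delta(J_z^2)] \propto n^{-3/2}$, with no entanglement required.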
We study how heralded qubit losses during the preparation of a two-dimensional cluster state, a universal resource state for one-way quantum computation, affect its computational power. Above the percolation threshold we present a polynomial-time algorithm that concentrates a universal cluster state, using resources that scale optimally in the size of the original lattice. Below the percolation threshold, by contrast, we show that single-qubit measurements on the faulty lattice can be efficiently simulated classically. We observe a phase transition at the threshold, where the amount of entanglement in the faulty lattice that is directly relevant to the computational power changes exponentially.
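The underlying percolation question is easy to explore numerically. Here is a toy Monte Carlo sketch (plain site percolation on a square lattice, standing in for the cluster-state analysis above) that estimates the probability of a spanning cluster as the qubit retention probability crosses the threshold:

```python
import numpy as np
from collections import deque

def spans(lattice):
    """True if the retained sites (True entries) connect top to bottom."""
    L = lattice.shape[0]
    seen = np.zeros_like(lattice, dtype=bool)
    q = deque((0, c) for c in range(L) if lattice[0, c])
    for cell in q:
        seen[cell] = True
    while q:
        r, c = q.popleft()
        if r == L - 1:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < L and 0 <= cc < L and lattice[rr, cc] and not seen[rr, cc]:
                seen[rr, cc] = True
                q.append((rr, cc))
    return False

rng = np.random.default_rng(3)
L, trials = 64, 200
for p in (0.55, 0.59, 0.63):   # site-percolation threshold on Z^2 is ~0.5927
    hits = sum(spans(rng.random((L, L)) < p) for _ in range(trials))
    print(f"p = {p:.2f}: spanning fraction {hits / trials:.2f}")
```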
We propose an experimental scheme that has the potential for large-scale realization of continuous-variable (CV) cluster states for universal quantum computation. We do this by mapping CV cluster-state graphs onto two-mode squeezing graphs, which can be engineered into a single optical parametric oscillator (OPO). The desired CV cluster state is produced directly from a joint squeezing operation on the vacuum using a multi-frequency pump beam. This method has potential for ultracompact experimental implementation. As an illustration, we detail an experimental proposal for creating a four-mode square CV cluster state with a single OPO.
Entanglement measures constructed from two positive, but not completely positive maps on density operators are used as constraints in placing bounds on the entanglement of formation, the tangle, and the concurrence of $4 \times N$ mixed states. The maps are the partial transpose map and the $\Phi$-map introduced by Breuer [H.-P. Breuer, Phys. Rev. Lett. 97, 080501 (2006)]. The norm-based entanglement measures constructed from these two maps, called negativity and $\Phi$-negativity, respectively, lead to two sets of bounds on the entanglement of formation, the tangle, and the concurrence. We compare these bounds and identify the sets of $4 \times N$ density operators for which the bounds from one constraint are better than the bounds from the other. In the process, we present a new derivation of the already known bound on the concurrence based on the negativity. We compute new bounds on the three measures of entanglement using both the constraints simultaneously. We demonstrate how such doubly constrained bounds can be constructed. We discuss extensions of our results to bipartite states of higher dimensions and with more than two constraints.
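As a small illustration of the first of the two constraints, this numpy sketch computes the negativity (the sum of the magnitudes of the negative eigenvalues of the partial transpose) for a family of noisy Bell states; the $\Phi$-map construction is analogous but omitted here.

```python
import numpy as np

def partial_transpose(rho, dA, dB):
    """Partial transpose on subsystem B of a (dA*dB) x (dA*dB) matrix."""
    r = rho.reshape(dA, dB, dA, dB)
    return r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

def negativity(rho, dA, dB):
    """Sum of |negative eigenvalues| of the partial transpose."""
    evals = np.linalg.eigvalsh(partial_transpose(rho, dA, dB))
    return -evals[evals < 0].sum()

psi = np.array([1, 0, 0, 1]) / np.sqrt(2)         # Bell state
for w in (0.2, 1/3, 0.5, 1.0):                    # Werner-like mixing parameter
    rho = w * np.outer(psi, psi) + (1 - w) * np.eye(4) / 4
    print(w, negativity(rho, 2, 2))               # nonzero exactly when w > 1/3
```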
We develop generalized bounds for quantum single-parameter estimation problems for which the coupling to the parameter is described by intrinsic multi-system interactions. For a Hamiltonian with $k$-system parameter-sensitive terms, the quantum limit scales as $1/N^k$ where $N$ is the number of systems. These quantum limits remain valid when the Hamiltonian is augmented by any parameter independent interaction among the systems and when adaptive measurements via parameter-independent coupling to ancillas are allowed.
The generalized Pauli group and its normalizer, the Clifford group, have a rich mathematical structure which is relevant to the problem of constructing symmetric informationally complete POVMs (SIC-POVMs). To date, almost every known SIC-POVM fiducial vector is an eigenstate of a "canonical" unitary in the Clifford group. I show that every canonical unitary in prime dimensions $p > 3$ lies in the same conjugacy class of the Clifford group and give a class representative for all such dimensions. It follows that if even one such SIC-POVM fiducial vector is an eigenvector of such a unitary, then all of them are (for a given such dimension). I also conjecture that in all dimensions $d$, the number of conjugacy classes is bounded above by 3 and depends only on $d \bmod 9$, and I support this claim with computer calculations in all dimensions $d < 48$.
The "Power of One Qubit" refers to a computational model that has access to only one pure bit of quantum information, along with $n$ qubits in the totally mixed state. This model, though not as powerful as a pure-state quantum computer, is capable of performing some computational tasks exponentially faster than any known classical algorithm. One such task is to estimate with fixed accuracy the normalized trace of a unitary operator that can be implemented efficiently in a quantum circuit. We show that circuits of this type generally lead to entangled states, and we investigate the amount of entanglement possible in such circuits, as measured by the multiplicative negativity. We show that the multiplicative negativity is bounded by a constant, independent of $n$, for all bipartite divisions of the $n+1$ qubits, and so becomes, when $n$ is large, a vanishingly small fraction of the maximum possible multiplicative negativity for roughly equal divisions. This suggests that the global nature of entanglement is a more important resource for quantum computation than the magnitude of the entanglement.
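The trace-estimation statistics are easy to mimic classically at small sizes. In this toy sketch, the control qubit's $X$ and $Y$ expectation values encode the real and imaginary parts of the normalized trace (with my sign conventions for the controlled-$U$), which we then estimate by binomial sampling:

```python
import numpy as np
from scipy.stats import unitary_group

rng = np.random.default_rng(4)
n = 6
d = 2**n
U = unitary_group.rvs(d, random_state=rng)
tau = np.trace(U) / d          # the normalized trace we want to estimate

# After a Hadamard on the control and a controlled-U on n maximally mixed
# qubits, the control satisfies <sigma_x> = Re(tau) and <sigma_y> = Im(tau).
shots = 20000
px = (1 + tau.real) / 2        # prob of the +1 outcome in the X basis
py = (1 + tau.imag) / 2        # prob of the +1 outcome in the Y basis
ex = 2 * rng.binomial(shots, px) / shots - 1
ey = 2 * rng.binomial(shots, py) / shots - 1
print("estimate:", ex + 1j * ey, "  exact:", tau)
```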
We consider measurements, described by a positive-operator-valued measure (POVM), whose outcome probabilities determine an arbitrary pure state of a $D$-dimensional quantum system. We call such a measurement a pure-state informationally complete (PSI-complete) POVM. We show that a measurement with $2D-1$ outcomes cannot be PSI-complete, and then we construct a POVM with $2D$ outcomes that suffices, thus showing that a minimal PSI-complete POVM has $2D$ outcomes. We also consider PSI-complete POVMs that have only rank-one POVM elements and construct an example with $3D-2$ outcomes, which is a generalization of the tetrahedral measurement for a qubit. The question of the minimal number of elements in a rank-one PSI-complete POVM is left open.
The courses I'm currently teaching, along with links to past courses I have taught.
Semester-long undergraduate course on statistical mechanics focusing on computational aspects such as Monte Carlo methods as well as modern techniques such as tensor networks.
Advanced 3rd-year undergraduate course on quantum mechanics covering wave mechanics, angular momentum, the hydrogen atom, perturbation theory, and atoms in EM fields.
Here are my current students, postdocs and senior scientists, as well as my former students. We are also always collaborating with the rest of the Sydney quantum physics group, including both the theorists and the experimentalists.
Research Scientist
Postdoctoral Researcher
Postdoctoral Researcher
PhD student, co-supervised with Andrew Doherty.
PhD student.
PhD student.
Honours student.
Honours student. Currently doing a PhD in mathematical biology at the University of Sydney.
Undergraduate Summer Student at Caltech. Currently a graduate student at the University of New Mexico.
Undergraduate Summer Student at Perimeter Institute. Currently a graduate student at UC Berkeley.
Undergraduate Summer Student at Perimeter Institute. Currently a graduate student at Yale.
Coming soon...
Coming soon...
Some old notes
We are always looking for Honours, Masters, and PhD students. If you are interested in doing research in quantum information theory at Sydney and you have a strong interest and background in at least two of physics, mathematics, and computer science, then don't hesitate to contact me to discuss the opportunities available to you. Co-supervisory arrangements for students in maths or computer science departments are possible as well, as the research in the group is strongly interdisciplinary.
If you are interested in doing a postdoc with our group, please send a CV and a brief research statement (2 pages max), and arrange for two referees to send letters of recommendation. There are also several very generous fellowships available for outstanding candidates, which include personal discretionary funds for travel and equipment, but these must be applied for well in advance. Please don't hesitate to contact me for more information.
The Sydney quantum information group has very broad interests, and students are encouraged to learn as much as possible from their peers and postdocs as well as their supervisor. Collaboration is strongly supported, and we strive to find a congenial atmosphere for exploring creative ideas in groups as well as individually. The projects in the group range from highly mathematical to more experimentally focused theory projects.
The group is located in the heart of Sydney, one of the most iconic and beautiful cities in the world, a few minutes' walk from the central business district and just a few kilometers by bus from world-class beaches. We are also a short distance from other quantum information groups at the University of Technology Sydney, Macquarie University, and the University of New South Wales.
My work has been covered in the popular press, including Nature, Science, Physics Viewpoint (twice), PhysOrg, American Scientist, Science News, and Physics World.
40+ published research articles in peer-reviewed journals, including 10 papers in Physical Review Letters.
More than 3000 citations in the last 5 years, and an h-index of 28, increasing by 2 per year on average (source: Google Scholar).
Chief Investigator on an ARO grant worth US$1.6M through 2019, and co-Principal Investigator on two other grants with ARO and IARPA. Holder of an ARC Future Fellowship for the next four years (until 2018).
University of Sydney, School of Physics
University of Sydney, School of Physics
University of Sydney, School of Physics
University of Washington, Department of Computer Science
Caltech, Institute for Quantum Information
Perimeter Institute for Theoretical Physics, Quantum Information group
Ph.D. in Physics, University of New Mexico
Physics & Mathematics, Pennsylvania State University