As the field of the quantum internet advances, the need for a comprehensive guide to navigate its complexities has become increasingly pressing. Quantum computing, a more established relative, shares foundational principles with the quantum internet; however, distinguishing between the two is essential for further development and deeper understanding. This work systematically introduces the quantum internet by addressing fundamental questions: Why is it important? What are its core components? How does it function? When will it be viable? Who are the key players? What are the challenges and future directions? By elucidating these aspects alongside a comprehensive introduction to the basic concepts, we aim to provide a clear and accessible overview of the quantum internet, laying the groundwork for future innovation and research in the field.
The objective to be minimized in the variational quantum eigensolver (VQE) has a restricted form, which allows a specialized sequential minimal optimization (SMO) that requires only a few observations in each iteration. However, the SMO iteration is still costly due to the observation noise -- one observation at a point typically requires averaging over hundreds to thousands of repeated quantum measurement shots to achieve a reasonable noise level. In this paper, we propose an adaptive cost control method, named subspace in confident region (SubsCoRe), for SMO. SubsCoRe uses a Gaussian process (GP) surrogate, and requires it to have low uncertainty over the subspace being updated, so that optimization in each iteration is performed with guaranteed accuracy. The adaptive cost control is performed by first setting the required accuracy according to the progress of the optimization, and then choosing the minimum number of measurement shots, and their distribution, such that the required accuracy is satisfied. We demonstrate that SubsCoRe significantly improves the efficiency of SMO and outperforms state-of-the-art methods.
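As a toy illustration of the shot-allocation idea described above (not the actual SubsCoRe implementation; the energy landscape, noise scale, and accuracy schedule below are all hypothetical), the following Python sketch picks the minimum number of shots so that the standard error of an energy estimate meets a target that tightens as the optimization progresses:

    import numpy as np

    def shots_for_accuracy(sigma_shot, eps):
        """Smallest N with standard error sigma_shot / sqrt(N) <= eps."""
        return int(np.ceil((sigma_shot / eps) ** 2))

    def estimate_energy(theta, eps, sigma_shot=1.0, rng=np.random.default_rng(0)):
        """Estimate a toy energy landscape E(theta) = cos(theta), corrupted by
        per-shot noise of scale sigma_shot, using just enough shots for eps."""
        n = shots_for_accuracy(sigma_shot, eps)
        samples = np.cos(theta) + sigma_shot * rng.standard_normal(n)
        return samples.mean(), n

    # Tighten the required accuracy as the optimization progresses, so early
    # iterations are cheap and late iterations are precise:
    for it, eps in enumerate([0.1, 0.03, 0.01]):
        energy, shots = estimate_energy(theta=0.3, eps=eps)
        print(f"iteration {it}: eps={eps}, shots={shots}, estimate={energy:.4f}")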
We consider the problem of estimating the energy of a quantum state preparation for a given Hamiltonian in Pauli decomposition. For various quantum algorithms, in particular in the context of quantum chemistry, it is crucial to have energy estimates with error bounds, as captured by guarantees on the problem's sampling complexity. In particular, when limited to Pauli basis measurements, the smallest sampling complexity guarantee comes from a simple single-shot estimator via a straightforward argument based on Hoeffding's inequality. In this work, we construct an adaptive estimator using the state's actual variance. Technically, our estimation method is based on the Empirical Bernstein stopping (EBS) algorithm and grouping schemes, and we provide a rigorous tail bound, which leverages the state's empirical variance. In a numerical benchmark of estimating ground-state energies of several Hamiltonians, we demonstrate that EBS consistently improves upon elementary readout guarantees by up to one order of magnitude.
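A minimal sketch of an empirical-Bernstein stopping rule in the spirit of Mnih et al.'s EBS algorithm (our simplified union-bound schedule over stopping times; the paper's estimator additionally uses grouping schemes and its own rigorous tail bound):

    import numpy as np

    def ebs_mean(sample, eps, delta, R, max_n=1_000_000):
        """Empirical Bernstein stopping (sketch): keep sampling until the
        empirical-Bernstein confidence radius drops below eps.
        `sample()` must return observations bounded in [-R, R]."""
        xs = []
        while len(xs) < max_n:
            xs.append(sample())
            t = len(xs)
            if t < 2:
                continue
            x = np.asarray(xs)
            log_term = np.log(3.0 * t * (t + 1) / delta)   # union bound over t
            radius = (x.std(ddof=1) * np.sqrt(2.0 * log_term / t)
                      + 3.0 * R * log_term / t)
            if radius <= eps:
                return x.mean(), t, radius
        raise RuntimeError("no stop within max_n samples")

    rng = np.random.default_rng(1)
    mean, n, radius = ebs_mean(lambda: 0.5 + 0.1 * rng.uniform(-1, 1),
                               eps=0.01, delta=0.05, R=1.0)
    print(f"mean={mean:.4f} after n={n} samples (radius={radius:.4f})")

Unlike a Hoeffding-based rule, the stopping radius here shrinks with the empirical standard deviation, so low-variance states stop far earlier.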
Monitored many-body systems can exhibit a phase transition between entangling and disentangling dynamical phases by tuning the strength of measurements made on the system as it evolves. This phenomenon is called the measurement-induced phase transition (MIPT). Understanding the properties of the MIPT is a prominent challenge for both theory and experiment at the intersection of many-body physics and quantum information. Realizing the MIPT experimentally is particularly challenging due to the postselection problem, which demands a number of experimental realizations that grows exponentially with the number of measurements made during the dynamics. Proposed approaches that circumvent the postselection problem typically rely on a classical decoding process that infers the final state based on the measurement record. But the complexity of this classical process generally also grows exponentially with the system size unless the dynamics is restricted to a fine-tuned set of unitary operators. In this work we overcome these difficulties. We construct a tree-shaped quantum circuit whose nodes are Haar-random unitary operators followed by weak measurements of tunable strength. For these circuits, we show that the MIPT can be detected without postselection using only a simple classical decoding process whose complexity grows linearly with the number of qubits. Our protocol exploits the recursive structure of tree circuits, which also enables a complete theoretical description of the MIPT, including an exact solution for its critical point and scaling behavior. We experimentally realize the MIPT on Quantinuum's H1-1 trapped-ion quantum computer and show that the experimental results are precisely described by theory. Our results close the gap between analytical theory and postselection-free experimental observation of the MIPT.
Tensor network formalisms have emerged as powerful tools for simulating quantum state evolution. While widely applied in the study of optical quantum circuits, such as Boson Sampling, existing tensor network approaches fail to address the complexity mismatch between tensor contractions and the calculation of photon-counting probability amplitudes. Here, we present an alternative tensor network framework, the operator-basis Matrix Product State (MPS), which exploits the input-output relations of quantum optical circuits encoded in the unitary interferometer matrix. Our approach bridges the complexity gap by enabling the computation of the permanent -- central to Boson Sampling -- with the same computational complexity as the best known classical algorithm, based on a graphical representation of the operator-basis MPS that we introduce. Furthermore, we exploit the flexibility of tensor networks to extend our formalism to incorporate partial distinguishability and photon loss, two key imperfections in practical interferometry experiments. This work offers a significant step forward in the simulation of large-scale quantum optical systems and the understanding of their computational complexity.
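For context, the classical baseline invoked here is Ryser-type evaluation of the permanent, $\mathrm{per}(A) = (-1)^n \sum_{S\subseteq\{1,\dots,n\}} (-1)^{|S|} \prod_{i=1}^{n}\sum_{j\in S} a_{ij}$; the direct (unoptimized) implementation below runs in $O(2^n n^2)$ time, while the Gray-code variant achieves the $O(2^n n)$ scaling usually quoted as the best known:

    import itertools
    import numpy as np

    def ryser_permanent(A):
        """Permanent of an n x n matrix via Ryser's formula, O(2^n * n^2).
        (The Gray-code variant reaches O(2^n * n); this plain version keeps
        the sketch short.)"""
        n = A.shape[0]
        total = 0.0
        for r in range(1, n + 1):
            for cols in itertools.combinations(range(n), r):
                row_sums = A[:, cols].sum(axis=1)
                total += (-1) ** r * np.prod(row_sums)
        return (-1) ** n * total

    A = np.ones((4, 4))
    print(ryser_permanent(A))  # permanent of the all-ones 4x4 matrix = 4! = 24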
Magic state cultivation is a newly proposed protocol that represents the state of the art in magic state generation. It uses the transversality of the $\text{H}_{XY}$ gate on the 2D triangular color code, together with a novel grafting mechanism, to transform the color code into a matchable code with minimal overhead. Still, the resulting code has a longer cycle time and some high-weight stabilizers. Here, we introduce a new cultivation protocol that avoids grafting by exploiting the transversality of the controlled-X (CX) operation on all CSS codes. Our protocol projects a pair of low-distance surface codes into a magic CX-eigenstate, and expands them to larger surface codes after post-selection. The effect of erasure qubits on both types of cultivation is analyzed, showing how they can be used to further reduce the logical error rate of cultivation. Importantly, CX cultivation uses non-local connectivity, and benefits from platforms with native Toffoli ($\text{CCX}$) gates, both of which were recently demonstrated with Rydberg atoms.
Uhlmann's theorem states that, for any two quantum states $\rho_{AB}$ and $\sigma_A$, there exists an extension $\sigma_{AB}$ of $\sigma_A$ such that the fidelity between $\rho_{AB}$ and $\sigma_{AB}$ equals the fidelity between their reduced states $\rho_A$ and $\sigma_A$. In this work, we generalize Uhlmann's theorem to $\alpha$-R\'enyi relative entropies for $\alpha \in [\frac{1}{2},\infty]$, a family of divergences that encompasses fidelity, relative entropy, and max-relative entropy corresponding to $\alpha=\frac{1}{2}$, $\alpha=1$, and $\alpha=\infty$, respectively.
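For reference, the sandwiched R\'enyi relative entropy underlying this family is (in standard notation) $$ \widetilde{D}_\alpha(\rho\|\sigma) = \frac{1}{\alpha-1}\log \operatorname{Tr}\left[\left(\sigma^{\frac{1-\alpha}{2\alpha}}\,\rho\,\sigma^{\frac{1-\alpha}{2\alpha}}\right)^{\alpha}\right], $$ and the case $\alpha=\frac{1}{2}$ recovers the fidelity setting of the original theorem, since $\widetilde{D}_{1/2}(\rho\|\sigma) = -\log F(\rho,\sigma)$ with $F(\rho,\sigma) = \big[\operatorname{Tr}\sqrt{\sqrt{\sigma}\,\rho\,\sqrt{\sigma}}\big]^2$.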
We propose a framework for simulating the real-time dynamics of quantum field theories (QFTs) using continuous-variable quantum computing (CVQC). Focusing on ($1+1$)-dimensional $\varphi^4$ scalar field theory, the approach employs the Hamiltonian formalism to map the theory onto a spatial lattice, with fields represented as quantum harmonic oscillators. Using measurement-based quantum computing, we implement non-Gaussian operations for CVQC platforms. The study introduces methods for preparing initial states with specific momenta and simulating their evolution under the $\varphi^4$ Hamiltonian. Key quantum objects, such as two-point correlation functions, validate the framework against analytical solutions. Scattering simulations further illustrate how mass and coupling strength influence field dynamics and energy redistribution. Thus, we demonstrate CVQC's scalability for larger lattice systems and its potential for simulating more complex field theories.
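For concreteness, a standard lattice Hamiltonian of the kind described (our notation; lattice spacing $a$, $N$ sites, conjugate pairs $[\hat{\varphi}_n,\hat{\pi}_m]=i\delta_{nm}$) is $$ \hat{H} = a\sum_{n=1}^{N}\left[\frac{1}{2}\hat{\pi}_n^2 + \frac{(\hat{\varphi}_{n+1}-\hat{\varphi}_n)^2}{2a^2} + \frac{m^2}{2}\hat{\varphi}_n^2 + \frac{\lambda}{4!}\hat{\varphi}_n^4\right], $$ where each site hosts one harmonic-oscillator mode (qumode) and the quartic term is the non-Gaussian piece that requires measurement-based implementation.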
Solid-state quantum sensors based on ensembles of nitrogen-vacancy (NV) centers in diamond have emerged as powerful tools for precise sensing applications. Nuclear spin sensors are particularly well-suited for applications requiring long coherence times, such as inertial sensing, but remain underexplored due to control complexity and limited optical readout efficiency. In this work, we propose cooperative cavity quantum electrodynamic (cQED) coupling to achieve efficient nuclear spin readout. Unlike previous cQED methods used to enhance electron spin readout, here we employ two-field interference in the NV hyperfine subspace to directly probe the nuclear spin transitions. We model the nuclear spin NV-cQED system (nNV-cQED) and observe several distinct regimes, including electromagnetically induced transparency, masing without inversion, and oscillatory behavior. We then evaluate the nNV-cQED system as an inertial sensor, indicating a rotation sensitivity improved by three orders of magnitude compared to previous solid-state spin demonstrations. Furthermore, we show that the NV electron spin can be simultaneously used as a comagnetometer, and the four crystallographic axes of NVs can be employed for vector resolution in a single nNV-cQED system. These results showcase the applications of two-field interference using the nNV-cQED platform, providing critical insights into the manipulation and control of quantum states in hybrid NV systems and unlocking new possibilities for high-performance quantum sensing.
We define a time-dependent extension of the quantum geometric tensor to describe the geometry of the time-parameter space for a quantum state, by considering small variations in both time and wave function parameters. Compared to the standard quantum geometric tensor, this tensor introduces new temporal components, enabling the analysis of systems with non-time-separable or explicitly time-dependent quantum states and encoding new information about these systems. In particular, the time-time component of this tensor is related to the energy dispersion of the system. We apply this framework to a harmonic/inverted oscillator, a time-dependent harmonic oscillator, and a chain of generalized harmonic/inverted oscillators. We present results on the scalar curvature associated with the time-dependent quantum geometric tensor and on the behavior of the generalized Berry curvature across the transition from harmonic to inverted oscillators. Furthermore, we analyze the entanglement of the chain through purity analysis, finding that the purity of any excited state vanishes at these transitions.
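In standard notation, the quantum geometric tensor over parameters $x^\mu$ is $Q_{\mu\nu} = \langle\partial_\mu\psi|\big(1-|\psi\rangle\langle\psi|\big)|\partial_\nu\psi\rangle$; including $t$ among the parameters, the Schr\"odinger equation $|\partial_t\psi\rangle = -\frac{i}{\hbar}\hat{H}|\psi\rangle$ fixes the time-time component to the energy dispersion, $$ Q_{tt} = \frac{\langle\hat{H}^2\rangle - \langle\hat{H}\rangle^2}{\hbar^2}. $$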
We consider a system of two indistinguishable fermions (with four accessible states each) that suffers decoherence without dissipation due to its coupling with a global bosonic bath at a fixed temperature. Using an appropriate measure of fermionic entanglement, we identify families of two-fermion states whose entanglement persists throughout the evolution, either fully or partially, despite the noisy effects of the interaction with the bath, and independently of its temperature. The identified resilience to decoherence provides valuable insights into the entanglement dynamics of open systems of indistinguishable fermions, and into the conditions under which long-lived entanglement emerges under more general decoherence channels.
Digital quantum simulation has emerged as a powerful approach to investigate complex quantum systems using digital quantum computers. Many-particle bosonic systems and intricate optical experimental setups pose significant challenges for classical simulation methods. Utilizing a recently developed formalism that maps bosonic operators to Pauli operators via the Gray code, we digitally simulate interferometric variants of Afshar's experiment on IBM's quantum computers. We investigate the analogous experiments of Unruh and Pessoa J\'unior, exploring discussions on the apparent violation of Bohr's complementarity principle when considering the entire experimental setup. Furthermore, we analyze these experiments within the framework of an updated quantum complementarity principle, which applies to specific quantum state preparations and remains consistent with the foundational principles of quantum mechanics. Our quantum computer demonstration results are in good agreement with the theoretical predictions and underscore the potential of quantum computers as effective simulators for bosonic systems.
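The Gray code itself (shown below; the specific bosonic-to-qubit mapping is the cited formalism's) sends an integer occupation number n to n ^ (n >> 1), so that consecutive occupations differ in exactly one bit, which keeps ladder-operator transitions local in qubit space:

    def gray(n: int) -> int:
        """Binary-reflected Gray code of n."""
        return n ^ (n >> 1)

    # Consecutive bosonic occupation numbers map to bitstrings that differ
    # in exactly one bit:
    for n in range(8):
        print(n, format(gray(n), "03b"))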
The development of quantum codes with good error correction parameters and useful sets of transversal gates is a problem of major interest in quantum error-correction. Abundant prior works have studied transversal gates which are restricted to acting on all logical qubits simultaneously. In this work, we study codes that support transversal gates which induce $\textit{addressable}$ logical gates, i.e., the logical gates act on logical qubits of our choice. As we consider scaling to high-rate codes, the study and design of low-overhead, addressable logical operations presents an important problem for both theoretical and practical purposes. Our primary result is the construction of an explicit qubit code for which $\textit{any}$ triple of logical qubits across one, two, or three codeblocks can be addressed with a logical $\mathsf{CCZ}$ gate via a depth-one circuit of physical $\mathsf{CCZ}$ gates, and whose parameters are asymptotically good, up to polylogarithmic factors. The result naturally generalizes to other gates including the $\mathsf{C}^{\ell} Z$ gates for $\ell \neq 2$. Going beyond this, we develop a formalism for constructing quantum codes with $\textit{addressable and transversal}$ gates. Our framework, called $\textit{addressable orthogonality}$, encompasses the original triorthogonality framework of Bravyi and Haah (Phys. Rev. A 2012), and extends this and other frameworks to study addressable gates. We demonstrate the power of this framework with the construction of an asymptotically good qubit code for which $\textit{pre-designed}$, pairwise disjoint triples of logical qubits within a single codeblock may be addressed with a logical $\mathsf{CCZ}$ gate via a physical depth-one circuit of $\mathsf{Z}$, $\mathsf{CZ}$ and $\mathsf{CCZ}$ gates. In an appendix, we show that our framework extends to addressable and transversal $T$ gates, up to Clifford corrections.
We extend the recently introduced Clifford dressed Time-Dependent Variational Principle (TDVP) to efficiently compute many-body wavefunction amplitudes in the computational basis. This advancement enhances the study of Loschmidt echoes, which generally require accurate calculations of the overlap between the evolved state and the initial wavefunction. By incorporating Clifford disentangling gates during TDVP evolution, our method effectively controls entanglement growth while keeping the computation of these amplitudes accessible. Specifically, it reduces the problem to evaluating the overlap between a Matrix Product State (MPS) and a stabilizer state, a task that remains computationally feasible within the proposed framework. To demonstrate the effectiveness of this approach, we first benchmark it on the one-dimensional transverse-field Ising model. We then apply it to more challenging scenarios, including a non-integrable next-to-nearest-neighbor Ising chain and a two-dimensional Ising model. Our results highlight the versatility and efficiency of the Clifford-augmented MPS, showcasing its capability to go beyond the evaluation of simple expectation values. This makes it a powerful tool for exploring various aspects of many-body quantum dynamics.
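For reference, the quantity at stake is the Loschmidt echo, the squared overlap between the initial and time-evolved states, $$ \mathcal{L}(t) = \big|\langle\psi(0)|e^{-i\hat{H}t}|\psi(0)\rangle\big|^2, $$ which requires exactly the kind of overlap and amplitude evaluation that the Clifford-dressed TDVP makes accessible.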
Decoherence of quantum hardware is currently limiting its practical applications. At the same time, classical algorithms for simulating quantum circuits have progressed substantially. Here, we demonstrate a hybrid framework that integrates classical simulations with quantum hardware to improve the computation of an observable's expectation value by reducing the quantum circuit depth. In this framework, a quantum circuit is partitioned into two subcircuits: one that describes the backpropagated Heisenberg evolution of an observable, executed on a classical computer, while the other is a Schr\"odinger evolution run on quantum processors. The overall effect is to reduce the depths of the circuits executed on quantum devices, at the cost of classical overhead and an increased number of circuit executions. We demonstrate the effectiveness of this method on a Hamiltonian simulation problem, achieving more accurate expectation value estimates compared to using quantum hardware alone.
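Schematically (our notation), writing the circuit as $U = U_c U_q$ with $U_q$ applied first, $$ \langle O\rangle = \langle\psi_0|U^\dagger O U|\psi_0\rangle = \langle\psi_0|U_q^\dagger\big(U_c^\dagger O U_c\big)U_q|\psi_0\rangle, $$ where the backpropagated observable $O' = U_c^\dagger O U_c$ is expanded classically (e.g., into a sum of Pauli strings) and only the shallower $U_q$ runs on hardware; the number of terms in $O'$ is what drives the classical overhead and the extra circuit executions.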
Superconducting quantum computers require microwave control lines running from room temperature to the mixing chamber of a dilution refrigerator. Adding more lines without preliminary thermal modeling risks overwhelming the cooling power at each thermal stage. In this paper, we investigate the thermal load of SC-086/50-SCN-CN semi-rigid coaxial cable, which is commonly used for the control and readout lines of a superconducting quantum computer, as we increase the number of lines to a quantum processor. We investigate the makeup of the coaxial cables, verify the materials and dimensions, and experimentally measure the total thermal conductivity of a single cable as a function of temperature from cryogenic to room-temperature values. We also measure the cryogenic DC electrical resistance of the inner conductor as a function of temperature, allowing for the calculation of active thermal loads due to Ohmic heating. Fitting this data produces a numerical thermal conductivity function used to calculate the static heat loads due to thermal transfer within the wires resulting from a temperature gradient. The resistivity data is used to calculate active heat loads, and we use these fits in a cryogenic model of a superconducting quantum processor in a typical Bluefors XLD1000-SL dilution refrigerator, investigating how the thermal load increases with processor sizes ranging from 100 to 225 qubits. We conclude that the theoretical upper limit of the described architecture is approximately 200 qubits. However, including an engineering margin in the cooling power and the available space for microwave readout circuitry at the mixing chamber, the practical limit will be approximately 140 qubits.
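The static heat-load calculation described here follows Fourier's law integrated along the cable: for cross-section $A$ and length $L$ between stages at $T_c$ and $T_h$, $\dot{Q} = (A/L)\int_{T_c}^{T_h}\kappa(T)\,dT$. A sketch with an assumed (hypothetical) power-law conductivity fit; a real estimate needs the measured $\kappa(T)$ reported in the paper:

    import numpy as np

    def static_heat_load(area_m2, length_m, t_cold, t_hot, kappa):
        """Passive conductive load Q = (A/L) * integral_{T_c}^{T_h} kappa(T) dT."""
        T = np.linspace(t_cold, t_hot, 2001)
        k = kappa(T)
        integral = np.sum(0.5 * (k[1:] + k[:-1]) * np.diff(T))  # trapezoid rule
        return area_m2 / length_m * integral

    # Hypothetical power-law fit for the cable's effective conductivity (W/m/K).
    kappa = lambda T: 0.2 * T**0.8

    # Example: a 20 cm coax section of 0.86 mm outer diameter spanning the
    # 4 K to 50 K stages (illustrative numbers only).
    A = np.pi * (0.43e-3) ** 2
    print(f"{static_heat_load(A, 0.20, 4.0, 50.0, kappa) * 1e6:.1f} microwatts")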
Quantum computers have the potential to revolutionise our understanding of the microscopic behaviour of materials and chemical processes by enabling high-accuracy electronic structure calculations to scale more efficiently than is possible using classical computers. Current quantum computing hardware devices suffer from the dual challenges of noise and cost, which raises the question of what practical value these devices might offer before full fault tolerance is achieved and economies of scale enable cheaper access. Here we examine the practical value of noisy quantum computers as tools for high-accuracy electronic structure, by using a Quantinuum ion-trap quantum computer to predict the ionisation potential of helium. By combining a series of techniques suited for use with current hardware including qubit-efficient encoding coupled with chemical insight, low-cost variational optimisation with hardware-adapted quantum circuits, and moments-based corrections, we obtain an ionisation potential of 24.5536 (+0.0011, -0.0005) eV, which agrees with the experimentally measured value to within true chemical accuracy, and with high statistical confidence. The methods employed here can be generalised to predict other properties and expand our understanding of the value that might be provided by near-term quantum computers.
The quantum Cram\'er-Rao (QCR) bound is tied to a particular nonclassical state; the choice of probe state is therefore of key importance for enhancing sensitivity beyond the classical limit. Since the work of C. M. Caves (Phys. Rev. D 23, 1693 (1981)), Mach-Zehnder (MZ) interferometry has operated with single-mode squeezed vacuum (SMSV) light coupled with a coherent state. We report a sensitivity gain of more than 10 dB for the phase-dependent MZ interferometer compared to the original result (Phys. Rev. Lett. 100, 073601 (2008)), obtained by using an SMSV state with squeezing <10 dB from which a certain number of photons is initially subtracted. A sensitivity gain is also observed when measuring the difference of output intensities between an SMSV state with squeezing <3 dB from which 2, 4, or 6 photons are subtracted and a large coherent state. Overall, subtracting photons from initially weakly squeezed light can prove a more efficient strategy in quantum MZ interferometry than generating highly squeezed SMSV states.
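For reference, with $M$ independent repetitions the quantum Cram\'er-Rao bound ties the phase sensitivity to the quantum Fisher information $F_Q$ of the probe state, $$ \Delta\varphi \ge \frac{1}{\sqrt{M F_Q}}, $$ which is why reshaping the probe by photon subtraction translates directly into a sensitivity gain.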
We propose a quantum algorithm for the linear advection-diffusion equation (ADE) Lattice-Boltzmann method (LBM) that leverages dynamic circuits. Dynamic quantum circuits allow for an optimized collision-operator quantum algorithm, introducing partial measurements as an integral step. Efficient adaptation of the quantum circuit during execution based on digital information obtained through mid-circuit measurements is achieved. The proposed new collision algorithm is implemented as a fully unitary operator, which facilitates the computation of multiple time steps without state reinitialization. Unlike previous quantum collision operators that rely on linear combinations of unitaries, the proposed algorithm does not exhibit a probabilistic failure rate. Moreover, additional qubits no longer depend on the chosen velocity set, which reduces both qubit overhead and circuit complexity. Validation of the quantum collision algorithm is performed by comparing results with digital LBM in one and two dimensions, demonstrating excellent agreement. Performance analysis for multiple time steps highlights advantages compared to previous methods. As an additional variant, a hybrid quantum-digital approach is proposed, which reduces the number of mid-circuit measurements, therefore improving the efficiency of the quantum collision algorithm.
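For reference, the target equation and the standard lattice-Boltzmann (BGK) collision step are, in common notation, $$ \partial_t \phi + \nabla\cdot(\boldsymbol{u}\,\phi) = D\,\nabla^2\phi, \qquad f_i \leftarrow f_i - \frac{1}{\tau}\big(f_i - f_i^{\mathrm{eq}}\big), $$ where the populations $f_i$ belong to the chosen velocity set and the relaxation time $\tau$ sets the diffusivity ($D = c_s^2(\tau - \tfrac{1}{2})\,\Delta t$ in lattice units).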
Solving quantum molecular systems presents a significant challenge for classical computation. The advent of early fault-tolerant quantum computing (EFTQC) devices offers a promising avenue to address these challenges, leveraging advanced quantum algorithms with reduced hardware requirements. This review surveys the latest developments in EFTQC and fully fault-tolerant quantum computing (FFTQC) algorithms for quantum molecular systems, covering encoding schemes, advanced Hamiltonian simulation techniques, and ground-state energy estimation methods. We highlight recent progress in overcoming practical barriers, such as reducing circuit depth and minimizing the use of ancillary qubits. Special attention is given to the potential quantum advantages achievable through these algorithms, as well as the limitations imposed by dequantization and classical simulation techniques. The review concludes with a discussion of future directions, emphasizing the need for optimized algorithms and experimental validation to bridge the gap between theoretical developments and practical implementation in EFTQC and FFTQC for quantum molecular systems.
With the growing interest in quantum computing, quantum image processing has become a vital research field due to its versatile applications and its potential to outperform classical computing. Quantum autoencoder approaches have been used for compression purposes. However, existing autoencoders are limited to small-scale images, and the mechanisms of state compression remain unclear. There is also a need for efficient quantum autoencoders using standard representation approaches, and for studying parameterized position-aware control qubits and their corresponding quality measurement metrics. This work introduces a novel parameterized position-aware lossy quantum autoencoder (PALQA) circuit that utilizes the least-significant-bit control qubit for image compression. The PALQA circuit employs a transformed coefficient block-based modified state connection approach to efficiently compress images at various resolutions. The method leverages compression opportunities in the state-label connection by applying a position-aware least-significant control qubit. Compared to JPEG and other enhanced quantum representation-based quantum autoencoders, the PALQA circuit demonstrates superior performance in terms of the number of gates required and PSNR metrics.
Quantum optimal control methods, such as gradient ascent pulse engineering (GRAPE), are used for precise manipulation of quantum states. Many of those methods were pioneered in magnetic resonance spectroscopy where instrumental distortions are often negligible. However, that is not the case elsewhere: the usual gallimaufry of cables, resonators, modulators, splitters, filters, and amplifiers can and does distort control signals. Those distortions may be non-linear, their inverse functions may be ill-defined and unstable; they may even vary from one day to the next, and across the sample. Here we introduce the response-aware gradient ascent pulse engineering (RAW-GRAPE) framework, which accounts for any cascade of differentiable distortions directly within the GRAPE optimisation loop, does not require response function inversion, and produces control sequences that are resilient to user-specified distortion cascades with user-specified parameter ensembles. The framework is implemented in the optimal control module supplied with versions 2.10 and later of the open-source Spinach library; the user needs to provide function handles returning the actions of the distortions and, optionally, parameter ensembles for those actions.
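One way to see why no response-function inversion is needed: a differentiable distortion simply contributes its Jacobian to the chain rule. A minimal sketch (ours, not the Spinach implementation) for a linear distortion y = Kx, where the pulse gradient is K-transpose applied to the fidelity gradient at the distorted pulse:

    import numpy as np

    # Hypothetical linear distortion: convolution with a smoothing kernel,
    # standing in for amplifier/cable response. y = K @ x.
    n = 64
    kernel = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)
    kernel /= kernel.sum()
    K = np.zeros((n, n))
    for i in range(n):
        for j, w in zip(range(i - 5, i + 6), kernel):
            if 0 <= j < n:
                K[i, j] = w

    target = np.sin(np.linspace(0, np.pi, n))

    def objective(y):
        """Stand-in for a GRAPE fidelity functional of the *distorted* pulse."""
        return -np.sum((y - target) ** 2)

    def grad_objective(y):
        return -2.0 * (y - target)

    # Gradient ascent on the *undistorted* waveform x: chain rule through K,
    # no inverse of the response function required.
    x = np.zeros(n)
    for _ in range(200):
        x += 0.1 * K.T @ grad_objective(K @ x)
    print(f"final objective: {objective(K @ x):.4f}")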
Efficiently distributing secret keys over long distances remains a critical challenge in the development of quantum networks. "First-generation" quantum repeater chains distribute entanglement by executing protocols composed of probabilistic entanglement generation, swapping and distillation operations. However, finding the protocol that maximizes the secret-key rate is difficult for two reasons. First, calculating the secret-key rate for a given protocol is non-trivial due to experimental imperfections and the probabilistic nature of the operations. Second, the protocol space rapidly grows with the number of nodes, and lacks any clear structure for efficient exploration. To address the first challenge, we build upon and extend the efficient machinery developed by Li et al. [1], enabling numerical calculation of the secret-key rate for heterogeneous repeater chains with an arbitrary number of nodes. For navigating the large, unstructured space of repeater protocols, we implement a Bayesian optimization algorithm, which we find consistently returns the optimal result. Whenever comparisons are feasible, we validate its accuracy against results obtained through brute-force methods. Further, we use our framework to extract insight into how to maximize the efficiency of repeater protocols across varying node configurations and hardware conditions. Our results highlight the effectiveness of Bayesian optimization in exploring the potential of near-term quantum repeater chains.
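A generic Bayesian-optimization loop of the kind used here (a sketch with an expected-improvement acquisition; the one-dimensional stand-in objective and the protocol encoding are hypothetical, not the paper's):

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    rng = np.random.default_rng(0)
    f = lambda x: -np.sin(3 * x) - x**2 + 0.7 * x   # stand-in for secret-key rate

    X = rng.uniform(-2, 2, (5, 1))                   # initial protocol samples
    y = f(X).ravel()

    for _ in range(25):
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                                      normalize_y=True).fit(X, y)
        cand = rng.uniform(-2, 2, (512, 1))          # candidate protocols
        mu, sd = gp.predict(cand, return_std=True)
        best = y.max()
        z = (mu - best) / np.maximum(sd, 1e-12)
        ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)  # expected improvement
        x_next = cand[np.argmax(ei)].reshape(1, 1)
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next).ravel())

    print(f"best parameter: {X[y.argmax()][0]:.3f}, best rate: {y.max():.3f}")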
We address the problem of solving a system of linear equations via the Quantum Singular Value Transformation (QSVT). One drawback of the QSVT algorithm is that it requires substantial quantum resources to achieve acceptable accuracy. To reduce the quantum cost, we propose a hybrid quantum-classical algorithm that improves the accuracy and reduces the cost of the QSVT by adding iterative refinement in mixed precision. A first quantum solution is computed using the QSVT in low precision, and then refined in higher precision until a satisfactory accuracy is reached. For this solver, we present an error and complexity analysis, along with first experiments using the quantum software stack myQLM.
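Classical mixed-precision iterative refinement, of which the proposed scheme is a quantum-classical analogue (in this sketch, the low-precision float32 solve plays the role of the cheap, noisy QSVT solution):

    import numpy as np

    def iterative_refinement(A, b, tol=1e-12, max_iter=20):
        """Solve Ax = b: low-precision (float32) solves refined in float64."""
        A32 = A.astype(np.float32)
        x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
        for _ in range(max_iter):
            r = b - A @ x                          # residual in high precision
            if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                break
            d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
            x += d                                 # low-precision correction
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 50)) + 50 * np.eye(50)  # well-conditioned
    b = rng.standard_normal(50)
    x = iterative_refinement(A, b)
    print(np.linalg.norm(A @ x - b))               # residual at float64 accuracy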
Quantum hypergraph states form a generalisation of the graph state formalism that goes beyond the pairwise (dyadic) interactions imposed by remaining inside the Gaussian approximation. Networks of such states are able to achieve universality for continuous-variable measurement-based quantum computation with only Gaussian measurements. For normalised states, the simplest hypergraph states are formed from $k$-adic interactions among a collection of $k$ harmonic oscillator ground states. However, such powerful resources have not yet been observed in experiments, and their robustness and scalability have not been tested. Here we develop and analyse necessary criteria for hypergraph nonclassicality based on simultaneous nonlinear squeezing in the nullifiers of hypergraph states. We put forward an essential analysis of their robustness in realistic scenarios involving thermalisation or loss, and suggest several basic proof-of-principle options for experiments to observe nonclassicality in hypergraph states.
An essential component of many sophisticated metaheuristics for solving combinatorial optimization problems is some variation of a local search routine that iteratively searches for a better solution within a chosen set of immediate neighbors. The size $l$ of this set is limited due to the computational costs required to run the method on classical processing units. We present a qubit-efficient variational quantum algorithm that implements a quantum version of local search with only $\lceil \log_2 l \rceil$ qubits and, therefore, can potentially work with classically intractable neighborhood sizes when realized on near-term quantum computers. Increasing the amount of quantum resources employed in the algorithm allows for a larger neighborhood size, improving the quality of obtained solutions. This trade-off is crucial for present and near-term quantum devices characterized by a limited number of logical qubits. Numerically simulating our algorithm, we successfully solved the largest graph coloring instance that was tackled by a quantum method. This achievement highlights the algorithm's potential for solving large-scale combinatorial optimization problems on near-term quantum devices.
This work introduces SpinGlassPEPS.jl, a software package implemented in Julia, designed to find low-energy configurations of generalized Potts models, including Ising and QUBO problems, utilizing heuristic tensor network contraction algorithms on quasi-2D geometries. In particular, the package employs projected entangled pair states (PEPS) to approximate the Boltzmann distribution corresponding to the model's cost function. This enables an efficient branch-and-bound search (within the probability space) that exploits the locality of the underlying problem's topology. As a result, our software enables the discovery of low-energy configurations for problems on quasi-2D graphs, particularly those relevant to modern quantum annealing devices. The modular architecture of SpinGlassPEPS.jl supports various contraction schemes and hardware acceleration.
Quantum key distribution requires tight and reliable bounds on the secret key rate to ensure robust security. This is particularly so for the regime of finite block sizes, where the optimization of generalized R\'enyi entropic quantities is known to provide tighter bounds on the key rate. However, such an optimization is often non-trivial, and the non-monotonicity of the key rate in terms of the R\'enyi parameter demands additional optimization to determine the optimal R\'enyi parameter as a function of block sizes. In this work, we present a tight analytical bound on the R\'enyi entropy in terms of the R\'enyi divergence and derive the analytical gradient of the R\'enyi divergence. This enables us to generalize existing state-of-the-art numerical frameworks for the optimization of the key rate. With this generalized framework, we show improvements in regimes of high loss and low block sizes, which are particularly relevant for long-distance satellite-based protocols.
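The two quantities are coupled through the standard variational formula (for the sandwiched divergence $\widetilde{D}_\alpha$) $$ H_\alpha^{\uparrow}(A|B)_\rho = \max_{\sigma_B} \Big[-\widetilde{D}_\alpha\big(\rho_{AB}\,\big\|\,\mathbb{1}_A\otimes\sigma_B\big)\Big], $$ so an analytical bound and gradient for the divergence directly enable numerical optimization of the R\'enyi key-rate bound.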
Non-unitary protocols already underpin many hybrid quantum computing applications, especially in the noisy intermediate-scale quantum (NISQ) era, where quantum errors typically affect the unitary evolution. However, while the framework of Parameterized Quantum Circuits is widely developed, especially for applications where the parameters are optimized towards a set goal, we find that interesting opportunities remain in defining a unified framework for non-unitary protocols as well, in the form of Parameterized Quantum Channels as a computing resource. We first discuss general parameterization strategies for controlling quantum channels and their practical realizations. We then describe a simple example of an application in the context of error mitigation, where the control parameters for the quantum channels are optimized in the presence of noise in order to maximize channel fidelity with respect to a given target channel.
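A minimal example of a parameterized quantum channel in the Kraus picture: textbook amplitude damping with control parameter gamma (a generic channel, not the paper's specific ansatz):

    import numpy as np

    def amplitude_damping(rho, gamma):
        """Apply E_gamma(rho) = sum_k K_k rho K_k^dag (amplitude damping)."""
        K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - gamma)]])
        K1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])
        return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

    rho = np.array([[0.5, 0.5], [0.5, 0.5]])   # |+><+|
    for gamma in (0.0, 0.3, 1.0):
        print(gamma, np.round(amplitude_damping(rho, gamma), 3).tolist())

Optimizing such control parameters against a target channel, in the presence of noise, is the error-mitigation use case described above.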
This paper examines the relationship between Heisenberg's Uncertainty Principle and the nodal structure of wave functions in a variety of quantum systems, including the quantum harmonic oscillator, the particle in a 1D box, and the particle on a ring. We argue that the uncertainty in conjugate variables, such as position and momentum, is generally a function of the number of nodes. As our investigation reveals, the uncertainty product is shaped by the nodal structure of the wave function, with the precise form of this dependence determined by the system at hand.
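As a worked instance of this node dependence: for the particle in a 1D box of width $L$, the $n$-th stationary state has $n-1$ nodes, and a standard calculation gives $$ \Delta x\,\Delta p = \frac{\hbar}{2}\sqrt{\frac{n^2\pi^2}{3} - 2}, $$ which grows monotonically with the node count, starting from $\approx 0.568\,\hbar$ for the nodeless ground state.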
In conventional circuit-based quantum computing architectures, the standard gate set includes arbitrary single-qubit rotations and two-qubit entangling gates. However, this choice is not always aligned with the native operations available in certain hardware, where the natural entangling gates are not restricted to two qubits but can act on multiple, or even all, qubits simultaneously. Yet leveraging the capabilities of global quantum operations for algorithm implementations is highly challenging, as directly compiling local gate sequences into global gates usually gives rise to a quantum circuit that is more complex than the original one. Here, we circumvent this difficulty using a variational approach. Specifically, we propose parameterized circuit ans\"atze composed of a finite number of global gates and layers of single-qubit unitaries, which can be implemented in constant time. Furthermore, by construction, these ans\"atze are equivalent to linear-depth local-gate quantum circuits and are highly expressible. We demonstrate the capabilities of this approach by applying it to the problem of ground state preparation for the Heisenberg model and the toric code Hamiltonian, highlighting its potential to offer a practical quantum advantage.
We present a constructive method utilizing the Cartan decomposition to characterize topological properties and their connection to two-qubit quantum entanglement, in the framework of the tenfold classification and Wootters' concurrence. This relationship is comprehensively established for the 2-qubit system through the antiunitary time reversal (TR) operator. The TR operator is shown to identify concurrence and differentiate between entangling and non-entangling operators. This distinction is of a topological nature, as the inclusion or exclusion of certain operators alters topological characteristics. Proofs are presented which demonstrate that the 2-qubit system can be described in the framework of the tenfold classification, unveiling aspects of the connection between entanglement and a geometrical phase. Topological features are obtained systematically by a mapping to a quantum graph, allowing for a direct computation of topological integers and of the 2-qubit equivalent of topological zero-modes. An additional perspective is provided regarding the extension of this new approach to condensed matter systems, illustrated through examples involving indistinguishable fermions and arrays of quantum dots.
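For reference, Wootters' concurrence is $C(\psi) = |\langle\psi|\sigma_y\otimes\sigma_y|\psi^*\rangle|$ for pure two-qubit states and, for mixed states, $$ C(\rho) = \max\{0,\ \lambda_1-\lambda_2-\lambda_3-\lambda_4\}, $$ where the $\lambda_i$ are the decreasing square roots of the eigenvalues of $\rho\,(\sigma_y\otimes\sigma_y)\,\rho^*\,(\sigma_y\otimes\sigma_y)$; the complex conjugation here is exactly where an antiunitary time-reversal-type operator enters.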
Quantum computers create new security risks for today's encryption systems. This paper presents an improved version of the Advanced Encryption Standard (AES) that uses quantum technology to strengthen protection. Our approach offers two modes: a fully quantum-based method for maximum security and a hybrid version that works with existing infrastructure. The system generates encryption keys using quantum randomness instead of predictable computer algorithms, making keys virtually impossible to guess. It regularly refreshes these keys automatically to block long-term attacks, even as technology advances. Testing confirms the system works seamlessly with current security standards, maintaining fast performance for high-volume data transfers. The upgraded AES keeps its original security benefits while adding three key defenses: quantum-powered key creation, adjustable security settings for different threats, and safeguards against attacks that exploit device vulnerabilities. Organizations can implement this solution in stages--starting with hybrid mode for sensitive data while keeping older systems operational. This phased approach allows businesses to protect financial transactions, medical records, and communication networks today while preparing for more powerful quantum computers in the future. The design prioritizes easy adoption, requiring no costly replacements of existing hardware or software in most cases.
The renowned Local Friendliness no-go theorem demonstrates the incompatibility of quantum theory with the combined assumptions of Absoluteness of Observed Events -- the idea that observed outcomes are singular and objective -- and Local Agency -- the requirement that the only events correlated with a setting choice are in its future light cone. This result is stronger than Bell's theorem because the assumptions of Local Friendliness are weaker than those of Bell's theorem: Local Agency is less restrictive than local causality, and Absoluteness of Observed Events is encompassed within the notion of realism assumed in Bell's theorem. Drawing inspiration from the correspondence between nonlocality proofs in Bell scenarios and generalized contextuality proofs in prepare-and-measure scenarios, we present the Operational Friendliness no-go theorem. This theorem demonstrates the inconsistency of quantum theory with the joint assumptions of Absoluteness of Observed Events and Operational Agency, the latter being a weaker version of noncontextuality, in the same way that Local Agency is a weaker version of local causality. Our result generalizes the Local Friendliness no-go theorem and is stronger than no-go theorems based on generalized noncontextuality.
With the stability of integrated photonics at network nodes and the advantages of photons as flying qubits, photonic quantum information processing (PQIP) makes quantum networks increasingly scalable. However, scaling up PQIP requires the preparation of many identical single photons which is limited by the spectral distinguishability of integrated single-photon sources due to variations in fabrication or local environment. To address this, we introduce frequency auto-homogenization via group-velocity-matched downconversion to remove spectral distinguishability in varying quantum emitters. We present our theory using $\chi^{(2)}$ quantum frequency conversion and show proof-of-principle data in a free-space optical setup.
An ensemble of negatively charged nitrogen-vacancy centers in diamond can act as a precise quantum sensor even under ambient conditions. In particular, to optimize their sensitivity, it is crucial to increase the number of spins sampled and maximize their coupling to the detection system without degrading their spin properties. In this paper, we demonstrate enhanced quantum magnetometry via a high-quality buried laser-written waveguide in diamond with a 4.5 ppm density of nitrogen-vacancy centers. Using time-domain optically detected magnetic resonance spectroscopy, we show that the waveguide-coupled nitrogen-vacancy centers exhibit spin coherence properties comparable to those of nitrogen-vacancy centers in pristine diamond. Waveguide-enhanced magnetic field sensing is demonstrated in a fiber-coupled integrated photonic chip, where probing an increased volume of high-density spins results in a DC magnetic-field sensitivity of 63 pT Hz$^{-1/2}$ and an AC magnetic-field sensitivity of 20 pT Hz$^{-1/2}$. This on-chip sensor realizes at least an order of magnitude improvement in sensitivity compared to a conventional confocal detection setup, paving the way for microscale sensing with nitrogen-vacancy ensembles.
We consider the quantum analog of the generalized Zernike systems given by the Hamiltonian: $$ \hat{\mathcal{H}} _N =\hat{p}_1^2+\hat{p}_2^2+\sum_{k=1}^N \gamma_k (\hat{q}_1 \hat{p}_1+\hat{q}_2 \hat{p}_2)^k , $$ with canonical operators $\hat{q}_i,\, \hat{p}_i$ and arbitrary coefficients $\gamma_k$. This two-dimensional quantum model, besides the conservation of the angular momentum, exhibits higher-order integrals of motion within the enveloping algebra of the Heisenberg algebra $\mathfrak h_2$. By constructing suitable combinations of these integrals, we uncover a polynomial Higgs-type symmetry algebra that, through an appropriate change of basis, gives rise to a deformed oscillator algebra. The associated structure function $\Phi$ is shown to factorize into two commuting components $\Phi=\Phi_1 \Phi_2$. This framework enables an algebraic determination of the possible energy spectra of the model for the cases $N=2,3,4$; the case $N=1$ is canonically equivalent to the harmonic oscillator. Based on these findings, we propose two conjectures generalizing the results to all $N\ge 2$ and arbitrary coefficients $\gamma_k$, which we prove explicitly for $N=5$. In addition, all of these results can be interpreted as superintegrable perturbations of the original quantum Zernike system, corresponding to $N=2$, which are also analyzed and applied to the isotropic oscillator on spherical, hyperbolic, and Euclidean spaces.
Quantum computing has long been an experimental technology with the potential to simulate, at scale, phenomena which on classical devices would be too expensive to simulate at any but the smallest scales. Over the last several years, however, it has entered the NISQ era, where the number of qubits is sufficient for quantum advantage but substantial noise on hardware stands in the way of this achievement. This thesis details NISQ device-centered improvements to techniques for quantum simulation of the out-of-equilibrium real-time dynamics of lattice quantum chromodynamics (LQCD) and of dense 3-flavor neutrino systems on digital quantum devices. The first project concerning LQCD is a comparison of methods for implementing the variational quantum eigensolver (VQE) that initializes the ground state of an SU(3) plaquette chain. The thesis then pivots to a 1+1D lattice of quarks interacting with an SU(3) gauge field. A VQE-based state preparation for the vacua and a Trotterized time-evolution circuit are designed and applied to the problems of simulating beta and neutrinoless double beta decay. Finally, these circuits are adapted, with minimal overhead, to a version usable on quantum devices with nearest-neighbor connectivity, with an eye towards utilizing the higher qubit count of such devices for hadron dynamics and scattering. The thesis covers two projects that concern dense 3-flavor neutrino systems. The first details the design and testing of Trotterized time-evolution circuits on state-of-the-art quantum devices. The second, motivated by the Gottesman-Knill theorem's result that deviation from stabilizer states ("magic") is necessary for a problem to exhibit quantum advantage, details results, with implications for the Standard Model in general, showing that the 3-flavor ultradense neutrino systems with the highest, most persistent magic are those that start with neutrinos in all 3 flavors.
We discuss the emulation of non-Hermitian dynamics during a given time window by a low-dimensional quantum system coupled to a finite set of equidistant discrete states acting as an effective continuum. We first emulate the decay of an unstable state, and map out the quasi-continuum parameters that enable a precise approximation of the non-Hermitian dynamics. The limitations of this model, including in particular short- and long-time deviations, are extensively discussed. We then consider a driven two-dimensional system, and establish criteria for the non-Hermitian dynamics emulation with a finite quasi-continuum. We quantitatively analyze the signatures of finiteness of the effective continuum, addressing the possible emergence of non-Markovian behavior during the time interval considered. Finally, we investigate the emulation of dissipative dynamics using a finite quasi-continuum with a tailored density of states. We show, for the example of a two-level system, that such a continuum can reproduce non-Hermitian dynamics more efficiently than the usual equidistant quasi-continuum model.
Quantum error correction is vital for fault-tolerant quantum computation, with deep connections to entanglement, magic, and uncertainty relations. Entanglement, for instance, has driven key advances like surface codes and has deepened our understanding of quantum gravity through holographic quantum codes. While these connections are well-explored, the role of contextuality, a fundamental non-classical feature of quantum theory, remains unexplored. Notably, Bell nonlocality is a special case of contextuality, and prior works have established contextuality as a key resource for quantum computational advantage. In this work, we establish the first direct link between contextuality and quantum error-correcting codes. Using a sheaf-theoretic framework, we define contextuality for such codes and prove key results on its manifestation. Specifically, we prove the equivalence of contextuality definitions from Abramsky--Brandenburger's sheaf-theoretic framework and Kirby--Love's tree-based approach for the partial closure of Pauli measurement sets. We present several findings, including the proof of a conjecture by Kim and Abramsky [1]. We further show that subsystem stabilizer codes with two or more gauge qubits are strongly contextual, while others are noncontextual. Our findings reveal a direct connection between contextuality and quantum error correction, offering new insights into the non-classical resources enabling fault-tolerant quantum computation.
We derive a novel chain rule for a family of channel conditional entropies, covering von Neumann and sandwiched R\'{e}nyi entropies. In the process, we show that these channel conditional entropies are equal to their regularized version, and more generally, additive across tensor products of channels. For the purposes of cryptography, applying our chain rule to sequences of channels yields a new variant of R\'{e}nyi entropy accumulation, in which we can impose some specific forms of marginal-state constraint on the input states to each individual channel. This generalizes a recently introduced security proof technique that was developed to analyze prepare-and-measure QKD with no limitations on the repetition rate. In particular, our generalization yields ``fully adaptive'' protocols that can in principle update the entropy estimation procedure during the protocol itself, similar to the quantum probability estimation framework.
Rapid development of quantum computing technology has led to a wide variety of sophisticated quantum devices. Benchmarking these systems becomes crucial for understanding their capabilities and paving the way for future advancements. The Quantum Volume (QV) test is one of the most widely used benchmarks for evaluating quantum computer performance due to its architecture independence. However, as the number of qubits in a quantum device grows, the test faces a significant limitation: classical simulation of the quantum circuit, which is indispensable for evaluating QV, becomes computationally impractical. In this work, we propose modifications of the QV test that allow for direct determination of the most probable outcomes (heavy output subspace) of a quantum circuit, eliminating the need for expensive classical simulations. This approach resolves the scalability problem of the Quantum Volume test beyond classical computational capabilities.
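For context, the heavy-output criterion at the core of the QV test is sketched below (standard definition; computing the ideal output distribution is exactly the classical-simulation step that the proposed modifications eliminate; the toy numbers are hypothetical):

    import numpy as np

    def heavy_output_probability(ideal_probs, counts):
        """QV heavy-output test: heavy outputs are bitstrings whose ideal
        probability exceeds the median; the criterion (per circuit) is a
        measured heavy-output fraction above 2/3."""
        median = np.median(list(ideal_probs.values()))
        heavy = {x for x, p in ideal_probs.items() if p > median}
        shots = sum(counts.values())
        return sum(c for x, c in counts.items() if x in heavy) / shots

    ideal = {"00": 0.45, "01": 0.30, "10": 0.15, "11": 0.10}   # toy example
    measured = {"00": 430, "01": 310, "10": 160, "11": 100}
    h = heavy_output_probability(ideal, measured)
    print(h, "pass" if h > 2 / 3 else "fail")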
Every state on the algebra $M_n$ of complex $n \times n$ matrices restricts to a state on any matrix system. Whereas the restriction to a matrix system is generally not open, we prove that the restriction to every *-subalgebra of $M_n$ is open. This simplifies topology problems in matrix theory and quantum information theory.
Generating and distributing remote entangled pairs (EPs) is the most important responsibility of quantum networks, because entanglement serves as the fundamental resource for key quantum network applications. A key performance metric for quantum networks is the time-to-serve (TTS) for users' EP requests, which is the time to distribute EPs between the requesting users. Reducing the TTS is critically important given the limited qubit coherence time. In this paper, we study the Adaptive Continuous entanglement generation Protocol (ACP), which enables quantum network nodes to continuously generate EPs with their neighbors while adaptively selecting the neighbors to reduce the TTS. Meanwhile, entanglement purification is used to mitigate the idling decoherence of the EPs generated by the ACP prior to the arrival of user requests. We extend the SeQUeNCe simulator to fully support the implementation of the ACP. Then, through extensive simulations, we evaluate the ACP at different network scales, demonstrating significant improvements in both the TTS (up to a 94% decrease) and the fidelity (up to a 0.05 increase) of distributed entanglement.
Conformal field theory underlies critical ground states of quantum many-body systems. While conventional conformal field theory is associated with positive central charges, nonunitary conformal field theory with complex-valued central charges has recently been recognized as physically relevant. Here, we demonstrate that complex-valued entanglement entropy characterizes complex conformal field theory and critical phenomena of open quantum many-body systems. This is based on non-Hermitian reduced density matrices constructed from the combination of right and left ground states. Applying the density matrix renormalization group to non-Hermitian systems, we numerically calculate the complex entanglement entropy of the non-Hermitian five-state Potts model, thereby confirming the scaling behavior predicted by complex conformal field theory.
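The scaling in question is the Calabrese-Cardy form continued to complex central charge $c$: for a subsystem of length $\ell$ in a periodic chain of length $L$, $$ S(\ell) = \frac{c}{3}\ln\left[\frac{L}{\pi}\sin\left(\frac{\pi\ell}{L}\right)\right] + \mathrm{const}, $$ so a complex $c$ renders the entanglement entropy itself complex-valued, as computed here from the non-Hermitian reduced density matrices.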
We present an automated protocol for tuning single-electron transistors (SETs) or single-hole transistors (SHTs) to operate as precise charge sensors. Using minimal device-specific information, the protocol performs measurements to enable the selection and ranking of high-sensitivity operating points. It also characterizes key device parameters, such as dot radius and gate lever arms, through acquisition and analysis of Coulomb diamonds. Demonstration on an accumulation-mode silicon SET at 1.5 K highlights its potential in the 1-2 K range for "hot" spin qubits in scalable quantum computing systems. This approach significantly reduces the tuning time compared to manual methods, with future improvements aimed at faster runtimes and dynamic feedback for robustness to charge fluctuations.