In the Letter [Physical Review Letters 120, 117702 (2018)], Ferraro \emph{et al.} claimed a quantum advantage in the Dicke quantum battery (QB), whereby $N$ two-level systems (TLS) are coupled to a common photonic mode of a cavity. They argued that compared with the so-called Rabi QB, the Dicke QB exhibits a $\sqrt{N}$ quantum enhancement in the charging power because of the entanglement created by the common photonic mode. In this Comment, however, we demonstrate that the apparent $\sqrt{N}$ enhancement actually comes from the stronger cavity electric (or magnetic) field under the setup discussed in [Physical Review Letters 120, 117702 (2018)]. This is a trivial classical effect, and there is no true "quantum advantage" in the charging power of the Dicke QB. While somewhat similar questions regarding the origin of the claimed "quantum advantage" have been raised before, here we would like to make it clear that the $\sqrt{N}$ enhancement in [Physical Review Letters 120, 117702 (2018)] is purely a classical effect, not attributable to quantum entanglement or collective phenomena. Therefore, we believe that using the term "quantum battery" in this context may be inappropriate and misleading.

In this work, we present a novel representation of matrix product states (MPS) within the framework of quasi-local algebras. By introducing an enhanced compatibility condition, we enable the extension of finite MPS to an infinite-volume state, providing new insights into complex, high-dimensional quantum systems. As an illustrative example, we apply this method to the Greenberger-Horne-Zeilinger (GHZ) state. This approach offers significant potential for advancing theoretical frameworks and practical methodologies in the field of quantum information.
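As a self-contained illustration of the GHZ example (a sketch of the standard textbook construction, not of the quasi-local-algebra formalism itself), the finite-volume GHZ state admits an exact MPS representation with bond dimension 2:

```python
import numpy as np

# Exact bond-dimension-2 MPS tensors for the N-qubit GHZ state:
# A[s] is diagonal, routing the physical index s through the virtual bond.
A = np.zeros((2, 2, 2))  # (physical, left bond, right bond)
A[0] = np.diag([1.0, 0.0])
A[1] = np.diag([0.0, 1.0])

def ghz_from_mps(n):
    """Contract the periodic-boundary MPS into a dense state vector."""
    psi = np.zeros(2 ** n)
    for idx in range(2 ** n):
        bits = [(idx >> (n - 1 - k)) & 1 for k in range(n)]
        M = np.eye(2)
        for s in bits:
            M = M @ A[s]
        # any mixed bit string produces the zero matrix, so only
        # |00...0> and |11...1> survive the trace
        psi[idx] = np.trace(M)
    return psi / np.linalg.norm(psi)

psi = ghz_from_mps(4)
target = np.zeros(16); target[0] = target[-1] = 1 / np.sqrt(2)
print(np.allclose(psi, target))  # True
```

Extending this finite construction consistently to infinite volume is precisely where the compatibility condition discussed above enters.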

While many statistical properties of deep random quantum circuits can be deduced, often rigorously and other times heuristically, by an approximation to global Haar-random unitaries, the statistics of constant-depth random quantum circuits are generally less well understood due to a lack of amenable tools and techniques. We circumvent this barrier by considering a related constant-time Brownian circuit model which shares many similarities with constant-depth random quantum circuits but crucially allows for direct calculations of higher-order moments of its output distribution. Using mean-field (large-$n$) techniques, we fully characterize the output distributions of Brownian circuits at shallow depths and show that they follow a Porter-Thomas distribution, just as in the case of deep circuits, but with a truncated Hilbert space. The access to higher-order moments allows for studying the expected and typical linear cross-entropy benchmark (XEB) scores achieved by an ideal quantum computer versus state-of-the-art classical spoofers for shallow Brownian circuits. We discover that for these circuits, while the quantum computer typically scores within a constant factor of the expected value, the classical spoofer suffers from an exponentially larger variance. Numerical evidence suggests that the same phenomenon also occurs in constant-depth discrete random quantum circuits, such as those defined over the all-to-all architecture. We conjecture that the same holds for random brickwork circuits in sufficiently high spatial dimension.
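For intuition on the XEB score referenced above, here is a minimal numpy sketch (our own illustration, not the authors' model): a Haar-random state stands in for a deep circuit's Porter-Thomas output distribution, and the linear XEB $F = 2^n\,\mathbb{E}_x[p(x)] - 1$ is evaluated for samples drawn from the true distribution versus a uniform classical spoofer.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
d = 2 ** n

# Haar-random pure state as a stand-in for a deep random circuit's output;
# its output probabilities p(x) are approximately Porter-Thomas distributed.
v = rng.normal(size=d) + 1j * rng.normal(size=d)
p = np.abs(v) ** 2
p /= p.sum()

def xeb(samples):
    """Linear cross-entropy benchmark: F = d * E_x[p(x)] - 1."""
    return d * p[samples].mean() - 1

ideal = rng.choice(d, size=20000, p=p)   # sampling from the true distribution
spoof = rng.integers(d, size=20000)      # uniform classical spoofer
# the ideal sampler scores near 1, the uniform spoofer near 0
print(xeb(ideal), xeb(spoof))
```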

Creating and manipulating anyons and symmetry defects in topological phases, especially those with a non-Abelian character, constitutes a primitive for topological quantum computation. We provide a physical protocol for implementing the ribbon operators of non-Abelian anyons and symmetry defects. We utilize dualities, in particular the Kramers-Wannier or gauging map, which have previously been used to construct topologically ordered ground states by relating them to simpler states. In this work, ribbon operators are implemented by applying a gauging procedure to a lower-dimensional region of such states. This protocol uses sequential unitary circuits or, in certain cases, constant-depth adaptive circuits. We showcase this for anyons and defects in the $\mathbb{Z}_3$ toric code and $S_3$ quantum double. The general applicability of our method is demonstrated by deriving unitary expressions for ribbon operators of various (twisted) quantum doubles.

The development of programmable quantum devices can be measured by the complexity of many-body states that they are able to prepare. Among the most significant are topologically ordered states of matter, which enable robust quantum information storage and processing. While topological orders are more readily accessible with qudits, experimental realisations have thus far been limited to lattice models of qubits. Here, we prepare a ground state of the Z3 toric code on 24 qutrits in a trapped ion quantum processor with a fidelity per qutrit exceeding 96.5(3)%. We manipulate two types of defects which go beyond the conventional qubit toric code: a parafermion, and its bound state, which is related to charge conjugation symmetry. We further demonstrate defect fusion and the transfer of entanglement between anyons and defects, which we use to control topological qutrits. Our work opens up the space of long-range entangled states with qudit degrees of freedom for use in quantum simulation and universal error-correcting codes.

Dynamical quantum systems that are both driven by unitary evolution and monitored through measurements have proved to be fertile ground for exploring new dynamical phases of quantum matter. While the entanglement structure and symmetry properties of monitored systems have been intensively studied, the role of topology in monitored dynamics is much less explored. In this work, we investigate novel topological phenomena in monitored dynamics through the lens of free-fermion systems. Free-fermion monitored dynamics were previously shown to be unified with the Anderson localization problem under the Altland-Zirnbauer symmetry classification. Guided by this unification, we identify topological area-law-entangled phases in the former setting through the topological classification of disordered insulators and superconductors in the latter. As examples, we focus on 1+1D free-fermion monitored dynamics in two symmetry classes, DIII and A. We construct quantum circuit models to study different topological area-law phases and their domain walls in the respective symmetry classes. We find that the domain wall between topologically distinct area-law phases hosts dynamical topological modes whose entanglement is protected from being quenched by the measurements in the monitored dynamics. We demonstrate how to manipulate these topological modes by programming the domain-wall dynamics. In particular, for topological modes in class DIII, which behave as unmeasured Majorana modes, we devise a protocol to braid them and study the entanglement generated in the braiding process.

Quantum theory allows for the superposition of causal orders between operations, i.e., for an indefinite causal order; an implication of the principle of quantum superposition. Since a higher theory might also admit this feature, an understanding of superposition and indefinite causal order in a generalised probabilistic framework is needed. We present a possible notion of superposition for such a framework and show that in maximal theories, respecting non-signalling relations, single system state-spaces do not admit superposition; however, composite systems do. Additionally, we show that superposition does not imply entanglement. Next, we provide a concrete example of a maximally Bell-nonlocal theory, which not only admits the presented notion of superposition, but also allows for post-quantum violations of theory-independent inequalities that certify indefinite causal order; even up to an algebraic bound. These findings might point towards potential connections between a theory's ability to admit indefinite causal order, Bell-nonlocal correlations and the structure of its state spaces.

We consider the quantum magic in systems of dense neutrinos undergoing coherent flavor transformations, relevant for supernovae and neutron-star binary mergers. Mapping the three-flavor-neutrino system to qutrits, the evolution of quantum magic is explored in the single-scattering-angle limit for a selection of initial tensor-product pure states for $N_\nu \le 8$ neutrinos. For $|\nu_e\rangle^{\otimes N_\nu}$ initial states, the magic, as measured by the $\alpha=2$ stabilizer R\'enyi entropy $M_2$, is found to decrease with radial distance from the neutrino sphere, reaching a value that lies below the maximum for tensor-product qutrit states. Further, the asymptotic magic per neutrino, $M_2/N_\nu$, decreases with increasing $N_\nu$. In contrast, the magic evolving from states containing all three flavors reaches values only possible with entanglement, with the asymptotic $M_2/N_\nu$ increasing with $N_\nu$. These results highlight the connection between the complexity of simulating quantum physical systems and the parameters of the Standard Model.
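For concreteness, the stabilizer Renyi entropy used here is $M_2 = -\log_2\big(d^{-1}\sum_P \langle\psi|P|\psi\rangle^4\big)$, with the sum running over Pauli strings; it vanishes exactly on stabilizer states. A minimal sketch for qubits (the qutrit version used above replaces Paulis by Heisenberg-Weyl operators; we use the qubit analogue purely for illustration):

```python
import numpy as np
from itertools import product
from functools import reduce

I = np.eye(2); X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0, -1.0])

def m2(psi):
    """alpha=2 stabilizer Renyi entropy: M_2 = -log2( d^{-1} sum_P <P>^4 )."""
    n = int(np.log2(psi.size)); d = 2 ** n
    total = 0.0
    for paulis in product([I, X, Y, Z], repeat=n):
        P = reduce(np.kron, paulis)
        total += np.real(psi.conj() @ P @ psi) ** 4
    return -np.log2(total / d)

zero = np.array([1.0, 0.0])                              # stabilizer state
t = np.array([1, np.exp(1j * np.pi / 4)]) / np.sqrt(2)   # T state, has magic
print(np.isclose(m2(zero), 0.0), round(m2(t), 3))  # True 0.415
```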

The rapid advancement of quantum computers makes it particularly important to develop methods for certifying their correct functioning. In a single-device setup, we propose a simple protocol called quantum system quizzing. This protocol achieves self-testing of an entire quantum model given by state preparation, gates, and measurement in a black-box scenario under the dimension assumption only. Due to the self-testing approach, this certification method is inherently free of state preparation and measurement errors. One of the major challenges in the single-device setup is recovering the tensor-product structure of a multi-qubit system in a black-box scenario. Our work is the first to solve this challenge without relying on computational assumptions. We achieve this by identifying deterministic input-output correlations of the target model that can only be exhibited by systems in which individual qubits can be addressed. These input-output relations are tested on a quantum computer in each protocol round. We identify sets of instructions which self-test multi-qubit universal gate sets for arbitrary numbers of qubits, delivering the first sound certification tool for memory-bounded quantum computers.

This paper began as a set of notes introducing quantum physicists of the QBist persuasion to enactive theory. Unlike mainstream cognitive science, which views cognition as computations on internal representations of the external world (and thus the mind as in the head), the enactive approach sees cognition as adaptive, embodied action. Enaction can ground concepts of experience, agency, knowledge, and normativity - which play key roles in QBist quantum mechanics - in terms consistent with QBism's participatory approach. That's because QBism and enaction both reject an absolute, pregiven subject-object split: for QBism, quantum measurement is the enactment of the subject-object divide; for enaction, cognition is the enactment of that divide. Indeed, each appears to be half of the same story. QBism is a theory of how the world is created and recreated through interactions with an agent, while enaction is a theory of how agents are created and recreated through interactions with the world. Through conversations with QBists and enactivists, these notes evolved into a larger project aimed at unifying QBist physics with enactive cognitive science. Taken together, they offer the possibility of a unified metaphysics - one that brings subject and object, mind and world, back together again.

We put forth Oblivious State Preparation (OSP) as a cryptographic primitive that unifies techniques developed in the context of a quantum server interacting with a classical client. OSP allows a classical polynomial-time sender to input a choice of one out of two public observables, and a quantum polynomial-time receiver to recover an eigenstate of the corresponding observable -- while keeping the sender's choice hidden from any malicious receiver. We obtain the following results: - The existence of (plain) trapdoor claw-free functions implies OSP, and the existence of dual-mode trapdoor claw-free functions implies round-optimal (two-round) OSP. - OSP implies the existence of proofs of quantumness, test of a qubit, blind classical delegation of quantum computation, and classical verification of quantum computation. - Two-round OSP implies quantum money with classical communication, classically-verifiable position verification, and (additionally assuming classical FHE with log-depth decryption) quantum FHE. Several of these applications were previously only known via tailored LWE-based constructions, whereas our OSP-based constructions yield new results from a wider variety of assumptions, including hard problems on cryptographic group actions. Finally, towards understanding the minimal hardness assumptions required to realize OSP, we prove the following: - OSP implies oblivious transfer between one classical and one quantum party. - Two-round OSP implies public-key encryption with classical keys and ciphertexts. In particular, these results help to ''explain'' the use of public-key cryptography in the known approaches to establishing a ''classical leash'' on a quantum server. For example, combined with a result of Austrin et al. (CRYPTO 22), we conclude that perfectly-correct OSP cannot exist unconditionally in the (quantum) random oracle model.

The quantum stochastic drift protocol, also known as qDRIFT, has become a popular algorithm for implementing time-evolution of quantum systems using randomised compiling. In this work we develop qFLO, a higher order randomised algorithm for time-evolution. To estimate an observable expectation value at time $T$ to precision $\epsilon$, we show it is sufficient to use circuit depths of $O(T^2\log(1/\epsilon))$ -- an exponential improvement over standard qDRIFT requirements with respect to $\epsilon$. The protocol achieves this using $O(1/\epsilon^2)$ repeated runs of the standard qDRIFT protocol combined with classical post-processing in the form of Richardson extrapolation. Notably, it requires no ancillary qubits or additional control gates making it especially promising for near-term quantum devices. Furthermore, it is well-conditioned and inherits many desirable properties of randomly compiled simulation methods, including circuit depths that do not explicitly depend on the number of terms in the Hamiltonian.
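The classical post-processing step can be illustrated generically (a toy sketch of Richardson extrapolation on a deterministic bias, not the authors' precise estimator, and with the statistical averaging over qDRIFT runs elided): fitting estimates taken at several step sizes and reading off the $h \to 0$ intercept cancels the low-order bias terms.

```python
import numpy as np

def richardson(f, hs):
    """Extrapolate estimates f(h_i) to h -> 0 by polynomial fitting.

    Fits a polynomial in h through the sample points and returns its
    constant term (a standard Richardson extrapolation scheme).
    """
    coeffs = np.polyfit(hs, [f(h) for h in hs], deg=len(hs) - 1)
    return coeffs[-1]  # value of the fitted polynomial at h = 0

# Toy estimator: true value 1.0 plus O(h) + O(h^2) bias, mimicking the
# step-size dependence of a randomized product-formula simulation.
def estimate(h):
    return 1.0 + 0.3 * h + 0.2 * h ** 2

hs = [0.2, 0.1, 0.05]
naive = estimate(0.05)              # best single-step-size estimate
extrap = richardson(estimate, hs)   # extrapolated estimate
print(abs(extrap - 1.0) < abs(naive - 1.0))  # True
```

In the actual protocol each f(h) would itself be a Monte Carlo average over repeated qDRIFT runs, which is where the $O(1/\epsilon^2)$ sample cost enters.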

Trapped-ion systems exhibit non-classical features, such as squeezed states, that provide a quantum advantage in quantum sensing, quantum information processing, and quantum thermodynamics. We analyze the non-classical characteristics of a single ion trapped by a periodic potential field. In the regime of non-adiabatic manipulation of the potential field, the motion of the ion's center of mass can be described by a dimensionless non-adiabatic parameter $Q^{*}$. This parameter allows us to distinguish the classical and non-classical characteristics of the system. Using the equations of motion of observables in the Heisenberg picture, we analyze the unitary time-evolution operator and discuss the squeezing behavior in the state of motion of the ion. These results can serve as a basis for discussing squeezing as a resource for quantum thermodynamics in the non-adiabatic regime, within currently achievable experimental limitations.

We propose a novel technique for optimizing a modular fault-tolerant quantum computing architecture, taking into account any desired space-time trade-offs between the number of physical qubits and the fault-tolerant execution time of a quantum algorithm. We consider a concept architecture comprising a dedicated zone serving as a multi-level magic state factory and a core processor for efficient logical operations, forming a supply chain network for the production and consumption of magic states. Using a heuristic algorithm, we solve the multi-objective optimization problem of minimizing space and time subject to a user-defined error budget for the success of the computation, taking into account the performance of various fault-tolerant protocols such as quantum memory, state preparation, magic state distillation, code growth, and logical operations. As an application, we show that physical quantum resource estimation reduces to a simple model involving a small number of key parameters, namely, the circuit volume, the error prefactors ($\mu$) and error suppression rates ($\Lambda$) of the fault-tolerant protocols, and an allowed slowdown factor ($\beta$). We show that, in the proposed architecture, $10^5$--$10^8$ physical qubits are required for quantum algorithms with $T$-counts in the range $10^6$--$10^{15}$ and logical qubit counts in the range $10^2$--$10^4$, when run on quantum computers with quantum memory $\Lambda$ in the range 3--10, for all slowdown factors $\beta \geq 0.2$.
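To make the key-parameter picture concrete, here is a deliberately oversimplified sketch (our own toy model; the workload, the values of $\mu$ and $\Lambda$, and the $2d^2$-qubits-per-patch count are illustrative assumptions, and the paper's actual model differs): take the per-operation logical error rate to be $p_L = \mu\,\Lambda^{-(d+1)/2}$, a standard surface-code-style scaling, and grow the code distance until the total error fits the budget.

```python
def required_distance(volume, mu, Lam, budget):
    """Smallest odd distance d with volume * mu * Lam**(-(d+1)/2) <= budget.

    Toy model: per-operation logical error p_L = mu * Lam**(-(d+1)/2),
    suppressed exponentially in d with suppression rate Lam.
    """
    d = 3
    while volume * mu * Lam ** (-(d + 1) / 2) > budget:
        d += 2
    return d

# Hypothetical workload: 10^9 logical operations, 100 logical qubits,
# error prefactor mu = 0.2, suppression rate Lam = 10, 1% error budget.
d = required_distance(volume=1e9, mu=0.2, Lam=10, budget=0.01)
phys = 100 * 2 * d ** 2   # ~2d^2 physical qubits per surface-code patch
print(d, phys)  # 21 88200
```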

Determining the quantum capacity of a noisy quantum channel is an important problem in quantum communication theory. In this work, we consider the Gaussian random displacement channel $N_{\sigma}$, a class of bosonic Gaussian channels relevant to various bosonic quantum information processing systems. In particular, we attempt to make progress on the problem of determining the quantum capacity of a Gaussian random displacement channel by analyzing the error-correction performance of several families of multi-mode Gottesman-Kitaev-Preskill (GKP) codes. In doing so, we analyze the surface-square GKP codes using an efficient and exact maximum-likelihood decoder (MLD) up to a large code distance of $d=39$. We find that the error threshold of the surface-square GKP code is remarkably close to $\sigma=1/\sqrt{e}\simeq 0.6065$, at which the best-known lower bound on the quantum capacity of $N_{\sigma}$ vanishes. We also analyze the performance of color-hexagonal GKP codes up to a code distance of $d=13$ using a tensor-network decoder serving as an approximate MLD. By focusing on multi-mode GKP codes that encode just one logical qubit over multiple bosonic modes, we show that GKP codes can achieve non-zero quantum state transmission rates for a Gaussian random displacement channel $N_{\sigma}$ at larger values of $\sigma$ than previously demonstrated. Our work thus narrows the gap between quantum-communication-theoretic bounds and the performance of explicit bosonic quantum error-correcting codes with regard to the quantum capacity of a Gaussian random displacement channel.

We study the problem of finding a product state with optimal fidelity to an unknown $n$-qubit quantum state $\rho$, given copies of $\rho$. This is a basic instance of a fundamental question in quantum learning: is it possible to efficiently learn a simple approximation to an arbitrary state? We give an algorithm which finds a product state with fidelity $\varepsilon$-close to optimal, using $N = n^{\text{poly}(1/\varepsilon)}$ copies of $\rho$ and $\text{poly}(N)$ classical overhead. We further show that estimating the optimal fidelity is NP-hard for error $\varepsilon = 1/\text{poly}(n)$, showing that the error dependence cannot be significantly improved. For our algorithm, we build a carefully-defined cover over candidate product states, qubit by qubit, and then demonstrate that extending the cover can be reduced to approximate constrained polynomial optimization. For our proof of hardness, we give a formal reduction from polynomial optimization to finding the closest product state. Together, these results demonstrate a fundamental connection between these two seemingly unrelated questions. Building on our general approach, we also develop more efficient algorithms in three simpler settings: when the optimal fidelity exceeds $5/6$; when we restrict ourselves to a discrete class of product states; and when we are allowed to output a matrix product state.
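As a brute-force baseline for the problem just described (a toy sketch of ours, not the paper's algorithm), one can randomly search over product states for a tiny instance; for a two-qubit Bell state the optimal product-state fidelity equals the largest squared Schmidt coefficient, $1/2$.

```python
import numpy as np

rng = np.random.default_rng(1)

def best_product_fidelity(rho, trials=5000):
    """Random search over product states |a>|b> maximizing <ab|rho|ab>.

    For a pure two-qubit state the optimum equals the largest squared
    Schmidt coefficient, which provides a sanity check below.
    """
    best = 0.0
    for _ in range(trials):
        a = rng.normal(size=2) + 1j * rng.normal(size=2)
        b = rng.normal(size=2) + 1j * rng.normal(size=2)
        a /= np.linalg.norm(a); b /= np.linalg.norm(b)
        v = np.kron(a, b)
        best = max(best, np.real(v.conj() @ rho @ v))
    return best

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(bell, bell.conj())
best = best_product_fidelity(rho)
print(round(best, 2))  # ~0.5, the Bell state's largest Schmidt coefficient^2
```

The exponential cost of such naive search in $n$ is exactly what the cover-based algorithm above is designed to avoid.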

Preparing thermal (Gibbs) states is a common task in physics and computer science. Recent algorithms mimic cooling via system-bath coupling, where the cost is determined by mixing time, akin to classical Metropolis-like algorithms. However, few methods exist to demonstrate slow mixing in quantum systems, unlike the well-established classical tools for systems like the Ising model and constraint satisfaction problems. We present a quantum generalization of these tools through a generic bottleneck lemma that implies slow mixing in quantum systems. This lemma focuses on quantum measures of distance, analogous to the classical Hamming distance but rooted in uniquely quantum principles and quantified either through Bohr spectrum jumps or operator locality. Using our bottleneck lemma, we establish unconditional lower bounds on the mixing times of Gibbs samplers for several families of Hamiltonians at low temperatures. For classical Hamiltonians with mixing time lower bounds $T_\mathrm{mix} = 2^{\Omega(n^\alpha)}$, we prove that quantum Gibbs samplers also have $T_\mathrm{mix} = 2^{\Omega(n^\alpha)}$. This applies to models like random $K$-SAT instances and spin glasses. For stabilizer Hamiltonians, we provide a concise proof of exponential lower bounds $T_\mathrm{mix} = 2^{\Omega(n)}$ on mixing times of good $n$-qubit stabilizer codes at low constant temperature, improving upon previous bounds of $T_\mathrm{mix} = 2^{\Omega(\sqrt n)}$. Finally, for $H = H_0 + h\sum_i X_i$ with $H_0$ diagonal in $Z$ basis, we show that a linear free energy barrier in $H_0$ leads to $T_\mathrm{mix} = 2^{\Omega(n)}$ for local Gibbs samplers at low temperature and small $h$. Even with sublinear barriers, we use Poisson Feynman-Kac techniques to lift classical bottlenecks to quantum ones establishing an asymptotically tight lower bound $T_\mathrm{mix} = 2^{n^{1/2-o(1)}}$ for the 2D transverse field Ising model.

Characterizing the entanglement structure of ground states of local Hamiltonians is a fundamental problem in quantum information. In this work we study the computational complexity of this problem, given the Hamiltonian as input. Our main result is to show that it is cryptographically hard to determine whether the ground state of a geometrically local, polynomially gapped Hamiltonian on qudits ($d=O(1)$) has near-area-law vs near-volume-law entanglement. This improves on prior work of Bouland et al. (arXiv:2311.12017), which showed this for non-geometrically local Hamiltonians. In particular, we show this problem is roughly factoring-hard in 1D and LWE-hard in 2D. Our proof works by constructing a novel form of public-key pseudo-entanglement which is highly space-efficient, and combining this with a modification of Gottesman and Irani's quantum-Turing-machine-to-Hamiltonian construction. Our work suggests that the problem of learning so-called "gapless" quantum phases of matter might be intractable.

Preparing encoded logical states is the first step in a fault-tolerant quantum computation. Standard approaches based on concatenation or repeated measurement incur a significant time overhead. The Raussendorf-Bravyi-Harrington cluster state offers an alternative: a single-shot preparation of encoded states of the surface code, by means of a constant depth quantum circuit, followed by a single round of measurement and classical feedforward. In this work we generalize this approach and prove that single-shot logical state preparation can be achieved for arbitrary quantum LDPC codes. Our proof relies on a minimum-weight decoder and is based on a generalization of Gottesman's clustering-of-errors argument. As an application, we also prove single-shot preparation of the encoded GHZ state in arbitrary quantum LDPC codes. This shows that adaptive noisy constant depth quantum circuits are capable of generating generic robust long-range entanglement.

Fault-tolerant on-chip photonic quantum computation would be greatly aided by (a) deterministic generation of the needed thousands to millions of photonic qubits from (b) quantum emitters in designed, spatially ordered arrays that enable networks for implementing many-qubit logic circuits. Scaling up photonic quantum information processing systems has, however, been hindered by the lack of such quantum emitters until the demonstration of the platform of mesa-top single quantum dots (MTSQDs) -- single QDs of controlled shape, size, and volume -- located in designed regular arrays. Here we demonstrate two-qubit CNOT gate operation -- a universal gate necessary to enable quantum circuits of arbitrary complexity -- in the polarization basis using photons emitted from individual MTSQDs. A Bell-state fidelity of 0.825$\pm$0.010 is achieved with a two-photon interference (TPI) visibility of 0.947$\pm$0.0015 at 4 K without Purcell enhancement. These results make a strong case for developing MTSQD arrays for utility-scale optical quantum information processing platforms.

The ubiquitous noise in quantum systems hinders the advancement of quantum information processing and has driven the emergence of various hardware-efficient quantum error correction protocols. Among them, qubits with structured noise, especially with biased noise, are one of the most promising platforms for achieving fault tolerance, owing to the high error thresholds of quantum error correction codes tailored for them. Nevertheless, their quantum operations are challenging, and the demonstration of their performance beyond the fault-tolerant threshold remains incomplete. Here, we leverage Schr\"odinger cat states in a scalable planar superconducting nonlinear oscillator to thoroughly characterize high-fidelity single-qubit quantum operations with systematic quantum tomography and benchmarking tools, demonstrating state-of-the-art performance of operations crossing the fault-tolerant threshold of the XZZX surface code. These results mark a milestone in the exploration of quantum systems with structured error channels. Notably, our framework is extensible to other types of structured-noise systems, paving the way for systematic characterization and validation of novel quantum platforms with structured noise.

The process of reconstructing quantum states from experimental measurements, accomplished through quantum state tomography (QST), plays a crucial role in verifying and benchmarking quantum devices. A key challenge of QST is to determine how the accuracy of the reconstruction depends on the number of state copies used in the measurements. When multiple measurement settings are used, the total number of state copies is the number of measurement settings multiplied by the number of repeated measurements per setting. Due to statistical noise intrinsic to quantum measurements, a large number of repeated measurements is often used in practice. However, recent studies have shown that even with single-sample measurements--where only one measurement sample is obtained for each measurement setting--high-accuracy QST can still be achieved with a sufficiently large number of different measurement settings. In this paper, we establish a theoretical understanding of the trade-off between the number of measurement settings and the number of repeated measurements per setting in QST. Our focus is primarily on low-rank density matrix recovery using Pauli measurements. We delve into the global landscape underlying the low-rank QST problem and demonstrate that the joint consideration of measurement settings and repeated measurements ensures a bounded recovery error for all second-order critical points, to which optimization algorithms tend to converge. This finding suggests the advantage of minimizing the number of repeated measurements per setting when the total number of state copies is held fixed. Additionally, we prove that the Wirtinger gradient descent algorithm converges to the region of second-order critical points at a linear convergence rate. We also perform numerical experiments to support our theoretical findings.
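To illustrate the flavor of the factored approach on the smallest possible instance (a one-qubit toy sketch we wrote for illustration, not the paper's setting or algorithm), Wirtinger gradient descent on a rank-1 factor $u$ recovers $\rho = uu^\dagger$ from noiseless Pauli expectation values; for the single-qubit Pauli set the least-squares loss reduces to $2\|uu^\dagger - \rho\|_F^2$.

```python
import numpy as np

rng = np.random.default_rng(2)
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])
paulis = [I2, X, Y, Z]

# Ground truth: a random pure qubit state rho = u_true u_true^dagger
u_true = rng.normal(size=2) + 1j * rng.normal(size=2)
u_true /= np.linalg.norm(u_true)
y = [np.real(u_true.conj() @ P @ u_true) for P in paulis]  # Pauli expectations

# Factored least squares: minimize sum_P (<u|P|u> - y_P)^2 over the factor u,
# via Wirtinger gradient descent (gradient taken w.r.t. the conjugate variable).
u = rng.normal(size=2) + 1j * rng.normal(size=2)
u /= np.linalg.norm(u)
for _ in range(5000):
    grad = sum(2 * (np.real(u.conj() @ P @ u) - yP) * (P @ u)
               for P, yP in zip(paulis, y))
    u = u - 0.03 * grad

err = np.linalg.norm(np.outer(u, u.conj()) - np.outer(u_true, u_true.conj()))
print(err < 1e-4)
```

The global-phase freedom in $u$ is harmless here, since only $uu^\dagger$ is compared.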

Providing evidence that quantum computers can efficiently prepare low-energy or thermal states of physically relevant interacting quantum systems is a major challenge in quantum information science. A newly developed quantum Gibbs sampling algorithm by Chen, Kastoryano, and Gily\'en provides an efficient simulation of the detailed-balanced dissipative dynamics of non-commutative quantum systems. The running time of this algorithm depends on the mixing time of the corresponding quantum Markov chain, which has not been rigorously bounded except in the high-temperature regime. In this work, we establish a polylog(n) upper bound on its mixing time for various families of random n by n sparse Hamiltonians at any constant temperature. We further analyze how the choice of the jump operators for the algorithm and the spectral properties of these sparse Hamiltonians influence the mixing time. Our result places this method for Gibbs sampling on par with other efficient algorithms for preparing low-energy states of quantumly easy Hamiltonians.

Quantum low-density parity-check (qLDPC) codes are an important component in the quest for quantum fault tolerance. Dramatic recent progress on qLDPC codes has led to constructions which are asymptotically good, and which admit linear-time decoders to correct errors affecting a constant fraction of codeword qubits. These constructions, while theoretically explicit, rely on inner codes with strong properties only shown to exist by probabilistic arguments, resulting in lengths that are too large to be practically relevant. In practice, the surface/toric codes, which are the product of two repetition codes, are still often the qLDPC codes of choice. A previous construction based on the lifted product of an expander-based classical LDPC code with a repetition code (Panteleev & Kalachev, 2020) achieved a near-linear distance (of $\Omega(N/\log N)$, where $N$ is the number of codeword qubits) and avoids the need for such intractable inner codes. Our main result is an efficient decoding algorithm for these codes that corrects $\Theta(N/\log N)$ adversarial errors. En route, we give such an algorithm for the hypergraph product version of these codes, which have weaker $\Theta(\sqrt{N})$ distance (but are simpler). Our decoding algorithms leverage the fact that the codes we consider are quasi-cyclic, meaning that they respect a cyclic group symmetry. Since the repetition code is not based on expanders, previous approaches to decoding expander-based qLDPC codes, which typically worked by greedily flipping code bits to reduce some potential function, do not apply in our setting. Instead, we reduce our decoding problem (in a black-box manner) to that of decoding classical expander-based LDPC codes under noisy parity-check syndromes. For completeness, we also include a treatment of such classical noisy-syndrome decoding that is sufficient for our application to the quantum setting.

Quantum computing has emerged as a powerful tool for solving complex computational problems, but access to real quantum hardware remains limited due to high costs and increasing demand for efficient quantum simulations. While software simulators on CPUs/GPUs such as Qiskit, ProjectQ, and Qsun offer flexibility and support for a large number of qubits, they struggle with high power consumption and limited processing speed, especially as qubit counts scale. Accordingly, quantum emulators implemented on dedicated hardware, such as FPGAs and analog circuits, offer a promising path for addressing energy efficiency concerns. However, existing studies on hardware-based emulators still face challenges in terms of limited flexibility, lack of fidelity evaluation, and power consumption. To overcome these gaps, we propose FQsun, a quantum emulator that enhances performance by integrating four key innovations: efficient memory organization, a configurable Quantum Gate Unit (QGU), optimized scheduling, and multiple number precisions. Five FQsun versions with different number precisions, including 16-bit floating-point, 32-bit floating-point, 16-bit fixed-point, 24-bit fixed-point, and 32-bit fixed-point, are implemented on the Xilinx ZCU102 FPGA, utilizing between 9,226 and 18,093 LUTs, 1,440 and 7,031 FFs, 344 and 464 BRAMs, and 14 and 88 DSPs, and consuming a maximum power of 2.41 W. Experimental results demonstrate high accuracy in normalized gate speed, fidelity, and mean square error, particularly with the 32-bit fixed-point and floating-point versions, establishing FQsun's capability as a precise quantum emulator. Benchmarking on quantum algorithms such as the Quantum Fourier Transform, the Parameter-Shift Rule, and Random Quantum Circuits reveals that FQsun achieves a superior power-delay product, outperforming traditional software simulators on powerful CPUs by up to 9,870 times.

Quantum information allows us to build quantum money schemes, where a bank can issue banknotes in the form of authenticatable quantum states that cannot be cloned or counterfeited. As with paper banknotes, in existing quantum money schemes a banknote consists of an unclonable quantum state and a classical serial number signed by the bank. Thus, they lack one of the most fundamental properties cryptographers look for in a currency scheme: privacy. In this work, we first further develop the formal definitions of privacy for quantum money schemes. Then, we construct the first public-key quantum money schemes that satisfy these security notions. Namely,
- Assuming the existence of indistinguishability obfuscation (iO) and the hardness of Learning with Errors (LWE), we construct a public-key quantum money scheme with anonymity against users and traceability by authorities. Since it is a policy choice whether authorities should be able to track banknotes or not, we also construct an untraceable money scheme from the same cryptographic assumptions, where no one (not even the authorities) can track banknotes.
Further, we show that the no-cloning principle, a result of quantum mechanics, allows us to construct schemes, with security guarantees that are classically impossible, for a seemingly unrelated application: voting!
- Assuming iO and LWE, we construct a universally verifiable quantum voting scheme with classical votes.
Finally, as a technical tool, we introduce the notion of publicly rerandomizable encryption with strong correctness, where no adversary is able to produce a malicious ciphertext and malicious randomness such that the ciphertext decrypts to different values before and after rerandomization. We believe this might be of independent interest.
- Assuming LWE, we construct a (post-quantum) classical publicly rerandomizable encryption scheme with strong correctness.

Finding reliable approximations to the quantum many-body problem is one of the central challenges of modern physics. Elemental to this endeavor is the development of advanced numerical techniques pushing the limits of what is tractable. One such recently proposed numerical technique is the neural quantum state. This new type of wavefunction-based Ansatz utilizes the expressivity of neural networks to tackle fundamentally challenging problems, such as the Mott transition. In this paper we aim to gauge the universality of one representative of neural network Ans\"atze, the hidden-fermion Slater determinant approach. To this end, we study five different fermionic models, each displaying volume-law scaling of the entanglement entropy. For these, we correlate the effectiveness of the Ansatz with different complexity measures, each of which indicates a different kind of complexity in whose absence a conventional Ansatz becomes efficient. We provide evidence that whenever one of the measures indicates proximity to a parameter region in which a conventional approach would work reliably, the neural network approach also works reliably and efficiently. This highlights both the great potential of and the challenges for neural network approaches: finding suitable points in theory space around which to construct the Ansatz, so as to efficiently treat models unsuitable for their current designs.

We propose a Continuous-Time Quantum Walk (CTQW) model for one-dimensional Dirac dynamics simulation with higher-order approximation. Our model bridges CTQW with a discrete-time model called the Dirac Cellular Automaton (DCA) via the Quantum Fourier Transform (QFT). From our continuous-time model, we demonstrate how varying the time interval and the position-space size affects both the quantum entanglement between the internal space and the external (position) space of the quantum state and the relativistic effect called Zitterbewegung. We find that the time interval changes the transition range for each site, while the position-space size affects the value of the transition amplitude. This shows that the size of spacetime plays a crucial role in the quantum entanglement and relativistic phenomena observed on quantum computers. These results enhance the understanding of the interplay between internal and external spaces in Dirac dynamics through the insights of quantum information theory and enrich the application of quantum walk-based algorithms.
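
For readers unfamiliar with CTQW, here is a minimal, generic sketch (not the authors' Dirac model): the walk Hamiltonian is taken to be the adjacency matrix of an $N$-site cycle, and a walker initially localized at one site evolves under $U(t) = e^{-iHt}$.

```python
import numpy as np
from scipy.linalg import expm

# Continuous-time quantum walk on an N-site cycle:
# H is the adjacency matrix, and the state evolves as U(t) = exp(-iHt).
N, t = 8, 1.0
H = np.zeros((N, N))
for j in range(N):
    H[j, (j + 1) % N] = H[(j + 1) % N, j] = 1.0

psi0 = np.zeros(N, dtype=complex)
psi0[0] = 1.0                      # walker localized at site 0
psi_t = expm(-1j * t * H) @ psi0   # evolved state at time t

probs = np.abs(psi_t) ** 2
assert np.isclose(probs.sum(), 1.0)   # unitary evolution preserves the norm
print(np.round(probs, 3))
```

Varying `t` and `N` in such a sketch is the discrete analogue of the time-interval and position-space-size scans discussed in the abstract.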

The phenomenon whereby a quantum system can be exponentially accelerated toward its stationary state has been referred to as the Quantum Mpemba Effect (QMpE). This phenomenon has garnered significant attention due to its analogy with the classical Mpemba effect, in which hot water freezes faster than cold water. Although the QMpE has been characterized and experimentally verified in different scenarios, sufficient and necessary conditions for achieving it are still under investigation. In this paper we address a sufficient condition for the QMpE through a general approach to open quantum system dynamics. With the help of the Mpemba parameter, introduced in this work to quantify how strong the QMpE can be, we discuss how our conditions predict and explain the emergence of weak and strong QMpE in a robust way. As an application, by harnessing the intrinsic non-classical nature of squeezed thermal environments, we show how a strong QMpE can be effectively induced when our conditions are met. Due to the thermal nature of the environment considered in our model, our work demonstrates that a hot qubit freezes faster than a cold qubit only in the presence of squeezed reservoirs. Our results provide tools and new insights, opening a broad avenue for further investigation of this peculiar phenomenon at the most fundamental levels of the quantum realm.
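
The mechanism behind such effects can be illustrated with a toy example under plain amplitude damping (not the squeezed-reservoir model of this paper): populations relax at rate $\gamma$ but coherences only at $\gamma/2$, so a diagonal state that starts farther from the steady state can overtake a coherent state that starts closer.

```python
import numpy as np

# Qubit amplitude damping toward |0><0|: populations relax at rate g,
# coherences at g/2, so coherence is the slowest-decaying mode.
def damp(rho, g, t):
    p = 1 - np.exp(-g * t)
    K0 = np.array([[1, 0], [0, np.sqrt(1 - p)]], dtype=complex)
    K1 = np.array([[0, np.sqrt(p)], [0, 0]], dtype=complex)
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

def dist_to_ss(rho):
    # Trace distance to the steady state |0><0|.
    ss = np.diag([1.0 + 0j, 0.0])
    return 0.5 * np.abs(np.linalg.eigvalsh(rho - ss)).sum()

hot = np.diag([0.5 + 0j, 0.5])                            # diagonal, no coherence
cold = np.array([[0.9, 0.3], [0.3, 0.1]], dtype=complex)  # closer, but coherent
g, t = 1.0, 10.0
assert dist_to_ss(cold) < dist_to_ss(hot)                 # "cold" starts closer...
assert dist_to_ss(damp(hot, g, t)) < dist_to_ss(damp(cold, g, t))  # ...yet "hot" arrives first
```

The state with no weight on the slow (coherence) mode relaxes at the fast rate, which is precisely the kind of spectral condition sufficient conditions for the QMpE formalize.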

Secure multiparty computation enables collaborative computations across multiple users while preserving individual privacy, which has a wide range of applications in finance, machine learning and healthcare. Secure multiparty computation can be realized using oblivious transfer as a primitive function. In this paper, we present an experimental implementation of a quantum-secure quantum oblivious transfer (QOT) protocol using an adapted quantum key distribution system combined with a bit commitment scheme, surpassing previous approaches that are only secure in the noisy storage model. We demonstrate the first practical application of the QOT protocol by solving the private set intersection, a prime example of secure multiparty computation, where two parties aim to find common elements in their datasets without revealing any other information. In our experiments, two banks can identify common suspicious accounts without disclosing any other data. This not only proves the experimental functionality of QOT, but also showcases its real-world commercial applications.

BosonSampling is a popular candidate for near-term quantum advantage, which has now been experimentally implemented several times. The original proposal of Aaronson and Arkhipov from 2011 showed that classical hardness of BosonSampling is implied by a proof of the "Gaussian Permanent Estimation" conjecture. This conjecture states that $e^{-n\log{n}-n-O(\log n)}$ additive error estimates to the output probability of most random BosonSampling experiments are $\#P$-hard. Proving this conjecture has since become the central question in the theory of quantum advantage. In this work we make progress by proving that $e^{-n\log n -n - O(n^\delta)}$ additive error estimates to output probabilities of most random BosonSampling experiments are $\#P$-hard, for any $\delta>0$. In the process, we circumvent all known barrier results for proving the hardness of BosonSampling experiments. This is nearly the robustness needed to prove hardness of BosonSampling -- the remaining hurdle is now "merely" to show that the $n^\delta$ in the exponent can be improved to $O(\log n).$ We also obtain an analogous result for Random Circuit Sampling. Our result allows us to show, for the first time, a hardness of classical sampling result for random BosonSampling experiments, under an anticoncentration conjecture. Specifically, we prove the impossibility of multiplicative-error sampling from random BosonSampling experiments with probability $1-e^{-O(n)}$, unless the Polynomial Hierarchy collapses.
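
The connection to #P-hardness can be made concrete: each BosonSampling output probability is proportional to $|\mathrm{Perm}(A_S)|^2$ for a submatrix $A_S$ of the interferometer unitary. Below is a sketch of Ryser's inclusion-exclusion formula, a classic $O(2^n \cdot n^2)$ exact algorithm for the permanent; the test matrices are illustrative.

```python
from itertools import combinations
import numpy as np

def permanent(A):
    # Ryser's formula: Perm(A) = (-1)^n * sum over nonempty column subsets S
    # of (-1)^|S| * prod_i (sum_{j in S} A[i][j]).
    n = A.shape[0]
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            row_sums = A[:, list(cols)].sum(axis=1)
            total += (-1) ** k * np.prod(row_sums)
    return (-1) ** n * total

assert permanent(np.eye(3)) == 1          # only the diagonal contributes
assert permanent(np.ones((3, 3))) == 6    # equals 3! for the all-ones matrix
```

Even this best-known exact approach scales exponentially, which is why the hardness of merely *estimating* these quantities, as in the abstract above, is the crux of the quantum-advantage argument.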

We show that the Aharonov-Casher phase is a geometric phase that depends on the details of the path taken by a particle having a magnetic moment that is subjected to an electric field. Consequently, it is not a topological phase. The proof of this assertion is obtained by developing a counterexample that illustrates the dependence of the AC phase on the specifics of the path.

We investigate the extendibility problem for Brauer states, focusing on the symmetric two-sided extendibility and the de Finetti extendibility. By employing the representation theory of the unitary and orthogonal groups, we provide a general recipe for finding the $(n,m)$-extendible and $n$-de Finetti-extendible Brauer states. In the two-sided case we describe the special symmetry that only appears for $(n,n)$-extendible symmetric states. From the concrete form of the commutant to the diagonal action of the orthogonal group, we explicitly determine the set of parameters for which the Brauer states are $(1,2)$-, $(1,3)$- and $(2,2)$-extendible in any dimension $d$. Using the branching rules from $\mathrm{SU}(d)$ to $\mathrm{SO}(d)$, we obtain the set of $n$-de Finetti-extendible Brauer states in low dimensions, and analytically describe the $n\to\infty$ limiting shape for $d=3$. Finally, we derive some general properties pertaining to the extendibility of Werner, isotropic and Brauer states.

This study explores robust entangled states described using the framework of discrete Wigner functions. Notably, these states are known to outperform the Bell state in measures of entanglement in the presence of non-Markovian noise. Our study focuses on methods for preparing these states using quantum circuits that can be implemented on superconducting hardware and testing the efficacy of these methods on IBM's quantum device. We present quantum circuits for state preparation and validate them through tomographic reconstruction on the IBM \emph{ibm\_brisbane} device. We propose a teleportation scheme that leverages these entangled states as a resource. We believe that these entangled states have the potential to be used in place of the traditional Bell state in scenarios where non-Markovian errors are prevalent.

This work addresses the complexities involved in designing distributed quantum algorithms, highlighting that quantum entanglement does not bypass the Fischer-Lynch-Paterson (FLP) impossibility theorem in asynchronous networks. Although quantum resources such as entanglement offer potential speedups, the inherent constraints of classical communication remain. We develop a leader election algorithm as a proof of concept, demonstrating how entanglement can enhance efficiency while still contending with asynchronous delays. This algorithm serves as a foundation for a broader blueprint for future distributed quantum algorithms, providing insights into both the real performance gains that entanglement offers in a distributed setting and its limitations.

Quantum annealing is a meta-heuristic approach tailored to solve combinatorial optimization problems with quantum annealers. In this tutorial, we provide a fundamental and comprehensive introduction to quantum annealing and modern data management systems and show quantum annealing's potential benefits and applications in the realm of database optimization. We demonstrate how to apply quantum annealing for selected database optimization problems, which are critical challenges in many data management platforms. The demonstrations include solving join order optimization problems in relational databases, optimizing sophisticated transaction scheduling, and allocating virtual machines within cloud-based architectures with respect to sustainability metrics. On the one hand, the demonstrations show how to apply quantum annealing on key problems of database management systems (join order selection, transaction scheduling), and on the other hand, they show how quantum annealing can be integrated as a part of larger and dynamic optimization pipelines (virtual machine allocation). The goal of our tutorial is to provide a centralized and condensed source regarding theories and applications of quantum annealing technology for database researchers, practitioners, and everyone who wants to understand how to potentially optimize data management with quantum computing in practice. Besides, we identify the advantages, limitations, and potentials of quantum computing for future database and data management research.
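
As a minimal, hypothetical illustration of the workflow such tutorials describe: a database problem (join ordering, transaction scheduling, VM allocation) is first cast as a QUBO, $\min_x x^T Q x$ over binary $x$, which a quantum annealer then samples. The tiny instance below is purely illustrative and is brute-forced classically.

```python
import itertools
import numpy as np

# Hypothetical 3-variable QUBO instance: diagonal entries reward selecting an
# item, off-diagonal entries penalize conflicting pairs (e.g. two transactions
# that cannot be scheduled together).
Q = np.array([[-1.0, 2.0, 0.0],
              [0.0, -1.0, 2.0],
              [0.0, 0.0, -1.0]])

def qubo_energy(x, Q):
    x = np.asarray(x)
    return float(x @ Q @ x)

# A quantum annealer samples low-energy bitstrings; here we enumerate them all.
best = min(itertools.product([0, 1], repeat=3), key=lambda x: qubo_energy(x, Q))
print(best, qubo_energy(best, Q))
```

The optimum selects the two non-conflicting variables, exactly the kind of constrained selection that join-order and scheduling QUBOs encode at larger scale.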

We study the quantum dynamics of the encoding scheme proposed in [Nguyen et al., PRX Quantum 4, 010316 (2023)], which encodes optimization problems on graphs with arbitrary connectivity into Rydberg atom arrays. Here, a graph vertex is represented by a wire of atoms, and a crossing (or crossing-with-edge) gadget is placed at the intersection of two wires to decouple (or couple) their degrees of freedom and reproduce the graph connectivity. We consider the fundamental geometry of two vertex-wires intersecting via a single gadget and look at the minimum gap scaling with system size along adiabatic protocols. We find that both polynomial and exponential scaling are possible and, by means of perturbation theory, we relate the exponential closing of the minimum gap to an unfavorable localization of the ground-state wavefunction. Then, on the QuEra Aquila neutral atom machine, we observe such localization and its effect on the success probability of finding the correct solution to the encoded optimization problem. Finally, we propose possible strategies to avoid this quantum bottleneck, leading to an exponential improvement in the adiabatic performance.

Floquet systems are periodically driven systems. In this framework, the system Hamiltonian and associated spectra of interest are modified, giving rise to new quantum phases of matter and nonequilibrium dynamics without static counterparts. Here we experimentally demonstrate a self-induced Floquet system in an interacting Rydberg gas. This originates from the motion of photoionized charged particles in a static magnetic field. Importantly, by leveraging the Rydberg electromagnetically induced transparency spectrum, we probe the nonequilibrium dynamics in the bistable regime, where the strong Rydberg atom interaction competes with the internal driving from flying charges, and identify the emergence of a discrete time crystalline phase. Our work fills the experimental gap in understanding the relation between multistability and the dissipative discrete time crystalline phase. In this regard, it constitutes a highly controlled platform for exploring exotic nonequilibrium physics in dissipative interacting systems.

Neutral atom arrays have emerged as a promising platform for quantum computation owing to their high-fidelity two-qubit gates, arbitrary connectivity and remarkable scalability. Nevertheless, fault-tolerant quantum computing on the neutral atom platform requires consideration of the types of errors that neutral atoms are prone to. One typical and major error is leakage from the Rydberg state when implementing multi-qubit gates. Such leakage errors are harmful because they propagate multiple Pauli errors through the quantum circuit. Researchers have proposed the erasure conversion protocol, which utilizes fast leakage detection to convert leakage errors to benign erasure errors. This method has a favorable error distance $d$ but is limited to certain atom species. Here, we propose a new method to deal with such leakage errors in measurement-based quantum computation (MBQC), which we refer to as "Leakage Tracking". We remove the demand for mid-circuit leakage detection and instead infer the probabilities and locations of Pauli errors from the gate sequence and final leakage detection. We show that this method has an error distance $d_e = d$ and reaches a high threshold of 1.7% per CZ gate for pure leakage errors and perfect final leakage detection. In the presence of atom loss and other Pauli errors, we show an advantage in error distance over erasure conversion when the ratio of leakage errors is close to one.

Modern quantum devices are highly susceptible to errors, making the verification of their correct operation a critical problem. Usual tomographic methods rapidly become intractable as these devices are scaled up. In this paper, we introduce a general framework for the efficient verification of large quantum systems. Our framework combines robust fidelity witnesses with efficient classical post-processing to implement measurement back-propagation. We demonstrate its usefulness by focusing on the verification of bosonic quantum systems, and developing efficient verification protocols for large classes of target states using the two most common types of Gaussian measurements: homodyne and heterodyne detection. Our protocols are semi-device-independent, designed to function with minimal assumptions about the quantum device being tested, and offer practical improvements over existing approaches. Overall, our work introduces efficient methods for verifying the correct preparation of complex quantum states, and has consequences for calibrating large quantum devices, witnessing quantum properties, supporting demonstrations of quantum computational speedups and enhancing trust in quantum computations.

Conventional decoding algorithms for polar codes strive to balance achievable performance and computational complexity in classical computing. While maximum likelihood (ML) decoding guarantees optimal performance, its NP-hard nature makes it impractical for real-world systems. In this letter, we propose a novel ML decoding architecture for polar codes based on the Grover adaptive search, a quantum exhaustive search algorithm. Unlike conventional studies, our approach, enabled by a newly formulated objective function, uniquely supports Gray-coded multi-level modulation without expanding the search space size compared to the classical ML decoding. Simulation results demonstrate that our proposed quantum decoding achieves ML performance while providing a pure quadratic speedup in query complexity.

The no-cloning principle has played a foundational role in quantum information and cryptography. Following a long-standing tradition of studying quantum mechanical phenomena through the lens of interactive games, Broadbent and Lord (TQC 2020) formalized cloning games in order to quantitatively capture no-cloning in the context of unclonable encryption schemes. The conceptual contribution of this paper is the new, natural, notion of Haar cloning games together with two applications. In the area of black-hole physics, our game reveals that, in an idealized model of a black hole which features Haar random (or pseudorandom) scrambling dynamics, the information from infalling entangled qubits can only be recovered from either the interior or the exterior of the black hole -- but never from both places at the same time. In the area of quantum cryptography, our game helps us construct succinct unclonable encryption schemes from the existence of pseudorandom unitaries, thereby, for the first time, bridging the gap between "MicroCrypt" and unclonable cryptography. The technical contribution of this work is a tight analysis of Haar cloning games which requires us to overcome many long-standing barriers in our understanding of cloning games. Answering these questions provably requires us to go beyond existing methods (Tomamichel, Fehr, Kaniewski and Wehner, New Journal of Physics 2013). In particular, we show a new technique for analyzing cloning games with respect to binary phase states through the lens of binary subtypes, and combine it with novel bounds on the operator norms of block-wise tensor products of matrices.

A near-minimal instance of optical cooling is experimentally presented wherein the internal-state entropy of a single atom is reduced more than twofold by illuminating it with broadband, incoherent light. Since the rate of optical pumping by a thermal state increases monotonically with its temperature, the cooling power in this scenario increases with higher thermal occupation, an example of a phenomenon known as cooling by heating. In contrast to optical pumping by coherent, narrow-band laser light, here we perform the same task with fiber-coupled, broadband sunlight, the brightest laboratory-accessible source of continuous blackbody radiation.

We systematically investigate local phonon hopping in the radial direction of a linear trapped-ion string. We measure the decay of hopping as a function of key trap parameters and analyze the results in terms of the decay time and the number of oscillations. We attribute the loss of coherence to nonlinear coupling between different modes. Despite quantitative differences, the overall trends in our numerical simulations are similar to those of the experimental results. This work establishes a method for evaluating phonon hopping coherence and provides insight into the underlying decoherence mechanisms.

Symmetry is one of the most significant foundational principles underlying nature. The resource theory of asymmetry (RTA) is a resource-theoretic framework for investigating asymmetry as a resource to break constraints imposed by symmetries. It has recently undergone significant developments, resulting in applications in a variety of research areas since symmetry and its breaking are ubiquitous in physics. Nevertheless, the resource conversion theory at the core of RTA remains incomplete. In the independent and identically distributed (i.i.d.) setup, where identical copies of a state are converted to identical copies of another state, conversion theory among pure states has been completed only for $U(1)$ group and finite groups. Here, we establish an i.i.d. conversion theory among any pure states in RTA for any continuous symmetry described by a compact Lie group, which includes the cases where multiple conserved quantities are involved. We show that the quantum geometric tensor is an asymmetry monotone for pure states that determines the optimal approximate asymptotic conversion rate. Our formulation achieves a unified understanding of conversion rates in prior studies for different symmetries. As a corollary of the formula, we also affirmatively prove the Marvian-Spekkens conjecture on reversible asymptotic convertibility in RTA, which has remained unproven for a decade.

The advantage of quantum protocols lies in the inherent properties of the shared quantum states. These states are sometimes provided by sources that are not trusted, and therefore need to be verified. Finding secure and efficient quantum state verification protocols remains a big challenge, and recent works illustrate trade-offs between efficiency and security for different groups of states in restricted settings. However, whether a universal trade-off exists for all quantum states and all verification strategies remains unknown. In this work, we instantiate the categorical composable cryptography framework to show a fundamental limit for quantum state verification for all cut-and-choose approaches used to verify arbitrary quantum states. Our findings show that the prevailing cut-and-choose techniques cannot lead to quantum state verification protocols that are both efficient and secure.

How many T gates are needed to approximate an arbitrary $n$-qubit quantum state to within error $\varepsilon$? Improving prior work of Low, Kliuchnikov, and Schaeffer, we show that the optimal asymptotic scaling is $\Theta\left(\sqrt{2^n\log(1/\varepsilon)}+\log(1/\varepsilon)\right)$ if we allow ancilla qubits. We also show that this is the optimal T-count for implementing an arbitrary diagonal $n$-qubit unitary to within error $\varepsilon$. We describe applications in which a tensor product of many single-qubit unitaries can be synthesized in parallel for the price of one.
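
To get a feel for the two regimes in this bound (purely illustrative, constants suppressed): for fixed $\varepsilon$ the $\sqrt{2^n}$ term dominates as $n$ grows, while for fixed small $n$ and very small $\varepsilon$ the $\log(1/\varepsilon)$ term takes over.

```python
import math

def t_count_scaling(n, eps):
    # Theta(sqrt(2^n log(1/eps)) + log(1/eps)), constants suppressed.
    return math.sqrt(2 ** n * math.log(1 / eps)) + math.log(1 / eps)

# With n = 4 and eps = 1e-300, the log(1/eps) term dominates:
assert math.log(1e300) > math.sqrt(2 ** 4 * math.log(1e300))
# With n = 40 and eps = 1e-3, the sqrt(2^n ...) term dominates:
assert math.sqrt(2 ** 40 * math.log(1e3)) > math.log(1e3)
```

The crossover happens roughly when $\log(1/\varepsilon) \approx 2^n$, i.e. only at extremely small target errors for any moderate qubit count.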

It is widely accepted that noisy quantum devices are limited to logarithmic depth circuits unless mid-circuit measurements and error correction are employed. However, this conclusion holds only for unital error channels, such as depolarizing noise. Building on the idea of the "quantum refrigerator" [Ben-Or, Gottesman and Hassidim (2013)], we improve upon previous results and show that geometrically local circuits in the presence of nonunital noise, in any dimension $d\geq 1$, can correct errors without mid-circuit measurements and extend computation to any depth, with only polylogarithmic overhead in the depth and the number of qubits. This implies that local quantum dynamics subjected to sufficiently weak nonunital noise is computationally universal and nearly as hard to simulate as noiseless dynamics. Additionally, we quantify the contraction property of local random circuits in the presence of nonunital noise.

This study explores the electronic structure of the CH$_2$ molecule, which is relevant for interstellar and combustion chemistry, modeled as a (6e, 23o) system in a 52-qubit quantum experiment. We focused on calculating the dissociation energies for CH$_2$ in the ground state triplet and the first excited state singlet, applying the Sample-based Quantum Diagonalization (SQD) method within a quantum-centric supercomputing framework. We evaluated the ability of SQD to provide accurate results compared to Selected Configuration Interaction (SCI) calculations and experimental values for the singlet-triplet gap. To our knowledge, this is the first study of an open-shell system, such as the CH$_2$ triplet, using SQD. To obtain accurate energy values, we implemented post-SQD orbital optimization and employed a warm-start approach using previously converged states. While the results for the singlet state dissociation were only a few milli-Hartrees from the SCI reference values, the triplet state exhibited greater variability. This discrepancy likely arises from differences in bit-string handling within the SQD method for open- versus closed-shell systems, as well as the inherently complex wavefunction character of the triplet state. The SQD-calculated singlet-triplet energy gap matched well with experimental and SCI values. This study enhances our understanding of the SQD method for open-shell systems and lays the groundwork for future applications in large-scale electronic structure studies using quantum algorithms.

Open many-body quantum systems can exhibit intriguing nonequilibrium phases of matter, such as time crystals. In these phases, the state of the system spontaneously breaks the time-translation symmetry of the dynamical generator, which typically manifests through persistent oscillations of an order parameter. A paradigmatic model displaying such a symmetry breaking is the boundary time crystal, which has been extensively analyzed experimentally and theoretically. Despite the broad interest in these nonequilibrium phases, their thermodynamics and their fluctuating behavior remain largely unexplored, in particular for the case of coupled time crystals. In this work, we consider two interacting boundary time crystals and derive a consistent interpretation of their thermodynamic behavior. We fully characterize their average dynamics and the behavior of their quantum fluctuations, which allows us to demonstrate the presence of quantum and classical correlations in both the stationary and the time-crystal phases displayed by the system. We furthermore exploit our theoretical derivation to explore possible applications of time crystals as quantum batteries, demonstrating their ability to efficiently store energy.

This paper introduces a numerical framework for establishing lower bounds on the conditional von Neumann entropy in device-independent quantum cryptography and randomness extraction scenarios. Leveraging a hierarchy of semidefinite programs derived from the Navascu\'es-Pironio-Ac\'in (NPA) hierarchy, our tool enables efficient computation of entropy bounds based solely on observed statistics, assuming the validity of quantum mechanics. The method's computational efficiency is ensured by its reliance on projective operators within the non-commutative polynomial optimization problem. The method facilitates provable bounds for extractable randomness in noisy scenarios and aligns with modern entropy accumulation theorems. Consequently, the framework offers an adaptable tool for practical quantum cryptographic protocols, expanding secure communication possibilities in untrusted environments.

After nearly two decades of research, the question of a quantum PCP theorem for quantum Constraint Satisfaction Problems (CSPs) remains wide open. As a result, proving QMA-hardness of approximation for ground state energy estimation has remained elusive. Recently, it was shown [Bittel, Gharibian, Kliesch, CCC 2023] that a natural problem involving variational quantum circuits is QCMA-hard to approximate within ratio $N^{1-\epsilon}$ for any $\epsilon > 0$, where $N$ is the input size. Unfortunately, this problem was not related to quantum CSPs, leaving the question of hardness of approximation for quantum CSPs open. In this work, we show that if instead of focusing on ground state energies, one considers computing properties of the ground space, QCMA-hardness of computing ground space properties can be shown. In particular, we show that it is (1) QCMA-complete within ratio $N^{1-\epsilon}$ to approximate the Ground State Connectivity problem (GSCON), and (2) QCMA-hard within the same ratio to estimate the amount of entanglement of a local Hamiltonian's ground state, denoted Ground State Entanglement (GSE). As a bonus, a simplification of our construction yields NP-completeness of approximation for a natural k-SAT reconfiguration problem, to be contrasted with the recent PCP-based PSPACE hardness of approximation results for a different definition of k-SAT reconfiguration [Karthik C.S. and Manurangsi, 2023, and Hirahara, Ohsaka, STOC 2024].

It is of great interest to understand the thermalization of open quantum many-body systems, and how quantum computers are able to efficiently simulate that process. A recently introduced dissipative evolution, inspired by existing models of open system thermalization, has been shown to be efficiently implementable on a quantum computer. Here, we prove that, at high enough temperatures, this evolution reaches the Gibbs state in time scaling logarithmically with system size. The result holds for Hamiltonians that satisfy the Lieb-Robinson bound, such as local Hamiltonians on a lattice, and includes long-range systems. To the best of our knowledge, these are the first results rigorously establishing the rapid mixing property of high-temperature quantum Gibbs samplers, which is known to give the fastest possible speed for thermalization in the many-body setting. We then apply our result to the problem of estimating partition functions at high temperature, showing an improved performance over previous classical and quantum algorithms.

The efficiency of locally generating unitary designs, which capture statistical notions of quantum pseudorandomness, lies at the heart of wide-ranging areas in physics and quantum information technologies. While there are extensive potent methods and results for this problem, the evidently important setting where continuous symmetries or conservation laws (most notably U(1) and SU(d)) are involved is known to present fundamental difficulties. In particular, even the basic question of whether any local symmetric circuit can generate 2-designs efficiently (in time that grows at most polynomially in the system size) remains open with no circuit constructions provably known to do so, despite intensive efforts. In this work, we resolve this long-standing open problem for both U(1) and SU(d) symmetries by explicitly constructing local symmetric quantum circuits which we prove to converge to symmetric unitary 2-designs in polynomial time using a combination of representation theory, graph theory, and Markov chain methods. As a direct application, our constructions can be used to efficiently generate near-optimal random covariant quantum error-correcting codes, confirming a conjecture in [PRX Quantum 3, 020314 (2022)].

We consider quantum circuit models where the gates are drawn from arbitrary gate ensembles given by probabilistic distributions over certain gate sets and circuit architectures, which we call stochastic quantum circuits. Of main interest in this work is the speed of convergence of stochastic circuits with different gate ensembles and circuit architectures to unitary $t$-designs. A key motivation for this theory is the varying preference for different gates and circuit architectures in different practical scenarios. In particular, it provides a versatile framework for devising efficient circuits for implementing $t$-designs and relevant applications including random circuit and scrambling experiments, as well as benchmarking the performance of gates and circuit architectures. We examine various important settings in depth. A key aspect of our study is an "ironed gadget" model, which allows us to systematically evaluate and compare the convergence efficiency of entangling gates and circuit architectures. Particularly notable results include: i) gadgets of two-qubit gates with KAK coefficients $\left(\frac{\pi}{4}-\frac{1}{8}\arccos(\frac{1}{5}),\frac{\pi}{8},\frac{1}{8}\arccos(\frac{1}{5})\right)$ (which we call $\chi$ gates) directly form exact 2- and 3-designs; ii) the iSWAP gate family achieves the best efficiency for convergence to 2-designs, under mild conjectures supported by numerical evidence, even outperforming the Haar-random gate, for generic many-body circuits; iii) iSWAP + complete graph achieves the best efficiency for convergence to 2-designs among all graph circuits. A variety of numerical results are provided to complement our analysis. We also derive robustness guarantees for our analysis against gate perturbations. Additionally, we provide a cursory analysis of gates with higher locality and find that the Margolus gate outperforms various other well-known gates.

The Jordan-Schwinger map is widely employed to switch between bosonic or fermionic mode operators and spin observables, with numerous applications ranging from quantum field theories of magnetism and ultracold quantum gases to quantum optics. While the construction of observables obeying the algebra of spin operators across multiple modes is straightforward, a mapping between bosonic or fermionic Fock states and spin states has remained elusive beyond the two-mode case. Here, we generalize the Jordan-Schwinger map by algorithmically constructing complete sets of spin states over several bosonic or fermionic modes, allowing one to describe arbitrary multi-mode systems faithfully in terms of spins. As a byproduct, we uncover a deep link between the degeneracy of multi-mode spin states in the bosonic case and Gaussian polynomials. We demonstrate the feasibility of our approach by deriving explicit relations between arbitrary three-mode Fock and spin states, which provide novel interpretations of the genuinely tripartite entangled GHZ and W state classes.
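For orientation, the familiar two-mode (Schwinger boson) form of the map, which the work above generalizes to several modes, can be written as a standard textbook identity (this is not the paper's multi-mode construction):

```latex
S^{+} = a^{\dagger} b, \qquad
S^{-} = b^{\dagger} a, \qquad
S^{z} = \tfrac{1}{2}\left(a^{\dagger} a - b^{\dagger} b\right),
```

which reproduces the su(2) algebra $[S^{z},S^{\pm}]=\pm S^{\pm}$ and $[S^{+},S^{-}]=2S^{z}$, with the total boson number $a^{\dagger}a + b^{\dagger}b = 2S$ fixing the spin quantum number $S$.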

The quantum speed limit is a fundamental bound on the rate of evolution of quantum states, and it is one of the most important interpretations of the time-energy uncertainty relation. Recently, speed limits for quantum correlations, such as the concurrence of pure quantum states, have been proposed. In this direction, we derive a speed limit bound for a quantum correlation, the concurrence, for general mixed quantum states of two qubits. By this we mean that we find an expression for the minimum time required to reach a given value of entanglement starting from an arbitrary mixed initial state. We discuss, in a quantitative manner, the connection of these findings to the interdisciplinary area between condensed matter (many-body) physics and quantum information science, in particular to the Lieb-Robinson bound.
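Since the central quantity here is the concurrence of a two-qubit mixed state, a minimal numerical sketch of the standard Wootters formula (textbook material, not this paper's speed limit bound) may help fix conventions:

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix rho:
    C = max(0, l1 - l2 - l3 - l4), where l_i are the decreasing square
    roots of the eigenvalues of rho * rho_tilde."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho.conj() @ yy  # spin-flipped state
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde))))[::-1]
    return float(max(0.0, lam[0] - lam[1] - lam[2] - lam[3]))

# Maximally entangled Bell state |Phi+> = (|00> + |11>)/sqrt(2): C = 1
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)
bell = np.outer(phi, phi)
# Maximally mixed (separable) state: C = 0
mixed = np.eye(4) / 4
```

A speed limit for concurrence then asks how fast `concurrence(rho(t))` can change under a given Hamiltonian evolution.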

This paper investigates the impact of noise in the quantum query model, a fundamental framework for quantum algorithms. We focus on the scenario where the oracle is subject to non-unitary (or irreversible) noise, specifically under the \textit{faulty oracle} model, where the oracle fails with a constant probability and acts as identity. Regev and Schiff (ICALP'08) showed that quantum advantage is lost for the search problem under this noise model. Our main result shows that every quantum query algorithm can be made robust in this noise model with a roughly quadratic blow-up in query complexity, thereby preserving quantum speedup for all problems where the quantum advantage is super-cubic. This is the first non-trivial robustification of quantum query algorithms against a noisy oracle.

Quantum technologies provide many applications for information processing tasks that are impossible to realize within classical physics. These capabilities include such fundamental resources as the generation of secure, i.e., private and unpredictable, random values. Yet the problem of quantifying the amount of generated randomness is still not fully solved. This work presents a comprehensive analysis of the design and performance optimization of a Quantum Random Number Generator (QRNG) based on Bell inequality violations. We investigate key protocol parameters, including the smoothing parameter ($\epsilon_{\text{s}}$), the test round probability ($\gamma$), and switching delays, and their effects on the generation rate and quality of randomness. We identify optimal ranges for $\gamma$ and $p_\Omega$ (the protocol's non-aborting probability) to balance the trade-off between randomness consumption and net randomness generation. Additionally, we explore the impact of switching delays on the system's performance, providing strategies to mitigate these effects. Our results enable substantial improvements in QRNG implementations, offering higher randomness expansion rates. The work provides practical guidelines for the efficient and secure design of QRNG systems and other cryptographic protocols.

We study the convergence properties of Variational Quantum Circuits (VQCs) to investigate how they can differ from their classical counterparts. It is known that a VQC is a linear model in a feature map determined by its architecture. Learning a classical model on the same feature map will lead to a solution called the Minimum Norm Least Square (MNLS) estimator. In this work, we characterize the separation between quantum and classical models by their respective weight vectors. We show that a necessary condition for a quantum model to avoid dequantization by its classical surrogate is to have a large weight vector norm. Furthermore, we suggest that this can only happen with a high-dimensional feature map. Through the study of some common quantum architectures and encoding schemes, we obtain bounds on the norms of the quantum weight vector and the corresponding MNLS weight vector. It is possible to find instances allowing for such separation, but in these cases, concentration issues become another concern. We finally prove that there exists a linear model with large weight vector norm and without concentration, potentially achievable by a quantum circuit.

In 2005, H{\o}yer and \v{S}palek showed that constant-depth quantum circuits augmented with multi-qubit Fanout gates are quite powerful, able to compute a wide variety of Boolean functions as well as the quantum Fourier transform. They also asked what other multi-qubit gates could rival Fanout in terms of computational power, and suggested that the quantum Threshold gate might be one such candidate. Threshold is the gate that indicates if the Hamming weight of a classical basis state input is greater than some target value. We prove that Threshold is indeed powerful--there are polynomial-size constant-depth quantum circuits with Threshold gates that compute Fanout to high fidelity. Our proof is a generalization of a proof by Rosenthal that exponential-size constant-depth circuits with generalized Toffoli gates can compute Fanout. Our construction reveals that other quantum gates able to "weakly approximate" Parity can also be used as substitutes for Fanout.
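As a plain-language illustration of the two gates involved (our own sketch of their action on classical basis states; the quantum gates act linearly on superpositions of such states):

```python
def threshold_gate(bits, t):
    """Classical action of the Threshold gate on a computational basis
    state: indicate whether the Hamming weight exceeds the target t."""
    return 1 if sum(bits) > t else 0

def fanout(control, targets):
    """Classical action of Fanout on basis states: XOR the control bit
    into every target bit simultaneously."""
    return [b ^ control for b in targets]
```

The result above shows that constant-depth circuits with `threshold_gate`-style gates can simulate `fanout` to high fidelity.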

Belief Propagation (BP) decoders for quantum error correcting codes are not always precise. There is a growing interest in the application of tensor networks to quantum error correction in general and, in particular, in degenerate quantum maximum likelihood decoding and the tensor network decoder. We develop a unified view to make the generalized BP proposal by Kirkley et al. explicit on arbitrary graphical models. We derive BP schemes and provide inference equations for BP on loopy tensor networks and, more generally, loopy graphical models. In doing so we introduce a tree-equivalent approach which allows us to relate the tensor network BlockBP to a generalized BP for loopy networks. Moreover, we show that the tensor network message passing approach relies essentially on the same approximation as the method by Kirkley et al. This allows us to make tensor network message passing available for degenerate quantum maximum likelihood decoding. Our method and results are key to obtaining guidelines on the trade-off between complexity and decoding accuracy for BP and tensor network decoders. Finally, we discuss how the tree-equivalent method and the method by Kirkley et al. can justify why message scheduling improves the performance of BP.

We study quantum algorithms for verifying properties of the output probability distribution of a classical or quantum circuit, given access to the source code that generates the distribution. We consider the basic task of uniformity testing, which is to decide if the output distribution is uniform on $[d]$ or $\epsilon$-far from uniform in total variation distance. More generally, we consider identity testing, which is the task of deciding if the output distribution equals a known hypothesis distribution, or is $\epsilon$-far from it. For both problems, the previous best known upper bound was $O(\min\{d^{1/3}/\epsilon^{2},d^{1/2}/\epsilon\})$. Here we improve the upper bound to $O(\min\{d^{1/3}/\epsilon^{4/3}, d^{1/2}/\epsilon\})$, which we conjecture is optimal.
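For context, the classical baseline for uniformity testing (not the quantum algorithm of this work) is the standard collision tester; it relies on the textbook fact that any distribution that is $\epsilon$-far from uniform in total variation distance has collision probability at least $(1+4\epsilon^2)/d$. A hedged sketch:

```python
import random
from itertools import combinations

def collision_statistic(samples):
    """Unbiased estimate of the collision probability sum_i p_i^2:
    the fraction of unordered sample pairs that collide."""
    n = len(samples)
    pairs = n * (n - 1) // 2
    collisions = sum(1 for a, b in combinations(samples, 2) if a == b)
    return collisions / pairs

def looks_far_from_uniform(samples, d, eps):
    """Declare 'far from uniform' iff the collision estimate exceeds the
    midpoint between 1/d (uniform) and (1 + 4*eps**2)/d (eps-far)."""
    return collision_statistic(samples) > (1 + 2 * eps**2) / d

# Example: 2000 draws from the uniform distribution on d = 10 outcomes
random.seed(0)
uniform_samples = [random.randrange(10) for _ in range(2000)]
```

The quantum upper bounds quoted above improve on the sample complexity of such classical testers by exploiting coherent access to the source code generating the distribution.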

We propose the X$^3$Z$^3$ Floquet code, a type of dynamical code with improved performance under biased noise compared to other Floquet codes. The enhanced performance is attributed to a simplified decoding problem resulting from a persistent symmetry under infinitely biased noise, which surprisingly exists in a code without constant stabilisers. Even if such a symmetry is allowed, we prove that a general dynamical code with two-qubit parity measurements cannot admit one-dimensional decoding graphs, a key feature resulting in the high performance of bias-tailored stabiliser codes. Despite this limitation, we demonstrate through our comprehensive numerical simulations that the symmetry of the X$^3$Z$^3$ Floquet code renders its performance under biased noise far better than several leading Floquet code candidates. Furthermore, to maintain high-performance implementation in hardware without native two-qubit parity measurements, we introduce ancilla-assisted bias-preserving parity measurement circuits. Our work establishes the X$^3$Z$^3$ code as a prime quantum error-correcting code candidate, particularly for devices with reduced connectivity, such as the honeycomb and heavy-hexagonal architectures.

Distribution of entanglement is an essential task in quantum information processing and the realization of quantum networks. In our work, we theoretically investigate the scenario where a central source prepares an N-partite entangled state and transmits each entangled subsystem to one of N receivers through noisy quantum channels. The receivers are then able to perform local operations assisted by unlimited classical communication to distill target entangled states from the noisy channel output. In this operational context, we define the EPR distribution capacity and the GHZ distribution capacity of a quantum channel as the largest rates at which Einstein-Podolsky-Rosen (EPR) states and Greenberger-Horne-Zeilinger (GHZ) states can be faithfully distributed through the channel, respectively. We establish lower and upper bounds on the EPR distribution capacity by connecting it with the task of assisted entanglement distillation. We also construct an explicit protocol consisting of a combination of a quantum communication code and a classical-post-processing-assisted entanglement generation code, which yields a simple achievable lower bound for generic channels. As applications of these results, we give an exact expression for the EPR distribution capacity over two erasure channels and bounds on the EPR distribution capacity over two generalized amplitude damping channels. We also bound the GHZ distribution capacity, which results in an exact characterization of the GHZ distribution capacity when the most noisy channel is a dephasing channel.

We present new advances in achieving exponential quantum speedups for solving optimization problems by low-depth quantum algorithms. Specifically, we focus on families of combinatorial optimization problems that exhibit symmetry and contain planted solutions. We rigorously prove that the 1-step Quantum Approximate Optimization Algorithm (QAOA) can achieve a success probability of $\Omega(1/\sqrt{n})$, and sometimes $\Omega(1)$, for finding the exact solution in many cases. Furthermore, we construct near-symmetric optimization problems by randomly sampling the individual clauses of symmetric problems, and prove that the QAOA maintains a strong success probability in this setting even when the symmetry is broken. Finally, we construct various families of near-symmetric Max-SAT problems and benchmark state-of-the-art classical solvers, discovering instances where all known classical algorithms require exponential time. Therefore, our results indicate that low-depth QAOA could achieve an exponential quantum speedup for optimization problems.

We explore the use of a spatial mode sorter to image a nanomechanical resonator, with the goal of studying the quantum limits of active imaging and extending the toolbox for optomechanical force sensing. In our experiment, we reflect a Gaussian laser beam from a vibrating nanoribbon and pass the reflected beam through a commercial spatial mode demultiplexer (Cailabs Proteus). The intensity in each demultiplexed channel depends on the mechanical mode shapes and encodes information about their displacement amplitudes. As a concrete demonstration, we monitor the angular displacement of the ribbon's fundamental torsion mode by illuminating in the fundamental Hermite-Gauss mode (HG$_{00}$) and reading out in the HG$_{01}$ mode. We show that this technique permits readout of the ribbon's torsional vibration with a precision near the quantum limit. Our results highlight new opportunities at the interface of quantum imaging and quantum optomechanics.

The classification of topological phases of matter is a fundamental challenge in quantum many-body physics, with applications to quantum technology. Recently, this classification has been extended to the setting of Adaptive Finite-Depth Local Unitary (AFDLU) circuits which allow global classical communication. In this setting, the trivial phase is the collection of all topological states that can be prepared via AFDLU. Here, we propose a complete classification of the trivial phase by showing how to prepare all solvable anyon theories that admit a gapped boundary via AFDLU, extending recent results on solvable groups. Our construction includes non-Abelian anyons with irrational quantum dimensions, such as Ising anyons, and more general acyclic anyons. Specifically, we introduce a sequential gauging procedure, with an AFDLU implementation, to produce a string-net ground state in any topological phase described by a solvable anyon theory with gapped boundary. In addition, we introduce a sequential ungauging and regauging procedure, with an AFDLU implementation, to apply string operators of arbitrary length for anyons and symmetry twist defects in solvable anyon theories. We apply our procedure to the quantum double of the group $S_3$ and to several examples that are beyond solvable groups, including the doubled Ising theory, the $\mathbb{Z}_3$ Tambara-Yamagami string-net, and doubled $SU(2)_4$ anyons.

We construct a family of two-dimensional topological stabilizer codes on continuous variable (CV) degrees of freedom, which generalize homological rotor codes and the toric-GKP code. Our topological codes are built using the concept of boson condensation -- we start from a parent stabilizer code based on an $\mathbb{R}$ gauge theory and condense various bosonic excitations. This produces a large class of topological CV stabilizer codes, including ones that are characterized by the anyon theories of $U(1)_{2n}\times U(1)_{-2m}$ Chern-Simons theories, for arbitrary pairs of positive integers $(n,m)$. Most notably, this includes anyon theories that are non-chiral and nevertheless do not admit a gapped boundary. It is widely believed that such anyon theories cannot be realized by any stabilizer model on finite-dimensional systems. We conjecture that these CV codes go beyond codes obtained from concatenating a topological qudit code with a local encoding into CVs, and thus, constitute the first example of topological codes that are intrinsic to CV systems. Moreover, we study the Hamiltonians associated to the topological CV stabilizer codes and show that, although they have a gapless spectrum, they can become gapped with the addition of a quadratic perturbation. We show that similar methods can be used to construct a gapped Hamiltonian whose anyon theory agrees with a $U(1)_2$ Chern-Simons theory. Our work initiates the study of scalable stabilizer codes that are intrinsic to CV systems and highlights how error-correcting codes can be used to design and analyze many-body systems of CVs that model lattice gauge theories.

We investigate the orientation dependence of Enhanced Ionization (EI) during strong-field-driven nuclear motion in acetylene (C$_2$H$_2$). Here, we both initiate and probe molecular dynamics in acetylene with intense 6-fs cross-polarized pulse pairs, separated by a variable delay. Following multiple ionization by the first pulse, acetylene undergoes simultaneous elongation of the carbon-carbon and carbon-hydrogen bonds, enabling further ionization by the second pulse and the formation of a very highly charged state, [C$_2$H$_2]^{6+}$. At small inter-pulse delays ($<$20 fs), this enhancement occurs when the molecule is aligned to the probe pulse. Conversely, at large delays ($>$40 fs), formation of [C$_2$H$_2]^{6+}$ occurs when the molecule is aligned to the pump pulse. By analyzing the polarization and time dependence of sequentially ionized [C$_2$H$_2]^{6+}$, we resolve two distinct pathways that both contribute to a large increase in the multiple ionization yield. This cross-polarized pulse pair scheme uniquely enables selective probing of deeply bound orbitals, providing new insights on orientation-dependent EI in highly charged hydrocarbons.

Fractional Chern insulators (FCI) with crystalline symmetry possess topological invariants that fundamentally have no analog in continuum fractional quantum Hall (FQH) states. Here we demonstrate through numerical calculations on model wave functions that FCIs possess a fractionally quantized electric polarization, $\vec{\mathscr{P}}_{\text{o}}$, where $\text{o}$ is a high symmetry point. $\vec{\mathscr{P}}_{\text{o}}$ takes fractional values as compared to the allowed values for integer Chern insulators because of the possibility that anyons carry fractional quantum numbers under lattice translation symmetries. $\vec{\mathscr{P}}_{\text{o}}$, together with the discrete shift $\mathscr{S}_{\text{o}}$, determine fractionally quantized universal contributions to electric charge in regions containing lattice disclinations, dislocations, boundaries, and/or corners, and which are fractions of the minimal anyon charge. We demonstrate how these invariants can be extracted using Monte Carlo computations on model wave functions with lattice defects for 1/2-Laughlin and 1/3-Laughlin FCIs on the square and honeycomb lattice, respectively, obtained using the parton construction. These results comprise a class of fractionally quantized response properties of topologically ordered states that go beyond the known ones discovered over thirty years ago.

We investigate the interplay between self-duality and spatially modulated symmetry of generalized $N$-state clock models, which include the transverse-field Ising model and ordinary $N$-state clock models as special cases. The spatially modulated symmetry of the model becomes trivial when the model's parameters satisfy a specific number-theoretic relation. We find that the duality is non-invertible when the spatially modulated symmetry remains nontrivial, and show that this non-invertibility is resolved by introducing a generalized $\mathbb{Z}_N$ toric code, which manifests ultraviolet/infrared mixing, as the bulk topological order. In this framework, the boundary duality transformation corresponds to the boundary action of a bulk symmetry transformation, with the endpoint of the bulk symmetry defect realizing the boundary duality defect. Our results illuminate not only a holographic perspective on dualities but also a relationship between spatially modulated symmetry and ultraviolet/infrared mixing in one higher dimension.

We investigate the time evolution generated by the two-sided chord Hamiltonian in the double-scaled SYK model, which produces a probability distribution over operators in the double-scaled algebra. Via the bulk-to-boundary map, this distribution translates into dynamic profiles of bulk states within the chord Hilbert space. We derive analytic expressions for these states, valid across a wide parameter range and at all time scales. Additionally, we show how distinct semi-classical behaviors emerge by localizing within specific regions of the energy spectrum in the semi-classical limit. We reformulate the doubled Hilbert space formalism as an isometric map between the one-particle sector of the chord Hilbert space and the doubled zero-particle sector. Using this map, we obtain analytic results for correlation functions and examine the dynamical properties of operator Krylov complexity for chords, establishing an equivalence between the chord number generating function and the crossed four-point correlation function. We also consider finite-temperature effects, showing how operator spreading slows as temperature decreases. In the semi-classical limit, we apply a saddle point analysis and include the one-loop determinant to derive the normalized time-ordered four-point correlation function. The leading correction mirrors the \(1/N\) connected contribution observed in the large-\(p\) SYK model at infinite temperature. Finally, we analyze the time evolution of operator Krylov complexity for a matter chord in the triple-scaled regime, linking it to the renormalized two-sided length in JT gravity with matter.

We give a construction of Quantum Low-Density Parity Check (QLDPC) codes with near-optimal rate-distance tradeoff and efficient list decoding up to the Johnson bound in polynomial time. Previous constructions of list decodable good distance quantum codes either required access to a classical side channel or were based on algebraic constructions that preclude the LDPC property. Our construction relies on new algorithmic results for codes obtained via the quantum analog of the distance amplification scheme of Alon, Edmonds, and Luby [FOCS 1995]. These results are based on convex relaxations obtained using the Sum-of-Squares hierarchy, which reduce the problem of list decoding the distance amplified codes to unique decoding the starting base codes. Choosing these base codes to be the recent breakthrough constructions of good QLDPC codes with efficient unique decoders, we get efficiently list decodable QLDPC codes.

In mixed quantum states, the notion of symmetry is divided into two types: strong and weak symmetry. While spontaneous symmetry breaking (SSB) for a weak symmetry is detected by two-point correlation functions, SSB for a strong symmetry is characterized by the Renyi-2 correlators. In this work, we present a way to construct various SSB phases for strong symmetries, starting from the ground state phase diagram of lattice gauge theory models. In addition to introducing a new type of mixed-state topological phases, we provide models of the criticalities between them, including those with gapless symmetry-protected topological order. We clarify that the ground states of lattice gauge theories are purified states of the corresponding mixed SSB states. Our construction can be applied to any finite gauge theory and offers a framework to study quantum operations between mixed quantum phases.

Differential absorption Lidar (DIAL) in the ultraviolet (UV) region is an effective approach for monitoring tropospheric ozone. 4H-SiC single-photon detectors (SPDs) are emergent devices for UV single-photon detection. Here, we demonstrate a 4H-SiC SPD-based ozone DIAL. We design and fabricate the 4H-SiC single-photon avalanche diode with a beveled mesa structure and optimized layer thickness. An active quenching circuit with a quenching time of 1.03 ns is developed to significantly mitigate the afterpulsing effect while enhancing the maximum count rate. After characterization, the SPD exhibits excellent performance with a photon detection efficiency of 16.6% at 266 nm, a dark count rate of 138 kcps, a maximum count rate of 13 Mcps, and an afterpulse probability of 2.7% at room temperature. Then, we apply two 4H-SiC SPDs in an ozone DIAL. The measured ozone concentrations at altitudes of 1-3.5 km agree well with the results of a commercial ozone DIAL. Our work provides an alternative solution for general UV Lidar applications.

We study the spin-1/2 XX chain with a modulated Gamma interaction (GI), which results from the superposition of uniform and staggered Gamma terms. We diagonalize the Hamiltonian of the model exactly using the Fermionization technique. We then probe the energy gap and identify the gapped and gapless regions. We also examine the staggered chiral, staggered nematic and dimer order parameters to determine the different phases of the ground state phase diagram with their respective long-range orders. Our findings indicate that the model undergoes first-order, second-order, gapless-gapless, and gapped-gapped phase transitions.
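The fermionization technique referred to above is conventionally the Jordan-Wigner transformation; as a standard reminder (not the model-specific diagonalization carried out in the paper):

```latex
c_j = \left(\prod_{l<j} \sigma^{z}_{l}\right) \sigma^{-}_{j},
\qquad
\sigma^{x}_{j}\sigma^{x}_{j+1} + \sigma^{y}_{j}\sigma^{y}_{j+1}
  = 2\left(c^{\dagger}_{j} c_{j+1} + c^{\dagger}_{j+1} c_{j}\right),
```

so the XX part of the chain becomes a free-fermion hopping term; pairing terms generated by the Gamma interactions can then be handled by Fourier and Bogoliubov transformations.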

In a previous theoretical work [arXiv:2205.01461], the group of T. Esslinger proposed a scheme to realize a spatial-temporal lattice, which possesses dual periodicity in space and time, in a cavity-boson system pumped by a travelling-wave laser. However, the prediction was made under the mean-field approximation. In this work, we investigate the dynamics beyond the mean-field approximation. By including the fluctuations of the cavity field, we obtain a larger set of equations of motion. Numerical results show that the spatial-temporal lattice melts at the mean-field level but survives in the quantum fluctuations.

Recently, Danielson, Satishchandran, and Wald (DSW) showed that for a massive or charged body held in a spatially separated quantum superposition, the presence of a black hole inevitably decoheres the superposition through the emission of soft photons or gravitons. In this work, we study the DSW decoherence effect for a static charged body outside Reissner-Nordstr\"om black holes. By calculating the decoherence rate for this case, we show that the superposition is decohered by low-frequency photons that propagate through the black hole horizon. For the extremal Reissner-Nordstr\"om black hole, the decoherence of the quantum superposition is completely suppressed due to the black hole Meissner effect.

Some advantages of the algebraic approach to many-body physics, based on resolvent algebras, are illustrated by the simple example of non-interacting bosons confined in compact regions with soft boundaries. It is shown that the dynamics of these systems converges to the spatially homogeneous dynamics for increasing regions and particle numbers and a variety of boundary forces. The corresponding correlation functions of thermal equilibrium states also converge in this limit. Depending on the filling of the regions with particles, the limits can either be spatially homogeneous, including the Bose-Einstein condensates, or they become inhomogeneous with varying, but finite local particle densities. In the case of this spontaneous breakdown of the spatial symmetry, the presence of condensates can be established by exhibiting correlations over large temporal distances (memory effects).

In this work we analyze the physical origin of the primordial inhomogeneities during the inflation era. The proposed framework is based, on the one hand, on semiclassical gravity, in which only the matter fields are quantized and not the spacetime metric. On the other hand, we incorporate an objective collapse mechanism based on the Continuous Spontaneous Localization (CSL) model, and we apply it to the wavefunction associated with the inflaton field. This is introduced due to the close relation between cosmology and the so-called ``measurement problem'' in Quantum Mechanics. In particular, in order to break the homogeneity and isotropy of the initial Bunch-Davies vacuum, and thus obtain the inhomogeneities observed today, the theory requires something akin to a ``measurement'' (in the traditional sense of Quantum Mechanics). This is because the linear evolution driven by Schr\"odinger's equation does not break any initial symmetry. The collapse mechanism given by the CSL model provides a satisfactory mechanism for breaking the initial symmetries of the Bunch-Davies vacuum. The novel aspect of this work is that the constructed CSL model arises from the simplest choices for the collapse parameter and operator. From these considerations, we obtain a primordial spectrum that has the same distinctive features as the standard one, which is consistent with the observations from the Cosmic Microwave Background.

Inspired by an ontic view of the wavefunction in quantum mechanics and motivated by the universal interaction of gravity, we discuss a possible role of gravity in the state collapse mechanism. Concretely, we investigate the stability of the spatial superposition of a massive quantum state under the effect of gravity. In this context, we argue that the stability of the spatially superposed state depends on its gravitational self-energy, which originates from the effective mass density distribution of the spatially localized eigenstates. We reveal that the gravitational self-interaction between the different spacetime curvatures created by the eigenstate effective masses leads to the reduction of the superposed state to one of the possible localized states. We discuss this gravity-driven state reduction and then estimate the corresponding collapse time and the induced effective electric current in the case of a charged state, as well as possible detection prospects.

We provide a complete classification of the integrability and nonintegrability of the spin-1 bilinear-biquadratic model with a uniaxial anisotropic field, which includes the Heisenberg model and the Affleck-Kennedy-Lieb-Tasaki model. It is rigorously shown that all systems, except for the known integrable systems, are nonintegrable, meaning that they do not have nontrivial local conserved quantities. In particular, this result guarantees the nonintegrability of the Affleck-Kennedy-Lieb-Tasaki model, which is a fundamental assumption for quantum many-body scarring. Furthermore, we give simple necessary conditions for integrability in an extended model of the bilinear-biquadratic model with anisotropic interactions. Our result marks a breakthrough in nonintegrability proofs, expanding their scope to spin-1 systems.

The `quantum gravity in the lab' paradigm suggests that quantum computers might shed light on quantum gravity by simulating the CFT side of the AdS/CFT correspondence and mapping the results to the AdS side. This relies on the assumption that the duality map (the `dictionary') is efficient to compute. In this work, we show that the complexity of the AdS/CFT dictionary is surprisingly subtle: there might be cases in which one can efficiently apply operators to the CFT state (a task we call `operator reconstruction') without being able to extract basic properties of the dual bulk state such as its geometry (which we call `geometry reconstruction'). Geometry reconstruction corresponds to the setting where we want to extract properties of a completely unknown bulk dual from a simulated CFT boundary state. We demonstrate that geometry reconstruction may be generically hard due to the connection between geometry and entanglement in holography. In particular, we construct ensembles of states whose entanglement approximately obeys the Ryu-Takayanagi formula for arbitrary geometries, but which are nevertheless computationally indistinguishable. This suggests that even for states with the special entanglement structure of holographic CFT states, geometry reconstruction might be hard. This result should be compared with existing evidence that operator reconstruction is generically easy in AdS/CFT. A useful analogy for the difference between these two tasks is quantum fully homomorphic encryption (FHE): this encrypts quantum states in such a way that no efficient adversary can learn properties of the state, yet operators can be applied efficiently to the encrypted state. We show that quantum FHE can separate the complexity of geometry reconstruction versus operator reconstruction, which raises the question of whether FHE could be a useful lens through which to view AdS/CFT.
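The Ryu-Takayanagi formula invoked above is the precise statement of the geometry-entanglement connection: the entanglement entropy of a boundary region $A$ is computed by the area of a minimal bulk surface,

```latex
S(A) = \min_{\gamma_A} \frac{\mathrm{Area}(\gamma_A)}{4 G_N}
```

where $\gamma_A$ ranges over bulk surfaces homologous to $A$ and $G_N$ is Newton's constant. This is why ensembles whose entanglement entropies match RT values for a target geometry, yet remain computationally indistinguishable, bear directly on the hardness of geometry reconstruction.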

We study a quantum system that consists of two fermionic chains coupled by a driven quantum point contact (QPC). The QPC contains a bond with a periodically varying tunneling amplitude. Initially, the left chain is completely filled with fermions while the right one is empty. We numerically track the evolution of the system and demonstrate that, at driving frequencies above a critical value, the current through the QPC halts and the particle imbalance between the chains persists indefinitely. This implies a spectacular breakdown of the Floquet version of the eigenstate thermalization hypothesis, which predicts a homogeneous particle density profile at large times. We confirm the effect for various driving protocols and interparticle interactions.
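A minimal sketch of the kind of driven-bond setup described above (illustrative form only; the precise chain Hamiltonians and driving protocols studied in the work may differ):

```latex
H(t) = H_L + H_R
  - J(t)\,\big( c^{\dagger}_{0,L} c_{0,R} + \mathrm{h.c.} \big),
\qquad J(t+T) = J(t)
```

Here $H_{L/R}$ are the Hamiltonians of the left and right chains, $c_{0,L/R}$ annihilate fermions on the two sites joined by the QPC, and $J(t)$ is the periodically modulated tunneling amplitude with driving period $T = 2\pi/\Omega$. The reported effect is that for $\Omega$ above a critical frequency the time-averaged current through this bond vanishes.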

We explore the states of matter arising from the spontaneous symmetry breaking (SSB) of $\mathbb{Z}_2$ non-onsite symmetries. In one spatial dimension, we construct a frustration-free lattice model exhibiting SSB of a non-onsite symmetry, which features the coexistence of two ground states with distinct symmetry-protected topological (SPT) orders. We analytically prove the two-fold ground-state degeneracy and the existence of a finite energy gap. Fixing the symmetry sector yields a long-range entangled ground state that features long-range correlations among non-invertible charged operators. We also present a constant-depth measurement-feedback protocol to prepare such a state with a constant success probability in the thermodynamic limit, which may be of independent interest. Under a symmetric deformation, the SSB persists up to a critical point, beyond which a gapless phase characterized by a conformal field theory emerges. In two spatial dimensions, the SSB of 1-form non-onsite symmetries leads to a long-range entangled state, dubbed an SPT soup: a condensate of 1d SPT states along closed loops. On a torus, there are four such locally indistinguishable states that exhibit algebraic correlations between local operators, which we derive via a mapping to the critical $O(2)$ loop model. This provides an intriguing example of `topological quantum criticality'. Our work reveals the exotic features of SSB of non-onsite symmetries, which may lie beyond the framework of topological holography (SymTFT).
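For readers unfamiliar with the notion, a prototypical example of a $\mathbb{Z}_2$ non-onsite symmetry on a qubit chain (given here for orientation; the generator used in the specific model of this work may differ) is

```latex
U = \prod_{j} X_j \;\prod_{j} CZ_{j,j+1}
```

which combines a product of single-site Pauli-$X$ operators with controlled-$Z$ gates on neighboring sites. Because of the $CZ$ factors, $U$ cannot be decomposed into a product of strictly single-site unitaries, which is what distinguishes a non-onsite symmetry from an ordinary onsite one.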