Partial differential equations (PDEs) play a crucial role in financial mathematics, particularly in portfolio optimization, and solving them using classical numerical or neural network methods has always posed significant challenges. Here, we investigate the potential role of quantum circuits for solving PDEs. We design a parameterized quantum circuit (PQC) for implementing a polynomial based on tensor rank decomposition, reducing the quantum resource complexity from exponential to polynomial when the corresponding tensor rank is moderate. Building on this circuit, we develop a Quantum Physics-Informed Neural Network (QPINN) and a Quantum-inspired PINN, both of which guarantee the existence of an approximation of the PDE solution, and this approximation is represented as a polynomial that incorporates tensor rank decomposition. Despite using 80 times fewer parameters in experiments, our quantum models achieve higher accuracy and faster convergence than a classical fully connected PINN when solving the PDE for the Merton portfolio optimization problem, which determines the optimal investment fraction between a risky and a risk-free asset. Our quantum models further outperform a classical PINN constructed to share the same inductive bias, providing experimental evidence of quantum-induced improvement and highlighting a resource-efficient pathway toward classical and near-term quantum PDE solvers.
Dynamic quantum circuits with mid-circuit measurements (MCMs) and feed-forward operations play a crucial role in various applications, such as quantum error correction and quantum algorithms. With advancements in quantum hardware enabling the implementation of MCM and feed-forward loops, the use of dynamic circuits has become increasingly prevalent. There is a significant need for a benchmarking framework specially designed for dynamic circuits to capture their unique properties, as current benchmarking tools are designed primarily for unitary circuits and cannot be trivially extended to dynamic circuits. We propose dynamarq, a scalable and hardware-agnostic benchmarking framework for dynamic circuits. We collect a set of dynamic circuit benchmarks spanning various applications and propose a broad set of circuit features to characterize the structure of these dynamic circuits. We run them on two IBM quantum processors and the Quantinuum Helios-1E emulator, and propose scalable, application-dependent fidelity scores for each benchmark based on hardware execution results. We perform statistical modeling to identify correlations between circuit features and fidelity scores, and demonstrate highly accurate fidelity prediction using our model. Our model parameters are also transferable across hardware backends and calibration cycles. Our framework facilitates the understanding of dynamic circuit structures and provides insights for designing and optimizing dynamic circuits to achieve high execution fidelity on quantum hardware.
We present an approach for entangling spin qubits via capacitive coupling mediated by an ac electric field-driven multielectron mediator quantum dot. To illustrate this method, we consider the case of a driven two-electron dot that mediates entanglement between resonant exchange qubits defined in three-electron triple quantum dots, which enable direct capacitive coupling and interaction with microwave fields via intrinsic spin-charge mixing. The method can also be applied to other types of spin qubits that can be coupled capacitively. We show that this approach leads to rapid, single-pulse universal entangling gates for resonant exchange qubits that are activated via the drive on the mediator dot. Unlike conventional tunneling-based two-qubit gates between exchange-only qubits, the capacitive interaction-based gates we describe do not require an extensive sequence of pulses to mitigate leakage. We describe how this drive-activated local entangling approach can be integrated with the driven sideband-based long-range approach for cavity-mediated entangling gates developed in our previous work in order to enable modularity for spin-based quantum information processing.
We investigate high frequency motional states of trapped atomic ions. Trapped ions in rf traps are confined by an approximate harmonic potential and exhibit quantum motional states that mediate essential techniques in quantum computing, simulation, networking, and precision measurement. However, motional state decoherence mechanisms, heating and dephasing, are broadly limiting: they reduce two-qubit gate fidelities, lower the fidelity and lifetime of highly nonclassical bosonic states, lengthen laser cooling times, and increase recoil heating rates. These mechanisms also challenge the scalability of increasingly sophisticated protocols. We propose high motional frequency ion trapping as an operating regime that addresses these challenges and reshapes the design landscape for quantum information experiments and quantum control techniques. We report an experimentally motivated investigation of realizing this high-frequency regime and discuss the consequences for laser cooling, motional state coherence, fidelity and lifetime of nonclassical bosonic states, and scalability of experimental runtimes. We report clear design trajectories for ion traps to reach high motional frequency, a new limiting mechanism for laser cooling at these high frequencies, and more than an order-of-magnitude speedup in experimental duty cycles, with larger speedups possible for quantum error correction protocols. Taken together, high motional frequency ion trapping has broad implications for the future of quantum information experiments.
Hybrid quantum-classical applications pose significant resource management challenges due to heterogeneity and dynamism in both infrastructure and workloads. Quantum-HPC environments integrate quantum processing units (QPUs) with diverse classical resources (CPUs, GPUs), while applications span coupling patterns from tightly coupled execution to loosely coupled task parallelism with varying resource requirements. Traditional HPC schedulers lack visibility into application semantics and cannot respond to fluctuating resource availability at runtime. This paper presents a middleware-based approach for adaptive resource, workload, and task management in hybrid quantum-HPC systems. We make four contributions: (i) a conceptual four-layer middleware architecture that decomposes management across workflow, workload, task, and resource levels, enabling application-aware scheduling over heterogeneous quantum-HPC resources; (ii) a set of execution motifs capturing interaction and coupling characteristics of hybrid applications, realized as quantum mini-apps for systematic workload characterization; (iii) Pilot-Quantum, a middleware framework built on the pilot abstraction that enables late binding and dynamic resource allocation, adapting to resource and workload dynamics at runtime; and (iv) Q-Dreamer, a performance modeling toolkit providing reusable components for informed workload partitioning, including a circuit-cutting optimizer that analytically derives optimal partitioning strategies. Evaluation on heterogeneous HPC platforms (Perlmutter, NVIDIA DGX with H100/B200 GPUs) demonstrates efficient multi-backend orchestration across CPUs, GPUs, and QPUs for diverse execution motifs. Q-Dreamer predicts optimal circuit cutting configurations with up to 82% accuracy.
Reservoir computing promises a fast method for handling large amounts of temporal data. This hinges on constructing a good reservoir--a dynamical system capable of transforming inputs into a high-dimensional representation while remembering properties of earlier data. In this work, we introduce a reservoir based on recurrent quantum feature maps where a fixed quantum circuit is reused to encode both current inputs and a classical feedback signal derived from previous outputs. We evaluate the model on the Mackey-Glass time-series prediction task using our recently introduced CP feature map, and find that it achieves lower mean squared error than standard classical baselines, including echo state networks and multilayer perceptrons, while maintaining compact circuit depth and qubit requirements. We further analyze memory capacity and show that the model effectively retains temporal information, consistent with its forecasting accuracy. Finally, we study the impact of realistic noise and find that performance is robust to several noise channels but remains sensitive to two-qubit gate errors, identifying a key limitation for near-term implementations.
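The Mackey-Glass benchmark mentioned above is a standard chaotic time series generated by a delay differential equation. The sketch below produces it via simple Euler integration; the parameter values (beta=0.2, gamma=0.1, n=10, tau=17) are the conventional choice for this benchmark and an assumption here, since the paper's exact setup is not stated in the abstract.

```python
import numpy as np

# Mackey-Glass delay differential equation:
#   dx/dt = beta * x(t - tau) / (1 + x(t - tau)**n) - gamma * x(t)
# integrated with a simple Euler scheme (assumed standard parameters).
def mackey_glass(length=1000, tau=17, beta=0.2, gamma=0.1, n=10,
                 dt=1.0, x0=1.2):
    hist = int(tau / dt)             # number of delayed history samples
    x = np.full(length + hist, x0)   # constant initial history
    for t in range(hist, length + hist - 1):
        x_tau = x[t - hist]
        x[t + 1] = x[t] + dt * (beta * x_tau / (1 + x_tau ** n)
                                - gamma * x[t])
    return x[hist:]

series = mackey_glass()
print(series.shape)  # (1000,)
```

A forecasting task then typically asks a model to predict `series[t + k]` from a window of past values, which is how reservoir benchmarks of this kind are usually framed.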
High-gain spontaneous parametric down-conversion (SPDC) produces bright squeezed vacuum with rich high-dimensional entanglement, but its output is inherently multimodal and non-perturbative, making the full modal characterization a major computational bottleneck. We propose a physics-guided deep neural network that reconstructs the source's modal fingerprint: the high-dimensional correlation signature across radial and azimuthal indices. We designed a FiLM-modulated convolutional architecture that predicts the joint (m,l) distribution, and training is driven by a hybrid loss that couples data-driven metrics (JSD, KL, MSE, Wasserstein) with a soft orbital-angular-momentum (OAM) conservation term, providing an essential inductive bias toward physically consistent solutions. Across gain regimes, our method achieves high-fidelity reconstruction with average JSD of 1.96e-3, WEMD of 1.54e-3, and KL divergence of 7.85e-3, delivering an approximate 128-fold speedup over full numerical simulation and more than 30% accuracy gains over U-Net baselines. These results demonstrate that physics-guided learning, via a soft OAM-conservation regularizer and physically generated training targets, enables rapid and data-efficient modal characterization. Compared with traditional numerical simulation, our mesh-free method has demonstrated good generalization with limited or contaminated training data and has enabled fast "online" prediction of the quantum dynamics of a high-dimensional entanglement system for real-world experimental implementation.
Periodic (Floquet) driving enables Hamiltonian engineering and nonequilibrium phases, but interacting systems eventually heat by absorbing energy from the drive. Disorder can greatly delay this process, yielding long-lived prethermal plateaus. Here we show that this protection can fail when pulse-train control introduces a second driving frequency and when the disorder fluctuates. Using a natural-abundance 13C nuclear-spin network in diamond, we observe sharp peaks in the late-time heating rate at the double- and triple-spin-flip resonance conditions predicted by bimodal Floquet interference, and track their evolution with drive frequency. A switching-noise model attributes the resonant absorption to stochastic electron-spin dynamics that intermittently tune rare nuclear clusters into multi-photon resonance. Our results reveal a resonance-activated limit for disorder-stabilized Floquet phases and suggest new routes to DC-field quantum sensing based on an abrupt breakdown of prethermalization.
A single photon in a superposition of $d$ modes naturally encodes a $d$-dimensional quantum system, a so-called qudit. We show that such superpositions can be leveraged to achieve a quantum speed-up of remote state preparation (RSP): a primitive for several quantum network protocols. For a superposition over $d\geq 2$ modes, the photon state can encode up to $\log_2(d)$ qubits, which we exploit in a proposed reflection-based RSP protocol with multiple variations. For single-qubit RSP, we achieve a performance comparable to the best known existing schemes but with reduced requirements for phase stabilization. For many-qubit RSP, the achievable success rates remain high despite needing exponentially many temporal modes, since only one photon needs to be transmitted and detected to prepare multiple qubits. By simultaneously preparing many qubits at once, we bypass limited qubit lifetimes and improve fidelities beyond what is achievable with existing RSP protocols.
Quantum annealing is a quantum algorithm to solve combinatorial optimization problems. In the current quantum annealing devices, the dynamic range of the input Ising Hamiltonian, defined as the ratio of the largest to the smallest coefficient, significantly affects the quality of the output solution due to limited hardware precision. Several methods have been proposed to reduce the dynamic range by reducing large coefficients in the Ising Hamiltonian. However, existing studies do not take into account minor-embedding, which is an essential process in current quantum annealers. In this study, we revisit three existing coefficient-reduction methods under the constraints of minor-embedding. We evaluate to what extent these methods reduce the dynamic range of the minor-embedded Hamiltonian and improve the sample quality obtained from the D-Wave Advantage quantum annealer. The results show that, on the set of problems tested in this study, the interaction-extension method effectively improves the sample quality by reducing the dynamic range, while the bounded-coefficient integer encoding and the augmented Lagrangian method have only limited effects. Furthermore, we empirically show that reducing external field coefficients at the logical Hamiltonian level is not required in practice, since minor-embedding automatically has the role of reducing them. These findings suggest future directions for enhancing the sample quality of quantum annealers by suppressing hardware errors through preprocessing of the input problem.
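The dynamic-range metric central to the abstract above is straightforward to compute: it is the ratio of the largest to the smallest nonzero coefficient magnitude in the Ising Hamiltonian. The dict-based problem encoding below is an assumption made for the sketch, not the paper's code.

```python
# Minimal sketch: dynamic range of an Ising Hamiltonian given as
# external fields h (site -> coefficient) and couplings J (pair -> coefficient).
# The dict encoding is a hypothetical convenience, not the paper's format.

def dynamic_range(h, J):
    """Ratio of largest to smallest nonzero |coefficient|."""
    coeffs = [abs(v) for v in list(h.values()) + list(J.values()) if v != 0]
    return max(coeffs) / min(coeffs)

h = {0: 1.0, 1: -0.25}
J = {(0, 1): 4.0, (1, 2): 0.5}
print(dynamic_range(h, J))  # 16.0
```

Coefficient-reduction methods such as those revisited in the paper aim to shrink this ratio, since a hardware with limited analog precision cannot faithfully represent coefficients spanning too wide a range.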
Monitored quantum circuits host a rich variety of exotic non-equilibrium phases. Among the most representative examples are measurement-induced phase transitions between distinct area-law entangled states. However, because these transitions are characterized by specific entanglement quantities such as mutual information or topological entanglement entropy that are nonlinear functionals of the density matrix, their experimental observation requires multiple identical quantum trajectories via post-selection, which becomes exponentially unfeasible for large systems. Here, we leverage modern machine learning tools to address this challenge. We devise a neural network architecture combining a convolutional neural network with an attention mechanism, and use raw measurement outcomes directly as input to classify trivial, long-range entangled, and symmetry-protected topological phases. We show that the system's relaxation to a steady-state phase manifests as a sharp convergence in the classifier's accuracy, entirely bypassing the need for quantum state reconstruction. We systematically study the performance of our network as a function of sample size, input data, spatial and temporal constraints, and system size scalability. Our results demonstrate that this approach is robust and post-selection free, offering a practical pathway for experimentally probing measurement-induced phases.
Subradiance, a hallmark cooperative phenomenon in waveguide QED, is characterized by a universal power-law scaling of decay rates with system size and underpins many applications in quantum information storage. Here, we demonstrate that disorder drives a sharp transition in the typical subradiant decay rates from power-law to exponential scaling, a phenomenon we term the subradiant scaling transition (SST). Through rigorous finite-size scaling analysis, we establish the SST as a critical phenomenon, characterized by a diverging characteristic scale of the decay rates at the transition point $W_c=0$. Physically, the SST originates from Anderson localization, manifested by the physical equivalence between the characteristic scale and the localization length of the subradiant states. Our findings provide deep insights into the interplay between disorder and collective dynamics, unifying the underlying physical mechanisms of exponentially-scaled subradiant decay rates and Anderson localization in waveguide QED.
High-dimensional quantum systems greatly outperform their two-dimensional counterparts in channel capacity, quantum complexity and efficiency, quantum communication security, etc. A Bell-state analyzer (BSA) is a crucial prerequisite for a number of quantum communication protocols. We propose an approach for completely and deterministically distinguishing a set of arbitrary $d$-dimensional ($d \geq 3$) Bell states via indefinite causal order (ICO). In previous schemes, bit and phase information are discriminated in succession. Exploiting the gravitational ICO as the sole resource, we propose several high-dimensional BSA schemes. Independent of the dimension, a set of generalized Bell states is completely and deterministically discriminated by adjusting the form of the embedded local single-qudit gates within the ICO switch and measuring each qudit in the $\{|0\rangle, |1\rangle, \cdots, |d-1\rangle\}$ basis. Notably, in our high-dimensional BSA process, the indefinite causal structure is not consumed. Hence, a completely nondestructive high-dimensional BSA can be achieved by iterating the indefinite causal structure process for two rounds.
We study a discrete-time quantum walk in the presence of a detector initially placed at $x_D$. The detector is repeatedly removed after a span of $t_R$, the removal time, and reinserted at random locations. Two relocation rules are considered: in Model~1, the detector is reinserted at any site beyond $x_D$, while in Model~2, reinsertion is done within a restricted window around the detector's position at that time. Both variants behave like a semi-infinite walk (SIW) for large $t_R$, where the detector acts effectively as a fixed boundary. However, in the rapid-relocation regime, i.e., when $t_R$ is small, the behaviours differ: Model~1 permits greater spreading due to unrestricted reinsertion. The time evolution of the occupation probability ratio of our walker to that of an infinite walker at $x_D$, i.e., $f(x_D,t)/f_\infty(x_D,t)$, initially shows the features of a SIW up to $t=t_R$, then exhibits some oscillatory behaviour, and finally reaches a saturation value for both models. The enhancement of this ratio under certain conditions on $x_D$ and $t_R$ is a purely quantum mechanical effect. The saturation ratio shows a crossover behavior below and above a removal time $t_R^*$. At sites $x \neq x_D$, the occupation probability ratios at a given time reveal that for small $t_R$ the behaviours of the two models are drastically different from each other, as well as from the SIW, the quenched quantum walk (QQW), and the moving detector quantum walk (MDQW). The correlation ratios of the two models with that of the infinite walk (IW) show interesting time dependence for sites to the left or right of the initial detector position $x_D$.
We investigate absorption and scattering of structured light by atoms, treating the photon and the atomic center of mass as spatially localized wave packets. We show that vortex photons can transfer orbital angular momentum (OAM) to the atomic center of mass with near-perfect efficiency in head-on collisions when the impact parameter $b$ is smaller than the atomic transverse coherence length $\sigma$, which ranges from nanometers to sub-micrometer scales. Larger offsets result in a shifted mean OAM and a finite variance, both controlled by the ratio $b/\sigma$. The wave-packet nature of light enables electronic transitions that violate standard selection rules, albeit with a clear hierarchy where the dipole transition dominates. For femtosecond pulses, the finite spatial coherence of the photon leads to measurable shaping of the resonant absorption lines. We demonstrate a transverse recoil of the atom in the vicinity of the photonic vortex, dubbed "the superkick", and its dual effect, "the selfkick", when an initially twisted atomic packet experiences recoil upon absorbing a Gaussian photon. These phenomena are within reach of experimental capabilities using structured light in combination with cold atomic beams and ions in Penning traps, providing a route to the controlled generation and manipulation of non-Gaussian atomic packets.
Black hole spacetimes provide a natural setting for quantum systems in curved spacetime, where effects such as Hawking radiation arise from event horizons. In this work, we investigate the impact of the Hawking effect on quantum imaginarity in Schwarzschild spacetime, focusing on the nonlocal advantage of quantum imaginarity (NAQI) and assisted imaginarity distillation. The NAQI is significantly affected by Hawking radiation, exhibiting a pronounced difference between physically accessible and inaccessible regions: it is suppressed in the physically accessible region with increasing Hawking temperature and may vanish, while remaining absent in the physically inaccessible region across the parameter regime. For assisted imaginarity distillation, the Hawking effect modifies the assisted fidelity in a state-dependent manner. In the physically accessible region, the fidelity generally decreases with increasing temperature, indicating reduced distillation capability, whereas the physically inaccessible region exhibits the opposite monotonic trend, indicating enhanced distillation capability. These results highlight distinct operational behaviors of physically accessible and inaccessible regions under relativistic effects, providing insight into quantum imaginarity in curved spacetime.
We propose a scheme to engineer the superradiant phase transition (SPT) in cavity magnonics by periodically modulating the frequency of the magnon mode. The studied system is composed of a yttrium iron garnet (YIG) sphere positioned inside a microwave cavity, where magnons in the YIG sphere are strongly coupled to microwave photons. Under the Floquet drive, the effective frequencies of both the cavity and magnon modes can be readily controlled via the frequency and strength of the Floquet field. This tunability allows the cavity magnonic system to support a rich steady-state phase diagram, featuring parity-symmetric, parity-symmetry-broken, bistable, and unstable phases. With increasing Floquet-field strength, the system exhibits a discontinuous phase transition from the parity-symmetric phase to the parity-symmetry-broken phase at a critical threshold, accompanied by an abrupt jump of the magnon occupation from zero to a finite value. Upon further increase of the Floquet-field strength, the magnon occupation declines continuously from a nonzero value back to zero, corresponding to a second-order phase transition that restores the parity-symmetric phase. Additionally, fluctuations in magnon number during the SPT process are examined. Our work establishes an alternative route to engineer the cavity-magnon SPT without relying on a microwave parametric drive.
We present a unified quantum-mechanical derivation of the Wallis formula from two solvable radial systems: the circular states of the three-dimensional isotropic harmonic oscillator and the lowest-radial-branch states of the planar Fock--Darwin problem, including the lowest Landau level sector. In both cases, the radial probability density has the exact form $P(r)\propto r^\nu e^{-\lambda r^2}$, which yields the scale-independent reciprocal observable $Q=\langle r\rangle\langle r^{-1}\rangle$. The two systems realize the even and odd half-integer Gamma-function branches of the same moment formula, so that the associated finite Wallis partial products are determined by $Q$ in one case and by $Q^{-1}$ in the other. In the large-angular-momentum regime, the corresponding states become localized on a thin spherical shell or a narrow annulus, with vanishing relative radial width, so that $Q\to1$ and both finite-product representations reduce to the Wallis formula for $\pi$.
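The scale independence of $Q$ and its large-angular-momentum limit can be checked numerically. The closed form used below, $Q=\Gamma(\tfrac{\nu+2}{2})\Gamma(\tfrac{\nu}{2})/\Gamma(\tfrac{\nu+1}{2})^2$, is my own evaluation of the moments of the stated density $P(r)\propto r^\nu e^{-\lambda r^2}$ (the factor $\lambda$ cancels, consistent with the scale independence claimed in the abstract), not a formula quoted from the paper.

```python
from math import exp, lgamma

# For P(r) ∝ r^nu * exp(-lam * r^2), the moments <r> and <1/r> give
# (my derivation; lam cancels, so Q is scale independent):
#   Q = Gamma((nu+2)/2) * Gamma(nu/2) / Gamma((nu+1)/2)**2
# lgamma is used instead of gamma to avoid overflow at large nu.
def Q(nu):
    return exp(lgamma((nu + 2) / 2) + lgamma(nu / 2)
               - 2 * lgamma((nu + 1) / 2))

print(Q(1))     # pi/2  ~ 1.5708  (odd half-integer Gamma branch)
print(Q(2))     # 4/pi  ~ 1.2732  (even branch)
print(Q(1000))  # -> 1 in the large-angular-momentum regime
```

The two branches at $\nu=1$ and $\nu=2$ show how $Q$ and $Q^{-1}$ pick out successive ratios of half-integer Gamma values, which is the mechanism behind the finite Wallis partial products described in the abstract.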
We extend the algebraic diversity (AD) framework from classical signal processing to quantum measurement theory. The central result -- the Quantum Algebraic Diversity (QAD) Theorem -- establishes that a group-structured positive operator-valued measure (POVM) applied to a single copy of a quantum state produces a group-averaged density matrix estimator that recovers the spectral structure of the true density matrix, analogous to the classical result that a group-averaged outer product recovers covariance eigenstructure from a single observation. We establish a formal Classical-Quantum Duality Map connecting classical covariance estimation to quantum state tomography, and prove an Optimality Inheritance Theorem showing that classical group optimality transfers to quantum settings via the Born map. SIC-POVMs are identified as algebraic diversity with the Heisenberg-Weyl group, and mutually unbiased bases (MUBs) as algebraic diversity with the Clifford group, revealing the hierarchy $\mathrm{HW}(d) \subseteq \mathcal{C}(d) \subseteq S_d$ that mirrors the classical hierarchy $\mathbb{Z}_M \subseteq G_{\min} \subseteq S_M$. The double-commutator eigenvalue theorem provides polynomial-time adaptive POVM selection. A worked qubit example demonstrates that the group-averaged estimator from a single Pauli measurement recovers a full-rank approximation to a mixed qubit state, achieving fidelity 0.91 where standard single-basis tomography produces a rank-1 estimate with fidelity 0.71. Monte Carlo simulations on qudits of dimension $d = 2$ through $d = 13$ (200 random states per dimension) confirm that the Heisenberg-Weyl QAD estimator maintains fidelity above 0.90 across all dimensions from a single measurement outcome, while standard tomography fidelity degrades as $\sim 1/d$, with the improvement ratio scaling linearly with $d$ as predicted by the $O(d)$ copy reduction theorem.
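The classical side of the duality, that a group-averaged outer product of a single observation recovers covariance eigenstructure, can be illustrated with a small numerical sketch. The choice of the cyclic group $\mathbb{Z}_M$ below is my assumption for illustration, not the paper's construction: averaging over cyclic shifts makes the estimate circulant, so its eigenvectors are the DFT vectors regardless of the single observed sample.

```python
import numpy as np

# Sketch (assumed cyclic group Z_M, not necessarily the paper's setup):
# average the rank-1 outer product x x^T of a single observation over the
# group action. The entry (i, j) of the average depends only on
# (i - j) mod M, so the result is circulant and shares the Fourier (DFT)
# eigenvectors -- eigenstructure fixed by the group, from one sample.
M = 8
rng = np.random.default_rng(0)
x = rng.standard_normal(M)

P = np.roll(np.eye(M), 1, axis=0)                  # cyclic shift generator
Pk = [np.linalg.matrix_power(P, k) for k in range(M)]
avg = sum(Q @ np.outer(x, x) @ Q.T for Q in Pk) / M

# Circulant check: invariance under a simultaneous row-and-column shift.
is_circulant = np.allclose(avg, np.roll(np.roll(avg, 1, axis=0), 1, axis=1))
print(is_circulant)  # True
```

The quantum statement in the abstract is the analogue of this picture with the outer product replaced by a density-matrix estimator and the cyclic group replaced by the Heisenberg-Weyl or Clifford group.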
High-fidelity quantum operations require the system dynamics to be strictly confined to the computational subspace. In practice, however, control fields inevitably couple to leakage levels, giving rise to quantum state leakage that significantly reduces the fidelity of the operation. To address this challenge, we propose a general strategy for actively suppressing leakage errors by applying small, static offsets to tunable system parameters. This approach systematically mitigates leakage's detrimental impact on quantum control, without modifying the original control framework or incurring additional time overhead. By avoiding the need for extra suppression pulses or complex optimization procedures altogether, it offers a streamlined solution for leakage compensation while remaining fully compatible with subsequent optimal control techniques. Numerical validation conducted on superconducting quantum circuits demonstrates effective leakage suppression, enabling high-fidelity single-qubit gates, precise control of two-qubit interactions, and perfect state transfer in multi-level systems. Moreover, when integrated with optimal control techniques, our approach also allows for the cooperative suppression of both leakage errors and residual crosstalk. Therefore, this work provides a feasible technical pathway toward the low error thresholds required for fault-tolerant quantum computation.
Finding the ground state of complex quantum systems remains a central challenge in many-body physics, quantum chemistry, and combinatorial optimization, due to the exponential growth of the Hilbert-space dimension and the entangled structure of ground states. We show that quantum Landau--Lifshitz-Gilbert (QLLG) dynamics, proposed in [Phys. Rev. Lett. 133, 266704 (2024)], provides a physically realizable, real-time nonlinear mechanism that selectively suppresses excited-state components and drives the system toward the lowest-energy eigenstate contained in the initial state. Unlike purely numerical methods such as the imaginary-time projection method, QLLG combines coherent precession with dissipative suppression, enabling experimentally accessible ground-state preparation. For random initial states in the $N$-qubit Hilbert space of dimension $2^N$, convergence occurs in times scaling linearly with system size, $N$, and inversely with the spectral gap. We provide numerical simulations of our analytical results with a Hamiltonian describing an interacting spin chain with Heisenberg exchange and a Zeeman term. Our results identify nonlinear quantum dissipation as a powerful tool for real-time ground-state preparation in large quantum systems and quantum optimization.
Quantum entanglement plays a fundamental role in quantum cryptography and computation. An important example of quantum entanglement can be found in the correlations of Einstein, Podolsky, and Rosen (EPR). However, despite the plethora of articles related to the topic, different interpretations of the EPR correlations coexist, and a consensus has not yet been reached. In this article, we seek to demonstrate, through the simple and direct application of quantum formalism, how events separated by timelike intervals can, strangely enough, help us better understand some aspects of the so-called "quantum nonlocality" associated with EPR correlations.
It was recently shown that Newtonian dynamics of macroscopic particles can be derived from unitary Schrödinger evolution under an assumption on the system-environment interaction, namely that the interaction Hamiltonian effectively exhibits a random-matrix structure, leading to stochastic yet unitary evolution on state space. The derivation is geometric: classical phase space is realized as a submanifold of quantum state space, and Schrödinger evolution, when restricted to the corresponding tangent bundle, reproduces Newtonian motion, while environmental interactions ensure localization near this submanifold. In the present work, this framework is extended to quantum fields. We construct manifolds of states localized near classical field configurations and show that classical fields arise as coordinates on these manifolds. The extension is achieved by embedding both particle and field degrees of freedom into a joint state-space geometry and analyzing the induced evolution on the tangent bundle of localized states. Within this setting, the unitary Schrödinger dynamics, combined with the random-matrix model of system-environment interaction, yields effective diffusion in state space together with repeated localization due to environmental recording. As a result, although field states are not themselves confined near classical configurations, the interaction constrains the particle to probe only a restricted sector of the field, corresponding to a tubular neighborhood of localized field states. The resulting dynamics reproduces classical field equations, including the sourced Klein-Gordon equation and the corresponding force law. Classical field behavior thus emerges from unitary quantum dynamics without recourse to coherent states or modifications of the Schrödinger equation, and the formulation extends naturally to other fields, including the electromagnetic field.
Classical simulation of quantum circuits remains indispensable for algorithm development, hardware validation, and error analysis in the noisy intermediate-scale quantum (NISQ) era. However, state-vector simulation faces exponential memory scaling, with an n-qubit system requiring O(2^n) complex amplitudes, and existing simulators often lack the flexibility to exploit heterogeneous computing resources at runtime. This paper presents a GPU-accelerated quantum circuit simulation framework that introduces three contributions: (1) an empirical backend selection algorithm that benchmarks CuPy, PyTorch-CUDA, and NumPy-CPU backends at runtime and selects the optimal execution path based on measured throughput; (2) a directed acyclic graph (DAG) based gate fusion engine that reduces circuit depth through automated identification of fusible gate sequences, coupled with adaptive precision switching between complex64 and complex128 representations; and (3) a memory-aware fallback mechanism that monitors GPU memory consumption and gracefully degrades to CPU execution when resources are exhausted. The framework integrates with Qiskit, Cirq, PennyLane, and Amazon Braket through a unified adapter layer. Benchmarks on an NVIDIA A100-SXM4 (40 GiB) GPU demonstrate speedups of 64x to 146x over NumPy CPU execution for state-vector simulation of circuits with 20 to 28 qubits, with speedups exceeding 5x from 16 qubits onward. Hardware validation on an IBM quantum processing unit (QPU) confirms Bell state fidelity of 0.939, a five-qubit Greenberger-Horne-Zeilinger (GHZ) state fidelity of 0.853, and circuit depth reduction from 42 to 14 gates through the fusion pipeline. The system is designed for portability across NVIDIA consumer and data-center GPUs, requiring no vendor-specific compilation steps.
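The empirical backend-selection idea in contribution (1) can be sketched as a micro-benchmark loop: time each candidate backend on a small representative workload and keep the fastest. To stay self-contained, the sketch below registers only a NumPy backend; the real framework also probes CuPy and PyTorch-CUDA, and its actual API is not given in the abstract, so all names here are hypothetical.

```python
import time
import numpy as np

def _bench(apply_gate, state, gate, reps=5):
    """Average wall-clock time of applying a gate with one backend."""
    t0 = time.perf_counter()
    for _ in range(reps):
        apply_gate(state, gate)
    return (time.perf_counter() - t0) / reps

def numpy_apply(state, gate):
    # Contract a single-qubit gate into the first qubit of the state vector.
    n = state.shape[0] // 2
    return (gate @ state.reshape(2, n)).reshape(-1)

def select_backend(backends, n_qubits=16):
    """Benchmark each registered backend and return the fastest one's name."""
    state = np.zeros(2 ** n_qubits, dtype=np.complex64)
    state[0] = 1.0
    h = np.array([[1, 1], [1, -1]], dtype=np.complex64) / np.sqrt(2)
    timings = {name: _bench(fn, state, h) for name, fn in backends.items()}
    return min(timings, key=timings.get)

best = select_backend({"numpy-c64": numpy_apply})
print(best)  # "numpy-c64" (only one candidate registered in this sketch)
```

In a full implementation, GPU-backed candidates would be added to the registry when their libraries import successfully, which also gives the memory-aware CPU fallback of contribution (3) a natural hook: drop a backend from the registry when its allocation fails.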
Universal photon blockade in a two-mode Jaynes-Cummings model incorporating third-order Kerr nonlinearity is demonstrated with a single two-level atom coupled to a waveguide microcavity. Realization of this universal photon blockade is attributed to the cooperative effects of field-atom coupling and Kerr nonlinearity. More importantly, this antibunching is found to be robust against the atomic spontaneous emission, driving field strength, and defect-induced cavity mode coupling. The strong antibunching effect in this resonance-driven scheme is essentially different from those without Kerr nonlinearity. Moreover, this work expands the platform for achieving universal photon blockade and reveals the cooperative advantages of nonlinearities in enhancing the purity and brightness of single-photon sources, representing a novel strategy toward high-performance single-photon sources in integrated quantum optical devices.
Violation of the Clauser-Horne-Shimony-Holt (CHSH) inequality certifies genuine quantum correlations. In this work, we formalize in Lean 4 the rigidity theorem -- any strategy achieving near-optimal CHSH value must be locally isometric to the canonical qubit strategy. In the course of formalization, we identified a gap in the argument of McKague, Yang, and Scarani (arXiv:1203.2976).
In superconducting quantum circuits, decoherence improvements are frequently obtained through process interventions that simultaneously modify surface chemistry, microstructural topology, and device geometry, leaving mechanistic attribution structurally underdetermined. Predictive materials engineering requires measurable structural statistics to be separated from geometry-dependent coupling coefficients into independently testable factors. We introduce the concept of classical and quantum microstructure. In that context, we formulate a channel-wise separable framework for decoherence in superconducting transmon qubits in which each loss channel is described by a reduced prescriptor. Here, a channel-specific microstructural state variable is determined independently of device geometry, and a geometry-dependent coupling functional is computable from field solutions without reference to surface chemistry. We derive this product form from a spatially resolved kernel representation and establish a perturbative separability criterion that defines the regime where independent variation of the variables is valid. The framework specifies five prescriptor classes for dominant loss pathways in transmon-class devices. Falsifiability is operationalized through a pre-committed 2x2 experimental protocol in which the variables must satisfy independent ratio checks within propagated uncertainty. A Minimum-Dataset Specification standardizes reporting for cross-laboratory inference. Part I establishes the conceptual and mathematical architecture; coordinated experimental validation is reserved for Part II.
We present an analytical theory for the most subradiant modes in a finite one-dimensional emitter array coupled to either an ideal or a nonideal waveguide. Using an effective non-Hermitian Hamiltonian together with a Bragg-edge open-boundary ansatz, we derive compact eigenvalue expressions showing that the linewidths of the most subradiant states exhibit a universal N^{-3} scaling in both cases. However, in the deep-subwavelength regime, the decay rates display even-odd oscillations due to boundary interference. Furthermore, we demonstrate that the collective energy shift of the most subradiant state approaches a constant value that depends on the atomic separation, with the leading finite-size correction scaling as N^{-2}. These results unify the roles of Bragg-edge interference, finite-size effects, and near-field dipole-dipole interactions in shaping ultranarrow, strongly shifted subradiant resonances, providing a transparent framework beyond the ideal-waveguide limit and opening potential applications in subradiant spectroscopy and waveguide-QED-based sensing.
Entropic uncertainty relations provide an information-theoretic framework for quantifying the fundamental indeterminacy inherent in quantum mechanics. We propose more stringent quantum-memory-assisted entropic uncertainty relations for complete sets of mutually unbiased bases in multipartite scenarios. We present lower and upper bounds of the quantum uncertainties based on the complementarity of the observables, the purity of the measured state, the (conditional) von Neumann entropies, the Holevo quantities, and mutual information. The results are illustrated by several representative cases, showing that our bounds are tighter than previously existing ones.
We introduce the notion of a dismagicker: a non-Clifford unitary gate designed to reduce the non-stabilizerness (also called magic) of quantum many-body states. Although both entanglement and non-stabilizerness are fundamental quantum resources, they require distinct control strategies. While disentanglers (unitary operations that lower entanglement) are well-established in tensor network methods, an analogous concept for non-stabilizerness suppression has been largely missing. In this work, we define a dismagicker as a non-Clifford unitary operation that actively suppresses non-stabilizerness, steering states toward classically simulatable stabilizer states. We develop an optimization method for constructing dismagickers within the Matrix Product States framework. Our numerical results show that the non-stabilizerness reduction procedure, when combined with entanglement reduction steps with Clifford circuits, significantly improves the accuracy of both classical simulation of many-body systems and quantum state preparation on quantum devices. The dismagicker enriches our toolkit for the manipulation of many-body states by unifying non-stabilizerness and entanglement reduction.
We find that reinforcement exponentially reduces computation time of the quantum search problem from $\sqrt{D}$ to $\ln D$ in a $D$-dimensional system. Therefore, a reinforced quantum search is expected to exhibit an exponentially larger noise threshold compared to a standard search algorithm in a noisy environment. We use numerical simulations to characterize the level of noise tolerance via reinforcement in the presence of both coherent and incoherent noise, considering a system of $N$ qubits and a single $D$-level (qudit) system. Our results show that reinforcement significantly enhances the algorithm's success probability and improves the scaling of its computation time with system size. These findings indicate that reinforcement offers a promising strategy for error mitigation, especially when a precise noise model is unavailable.
The development of quantum networks (QNs) relies on efficient mechanisms for distributing entanglement among multiple quantum users (QUs) under practical system constraints. This paper investigates the problem of entanglement rate maximization in a dual-connectivity (DC) wireless quantum network comprising multiple quantum base stations (QBSs). Under the DC architecture, each QU can associate with up to two QBSs, thereby enhancing resource utilization compared to conventional single-connectivity (SC) schemes. The joint QBS-QU association and entanglement generation rate allocation problem is formulated as a mixed-integer nonlinear programming problem that incorporates practical constraints, including limited QBS entanglement generation capacity as well as heterogeneous minimum entanglement rate demands and fidelity requirements for QUs. To efficiently solve this challenging problem, an alternating optimization (AO) algorithm is developed, which decomposes the original formulation into entanglement rate allocation and association subproblems. Simulation results demonstrate that the proposed DC architecture significantly outperforms SC schemes, while the AO algorithm achieves near-optimal performance with substantially reduced computational complexity.
Mutually Unbiased Bases (MUBs) constitute a fundamental geometric structure in quantum theory, known for providing an optimal measurement scheme for quantum state tomography. In prime and prime-power dimensions, analytical constructions of maximal sets of MUBs are well known, and the standard construction relies on the Weyl-Heisenberg (WH) group and finite fields. In non-prime-power dimensions, on the other hand, the existence of such maximal sets remains an open question. We present a generalized numerical method for constructing MUBs without any reliance on a priori group structure or specific algebraic frameworks. Formulating the problem at the level of the Gram matrix, we reduce the search for complete sets of $d+1$ MUBs in dimension $d$ to a phase space optimisation problem. We use the fact that the MUB Gram matrix is a projection matrix, and that the third- and fourth-order trace constraints are necessary and sufficient conditions for a valid projection matrix. We further develop a classification framework based on third-order Bargmann invariants and automorphism groups, allowing us to probe the underlying algebraic and geometric structure of the resulting configurations. Numerical applications of this method in dimensions $3$, $4$, and $5$ demonstrate that all numerically constructed solutions are mutually isomorphic, are isolated points in phase space, and possess automorphism groups that coincide exactly with the Clifford group, the normalizer of the WH group. Though the scope of the search was limited, in dimension $d = 6$ our numerical search yielded no MUBs within the explored parameter space.
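For prime dimensions, the maximal construction mentioned above can be written down explicitly. A small sketch for odd primes using the quadratic-phase (Ivanović/Weyl-Heisenberg) bases, together with a direct check of the unbiasedness condition $|\langle u|v\rangle|^2 = 1/d$; the function names are ours, and this is the textbook construction, not the paper's numerical optimiser.

```python
import numpy as np

def mub_set(p):
    """Ivanovic construction for an odd prime p: the computational basis plus
    p quadratic-phase bases form the maximal set of p+1 MUBs."""
    w = np.exp(2j * np.pi / p)
    k = np.arange(p)
    bases = [np.eye(p, dtype=complex)]
    for a in range(p):
        # Column b of basis a has entries w^(a k^2 + b k) / sqrt(p).
        cols = [w ** ((a * k * k + b * k) % p) / np.sqrt(p) for b in range(p)]
        bases.append(np.array(cols).T)
    return bases

def is_unbiased(bases, tol=1e-9):
    """Every cross-basis overlap must have squared modulus 1/d."""
    d = bases[0].shape[0]
    for i in range(len(bases)):
        for j in range(i + 1, len(bases)):
            g = np.abs(bases[i].conj().T @ bases[j]) ** 2
            if not np.allclose(g, 1 / d, atol=tol):
                return False
    return True
```

The unbiasedness of the quadratic-phase bases follows from the magnitude $\sqrt{p}$ of Gauss sums, which is why the construction is restricted here to odd primes.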
Unitary $k$-designs are central to quantum information and quantum many-body physics as efficient proxies for Haar-random dynamics. We study how chaotic Hamiltonian evolution can generate unitary $k$-designs. Standard approaches typically rely on many independent Hamiltonian realizations or fine-tuning evolution times. Here we show that unitary designs can instead arise from a quenched temporal ensemble, where Hamiltonians are sampled once and held fixed, while randomness enters only through the evolution times. We analyze a two-step protocol (2SP), applying $H_1$ for time $t_1$ and $H_2$ for time $t_2$, and a three-step protocol (3SP) with an additional quench, with all times randomly drawn from a prescribed distribution. Time averaging imposes energy-index matching in the frame potential (FP), which quantifies the distance to Haar randomness. Analytically and numerically, we show that 2SP cannot realize a general unitary $k$-design, whereas 3SP can do so for arbitrary $k$. The advantage of 3SP is that the additional random phases impose stronger constraints, eliminating independent permutation degrees of freedom in the FP. For Gaussian unitary ensemble Hamiltonians, we prove these results rigorously and show that under imperfect time averaging, 3SP achieves the same accuracy as 2SP with a parametrically narrower time window.
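The frame potential used above has a simple Monte-Carlo estimator: for an exact $k$-design (and for Haar), $F_k = k!$, so an ensemble can be diagnosed by how close its estimated $F_k$ lies to that value. A sketch under our own sampling choices (QR-based Haar sampling, random unitary pairs); these helpers are illustrative, not the paper's numerics.

```python
import numpy as np

def haar_unitary(d, rng):
    """Sample a Haar-random unitary via QR of a complex Ginibre matrix,
    with the QR phase ambiguity fixed so the distribution is exactly Haar."""
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    phases = np.diag(r) / np.abs(np.diag(r))
    return q * phases  # multiplies column j by phase j

def frame_potential(unitaries, k=1, n_pairs=2000, seed=1):
    """Monte-Carlo estimate of F_k = E |Tr(U V^dag)|^{2k} over distinct
    random pairs drawn from the ensemble."""
    rng = np.random.default_rng(seed)
    m = len(unitaries)
    total = 0.0
    for _ in range(n_pairs):
        i = rng.integers(m)
        j = rng.integers(m)
        while i == j:          # exclude the trivial U = V pair
            j = rng.integers(m)
        total += np.abs(np.trace(unitaries[i] @ unitaries[j].conj().T)) ** (2 * k)
    return total / n_pairs
```

For a Haar ensemble the estimate should concentrate near $k! = 1$ at $k=1$; an ensemble failing to be a $k$-design shows a systematically larger value.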
Parametrised quantum circuits are a central framework for near term quantum machine learning. However, it remains challenging to determine in advance how architectural choices, such as encoding strategies, gate placement, and entangling structure, influence both the expressive capacity of the model and its trainability during optimisation. We introduce a data-agnostic framework, one requiring no knowledge of a training dataset or optimisation trajectory, that maps a broad family of circuits into a single architecture matrix built over learnable features and parameters. We show that this framework provides an explicit link between circuit structure, the correlations among learnable features, and the geometry of training kernels through the factorisation of each of these objects as quadratic forms in terms of these matrices. We show how correlations between learnable features arise from shared parameter-induced harmonics generated by non-commuting gate-observable interactions during Heisenberg back-propagation, and how these correlations are encoded directly in the architecture matrix. From this perspective, kernel structure and coefficient statistics can be reconstructed analytically from circuit design alone, without reference to a dataset or optimisation trajectory. The resulting framework makes circuit-induced structure explicit, separating architectural effects from data-dependent ones, and provides a principled foundation for analysing and comparing parametrised quantum circuits based on intrinsic, design-level signatures.
Beyond ground state energy estimation, quantum phase estimation (QPE) applied to many-electron systems has the potential to output a projection on the ground state, which would enable the evaluation of observables other than the energy. In this article, after recalling the role of QPE free parameters, we detail the derivation of first-order and unified conditions on unitaries that allow us to control the energy estimation precision and lead to tighter bounds than in previous works. We then introduce a novel condition that allows us to also control the state projection precision. We apply these conditions to a Trotterization case, leading to tighter bounds than the previous ones. The main results in this article are formal, with a first numerical illustration on the H2 molecule, which allows us to derive useful insights.
A common view in monitored quantum dynamics is that local measurements suppress entanglement growth. We show that this intuition can fail in a one-dimensional spinful fermionic chain governed by a BCS Hamiltonian with pairing strength $\Delta$ and subject to continuous, on-site, spin-resolved charge measurements at rate $\gamma$. Using free-fermion simulations and quasiparticle analysis, we show that pairing suppresses entanglement growth, while measurements suppress pairing. Their competition yields measurement-enhanced entanglement: for $\Delta>0$, the steady-state entanglement $S_s$ increases with $\gamma$ over a finite interval $0<\gamma<\gamma_{\rm peak}$. This occurs because stronger measurements suppress pairing correlations, which would otherwise suppress entanglement growth. Using a nonlinear sigma-model calculation and free-fermion simulations, we provide evidence that for $\Delta>0$ and small but finite $\gamma$, the steady-state entanglement scales as $S_s\sim \ln^2 L$. This implies that, in this setting, measurement-enhanced entanglement does not persist in the thermodynamic limit.
Quantum clock synchronization (QCS) aims to establish a shared temporal reference between distant nodes by exploiting uniquely quantum phenomena such as entanglement, single-photon interference, and quantum correlations. In contrast to classical synchronization and time-transfer techniques, which are limited by signal propagation delays, atmospheric disturbances, and oscillator drift, QCS protocols offer the potential to surpass classical precision bounds and enhance resilience against adversarial manipulations. As precise and secure time synchronization underpins distributed quantum networks, navigation systems, and emerging quantum Internet infrastructures, understanding QCS principles, capabilities, and implementation challenges has become increasingly important. This survey provides a unified and critical overview of the rapidly growing QCS research landscape, highlighting fundamentals, protocol types, enabling resources, performance constraints, security considerations, and practical implementations of QCS. We first introduce the theoretical underpinnings of QCS, including entanglement-assisted time transfer, Hong-Ou-Mandel interference-based synchronization, and quantum slow-clock transport. We then categorize the main QCS protocols, ranging from ticking-qubit and entanglement-based schemes to time-of-arrival correlation methods, conveyor-belt synchronization, and quantum-enhanced two-way time transfer. This organization clarifies the relationships between protocol families and their achievable precision advantages over classical methods. Key quantum resources such as spontaneous parametric down-conversion-based entangled photon pairs, Greenberger-Horne-Zeilinger and W multipartite states, squeezed and frequency-entangled light, quantum frequency combs, and quantum memories are reviewed in the context of scalability and robustness.
Quantum simulation and computing has traditionally been based on two main paradigms, namely, digital and analog. In the digital paradigm, single- and two-qubit gates (where qubit is short for quantum bit) are usually employed as building blocks for scalable, universal quantum computing, although errors add up fast and error correction will ultimately be needed for scaling up. In the analog paradigm, large analog blocks are normally employed for a unitary dynamics that carries out the computation, enabling quantum operations on many qubits with reduced errors, but with the drawback of a limited choice of evolutions and lack of universality. In the past decade, a new paradigm has emerged, showing interesting possibilities for quantum simulation and computing in the near and mid term. This is the paradigm of digital-analog quantum technologies, which proposes to combine the best of both paradigms: large analog blocks, provided by native interactions of the employed quantum platform, enabling scalability, combined with digital gates, allowing for more versatility and, ultimately, universality. In this Perspective, I give an overview of the evolution of the field along the past decade, and an outlook for its future possibilities.
Characterizing quantum states is essential for validating quantum devices, yet conventional quantum state tomography becomes prohibitively expensive as system size grows. Direct tomography offers a distinct route by enabling selective access to individual complex density-matrix elements, with a particular advantage for sparse target states and some verification tasks. Here we introduce a direct quantum state tomography scheme combining strong-measurement estimation with a fan-out coupling architecture. It enables mutually commuting interactions between system qubits and a single meter qubit, thereby achieving constant circuit depth, independent of system size. Notably, the involutory fan-out coupling reduces to the identity under repetition, enabling straightforward noise scaling for quantum error mitigation. We experimentally validate the scheme on a superconducting quantum processor via the IBM Quantum Platform, demonstrating four-qubit state reconstruction and single-circuit GHZ-state fidelity estimation up to 20 qubits with error mitigation. Consistent results with standard tomography and improved efficiency establish our scheme as a promising approach to reconstructing full quantum states and scalable verification tasks.
This thesis develops a decision-theoretic framework for extracting thermodynamic work from temporal correlations in quantum systems. We model a classical agent -- lacking quantum memory -- performing adaptive work extraction through continuous inference and decision-making under uncertainty. By introducing $\rho^*$-ideal protocols, we demonstrate that exploiting memory effects allows adaptive strategies to surpass non-adaptive bounds. We formalize this via the Time-Ordered Free Energy (TOFE), a novel upper bound for causal, adaptive operations that reveals a thermodynamic gap linked to adaptive ordered discord. Additionally, we tackle work extraction from unknown sources using reinforcement learning. By adapting multi-armed bandit algorithms, we show an agent can simultaneously learn an unknown i.i.d. quantum state and extract work, achieving polylogarithmic cumulative dissipation that significantly outperforms standard tomography. Overall, this work lays the foundation for predictive and learning-based quantum thermodynamics.
We formulate a global-position colored-permutation encoding for the capacitated vehicle routing problem. Each of the $K$ vehicles selects a disjoint partial permutation, and the sum of these $K$ color layers forms a full $n\times n$ permutation matrix that assigns every customer to exactly one visit position. This representation uses $n^2K$ binary decision variables arranged as $K$ color layers over a common permutation structure, while vehicle capacities are enforced by weighted sums over the entries of each color class, requiring no explicit load register and hence no extra logical qubits beyond the routing variables. In contrast, many prior quantum encodings introduce an explicit capacity or load representation with additional qubits. Our construction is designed to exploit the Constraint-Enhanced QAOA framework together with its encoded-manifold analyses. Building on a requirements-based view of quantum utility in CVRP, we develop a routing optimization formulation that directly targets one of the main near-term bottlenecks, namely the additional logical-qubit cost of vehicle labels and explicit capacity constraints. Our proposal shows strong algorithmic performance in addition to qubit efficiency. On a standard benchmark suite, our end-to-end pipeline recovers the independently verified optima. The feasibility oracle may also be of independent interest as a reusable polynomial-time decoding and certification primitive for quantum and quantum-inspired routing pipelines.
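The feasibility oracle described above is, classically, a cheap polynomial-time check: the $K$ color layers must sum to a full $n\times n$ permutation matrix, and each layer's weighted row sums must respect its vehicle's capacity, with no load register needed. A sketch under our own variable layout (layer entry `[c, p] = 1` means customer `c` occupies global visit position `p`); the function name is illustrative.

```python
import numpy as np

def is_feasible(layers, demands, capacities):
    """Feasibility check for the colored-permutation encoding.

    layers     : list of K binary n x n matrices, one per vehicle
    demands    : length-n array of customer demands
    capacities : length-K list of vehicle capacities
    """
    total = sum(layers)
    # The layer sum must be a permutation matrix: each customer is visited
    # exactly once, and each global position is used exactly once.
    if not (total.sum(axis=0) == 1).all() or not (total.sum(axis=1) == 1).all():
        return False
    # Capacity: weighted sum over the entries of each color class.
    for x, cap in zip(layers, capacities):
        served = x.sum(axis=1)                # 0/1 indicator per customer
        if (served * demands).sum() > cap:
            return False
    return True
```

Since the entries are nonnegative integers, row and column sums equal to one already force the layer sum to be a 0/1 permutation matrix, so no separate binarity check is needed.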
Assembling large-scale, defect-free Rydberg atom arrays is a key technology for neutral-atom quantum computation. Dynamic holographic optical tweezers enable the assembly and reconfiguration of such arrays, but phase mismatches between successive holograms can induce destructive interference and transient trap loss during spatial-light-modulator refresh. In this work, we introduce the weighted-projective Gerchberg--Saxton (WPGS) algorithm, a phase-stable approach to dynamic hologram updates for large-scale Rydberg atom-array reconfiguration. By enforcing inter-frame trap-phase continuity while retaining weighted intensity equalization, WPGS suppresses refresh-induced transient degradation. The phase-difference distribution between consecutive holograms further provides a simple diagnostic of transient robustness. Moreover, enforcing the phase constraint reduces the number of iterations required at each update step, thereby accelerating hologram generation. Numerical simulations of 2D and 3D reconfiguration with more than $10^3$ traps, including multilayer assembly and interlayer transport, show robust transient intensities and significantly faster updates than conventional methods. These results establish inter-frame phase continuity as a practical design principle for dynamic holographic control and scalable neutral-atom array reconfiguration.
We propose a quantum measurement-based framework for probabilistic transformation of grayscale images using adaptive positive operator-valued measures (POVMs). In contrast to existing approaches, which are largely centered around segmentation or thresholding, the transformation is formulated here as a measurement-induced process acting directly on pixel intensities. The intensity values are embedded in a finite-dimensional Hilbert space, which allows the construction of data-adaptive measurement operators derived from Gaussian models of the image histogram. These operators naturally define an unsharp measurement of the intensity observable, with the reconstructed image obtained through expectation values of the measurement outcomes. To control the degree of measurement localization, we introduce a nonlinear sharpening transformation with a sharpening parameter, $\gamma$, that induces a continuous transition from unsharp measurements to projective measurements. This transition reflects an inherent trade-off between probabilistic smoothing and localization of intensity structures. In addition to the nonlinear sharpening parameter, we introduce another parameter $k$ (the number of Gaussian centers), which controls the resolution of the image during the transformation. Experimental results on standard benchmark images show that the proposed method gives effective data-adaptive transformations while preserving structural information.
The phenomenon of interaction-free measurement (IFM) enables the probabilistic detection of an absorbing object with reduced photon absorption. We report the experimental implementation of a simultaneous IFM of multiple objects using a single quantum probe on Quandela's cloud-based Ascella photonic processor. We demonstrate sequential IFM of up to 5 objects using a single photon, significantly extending the original IFM scheme for a single object. The experimental error-mitigated results confirm the theoretical predictions for this sequential IFM setup, and demonstrate a practical approach to scaling IFM to more complex quantum interrogation tasks.
Theories of the measured homodyne current generated by a stochastic Schrödinger equation (SSE) can be tested in a simulation of the Einstein-Podolsky-Rosen (EPR) correlations for a two-mode squeezed state. We carry out such a simulation, and determine the correct stochastic term for the measured current in the broad-band limit. Stratonovich rather than Ito stochastic noise agrees with experiment. We show that this is relevant to measurement noise and errors in quantum technologies. By analyzing the SSE trajectories as measurement settings are changed, we propose a modern version of Schrödinger's gedanken experiment, where one measures position and momenta simultaneously, ``one by direct, the other by indirect measurement''.
The phrase ``buy a quantum computer'' hides several different procurement problems. An institution may be seeking cloud access for teaching, reserved capacity for research, a local instrument for hardware training, an optimization appliance, or a strategic installation that reshapes facilities, staffing, and budgets. Because these choices differ in purpose, operating burden, and useful lifetime, the decision should be framed as acquisition of \emph{quantum capability} rather than selection of a presumed hardware winner. This manuscript develops a practical procurement framework that distinguishes five capability layers, separates peer-reviewed results from commercial offerings, pricing anchors, and public roadmaps, and compares the main commercial platform families -- superconducting circuits, trapped ions, neutral atoms, quantum annealing, and photonics -- through the lens of institutional fit, access model, and refresh pressure. The main conclusion is that most institutions should begin with the smallest layer of capability that produces repeatable near-term value, builds internal expertise, and preserves strategic flexibility. Large on-premises systems are justified only when mission requirements, site readiness, staffing, governance, and upgrade paths are already clear.
This paper presents a quantum search approach to combinatorial constraint satisfaction problems, demonstrated through the generation of magic squares. We reformulate magic square construction as a quantum search problem in which a reversible, constraint-sensitive oracle marks valid configurations for amplitude amplification via Grover's algorithm. Classical pre-processing using the Siamese construction and partial constraint checks generates a compact candidate domain before quantum encoding. Rather than integrating classical and quantum solvers in an iterative loop, this work uses the classical component for structured initialisation and the quantum component for search, and benchmarks the quantum approach against classical brute-force enumeration and backtracking. Our Qiskit implementation demonstrates the design of multi-register modular arithmetic circuits, oracle logic, and diffusion operators. Experiments are conducted on small grid instances, as larger grids are intractable on classical statevector simulators due to exponential memory growth. The results validate the correctness of the proposed quantum search pipeline and confirm the theoretical quadratic query advantage over classical search.
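The ingredients of the pipeline described above can be illustrated classically: the constraint predicate a Grover oracle would mark, the resulting $(\pi/4)\sqrt{N/M}$ iteration count, and the brute-force baseline. A sketch with helper names of our own choosing; the actual Qiskit oracle and diffusion circuits are the paper's contribution and are not reproduced here.

```python
from itertools import permutations
import math

def is_magic(square):
    """The constraint a Grover oracle would mark: all rows, columns, and
    both diagonals of an n x n square sum to the magic constant."""
    n = len(square)
    target = n * (n * n + 1) // 2
    rows = all(sum(r) == target for r in square)
    cols = all(sum(square[i][j] for i in range(n)) == target for j in range(n))
    diag = sum(square[i][i] for i in range(n)) == target
    anti = sum(square[i][n - 1 - i] for i in range(n)) == target
    return rows and cols and diag and anti

def grover_iterations(domain_size, n_solutions):
    """Optimal Grover iteration count, ~ (pi/4) * sqrt(N / M)."""
    return max(1, round(math.pi / 4 * math.sqrt(domain_size / n_solutions)))

def count_magic(n=3):
    """Classical brute-force baseline: enumerate permutations of 1..n^2."""
    count = 0
    for perm in permutations(range(1, n * n + 1)):
        square = [perm[i * n:(i + 1) * n] for i in range(n)]
        if is_magic(square):
            count += 1
    return count
```

For the 3x3 case the raw domain has $9! = 362880$ candidates with 8 magic squares, so Grover needs on the order of $10^2$ oracle queries versus $10^5$ classical evaluations; classical pre-processing shrinks the domain (and hence the query count) further before quantum encoding.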
PulsePol is an elegantly designed pulse-sequence-based quantum control scheme that enables polarization transfer between electron and nuclear spins, for example, in nitrogen-vacancy (NV) centers. However, previous analyses of PulsePol assumed very strong, near-ideal, instantaneous microwave pulses, which is rarely achievable at higher magnetic fields. We revisit the PulsePol scheme under finite-pulse constraints and show that its performance significantly degrades due to finite-pulse effects. Using bimodal Floquet theory, we identify the symmetry-breaking mechanism responsible for this deterioration in fidelity. By phase adjustment, we reestablish the proper symmetry of the interaction-frame spin Hamiltonian, leading to a sequence called Q-PulsePol, where "Q" reflects the restored quadrature symmetry. Our results demonstrate robustness to finite-pulse effects and improved polarization transfer efficiency, establishing Q-PulsePol as a practical and reliable scheme for bulk hyperpolarization of nuclear spins in solids using a single-mode (zero-quantum or double-quantum) transfer. This work bridges idealized quantum control with realistic pulse engineering, establishing design rules for spin-based quantum control protocols.
Contextuality and nonlocality are distinct manifestations at the foundation of quantum mechanics, yet their coexistence within a single quantum state remains subtle. In a hybrid CHSH--KCBS scenario involving the entanglement of a qubit and a qutrit, the qutrit supports the KCBS contextuality test, and the CHSH nonlocality arises from correlations between the qubit and qutrit. Here, we derive the analytical closed-form expressions for both inequalities and also simulate this physics on a quantum circuit. We show that contextuality is governed solely by a population parameter $p_2$, associated with the occupation of the qutrit subsystem in the $|2\rangle$ level, which plays a distinguished role in the KCBS structure. In contrast, nonlocality depends irreducibly on coherence, involving both amplitudes and phases encoded in parameters $(X_i, Y_i)$. This separation of physical resources reveals parameter regimes that optimize KCBS violation while suppressing CHSH violation, and vice versa. As a result, the optimal regions do not overlap, and coexistence is restricted to a narrow intermediate regime in parameter space.
Data-driven surrogates can replace expensive multiphysics solvers for parametric PDEs, yet building compact, accurate neural operators for three-dimensional problems remains challenging: in Fourier Neural Operators, dense mode-wise spectral channel mixing scales linearly with the number of retained Fourier modes, inflating parameter counts and limiting real-time deployability. We introduce HQ-LP-FNO, a hybrid quantum-classical FNO that replaces a configurable fraction of these dense spectral blocks with a compact, mode-shared variational quantum circuit mixer whose parameter count is independent of the Fourier mode budget. A parameter-matched classical bottleneck control is co-designed to provide a rigorous evaluation framework. Evaluated on three-dimensional surrogate modeling of high-energy laser processing, coupling heat transfer, melt-pool convection, free-surface deformation, and phase change, HQ-LP-FNO reduces trainable parameters by 15.6% relative to a classical baseline while lowering phase-fraction mean absolute error by 26% and relative temperature MAE from 2.89% to 2.56%. A sweep over the quantum-channel budget reveals that a moderate VQC allocation yields the best temperature metrics across all tested configurations, including the fully classical baseline, pointing toward an optimal classical-quantum partitioning. The ablation confirms that mode-shared mixing, naturally implemented by the VQC through its compact circuit structure, is the dominant contributor to these improvements. A noisy-simulator study under backend-calibrated noise from ibm-torino confirms numerical stability of the quantum mixer across the tested shot range. These results demonstrate that VQC-based parameter-efficient spectral mixing can improve neural operator surrogates for complex multiphysics problems and establish a controlled evaluation protocol for hybrid quantum operator learning in practice.
We propose a globally-admissible phenomenological spectral density of the bath for the non-Markovian Brownian motion of an optomechanical resonator, motivated by the near-resonance experimental observation of a non-Ohmic spectrum in [Nat. Commun. 6, 7606 (2015)]. To avoid divergences arising from a naive global extrapolation, we construct this phenomenological bath spectral density that reproduces the observed local-power-law behavior near the mechanical resonance while remaining well defined globally, ensuring the finiteness of the bath-induced renormalizations and quadrature fluctuations of the resonator. The corresponding model of the structured environment produces a nonlocal mechanical susceptibility whose analytic pole structure encodes the observed linewidth. The resulting dissipation kernel exhibits a power-law-modulated exponential decay with transient negativity, signaling strong memory effects. In the weak-coupling regime, the optical readout based on homodyne detection enables near-resonance spectroscopy and, with a calibrated drive on the resonator, permits, in principle, the reconstruction of the full mechanical susceptibility, thereby providing access to both the dissipative and dispersive bath contributions. Our results provide a consistent route from locally-inferred spectral properties to globally-admissible open-system descriptions and establish a framework for probing structured environments in cavity optomechanics.
Quantum coherence provides a controllable thermodynamic resource that can raise or lower the effective temperature of a cavity mode, enabling efficiency tuning in quantum heat engines. Here, we derive analytic expressions for the effective engine temperature, demonstrating the enhanced temperature tunability achievable via $N$-level ground-state coherence. We further unify ground- and excited-state coherence within a single analytic framework, revealing their interplay as a mechanism for thermodynamic control. Such quantum resources serve as tunable parameters that enable switching between heating, cooling, and cancellation regimes, driving the effective temperature from near-zero to divergence. Ultimately, our framework connects and generalizes previous models of quantum heat engines, and we identify rubidium atoms as a promising candidate for experimentally realizing these coherence-assisted effects.
Recently, Yamaguchi and Kempf [Phys. Rev. Lett. 136:010801, arXiv:2501.02757] proved that encrypted qubits can be cloned. In this work, we generalize the encrypted cloning protocol and prove that it also applies to higher-dimensional quantum systems. Given that a straightforward generalization of the protocol using the exponential of the shift and phase operators fails to satisfy the unitarity requirement for a quantum gate, we propose a different approach. We introduce a new operator to be used in the encryption process and show that it is unitary. We adapt the decryption operator from the reference paper to fit into the framework of multi-level quantum systems. We analyze the circuit implementation of the proposed operators and show that the overhead imposed by larger dimensions scales linearly with the qudit dimension.
Contextuality and measurement incompatibility are two fundamental aspects of nonclassicality, and their manifestations in observed quantum correlations are often deeply interconnected. Recently, measurement incompatibility has been studied in connection with nonlocality, particularly in terms of their robustness under various quantum channels. This line of investigation helps establish a connection between the channels that break nonlocality and those that break incompatibility. In this study, we focus on an asymmetric bipartite Bell scenario involving three and four inputs on Alice's and Bob's sides, respectively, with each input having dichotomous outcomes. Under the assumption of locality, the observed statistics in this asymmetric scenario obey the Elegant Bell inequality (EBI). Here, we use a different version of the EBI that relies on the assumption of preparation noncontextuality. By taking the violation of this noncontextual version of the EBI as a witness of preparation contextuality, we establish a connection between the channels that break contextuality and the channels that break triple-wise measurement incompatibility. Our results suggest that any channel which breaks EBI contextuality will also break Clauser-Horne-Shimony-Holt (CHSH) nonlocality; however, the reverse does not hold. We also show that a depolarising channel that breaks $N$-wise incompatibility can also break a certain form of contextuality, witnessed by a generalised inequality involving $N$ measurements on one wing of a bipartite Bell scenario.
We investigate the nonequilibrium dynamics of an open photon Bose-Einstein condensate in a dye-filled microcavity using a Lindblad master-equation approach, treating the condensate and the noncondensed fluctuations on the same footing. The driven-dissipative condensate exhibits a long-lived, metastable plateau stabilized by a ghost attractor, a fixed point that lies outside the physical domain in configuration space, yet stalls the condensate dynamics for exceedingly long times before it dephases to zero [Phys. Rev. Lett. 135, 053402 (2025)]. Despite the nonequilibrium origin of this dynamical stabilization, the condensate exhibits quasithermal fluctuations in the plateau in that the relative order-parameter fluctuations scale as the inverse square root of the system size. A linear stability analysis further reveals the presence of exceptional points, resulting in multiple non-Hermitian phase transitions associated with the relaxation dynamics into and out of the metastable condensate.
Damage in infrastructure is often hidden until it becomes costly or dangerous. Common examples include corrosion under insulation, early fatigue damage in steel, corrosion of embedded reinforcement, and abnormal current flow in batteries and power equipment. Magnetic methods are attractive because they can sense through coatings, insulation, and concrete cover without couplants, but field performance is often limited by lift-off, low-frequency drift, background magnetic noise, and the weak low-frequency response of pickup coils. This review examines two room-temperature quantum receiver platforms: optically pumped atomic magnetometers (OPMs) and nitrogen-vacancy (NV) diamond magnetometers. Rather than treating them as stand-alone sensors, we compare them as parts of a full measurement chain that includes source physics, geometry, readout, calibration, and interpretation. The literature is organized into four magnetic signal classes: driven induction responses, leakage fields in magnetic flux leakage inspection, passive self-fields linked to stress or corrosion, and fields produced by operational currents. OPMs are strongest for low-frequency, phase-referenced induction measurements, while NV sensors are strongest for near-surface field mapping, vector or gradient measurements, and differential current sensing in compact solid-state heads. Across all applications, deployment depends less on best-case sensitivity than on usable bandwidth, dynamic range, background rejection, geometry control, calibration, and validation. The clearest path to field use is therefore robust instrument engineering tied to qualification methods that reflect real inspection conditions.
Neural quantum states are powerful variational wavefunctions, but it remains unclear which many-body states can be represented efficiently by modern additive architectures. We introduce Walsh complexity, a basis-dependent measure of how broadly a wavefunction is spread over parity patterns. States with an almost uniform Walsh spectrum require exponentially large Walsh complexity from any good approximant. We show that shallow additive feed-forward networks cannot generate such complexity in the tame regime, e.g. polynomial activations with subexponential parameter scaling. As a concrete example, we construct a simple dimerized state prepared by a single layer of disjoint controlled-$Z$ gates. Although it has only short-range entanglement and a simple tensor-network description, its Walsh complexity is maximal. Full-cube fits across system size and depth are consistent with the complexity bound: for polynomial activations, successful fitting appears only once depth reaches a logarithmic scale in $N$, whereas activation saturation in $\tanh$ produces a sharp threshold-like jump already at depth $3$. Walsh complexity therefore provides an expressibility axis complementary to entanglement and clarifies when depth becomes an essential resource for additive neural quantum states.
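As a minimal numerical sketch (the paper's precise definitions of the Walsh spectrum and of Walsh complexity may differ), one can take the Walsh spectrum of an amplitude vector to be its fast Walsh-Hadamard transform and quantify spread with an inverse participation ratio; for the dimerized state obtained by acting with disjoint controlled-$Z$ gates on $|+\rangle^{\otimes N}$, the spectrum is maximally uniform:

```python
import numpy as np

def fwht(a):
    """Orthonormal fast Walsh-Hadamard transform (length a power of two)."""
    a = np.array(a, dtype=float)
    n, h = len(a), 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a / np.sqrt(n)

def walsh_ipr(psi):
    """Inverse participation ratio of the Walsh spectrum: 1 if a single
    parity pattern carries all the weight, 2**N for a uniform spread."""
    p = np.abs(fwht(psi)) ** 2
    p /= p.sum()
    return 1.0 / np.sum(p ** 2)

# Dimerized state: CZ on qubit pairs (0,1) and (2,3) applied to |+>^4,
# i.e. uniform amplitudes with signs (-1)^(x0 x1 + x2 x3).
N = 4
x = np.arange(2 ** N)
bits = (x[:, None] >> np.arange(N)) & 1
signs = (-1.0) ** (bits[:, 0] * bits[:, 1] + bits[:, 2] * bits[:, 3])
psi = signs / np.sqrt(2 ** N)

print(walsh_ipr(psi))                 # 16.0: uniform over all 2^4 parity patterns
print(walsh_ipr(np.ones(16) / 4.0))   # 1.0: product state |+>^4, single pattern
```

Despite having only short-range entanglement, the CZ-dimerized state spreads its sign structure over every parity pattern, which is the mechanism behind the maximal Walsh complexity claimed in the abstract.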
Non-reciprocity and geometric frustration enable many-body systems to avoid crystalline order and instead exhibit complex, liquid-like behavior. Here we show that their interplay is richer than the sum of its parts, leading to surprising structural and dynamical phenomena. In our minimal model, two copies of Ising gauge theory are non-reciprocally coupled in a way that crucially preserves a local $\mathbb{Z}_2$ symmetry. We discover that the combined Wilson loop observable of the two copies exhibits linear asymptotic scaling, with a quasiparticle-pair confinement length tuned by the strength of the non-reciprocal coupling. Key dynamical features are revealed in the behavior of individual deconfined excitations due to strong interactions induced by the non-reciprocity, leading to motion on a critical percolation cluster that follows a self-avoiding trail. Mapping from this quasiparticle dynamics onto the magnetic noise spectrum, we discover that non-reciprocity tunes topological logarithmic contributions and causes long-lived metastable states due to quasiparticle trapping. Our work opens the way for broader investigations of geometrically frustrated non-reciprocity.
The ultrafast nonlinear optical response of molecular ensembles is fundamentally altered under strong light-matter coupling. To rigorously isolate the genuine many-body contributions, an exact time-domain field-subtraction protocol is developed within a fully non-perturbative Maxwell-Liouville framework explicitly incorporating the two-exciton manifold in real space and time. This approach reveals that while collective cavity delocalization drives the macroscopic nonlinear signal toward a severe harmonic cancellation (an effect termed "spectral starvation"), intrinsic many-body molecular interactions robustly resurrect genuine polaritonic double-quantum coherences (DQCs). This many-body resurrection is governed by a universal two-photon matching rule, $\Delta_B + 4J = \Omega_R$, linking molecular anharmonicity ($\Delta_B$) to the macroscopic Rabi splitting ($\Omega_R$) and excitonic coupling ($J$). Crucially, this dictates that J-aggregates ($J < 0$) uniquely isolate the resonant many-body state below the dense two-exciton scattering continuum, protecting the macroscopic coherence from spatial fragmentation. This predictive framework establishes a direct phase diagram to engineer and protect optical nonlinearities across diverse strongly coupled platforms.
Coherent interfaces between microwave-frequency quantum systems and low-loss optical links are essential for quantum networks. However, existing microwave-optical transducers often trade conversion efficiency against added noise, bandwidth, and device integrability. Here, we demonstrate coherent microwave-to-optical transduction based on magnon-exciton coupling in the layered antiferromagnet CrSBr. Driving the antiferromagnetic resonance with microwave signals imprints coherent modulation on a reflected optical probe, generating optical sidebands that are resonantly enhanced near excitonic transitions. While prior magnon-based approaches to microwave-to-optical transduction have typically relied on intrinsically weak off-resonant magneto-optical effects (e.g., Faraday rotation), our scheme exploits strong light-matter interactions at exciton resonances. Even in a bulk crystal without cavity enhancement, we observe coherent conversion over an intrinsically broadband window of ~ 300 MHz. We further show that multiple exciton-polariton resonances inherit the magnon-coupled response, suggesting a route to broaden the usable optical detuning range and to mitigate optical dissipation. Our results establish magnon-coupled excitons in layered magnets as a scalable platform for broadband microwave-optical interfaces, with pathways to higher cooperativity via reduced magnetic volume and cavity integration.
The literature provides several bounds for quantum local recovery, which essentially consider the number of message qudits, the distance, the length, and the locality of the involved codes. We give a family of $J$-affine variety codes that result in impure CSS codes. These quantum codes exceed several of the above-mentioned bounds that apply to pure quantum locally recoverable codes. We also discuss a connection between bounds on quantum local recovery and on weight-constrained stabilizer codes.
The statistics of gaps between quantum energy levels is a hallmark criterion in quantum chaos and quantum integrability studies. The relevant distributions corresponding to exactly integrable vs. fully chaotic systems are universal and described by the Poisson vs. Wigner-Dyson curves. In the transitional regime between integrability and chaos, the distributions are much less universal and have not been understood quantitatively until now. We point out that the relevant statistics that controls these distributions is that of the matrix elements of the nonintegrable perturbation Hamiltonian in the energy eigenbasis of the unperturbed integrable system. With this insight, we formulate a simple random matrix ensemble that correctly reproduces the level spacing distributions in a variety of test systems. For the distribution of matrix elements appearing in our construction, we furthermore discover surprising universal features: across a variety of physical systems with diverse degrees of freedom, these distributions are dominated by simple power laws.
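A toy numerical illustration of this transition (a convenience ensemble, not the construction of the paper): adding a generic symmetric random perturbation to a Poissonian diagonal spectrum drives the mean consecutive-spacing ratio from the Poisson value $\langle \tilde r\rangle \approx 0.39$ toward the Wigner-Dyson (GOE) value $\approx 0.53$.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_r(H):
    """Mean consecutive-spacing ratio <min(s_n, s_n+1)/max(s_n, s_n+1)>:
    ~0.386 for Poisson statistics, ~0.531 for the GOE."""
    E = np.sort(np.linalg.eigvalsh(H))
    s = np.diff(E)
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return r.mean()

n = 600
H0 = np.diag(np.sort(rng.uniform(0, n, n)))   # "integrable" part: Poisson levels
V = rng.normal(size=(n, n))
V = (V + V.T) / 2                             # generic symmetric perturbation

print(mean_r(H0))             # ~0.39: Poisson
print(mean_r(H0 + 5.0 * V))   # ~0.53: Wigner-Dyson (GOE)
```

Intermediate perturbation strengths interpolate between the two values, which is the regime where the statistics of the perturbation's matrix elements, as the abstract argues, controls the spacing distribution.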
The CHSH mod 3 Bell inequality is a natural testbed for higher-dimensional quantum nonlocality, yet its maximal quantum violation and self-testing properties have remained unresolved. We determine its exact maximal quantum value and show that, up to unitary equivalence and the natural symmetries of the inequality, it admits a unique optimal irreducible strategy; equivalently, there are four symmetry-related optimal irreducible strategies. Each of these strategies uses a maximally entangled two-qutrit state. We further prove that any strategy whose value is within $\varepsilon$ of the optimum is $O(\sqrt{\varepsilon})$-close, up to local isometries, to a direct sum of optimal irreducible strategies.
We investigate whether commutativity is necessary to represent relativistic locality for localization observables of relativistic quantum systems in Minkowski spacetime. A well known no-go theorem by Halvorson and Clifton shows that commutativity of localization effects for causally separated regions is incompatible with other seemingly natural assumptions about spatial localization. Since commutativity is taken to represent locality in the Araki-Haag-Kastler framework of QFT, this prompts the question whether it follows from more elementary locality principles of quantum theory. Using Busch's operational analysis in terms of no-signaling and relativistic consistency, we argue that for particle-like systems commutativity is not implied by these principles. Assuming a natural local detectability principle, elementary localization observables are not localized in arbitrarily small spacetime neighborhoods of the relevant spatial regions, but rather in regions containing the entire rest space (a Cauchy surface) on which the measurement is performed. This reflects the particle picture itself, where localization occurs at a unique place on a rest space filled with ideal detectors, and therefore does not directly conflict with the Araki-Haag-Kastler notion of locality. We also show that commutativity and localization can coexist for less idealized localization procedures. To this end, we introduce conditional localization POVMs associated with bounded spatial regions interpreted as laboratories. By the gentle measurement lemma, these observables describe conditional localization probabilities and can, in principle, satisfy commutativity for causally separated laboratories. They may therefore be represented by local observables in the Araki-Haag-Kastler sense. Explicit examples will be presented in forthcoming work within local QFT.
The year 2025 has been designated by UNESCO as the International Year of Quantum Science and Technology. 125 years ago, Max Planck's discovery of radiation quanta started the quantum era, and 100 years ago quantum mechanics was discovered by Schroedinger, Heisenberg, Bohr, Pauli, Dirac, Born, Fermi, and many others. By now, quantum mechanics is the theoretical foundation of most fields of physics and chemistry, and it is the basis for modern nanotechnology. How about plasma physics? How important are quantum effects in plasmas? In which experiments are quantum effects observed, and where do they govern the behavior of plasmas? How can these effects be treated theoretically and via computer simulations? Starting with a brief historical overview, we discuss the broad parameter range that is characteristic for plasmas and outline where quantum effects are relevant. This is the case primarily for warm dense matter and inertial fusion plasmas. We provide an overview of the theoretical quantum methods that are available for these dense plasmas and how their respective advantages can be combined in order to achieve predictive capability. The key is a downfolding approach that is based on first-principles simulations.
We study the gravitational production of spectator massless vector particles in a single-field inflationary scenario, and the related entanglement generation across the Hubble horizon. Accordingly, we consider a quasi-de Sitter background evolution, with additional metric inhomogeneities induced by the inflaton quantum fluctuations. Afterwards, we compute the corresponding production amplitude and show that it depends only on the transverse polarizations, appearing \emph{de facto} gauge-invariant, consistently with our interpretation of the vector field as the electromagnetic one. We notice that particle wavelengths turn out to be small compared to the Hubble radius, thus favoring sub-Hubble production relative to super-Hubble production. In particular, highly energetic vector particles are preferentially produced and we show that polarization effects provide a significant contribution to this behavior. Moreover, the production of nearly collinear particle pairs appears as the most probable configuration, due to the background conformal invariance of the theory and the plane-wave (massless particle-like) nature of the metric perturbation. We thus specialize our treatment to super-Hubble scales, confirming their subdominant contribution to the number density of produced particles, albeit setting a corresponding lower bound on the reheating temperature. In this scheme, we explore superhorizon entanglement between sub- and super-Hubble field modes, computing the corresponding von Neumann entropy and discussing the effects of horizon crossing on the generation of primordial entanglement.
Tensor network methods, particularly those based on Matrix Product States (MPS), provide a powerful framework for simulating quantum many-body systems. A persistent computational challenge in these methods is the selection of the bond dimension $\chi$, which controls the trade-off between accuracy and computational cost. Fixed bond dimension strategies either waste resources in low-entanglement regions or lose fidelity in high-entanglement regions. This work introduces an adaptive bond dimension management framework that uses von Neumann entropy feedback coupled with a Proportional-Integral-Derivative (PID) controller to dynamically adjust $\chi$ at each bond during simulation. An Exponential Moving Average (EMA) filter stabilizes entropy measurements against transient fluctuations, and a predictive scheduling module anticipates future bond dimension requirements from entropy trends. The per-bond granularity of the allocation ensures that computational resources concentrate where entanglement is largest. The framework integrates GPU-accelerated Singular Value Decomposition (SVD) via CuPy and the cuSOLVER backend, achieving individual SVD speedups of 4.1x at $\chi=256$ and 7.1x at $\chi=2048$ relative to CPU-based NumPy for isolated matrix factorisations (measured on an NVIDIA A100-SXM4-40GB GPU with CuPy 13.4.1 and CUDA 12.8). At the system level, benchmarks on the spin-1/2 antiferromagnetic Heisenberg chain demonstrate a 2.7x reduction in total DMRG wall time compared to fixed-$\chi$ simulations, with energy accuracy within 0.1% of the Bethe ansatz solution. Integration with the Density Matrix Renormalization Group (DMRG) algorithm yields ground-state energies per site converging to $E/N = -0.4432$ for the isotropic Heisenberg model at $\chi = 128$. Validation against the Amazon Web Services (AWS) Braket SV1 statevector simulator confirms agreement within 2-5% for small systems.
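The entropy-feedback loop can be sketched as follows (a minimal illustration; the gains, EMA factor, and set-point rule are hypothetical, not the paper's tuned values): a PID controller drives the bond's log-capacity $\ln\chi$ toward the EMA-smoothed von Neumann entropy.

```python
import math

class BondDimPID:
    """PID loop on the gap between the EMA-smoothed von Neumann entropy S
    and the log-capacity ln(chi) of the current bond. Gains, EMA factor,
    and the set-point rule are illustrative, not the paper's values."""
    def __init__(self, kp=30.0, ki=2.0, kd=8.0, alpha=0.3,
                 chi_min=16, chi_max=2048, margin=1.1):
        self.kp, self.ki, self.kd, self.alpha = kp, ki, kd, alpha
        self.chi_min, self.chi_max, self.margin = chi_min, chi_max, margin
        self.ema = None          # EMA filter state for the entropy signal
        self.integral = 0.0
        self.prev = 0.0

    def update(self, chi, entropy):
        # EMA filter damps transient entropy fluctuations
        self.ema = entropy if self.ema is None else (
            self.alpha * entropy + (1 - self.alpha) * self.ema)
        err = self.margin * self.ema - math.log(chi)   # want ln(chi) >~ S
        self.integral += err
        deriv, self.prev = err - self.prev, err
        step = self.kp * err + self.ki * self.integral + self.kd * deriv
        return max(self.chi_min, min(self.chi_max, int(round(chi + step))))

pid = BondDimPID()
chi = 64
for _ in range(20):                  # high-entanglement bond: chi ramps up
    chi = pid.update(chi, entropy=5.0)
print(chi)                           # grows toward the set point ln(chi) ~ 1.1 S
```

Per-bond instances of such a controller give the granularity the abstract describes: each bond grows or shrinks its own $\chi$ according to the local entanglement it must carry.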
Recently, studies have explored the statistics of matrix elements of local operators in the Lieb-Liniger model. It was found that the probability distribution function for off-diagonal matrix elements $\langle \boldsymbol{\mu}|\mathcal{O}|\boldsymbol{\lambda} \rangle$ within the same macro-state is well described by the Fréchet distributions. This represents a significant development for the Eigenstate Thermalization Hypothesis (ETH). In this paper, we investigate a similar phenomenon in another solvable model: the disorder-free Sachdev-Ye-Kitaev (SYK) model. The Hamiltonian of this model consists of 4-body interactions of Majorana fermions. Unlike the conventional SYK model, the coupling strengths in this model are fixed to a constant, earning it the name ``disorder-free.'' We evaluate the matrix elements of operators constructed from products of $n$ Majorana fermions: $\mathcal{O} = \chi_{a_1}\chi_{a_2}\ldots \chi_{a_n}$. For a general choice of indices and $n \geq 4$, we find that the statistics of the off-diagonal matrix elements are well-fitted by a generalized inverse Gaussian distribution rather than Fréchet distributions.
Information flow is central to contemporary accounts of cognition, yet its physical basis in living neural matter remains poorly specified. Here, we develop a multiscale resource-theoretical framework motivated by the \textit{thermocoherent effect}, where heat flow is reciprocally coupled to a delocalized information flow carried by shared coherence and not reducible to local subsystem variables. Extending this line of work in light of recent results on correlation-enabled Mpemba-type thermal relaxation, we argue that the operational relevance of correlations depends less on their taxonomy than on their dynamical accessibility under the underlying interaction geometry. Relational structure encoded in the state of a single composite system -- including quantum entanglement, quantum discord, and classical correlations -- may therefore act as a usable physical resource that remains hidden from local subsystem descriptions. We propose that electrical, chemical, ionic, and thermal transport processes in neural matter may, under suitable microscopic conditions, generate or transduce partially hidden relational resources whose mutual coupling can progressively build larger-scale thermocoherent organization across spatial or spatiotemporal partitions in neural tissue. Ion-channel interfaces, hydrogen-bonded proton networks, aromatic $\pi$-electron architectures, and phosphate-rich motifs emerge as plausible substrate classes in which such resources may arise, become transiently accessible under environmental coupling, and leave coarse-grained signatures in neural dynamics. The resulting picture is neither a claim of macroscopic quantum cognition nor a reduction of cognition to abstract coding, but a falsifiable framework in which microscopic relational resources can bias transport, relaxation, signaling, and cross-scale neural coordination.
The rapid integration of Large Language Models (LLMs) into scientific writing fundamentally challenges traditional definitions of authorship, responsibility, and scientific integrity. As researchers transition from using computers as deterministic tools to managing them as ``virtual collaborators,'' the nature of human contribution must be re-evaluated. Using the drafting process of a recent computational physics manuscript as a case study, this essay explores the indispensable role of the Human-in-the-Loop (HITL). We demonstrate that while AI excels at structural organization and syntax generation, the human author bears the ultimate responsibility for enforcing rigorous physical logic, maintaining academic diplomacy, and anticipating peer-review critiques. In this paradigm, the human contribution shifts from writing boilerplate text to acting as a Principal Investigator who actively mentors and steers the AI's reasoning. To ensure accountability and preserve the integrity of the scientific record in this new era, I argue that the community must mandate the publication of full, unedited AI interaction transcripts as standard supplementary material.
Combining optical tweezers with fluorescence microscopy is a powerful tool for single-cell analysis, playing a pivotal role in disease diagnosis, cell sorting, and the investigation of cellular dynamics. However, fluorescence detection faces challenges such as blinking, photobleaching, and autofluorescence in biotissues. To address these limitations, we developed a magnetic detection strategy by integrating quantum magnetometry using nitrogen-vacancy centers into optical tweezers, demonstrating precise trapping and manipulation of individual cells in a microfluidic environment. We detected a magnetic signal of 89 $\mu$T from a single cell labeled with magnetic nanoparticles, compared to a noise floor of 3.9 $\mu$T observed in unlabeled cells. This platform provides a promising approach for high-precision single-cell analysis and holds significant potential for probing cellular activities within biological microenvironments.
We formulate Lagrangian descriptors (LDs) in the path integral framework. Averaging the classical LD over fluctuations about extremal trajectories defines a quantum LD that incorporates quantum effects. Invariant manifolds, which sharply organize classical transport, become finite-width phase space structures under quantum fluctuations, and their overlap provides a geometric mechanism consistent with tunneling as fluctuation-induced delocalization of transport barriers. We demonstrate this approach for the Hamiltonian saddle, where path integral sampling reveals manifold broadening and barrier penetration. This establishes a geometric framework for studying phase space transport and tunneling beyond the classical regime, while also providing a natural route toward the application of LDs to field theory.
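A minimal numerical sketch of this construction for the saddle $H=(p^2-x^2)/2$ (an arclength-type LD is assumed; the integration times, step sizes, and fluctuation width are illustrative): the forward LD is small on the stable manifold $x=-p$, and averaging the classical LD over Gaussian initial-condition fluctuations of width $\sqrt{\hbar}$ smears this sharp minimum into a finite-width structure.

```python
import numpy as np

def ld_oneside(x, p, sign, tau=5.0, dt=0.01):
    """One-sided arclength LD for the saddle H = (p^2 - x^2)/2
    (xdot = p, pdot = x), integrated with an explicit Euler step."""
    M = 0.0
    for _ in range(int(tau / dt)):
        dx, dp = p, x
        x, p = x + sign * dt * dx, p + sign * dt * dp
        M += dt * np.hypot(dx, dp)
    return M

def ld(x, p):
    """Two-sided classical LD (forward plus backward integration)."""
    return ld_oneside(x, p, +1) + ld_oneside(x, p, -1)

# The forward LD is minimal on the stable manifold x = -p:
print(ld_oneside(1.0, -1.0, +1))   # trajectory contracts: O(1)
print(ld_oneside(1.0,  0.0, +1))   # generic point: grows like e^tau

# Fluctuation-averaged ("quantum") LD: average the classical LD over
# Gaussian initial-condition fluctuations of width sqrt(hbar), which
# broadens the sharp manifold minimum into a finite-width structure.
rng = np.random.default_rng(1)
hbar = 0.05
samples = rng.normal(scale=np.sqrt(hbar), size=(200, 2))
qld = np.mean([ld(1.0 + dx, -1.0 + dp) for dx, dp in samples])
print(qld)   # exceeds ld(1.0, -1.0): the transport barrier is smeared
```

In the paper the average is taken over path-integral fluctuations about extremal trajectories; the simple initial-condition average above only mimics the resulting broadening of the invariant manifolds.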
This paper completes a previous work by constructing a class of positive-energy relativistic spatial localization observables in Minkowski spacetime within quantum field theory, using the stress-energy-momentum tensor smeared with suitable test functions. For each timelike direction, the construction yields a family of positive operator-valued measures (POVMs) on spacelike hypersurfaces, well defined on every n-particle sector and satisfying a natural relativistic causality condition excluding superluminal propagation of detection probabilities. These observables arise from local or quasi-local field-theoretic quantities and provide a rigorous version of earlier heuristic proposals. In the one-particle sector, the construction reduces to the observable introduced previously, and its first moment reproduces the Newton-Wigner position operator under suitable normalization conditions. Because the normally ordered stress-energy-momentum tensor is not positive on the full Fock space, as implied by the Reeh-Schlieder theorem, we study quantum energy inequalities and derive lower bounds controlling deviations from positivity. This leads to regularized families of positive operators approximating the localization effects. We also construct conditional localization observables for finite laboratories using modified local energy operators and their Friedrichs self-adjoint extensions. Using Haag duality and Kadison's result on affiliation, we show that the resulting conditional POVMs belong to local von Neumann algebras and therefore commute for causally separated regions, in agreement with the Araki-Haag-Kastler framework. These results support the view that commutativity of localization observables is recovered at the level of conditional measurements in finite spacetime regions.
Quantum machine learning (QML) stands at the intersection of quantum computing and artificial intelligence, offering the potential to solve problems that remain intractable for classical methods. However, the current landscape of QML software frameworks suffers from severe fragmentation: models developed in TensorFlow Quantum cannot execute on PennyLane backends, circuits authored in Qiskit Machine Learning cannot be deployed to Amazon Braket hardware, and researchers who invest in one ecosystem face prohibitive switching costs when migrating to another. This vendor lock-in impedes reproducibility, limits hardware access, and slows the pace of scientific discovery. In this paper, we present a framework-agnostic quantum neural network (QNN) architecture that abstracts away vendor-specific interfaces through a unified computational graph, a hardware abstraction layer (HAL), and a multi-framework export pipeline. The core architecture supports simultaneous integration with TensorFlow, PyTorch, and JAX as classical co-processors, while the HAL provides transparent access to IBM Quantum, Amazon Braket, Azure Quantum, IonQ, and Rigetti backends through a single application programming interface (API). We introduce three pluggable data encoding strategies (amplitude, angle, and instantaneous quantum polynomial encoding) that are compatible with all supported backends. An export module leveraging Open Neural Network Exchange (ONNX) metadata enables lossless circuit translation across Qiskit, Cirq, PennyLane, and Braket representations. We benchmark our framework on the Iris, Wine, and MNIST-4 classification tasks, demonstrating training time parity (within 8\% overhead) compared to native framework implementations, while achieving identical classification accuracy.
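The HAL pattern can be sketched as follows (an illustrative mock-up, not the framework's actual API): circuits live in a vendor-neutral intermediate representation, every backend adapter implements one interface, and an encoding strategy is just a function from features to IR.

```python
import numpy as np
from abc import ABC, abstractmethod

# Vendor-neutral IR: a circuit is a list of (gate, qubits, params) tuples.

class Backend(ABC):
    """Hardware abstraction layer: each provider adapter (IBM, Braket, ...)
    would translate the same IR to its native SDK behind this interface."""
    @abstractmethod
    def probabilities(self, circuit, n_qubits):
        """Execute the IR and return a 2**n_qubits probability vector."""

class StatevectorBackend(Backend):
    """Reference simulator adapter used here in place of real hardware."""
    def probabilities(self, circuit, n_qubits):
        psi = np.zeros(2 ** n_qubits, dtype=complex)
        psi[0] = 1.0
        for gate, qubits, params in circuit:
            if gate == "ry":
                (t,), (q,) = params, qubits
                u = np.array([[np.cos(t / 2), -np.sin(t / 2)],
                              [np.sin(t / 2),  np.cos(t / 2)]])
                out = np.zeros_like(psi)
                for i in range(len(psi)):
                    b = (i >> q) & 1
                    i0 = i & ~(1 << q)
                    out[i] = u[b, 0] * psi[i0] + u[b, 1] * psi[i0 | (1 << q)]
                psi = out
            elif gate == "cx":
                c, t = qubits
                out = psi.copy()
                for i in range(len(psi)):
                    if (i >> c) & 1 and not (i >> t) & 1:
                        j = i | (1 << t)
                        out[i], out[j] = psi[j], psi[i]
                psi = out
        return np.abs(psi) ** 2

def angle_encode(features):
    """Angle encoding (one of the three pluggable strategies): one RY
    rotation per input feature, emitted as backend-independent IR."""
    return [("ry", (q,), (x,)) for q, x in enumerate(features)]

backend = StatevectorBackend()
circuit = angle_encode([np.pi, 0.0]) + [("cx", (0, 1), ())]
print(backend.probabilities(circuit, 2))  # all weight on |11> (index 3)
```

Because classifiers are written against `Backend` and the IR alone, swapping the simulator for a hardware adapter requires no change to model code, which is the portability property the abstract targets.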
We present a systematic numerical investigation of the "entanglement-geometry-gravity" chain in random tensor networks (RTN) established by the ER=EPR conjecture and Jacobson's thermodynamic derivation. First, we verify the kinematic foundation: the entanglement first law $\delta\langle K\rangle=\delta S$ (slope=1.000), the encoding of geometry by mutual information (correlation=0.92), and the locality of holographic perturbations (3.3x). We also confirm that gravitational dynamics (JT gravity) does not emerge, identifying a sharp kinematics-dynamics boundary. Second, and more importantly, we discover that many-body localization (MBL) is the mechanism that protects emergent holographic geometry from thermalization. Replacing Haar-random evolution (geometry lifetime $t\sim6$) with an XXZ Hamiltonian plus on-site disorder, we observe a finite-size crossover at disorder strength $W_c\approx10-12$ above which mutual-information-lattice correlations persist indefinitely ($r>0.5$ for $t>50$). We map the full parameter space: the optimal regime is a near-Ising anisotropy $\Delta\approx50$ with $W=30$ yielding $r=0.779\pm0.002$ (confirmed by a fine scan over $\Delta\in[30,70]$); only holographic (RTN) initial states sustain geometry, while product, Néel, and Bell-pair states do not. MBL preserves the spatial structure of entanglement (adjacent/non-adjacent MI ratio ~2.6-4.2x vs. 1.0x in the thermal phase), rather than its total amount. A comparison with classical cellular automata reveals that MBL uniquely breaks the entanglement-structure trade-off imposed by quantum monogamy: classical systems achieve spatial structure only at the cost of negligible mutual information, while MBL sustains both.
Combinatorial optimization problems become computationally intractable at scale, as many of them are NP-hard. We previously proposed extraction-type majority voting logic (E-MVL), a quantum-inspired algorithm using digital logic circuits. E-MVL mimics the thermal spin dynamics of simulated annealing (SA) through controlled sparsification of spin interactions for efficient ground-state search. This study investigates the performance potential of E-MVL through systematic optimization and comprehensive benchmarking against SA. The target problem is the Sherrington-Kirkpatrick (SK) model with bimodal and Gaussian coupling distributions. Through equilibrium state analysis, we demonstrate that the sparsity control mechanism provides a consistent search of the solution space regardless of the problem's coupling distribution (bimodal, Gaussian) or size. E-MVL not only achieves the best performance among all tested algorithms -- finding exact solutions for problems of up to 1600 spins, where the best SA baseline is limited to 400 spins -- but also provides insights that significantly improve SA's own temperature scheduling. These results establish E-MVL's dual contribution as both an efficient optimizer and a practical methodology for enhancing SA performance. Moreover, an FPGA implementation achieved an approximately 6-fold faster solution speed than SA.
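The SA baseline against which E-MVL is benchmarked can be sketched in a few lines. The Gaussian SK couplings, single-spin Metropolis updates, and geometric temperature schedule below are generic illustrative choices, not the tuned schedules studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
# Sherrington-Kirkpatrick model: symmetric Gaussian couplings, zero diagonal
J = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))
J = np.triu(J, 1)
J = J + J.T

def energy(s):
    # H = -1/2 s^T J s for a spin configuration s in {-1,+1}^n
    return -0.5 * s @ J @ s

def simulated_annealing(J, steps=10000, t_hot=2.0, t_cold=0.01):
    n = J.shape[0]
    s = rng.choice([-1.0, 1.0], size=n)
    e = energy(s)
    best = e
    for k in range(steps):
        t = t_hot * (t_cold / t_hot) ** (k / steps)  # geometric cooling schedule
        i = rng.integers(n)
        de = 2.0 * s[i] * (J[i] @ s)  # energy change from flipping spin i
        if de <= 0 or rng.random() < np.exp(-de / t):
            s[i] = -s[i]
            e += de
            best = min(best, e)
    return best

best = simulated_annealing(J)
```

A schedule like this is the tuning knob the abstract refers to: E-MVL's equilibrium analysis is reported to inform better choices of `t_hot`, `t_cold`, and the cooling curve.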
We present QCommute, a software tool implemented in C++ for symbolic computation of nested commutators between a Hamiltonian and local observables in quantum many-body spin-1/2 systems on one-, two-, and three-dimensional hypercubic lattices. The computation is performed algebraically, directly in the thermodynamic limit, and the Hamiltonian parameters are kept symbolic. Importantly, in this way the entire parameter space is covered in a single run. The implementation supports extensive parallelization to achieve high computational performance. QCommute enables the investigation of quantum dynamics in strongly correlated regimes that are inaccessible to perturbative approaches, either through direct Taylor expansion in time or via advanced techniques such as the recursion method.
We establish fundamental uncertainty relations for the hydrodynamic variables arising from the Madelung representation of quantum fields in curved spacetime. Through canonical quantization of the density $n$ and phase $\theta$ variables and their conjugate momenta, we derive exact uncertainty principles that depend on spacetime geometry through the lapse function $N$ and spatial metric $\gamma_{ij}$. These relations reveal how gravitational fields modulate quantum fluctuations and provide first-principles constraints for scalar field dark matter models and stochastic quantum gravity.
Simulating lattice gauge theories on quantum computers presents unique challenges that drive the development of novel theoretical frameworks. The orbifold lattice approach offers a scalable method for simulating SU($N$) gauge theories in arbitrary dimensions. In this work, we present three improvements: (i) two new simplified Hamiltonians, (ii) an encoding of the SU(2) theory with a smaller number of qubits, and (iii) a reduction in the requirement for large scalar masses to reach the Kogut-Susskind limit, achieved via the inclusion of an additional term in the Hamiltonian. These advancements significantly reduce circuit depth and qubit requirements for quantum simulations. We benchmark these improvements using Monte Carlo simulations of SU(2) in (2+1) dimensions. Preliminary results demonstrate the effectiveness of these developments and further validate the use of noncompact variables as a promising framework for scalable quantum simulations of gauge theories.
The Boltzmann-Loschmidt dispute of 1876 questioned how an irreversible statistical description can arise from the time-reversible classical equations of motion of atoms. Here we show analytically and numerically that the quantum chaotic diffusion of cold atoms, or ions, in a harmonic trap and pulsed optical lattice can be inverted back in time with up to 100\% efficiency. This is in sharp contrast to classical evolution, where even exponentially small errors break time reversibility. We argue that existing experimental capabilities allow the Boltzmann-Loschmidt dispute to be revisited from a quantum perspective.
It is well-known that quantum mechanics admits two distinct evolutions: the unitary evolution, which is deterministic and well described by the Schrödinger equation, and the collapse of the wave function, which is probabilistic, generally non-unitary, and cannot be described by the Schrödinger equation. In this paper, starting with pure states, we show how the continuous collapse of the wave function can be described by the Schrödinger equation with a stochastic, time-dependent Hamiltonian. We analytically solve for the Hamiltonian responsible for projective measurements on an arbitrary $n$-level system and the position measurement on a harmonic oscillator in the ground state, and propose several experimental schemes to verify and utilize the conclusions. A critical feature is that the Hamiltonian must be state-dependent. We then discuss how the above formalism can also be applied to describe the collapse of the wave function of mixed quantum states. The formalism we propose may unify the two distinct evolutions in quantum mechanics.
A question raised by Freedman & Hastings (2023) still stands: To produce a mathematical theory that would unify quantum entanglement/tensor-structure with parameterized/bundle-structure via their amalgamation (a hypothetical pushout) along bare quantum (information) theory -- a question motivated by the role that vector bundles of spaces of quantum states play in the K-theoretic classification of topological phases of matter. Here we produce a possible answer to this question. To that end, first we make precise a form of the relevant pushout diagram in monoidal category theory. With the question thus formalized, we proceed to compute this pushout and prove that it gives what is known as the external tensor product on vector bundles/K-classes, or rather on flat such bundles (flat K-theory), i.e., those equipped with monodromy encoding topological Berry phases. The external tensor product was recently highlighted in the context of topological phases of matter and through our work in quantum programming theory but has not otherwise found due attention in quantum theory yet.
Prototype-based clustering algorithms such as k-means are sensitive to the selection of initial cluster centroids, with poor initialization leading to slower convergence and suboptimal solutions trapped in local minima. We present Adaptive Quantum Optimized Centroid Initialization (AQOCI), a method that formulates the centroid initialization problem as a Quadratic Unconstrained Binary Optimization (QUBO) problem and solves it using quantum annealing or quantum-inspired solvers. AQOCI extends a prior method (QOCI) by introducing an iterative refinement mechanism inspired by the Gauss-Seidel and Jacobi methods, enabling the recovery of real-valued centroid coordinates from binary solver outputs through adaptive scaling and offset adjustments. We evaluate AQOCI using three solver backends: TABU search, simulated annealing, and D-Wave's HybridBQM on synthetic Gaussian data with controlled sweeps over cluster separation, cluster count, dimensionality, and sample size, as well as on the MOTIF malware classification dataset, comparing against standard k-means with random initialization and k-means++ initialization. On the MOTIF dataset, AQOCI produces clusterings that are competitive with and, at smaller sample sizes, superior to k-means++, with V-measure improvements of up to 26\%. On synthetic data with heavily overlapping clusters, AQOCI--SimAnn outperforms k-means++ in V-measure. On well-separated synthetic data, k-means++ is clearly superior, and AQOCI exhibits a consistent performance plateau attributable to the binary encoding resolution. The dimensionality sweep demonstrates scalability to at least $d = 10$ without degradation.
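The recovery of real-valued centroid coordinates from binary solver outputs can be illustrated with a fixed-point decode plus an adaptive window. The decode rule and the window-shrinking update below are hypothetical simplifications of AQOCI's adaptive scaling and offset adjustments, shown only to make the mechanism concrete.

```python
import numpy as np

def decode(bits, lo, hi):
    """Map a binary solver output back to a real coordinate in [lo, hi].

    bits: 0/1 sequence, most significant bit first (m-bit fixed-point code).
    lo, hi: current offset/scale window for this coordinate.
    """
    bits = np.asarray(bits)
    m = len(bits)
    frac = bits @ (2.0 ** np.arange(m - 1, -1, -1)) / (2**m - 1)
    return lo + (hi - lo) * frac

def refine(lo, hi, x, shrink=0.5):
    """One Jacobi/Gauss-Seidel-style refinement step: re-centre the window on
    the decoded value x and shrink it, increasing effective resolution."""
    half = shrink * (hi - lo) / 2
    return x - half, x + half

# A 4-bit code resolves [0, 10] only to ~0.67; iterative refinement tightens it
lo, hi = 0.0, 10.0
x = decode([1, 0, 1, 1], lo, hi)  # 11/15 of the way across the window
for _ in range(2):
    lo, hi = refine(lo, hi, x)
```

The "performance plateau attributable to the binary encoding resolution" reported on well-separated data corresponds to the initial quantization step of such a code; refinement narrows it only as far as the iteration budget allows.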
The doubly minimized Petz Renyi mutual information of order $\alpha$ is defined as the minimization of the Petz divergence of order $\alpha$ of a fixed bipartite quantum state relative to any product state. The doubly minimized sandwiched Renyi mutual information is defined analogously using the sandwiched divergence in place of the Petz divergence. In this work, we establish several properties of these two types of Renyi mutual information. In particular, for the Petz case, we prove additivity for $\alpha\in [1/2,2]$. For the sandwiched case, we establish a novel duality relation for $\alpha\in [2/3,\infty]$ via Sion's minimax theorem, and we subsequently use this duality relation to prove additivity for the same range of $\alpha$. Previously, additivity for the sandwiched case was known only for $\alpha\in [1,\infty]$, but it had been conjectured to hold for $\alpha\in [1/2,\infty]$.
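Written out, the two quantities minimize a Renyi divergence over product states on both marginals; the notation below is a common convention, assumed here rather than taken from the paper.

```latex
% Doubly minimized Renyi mutual informations (Petz and sandwiched).
I^{\downarrow\downarrow}_{\alpha}(A:B)_{\rho}
  \;=\; \min_{\sigma_A,\,\tau_B}\, \bar{D}_{\alpha}\!\bigl(\rho_{AB}\,\big\|\,\sigma_A\otimes\tau_B\bigr),
\qquad
\bar{D}_{\alpha}(\rho\|\sigma) \;=\; \tfrac{1}{\alpha-1}\log \operatorname{Tr}\bigl[\rho^{\alpha}\sigma^{1-\alpha}\bigr],
\\[4pt]
\widetilde{I}^{\downarrow\downarrow}_{\alpha}(A:B)_{\rho}
  \;=\; \min_{\sigma_A,\,\tau_B}\, \widetilde{D}_{\alpha}\!\bigl(\rho_{AB}\,\big\|\,\sigma_A\otimes\tau_B\bigr),
\qquad
\widetilde{D}_{\alpha}(\rho\|\sigma) \;=\; \tfrac{1}{\alpha-1}\log \operatorname{Tr}\Bigl[\bigl(\sigma^{\frac{1-\alpha}{2\alpha}}\rho\,\sigma^{\frac{1-\alpha}{2\alpha}}\bigr)^{\alpha}\Bigr].
```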
The doubly minimized Petz Renyi mutual information of order $\alpha$ is defined as the minimum of the Petz divergence of order $\alpha$ of a given bipartite quantum state relative to all product states. The doubly minimized sandwiched Renyi mutual information is defined analogously, with the Petz divergence replaced by the sandwiched divergence. In this work, we study certain binary quantum state discrimination problems related to correlation detection. We show that the corresponding direct exponent is determined by the doubly minimized Petz Renyi mutual information of order $\alpha\in (1/2,1)$, and that the strong converse exponent is determined by the doubly minimized sandwiched Renyi mutual information of order $\alpha\in (1,\infty)$. This provides an operational interpretation of these types of Renyi mutual information and generalizes previous results for classical probability distributions to the quantum setting. For completeness, we also study the corresponding moderate deviation regime both below and above the threshold, and determine the Stein exponent and the second-order asymptotics.
The next generation of distributed quantum processors combines single-location quantum computing with quantum networking techniques, permitting large entangled qubit groups to be established across remote processors so that quantum algorithms can be executed distributively. We present DisQ, the first formal model of distributed quantum processors, which permits the analysis of distributed quantum programs in this new computation environment. The core of DisQ is a distributed quantum programming language that combines the concepts of the Chemical Abstract Machine (CHAM) and Markov Decision Processes (MDP), with the objective of clearly distinguishing quantum concurrent and distributed behaviors. Based on the DisQ language, we develop a simulation relation, built on classical simulation infrastructure, to check the equivalence of a quantum algorithm and its distributed versions, so that users can develop the distributed version of a sequential quantum program via a simulation check.
I consider the longstanding issue of the hermiticity of the Dirac equation in curved spacetime. Instead of imposing hermiticity by adding ad hoc terms, I renormalize the field by a scaling function, which is related to the determinant of the metric, and then regularize the renormalized field on a discrete lattice. I find that, for time-independent and diagonal (or conformally flat) coordinates, the Dirac equation returns a pseudo-Hermitian (i.e., PT-symmetric) Hamiltonian when properly regularized on the lattice. Notably, the PT-symmetry is unbroken, ensuring a real energy spectrum and unitary time evolution. This establishes stringent conditions for the existence of complex spectra in 1D non-Hermitian (NH) models. Conversely, time-dependent spacetime coordinates break pseudohermiticity, yielding NH Hamiltonians with nonunitary time evolution. Similarly, space-dependent coordinates lead to the NH skin effect (NHSE), i.e., the accumulation of localized states on the boundaries. Arguably, these NH effects are physical: time dependence leads to local gain and loss processes and nonunitary growth or decay. Conversely, space dependence leads to the NHSE with spatial decay of the fields in a preferential direction. In other words, the curvature gradients induce an imaginary gauge field, corresponding to a drift force acting in space and time, pushing the eigenmodes to the boundaries or forcing their probability density to increase or decrease over time. Hence, temporal curvature gradients produce nonunitary gain or loss, while spatial curvature gradients correspond to the NHSE, allowing for the description of these two phenomena in a unified framework. This also suggests a duality between NH physics and spacetime deformations, framing NH physics in purely geometric terms. This metric-induced nonhermiticity unveils an unexpected connection between the spacetime metric and NH phases of matter.
We present a five-module pedagogical framework for teaching physics-informed machine learning (ML) through two progressively complex physical systems: a driven, damped nonlinear pendulum and a one-dimensional quantum anharmonic oscillator. Five model architectures are implemented and compared: a standard artificial neural network (ANN), a one-dimensional convolutional neural network (CNN), a long short-term memory (LSTM) network, and two physics-informed neural networks (PINNs) -- one per physical system. All models are implemented in PyTorch~2.9 and executed on an NVIDIA RTX~5090 GPU, making the framework directly applicable to modern deep learning laboratory courses. Quantitative benchmarks show that data-driven models achieve mean absolute errors of $1.3\times10^{-2}$~rad (pendulum ANN) and $4.4\times10^{-5}$~a.u.\ (quantum CNN), while the curriculum-trained pendulum PINN reaches an MAE of $3.1\times10^{-2}$~rad using only collocation points. A systematic CPU-vs-GPU benchmark reveals speedups ranging from $1.2\times$ (small ANN) to $24.6\times$ (LSTM), providing a concrete pedagogical demonstration of when GPU acceleration is -- and is not -- warranted. The framework is packaged as self-contained Jupyter notebooks designed for a graduate-level \emph{Deep Neural Networks for Physical Systems} course, with embedded reflection questions that guide students from data-driven thinking toward physics-constrained formulations.
We investigate prepare-and-measure scenarios in which a sender and a receiver use entanglement to send quantum information over a channel with limited capacity. We formalise this framework, identify its basic properties and provide numerical tools for optimising quantum protocols for generic communication tasks. The seminal protocol for sending quantum information over a classical channel is teleportation. We study a natural stochastic generalisation in which the sender holds $N$ qubits from which the receiver can recover one on demand. We show that with two bits of communication alone, this task can be solved exactly for all $N$, if the sender and receiver have access to stronger-than-quantum nonlocality. We then consider entanglement-based protocols and show that these can be constructed systematically by leveraging connections to several well-known quantum information primitives, such as teleportation, cloning machines and random access coding. In particular, we show that by using genuine multi-particle entangled measurements, one can construct a universal stochastic teleportation machine, i.e.~a device whose teleportation fidelity is independent of the quantum input.
Quantum error correction codes defined on hyperbolic lattices leverage the unique geometric properties of the hyperbolic space to enhance the performance of quantum error correction. By embedding qubits in hyperbolic lattices, these codes achieve higher encoding rates and lower qubit overhead compared to those defined on conventional Euclidean lattices. Building on recent advances in hyperbolic crystallography, we introduce a unified framework for the systematic construction and scalable benchmarking of CSS quantum error correction codes on hyperbolic lattices. A central component of this framework is the Hyperbolic Cycle Basis algorithm, which employs graph-theoretic methods to efficiently identify all plaquette cycles (parity-check supports) and nontrivial cycles (logical operators). This enables scalable and automated benchmarking of a broad class of CSS codes defined on hyperbolic geometries. We apply this framework to construct and simulate two representative hyperbolic quantum error correction codes (HQECCs), evaluating key performance metrics such as encoding rate, error threshold, and code distance for different sublattices. While HQECCs serve as concrete examples, the framework can be adapted to a wide range of CSS codes, including those with more intricate stabilizer structures such as Floquet codes. This work establishes a foundation for systematic exploration and benchmarking of CSS codes on hyperbolic lattices, paving the way toward practical, high-performance quantum error correction.
Developing scalable, fault-tolerant atomic quantum processors requires precise control over large arrays of optical beams. This remains a major challenge due to inherent imperfections in classical control hardware, such as inter-channel crosstalk and beam leakage. In this work, we introduce a hardware co-designed intelligent quantum control framework to address these limitations. We construct a mathematical model of the photonic control hardware, integrate it into the quantum optimal control (QOC) framework, and apply reinforcement learning (RL) techniques to discover optimal control strategies. We demonstrate that the proposed framework enables robust, high-fidelity parallel single-qubit gate operations under realistic control conditions, where each atom is individually addressed by an optical beam. Specifically, we implement and benchmark three optimization strategies: a classical hybrid Self-Adaptive Differential Evolution-Adam (SADE-Adam) optimizer, a conventional RL approach based on Proximal Policy Optimization (PPO), and a novel end-to-end differentiable RL method. Using SADE-Adam as a baseline, we find that while PPO performance degrades as system complexity increases, the end-to-end differentiable RL consistently achieves gate fidelities above 99.9$\%$, exhibits faster convergence, and maintains robustness under varied channel crosstalk strength and randomized dynamic control imperfections.
The characterization of Hamiltonians and other components of open quantum dynamical systems plays a crucial role in quantum computing and other applications. Scientific machine learning techniques have been applied to this problem in a variety of ways, including by modeling with deep neural networks. However, the majority of mathematical models describing open quantum systems are linear, and the natural nonlinearities in learnable models have not been incorporated using physical principles. We present a data-driven model for open quantum systems that includes learnable, thermodynamically consistent terms. The trained model is interpretable, as it directly estimates the system Hamiltonian and linear components of coupling to the environment. We validate the model on synthetic two- and three-level data, as well as experimental two-level data collected from a quantum device at Lawrence Livermore National Laboratory.
The development of complex circuits for practical applications in the current quantum computing ecosystem is based on basic primitives such as Bell states, which provide superposition, entanglement, and coherence. The range of domain-specific quantum applications has been greatly expanded by the availability of simulators and platforms such as IBM Quantum, which are supported by Qiskit. However, disparities between ideal simulator outputs and actual quantum processing unit (QPU) executions in the Noisy Intermediate-Scale Quantum (NISQ) era require the application of quantum error mitigation techniques. Limitations arise from hardware constraints in superconducting qubit systems and from the limited resources of classical simulators as quantum circuits grow. Quantum decoherence, which lowers gate fidelity and builds up at the circuit level with increasing depth, is specifically caused by material-induced flaws and interfaces. This creates a clear connection between circuit reliability, device performance, and material attributes. To address this, the current work uses both simulation and actual hardware on the IBM Sherbrooke 127-qubit processor to study three basic circuit classes over 4 to 10 qubits: the quantum Fourier transform, the Greenberger-Horne-Zeilinger state, and the W state. The study examines trade-offs between circuit complexity, noise robustness, and resource utilization by contrasting simulator and QPU results. The results imply that circuit fidelity can serve as an indirect probe of material-limited noise, opening the door to a framework for designing quantum circuits that accounts for both hardware and materials to achieve scalable quantum advantage.
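The ideal targets of two of the benchmarked circuit classes, the GHZ and W states, can be written down directly as statevectors. This numpy sketch shows the noiseless references against which QPU histograms would be compared; the circuit-level Qiskit implementations and error mitigation are left to the paper.

```python
import numpy as np

def ghz(n):
    """n-qubit GHZ state (|0...0> + |1...1>)/sqrt(2) as a statevector."""
    psi = np.zeros(2**n)
    psi[0] = psi[-1] = 1 / np.sqrt(2)
    return psi

def w(n):
    """n-qubit W state: equal superposition of the n single-excitation
    basis states |0...010...0>."""
    psi = np.zeros(2**n)
    for k in range(n):
        psi[1 << k] = 1 / np.sqrt(n)  # basis index with only bit k set
    return psi
```

Comparing a measured bitstring distribution against `|ghz(n)|**2` or `|w(n)|**2` yields the fidelity-style metrics the study uses to contrast simulator and QPU executions.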
Engineering effective Hamiltonians is essential for advancing quantum technologies including quantum simulation, sensing, and computing. This paper presents a general framework for effective Hamiltonian engineering, enabling robust, precise, and efficient quantum control strategies. To achieve efficiency, we focus on creating target zeroth-order effective Hamiltonians while minimizing higher-order contributions and enhancing robustness against systematic errors. The control design identifies the minimal subspace of the toggling-frame Hamiltonian and the full set of achievable, zeroth-order, effective Hamiltonians. The framework also enables robust state transfer, characterization of achievable density matrices, and extension to stochastic parameter fluctuations via a cumulant expansion. Examples are included to illustrate the process flow and resultant precision and robustness.
We summarize the key ingredients required for universal topological quantum computation using Majorana zero modes in networks of topological superconductor nanowires. Particular emphasis is placed on the use of both sparse and dense logical qubit encodings, and on the transitions between them via projective parity measurements. Combined with hybridization, these operations extend the computational capabilities beyond braiding alone and enable universal gate sets. In addition to outlining the theoretical foundations -- including the algebra of Majorana operators, along with the stabilizer formalism -- we introduce an efficient numerical method for simulating the time-dependent dynamics of such systems. This method, based on the time-dependent Pfaffian formalism, allows for the classical simulation of realistic device architectures that incorporate braiding, projective measurements, and disorder. The result is a semi-pedagogical overview and computational toolbox designed to support further exploration of topological quantum computing platforms.
Quantum Reservoir Computing (QRC) uses quantum dynamics to efficiently process temporal data. In this work, we investigate a QRC framework based on two coupled Kerr nonlinear oscillators, a system well-suited for time-series prediction tasks due to its complex nonlinear interactions and potentially high-dimensional state space. We explore how its performance in forecasting both linear and nonlinear time-series depends on key physical parameters: input drive strength, Kerr nonlinearity, and oscillator coupling, and analyze the role of entanglement in improving the reservoir's computational performance, focusing on its effect on predicting non-trivial time series. Using logarithmic negativity to quantify entanglement and normalized root mean square error (NRMSE) to evaluate predictive accuracy, individual parameter sweeps show that optimal performance occurs at moderate but non-zero entanglement. Furthermore, an aggregated binned analysis reveals that this moderate entanglement is consistently associated with the optimal average predictive performance across the parameter space, an observation that persists up to a threshold in the input frequency. This relationship persists under some levels of dissipation and dephasing. In particular, we find that higher dissipation rates can enhance performance. These findings contribute to the broader understanding of quantum reservoirs for high performance, efficient quantum machine learning and time-series forecasting.
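Logarithmic negativity, the entanglement quantifier used above, is computed from the partial transpose of the density matrix. A minimal finite-dimensional sketch follows (the two-qubit case; the paper's Kerr oscillators live in a truncated Fock space, but the formula is the same).

```python
import numpy as np

def log_negativity(rho, dA, dB):
    """Logarithmic negativity E_N = log2 ||rho^{T_B}||_1 of a dA x dB state."""
    r = rho.reshape(dA, dB, dA, dB)
    # partial transpose on subsystem B: swap the two B indices
    r_tb = r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)
    trace_norm = np.abs(np.linalg.eigvalsh(r_tb)).sum()
    return np.log2(trace_norm)

# Bell state: maximally entangled two-qubit state, E_N = 1
bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)
rho_bell = np.outer(bell, bell)
```

Sweeping a physical parameter, computing `log_negativity` of the reservoir state, and plotting it against NRMSE is the kind of analysis behind the "moderate but non-zero entanglement" observation.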
Genuine multipartite entanglement is arguably the most valuable form of entanglement in the multipartite case, with applications, for instance, in quantum metrology. In order to detect that form of entanglement in multipartite quantum states, one typically uses entanglement witnesses. The aim of this paper is to generalize the results of [G. Tóth and O. Gühne, Phys. Rev. A \textbf{72}, 022340 (2005)] in order to provide a construction of witnesses of genuine multipartite entanglement tailored to entangled subspaces originating from the \textit{multi-qudit} stabilizer formalism -- a framework well known for its role in quantum error correction, which also provides a very convenient description of a broad class of entangled multipartite states (both pure and mixed). Our construction includes graph states of arbitrary local dimension. We then show that in certain situations, the obtained witnesses detecting genuine multipartite entanglement in quantum systems of higher local dimension are superior in terms of noise robustness to those derived for multiqubit states.
This article explores an operational model for transition amplitudes between measurements proposed by Goyal et al. within the quantum reconstruction program. To classify suitable amplitude algebras, we distinguish mathematical axioms, physical choices, and their consequences. This leads to several improvements on the published work: Our coordinate-independent approach requires no two-dimensional amplitudes a priori. All scalar field and vector space axioms are traced from model axioms and observer choices, including additive and multiplicative units and inverses. Existing mathematical characterizations identify allowable amplitude algebras as the real associative composition algebras, namely the complex numbers and the quaternions, as well as their split forms. Observed probabilities are quadratic in amplitudes, akin to the Born rule. We examine selected implications of the proposed axioms, reformulate observer questions, and highlight the broad applicability of our framework to subsequent discovery.
Negatively charged nitrogen-vacancy (NV) centers and other color centers in diamonds have emerged as promising platforms for quantum communication, quantum information processing, and nanoscale sensing, owing to their long spin coherence times, fast spin control, and efficient photon coupling. Deterministic placement of individual color centers into nanophotonic structures is critical for scalable device integration, and ion implantation is the most viable technique. Nanofabrication processes, including diamond etching, are essential to realize these structures but can introduce crystal strain through lattice damage. In this work, we investigate the impact of ion implantation and nanofabrication-induced strain on the electronic spin levels of NV-centers. We demonstrate that zero-field continuous-wave optically detected magnetic resonance (CW-ODMR) spectroscopy serves as a sensitive probe of local crystal strain. We report the presence of a shear strain feature in diamond substrates arising from the ion-implantation and nanofabrication processes, as evidenced by the asymmetric splitting observed in the zero-field CW-ODMR spectrum of NV-centers.
In this work, we examine the paradox proposed by Einstein, Podolsky, and Rosen (EPR). They argued that since one may know the exact momentum of a particle without measurement and subsequently measure its position, a contradiction with the Heisenberg uncertainty principle arises. We demonstrate that there is no paradox by two equivalent approaches: first, by computing the quantum conditional expectation to make predictions after a measurement; and second, using the von Neumann post-measurement state. We establish the equivalence between these two methods. In both cases the predictor is an operator valued function of the observables being measured. This ensures that no violation of the Heisenberg uncertainty principle occurs.
Block encoding of sparse matrices underpins powerful quantum algorithms such as quantum singular value transformation, Hamiltonian simulation, and quantum linear solvers, yet its efficient gate-level realization for general sparse matrices remains a major challenge. We introduce a unified framework that addresses key obstacles including the overhead of multi-controlled X (MCX) gates, amplitude reordering, and hardware connectivity, enabling simplified block encoding constructions with explicit gate-level implementations. Central to our approach is a connection to combinatorial optimization, which enables systematic assignment of control qubits to satisfy nearest-neighbor connectivity constraints, along with coherent permutation operators that preserve superposition while enabling structured amplitude reordering. We demonstrate our methods on structured sparse matrices, achieving systematic reductions in control overhead and circuit depth. Our framework bridges the gap between theoretical formulations and hardware-efficient quantum circuit implementations.
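The defining property of a block encoding, that the target matrix appears as the top-left block of a unitary, can be verified numerically. The one-ancilla Hermitian dilation below is a generic textbook construction, not the paper's MCX- and connectivity-optimized circuits; it only makes the definition concrete.

```python
import numpy as np

def block_encode(A):
    """One-ancilla block encoding of a Hermitian matrix A with ||A|| <= 1:
    U = [[A, S], [S, -A]] with S = sqrt(I - A^2), so that
    (<0| ⊗ I) U (|0> ⊗ I) = A."""
    w, V = np.linalg.eigh(A)
    assert np.max(np.abs(w)) <= 1 + 1e-12, "normalize A first"
    # matrix square root of I - A^2 via the eigenbasis of A
    S = (V * np.sqrt(np.clip(1 - w**2, 0, None))) @ V.conj().T
    return np.block([[A, S], [S, -A]])

# banded (tridiagonal) sparse test matrix, normalized by its spectral norm
A = (np.diag([1.0, -2.0, 3.0, -1.0])
     + np.diag([0.5, 0.5, 0.5], 1)
     + np.diag([0.5, 0.5, 0.5], -1))
A = A / np.linalg.norm(A, 2)
U = block_encode(A)
```

Since `A` and `S` share an eigenbasis, `U @ U.T` reduces to `[[A² + S², 0], [0, S² + A²]] = I`, which is exactly the unitarity check a gate-level construction must also pass.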
We propose and analyze an all-mechanical route to coherent control and quantum-state reconstruction of the fundamental flexural mode of a suspended carbon nanotube (CNT) operated in the anharmonic (Duffing/Kerr) regime. A nearby atomic force microscope (AFM) provides a single, localized actuator that applies calibrated, time-dependent forces to the CNT. In the presence of mechanical anharmonicity this enables spectrally selective control of the lowest vibrational transition and thus supports effective two-level protocols such as Rabi oscillations and Ramsey interferometry. The same actuator also implements phase-space displacements required for Wigner function tomography via displaced-parity sampling, thereby unifying control and tomography without optical heating and without dedicated on-chip microwave drive lines at the CNT resonator. We develop explicit pulse sequences and a master equation framework that connect experimentally accessible signals to energy relaxation and phase coherence times and to parity-based quantum signatures, including negative regions of the Wigner function. The approach is compatible with multiple readout modalities, including direct AFM-based detection and dispersive coupling to superconducting circuitry such as a Cooper-pair box and/or a microwave cavity. Together, these techniques provide complete access to populations, coherence, and parity within a single device architecture. This minimal scheme provides a practical route to all-mechanical quantum control and state-resolved characterization of decoherence in mesoscopic mechanical systems.
Quantum Anonymous Veto (QAV) protocols enable secure and anonymous decision-making by allowing participants to detect the presence of a veto without revealing individual choices. While existing QAV schemes offer strong theoretical guarantees, they face significant limitations in practical implementation due to resource requirements, scalability issues, and the need for multipartite entanglement. In this work, we propose a novel deterministic QAV protocol that leverages only bipartite entanglement in the form of Bell states and achieves conclusive veto detection in a single round. Our approach eliminates the need for multi-qubit entangled states and iterative rounds, thereby significantly reducing experimental overhead and enhancing scalability. The protocol preserves critical properties such as voter anonymity, correctness, and verifiability, making it well-suited for implementation on near-term quantum devices. Furthermore, we outline a practical photonic realization based on polarization-path encoding and discrete-time quantum walks, demonstrating its feasibility within current quantum optical platforms. This work contributes a resource-efficient and experimentally viable alternative to existing QAV schemes, advancing the prospects of secure quantum decision-making in distributed systems.
Recently, proposals for realizing a nonreciprocal superradiant quantum phase transition (SQPT) have been put forward, based on either nonreciprocal interactions between two spin ensembles or the Sagnac-Fizeau shift in a spinning cavity. However, experimental implementation of such a nonreciprocal SQPT remains challenging. This motivates the search for new mechanisms capable of producing a nonreciprocal SQPT. Here, we propose an alternative approach to realize a nonreciprocal SQPT, induced by the magnon Kerr effect (MKE), in a cavity magnonic system, where magnons in a yttrium iron garnet (YIG) sphere are coupled to cavity photons. The MKE coefficient is positive ($K>0$) when the bias magnetic field is aligned along the crystallographic axis [100], but negative ($K<0$) when aligned along the axis [110]. We show that the steady-state phase diagram for $K > 0$ differs markedly from that for $K < 0$. This contrast is the origin of the nonreciprocal SQPT. By further studying the steady-state magnon occupation and its fluctuations versus the parametric drive strength, we demonstrate that the SQPT becomes nonreciprocal, characterized by distinct critical thresholds for $K > 0$ and $K < 0$. Moreover, we introduce a bidirectional contrast ratio to quantify this nonreciprocal behavior. Our work provides a new mechanism for realizing the nonreciprocal SQPT, with potential applications in designing nonreciprocal quantum devices.
We study the problem of efficiently learning an unknown $n$-qubit unitary channel in diamond distance given query access. We present a general framework showing that if Pauli operators remain low-complexity under conjugation by a unitary, then the unitary can be learned efficiently. This framework yields polynomial-time algorithms for a wide range of circuit classes, including $O(\log \log n)$-depth circuits, quantum $O(\log n)$-juntas, near-Clifford circuits, the Clifford hierarchy, fermionic matchgate circuits, and certain compositions thereof. Our results unify and generalize prior work, and yield efficient learning algorithms for more expressive circuit classes than were previously known. Our framework is powered by new learning algorithms for unitaries whose Pauli spectrum is either supported on a small subgroup or is sparse. If the Pauli spectrum is supported on a subgroup of size $2^k$, we give an $\widetilde{O}(2^k/\epsilon)$-query algorithm and a nearly matching $\Omega(2^k/\epsilon)$ lower bound. For $k = 2n$, we recover the optimal $O(4^n/\epsilon)$-query algorithm of Haah, Kothari, O'Donnell, and Tang [FOCS '23]. If the Pauli spectrum is supported on $s$ Pauli operators, we give an $O(s^2/\epsilon^2)$-query algorithm and an $\Omega(s/\epsilon)$ lower bound.
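The Pauli spectrum referred to above is the expansion of a unitary in the Pauli basis, $\hat{u}(P) = \mathrm{tr}(P\,U)/2^n$; sparsity or subgroup support of this expansion is what the learning algorithms exploit. The following brute-force sketch (an illustration only, not the paper's algorithm; the function name is ours) computes the support for small $n$, showing that the Hadamard gate is supported on 2 Paulis and CNOT on 4:

```python
import numpy as np
from itertools import product

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = {'I': I2, 'X': X, 'Y': Y, 'Z': Z}

def pauli_spectrum(U):
    """Nonzero expansion coefficients u_hat(P) = tr(P U)/d of U in the Pauli basis."""
    d = U.shape[0]
    n = int(np.log2(d))
    coeffs = {}
    for labels in product('IXYZ', repeat=n):
        P = PAULIS[labels[0]]
        for l in labels[1:]:
            P = np.kron(P, PAULIS[l])
        c = np.trace(P.conj().T @ U) / d
        if abs(c) > 1e-12:
            coeffs[''.join(labels)] = c
    return coeffs

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
print(sorted(pauli_spectrum(H)))   # ['X', 'Z']: H = (X + Z)/sqrt(2)
print(len(pauli_spectrum(CNOT)))   # 4: support {II, IX, ZI, ZX}
```

The exhaustive loop costs $O(4^n)$ matrix traces and is only meant to make the definition concrete; the point of the paper's framework is to learn such structure from queries without enumerating all Paulis.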
Erasure qubits -- qubits designed to have an error profile that is dominated by detectable leakage errors -- are a promising way to cut down the resources needed for quantum error correction. There have been several recent experiments demonstrating erasure qubits in superconducting quantum processors, most notably the dual-rail qubit defined by the one-photon subspace of two coupled cavities. An outstanding challenge is that the ancillary transmons needed to facilitate erasure checks and two-qubit gates introduce a substantial amount of noise, limiting the benefits of working with erasure-biased qubits. Here, we show how to suppress the adverse effects of transmon-induced noise while performing erasure checks or two-qubit gates. We present control schemes for these operations that suppress erasure check errors by three orders of magnitude and reduce the logical two-qubit gate infidelities by up to three orders of magnitude.
Quantum key distribution (QKD) is a cryptographic technique that uses quantum mechanical principles to enable secure key exchange. Practical deployment of QKD requires robust, cost-effective systems that can operate in challenging field environments. A major challenge is achieving reliable clock synchronization without adding hardware complexity. Conventional approaches often use separate classical light signals, which increase costs and introduce noise that degrades quantum channel performance. To address this limitation, we demonstrate a QKD system incorporating a recently proposed qubit-based distributed frame synchronization method, deployed over a metropolitan fiber network in Nanning, China. Using the polarization-encoded one-decoy-state BB84 protocol, our system achieves synchronization directly from the quantum signal, eliminating the need for dedicated synchronization hardware. Furthermore, to counteract dynamic polarization disturbances in urban fibers, the system integrates qubit-based polarization feedback control, enabling real-time polarization compensation through an automated polarization controller using data recovered from the qubit-based synchronization signals. During 12 hours of continuous operation, the system maintained a low average quantum bit error rate (QBER) of 1.12%, achieving a secure key rate of 26.6 kbit/s under 18 dB channel loss. Even under a high channel loss of 40 dB, a finite-key secure rate of 115 bit/s was achieved. This study represents the first successful long-term validation of a frame-synchronization-based QKD scheme in a real urban environment, demonstrating exceptional stability and high-loss tolerance, and offering an alternative for building practical, scalable, and cost-efficient quantum-secure communication networks.
We present a quantum algorithm for simulating rovibrational Hamiltonians on fault-tolerant quantum computers. The method integrates exact curvilinear kinetic energy operators and general-form potential energy surfaces expressed in a hybrid finite-basis/discrete-variable representation. The Hamiltonian is encoded as a unitary quantum circuit using a quantum read-only memory construction based on the Walsh-Hadamard transform, enabling high-accuracy quantum phase estimation of rovibrational energy levels and dynamics simulations. Our technique provides asymptotic reductions in both logical qubit count and T-gate complexity that are exponential in the number of atoms and at least polynomial in the total Hilbert-space size, relative to existing block-encoding techniques based on linear combinations of unitaries and variational basis representation. Compared with classical variational methods, it offers exponential memory savings and polynomial reductions in time complexity. The quantum volume required for computing the rovibrational spectrum of water can be reduced by up to 100 000 times compared with other quantum methods, increasing to at least 1 million for a classically intractable 30-dimensional (12-atom) molecular system. For this case with a six-body coupled potential, estimating spectroscopic-accuracy energy levels would require about three months on a 1 MHz fault-tolerant quantum processor with fewer than 300 logical qubits, versus over 30 000 years on the fastest current classical supercomputer. These estimates are approximate and subject to technological uncertainties, and realizing the asymptotic advantage will require substantial quantum resources and continued algorithmic progress.
Exact scientific discovery requires more than heuristic search: candidate constructions must be turned into exact objects and checked independently. We address this gap by extending TeXRA with an independent Lean 4 verification layer, turning it into a human-guided multi-agent platform for exact scientific discovery. The platform couples symbolic synthesis, combinatorial and linear-programming search, exact reconstruction of numerical candidates, and formal verification in Lean. We apply this platform to nonadditive quantum error-correcting codes with prescribed transversal diagonal gates within the subset-sum linear-programming (SSLP) framework. In the distance-2 regime where logical states occupy distinct residue classes, the platform yields a Lean-certified catalogue of 14,116 codes for $K\in\{2,3,4\}$ and up to six physical qubits, realizing cyclic logical orders 2 through 18, from which we extract closed-form infinite families. We also construct a residue-degenerate $((6,4,2))$ code implementing the logical controlled-phase gate $\mathrm{diag}(1,1,1,i)$. At distance 3, we resolve the transversal-$T$ problem for $((7,2,3))$ codes within the complementary binary-dihedral $\mathrm{BD}_{16}$ setting: among the 12 candidates surviving the SSLP filters, 10 admit exact realizations and 2 are excluded by no-go proofs. All accepted constructions, families, and no-go results are formalized and checked in Lean, illustrating how AI-assisted workflows can bridge search, exact reconstruction, and formal proof in the physical sciences.
The scalability of current quantum networks is limited by noisy quantum components and high implementation costs, which in turn limits the security advantages that quantum networks provide over their classical counterparts. Quantum Augmented Networks (QuANets) address this by integrating quantum components into classical network infrastructure to improve robustness and end-to-end security. Several quantum primitives serve as core tools for such integration, namely quantum voting, quantum anonymous protocols, and quantum secret sharing; among these, Quantum Anonymous Notification (QAN) anonymously informs a receiver of an incoming quantum communication. However, current QAN protocols can be compromised in the presence of several common channel noises. In this work, we propose an improved QAN protocol that utilizes rotation operations on shared GHZ states to produce an anonymous notification in an n-user quantum-augmented network. We study the behavior of this modified QAN protocol under the dephasing noise model and observe stronger resilience to false notifications than earlier QAN approaches. We further propose integrating the QAN framework with a machine-learning classifier to form an enhanced quantum-augmented network. Finally, we discuss how this notification layer integrates with QuANets so that receivers can allow switch-bypass handling of quantum payloads, reducing header-based information leakage and vulnerability to targeted interference at compromised switches.
Amplitude damping fundamentally limits qubit lifetimes by irreversibly leaking energy and information into the environment. Standard Wiseman--Milburn feedback offers only modest improvement because it acts on a single measured quadrature and its corrective drive is degraded by loop delay. We introduce a compact hybrid upgrade with two components: (i) a coherently coupled \emph{ancilla} qubit that receives the homodyne current and feeds back \emph{quantum-coherently} on the system, recovering information from \emph{both} field quadratures and intentionally engineered to decay much faster than the system; and (ii) a lightweight supervised predictor that forecasts the near-future homodyne current, phase-aligning the correction to overcome hardware latency. A Lindblad treatment yields closed-form effective decay rates: the ancilla suppresses the emission channel by a cooperativity factor, while the predictor further suppresses the residual decay in proportion to forecast quality. Using IBM-scale parameters (baseline \(T_1 = 50~\mu\mathrm{s}\)), numerical simulations surpass the W--M limit, achieving \(\sim 3\!-\!4\times\) longer \(T_1\) together with improved population retention and integrated energy. The method is modular and hardware-compatible: ancilla coupling and supervised prediction can be added to existing W--M loops to convert leaked information into a precise, time-advanced corrective drive. We also include a detailed, student-friendly derivation of the effective rates for both ancilla-assisted and prediction-enhanced feedback, making the impact of each design element analytically transparent.
We investigate quench dynamics in the quantum $S=1/2$ XXZ antiferromagnetic chain with staggered and anisotropic interactions in the flat-band limit. Our quench protocol interchanges the odd- and even-bond strengths of a fully dimerized chain, enabling us to derive exact time-dependent states for arbitrary even system sizes by working in the Bell basis. We obtain closed-form, size-independent expressions for the von Neumann and second-order Rényi entanglement entropies. We further calculate exact Loschmidt echoes and the corresponding return rate functions across various anisotropies and system sizes, and identify Loschmidt zeros in finite chains. Our analysis reveals distinct finite-size scaling of the Loschmidt echo at critical times with chain length and identifies the precise conditions on the anisotropy parameter governing the periodicity of the dynamical observables. In addition to the analytic study, we perform two types of numerical experiments on IBM-Q quantum devices. First, we use the Hadamard test to estimate the Bell-basis expansion coefficients and reconstruct the dynamical states, achieving accurate entanglement entropies and the Loschmidt echo for small systems. Second, we implement Trotter-error-free time-evolution circuits combined with randomized Pauli measurements. Post-processing via statistical correlations and classical shadows yields reliable estimates of the second-order Rényi entanglement entropy and the Loschmidt echo, showing satisfactory agreement with exact results.
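The Loschmidt echo $L(t) = |\langle\psi_0|e^{-iHt}|\psi_0\rangle|^2$ studied above can be computed exactly for small chains by full diagonalization. The sketch below (an illustration only; the 4-site size and anisotropy value are our own choices, not the paper's exact setup) implements the quench protocol of interchanging odd- and even-bond strengths in a fully dimerized XXZ chain:

```python
import numpy as np

# Spin-1/2 operators
sx = np.array([[0, 0.5], [0.5, 0]], dtype=complex)
sy = np.array([[0, -0.5j], [0.5j, 0]], dtype=complex)
sz = np.array([[0.5, 0], [0, -0.5]], dtype=complex)

def op(single, site, n):
    """Embed a single-site operator at `site` in an n-site chain."""
    mats = [np.eye(2, dtype=complex)] * n
    mats[site] = single
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def xxz_dimer(n, j_odd, j_even, delta):
    """Open XXZ chain with alternating bond strengths and anisotropy delta."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for b in range(n - 1):
        J = j_odd if b % 2 == 0 else j_even
        for s, w in ((sx, 1.0), (sy, 1.0), (sz, delta)):
            H += J * w * op(s, b, n) @ op(s, b + 1, n)
    return H

n, delta = 4, 0.5
H0 = xxz_dimer(n, 1.0, 0.0, delta)  # fully dimerized: only odd bonds active
H1 = xxz_dimer(n, 0.0, 1.0, delta)  # quench: odd and even bonds interchanged

# Ground state of the pre-quench Hamiltonian (a product of singlets)
_, v = np.linalg.eigh(H0)
psi0 = v[:, 0]

# Loschmidt echo L(t) = |<psi0| exp(-i H1 t) |psi0>|^2 via spectral decomposition
w1, v1 = np.linalg.eigh(H1)
c = v1.conj().T @ psi0
def echo(t):
    return abs(np.sum(np.abs(c)**2 * np.exp(-1j * w1 * t)))**2

print(round(echo(0.0), 6))  # 1.0 at t = 0, as it must be
```

The return rate function of the abstract would follow as $r(t) = -\tfrac{1}{N}\log L(t)$; for larger chains the paper's exact Bell-basis expressions replace this brute-force diagonalization.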
Exactly solvable dissipative models provide an analytical tool for studying the relaxation dynamics in open quantum systems. In this work, we study an exactly solvable model based on an anisotropic variant of the Yao-Lee spin-orbital model, with dissipation acting in the spin sector. We map Liouvillian dynamics to fermions hopping in a doubled Hilbert space under a non-Hermitian Hamiltonian and demonstrate the model's exact solvability. We analyze the model's strong and weak symmetries, which protect an exponentially large manifold of non-equilibrium steady states, establishing the system as a physically feasible dissipative spin liquid. Furthermore, we analyze the transient dynamics in a translationally invariant sector and discover that the single-particle Liouvillian spectrum hosts an exceptional ring in momentum space. We map out a characteristic $\mathcal{PT}$ symmetry breaking transition driven by the dissipation strength, which governs the crossover from oscillatory to decaying relaxation of physical observables. Our work provides a physically motivated, solvable setting for exploring the coexistence of dissipative spin liquid physics and Liouvillian spectral singularities.
We revisit the Pauli-Clifford connection to introduce a real, grade-preserving algebraic framework for $n$-qubit quantum computation based on the tensor product $C\ell_{2,0}(\mathbb{R})^{\otimes n}$. In this setting, the bivector $J = e_{12}$ satisfies $J^{2} = -1$ and supplies the complex structure on the $J$-closure of a minimal left ideal via right multiplication, while Pauli operations arise as left actions of Clifford elements. The Peirce decomposition organizes the algebra into sector blocks determined by primitive idempotents, with nilpotent elements generating transitions between sectors. Quantum states are represented as equivalence classes modulo the left annihilator, exhibiting the quotient description underlying the minimal left ideal. Adopting a canonical stabilizer mapping, the $n$-qubit computational basis state $|0\cdots 0\rangle$ is given natively by a tensor product of these idempotents. This structural choice leads to a compatibility law that is stable under the geometric product for $n$ qubits and aligns symbolic Clifford multiplication with unitary evolution on the Hilbert space.
Optically-active solid-state systems such as self-assembled quantum dots, rare-earth ions, and color centers in diamond and SiC are promising candidates for quantum network, computing, and sensing applications. Although the nuclei in these systems naturally lead to electron spin decoherence, they can be repurposed, if they are controllable, as long-lived quantum memories. Prior work showed that a metric known as the one-tangling power can be used to quantify the entanglement dynamics of sparse systems of spin-1/2 nuclei coupled to color centers in diamond and SiC. Here, we generalize these findings to a wide range of electron-nuclear central-spin systems, including those with spin > 1/2 nuclei, such as in III-V quantum dots (QDs), rare-earth ions, and some color centers. Focusing on the example of an (In)GaAs QD, we offer a procedure for pinpointing physically realistic parameter regimes that yield maximal entanglement between the central electron and surrounding nuclei. We further harness knowledge of naturally-occurring degeneracies and the tunability of the system to generate maximal entanglement between target subsets of spins when the QD electron is subject to dynamical decoupling. We also leverage the one-tangling power as an exact and immediate method for computing QD electron spin dephasing times with and without the application of spin echo sequences, and use our analysis to identify coherence-sustaining conditions within the system.
We re-analyze a recent experiment by Sharoglazova et al., highlighting the role of the transient regime. We prove that, in the evanescent state of the stationary regime, their experimental data can be interpreted in terms of Bohmian quantum mechanics. At the same time, Bohm's quantum potential can be re-interpreted as a kinetic-energy term in the framework of Nelson's stochastic quantum mechanics, with a hidden-variable, non-classical speed fitting the experimental data equally well. The experiment can also be interpreted within orthodox quantum mechanics and is therefore not conclusive in selecting or challenging any framework.
We present an analytical model showing how the gauge-invariant loop phase in a three-level closed-loop atomic system imprints as bright-dark lobes in Laguerre-Gaussian (LG) probe beam intensity patterns. In the weak-probe limit, the output intensity in such systems includes Beer-Lambert absorption, a scattering term, and a loop-phase-dependent interference term, with optical depth controlling visibility. These systems enable mapping of arbitrary phases via interference rotation and offer a platform to measure the Berry phase. The Berry phase emerges as a geometric holonomy acquired by the dark states during adiabatic traversal of the LG phase defined in a toroidal parameter space, manifesting as fringe shifts that are absent in open systems. Experimental realization using cold atoms or solid-state platforms appears feasible, positioning structured light in closed-loop systems as an ideal testbed for geometric phases in quantum optics.
Non-stabilizerness, also known as ``magic,'' quantifies how far a quantum state departs from the stabilizer set. It is a central resource behind quantum advantage and a useful probe of the complexity of quantum many-body states. Yet standard magic quantifiers, such as the stabilizer Rényi entropy (SRE) for qubits and the mana for qutrits, are costly to evaluate numerically, with the computational complexity growing rapidly with the number $N$ of qudits. Here we introduce efficient, numerically exact algorithms that exploit the fast Hadamard transform to compute the SRE for qubits ($d=2$) and the mana for qutrits ($d=3$) for pure states given as state vectors. Our methods compute SRE and mana at cost $O(N d^{2N})$, providing an exponential improvement over the naive $O(d^{3N})$ scaling, with substantial parallelism and straightforward GPU acceleration. We further show how to combine the fast Hadamard transform with Monte Carlo sampling to estimate the SRE of state vectors, and we extend the approach to compute the mana of mixed states. All algorithms are implemented in the open-source Julia package HadaMAG ( this https URL ), which provides a high-performance toolbox for computing SRE and mana with built-in support for multithreading, MPI-based distributed parallelism, and GPU acceleration. The package, together with the methods developed in this work, offers a practical route to large-scale numerical studies of magic in quantum many-body systems.
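For reference, the stabilizer 2-Rényi entropy of a pure state is $M_2 = -\log\big(\sum_P \langle\psi|P|\psi\rangle^4 / d\big)$ with the sum over all $d^2$ Pauli strings; the fast-Hadamard-transform algorithms above accelerate exactly this computation. A naive brute-force sketch for tiny systems (our own illustration, not the HadaMAG implementation) verifies that stabilizer states have zero SRE while a $T$ state has $M_2 = \log(4/3)$:

```python
import numpy as np
from itertools import product

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [I2, X, Y, Z]

def sre2(psi):
    """Brute-force stabilizer 2-Renyi entropy M_2 of a pure state vector."""
    n = int(np.log2(psi.size))
    d = 2 ** n
    total = 0.0
    for combo in product(PAULIS, repeat=n):
        P = combo[0]
        for p in combo[1:]:
            P = np.kron(P, p)
        ev = np.vdot(psi, P @ psi).real  # Pauli expectation value <P>
        total += ev ** 4
    return -np.log(total / d)

# Stabilizer state |00>: M_2 = 0
psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0
# Magic state |T> = (|0> + e^{i pi/4}|1>)/sqrt(2) on qubit 1, |0> on qubit 2
t = np.array([1.0, np.exp(1j * np.pi / 4)]) / np.sqrt(2)
psiT = np.kron(t, np.array([1.0, 0.0]))

print(abs(sre2(psi0)) < 1e-9)   # True: stabilizer states carry no magic
print(round(sre2(psiT), 4))     # 0.2877 = log(4/3)
```

This exhaustive loop scales as $O(d^{3N})$-type brute force and becomes useless beyond a handful of qubits, which is precisely the bottleneck the paper's $O(N d^{2N})$ fast-Hadamard-transform approach addresses.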
We analyze quantum state preservation in open quantum systems using quantum error-correcting (QEC) codes explicitly embedded in microscopic system-bath models. Rather than assuming abstract quantum channels, we consider multi-qubit registers coupled to bosonic thermal environments, derive a second-order master equation for the reduced dynamics, and use it to benchmark the five-qubit, Steane, and toric codes under local and collective noise. We compute state fidelities as functions of system-bath coupling strength, bath temperatures, and the number of correction cycles. In the low-temperature regime, repeated error correction with the five-qubit code significantly suppresses decoherence and relaxation for weak-to-moderate couplings. In the high-temperature regime, thermal excitations reduce the effectiveness of all codes, although within the parameter range studied, the five-qubit code still yields the highest fidelities among the three codes. For two-qubit Werner states, we identify a critical evolution time associated with an early-time crossover, before which the overhead of QEC does not compensate for the noise-induced degradation; this critical time increases with entanglement, reflecting the greater fragility of strongly entangled states. Overall, our results provide a microscopic master-equation-based framework for benchmarking QEC performance in realistic open-system environments and for assessing code behavior in near-term noisy quantum architectures.
Contextuality is a central feature distinguishing quantum from classical probability theories, but its operational meaning is often stated only qualitatively. In this Letter, we study a simple information-theoretic question: how much additional contextual information must a classical simulation introduce when it tries to keep a shared internal description fixed across contexts? To make this question precise, we analyze a minimal external-label simulation model in which the remaining context dependence is carried only by an auxiliary label. For this model, we define an obstruction cost as the minimum mutual information between the context and the auxiliary label required to reproduce the observed statistics. We then prove a conservative quantitative lower bound: any linear witness that separates the observed statistics from the zero-obstruction set yields a positive lower bound on this cost. We do not claim that this bound is tight, and we do not claim that the simulation model covers every possible classical architecture. Its role is narrower and more explicit: under fixed shared-state semantics, contextuality can be read as a certificate of irreducible external bookkeeping cost in a simple and well-defined simulation model.
Randomness is intrinsic to quantum mechanics; the outcome of a measurement on a quantum state is a random variable. This feature has been applied to randomness certification, where one party must decide whether the data they receive is truly random. However, existing demonstrations are not black-box: to avoid falsely certifying deterministic data, assumptions must be made about how the data was generated. Here we demonstrate genuine randomness certification in the black-box setting -- one in which no deterministic adversary, even with unlimited computational power, will succeed in getting their data certified. We use it to provably generate random numbers using only measurements on single-particle states and without a random seed.
The current state, emerging trends, and practical challenges of optical fiber-based quantum communication for power-network SCADA systems must be addressed to fully realize the platform's potential in real-world power system SCADA communications, which involve massive volumes of real-time data as well as data management, encoding, and applications such as quantum cryptography. Quantum key distribution (QKD) is an essential part of the cybersecurity paradigm for quantum communication. Although quantum computing with individual circuits yields probabilistic outcomes for the problem at hand, real-world datasets are complex and challenging to handle, even with telemetry. When the cybersecurity triad of confidentiality, integrity, and availability (CIA) is applied in reverse order (AIC), availability is given priority, as is appropriate for electric power networks. This research assesses the BB84, E91, B92, and SARG04 cryptographic protocols by applying them to large, multivariate power-system SCADA datasets and comparing the outcomes. By leveraging the variety of QKD protocols available with quantum electronics hardware, this simulation work provides a promising avenue for developing implementable frameworks and deploying SCADA/PMU networks in actual power systems.
Atom loss is a major error source in neutral-atom quantum computers, accounting for over 40% of the total physical errors in recent experiments. Its nonlinear and correlated nature poses significant challenges: current syndrome extraction circuits require additional overhead or sacrifice loss tolerance, and existing decoders are computationally inefficient, suboptimal, or lack provable guarantees. To address these challenges, we propose the Pauli Envelope framework, which bounds the effect of atom loss with low-weight, efficiently computable Pauli approximations, generalizing existing loss-to-Pauli methods and enabling rigorous analysis. Guided by this framework, we design improved atom-replenishing syndrome extraction circuits, the Mid-SWAP syndrome extraction, which achieves optimal loss distance and minimal space-time overhead for rotated surface codes. We also propose two decoders: an Envelope-MLE decoder achieving the optimal loss distance $d_{\mathrm{loss}} \sim d$, and an Envelope-Matching decoder achieving $d_{\mathrm{loss}} \sim 2d/3$ via Minimum-Weight Perfect Matching (MWPM), surpassing the previous best ($d_{\mathrm{loss}} \sim d/2$) and readily integrating with fast correlated decoding techniques for transversal logical circuits. Circuit-level simulations demonstrate up to 40% higher thresholds and 30% higher effective distances compared with existing methods in the loss-dominated regime. Moreover, we explore correlated atom loss and show that it is easier to correct than independent loss, with thresholds rising from 5.15% to 7.82%. Remarkably, our Envelope-MLE decoder improves the error suppression factor of a hybrid MLE--machine-learning decoder from $\Lambda = 2.14$ to $\Lambda = 2.24$ on recent experimental data.
We introduce an improved one-shot characterisation of randomness extraction against quantum side information (privacy amplification), strengthening known one-shot bounds and providing a unified derivation of the tightest known asymptotic constraints. Our main tool is a new class of smooth conditional entropies defined by lifting classical smooth divergences through measurements. A key role is played by the measured smooth Rényi relative entropy of order 2, which we show to admit an equivalent variational form: it can be understood as allowing for smoothing over not only states, but also non-positive Hermitian operators. Building on this, we establish a tightened leftover hash lemma, significantly improving over all known smooth min-entropy bounds on extractable randomness and recovering the sharpest classical achievability results. We extend these methods to decoupling, the coherent analogue of privacy amplification, obtaining a corresponding improved one-shot bound. Relaxing our smooth entropy bounds leads to one-shot achievability results in terms of measured Rényi divergences, tightening the bounds of [Dupuis, arXiv:2105.05342] and recovering the state-of-the-art asymptotic i.i.d. error exponents shown there. We show an approximate optimality of our results by giving a matching one-shot converse bound up to additive logarithmic terms. This yields an optimal second-order asymptotic expansion of privacy amplification under trace distance, establishing a significantly tighter one-shot achievability result than previously shown in [Shen et al., arXiv:2202.11590] and proving its optimality for all hash functions.
Photonic Quantum Machine Learning (PQML) is an emerging approach to scalable, energy-efficient quantum information processing that combines photonic quantum computing technologies with machine learning techniques. Photonic technologies offer several benefits: room-temperature operation, fast (low-delay) signal processing, and the ability to represent computations in high-dimensional (Hilbert) spaces. This makes them a good candidate for the near-term development of quantum devices. However, noise remains a major limiting factor for the performance, reliability, and scalability of PQML implementations. This review provides a detailed and systematic analysis of the sources of noise that affect PQML implementations. We present an overview of the principal photonic quantum computer designs and summarize the many types of quantum machine learning algorithms that have been successfully implemented on photonic quantum computer architectures, such as variational quantum circuits, quantum neural networks, and quantum support vector machines. We identify and categorize the primary sources of noise within photonic quantum systems and examine, in an algorithm-specific way, how they degrade learning accuracy, destabilize training, and slow convergence. Additionally, we review traditional and advanced techniques for characterizing noise and provide an extensive survey of strategies for mitigating the effects of noise on learning performance. Finally, we discuss recent advances that demonstrate PQML's capability to operate in real-world settings under realistic noise conditions, as well as future obstacles that will challenge the use of PQML as an effective quantum processing platform.
We characterize single-mode vacuum squeezing generated by a SNAIL Parametric Amplifier (SPA) operated under conditions representative of practical sensing and qubit-readout experiments. Motivated by prior expectations that Kerr-induced distortion limits squeezing in degenerate parametric amplifiers, we varied external flux and pump power to explore operating points where the Kerr nonlinearity is theoretically minimized. We find that, for practical applications where the squeezing frequency is fixed, the Kerr coefficient could be varied by about a factor of two, and the achievable squeezing showed no significant dependence on it. Theoretical modeling supports this observation and indicates that baseline Kerr values in state-of-the-art SPAs are already too small to impose a practical limitation. Instead, squeezing was dominated by internal resonator loss and insertion loss in the microwave chain. These results indicate that, in practical SPAs, reducing loss, rather than suppressing Kerr, is the primary route to improved squeezing performance.
The rich dynamics and large Hilbert space of quantum harmonic oscillators make them natural candidates for hardware-efficient and error-correctable quantum information processing. However, implementing direct entangling operations between oscillators remains an outstanding challenge. Existing strategies typically rely on parametrically activating interactions that populate the excited states of a nonlinear element, which introduces additional dissipation channels and potential leakage from the encoded manifold. Here, we engineer a Raman-assisted cross-Kerr interaction between microwave photons hosted in two superconducting cavities. Crucially, this dynamics does not excite the mediating nonlinear coupler, thereby suppressing coupler induced decoherence and leakage out of the bosonic code space. We use this direct nonlinear coupling to implement a controlled-phase gate within the single- and two-photon subspaces of two oscillators, deterministically generating entanglement between them. Finally, we use these engineered dynamics to implement a photon-number parity check on a storage cavity via purely bosonic interactions with an ancillary cavity, demonstrating an enhancement in the storage lifetime. Our work provides a promising pathway toward engineering robust operations that act entirely within a protected bosonic code space and realizing fault-tolerant quantum information processing with bosonic elements.
Code-switching offers a route to universal, fault-tolerant quantum computation by circumventing the limitation implied by the Eastin-Knill theorem against a universal transversal gate set within a single quantum code. Here, we present a fault-tolerant code-switching protocol between two versions of the $[[8, 3, 2]]$ code. One version supports weakly fault-tolerant single-qubit Clifford gates, while the other supports a logical $\overline{\mathrm{CCZ}}$ gate via transversal $T/T^\dagger$ together with logical $\overline{\mathrm{CZ}}$, $\overline{\mathrm{CNOT}}$, and $\overline{\mathrm{SWAP}}$ gates. Because both codes have distance 2, the protocol operates in a postselected, error-detecting regime: single faults lead to detectable outcomes, and accepted runs exhibit quadratic suppression of logical error rates. This yields a universal scheme for postselected fault-tolerant computation. We validate the protocol numerically through simulations of state preparation, code switching, and a three-logical-qubit implementation of Grover's search.
Gaussian baths are widely used to model non-Markovian environments, yet the cost of accurate simulation at long times remains poorly understood, especially when spectral densities exhibit nonanalytic behavior as in a range of realistic models. We rigorously bound the complexity of representing bath correlation functions on a time interval $[0,T]$ by sums of complex exponentials, as employed in recent variants of pseudomode and hierarchical equations of motion methods. These bounds make explicit the dependence on the maximal simulation time $T$, inverse temperature $\beta$, and the type and strength of singularities in an effective spectral density. For a broad class of spectral densities, the required number of exponentials is bounded independently of $T$, achieving time-uniform complexity. The $T$-dependence emerges only as polylogarithmic factors for spectral densities with strong singularities, such as step discontinuities and inverse power-law divergences. The temperature dependence is mild for bosonic environments and disappears entirely for fermionic environments. Thus, the true bottleneck for long-time simulation is not the simulation duration itself, but rather the presence of sharp nonanalytic features in the bath spectrum. Our results are instructive both for long-time simulation of non-Markovian open quantum systems, as well as for Markovian embeddings of classical generalized Langevin equations with memory kernels.
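The exponential-sum representation at the heart of these bounds can be made concrete with a toy example. Below, a correlation function that is exactly a two-term sum of complex exponentials, $C(t) = e^{-t}\cos(2t)$, is reconstructed from its modes $(c_k, z_k)$ over $[0, 10]$; realistic spectral densities require more terms, with counts bounded as described above. This is a minimal illustration of the representation itself, not the pseudomode or HEOM construction:

```python
import cmath
import math

# Toy exponential-sum representation C(t) ~ sum_k c_k * exp(z_k * t).
# Here C(t) = exp(-t) * cos(2 t) is exactly two complex-conjugate modes.
modes = [(0.5, complex(-1.0, 2.0)), (0.5, complex(-1.0, -2.0))]  # (c_k, z_k)

def c_exact(t):
    return math.exp(-t) * math.cos(2.0 * t)

def c_approx(t):
    return sum(c * cmath.exp(z * t) for c, z in modes).real

grid = [0.01 * i for i in range(1001)]                 # t in [0, 10]
max_err = max(abs(c_exact(t) - c_approx(t)) for t in grid)
# max_err sits at machine precision: the representation is exact here
```

For generic baths the analogous question is how many modes are needed to reach a target accuracy on $[0, T]$, which is exactly the quantity the bounds above control.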
We introduce Catalytic Quantum Error Correction (CQEC), a state recovery protocol exploiting catalytic covariant transformations. CQEC recovers a known target state from noisy copies without an error \emph{magnitude} threshold: recovery succeeds whenever the coherent modes satisfy $\mathcal{C}(\rho_0) \subseteq \mathcal{C}(\rho_\mathrm{noisy})$, regardless of noise strength. The main practical bottleneck -- catalyst preparation requiring $n^* \sim d^4 e^{2\gamma}$ copies -- is resolved by a three-stage pipeline combining CPMG dynamical decoupling, Clifford twirling, and the recursive swap test, achieving $F_\mathrm{cat} > 0.96$ with only 8~copies ($10^9$-fold reduction). Numerical validation across four quantum algorithms ($d = 4$--$64$), a cryptographic protocol, and three noise models confirms $F > 0.999$ in the asymptotic limit across 200~configurations.
We present a protocol for actively suppressing Gauss law violations in quantum simulations of SU(2) lattice gauge theory. The protocol uses mid-circuit measurements to extract a characterization of the gauge-violation sector at each lattice vertex, resolving both the total angular momentum and magnetic quantum numbers of the violation via a group quantum Fourier transform. Syndrome-conditional recovery operations map the state back to the gauge-invariant subspace through an iterative sweep over vertices, a procedure we call gauge cooling. We show that while the Knill-Laflamme conditions are not generically satisfied at vertices with nontrivial singlet multiplicity, every single-qubit error is detected by the gauge syndrome. We demonstrate gauge cooling on a single-plaquette simulation of the Kogut-Susskind Hamiltonian truncated to the spin-$1/2$ representation under depolarizing and amplitude damping noise, showing that the protocol restores gauge invariance and improves fidelity at noise rates representative of current superconducting hardware.
A generalized formulation of non-relativistic quantum mechanics is developed within $N$-dimensional geometric (NG) frameworks characterized by a power-law dispersion relation \(E \propto |p|^{j}\), where \(j = N - 1\). Starting from the generalized Minkowski distance in \(L^j\)-normed spaces, the conventional quadratic kinetic structure of three-dimensional geometry is extended to higher-order spatial derivatives, yielding a consistent \(j\)-th order Schrödinger equation. The formalism is applied to free particles and to particles confined within a one-dimensional infinite potential well for 2G, 3G, 4G, and 5G geometries. While plane-wave solutions and translational invariance are preserved, the spectral structure is modified, with bound-state energies scaling as \((2n+1)^{j}\), leading to cubic and quartic growth in higher geometries. The corresponding eigenfunctions exhibit mixed exponential, trigonometric, and hyperbolic forms determined by the roots of negative unity. A generalized probability framework based on \(j\)-fold conjugation is introduced, ensuring a real-valued probability density and consistent expectation values. Despite these generalizations, the Heisenberg uncertainty principle is preserved. The formulation presents quantum mechanics as a geometry-dependent theory in which dispersion relations, spectral properties, and probabilistic structure emerge from the underlying spatial metric.
We propose concrete protocols to realize quantum criticality due to excited-state quantum phase transitions (ESQPTs) experimentally in presumably the simplest and most resilient system involving a single trapped ion oscillating in a radio-frequency Paul trap. We identify a specific class of excited states of the Extended Rabi Model (ERM) Hamiltonian, which occur between two critical ESQPT energies of the model in its (anti)Jaynes-Cummings superradiant phase. Properties of these states motivate the definition of several ESQPT witness observables. We study their critical scaling behaviors as well as various distinct state evolutions by driving the system across the quantum criticalities by changing the qubit-phonon coupling strength linearly in time at different finite rates. A mapping of the theoretical control parameters of the ERM to the experimental parameters of a trapped ion setup is provided, and simulations are performed for values referencing existing state-of-the-art setups, addressing both unitary state evolutions as well as relevant open-system corrections.
We provide here a universal approximation theorem with precise quantitative error bounds for noisy quantum neural networks. We focus on applications to Quantitative Finance, where target functions are often given as expectations. We further provide a detailed numerical analysis, testing our results on actual noisy quantum hardware.
The concept of a particle is ambiguous in quantum field theory. It is generally agreed that particles depend not only on spacetime, but also on the coordinates used to parametrise spacetime points. In contrast, one of us has proposed a coordinate-frame-independent model of quantum particles within the framework of quantum field theory in curved spacetime. The aim of this article is to present a scalar-field-equation solution that is not only a zero-rank tensor under general coordinate transformations, but also common to anti-de-Sitter, de-Sitter, closed and open Einstein static universes. Moreover, it locally reduces to a Minkowski plane-wave solution and is non-perturbative in curvature. The former property makes it suitable for the standard applications of quantum theory in particle physics, while the latter allows one to gain insights into quantum physics in the strong-gravity regime.
Lattice gauge theory is an important framework for studying gauge theories that arise in the Standard Model and condensed matter physics. Yet many systems (or regimes of those systems) are difficult to study using conventional techniques, such as action-based Monte Carlo sampling. In this paper, we demonstrate the use of gauged Gaussian projected entangled pair states as an ansatz for a lattice gauge theory involving dynamical physical matter. We study a $\mathbb{Z}_2$ gauge theory on a two-dimensional lattice with a single flavor of fermionic matter on each lattice site. For small systems, our results show agreement with results computed by exactly diagonalizing the Hamiltonian, and demonstrate that the approach is computationally feasible for larger system sizes where exact results are unavailable. This is a further step on the road to studying higher dimensions and other gauge groups with manageable computational costs while avoiding the sign problem.
We study classical and quantum spin models derived from one-dimensional cellular automata (CA) with nonlinear update rules, focusing on rules 30, 54 and 201. We argue that the classical models, defined such that their ground states correspond to allowed trajectories of the CA, are frustrated and can be described in terms of local defect variables. Including quantum fluctuations through the addition of a transverse field, we study their ground state phase diagram and quantum phase transitions. We show that the nonlinearity of the CA rule leads to a quantum order-by-disorder mechanism, which selects a particular (rule-dependent) spatial structure for small transverse fields, with spontaneous breaking of the translation symmetry in some cases. Using numerical results for larger fields, we also observe a first-order quantum phase transition into a quantum paramagnet, as in previous studies of spin models based on linear CA rules.
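For reference, the CA trajectories underlying these models come from a simple local update rule. A minimal elementary-CA step with periodic boundaries (a generic sketch of Wolfram-numbered rules such as rule 30, not the paper's spin-model construction) looks like:

```python
# One synchronous update of an elementary cellular automaton with
# periodic boundaries.  The rule number's binary digits give the new
# cell value for each 3-cell neighborhood (left, center, right).
def ca_step(cells, rule):
    n = len(cells)
    return [(rule >> (4 * cells[(i - 1) % n]
                      + 2 * cells[i]
                      + cells[(i + 1) % n])) & 1
            for i in range(n)]

state = [0, 0, 0, 1, 0, 0, 0]
after = ca_step(state, 30)   # rule 30: single seed spreads nonlinearly
# after == [0, 0, 1, 1, 1, 0, 0]
```

The classical spin models described above are defined so that their ground states encode exactly such trajectories, stacked as rows of a two-dimensional configuration.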
The observable spacetime can be viewed as worldline coincidences (events) between a particle system and the observers of an extended (material) reference frame (ERF). Particle positions are then operationally well defined with respect to that frame. In the ideal regime where the ERF contributes negligibly to the stress--energy tensor, the metric field $g_{ab}$ is indifferent to its physical presence. Accordingly, $g_{ab}$ may be viewed as encoding spacetime intervals relative to any ideal ERF placed in the region of interest. In quantum theory, by contrast, the localization events defining such intervals are naturally accompanied by correlations with local observers of the ERF. Motivated by this complementarity, we propose that the metric encodes, in geometric form, the relational information carried by correlations with a local reference frame, thereby dispensing with its explicit presence. Under a suitable constraint on the corresponding conditional entropy, this framework yields the full nonlinear Einstein equation with a reference spacetime whose scalar curvature equals the cosmological constant.
Understanding the stability of integrability in many-body quantum systems is key to controlling dynamics and predicting thermalization. While the breakdown of integrability in short-range interacting systems is well understood, the role of long-range couplings -- ubiquitous and experimentally realizable -- remains unclear. We show that in fully connected models, integrability is either robust or extremely fragile, depending on whether perturbations are non-extensive, extensive one-body, or extensive two-body. In contrast to finite short-range systems, where any of these perturbations can induce chaos at finite strength, in finite fully connected models chaos is triggered only by extensive two-body perturbations, and then even at infinitesimal strength. Chaos develops within energy bands defined by symmetries, leading to a fragmented realization of the eigenstate thermalization hypothesis and clarifying how microcanonical shells can be constructed in such systems. We also introduce a general symmetry-based framework that explains the stability of integrability.
Building on Lin's breakthrough MIP$^{co}$ = coRE and an encoding of non-local games as universal sentences in the language of tracial von Neumann algebras, we show that locally universal tracial von Neumann algebras have undecidable universal theories. This implies that no such algebra admits a computable presentation. Our results also provide, for the first time, explicit examples of separable II$_1$ factors without computable presentations, and in fact yield a broad family of them, including McDuff factors, factors without property Gamma, and property (T) factors. We also obtain analogous results for locally universal semifinite von Neumann algebras and tracial C*-algebras. The latter provides strong evidence for a negative solution to the Kirchberg Embedding Problem. We discuss how these results pose obstructions to approximation properties in the class of tracial and semifinite von Neumann algebras.
We compute the degeneracy of energy levels in the Kitaev quantum double model for any discrete group $G$ on any planar graph forming the skeleton of a closed orientable surface of arbitrary genus. The derivation is based on the fusion rules of the properly identified vertex and plaquette excitations, which are selected among the anyons, i.e., the simple objects of the Drinfeld center $\mathcal{Z}(\mathrm{Vec}_G)$. These degeneracies are given in terms of the corresponding $S$-matrix elements and allow one to obtain the exact finite-temperature partition function of the model, valid for any finite-size system.
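As an illustration of how $S$-matrix data fix such degeneracies, the Verlinde-type count $\mathrm{GSD}(g) = \sum_a S_{0a}^{2-2g}$ reproduces the familiar $4^g$ ground-state degeneracy on a genus-$g$ surface for the $G = \mathbb{Z}_2$ quantum double (the toric code), whose four anyons $\{1, e, m, f\}$ all have $S_{0a} = 1/2$. This is a textbook special case, not the paper's full finite-temperature partition function:

```python
from fractions import Fraction

# First row of the S-matrix of the Drinfeld center Z(Vec_{Z_2}):
# four abelian anyons {1, e, m, f}, each with S_{0a} = 1/2.
S0 = [Fraction(1, 2)] * 4

def gsd(g):
    """Verlinde-type ground-state count on a closed genus-g surface."""
    return sum(s ** (2 - 2 * g) for s in S0)

# gsd(1) == 4 (torus), gsd(2) == 16, matching GSD = 4**g
```

Exact rational arithmetic (`fractions.Fraction`) keeps the integer degeneracies free of floating-point noise; for non-abelian doubles the same formula runs over non-equal $S_{0a}$ entries.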
This is the manual of the first version of QEDtool, an object-oriented Python package that performs numerical quantum electrodynamics calculations, with focus on full state reconstruction in the internal degrees of freedom, correlations and entanglement quantification. Our package rests on the evaluation of Feynman amplitudes in the momentum-helicity basis within a relativistic framework. Users can specify both pure and mixed initial scattering states in polarization space. From the specified initial state and Feynman amplitudes, QEDtool reconstructs correlations that fully characterize the quantum polarization and entanglement within the final state. These quantities can be expressed in any inertial frame by arbitrary, built-in Lorentz transformations.
Accurate evaluation of nonlinear photonic integrated circuits requires separating input and output coupling efficiencies (i.e., $\eta_1$ and $\eta_2$), yet the conventional linear-transmission calibration method recovers only their product (i.e., $\eta_1\,\eta_2$) and therefore introduces systematic bias when inferring on-chip performance from off-chip data. We present bidirectional nonlinear optical tomography (BNOT), a direction-aware metrology that uses forward and backward pumping of complementary nonlinear probes, with process-appropriate detection, to break the ``degeneracy'' of $\eta_1\,\eta_2$ and estimate individual interface efficiencies with tight confidence intervals. The method links off-chip measurements to on-chip quantities through a compact observation model that explicitly incorporates pump fluctuations and detector noise, and it frames efficiency extraction as a joint constrained optimization. Monte Carlo studies show unbiased convergence of the estimated efficiencies to ground truth with low error across realistic operating regimes. Using these efficiency estimates to reconstruct on-chip nonlinear figures of merit yields distributions centered on the true values with reduced variance, whereas conventional ``degenerate'' calibration is biased and can substantially misestimate on-chip performance. BNOT is hardware-compatible and platform-agnostic, and provides unbiased characterization of off- and on-chip coupling efficiencies across nonlinear processes, enabling reproducible, coupling-resolved benchmarking for scalable systems in quantum optics, frequency conversion, and precision metrology.
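The degeneracy-breaking idea can be seen in a deliberately simplified toy model (hypothetical signal scalings, no noise; this is not BNOT's actual observation model): if linear transmission gives $T = \eta_1\eta_2$ while a directional $\chi^{(2)}$ signal scales as $\eta_1^2\eta_2$ forward and $\eta_1\eta_2^2$ backward, the ratio of the two nonlinear signals fixes $\eta_1/\eta_2$, which together with $T$ determines each efficiency separately:

```python
import math

# Toy degeneracy-breaking with hypothetical, noiseless scalings:
#   T   = eta1 * eta2            (linear transmission)
#   P_f = k * eta1**2 * eta2     (forward nonlinear signal)
#   P_b = k * eta1 * eta2**2     (backward nonlinear signal)
eta1_true, eta2_true, k = 0.8, 0.5, 2.0
T = eta1_true * eta2_true
P_f = k * eta1_true**2 * eta2_true
P_b = k * eta1_true * eta2_true**2

ratio = P_f / P_b                 # = eta1 / eta2 (unknown k cancels)
eta1 = math.sqrt(T * ratio)
eta2 = math.sqrt(T / ratio)
# recovers eta1 = 0.8, eta2 = 0.5 exactly in this noiseless toy
```

BNOT replaces this closed-form inversion with a joint constrained optimization over an observation model that includes pump fluctuations and detector noise, but the directional ratio is what breaks the $\eta_1\eta_2$ degeneracy in both cases.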
We study the quantum relaxation dynamics for a lattice version of the one-dimensional (1D) $N$-flavor Gross-Neveu (GN) model after a Hamiltonian parameter quench. Allowing for a system-reservoir coupling $\gamma$, we numerically describe the system dynamics through a time-dependent self-consistent Lindblad master equation. For a closed ($\gamma=0$) finite-size system subjected to an interaction parameter quench, the order parameter dynamics exhibits oscillations and revivals. In the thermodynamic limit, our results imply that the order parameter reaches its post-quench stationary value in accordance with the eigenstate thermalization hypothesis (ETH). However, time-dependent finite-momentum correlation matrix elements equilibrate only if $\gamma>0$. Our findings are consistent with the system being described by a pertinent Generalized Gibbs Ensemble (GGE) and, accordingly, highlight subtle yet important aspects of the post-quench relaxation dynamics of quantum many-body systems.
We present a spectroscopic investigation of $^{169}\mathrm{Tm}^+$ that provides two key foundations for its use as a platform for advanced quantum applications. First, we establish the complete spectroscopic road map for optical cycling (including laser cooling) by performing high-resolution spectroscopy on $^{169}\mathrm{Tm}^+$ ions in an ion trap. We characterize the primary $313\,\mathrm{nm}$ and complementary $448/453\,\mathrm{nm}$ cycling transitions, identify the essential near-infrared repumping frequencies, and determine the magnetic-dipole hyperfine $A$ constants for all relevant levels. Second, we report a detailed characterization of a metastable state as a candidate for hosting a robust qubit, performing lifetime measurements and Zeeman-resolved microwave hyperfine spectroscopy with $\mathrm{kHz}$ precision.
Non-relativistic quantum particles in the Earth's gravitational field are successfully described by the Schrödinger equation with Newton's gravitational potential. In particular, quantum mechanics is in agreement with experiments such as free fall and quantum interference induced by gravity. However, quantum mechanics is a low-energy approximation to quantum field theory. The latter successfully describes high-energy experiments. Gravity is embedded in quantum field theory through the general-covariance principle. This framework is known in the literature as quantum field theory in curved spacetime, where the concept of a quantum particle is, though, ambiguous. In this article, we study in this framework how a Hawking particle moves in the far-horizon region of Schwarzschild spacetime by computing its propagator. We find that this propagator differs from the one that follows from the path-integral formalism -- the formalism which adequately describes both free fall and quantum interference induced by gravity.
Can purely mechanical systems generate intelligent language? We prove that dissipative quantum dynamics with analytically tractable non-local context aggregation produce coherent text generation, while conservation laws cause fundamental failure. Employing Koopman operators with closed-form path integral propagators, we show that irreversible computation fundamentally requires both controlled information dissipation and causal context aggregation. Spectral analysis reveals emergent eigenvalue structure, separating into decay modes (forgetting), growth modes (amplification), and neutral modes (preservation) -- the essential ingredients for directed information flow. Hamiltonian constraints force the elimination of these dissipative modes, degrading performance despite unchanged model capacity. This establishes language generation as dissipative quantum field theory, proving that mechanical systems acquire intelligence through the combination of dissipation and non-locality, not through conservation.
Quantifying how much a quantum state breaks a symmetry is essential for characterizing phases, nonequilibrium dynamics, and open-system behavior. Quantum resource theory provides a rigorous operational framework to define and characterize such quantifiers of symmetry-breaking. As a first illustration, we exemplify the usefulness of resource theory by noting that the second-Rényi entanglement asymmetry can increase under symmetric operations, hence is not a resource monotone, and should not by itself be used to capture the quantum Mpemba effect. More importantly, motivated by mixed-state physics where weak and strong symmetries are inequivalent, we formulate a new resource theory tailored to strong symmetry, identifying free states and strong-covariant operations. This framework systematically identifies quantifiers of strong symmetry breaking for a broad class of symmetry groups, including a strong entanglement asymmetry. A particularly transparent structure emerges for U(1) symmetry, where the resource theory for strong symmetry breaking has a completely parallel structure to entanglement theory: the variance of the conserved quantity fully characterizes the asymptotic manipulation of strong symmetry breaking. By connecting this result to known results on the geometry of quantum state space, we obtain a quantitative framework to track how weak symmetry breaking is irreversibly converted into strong symmetry breaking in open quantum systems. We further propose extensions to generalized symmetries and illustrate the qualitative impact of strong symmetry breaking in analytically tractable QFT examples and applications.
Conformal invariance often accompanies criticality in Hermitian systems. However, its fate in non-Hermitian settings is less clear, especially near exceptional points where the Hamiltonian becomes non-diagonalizable. Here we investigate whether a 1+1-dimensional gapless non-Hermitian system can admit a conformal description, focusing on a PT-symmetric free-fermion field theory. Working in the biorthogonal formalism, we identify the conformal structure of this theory by constructing a traceless energy-momentum tensor whose Fourier modes generate a Virasoro algebra with central charge $c=-2$. This yields a non-Hermitian, biorthogonal realization of a logarithmic conformal field theory, in which correlation functions exhibit logarithmic scaling and the spectrum forms Virasoro staggered modules that are characterized by universal indecomposability parameters. We further present a microscopic construction and show how the same conformal data (with finite-size corrections) can be extracted from the lattice model at exceptional-point criticality, thereby supporting the field-theory prediction.
Excitons in anisotropic two-dimensional (2D) materials, defined by direction-dependent effective masses, are of pronounced interest for their roles in excitonic and magneto-optical phenomena. A perpendicular magnetic field complicates the separation of center-of-mass (c.m.) and relative motions, especially when electron and hole masses are comparable. Conventional theories often employ an approximate c.m. separation using factorized wave functions, modifying magnetic Hamiltonian terms and possibly introducing inaccuracies in magnetoexciton energy predictions. This work develops an exact analytical framework for c.m. and relative motion separation in anisotropic 2D magnetoexcitons, without resorting to the stationary-c.m. approximation. Starting from the full electron-hole Hamiltonian in a homogeneous magnetic field, the formalism uses the conserved pseudomomentum to derive a relative-motion Hamiltonian, revealing new anisotropy-dependent couplings and magnetic coefficients absent in approximate models. The resulting Schrödinger equation is treated via the Feranchuk-Komarov operator method and Levi-Civita transformation, allowing non-perturbative, systematically convergent solutions. Application to monolayer black phosphorus and titanium trisulfide, both freestanding and encapsulated in hexagonal boron nitride, yields magnetoexciton energies, diamagnetic coefficients, and probability densities for the ten lowest states across considerable magnetic-field ranges. The results demonstrate the significant influence of anisotropy-dependent coupling on magnetic response in systems with strong mass anisotropy. This formalism is generalizable to other anisotropic 2D semiconductors, establishing a foundation for advanced magneto-optical studies.
We propose and test logarithmic Krylov (logK) complexity, an operator growth measure akin to Krylov complexity defined through a replica approach, as a viable probe of early-time operator scrambling without false positives. In finite-dimensional quantum systems, such as the Lipkin--Meshkov--Glick (LMG) model and the mixed-field Ising model at the chaotic point, we provide numerical evidence that logK-complexity discriminates between genuine and saddle-dominated scrambling at early times, correctly avoiding the exponential contribution coming from the unstable saddle in the former case, and closely tracking the conventional Krylov complexity in the latter. In integrable quantum systems admitting infinite-dimensional Krylov subspaces, such as the SYK$_{2}$ model and the quantum inverted harmonic oscillator, we show that by modifying the Krylov spreading operator, obtained through generalizing the analytic continuation procedure in the replica trick, the logK complexity can be refined to capture the integrable properties of the theories. We supplement these analyses by extending the Krylov formalism in classical dynamical systems and defining classical versions of these operator growth measures, showing that the false positives arising from unstable saddles in classical phase space are non-existent.
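For orientation, the conventional Krylov machinery that these measures build on is the Lanczos recursion, which generates an orthonormal Krylov basis together with the coefficients $b_n$ that drive operator or state spreading. Below is a generic numerical sketch with full reorthogonalization on a small Hermitian test matrix; the paper's replica-based logK construction and modified spreading operator are not reproduced here:

```python
import numpy as np

# Lanczos recursion: Krylov basis {v_n} and coefficients b_n for a
# Hermitian generator H and seed vector v0.  Full reorthogonalization
# keeps the basis numerically orthonormal at small sizes.
def lanczos(H, v0, nmax):
    vs = [v0 / np.linalg.norm(v0)]
    bs = []
    for _ in range(nmax):
        w = H @ vs[-1]
        for v in vs:                      # full reorthogonalization
            w = w - (v.conj() @ w) * v
        b = np.linalg.norm(w)
        if b < 1e-12:                     # Krylov space exhausted
            break
        bs.append(b)
        vs.append(w / b)
    return np.array(bs), np.array(vs)

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
H = (A + A.conj().T) / 2                  # small Hermitian test matrix
e0 = np.zeros(8, dtype=complex); e0[0] = 1.0
bs, vs = lanczos(H, e0, 7)
# rows of vs form an orthonormal Krylov basis; bs are the hopping
# coefficients of the effective Krylov-chain Hamiltonian
```

Krylov complexity is then the expectation of the chain position $n$ in this basis; the logK variant above modifies how that expectation is weighted via the replica trick.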
The non-Hermitian skin effect (NHSE), characterized by a macroscopic accumulation of eigenstates at the edge of a system with open boundaries, is often ascribed to a non-trivial point-gap topology of the Bloch Hamiltonian. We revisit this connection and show that the eigenspectrum of non-normal operators is highly sensitive to boundary conditions and generic perturbations, and therefore does not constitute a stable object encoding topological information. Instead, topological properties are reflected in the singular-value spectrum of finite systems and, in the semi-infinite limit, correspond to boundary-localized eigenmodes implied by the index of the corresponding Toeplitz operator. For a Hatano-Nelson ladder, where point-gap winding and non-normality can be varied independently, we demonstrate that the NHSE can occur without point-gap winding and, conversely, that point-gap winding can persist without the NHSE. These results establish that the NHSE originates from spectral instability and non-reciprocity rather than topology, and that the commonly assumed relation between spectral winding and boundary localization relies on translational invariance and is therefore not generic.
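The boundary sensitivity at issue is easy to reproduce numerically for the plain Hatano-Nelson chain (a textbook single-chain sketch, not the ladder model studied above): with asymmetric hoppings the periodic-boundary spectrum traces a complex loop, while the open-boundary spectrum collapses onto the real axis.

```python
import numpy as np

# Hatano-Nelson chain: rightward hopping tR on the subdiagonal,
# leftward hopping tL on the superdiagonal.
N, tR, tL = 30, 1.0, 0.5
H_obc = np.diag([tR] * (N - 1), -1) + np.diag([tL] * (N - 1), 1)
H_pbc = H_obc.copy()
H_pbc[0, -1] = tR            # wrap-around hoppings close the ring
H_pbc[-1, 0] = tL

e_obc = np.linalg.eigvals(H_obc)   # real: 2*sqrt(tR*tL)*cos(k)
e_pbc = np.linalg.eigvals(H_pbc)   # complex loop: tR*e^{-ik} + tL*e^{+ik}
# the two spectra differ drastically, despite the matrices differing
# in only two entries -- the boundary sensitivity discussed above
```

The same non-normality that makes these eigenvalues so boundary-sensitive is why the paper argues that singular values, rather than eigenvalues, are the stable carriers of topological information.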
We analyze the challenges of benchmarking scientific (multi)-agentic systems, including the difficulty of distinguishing reasoning from retrieval, the risks of data/model contamination, the lack of reliable ground truth for novel research problems, the complications introduced by tool use, and the replication challenges due to the continuously changing/updating knowledge base. We discuss strategies for constructing contamination-resistant problems, generating scalable families of tasks, and the need for evaluating systems through multi-turn interactions that better reflect real scientific practice. As an early feasibility test, we demonstrate how to construct a dataset of novel research ideas to test the out-of-sample performance of our system. We also discuss the results of interviews with several researchers and engineers working in quantum science. Through those interviews, we examine how scientists expect to interact with AI systems and how these expectations should shape evaluation methods.