Non-equilibrium properties of strongly interacting gauge theories are often intractable with classical simulation methods. Owing to recent developments in quantum simulation, studies of their properties in two spatial dimensions are becoming accessible. By demonstrating the existence of an approximate spectrum-generating algebra for a pure gauge plaquette ladder, we predict and verify the existence of Quantum Many-Body Scars in spin-1 Quantum Link Models. The analysis of the model is facilitated by a dualization that maps the original gauge theory to a constrained spin chain. Were it not for the constraint, the system would have an exact spectrum-generating algebra. We propose a set of observables for diagnosing an approximate spectrum-generating algebra, which we expect to guide quantum simulators toward interesting physical regimes.
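For reference, a spectrum-generating algebra in the sense used above is an operator $Q^\dagger$ satisfying, on a subspace $W$ of the Hilbert space (the notation here is generic rather than the paper's),
\[
\big([H, Q^\dagger] - \omega\, Q^\dagger\big) W = 0, \qquad |S_n\rangle \propto (Q^\dagger)^n |S_0\rangle, \qquad H|S_n\rangle = (E_0 + n\omega)|S_n\rangle,
\]
so that repeated application of $Q^\dagger$ generates a tower of eigenstates with equal energy spacing $\omega$; when the relation holds only approximately, as for the constrained spin chain above, the tower survives as a set of anomalous, scarred eigenstates.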
We demonstrate a mechanism for the production of massive excitations in graphs. We treat the number of neighbors at each vertex of the graph (the degree) as a scalar field. We then introduce a mechanism, inspired by the Higgs mechanism in quantum field theory (QFT), that couples the degree field to a vector-like field living on the graph edges and represented mathematically by the incidence matrices of the graph. The coupling between the two fields produces a massless ground state and massive excitations, separated by a mass gap. The excitations can be treated as emergent massive particles propagating inside the graph. We study how the size of the graph and its density, defined as the ratio of edges to vertices, affect the mass gap and the localization properties of the massive excitations. We show that the most massive excitations, corresponding to the heaviest emergent particles, localize on high-density regions of the graph consisting of vertices with large degree. Conversely, the least massive excitations, corresponding to the lightest emergent particles, localize on a few vertices of smaller degree. Excitations with intermediate masses are less localized, spreading over more vertices instead. Our study shows that the emergence of matter-like structures with a range of masses is possible in discrete physical models, relying only on a few fundamental properties such as connectivity.
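As a minimal numerical illustration of the objects involved (our own toy stand-in, not the paper's coupled-field operator): the oriented incidence matrix $B$ of a graph gives the Laplacian $L = BB^{\mathsf{T}}$, whose diagonal is precisely the degree field; for a connected graph, $L$ has a single zero mode (a "massless" ground state) separated by a gap from "massive" excitations, whose localization can be quantified by an inverse participation ratio.

    import numpy as np
    import networkx as nx

    # Random graph; density is controlled by the edge probability p
    n, p = 60, 0.08
    G = nx.gnp_random_graph(n, p, seed=1)
    G = G.subgraph(max(nx.connected_components(G), key=len)).copy()

    B = nx.incidence_matrix(G, oriented=True).toarray()  # vertices x edges
    L = B @ B.T                                          # Laplacian; diag(L) = degree field
    w, v = np.linalg.eigh(L)                             # eigenvalues in ascending order

    print("massless ground state:", w[0])                # ~ 0 for a connected graph
    print("mass gap:", w[1])

    # Inverse participation ratio: ~ 1/n_vertices when spread out, ~ 1 when localized
    ipr = (v**4).sum(axis=0)
    print("IPR of lightest excitation:", ipr[1])
    print("IPR of heaviest excitation:", ipr[-1])

In this Laplacian stand-in, the heaviest mode concentrates on the largest-degree vertices, consistent with the localization pattern described in the abstract.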
Causal Dynamical Triangulations (CDT) is a methodology for defining and computing the gravitational path integral, whose aim is a fully fledged nonperturbative quantum field theory of gravity and spacetime. In analogy with lattice formulations of nongravitational quantum fields, CDT provides a blueprint for lattice quantum gravity, where, crucially, the dynamical, curved and causal nature of spacetime is built into the structure of the lattices from the outset. The regularized path integral involves a sum over triangulated spacetimes, each assembled from flat, Minkowskian building blocks. The degrees of freedom of general relativity are encoded in a coordinate-free manner in the neighbourhood relations of the building blocks and the length of their edges, which also serves as a short-distance cutoff. A well-defined Wick rotation makes this path integral amenable to Monte Carlo simulations. Despite the absence of an a priori preferred background geometry, numerical experiments have revealed the dynamical emergence of a quantum universe near the Planck scale. Its global properties are compatible with those of a de Sitter space, providing strong evidence for a well-defined classical limit. At the same time, large quantum fluctuations lead to unexpected properties on short scales, most prominently a spectral dimension near 2, replacing the classical value of 4. Computer simulations indicate the presence of an ultraviolet fixed point under renormalization, opening the door to a nontrivial continuum theory. Efforts are under way to construct observables that can elucidate the nonperturbative quantum origins of early-universe cosmology.
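Schematically, the regularized path integral referred to above takes the standard CDT form after the Wick rotation,
\[
Z_{\rm CDT} = \sum_{T} \frac{1}{C_T}\, e^{-S_{\rm Regge}[T]},
\]
where the sum runs over causal triangulations $T$, $C_T$ is the order of the automorphism group of $T$, and $S_{\rm Regge}[T]$ is the Regge form of the (Euclideanized) Einstein-Hilbert action evaluated on the piecewise flat geometry; Monte Carlo simulations sample this ensemble directly.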
The characteristics of a thermal system depend strongly on its response to thermal gradients and on the underlying microscopic interactions among its constituents. In the present study, we investigate the thermodynamic and transport properties of the quark-gluon plasma (QGP) at finite baryon chemical potential within a deep-learning-assisted quasi-particle model (DLQPM). The thermal masses of the quasi-particles, which depend on the temperature ($T$) and baryon chemical potential ($\mu_B$), are estimated using deep neural networks (DNNs) trained to reproduce lattice QCD (lQCD) results for the equation of state, obtained via a Taylor-like expansion around vanishing baryon chemical potential. The trained model acts as an effective emulator, enabling us to estimate thermodynamic and transport properties at finite $\mu_B$. We compute the speed of sound, specific heat, viscosity, and conductivity of the deconfined medium. Our findings are in good agreement with available lattice calculations and other phenomenological models. The present study demonstrates that this DNN-based approach provides an efficient framework for studying the properties of the QGP at finite baryon density.
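A minimal sketch of the kind of emulator this describes (hypothetical layer sizes, variable names, and loss; the actual DLQPM network, quasi-particle pressure integral, and lQCD training targets are defined in the paper): a small feed-forward network maps $(T, \mu_B)$ to a positive thermal mass and is trained so that a quasi-particle equation of state reproduces lattice data.

    import torch
    import torch.nn as nn

    class MassNet(nn.Module):
        """Hypothetical emulator: (T, mu_B) -> quasi-particle thermal mass."""
        def __init__(self, hidden=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(2, hidden), nn.Tanh(),
                nn.Linear(hidden, hidden), nn.Tanh(),
                nn.Linear(hidden, 1), nn.Softplus(),   # thermal masses are positive
            )
        def forward(self, T, mu_B):
            return self.net(torch.stack([T, mu_B], dim=-1)).squeeze(-1)

    def pressure(m, T):
        # Stand-in for the quasi-particle pressure integral p(m/T); placeholder form
        return T**4 * torch.exp(-m / T)

    model = MassNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    T = torch.linspace(0.15, 0.50, 64)        # GeV; placeholder grid
    mu_B = torch.zeros_like(T)                # trained around mu_B = 0, as in the text
    p_lattice = torch.rand(64)                # placeholder for lQCD pressure data

    for step in range(500):
        opt.zero_grad()
        loss = ((pressure(model(T, mu_B), T) - p_lattice)**2).mean()
        loss.backward()
        opt.step()

Once trained, evaluating model(T, mu_B) away from the training grid is what makes the network an emulator for finite-$\mu_B$ thermodynamics.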
Entanglement is a key quantity for characterizing quantum correlations in particle scattering processes, but its direct evaluation is computationally demanding on quantum hardware. In this work, we investigate whether fermion density profiles, which are easier to access, can serve as proxies for entanglement by framing the problem as a classification task across multiple entanglement thresholds. Using fermion scattering in the Thirring model as a test bed, we compare Quantum Convolutional Neural Networks (QCNNs) with classical CNNs of comparable parameter counts, and find that QCNNs achieve consistently competitive or superior accuracy with faster convergence and lower variance. Notably, we observe that increasing the model size does not improve performance within the architectures studied here, and that larger models appear to be more sensitive to the choice of encoding. Instead, a compact four-qubit QCNN provides the best results, suggesting that trainability and encoding choices matter more than model scaling. These findings demonstrate the potential of quantum and quantum-inspired machine learning models for extracting nontrivial quantum information from accessible observables, with implications for high-energy physics and quantum many-body systems.
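A four-qubit QCNN-style circuit of the type compared above can be sketched in a few lines of PennyLane (a generic toy with a made-up gate pattern; the paper's ansatz and encoding may differ): angle-encode a coarse density profile, apply a weight-shared two-qubit convolution, pool half the qubits away with controlled rotations, and classify against a threshold via a single expectation value.

    import pennylane as qml
    from pennylane import numpy as np

    dev = qml.device("default.qubit", wires=4)

    def conv(theta, wires):
        # Weight-shared two-qubit "convolution" unitary
        qml.RY(theta[0], wires=wires[0])
        qml.RY(theta[1], wires=wires[1])
        qml.CNOT(wires=wires)

    @qml.qnode(dev)
    def qcnn(params, x):
        qml.AngleEmbedding(x, wires=range(4))      # encode a 4-bin density profile
        for pair in [(0, 1), (2, 3), (1, 2)]:
            conv(params[0], wires=pair)            # same weights on every pair
        qml.CRZ(params[1][0], wires=[1, 0])        # pool qubit 1 into 0
        qml.CRZ(params[1][1], wires=[3, 2])        # pool qubit 3 into 2
        conv(params[2], wires=(0, 2))              # final convolution
        return qml.expval(qml.PauliZ(2))           # sign -> above/below threshold

    params = [np.random.uniform(0, np.pi, size=2) for _ in range(3)]
    print(qcnn(params, np.array([0.1, 0.4, 0.3, 0.2])))

The convolution-then-pooling structure is what keeps the parameter count logarithmic in qubit number, which is the usual motivation for comparing QCNNs against parameter-matched classical CNNs.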
We develop a comprehensive framework for constructing quantum error correcting codes (QECCs) from Abelian lattice gauge theories (LGTs) using quantum reference frames (QRFs) as a unifying formalism. We consider LGTs with arbitrary compact Abelian gauge groups supported on lattices in arbitrary numbers of spatial dimensions, and we work with both pure gauge theories and theories with couplings to bosonic and fermionic matter. The codes that we construct fall into two classes: First, Gauss law codes identify the code subspace with the full gauge-invariant sector of the theory. In models with matter coupled to gauge fields, these codes inherit a natural subsystem structure in which gauge-invariant Wilson loops and dressed matter excitations factorize the code space. Second, vacuum codes restrict the code subspace to the matter vacuum sector within the gauge-invariant subspace, yielding codes where errors correspond to gauge-invariant charge excitations rather than to violations of the Gauss law. Despite their distinct setup, we show that when the gauge group is finite, vacuum codes are unitarily equivalent to pure gauge theory Gauss law codes, and that when the group is continuous, this is only true upon a charge coarse-graining of the vacuum code. In all cases, QRFs provide a systematic apparatus for fully characterizing the codes' algebraic structures and correctable error sets. For clarity, we illustrate our general results in $\mathbb{Z}_2$-gauge theory, as well as in scalar and fermionic QED. These findings offer fundamental insights into the parallelism between quantum error correction and gauge theory and point toward practical advantages for simulating LGTs on noisy quantum devices.
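In the $\mathbb{Z}_2$ illustration, the pure-gauge Gauss law code has a familiar stabilizer-like description: with $X_e$ the Pauli-$X$ operator on link $e$, the Gauss operators and code space are
\[
G_v = \prod_{e \ni v} X_e, \qquad \mathcal{H}_{\rm code} = \left\{ |\psi\rangle \,:\, G_v|\psi\rangle = |\psi\rangle \ \text{for all vertices } v \right\},
\]
so that gauge-violating errors are flagged by syndromes $G_v = -1$, in direct analogy with stabilizer measurements.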
Is gauge symmetry merely a redundancy in our description, or does it carry a deeper information-theoretic significance? Quantum error-correcting codes (QECCs) show that redundancy can serve as a resource for protecting information against noise. In this work, we ask whether gauge theories can be understood in similar terms, and make this idea concrete in lattice quantum electrodynamics (QED), building on and extending earlier works that established a bridge between gauge systems, stabilizer codes, and quantum reference frames (QRFs). For Abelian gauge groups, we show that explicit recovery operations can be constructed using group-theoretical methods for error sets determined by both ideal and non-ideal QRFs. Applied to lattice QED, this yields two QECC structures: one in the pure-gauge sector and one including fermions. We construct a gauge-field QRF based on spanning trees of the lattice and a fermionic field QRF from the matter field, thereby making explicit how physical information is encoded. While the syndromes of gauge-violating errors associated with constraint measurements are generically degenerate, QRFs resolve this degeneracy and single out families of correctable errors. This establishes lattice QED as a QECC beyond the stabilizer setting and shows concretely how gauge symmetry provides an encoding structure that supports error correction.
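The spanning-tree construction has a simple combinatorial core that is easy to make concrete (an illustrative sketch of the counting only, not the paper's operator-level QRF): the links of a spanning tree act as the reference frame, while each remaining link closes exactly one fundamental cycle, i.e. one independent Wilson loop carrying gauge-invariant data.

    import networkx as nx

    G = nx.grid_2d_graph(4, 4)                 # a small square lattice (open boundaries)
    tree = nx.minimum_spanning_tree(G)         # spanning tree = gauge-field reference frame

    tree_edges = set(map(frozenset, tree.edges()))
    cotree = [e for e in G.edges() if frozenset(e) not in tree_edges]

    # vertices - 1 frame links; each remaining link closes one independent Wilson loop
    print(len(tree_edges), "frame links,", len(cotree), "Wilson-loop links")

    for u, v in cotree[:3]:
        # tree path u -> v has len(path) - 1 links; adding (u, v) closes the cycle
        cycle_len = len(nx.shortest_path(tree, u, v))
        print("loop through", (u, v), "has", cycle_len, "links")

For the 4x4 open lattice this counts 15 frame links and 9 Wilson-loop links, matching the 9 plaquettes, which is the counting behind "spanning tree fixes the gauge, co-tree carries the physics."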
One proposal to compute parton distributions from first principles is the large momentum effective theory (LaMET), which requires the Fourier transform of matrix elements computed non-perturbatively. Lattice quantum chromodynamics (QCD) provides calculations of these matrix elements over a finite range of Fourier harmonics that are often noisy or unreliable in the largest computed harmonics. It has been suggested that enforcing an exponential decay of the missing harmonics helps alleviate this issue. Using non-perturbative data, we show that the uncertainty introduced by this inverse problem in a realistic setup remains significant without very restrictive assumptions, and that the exact asymptotic behavior matters little for the values of $x$ where the framework is currently applicable. We show that the crux of the inverse problem lies in harmonics of order $\lambda=zP_z \sim 5$--$15$, where the signal in current lattice data is often marginal at best and the asymptotic behavior is not firmly established. We stress the need for more sophisticated techniques to account for this inverse problem, whether in the LaMET or in related frameworks such as the short-distance factorization. We also address a misconception that, with available lattice methods, the LaMET framework allows a "direct" computation of the $x$-dependence, whereas the alternative short-distance factorization only gives access to moments or fits of the $x$-dependence.
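The sensitivity is easy to reproduce in a self-contained toy (synthetic function and grids of our own choosing, not lattice data): build the cosine-Fourier data of a PDF-like function, truncate at $\lambda_{\max}$, and invert; the reconstruction at moderate $x$ shifts visibly as the cutoff moves through the $\lambda \sim 5$--$15$ region.

    import numpy as np

    # Toy PDF-like function on x in (0, 1] and its truncated Fourier data
    x = np.linspace(1e-3, 1, 400)
    dx = x[1] - x[0]
    q = x**-0.3 * (1 - x)**3
    q /= (q * dx).sum()

    lam = np.linspace(0, 15, 46)                 # harmonics lambda = z P_z
    dl = lam[1] - lam[0]
    h = np.array([(q * np.cos(l * x) * dx).sum() for l in lam])

    def reconstruct(lam_max):
        """Inverse cosine transform using only harmonics with lambda <= lam_max."""
        m = lam <= lam_max
        w = np.where(lam[m] == 0.0, 0.5, 1.0) * dl   # half weight at lambda = 0
        return (2 / np.pi) * np.array(
            [(w * h[m] * np.cos(lam[m] * xi)).sum() for xi in x])

    i = np.argmin(abs(x - 0.3))
    for lam_max in (5.0, 10.0, 15.0):
        qr = reconstruct(lam_max)
        print(f"lambda_max={lam_max:4.1f}: q_rec(0.3)={qr[i]:+.3f} vs q(0.3)={q[i]:.3f}")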
We present lattice results for $f_K/f_{\pi}$ in the iso-symmetric limit of pure QCD (isoQCD) with $N_f=2+1$ flavours, along with a determination of $|V_{us}|/|V_{ud}|$ and a study of the unitarity of the first row of the Cabibbo-Kobayashi-Maskawa (CKM) matrix after introducing strong isospin-breaking and QED effects. The results are based on a combination of a Wilson unitary action and the mixed-action setup introduced in arXiv:2309.14154 and arXiv:2510.20450. Combining the two regularisations enables more precise control over the continuum-limit extrapolation.
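The step from $f_K/f_\pi$ to $|V_{us}|/|V_{ud}|$ is the standard one via the ratio of leptonic decay widths,
\[
\frac{\Gamma(K^- \to \mu^- \bar\nu)}{\Gamma(\pi^- \to \mu^- \bar\nu)}
= \frac{|V_{us}|^2}{|V_{ud}|^2}\, \frac{f_K^2}{f_\pi^2}\,
\frac{m_K \left(1 - m_\mu^2/m_K^2\right)^2}{m_\pi \left(1 - m_\mu^2/m_\pi^2\right)^2}
\left(1 + \delta_{\rm EM}\right),
\]
so a lattice determination of $f_K/f_\pi$, combined with the measured widths and the electromagnetic correction $\delta_{\rm EM}$, fixes $|V_{us}|/|V_{ud}|$ and feeds into the first-row unitarity test $|V_{ud}|^2 + |V_{us}|^2 + |V_{ub}|^2 = 1$.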
We investigate the temperature dependence of the shear viscosity ($\eta$) and bulk viscosity ($\zeta$) of the gluon plasma using lattice QCD over the range 0.76--2.25$\,T_c$, extending from below the transition temperature $T_c$ across the transition region and into the deconfined phase. At each temperature, we employ three large, fine lattices, which enables controlled continuum extrapolations of the energy-momentum tensor correlators. Using gradient flow together with a recently developed blocking technique, we achieve percent-level precision for these correlators, providing strong constraints for a model-based spectral analysis. Since the inversion to real-time information is intrinsically ill posed, we extract viscosities by fitting spectral functions whose ultraviolet behavior is matched to the best available perturbative result, while the infrared region is described by a Lorentzian transport peak. The dominant modeling uncertainty associated with the transport peak width is bracketed by varying it over a physically motivated range set by thermal scales. We find that the shear-viscosity-to-entropy-density ratio, $\eta/s$, exhibits a minimum near the transition temperature $T_c$ and increases for $T>T_c$, whereas the bulk-viscosity-to-entropy-density ratio, $\zeta/s$, decreases monotonically over the entire temperature range studied.
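Concretely, the analysis described above rests on relations of the following form (written in one common convention; the paper's normalizations may differ): the Euclidean correlator is an integral over the spectral function, and the viscosity follows from a Kubo formula,
\[
G(\tau) = \int_0^\infty d\omega\, \rho(\omega)\,
\frac{\cosh\!\left[\omega\left(\tau - \tfrac{1}{2T}\right)\right]}{\sinh\!\left(\tfrac{\omega}{2T}\right)},
\qquad
\eta = \pi \lim_{\omega \to 0} \frac{\rho_{\rm shear}(\omega)}{\omega},
\]
with the infrared region modeled by a Lorentzian transport peak, e.g. $\rho_{\rm IR}(\omega) = \frac{\eta}{\pi}\,\frac{\omega\,\Gamma^2}{\omega^2 + \Gamma^2}$, whose width $\Gamma$ is the dominant modeling uncertainty varied over thermal scales.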
The orbifold lattice has been proposed as a route to practical quantum simulation of Yang--Mills theory, with claims of exponential speedup over all known approaches. Through analytical derivations, Monte Carlo simulation, and explicit circuit construction, we identify compounding costs entirely absent in Kogut--Susskind formulations: a mass-dependent Trotter overhead that scales as $m^4$, non-singlet contamination that grows as $m^2$ and worsens with penalty terms, and a mandatory mass extrapolation. Monte Carlo simulations of SU(3) establish a universal scaling: the continuum limit forces $m^2 \propto 1/a$, binding the Trotter step to the lattice spacing through a cost unique to orbifolds. For a fiducial $10^3$ calculation, the orbifold is $10^4$--$10^{10}$ times more expensive than every published alternative. These results indicate that the claimed computational advantages do not at present survive quantitative scrutiny.
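Combining the scalings quoted above makes the compounding explicit: reading the $m^4$ Trotter overhead as a step count and imposing the continuum-limit requirement $m^2 \propto 1/a$ gives
\[
N_{\rm Trotter} \;\propto\; m^4 \;\propto\; \frac{1}{a^2},
\]
so every refinement of the lattice spacing inflates the circuit depth quadratically, on top of the $m^2$-growing non-singlet contamination and the mass extrapolation that must still be performed afterwards.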