Any given prescription of quantum time travel necessarily endows the chronology-violating (CV) system on the closed timelike curve (CTC) with a Hilbert space. However, under the two foremost models, Deutsch's prescription (D-CTCs) and postselected teleportation (P-CTCs), the CV system is treated very differently: D-CTCs assign a definite form to the state on this system, while P-CTCs do not. To further explore this distinction, we present a methodology by which an operational notion of state may be assigned to their respective CV systems. This is accomplished via a conjunction of state tomography and weak measurements, with the latter being essential to leaving any notions of self-consistency intact. With this technique, we are able to verify the predictions of D-CTCs and, perhaps more significantly, operationally assign a state to the system on the P-CTC. We show that, for any given combination of chronology-respecting input and unitary interaction, it is always possible to recover the unique state on the P-CTC, and we provide a few specific examples in the context of select archetypal temporal paradoxes. We also demonstrate how this state may be derived from analysis of the P-CTC prescription itself, and we explore how it compares to its counterpart in the CV state predicted by D-CTCs.

In this study, we investigate the decoherence of a spatially superposed electrically neutral spin-$\frac12$ particle in the presence of a relativistic quantum electromagnetic field in Minkowski spacetime. We demonstrate that decoherence due to the spin-magnetic field coupling can be categorized into two distinct factors: local decoherence, originating from the two-point correlation functions along each branch of the superposed trajectories, and nonlocal decoherence, which arises from the correlation functions between the two superposed trajectories. These effects are linked to phase damping and amplitude damping. We also show that if the quantum field is prepared in a thermal state, decoherence monotonically increases with the field temperature.

The extreme sensitivity of critical systems has been exploited to improve quantum sensing and weak-signal detection. The closing of the energy gap and the abrupt change in the nature of the ground state at a quantum phase transition (QPT) critical point enhance indicators of parameter estimation, such as the quantum Fisher information. Here, we show that even if the system lacks a QPT, the quantum Fisher information can still be amplified due to the presence of an excited-state quantum phase transition (ESQPT). This is shown for a light-driven anharmonic quantum oscillator model that describes the low-lying spectrum of an exciton-polariton condensate proposed as a platform for quantum computation. In the classical limit, the ESQPT translates into the emergence of a hyperbolic point that explains the clustering of the energy levels in the vicinity of the ESQPT and the changed structure of the corresponding eigenstates, justifying the enhanced sensitivity of the system. Our study showcases the relationship between non-conventional quantum critical phenomena and quantum sensing, with potential experimental applications in exciton-polariton systems.

The Bell-CHSH inequality in the vacuum state of a relativistic scalar quantum field is revisited by making use of the Hilbert space ${\cal H} \otimes {\cal H}_{AB}$, where ${\cal H}$ and ${\cal H}_{AB}$ stand, respectively, for the Hilbert space of the scalar field and that of a generic bipartite quantum mechanical system. The construction of the Hermitian, field-dependent, dichotomic operators entering the Bell-CHSH inequality is devised. Working out the $AB$ part of the inequality, the resulting Bell-CHSH correlation function for the quantum field naturally emerges from unitary Weyl operators. Furthermore, introducing a Jaynes-Cummings-type Hamiltonian accounting for the interaction between the scalar field and a pair of qubits, the quantum corrections to the Bell-CHSH inequality in the vacuum state of the scalar field are evaluated up to second order in perturbation theory.

Via elementary examples, it is demonstrated that the singularities of classical physics (exemplified by the Big Bang in cosmology) need not necessarily get smeared out after quantization. It is proposed that the role of quantum singularities can be played by the so-called Kato exceptional-point spectral degeneracies.

Fingerprinting techniques are widely used for localization because of their accuracy, especially in the presence of wireless channel noise. However, fingerprinting techniques require significant storage and running time, which is a concern when deploying such systems on a worldwide scale. In this paper, we propose an efficient quantum Euclidean similarity algorithm for wireless localization systems. The proposed quantum algorithm offers exponentially improved complexity compared to its classical counterpart, and even to state-of-the-art quantum localization systems, in terms of both storage space and running time. The basic idea is to entangle the test received signal strength (RSS) vector with the fingerprint vectors at different locations and perform the similarity calculation in parallel over all fingerprint locations. We give the details of how to construct the quantum fingerprint, how to encode the RSS measurements in quantum particles, and, finally, present the quantum algorithm for calculating the Euclidean similarity between the online RSS measurements and the fingerprint ones. Implementation and evaluation of our algorithm in a real testbed using a real IBM quantum machine, as well as a simulation of a larger testbed, confirm its ability to correctly obtain the estimated location with an exponential enhancement in both time and space compared to traditional classical fingerprinting techniques and state-of-the-art quantum localization techniques.
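
The classical counterpart being sped up here is plain nearest-fingerprint matching by Euclidean distance. As a rough illustration only (the paper's quantum encoding and entangled parallel comparison are not reproduced), a minimal sketch with a toy two-access-point radio map, whose location names and RSS values are invented for the example:

```python
import numpy as np

def localize(rss_test, fingerprints):
    """Classical Euclidean-similarity fingerprinting: return the location
    whose stored RSS fingerprint is closest to the online measurement.
    This linear scan over all locations is the step the quantum algorithm
    performs in parallel."""
    distances = {loc: np.linalg.norm(np.asarray(rss_test) - np.asarray(vec))
                 for loc, vec in fingerprints.items()}
    return min(distances, key=distances.get)

# Toy radio map: RSS (in dBm) from two access points at three locations.
radio_map = {
    "lobby":   [-40.0, -70.0],
    "office":  [-55.0, -55.0],
    "hallway": [-70.0, -42.0],
}
print(localize([-53.0, -57.0], radio_map))  # closest fingerprint: "office"
```

Both the storage (one vector per location) and the running time (one distance per location) of this baseline grow with the number of fingerprint locations, which is the scaling the quantum algorithm targets.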

Quantum computing (QC) has the potential to revolutionize fields like machine learning, security, and healthcare. Quantum machine learning (QML) has emerged as a promising area, enhancing learning algorithms using quantum computers. However, QML models are lucrative targets due to their high training costs and extensive training times. The scarcity of quantum resources and long wait times further exacerbate the challenge. Additionally, QML providers may rely on a third-party quantum cloud for hosting the model, exposing the models and training data. As QML-as-a-Service (QMLaaS) becomes more prevalent, reliance on third-party quantum clouds can pose a significant threat. This paper shows that adversaries in quantum clouds can use white-box access to the QML model during training to extract the state preparation circuit (containing training data) along with the labels. The extracted training data can be reused for training a clone model or sold for profit. We propose a suite of techniques to prune and fix the incorrect labels. Results show that $\approx$90\% of the labels can be extracted correctly. The same model trained on the adversarially extracted data achieves $\approx$90\% accuracy, closely matching the accuracy achieved when trained on the original data. To mitigate this threat, we propose masking labels/classes and modifying the cost function for label obfuscation, reducing adversarial label prediction accuracy by $\approx$70\%.

We investigate the stability of quantum many-body scars under perturbations within the PXP model. We numerically compute the fidelity and average correlations to monitor the state evolution and to identify revivals. The results indicate that, on the one hand, the entanglement entropy of PXP scars exhibits great sensitivity, in the sense that its profile approaches the one expected for thermal states already for very small perturbations. On the other hand, other scar signatures, such as the revivals of states having large overlap with scars, show remarkable robustness. Additionally, we examine the effects of minor disturbances on initial states that previously exhibited high overlap with scars and consistent revivals. Our analysis reveals that different types of disturbances can induce markedly different behaviors, such as partially "freezing" the chain, leading to sustained oscillations, or accelerating the process of thermalization.

Nonreciprocity can not only generate quantum resources but also shield a sensor from noise and back-propagating interference from driving signals. We investigate the advantages of nonreciprocal coupling in sensing a driving signal. In general, we find that nonreciprocal coupling performs better than the corresponding reciprocal coupling, and we show that homodyne measurement is the optimal measurement. A single nonreciprocal coupling can increase the measurement precision by up to a factor of 2. Using $N$ nonreciprocal couplings in parallel, the measurement precision can be improved by a factor of $N^2$ compared with the corresponding reciprocal coupling. In a finite-temperature dissipative environment, we demonstrate that nonreciprocal quantum sensing is more robust to thermal noise than reciprocal quantum sensing.

We address a combinatorial optimization problem to determine the placement of a predefined number of sensors among multiple candidate positions, aiming to maximize information acquisition with the minimum number of sensors. Assuming that the data from the predefined candidate sensor placements follow a multivariate normal distribution, we define the mutual information (MI) between the data from the selected sensor positions and the data from the others as the objective function, and formulate it as a Quadratic Unconstrained Binary Optimization (QUBO) problem using a method we propose. As an example, we calculate optimal solutions of the objective function for 3 candidate sensor placements using a quantum annealing machine and confirm that the results obtained are reasonable. The formulation method we propose can be applied to any number of sensors, and we expect the advantage of quantum annealing to emerge as the number of sensors increases.
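
For jointly Gaussian data, the MI objective has the closed form $I = \frac{1}{2}\ln\!\left(\det\Sigma_{SS}\,\det\Sigma_{\bar S\bar S}/\det\Sigma\right)$, where $S$ is the selected sensor set and $\bar S$ its complement. A minimal sketch evaluating this objective for 3 candidate placements follows; the covariance entries are invented for illustration, the paper's QUBO construction is not reproduced, and the exhaustive search merely stands in for the annealing solve:

```python
import numpy as np
from itertools import combinations

def gaussian_mi(cov, selected):
    """Mutual information I(x_S; x_Sbar) for a zero-mean multivariate normal
    with covariance `cov`: 0.5 * ln(det(C_SS) * det(C_SbarSbar) / det(C))."""
    n = cov.shape[0]
    s = sorted(selected)
    t = [i for i in range(n) if i not in s]
    det = np.linalg.det
    return 0.5 * np.log(det(cov[np.ix_(s, s)]) * det(cov[np.ix_(t, t)])
                        / det(cov))

# Toy covariance for 3 candidate placements (values illustrative only):
# sensor 0 is correlated with both others, so it is the most informative.
C = np.array([[1.0, 0.8, 0.3],
              [0.8, 1.0, 0.1],
              [0.3, 0.1, 1.0]])

# Exhaustive search over single-sensor choices stands in for the QUBO solve.
best = max(combinations(range(3), 1), key=lambda s: gaussian_mi(C, s))
print(best)  # -> (0,)
```

The same objective extends to any subset size by changing the second argument of `combinations`, which is where a QUBO encoding and annealing hardware would take over from brute force.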

The eikonal approximation (EA) is widely used in various high-energy scattering problems. In this work we generalize this approximation from scattering problems with time-independent Hamiltonians to those with periodic Hamiltonians, {\it i.e.}, Floquet scattering problems. We further illustrate the applicability of our generalized EA via the scattering problem of a shaken spherical square-well potential, by comparing the results given by this approximation with the exact ones. The generalized EA we develop is helpful for research on the manipulation of high-energy scattering processes with external fields, {\it e.g.}, the manipulation of atomic, molecular, or nuclear collisions and reactions via strong laser fields.

Until the advent of fault-tolerant quantum computers, variational quantum algorithms (VQAs) will play a crucial role on noisy intermediate-scale quantum (NISQ) machines. Conventionally, the optimization of VQAs predominantly relies on manually designed optimizers. However, learning to optimize (L2O) demonstrates impressive performance by training small neural networks to replace handcrafted optimizers. In our work, we propose L2O-$g^{\dagger}$, a $\textit{quantum-aware}$ learned optimizer that leverages the Fubini-Study metric tensor ($g^{\dagger}$) and long short-term memory networks. We theoretically derive the update equation inspired by the lookahead optimizer and incorporate the quantum geometry of the optimization landscape in the learned optimizer to balance fast convergence and generalization. Empirically, we conduct comprehensive experiments across a range of VQA problems. Our results demonstrate that L2O-$g^{\dagger}$ not only outperforms the current state-of-the-art (SOTA) hand-designed optimizer without any hyperparameter tuning but also shows strong out-of-distribution generalization compared to previous L2O optimizers. We achieve this by training L2O-$g^{\dagger}$ on just a single generic PQC instance. Our novel $\textit{quantum-aware}$ learned optimizer, L2O-$g^{\dagger}$, presents an advancement in addressing the challenges of VQAs, making it a valuable tool in the NISQ era.

The idea of post-measurement coincidence pairing substantially simplifies long-distance, repeater-like quantum key distribution (QKD) by eliminating the need for tracking the differential phase of the users' lasers. However, optical frequency tracking remains necessary and can become a severe burden in future deployments of multi-node quantum networks. Here, we resolve this problem by referencing each user's laser to an absolute frequency standard and demonstrate a practical post-measurement pairing QKD with excellent long-term stability. We confirm the setup's repeater-like behavior and achieve a finite-size secure key rate (SKR) of 15.94 bit/s over 504 km of fiber, which surpasses the absolute repeaterless bound by a factor of 1.28. Over a fiber length of 100 km, the setup delivers an impressive SKR of 285.68 kbit/s. Our work paves the way towards an efficient multi-user quantum network with local frequency standards.

The virtual Z gate has been established as an important tool for performing quantum gates on various platforms, including but not limited to superconducting systems. Many such platforms offer a limited set of calibrated gates and compile other gates, such as the Y gate, using combinations of X and virtual Z gates. Here, we show that the method of compilation has important consequences in an open quantum system setting. Specifically, we experimentally demonstrate that it is crucial to choose a compilation that is symmetric with respect to virtual Z rotations. This is particularly pronounced in dynamical decoupling (DD) sequences, where improper gate decomposition can result in unintended effects such as the implementation of the wrong sequence. Our findings indicate that in many cases the performance of DD is adversely affected by the incorrect use of virtual Z gates, compounding other coherent pulse errors. In addition, we identify another source of coherent errors: interference between consecutive pulses that follow each other too closely. This work provides insights into improving general quantum gate performance and optimizing DD sequences in particular.

Genuine multipartite nonlocality and the nonlocality arising in networks composed of several independent sources have so far been investigated separately. While some genuinely entangled states cannot be verified by violating a single Bell-type inequality, a quantum network consisting of different sources allows for the certification of the non-classicality of all sources. In this paper, we propose the first method to verify both types of nonlocality simultaneously in a single experiment. We consider a quantum network comprising a bipartite source and a tripartite source. We demonstrate that there are quantum correlations that cannot be simulated if the tripartite source distributes biseparable systems, even if the bipartite source distributes stronger-than-quantum systems. These correlations can be used to verify both the genuine multipartite nonlocality of generalized Greenberger-Horne-Zeilinger states and full network nonlocality stronger than all existing results. Experimentally, we observe both types of nonlocality in a high-fidelity photonic quantum network by violating a single network Bell inequality.

We study the possibility of discriminating between two bosonic dephasing quantum channels. We show that unambiguous discrimination is not realizable. We then consider discrimination with nonzero error probability and minimize the latter in the absence of input constraints. In the presence of an input energy constraint, we derive an upper bound on the error probability. Finally, we extend these results from single-shot to multi-shot discrimination, envisaging the asymptotic behavior.
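
The figure of merit in minimum-error discrimination of two states (and hence of two channels probed with a fixed input) is the Helstrom bound, $P_{\rm err} = \frac{1}{2}\left(1 - \|p_0\rho_0 - p_1\rho_1\|_1\right)$. The sketch below evaluates this bound for a toy qubit dephasing example; the paper's bosonic dephasing channels act on infinite-dimensional Fock spaces, so this is only a finite-dimensional illustration of the quantity being minimized:

```python
import numpy as np

def helstrom_error(rho0, rho1, p0=0.5):
    """Minimum error probability for discriminating two density matrices
    with priors (p0, 1-p0): P_err = (1 - ||p0*rho0 - (1-p0)*rho1||_1) / 2.
    The trace norm of the Hermitian operator is the sum of |eigenvalues|."""
    gamma = p0 * rho0 - (1 - p0) * rho1
    trace_norm = np.abs(np.linalg.eigvalsh(gamma)).sum()
    return 0.5 * (1.0 - trace_norm)

# Outputs of a toy dephasing channel acting on |+><+| with strengths 0 and 1:
plus = np.array([[0.5, 0.5], [0.5, 0.5]])      # no dephasing
dephased = np.array([[0.5, 0.0], [0.0, 0.5]])  # full dephasing
print(helstrom_error(plus, dephased))          # -> 0.25
```

Sanity checks built into the formula: identical outputs give $P_{\rm err} = 1/2$ (pure guessing), while orthogonal outputs give $P_{\rm err} = 0$.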

We introduce a scheme to characterise a qudit T gate whose noise differs from that of a set of Clifford gates. We develop our scheme through representation theory and ring theory to generalise non-Clifford interleaved benchmarking to qudit systems. By restricting to the qubit case, we recover the dihedral benchmarking scheme. Our characterisation scheme provides experimental physicists with a practical method for characterising universal qudit gate sets and advances randomised benchmarking research by providing the characterisation of a complete qudit library.

The cost of distributed quantum operations such as the telegate and teledata protocols is high due to latencies from distributing entangled photons and classical information. This paper proposes an extension to the telegate and teledata protocols to allow for asynchronous classical communication which hides the cost of distributed quantum operations. We then discuss the benefits and limitations of these asynchronous protocols and propose a potential way to improve these asynchronous protocols using nonunitary operators. Finally, a quantum network card is described as an example of how asynchronous quantum operations might be used.

This paper introduces a quantum heat engine model that utilizes an ultracold atomic gas coupled with a nanomechanical mirror. The mirror's vibration induces an opto-mechanical sideband in the control field, affecting the behavior of the cold gas and subsequently influencing the output radiation of the engine. The model incorporates mirror vibration while omitting cavity confinement, establishing a bridge between a multi-level atom-laser interacting system that exploits coherences and the mechanical vibration of the nanomechanical mirror, which jointly function as heat engines. Three distinct heat engine configurations are proposed: the first involves a vibration-free three-level $\Lambda$-type system, the second introduces nanomechanical vibration to the three-level $\Lambda$-type system, and the third constitutes a composite engine that combines the previous setups along with nanomechanical vibration. The spectral brightness of the three-level heat engine is diminished by mirror vibration, whereas for the composite heat engine there is a slight enhancement in the brightness peak. However, the maximum brightness is attained when there is no vibration. Comparisons between the proposed model and an ideal system are made regarding entropy balance, adhering to the constraints of the second law of thermodynamics. We observe that, when subjected to mirror vibration, the proposed heat engines deviate from the characteristics expected of an ideal heat engine.

Quantum state tomography, a process that reconstructs a quantum state from measurements on an ensemble of identically prepared copies, plays a crucial role in benchmarking quantum devices. However, brute-force approaches to quantum state tomography would become impractical for large systems, as the required resources scale exponentially with the system size. Here, we explore a machine learning approach and report an experimental demonstration of reconstructing quantum states based on neural network generative models with an array of programmable superconducting transmon qubits. In particular, we experimentally prepare the Greenberger-Horne-Zeilinger states and random states up to five qubits and demonstrate that the machine learning approach can efficiently reconstruct these states with the number of required experimental samples scaling linearly with system size. Our results experimentally showcase the intriguing potential for exploiting machine learning techniques in validating and characterizing complex quantum devices, offering a valuable guide for the future development of quantum technologies.

Facing the worldwide steady progress in building quantum computers, it is crucial for the cryptographic community to design quantum-safe cryptographic primitives. To achieve this, we need to investigate the capability of cryptographic analysis tools when used by adversaries with quantum computers. In this article, we concentrate on truncated differential and boomerang cryptanalysis. We first present a quantum algorithm designed for finding truncated differentials of symmetric ciphers. We prove that, with overwhelming probability, the truncated differentials output by our algorithm have high differential probability for the vast majority of keys in the key space. Afterwards, based on this algorithm, we design a quantum algorithm that can be used to find boomerang distinguishers. The quantum circuits of both algorithms contain only polynomially many quantum gates. Compared to classical tools for searching for truncated differentials or boomerang distinguishers, our algorithms fully utilize the strengths of quantum computing and maintain polynomial complexity while fully accounting for the impact of S-boxes and key scheduling.

A key virtue of spin qubits is their sub-micron footprint, enabling a single silicon chip to host the millions of qubits required to execute useful quantum algorithms with error correction. With each physical qubit needing multiple control lines, however, a fundamental barrier to scale is the extreme density of connections that bridge quantum devices to their external control and readout hardware. A promising solution is to co-locate the control system proximal to the qubit platform at milli-kelvin temperatures, wired up via miniaturized interconnects. Even so, heat and crosstalk from closely integrated control have the potential to degrade qubit performance, particularly for two-qubit entangling gates based on exchange coupling that are sensitive to electrical noise. Here, we benchmark silicon MOS-style electron spin qubits controlled via heterogeneously-integrated cryo-CMOS circuits with a low enough power density to enable scale-up. Demonstrating that cryo-CMOS can efficiently enable universal logic operations for spin qubits, we go on to show that milli-kelvin control has little impact on the performance of single- and two-qubit gates. Given the complexity of our milli-kelvin CMOS platform, with some 100,000 transistors, these results open the prospect of scalable control based on the tight packaging of spin qubits with a chiplet-style control architecture.

Accurate thermometry of laser-cooled ions is crucial for the performance of the trapped-ion quantum computing platform. However, most existing methods face an exponential computational bottleneck. Recently, a thermometry method based on bichromatic driving was theoretically proposed by Vybornyi et al. to overcome this obstacle; it keeps the computational complexity constant as the number of ions increases. In this paper, we provide a detailed statistical analysis of this method and prove its robustness to several imperfect experimental conditions using Floquet theory. We then experimentally verify its good performance on a linear segmented surface-electrode ion trap platform for the first time. The method proves effective from near the motional ground state up to a few mean phonon numbers. Our theoretical analysis and experimental verification demonstrate that the scheme can accurately and efficiently measure the temperature of ion crystals.

We consider the problem of the energy cost needed for acceleration (deceleration) of the evolution of a quantum system using the Masuda-Nakamura fast-forward protocol. In particular, we focus on dynamics by considering models for a quantum box with a moving wall and a harmonic oscillator with time-dependent frequency. For both models we compute the energy needed for acceleration (deceleration) as a function of time. The results obtained are compared with those of other acceleration (deceleration) protocols.

Involvement of the environment is indispensable for establishing the statistical distribution of a system. We analyze the statistical distribution of a quantum system coupled strongly with a heat bath. This distribution is determined by tracing over the bath's degrees of freedom for the equilibrium system-plus-bath composite. The stability of the system's distribution is strongly affected by the system--bath interaction strength. We propose that the quantum system exhibits a stable distribution only when its response function in the frequency domain satisfies $\tilde\chi(\omega = 0^{+})>0$. We demonstrate our results by investigating a non-interacting bosonic impurity system from both the thermodynamic and dynamic perspectives. Our study refines the theoretical framework of canonical statistics, offering insights into thermodynamic phenomena in small-scale systems.

Simulating quantum dynamics is one of the most promising applications of quantum computers. While the upper bound of the simulation cost has been extensively studied through various quantum algorithms, much less work has focused on establishing the lower bound, particularly for the simulation of open quantum system dynamics. In this work, we present a general framework to calculate the lower bound for simulating a broad class of quantum Markov semigroups. Given a fixed accessible unitary set, we introduce the concept of convexified circuit depth to quantify the quantum simulation cost and analyze the necessary circuit depth to construct a quantum simulation scheme that achieves a specific order. Our framework can be applied to both unital and non-unital quantum dynamics, and the tightness of our lower bound technique is illustrated by showing that the upper and lower bounds coincide in several examples.

Silicon nanomechanical resonators display ultra-long lifetimes at cryogenic temperatures and microwave frequencies. Achieving quantum control of single phonons in these devices has so far relied on nonlinearities enabled by coupling to ancillary qubits. In this work, we propose using atomic forces to realize a silicon nanomechanical qubit without coupling to an ancillary qubit. The proposed qubit operates at 60 MHz with a single-phonon-level anharmonicity of 5 MHz. We present a circuit quantum acoustodynamics architecture where electromechanical resonators enable dispersive state readout and multi-qubit operations. The combination of strong anharmonicity, ultrahigh mechanical quality factors, and small footprints achievable in this platform could enable quantum-nonlinear phononics for quantum information processing and transduction.

A master equation containing a nonlinear term that gives rise to disentanglement has recently been investigated. In this study, a modified version, applicable to indistinguishable particles, is proposed and explored for both the Bose-Hubbard and the Fermi-Hubbard models. It is found for both bosons and fermions that disentanglement can give rise to quantum phase transitions.

To optimize entanglement detection, we formulate a metrologically operational entanglement condition in terms of the quantum Fisher information (QFI) by maximizing the QFI over the measurement orbit. Specifically, we consider two classes of typical local observables, i.e., local orthonormal observables (LOOs) and symmetric informationally complete positive operator-valued measures (SIC-POVMs). Our results show that SIC-POVMs are superior to LOOs for entanglement detection, which in some sense hints at the conjectured, yet unconfirmed, general superiority of SIC-POVMs in quantum information processing.
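
A standard example of a metrologically operational entanglement condition of this kind is the criterion $F_Q > N$ for $N$ qubits probed with a collective spin generator, where for pure states $F_Q = 4\,\mathrm{Var}_\psi(J_z)$. The sketch below checks this criterion for a GHZ state versus a product state; it does not reproduce the LOO/SIC-POVM optimization of the abstract, only the underlying QFI witness:

```python
import numpy as np

def qfi_pure(state, A):
    """QFI of a pure state for generator A: F_Q = 4 * Var_psi(A)."""
    mean = np.vdot(state, A @ state).real
    mean_sq = np.vdot(state, A @ (A @ state)).real
    return 4.0 * (mean_sq - mean**2)

def collective_jz(n):
    """Collective spin J_z = (1/2) * sum_i sigma_z^(i) on n qubits."""
    sz = np.diag([0.5, -0.5])
    eye = np.eye(2)
    total = np.zeros((2**n, 2**n))
    for i in range(n):
        term = np.array([[1.0]])
        for j in range(n):
            term = np.kron(term, sz if j == i else eye)
        total += term
    return total

n = 3
jz = collective_jz(n)
ghz = np.zeros(2**n); ghz[0] = ghz[-1] = 1 / np.sqrt(2)   # (|000>+|111>)/sqrt2
product = np.full(2**n, 1 / np.sqrt(2**n))                # |+>^n, separable
print(qfi_pure(ghz, jz), qfi_pure(product, jz))           # 9.0 (=n^2) vs 3.0 (=n)
```

The GHZ state saturates the Heisenberg value $F_Q = N^2 > N$ and is thus witnessed as entangled, while the product state sits exactly at the separable bound $F_Q = N$.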

We report a detailed characterization of two inductively coupled superconducting fluxonium qubits for implementing high-fidelity cross-resonance gates. Our circuit stands out because it behaves very much like two transversely coupled spin-1/2 systems. In particular, the generally unwanted static ZZ term due to the non-computational transitions is nearly absent despite a strong qubit-qubit hybridization. Spectroscopy of the non-computational transitions reveals a spurious LC mode arising from the combination of the coupling inductance and the capacitive links between the terminals of the two qubit circuits. Such a mode has a minor effect on our specific device, but it must be carefully considered when optimizing future designs.

F. Pastawski and J. Preskill discussed error correction of quantum annealing (QA) based on a parity-encoded spin system, known as the Sourlas-Lechner-Hauke-Zoller (SLHZ) system. They pointed out that the SLHZ system is closely related to a classical low-density parity-check (LDPC) code and demonstrated its error-correcting capability through a belief propagation (BP) algorithm assuming independent random spin-flip errors. In contrast, Albash et al. suggested that the SLHZ system does not receive the benefits of post-readout decoding, the reason being that independent random spin-flips are not the most relevant errors arising from sampling excited states during the annealing process, whether in the closed- or open-system case. In this work, we revisit this issue: we propose a very simple decoding algorithm to eliminate errors in the readout of SLHZ systems and show experimental evidence suggesting that the SLHZ system exhibits error-correcting capability when decoding annealing readouts. Our new algorithm can be thought of as a bit-flipping algorithm for LDPC codes. Assuming an independent and identical noise model, we find that the performance of our algorithm is comparable to that of the BP algorithm. The error-correcting capability for the sampled readouts was investigated using Monte Carlo calculations that simulate the final-time distribution of QA. The results show that the algorithm successfully eliminates errors in the sampled readouts under conditions where the error-free state, or even any code state, is not sampled at all. Our simulation suggests that decoding of annealing readouts will be successful if correctable states can be sampled by annealing, so that annealing can be considered to play the role of a pre-process for the classical decoding. This knowledge will be useful for designing and developing practical QA based on the SLHZ system in the near future.
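
The generic bit-flipping idea to which the algorithm is likened can be sketched as follows. The parity-check matrix of the (7,4) Hamming code stands in for the SLHZ parity constraints, which are not reproduced here, and the specific decoder of the abstract differs in its details:

```python
import numpy as np

def bit_flip_decode(H, r, max_iters=20):
    """Gallager-style bit-flipping decoding: repeatedly flip the bit that
    participates in the largest number of unsatisfied parity checks."""
    r = r.copy()
    for _ in range(max_iters):
        syndrome = H @ r % 2
        if not syndrome.any():
            return r                      # all checks satisfied
        # count, per bit, how many unsatisfied checks it touches
        counts = syndrome @ H
        r[np.argmax(counts)] ^= 1
    return r

# Parity-check matrix of the (7,4) Hamming code as a stand-in example.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
sent = np.zeros(7, dtype=int)             # the all-zero codeword
received = sent.copy(); received[6] ^= 1  # a single spin-flip readout error
print(bit_flip_decode(H, received))       # recovers the all-zero codeword
```

Each iteration only touches the bits involved in unsatisfied checks, which is what keeps bit-flipping decoders cheap compared to full BP message passing.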

The QMA-completeness of the local Hamiltonian problem is a landmark result of the field of Hamiltonian complexity that studies the computational complexity of problems in quantum many-body physics. Since its proposal, substantial effort has been invested in better understanding the problem for physically motivated important families of Hamiltonians. In particular, the QMA-completeness of approximating the ground state energy of local Hamiltonians has been extended to the case where the Hamiltonians are geometrically local in one and two spatial dimensions. Among those physically motivated Hamiltonians, stoquastic Hamiltonians play a particularly crucial role, as they constitute the manifestly sign-free Hamiltonians in Monte Carlo approaches. Interestingly, for such Hamiltonians, the problem at hand becomes more ``classical'', being hard for the class MA (the randomized version of NP) and its complexity has tight connections with derandomization. In this work, we prove that both the two- and one-dimensional geometrically local analogues remain MA-hard with high enough qudit dimension. Moreover, we show that related problems are StoqMA-complete.

Quantum optimization solvers typically rely on a one-variable-to-one-qubit mapping. However, the low qubit count on current quantum computers is a major obstacle in competing against classical methods. Here, we develop a qubit-efficient algorithm that overcomes this limitation by mapping a candidate bit-string solution to an entangled wave function of fewer qubits. We propose a variational quantum circuit generalizing the quantum approximate optimization ansatz (QAOA). Extremizing the ansatz for Sherrington-Kirkpatrick spin glass problems, we show valuable properties such as the concentration of ansatz parameters and derive performance guarantees. This approach could benefit near-term intermediate-scale and future fault-tolerant small-scale quantum devices.

We employ a method involving coherent periodic modulation of the Raman laser intensity to induce resonance transitions between energy levels of a spin-orbit coupled atom in a symmetric double-well trap. By integrating the photon-assisted tunneling (PAT) technique with spin-orbit coupling (SOC), we achieve resonance transitions between predefined energy levels of the atom, thereby enabling further precise control of the atom's dynamics. We observe that such photon-like resonance can induce a transition from a localized state to atomic Rabi oscillation between the two wells, or effectively reduce tunneling, as manifested by a quantum beating phenomenon. Moreover, such resonance transitions have the potential to induce spin flipping in a spin-orbit coupled atom. Additionally, the SOC-mediated transition from multiphoton resonance to fundamental resonance and the SOC-induced resonance suppression are also discovered. In these cases, the analytical results for the effective coupling coefficients of the resonance transitions derived from a four-level model can account for the entire dynamics, demonstrating surprisingly good agreement with the numerically exact results based on the realistic continuous model.

Quantum learning tasks often leverage randomly sampled quantum circuits to characterize unknown systems. An efficient approach known as "circuit reusing," where each circuit is executed multiple times, reduces the cost compared to implementing new circuits. This work investigates the optimal reusing parameter that minimizes the variance of measurement outcomes for a given experimental cost. We establish a theoretical framework connecting the variance of experimental estimators with the reusing parameter R. An optimal R is derived when the implemented circuits and their noise characteristics are known. Additionally, we introduce a near-optimal reusing strategy that is applicable even without prior knowledge of circuits or noise, achieving variances close to the theoretical minimum. To validate our framework, we apply it to randomized benchmarking and analyze the optimal R for various typical noise channels. We further conduct experiments on a superconducting platform, revealing a non-linear relationship between R and the cost, contradicting previous assumptions in the literature. Our theoretical framework successfully incorporates this non-linearity and accurately predicts the experimentally observed optimal R. These findings underscore the broad applicability of our approach to experimental realizations of quantum learning protocols.
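The trade-off being optimized can be made concrete with a deliberately simple two-component variance model under a fixed cost budget. This is our illustrative assumption, not the paper's calibrated model (whose cost, as the abstract notes, is non-linear in R): implementing a circuit costs c_circ, each shot costs c_shot, and the estimator variance splits into a between-circuit part and a shot-noise part that reusing suppresses.

```python
import numpy as np

def variance_at_fixed_cost(R, budget, c_circ=10.0, c_shot=1.0,
                           v_circ=0.2, v_shot=1.0):
    """Toy model: cost = n_circuits * (c_circ + R * c_shot), and
    var = v_circ / n_circuits + v_shot / (n_circuits * R).
    All parameter values here are made up for illustration."""
    n_circuits = budget / (c_circ + R * c_shot)
    return v_circ / n_circuits + v_shot / (n_circuits * R)

Rs = np.arange(1, 200)
variances = [variance_at_fixed_cost(R, budget=1e4) for R in Rs]
best_R = int(Rs[np.argmin(variances)])
# analytic optimum of this toy model: R* = sqrt(c_circ*v_shot / (c_shot*v_circ))
```

In this linear-cost toy model the optimum balances the amortized circuit-implementation cost against the extra shot noise; with the hypothetical numbers above, sqrt(10/0.2) is about 7.07, and the grid search lands on the nearest integer.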

Two-way quantum computers (2WQC) are a proposed extension of standard 1WQC: adding a conjugated state preparation operation $\langle 0|$, similar to postselection $|0\rangle \langle 0|$, realized by performing a process which from the perspective of CPT symmetry is the original state preparation process, for example by reversing the EM impulses used for state preparation. As there were concerns that this extension might violate the no-cloning theorem, for example enabling attacks on quantum cryptographic protocols like BB84, here we extend the original proof to show that this theorem still holds for 2WQC and postselection.

This work explores the application of the concurrent variational quantum eigensolver (cVQE) for computing excited states of the Schwinger model. By designing suitable ansatz circuits utilizing universal SO(4) or SO(8) qubit gates, we demonstrate how to efficiently obtain the lowest two, four, and eight eigenstates with one, two, and three ancillary qubits for both vanishing and non-vanishing background electric field cases. Simulating the resulting quantum circuits classically with tensor network techniques, we demonstrate the capability of our approach to compute the two lowest eigenstates of systems with up to $\mathcal{O}(100)$ qubits. Given that our method allows for measuring the low-lying spectrum precisely, we also present a novel technique for estimating the additive mass renormalization of the lattice based on the energy gap. As a proof-of-principle calculation, we prepare the ground and first-excited states with one ancillary and four physical qubits on quantum hardware, demonstrating the practicality of using the cVQE to simulate excited states.

Modularity is a promising approach for scaling up quantum computers and therefore integrating higher qubit counts. The essence of such architectures lies in their reliance on high-fidelity and fast quantum state transfers enabled by generating entanglement between chips. In addressing the challenge of implementing quantum coherent communication channels to interconnect quantum processors, various techniques have been proposed to account for qubit technology specifications and the implemented communication protocol. By employing Design Space Exploration (DSE) methodologies, this work presents a comparative analysis of the cavity-mediated interconnect technologies according to a defined figure of merit, and we identify the configurations related to the cavity and atomic decay rates as well as the qubit-cavity coupling strength that meet the efficiency thresholds. We therefore contribute to benchmarking contemporary cavity-mediated quantum interconnects and guide the development of reliable and scalable chip-to-chip links for modular quantum computers.

One of the most promising applications of quantum networks is entanglement-assisted sensing. The field of quantum metrology exploits quantum correlations to improve the precision bound for applications such as precision timekeeping, field sensing, and biological imaging. When measuring multiple spatially distributed parameters, the current literature focuses on quantum entanglement in the discrete-variable case, and quantum squeezing in the continuous-variable case, distributed amongst all of the sensors in a given network. However, it can be difficult to ensure that all sensors pre-share entanglement of sufficiently high fidelity. This work probes the space between fully entangled and fully classical sensing networks by modeling a star network with probabilistic entanglement generation that attempts to estimate the average of local parameters. The quantum Fisher information is used to determine which protocols best utilize entanglement as a resource under different network conditions. It is shown that without entanglement distillation there is a threshold fidelity below which classical sensing is preferable. For a network with a given number of sensors and links characterized by a certain initial fidelity and probability of success, this work outlines when and how to use entanglement, when to store it, and when it needs to be distilled.

The problem of formulating thermodynamics in a relativistic scenario remains unresolved, although many proposals exist in the literature. The challenge arises due to the intrinsic dynamic structure of spacetime as established by the general theory of relativity. With the discovery of the physical nature of information, which underpins Landauer's principle, we believe that information theory should play a role in understanding this problem. In this work, we contribute to this endeavor by considering a relativistic communication task between two partners, Alice and Bob, in a general Lorentzian spacetime. We then assume that the receiver, Bob, reversibly operates a local heat engine powered by information, and seek to determine the maximum amount of work he can extract from this device. Since Bob cannot extract work for free, by applying both Landauer's principle and the second law of thermodynamics, we establish a bound on the energy Bob must spend to acquire the information in the first place. This bound is a function of the spacetime metric and the properties of the communication channel.

We investigate the possibility of achieving a slow signal field at the level of single photons inside nanofibers by exploiting stimulated Brillouin scattering, which involves a strong pump field and the vibrational modes of the waveguide. The slow signal is significantly amplified for a pump field with a frequency higher than that of the signal, and attenuated for a lower pump frequency. We introduce a configuration for obtaining a propagating slow signal without gain or loss and with a relatively wide bandwidth. This process involves two strong pump fields with frequencies both higher and lower than that of the signal, where the effects of signal amplification and attenuation compensate each other. We account for thermal fluctuations due to the scattering off thermal phonons and identify conditions under which thermal contributions to the signal field are negligible. The slowing of light through Brillouin optomechanics may serve as a vital tool for optical quantum information processing and quantum communications within nanophotonic structures.

The history-based formalism known as Quantum Measure Theory (QMT) generalizes the concept of a probability measure so as to incorporate quantum interference. Because interference can result in a greater intensity than the simple sum of the component intensities, the \textit{quantum measure} can exceed unity, exhibiting its non-classical nature in a particularly striking manner. Here we study the two-site hopper within the context of QMT, and in an optical experiment we determine the measure of a specific hopper event, using an ancilla-based event filtering scheme. For this measure we report a value of $1.172$, which exceeds the maximum value permissible for a classical probability (namely $1$) by $13.3$ standard deviations. If an unconventional theoretical concept is to play a role in meeting the foundational challenges of quantum theory, then it seems important to bring it into contact with experiment as much as possible. Our experiment does this for the quantum measure.

Process tensors are quantum combs describing the evolution of open quantum systems through multiple steps of a quantum dynamics. While there is more than one way to measure how different two processes are, special care must be taken to ensure quantifiers obey physically desirable conditions such as data processing inequalities. Here, we analyze two classes of distinguishability measures commonly used in general applications of quantum combs. We show that the first class, called Choi divergences, does not satisfy an important data processing inequality, while the second one, which we call generalized divergences, does. We also extend to quantum combs some other relevant results of generalized divergences of quantum channels. Finally, given the properties we proved, we argue that generalized divergences may be more adequate than Choi divergences for distinguishing quantum combs in most of their applications. Particularly, this is crucial for defining monotones for resource theories whose states have a comb structure, such as resource theories of quantum processes and resource theories of quantum strategies.

Quantum computers are gaining importance in various applications like quantum machine learning and quantum signal processing. These applications face significant challenges in loading classical datasets into quantum memory. With numerous algorithms available and multiple quality attributes to consider, comparing data loading methods is complex. Our objective is to compare (in a structured manner) various algorithms for loading classical datasets into quantum memory (by converting statevectors to circuits). We evaluate state preparation algorithms based on five key attributes: circuit depth, qubit count, classical runtime, statevector representation (dense or sparse), and circuit alterability. We use the Pareto set as a multi-objective optimization tool to identify algorithms with the best combination of properties. To improve comprehension and speed up comparisons, we also visually compare three metrics (namely, circuit depth, qubit count, and classical runtime). We compare seven algorithms for dense statevector conversion and six for sparse statevector conversion. Our analysis reduces the initial set of algorithms to two dense and two sparse groups, highlighting inherent trade-offs. This comparison methodology offers a structured approach for selecting algorithms based on specific needs. Researchers and practitioners can use it to help select data-loading algorithms for various quantum computing tasks.
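The Pareto-set selection step described above amounts to filtering out dominated algorithms. A minimal sketch, with hypothetical algorithm names and made-up (circuit depth, qubit count, classical runtime) triples where lower is better for every attribute:

```python
def pareto_set(items):
    """Return names of the non-dominated items.

    Each item is (name, metrics) with 'lower is better' metrics.
    An item is dominated when another item is no worse in every
    metric and strictly better in at least one (i.e. <= everywhere
    and not identical).
    """
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and a != b

    return [name for i, (name, m) in enumerate(items)
            if not any(dominates(m2, m)
                       for j, (_, m2) in enumerate(items) if j != i)]

# hypothetical entries, for illustration only
algorithms = [
    ("isometry",       (120, 5, 0.8)),
    ("sparse-walk",    ( 40, 8, 0.3)),
    ("uniformly-ctrl", (150, 5, 1.5)),   # dominated by "isometry"
    ("greedy-merge",   ( 60, 8, 0.9)),   # dominated by "sparse-walk"
]
front = pareto_set(algorithms)
```

Here "isometry" and "sparse-walk" survive because each is better than the other in at least one attribute, which is exactly the kind of inherent trade-off the comparison surfaces.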

The analysis of empirical data through model-free inequalities leads to the conclusion that violations of Bell-type inequalities by empirical data cannot have any significance unless one believes that the universe operates according to the rules of a mathematical model.

Hands-on experimental experience with quantum systems in the undergraduate physics curriculum provides students with a deeper understanding of quantum physics and equips them for the fast-growing quantum science industry. Here we present an experimental apparatus for performing quantum experiments with single nitrogen-vacancy (NV) centers in diamond. This apparatus is capable of basic experiments such as single-qubit initialization, rotation, and measurement, as well as more advanced experiments investigating electron-nuclear spin interactions. We describe the basic physics of the NV center and give examples of potential experiments that can be performed with this apparatus. We also discuss the options and inherent trade-offs associated with the choice of diamond samples and hardware. The apparatus described here enables students to write their own experimental control and data analysis software from scratch all within a single semester of a typical lab course, as well as to inspect the optical components and inner workings of the apparatus. We hope that this work can serve as a standalone resource for any institution that would like to integrate a quantum instructional lab into its undergraduate physics and engineering curriculum.

Conference Key Agreement (CKA) provides a secure method for multi-party communication. A recently developed interference-based prepare-and-measure quantum CKA possesses the advantage of measurement-device-independence, namely, being immune to side channels on the detector side. In addition, it achieves good key-rate performance, especially for high-loss channels, due to the use of single-photon interference. Meanwhile, several fully passive QKD schemes have been proposed, which eliminate all side channels from the source-modulation side. We extend the passive idea to an interference-based CKA, which has a high level of implementation security for many-user communication.

Quantum Machine Learning (QML) has garnered significant attention through approaches like quantum kernel machines. While these methods hold considerable promise, their quantum nature presents inherent challenges. One major challenge is the limited resolution of estimated kernel values caused by the finite number of circuit runs performed on a quantum device. In this study, we propose a comprehensive system of rules and heuristics for estimating the required number of circuit runs in quantum kernel methods. We introduce two critical effects that necessitate increased measurement precision through additional circuit runs: the spread effect and the concentration effect. The effects are analyzed in the context of fidelity and projected quantum kernels. To address these phenomena, we develop an approach for estimating the desired precision of kernel values, which, in turn, is translated into a number of circuit runs. Our methodology is validated through extensive numerical simulations, focusing on the problem of exponential value concentration. We stress that quantum kernel methods should not only be considered from the machine-learning performance perspective, but also in the context of resource consumption. The results provide insights into the possible benefits of quantum kernel methods, offering guidance for their application in quantum machine learning tasks.
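A back-of-the-envelope version of the precision-to-shots translation (our simplification, not the paper's full rule system) is the standard binomial shot-noise estimate: a kernel entry estimated as a probability k from n shots has standard error sqrt(k(1-k)/n), so resolving it to precision eps requires n on the order of k(1-k)/eps^2.

```python
from math import ceil

def shots_for_precision(k, eps):
    """Shots n such that the binomial standard error sqrt(k(1-k)/n)
    of an estimated kernel entry k (a probability in [0, 1]) is at
    most eps. Standard shot-noise bound, for illustration only."""
    return ceil(k * (1.0 - k) / eps ** 2)

# if concentration squeezes kernel values into a band of width ~0.01
# around 0.5, entries must be resolved to eps = 0.01:
n = shots_for_precision(0.5, 0.01)
```

The 1/eps^2 scaling is why concentration effects are costly: halving the band width that must be resolved quadruples the required number of circuit runs.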

Photonic graph states are important for measurement- and fusion-based quantum computing, quantum networks, and sensing. They can in principle be generated deterministically by using emitters to create the requisite entanglement. Finding ways to minimize the number of entangling gates between emitters and understanding the overall optimization complexity of such protocols is crucial for practical implementations. Here, we address these issues using graph theory concepts. We develop optimizers that minimize the number of entangling gates, reducing them by up to 75$\%$ compared to naive schemes for moderately sized random graphs. While the complexity of optimizing emitter-emitter CNOT counts is likely NP-hard, we are able to develop heuristics based on strong connections between graph transformations and the optimization of stabilizer circuits. These patterns allow us to process large graphs and still achieve a reduction of up to $66\%$ in emitter CNOTs, without relying on subtle metrics such as edge density. We find the optimal emission orderings and circuits to prepare unencoded and encoded repeater graph states of any size, achieving global minimization of emitter and CNOT resources despite the average NP-hardness of both optimization problems. We further study the locally equivalent orbit of graphs. Although enumerating orbits is $\#$P-complete for arbitrary graphs, we analytically calculate the size of the orbit of repeater graphs and find a procedure to generate the orbit for any repeater size. Finally, we inspect the entangling gate cost of preparing any graph from a given orbit and show that we can achieve the same optimal CNOT count across the orbit.

The fluxonium qubit is a promising building block for quantum information processing due to its long coherence time and strong anharmonicity. In this paper, we realize a 60 ns direct CNOT gate on two inductively coupled fluxonium qubits using the selective darkening approach, resulting in a gate fidelity as high as 99.94%. The fidelity remains above 99.9% for 24 days without any recalibration between randomized benchmarking measurements. Compared with the 99.96% fidelity of a 60 ns identity gate, our data brings the investigation of non-decoherence-related errors during gate operations down to $2 \times 10^{-4}$. The present result adds a simple and robust two-qubit gate to the still relatively small family of "beyond three nines" demonstrations on superconducting qubits.

Recent investigations have demonstrated that multi-phonon scattering processes substantially influence the thermal conductivity of materials, posing significant computational challenges for classical simulations as the complexity of phonon modes escalates. This study examines the potential of quantum simulations to address these challenges, utilizing Noisy Intermediate-Scale Quantum (NISQ) era computational capabilities and quantum error mitigation techniques to optimize thermal conductivity calculations. Employing the Variational Quantum Eigensolver (VQE) algorithm, we simulate phonon-phonon contributions based on the Boltzmann Transport Equation (BTE). Our methodology involves mapping multi-phonon scattering systems to fermionic spin operators, necessitating the creation of a customized ansatz to balance circuit accuracy and depth. We construct the system within Fock space using bosonic operators and transform the Hamiltonian into a sum of Pauli operators suitable for quantum computation. By addressing the impact of non-unitary noise effects, we benchmark the noise influence and implement error mitigation strategies to develop a more efficient model for quantum simulations in the NISQ era.

Creating a macroscopic spatial quantum superposition with a nanoparticle has a multitude of applications, ranging from testing the foundations of quantum mechanics, matter-wave interferometry for detecting gravitational waves, probing the electromagnetic vacuum, dark matter detection, and quantum sensing, to testing the quantum nature of gravity in a lab. In this paper, we investigate the role of rotation in a matter-wave interferometer, where we show that imparting angular momentum along the direction of a defect, such as one present in the nitrogen-vacancy centre of a nanodiamond, can cause an enhancement in spin contrast for a wide range of values of the angular momentum, e.g. $10^{3}-10^{6}$~Hz for a nanodiamond of mass of order $10^{-14}-10^{-17}$ kg. Furthermore, the imparted angular momentum can enhance the spatial superposition by almost a factor of two and possibly average out any potential permanent dipoles in the nanodiamond.

A defining property of Hawking radiation is that states with very low entanglement masquerade as highly mixed states; this property is captured by a quantum computational phenomenon known as spoofing entanglement. Motivated by the potential implications for black hole information and the emergence of spacetime connectivity, as well as possible applications of spoofing entanglement, we investigate the geometrization of two types of entanglement spoofers in AdS/CFT: so-called EFI pairs and pseudoentangled state ensembles. We show that (a strengthened version of) EFI pairs with a semiclassical bulk dual have a Python's Lunch; the maximally mixed state over the pseudoentangled state ensemble likewise features a Python's Lunch. Since a Python's Lunch must lie behind an event horizon, we find that black holes are the exclusive gravitational source of entanglement spoofing in the semiclassical limit. Finally, we use an extant construction of holographic pseudorandom states to yield a candidate example of a pseudoentangled state ensemble with a semiclassical bulk dual.

The complexity of quantum many-body problems scales exponentially with the size of the system, rendering any finite-size scaling analysis a formidable challenge. This is particularly true for methods based on the full representation of the wave function, where one simply accepts the enormous Hilbert space dimension and performs linear algebra operations, e.g., for finding the ground state of the Hamiltonian. If the system satisfies an underlying symmetry, where an operator with a degenerate spectrum commutes with the Hamiltonian, the Hamiltonian can be block-diagonalized, thus reducing the complexity at the expense of additional bookkeeping. At the most basic level, required for Krylov space techniques (like the Lanczos algorithm), it is necessary to implement a matrix-vector product of a block of the Hamiltonian with arbitrary block wavefunctions, potentially without holding the Hamiltonian block in memory. An efficient implementation of this operation requires the calculation of the position of an arbitrary basis vector in the canonical ordering of the basis of the block. We present here an elegant and powerful multi-dimensional approach to this problem for the $U(1)$ symmetry appearing in problems with particle-number conservation. Our divide-and-conquer algorithm uses multiple subsystems and hence generalizes previous approaches to make them scalable. In addition to the theoretical presentation of our algorithm, we provide DanceQ, a flexible and modern, header-only C++20 implementation to manipulate, enumerate, and map to its index any basis state in a given particle-number sector, as open source software under https://DanceQ.gitlab.io/danceq.
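The core primitive described above, locating a basis state within the canonical ordering of its particle-number sector, reduces in the single-subsystem case to classic combinatorial ranking; DanceQ's divide-and-conquer scheme generalizes this to multiple subsystems. A minimal single-pass sketch for hard-core occupation patterns, verified against brute-force enumeration:

```python
from itertools import product
from math import comb

def sector_index(occupations):
    """Index of a 0/1 occupation pattern within its fixed particle
    number sector, ordered lexicographically. For each occupied site,
    we skip over all states that leave that site empty yet still fit
    the remaining particles into the remaining sites."""
    index, k = 0, sum(occupations)
    for site, occ in enumerate(occupations):
        remaining = len(occupations) - site - 1
        if occ:
            index += comb(remaining, k)  # states with a 0 at this site
            k -= 1                       # one particle placed
    return index

# verify against brute-force enumeration of the n = 4, k = 2 sector
sector = sorted(s for s in product((0, 1), repeat=4) if sum(s) == 2)
indices = [sector_index(s) for s in sector]
```

This ranking is O(n) per state and needs no lookup table, which is what makes on-the-fly matrix-vector products possible without storing the basis; the multi-subsystem generalization trades a little precomputation for scalability.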

We present an ab initio method for computing vibro-polariton and phonon-polariton spectra of molecules and solids coupled to the photon modes of optical cavities. We demonstrate that if interactions of cavity photon modes with both nuclear and electronic degrees of freedom are treated on the level of the cavity Born-Oppenheimer approximation (CBOA), spectra can be expressed in terms of the matter response to electric fields and nuclear displacements which are readily available in standard density functional perturbation theory (DFPT) implementations. In this framework, results over a range of cavity parameters can be obtained without the need for additional electronic structure calculations, enabling efficient calculations on a wide range of parameters. Furthermore, this approach enables results to be more readily interpreted in terms of the more familiar cavity-independent molecular electric field response properties, such as polarizability and Born effective charges which enter into the vibro-polariton calculation. Using corresponding electric field response properties of bulk insulating systems, we are also able to obtain $\Gamma$ point phonon-polariton spectra of two dimensional (2D) insulators. Results for a selection of cavity-coupled molecular and 2D crystal systems are presented to demonstrate the method.

We analyze boundary spin correlation functions of the hyperbolic-lattice Ising model from the holographic point of view. Using the corner-transfer-matrix renormalization group (CTMRG) method, we demonstrate that the boundary correlation function exhibits power-law decay with quasi-periodic oscillation, while the bulk correlation function always decays exponentially. On the basis of the geometric relation between the bulk correlation path and the distance along the outer edge boundary, we find that scaling dimensions for the boundary correlation function can be well explained by the combination of the bulk correlation length and the background curvature inherent to the hyperbolic lattice. We also investigate the cutoff effect of the bond dimension in CTMRG, revealing that the long-distance behavior of the boundary spin correlation is accurately described even with a small bond dimension. In contrast, the short-distance behavior rapidly loses its accuracy.

As the focus of quantum science shifts from basic research to development and implementation of applied quantum technology, calls for a robust, diverse quantum workforce have increased. However, little research has been done on the design and impact on participants of workforce preparation efforts outside of R1 contexts. In order to begin to answer the question of how program design can or should attend to the needs and interests of diverse groups of students, we performed interviews with students from two Colorado-based quantum education/workforce development programs, one in an undergraduate R1 setting and one in a distributed community setting and serving students largely from two-year colleges. Through analysis of these interviews, we were able to highlight differences between the student populations in the two programs in terms of participation goals, prior and general awareness of quantum science, and career interest and framing of career trajectories. While both groups of students reported benefits from program participation, we highlight the ways in which students' different needs and contexts have informed divergent development of the two programs, framing contextual design of quantum education and workforce efforts as an issue of equity and representation for the burgeoning quantum workforce.

The elementary excitations of quantum spin systems generally have the nature of weakly interacting bosonic quasi-particles, generated by local operators acting on the ground state. Nonetheless, in one spatial dimension the nature of the quasiparticles can change radically, since many relevant one-dimensional $S=1/2$ Hamiltonians can be exactly mapped onto models of spinless fermions with local hopping and interactions. Due to the non-local nature of the spin-to-fermion mapping, observing the fermionic quasiparticle excitations directly is impossible using local probes, which are at the basis of all the forms of spectroscopy (such as neutron scattering) traditionally available in condensed matter physics. Here we show theoretically that \emph{quench spectroscopy} of synthetic quantum matter -- which probes the excitation spectrum of a system by monitoring the nonequilibrium dynamics of its correlation functions -- can accurately reconstruct the dispersion relation of fermionic quasiparticles in spin chains. This possibility relies on the ability of quantum simulation experiments to measure non-local spin-spin correlation functions, corresponding to elementary fermionic correlation functions. Our analysis is based on new exact results for the quench dynamics of quantum spin chains, and it opens the path to probing arbitrary quasiparticle excitations in synthetic quantum matter.

We construct the Feynman integral for the Schr\"odinger propagator in the polar conjugate momentum space, which describes the bound state Aharonov-Bohm effect, as a well-defined white noise functional.

This work provides an introduction and overview on some basic mathematical aspects of the single-flux Aharonov-Bohm Schr\"odinger operator. The whole family of admissible self-adjoint realizations is characterized by means of four different methods: von Neumann theory, boundary triplets, quadratic forms and Kre{\u\i}n's resolvent formalism. The relation between the different parametrizations thus obtained is explored, comparing the asymptotic behavior of functions in the corresponding operator domains close to the flux singularity. Special attention is devoted to those self-adjoint realizations which preserve the same rotational symmetry and homogeneity under dilations of the basic differential operator. The spectral and scattering properties of all the Hamiltonian operators are finally described.

We show that the information loss at the cosmological apparent horizon in an expanding universe has a direct correspondence with the Landauer principle of information dynamics. We show that the Landauer limit is satisfied in this case, which implies that the information erasure at the cosmological apparent horizon happens in the most efficient way possible. We also show that our results hold for extensions of the standard entropy formulations. This is the first work which directly provides a correspondence between information dynamics and expanding cosmic horizons, and we discuss several interesting implications of this result.

Research on topological models unveils fascinating physics, especially in the realm of dynamical quantum phase transitions (DQPTs). However, the understanding of entanglement structures and properties near DQPT in models with longer-range hoppings is far from complete. In this work, we study DQPTs in the quenched extended Su-Schrieffer-Heeger (SSH) model. Anomalous DQPTs, where the number of critical momenta exceeds the winding number differences between the pre-quench and post-quench phases, are observed. We find that the entanglement exhibits local maximum (minimum) around the anomalous DQPTs, in line with the level crossings (separations) around the middle of the correlation matrix spectrum. We further categorize the phases in the equilibrium model into two classes and distinctive features in the time evolution of the entanglement involving quenches within and across the two classes are identified. The findings pave the way to a better understanding of topological models with longer-range hoppings in the out-of-equilibrium regime.

Projected Entangled Pair States (PEPS) are recognized as a potent tool for exploring two-dimensional quantum many-body systems. However, a significant challenge emerges when applying conventional PEPS methodologies to systems with periodic boundary conditions (PBC), attributed to the prohibitive computational scaling with the bond dimension. This has notably restricted the study of systems with complex boundary conditions. To address this challenge, we have developed a strategy that involves the superposition of PEPS with open boundary conditions (OBC) to treat systems with PBC. This approach significantly reduces the computational complexity of such systems while maintaining their translational invariance and the PBC. We benchmark this method against the Heisenberg model and the $J_1$-$J_2$ model, demonstrating its capability to yield highly accurate results at low computational costs, even for large system sizes. The techniques are adaptable to other boundary conditions, including cylindrical and twisted boundary conditions, and therefore significantly expands the application scope of the PEPS approach, shining new light on numerous applications.

In the last decade, remarkable advances in integrated photonic technologies have enabled table-top experiments and instrumentation to be scaled down to compact chips with significant reductions in size, weight, power consumption, and cost. Here, we demonstrate an integrated continuously tunable laser in a heterogeneous gallium arsenide-on-silicon nitride (GaAs-on-SiN) platform that emits in the far-red spectral region near 780 nm, with a 20 nm tuning range, a <6 kHz intrinsic linewidth, and a >40 dB side-mode suppression ratio. The GaAs optical gain regions are heterogeneously integrated with low-loss SiN waveguides. The narrow-linewidth lasing is achieved with an extended cavity consisting of a resonator-based Vernier mirror and a phase shifter. Utilizing synchronous tuning of the integrated heaters, we show mode-hop-free wavelength tuning over a range larger than 100 GHz (200 pm). To demonstrate the potential of the device, we investigate two illustrative applications: (i) the linear characterization of a silicon nitride microresonator designed for entangled-photon pair generation, and (ii) absorption spectroscopy and locking to the D1 and D2 transition lines of $^{87}$Rb. The performance of the proposed integrated laser holds promise for a broader spectrum of both classical and quantum applications in the visible range, encompassing communication, control, sensing, and computing.

An algebraic characterization of the contractions of the Poincar\'e group permits a proper construction of a non-relativistic limit of its tachyonic representation. We arrive at a consistent, nonstandard representation of the Galilei group which was disregarded long ago on account of its supposedly unphysical properties. The corresponding quantum (and classical) theory shares its fundamentals with the relativistic one and serves as a toy model for better comprehending the unusual behavior of the tachyonic representation. For instance, evolution takes place in a spatial coordinate rather than in time, as for relativistic tachyons, but the modulus of the three-momentum is the same for all Galilean observers, leading to a new dispersion relation for a Galilean system. Furthermore, the tachyonic objects described by the new representation cannot be regarded as localizable in the standard sense.

We report an experimental study of a one-dimensional quintuple-quantum-dot array integrated with two quantum-dot charge sensors in an InAs nanowire. The device is characterized by measuring double quantum dots formed consecutively along the array, with the corresponding charge stability diagrams revealed by both direct-current measurements and charge-sensor signals. The one-dimensional quintuple-quantum-dot array is then tuned up, and its charge configurations are fully mapped out with the two charge sensors. The energy level of each dot in the array can be controlled individually by using a compensated gate architecture (i.e., "virtual gates"). Four dots in the array are then selected to form two double quantum dots, and an ultrastrong inter-double-dot interaction is obtained. A theoretical simulation based on a four-dimensional Hamiltonian confirms the strong coupling between the two double quantum dots. The highly controllable one-dimensional quantum-dot array achieved in this work is expected to be valuable for employing InAs nanowires to construct advanced quantum hardware in the future.
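The compensated-gate ("virtual gate") scheme mentioned above amounts to inverting a cross-talk matrix so that each control knob shifts a single dot's potential. The sketch below is purely illustrative; the lever-arm values are made up, whereas in practice they would be extracted from measured charge stability diagrams:

```python
import numpy as np

# Hypothetical cross-talk (lever-arm) matrix for two dots:
# alpha[i, j] is the shift of dot i's potential per volt on physical gate j.
alpha = np.array([[1.00, 0.30],
                  [0.25, 1.00]])

# Virtual gates apply the inverse matrix, so each virtual voltage
# moves exactly one dot's chemical potential.
M = np.linalg.inv(alpha)

def physical_voltages(v_virtual):
    """Map a virtual-gate voltage vector to physical gate voltages."""
    return M @ v_virtual

# Stepping virtual gate 1 by 10 mV shifts dot 1 only:
dv = physical_voltages(np.array([0.010, 0.0]))
shift = alpha @ dv  # dot potentials move by [0.010, 0.0]
```

The same construction extends to the full five-dot array with a 5x5 lever-arm matrix.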

Akin to the traditional quasi-classical trajectory method for investigating the dynamics on a single adiabatic potential energy surface of an elementary chemical reaction, we carry out the dynamics on a two-state ab initio potential energy surface, including nonadiabatic coupling terms as friction terms, for D$^+$ + H$_2$ collisions. It is shown that the resulting dynamics correctly accounts for nonreactive charge-transfer, reactive non-charge-transfer, and reactive charge-transfer processes. In addition, it leads to the formation of the triatomic DH$_2^+$ species as well.

Planar semiconductor heterostructures offer versatile device designs and are promising candidates for scalable quantum computing. Notably, heterostructures based on strained germanium have been extensively studied in recent years, with emphasis on their strong and tunable spin-orbit interaction, low effective mass, and high hole mobility. However, planar systems are still limited by the fact that the shape of the confinement potential is directly tied to the density. In this work, we present the successful implementation of a backgate for a planar germanium heterostructure. The backgate, in combination with a topgate, enables independent control over the density and the electric field, which determines important state properties such as the effective mass, the $g$-factor, and the quantum lifetime. This unparalleled degree of control paves the way towards engineering qubit properties and facilitates the targeted tuning of bilayer quantum wells, which promise denser qubit packing.

The superior computational power promised by quantum computers utilises the fundamental quantum mechanical principle of entanglement. However, achieving entanglement and verifying that the generated state does not obey local causality has proven difficult for spin qubits in gate-defined quantum dots, as it requires simultaneously high concurrence values and readout fidelities to break the classical bound imposed by Bell's inequality. Here we employ advanced operational protocols for spin qubits in silicon, such as heralded initialization and calibration via gate set tomography (GST), to reduce all relevant errors and push the fidelities of the full two-qubit gate set above 99%. We demonstrate a 97.17% Bell state fidelity without correcting for readout errors and violate Bell's inequality with a Bell signal of $S = 2.731$, close to the theoretical maximum of $2\sqrt{2}$. Our measurements exceed the classical limit even at elevated temperatures of 1.1 K or entanglement lifetimes of 100 $\mu$s.
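The quoted maximum $S = 2\sqrt{2} \approx 2.828$ is the Tsirelson bound for the CHSH form of Bell's inequality, whereas local-causal models are limited to $S \le 2$. A short illustrative sketch (not the experimental analysis) reproduces the bound for the $|\Phi^+\rangle$ Bell state with the standard optimal measurement angles in the x-z plane:

```python
import numpy as np

# Pauli matrices and the Bell state |Phi+> = (|00> + |11>) / sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)
phi_plus = np.array([1, 0, 0, 1], dtype=float) / np.sqrt(2)

def meas(theta):
    """Spin observable along angle theta in the x-z plane."""
    return np.cos(theta) * Z + np.sin(theta) * X

def corr(a, b):
    """Correlator E(a, b) = <Phi+| A(a) x B(b) |Phi+>."""
    return phi_plus @ np.kron(meas(a), meas(b)) @ phi_plus

# Optimal CHSH settings for |Phi+>: E(a, b) = cos(a - b)
a0, a1 = 0.0, np.pi / 2
b0, b1 = np.pi / 4, -np.pi / 4
S = corr(a0, b0) + corr(a0, b1) + corr(a1, b0) - corr(a1, b1)
# S equals 2*sqrt(2), saturating the Tsirelson bound.
```

Readout errors and decoherence shrink each correlator, which is why the measured $S = 2.731$ sits between the classical limit of 2 and this quantum maximum.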