We prove a vector-valued inequality for the Gaussian noise stability (i.e. we prove a vector-valued Borell inequality) for Euclidean functions taking values in the two-dimensional sphere, for all correlation parameters at most $1/10$ in absolute value. This inequality was conjectured (for all correlation parameters at most $1$ in absolute value) by Hwang, Neeman, Parekh, Thompson and Wright. Such an inequality is needed to prove sharp computational hardness of the product state Quantum MAX-CUT problem, assuming the Unique Games Conjecture.

Large language models, like transformers, have recently demonstrated immense power in text and image generation. This success is driven by their ability to capture long-range correlations between elements in a sequence. The same feature makes the transformer a powerful wavefunction ansatz that addresses the challenge of describing correlations in simulations of qubit systems. We consider two-dimensional Rydberg atom arrays to demonstrate that transformers reach higher accuracies than conventional recurrent neural networks for variational ground state searches. We further introduce large, patched transformer models, which consider a sequence of large atom patches, and show that this architecture significantly accelerates the simulations. The proposed architectures reconstruct ground states with accuracies beyond state-of-the-art quantum Monte Carlo methods, allowing for the study of large Rydberg systems in different phases of matter and at phase transitions. Our high-accuracy ground state representations at reasonable computational costs promise new insights into general large-scale quantum many-body systems.

We discuss the performance of discrete time crystals (DTCs) as quantum sensors. The long-range spatial and temporal ordering displayed by DTCs leads to exponentially slow heating, turning them into advantageous sensors. Specifically, their performance (quantified by the quantum Fisher information) in estimating AC fields can overcome the shot-noise limit while allowing for long-time sensing protocols. Since the collective interactions stabilize their dynamics against noise, these sensors are robust to imperfections in the protocol. The performance of such a sensor can also be used in a dual role to probe the presence or absence of a many-body localized phase.

To characterize the dynamical behavior of many-body quantum systems, one is usually interested in the evolution of so-called order parameters rather than in characterizing the full quantum state. In many situations, these quantities coincide with the expectation values of local observables, such as the magnetization or the particle density. In experiments, however, these expectation values can only be obtained with a finite degree of accuracy due to projection noise. Here, we utilize a machine-learning approach to infer the dynamical generator governing the evolution of local observables in a many-body system from noisy data. To benchmark our method, we consider a variant of the quantum Ising model and generate synthetic experimental data, containing the results of $N$ projective measurements at $M$ sampling points in time, using the time-evolving block-decimation algorithm. As we show, across a wide range of parameters the dynamical generator of local observables can be approximated by a Markovian quantum master equation. Our method is not only useful for extracting effective dynamical generators from many-body systems, but may also be applied to infer decoherence mechanisms of quantum simulation and computing platforms.

We test the quantumness of IBM's quantum computer IBM Quantum System One in Ehningen, Germany. We generate generalised $n$-qubit GHZ states and measure Bell inequalities to investigate the $n$-party entanglement of the GHZ states. The implemented Bell inequalities are derived from non-adaptive measurement-based quantum computation (NMQC), a type of quantum computing that links the successful computation of a non-linear function to the violation of a multipartite Bell inequality. The goal is to compute a multivariate Boolean function that clearly differentiates non-local correlations from local hidden variables (LHVs). Since LHVs can compute only linear functions, whereas quantum correlations are capable of outputting every possible Boolean function, the computation serves as an indicator of multipartite entanglement. Here, we compute various non-linear functions with NMQC on IBM's quantum computer IBM Quantum System One and thereby demonstrate that the presented method can be used to characterize quantum devices. We find a violation for a maximum of seven qubits and compare our results to an existing implementation of NMQC using photons.
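The linearity restriction on LHVs can be checked directly by brute force. The sketch below is our own illustration (not taken from the paper): in NMQC with XOR post-processing, each party outputs a bit depending only on its own input bit, so deterministic local strategies realise exactly the affine Boolean functions, leaving a nonlinear function such as AND out of classical reach.

```python
from itertools import product

def achievable_functions(n):
    """Enumerate all Boolean functions on n input bits realisable by
    deterministic LHV strategies with XOR (parity) post-processing:
    party i outputs a bit a_i(x_i), and the result is the parity of all outputs."""
    funcs = set()
    # Each party's strategy is the pair (a_i(0), a_i(1)).
    for strategies in product(product((0, 1), repeat=2), repeat=n):
        table = tuple(
            sum(strategies[i][x[i]] for i in range(n)) % 2
            for x in product((0, 1), repeat=n)
        )
        funcs.add(table)
    return funcs

funcs = achievable_functions(2)
XOR = (0, 1, 1, 0)   # truth table over inputs 00, 01, 10, 11
AND = (0, 0, 0, 1)
print(XOR in funcs)  # True: XOR is linear, hence classically achievable
print(AND in funcs)  # False: AND is nonlinear, requiring quantum correlations
```

Only the 8 affine functions $c_0 \oplus c_1 x_1 \oplus c_2 x_2$ appear in the enumerated set, which is the separation NMQC exploits.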

The quantum description of reality is quite different from the classical one. Understanding this difference at a fundamental level is still an interesting topic. Recently, Dragan and Ekert [New J. Phys. 22 (2020) 033038] postulated that considering so-called superluminal observers can be useful in this context. In particular, they claim that the full mathematical structure of the generalized Lorentz transformation may imply the emergence of multiple quantum mechanical trajectories. On the contrary, here we show that the generalized Lorentz transformation, when used in a consistent way, does not provide any correspondence between the classical concept of a definite path and the multiple paths of quantum mechanics.

Pseudo-Hermitian operators generalize the concept of Hermiticity. This class of operators includes the quasi-Hermitian operators, which reformulate quantum theory while retaining real-valued measurement outcomes and unitary time evolution. This thesis is devoted to the study of locality in quasi-Hermitian theory, the symmetries and conserved quantities associated with non-Hermitian operators, and the perturbative features of pseudo-Hermitian matrices. In addition to the presented original research, scholars will appreciate the lengthy introduction to non-Hermitian physics. Local quasi-Hermitian observable algebras are examined. Expectation values of local quasi-Hermitian observables equal expectation values of local Hermitian observables. Thus, quasi-Hermitian theories do not increase the values of nonlocal games set by Hermitian theories. Furthermore, Bell's inequality violations in quasi-Hermitian theories never exceed the Tsirelson bound of Hermitian quantum theory. Exceptional points, which are branch points in the spectrum, are a perturbative feature unique to non-Hermitian operators. Cusp singularities of algebraic curves are related to higher-order exceptional points. To exemplify novelties of non-Hermiticity, one-dimensional lattice models with a pair of non-Hermitian defect potentials with balanced loss and gain, $\Delta \pm i \gamma$, are explored. When the defects are nearest neighbour, the entire spectrum becomes complex when $\gamma$ is tuned past a second-order exceptional point. When the defects are at the edges of the chain and the hopping amplitudes are 2-periodic, as in the Su-Schrieffer-Heeger chain, the $\mathcal{PT}$-phase transition is dictated by the topological phase. Chiral symmetry and representation theory are used to derive large classes of pseudo-Hermitian operators with closed-form intertwining operators.

Conventional methods of measuring entanglement in a two-qubit photonic mixed state require the detection of both qubits. We generalize a recently introduced method which does not require the detection of both qubits, by extending it to cover a wider class of entangled states. Specifically, we present a detailed theory that shows how to measure entanglement in a family of two-qubit mixed states - obtained by generalizing Werner states - without detecting one of the qubits. Our method is interferometric and does not require any coincidence measurement or postselection. We also perform a quantitative analysis of anticipated experimental imperfections to show that the method is experimentally implementable and resistant to experimental losses.

Spin squeezing is vitally important in quantum metrology and quantum information science. The noise reduction resulting from spin squeezing can surpass the standard quantum limit and even reach the Heisenberg limit (HL) in some special circumstances. However, systems that can reach the HL are very limited. Here we study spin squeezing in atomic systems with a generic form of quadratic collective-spin interaction, which can be described by the Lipkin-Meshkov-Glick (LMG) model. We find that the squeezing properties are determined by the initial states and the anisotropic parameters. Moreover, we propose a pulse rotation scheme to transform the model into the two-axis twisting model with Heisenberg-limited spin squeezing. Our study paves the way for reaching the HL in a broad variety of systems.

Synchronization has recently been explored deep in the quantum regime with elementary few-level quantum oscillators such as qudits and weakly pumped quantum Van der Pol oscillators. To engineer more complex quantum synchronizing systems, it is practically relevant to study composite oscillators built up from basic quantum units that are commonly available and offer high controllability. Here, we consider a minimal model for a composite oscillator consisting of two interacting qubits coupled to separate baths, for which we also propose and analyze an implementation on a circuit quantum electrodynamics platform. We adopt a `microscopic' and `macroscopic' viewpoint and study the response of the constituent qubits and of the composite oscillator when one of the qubits is weakly driven. We find that the phase-locking of the individual qubits to the external drive is strongly modified by interference effects arising from their mutual interaction. In particular, we discover a phase-locking blockade phenomenon at particular coupling strengths. Furthermore, we find that interactions between the qubits can strongly enhance or suppress the extent of synchronization of the composite oscillator to the external drive. Our work demonstrates the potential for assembling complex quantum synchronizing systems from basic building units, which is of pragmatic importance for advancing the field of quantum synchronization.

The use of quantum processing units (QPUs) promises speed-ups for solving computational problems. Yet, current devices are limited by the number of qubits and suffer from significant imperfections, which prevents achieving quantum advantage. To step towards practical utility, one approach is to apply hardware-software co-design methods. This can involve tailoring problem formulations and algorithms to the quantum execution environment, but also entails the possibility of adapting physical properties of the QPU to specific applications. In this work, we follow the latter path, and investigate how key figures - circuit depth and gate count - required to solve four cornerstone NP-complete problems vary with tailored hardware properties. Our results reveal that near-optimal performance and properties do not necessarily require optimal quantum hardware, but can be achieved with much simpler structures that can potentially be realised for many hardware approaches. Using statistical analysis techniques, we additionally identify an underlying general model that applies to all subject problems. This suggests that our results may be universally applicable to other algorithms and problem domains, and that tailored QPUs can find utility outside their initially envisaged problem domains. The substantial possible improvements nonetheless highlight the importance of QPU tailoring to progress towards practical deployment and scalability of quantum software.

Chain-mapping techniques combined with the time-dependent density matrix renormalization group are powerful tools for simulating the dynamics of open quantum systems interacting with structured bosonic environments. Most interestingly, they leave the degrees of freedom of the environment open to inspection. In this work, we fully exploit the access to environmental observables to illustrate how the evolution of the open quantum system can be related to the detailed evolution of the environment it interacts with. In particular, we give a precise description of the fundamental physics that enables the finite-temperature chain-mapping formalism to capture dynamical equilibrium states. Furthermore, we analyze a two-level system strongly interacting with a super-Ohmic environment, where we discover a change in the spin-boson ground state that can be traced to the formation of polaronic states.

Multicritical Ising models and their perturbations are paradigmatic models of statistical mechanics. In two space-time dimensions, these models provide a fertile testbed for the investigation of numerous non-perturbative problems in strongly-interacting quantum field theories. In this work, analog superconducting quantum electronic circuit simulators are described for the realization of these multicritical Ising models. The latter arise as perturbations of the quantum sine-Gordon model with $p$-fold degenerate minima, $p = 2, 3, 4,\ldots$. The corresponding quantum circuits are constructed with Josephson junctions with $\cos(n\phi + \delta_n)$ potentials with $1\leq n\leq p$ and $\delta_n\in[-\pi,\pi]$. The simplest case, $p = 2$, corresponds to the quantum Ising model and can be realized using conventional Josephson junctions and the so-called $0-\pi$ qubits. The lattice models for the Ising and tricritical Ising models are analyzed numerically using the density matrix renormalization group technique. Evidence for the multicritical phenomena is obtained from computation of the entanglement entropy of a subsystem and correlation functions of relevant lattice operators. The proposed quantum circuits provide a systematic approach for controlled numerical and experimental investigation of a wide range of non-perturbative phenomena occurring in low-dimensional quantum field theories.

We present a scalable and modular error mitigation protocol for running $\mathsf{BQP}$ computations on a quantum computer with time-dependent noise. Utilising existing tools from quantum verification, our framework interleaves standard computation rounds with test rounds for error-detection and inherits a local-correctness guarantee which exponentially bounds (in the number of circuit runs) the probability that a returned classical output is incorrect. On top of the verification work, we introduce a post-selection technique we call basketing to address time-dependent noise behaviours and reduce overhead. The result is a first-of-its-kind error mitigation protocol which is exponentially effective and requires minimal noise assumptions, making it straightforwardly implementable on existing NISQ devices and scalable to future, larger ones.

Bell states form a complete set of four maximally polarization-entangled two-qubit quantum states. Being a key ingredient of many quantum applications, such as entanglement-based quantum key distribution protocols, superdense coding, quantum teleportation, and entanglement swapping, Bell states have to be prepared and measured. Spontaneous parametric down-conversion is the easiest way of preparing Bell states, and a desired Bell state can be prepared from any entangled photon pair through single-qubit logic gates. In this paper, we present the generation of the complete set of Bell states using only the unitary transformations of half-wave plates (HWPs). The initial entangled state is prepared using a combination of a nonlinear crystal and a beam-splitter (BS), and the remaining Bell states are created by applying single-qubit logic gates to the entangled photon pairs using HWPs. Our results can be useful in many quantum applications, especially in superdense coding, where control over the basis of the maximally entangled state is required.
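To make the role of the HWPs concrete, here is a small numerical sketch (our own illustration under standard polarisation-optics conventions, not the paper's specific setup): the Jones matrix of a half-wave plate at angle $\theta$ acts as a single-qubit gate on one photon of $|\Phi^+\rangle$, and two plate settings suffice to reach the other Bell states.

```python
import numpy as np

def hwp(theta):
    """Jones matrix of a half-wave plate with fast axis at angle theta;
    on the polarisation qubit it acts as a single-qubit gate."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c, s], [s, -c]])

I2 = np.eye(2)
phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # (|HH> + |VV>)/sqrt(2)

# HWP at 0 degrees on one arm acts as Z: |Phi+> -> |Phi->
phi_minus = np.kron(hwp(0), I2) @ phi_plus
# HWP at 45 degrees acts as X: |Phi+> -> |Psi+>
psi_plus = np.kron(hwp(np.pi / 4), I2) @ phi_plus
# Two plates compose to XZ (up to a global sign): |Phi+> -> |Psi->
psi_minus = np.kron(hwp(np.pi / 4) @ hwp(0), I2) @ phi_plus
```

The plate angles 0 and 45 degrees thus implement the Pauli gates $Z$ and $X$ that map any one Bell state onto the full set.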

The unavoidable interaction between thermal environments and quantum systems leads to the degradation of quantum features, which can be counteracted by engineered environments. In particular, preparing a thermally coherent environment is promising for prolonging quantum properties relative to incoherent baths. We propose that a thermal coherent state (TCS) can be realized by coupling an ancilla qubit to thermally and longitudinally driven resonator modes. Using the master equation approach to describe the open-system dynamics, we obtain the steady-state solution of the master equation for the qubit and resonator. Remarkably, the state of the resonator is a TCS, while the ancilla qubit remains thermal. Furthermore, we study the second-order correlation coefficient and photon number statistics to validate its quantum properties. Finally, we investigate a mechanism for generating quantum coherence based on a hybrid system composed of two-level systems and a resonator, showing that an ancilla-assisted engineered thermal coherent bath prolongs the coherence lifetimes of qubits. Our results may provide a promising direction for preparing and practically implementing TCSs and engineered environments for quantum science and technology.

Pushing the limits of SAT solving is the main aim of our work. To this end, we make use of quantum computing through its two main practical models of computation. Both require reformulating the standard statement of the 3-SAT problem to meet the requirements of each technique. This paper presents and describes a hybrid quantum computing strategy for solving 3-SAT problems. The performance of this approach has been tested over a set of representative scenarios when dealing with 3-SAT from the quantum computing perspective.

The study of correlation functions in quantum systems plays a vital role in decoding their properties and gaining insights into physical phenomena. In this context, the Gell-Mann and Low theorem has been employed to simplify computations by canceling connected vacuum diagrams. Building upon the essence of this theorem, we propose a modification to the adiabatic evolution process by adopting the two-state vector formalism with time symmetry. This novel perspective reveals correlation functions as weak values, offering a universal method for recording them on the apparatus through weak measurement. To illustrate the effectiveness of our approach, we present numerical simulations of perturbed quantum harmonic oscillators, addressing the intricate interplay between the coupling coefficient and the number of ensemble copies. Additionally, we extend our protocol to the domain of quantum field theory, where joint weak values encode crucial information about the correlation function. This comprehensive investigation significantly advances our understanding of the fundamental nature of correlation functions and weak measurements in quantum theories.
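For readers unfamiliar with weak values, a minimal self-contained toy example (ours, not the paper's protocol) evaluates $A_w = \langle\phi|A|\psi\rangle/\langle\phi|\psi\rangle$ for $A = \sigma_z$ and shows that nearly orthogonal pre- and post-selection yields an anomalous value far outside the eigenvalue range $[-1, 1]$:

```python
import numpy as np

sz = np.diag([1.0, -1.0])                # observable A = sigma_z
psi = np.array([1.0, 1.0]) / np.sqrt(2)  # pre-selected state |+>
phi = np.array([1.0, -0.9])              # post-selection, nearly orthogonal to |+>
                                         # (left unnormalised: the norm cancels)

# Weak value A_w = <phi|A|psi> / <phi|psi>
A_w = (phi.conj() @ sz @ psi) / (phi.conj() @ psi)
print(A_w)  # ~19, far outside the spectrum {+1, -1} of sigma_z
```

The small overlap $\langle\phi|\psi\rangle$ in the denominator is what amplifies the weak value, the same mechanism the abstract invokes for reading correlation functions off the measurement apparatus.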

Diamond has superlative material properties for a broad range of quantum and electronic technologies. However, heteroepitaxial growth of single crystal diamond remains limited, impeding integration and evolution of diamond-based technologies. Here, we directly bond single-crystal diamond membranes to a wide variety of materials including silicon, fused silica, sapphire, thermal oxide, and lithium niobate. Our bonding process combines customized membrane synthesis, transfer, and dry surface functionalization, allowing for minimal contamination while providing pathways for near unity yield and scalability. We generate bonded crystalline membranes with thicknesses as low as 10 nm, sub-nm interfacial regions, and nanometer-scale thickness variability over $200 \times 200\ \mu$m$^2$ areas. We demonstrate multiple methods for integrating high quality factor nanophotonic cavities with the diamond heterostructures, highlighting the platform versatility in quantum photonic applications. Furthermore, we show that our ultra-thin diamond membranes are compatible with total internal reflection fluorescence (TIRF) microscopy, which enables interfacing coherent diamond quantum sensors with living cells while rejecting unwanted background luminescence. The processes demonstrated herein provide a full toolkit to synthesize heterogeneous diamond-based hybrid systems for quantum and electronic technologies.

Combinatorial optimization problems have attracted much interest in the quantum computing community in recent years as a potential testbed to showcase quantum advantage. In this paper, we show how to exploit multilevel carriers of quantum information -- qudits -- for the construction of algorithms for constrained quantum optimization. These systems have been recently introduced in the context of quantum optimization and allow us to treat more general problems than the ones usually mapped into qubit systems. In particular, we propose a hybrid classical-quantum heuristic strategy that allows us to sample constrained solutions while greatly reducing the search space of the problem, thus making more efficient use of limited quantum resources. As an example, we focus on the Electric Vehicle Charging and Routing Problem (EVCRP). We translate the classical problem and map it into a quantum system, obtaining promising results on a toy example that demonstrates the validity of our technique.

In this paper, a quantitative investigation of the non-classical and quantum non-Gaussian characters of the photon-subtracted displaced Fock state $|{\psi}\rangle=a^kD(\alpha)|{n}\rangle$, where $k$ is the number of subtracted photons and $n$ is the Fock parameter, is performed using a collection of measures: the Wigner logarithmic negativity, the linear entropy potential, a skew-information-based measure, and the relative entropy of quantum non-Gaussianity. It is observed that the number of subtracted photons ($k$) significantly changes the nonclassicality and quantum non-Gaussianity in the regime of small values of the displacement parameter, whereas the Fock parameter ($n$) produces a notable change in the large-displacement regime. In this respect, the role of the Fock parameter is found to be stronger than that of the photon subtraction number. Finally, the Wigner function dynamics under a photon loss channel is used to show that the Wigner negativity can only be exposed by highly efficient detectors.

We consider a simple string model to explain and partly demystify the phenomenon of quantum entanglement. The model in question has nothing to do with string theory: it uses macroscopic strings that can be acted upon by Alice and Bob in ways that violate, or fail to violate, the Bell-CHSH inequalities and the no-signaling conditions (also called marginal laws) in different ways. We present several variants of the model, to address different objections that may arise. This allows us to make fully visible what the quantum formalism already suggests about the nature of the correlations associated with entangled states, which appear to be created in a contextual manner at each execution of a joint measurement. We also briefly present the hidden measurement interpretation, whose rationale is compatible with the mechanism suggested by our string model, then offer some final thoughts about the possibility that the quantum entanglement phenomenon might affect not only states but also measurements, and that our physical reality would be predominantly non-spatial in nature.

Quantum computers (QCs) aim to disrupt the status quo of computing -- replacing traditional systems and platforms that are driven by digital circuits and modular software -- with hardware and software that operate on the principles of quantum mechanics. QCs can exploit quantum circuits (i.e., quantum gates manipulating quantum bits) to achieve "quantum computational supremacy" over traditional, i.e., digital, computing systems. Currently, the issues that impede mass-scale adoption of quantum systems are rooted in the fact that building, maintaining, and/or programming QCs is a complex and radically distinct engineering paradigm compared to the challenges of classical computing and software engineering. Quantum service orientation is seen as a solution that synergises research on service computing and quantum software engineering (QSE) to allow developers and users to build and utilise quantum software services based on a pay-per-shot utility computing model. A shot represents a single execution of instructions on a quantum processing unit, and the pay-per-shot model allows vendors (e.g., Amazon Braket) to offer their QC platforms, simulators, software services, etc. to enterprises and individuals who do not need to own or maintain quantum systems. This research contributes by 1) developing a reference architecture for enabling quantum computing as a service, 2) implementing microservices with the quantum-classic split pattern as an architectural use-case, and 3) evaluating the reference architecture based on feedback from 22 practitioners. In the QSE context, the research focuses on unifying architectural methods and service-orientation patterns to promote the reuse of knowledge and best practices in tackling the emerging and futuristic challenges of architecting and implementing Quantum Computing as a Service (QCaaS).

Secure communication over long distances is one of the major problems of modern informatics. Classical transmissions are recognized to be vulnerable to quantum computer attacks. Remarkably, the same quantum mechanics that engenders quantum computers offers guaranteed protection against these attacks via quantum key distribution (QKD) protocols. Yet, long-distance transmission is problematic, since the signal decay in optical channels occurs at distances of about a hundred kilometers. We resolve this problem by creating a QKD protocol, further referred to as the Terra Quantum QKD protocol (TQ-QKD protocol), using semiclassical pulses containing enough photons for random bit encoding and exploiting erbium amplifiers to retranslate the photon pulses, while ensuring that at this intensity only a few photons can leak out of the channel even over distances of about a hundred meters. As a result, an eavesdropper cannot efficiently utilize the lost part of the signal. A central component of the TQ-QKD protocol is the end-to-end control over losses in the transmission channel which, in principle, could allow an eavesdropper to obtain the transmitted information. However, our control precision is such that if the degree of the leak falls below the control border, the leaking states are quantum, since they contain only a few photons. Therefore, the parts of the bit-encoding states representing `0' and `1' that are available to an eavesdropper are nearly indistinguishable. Our work presents the experimental realization of the TQ-QKD protocol ensuring secure communication over 1032 kilometers. Moreover, further refining the quality of the scheme's components will greatly expand the attainable transmission distances. This paves the way for creating a secure global QKD network in the upcoming years.

We propose a new proof method for direct coding theorems for wiretap channels where the eavesdropper has access to a quantum version of the transmitted signal on an infinite-dimensional Hilbert space and the legitimate parties communicate through a classical channel or a classical-input quantum-output (cq) channel. The transmitter input can be subject to an additive cost constraint, which specializes to the case of an average energy constraint. This method yields errors that decay exponentially with increasing block lengths. Moreover, it provides a guarantee of a quantum version of semantic security, which is an established concept in classical cryptography and physical layer security. Therefore, it complements existing works which either do not prove the exponential error decay or use weaker notions of security. The main part of this proof method is a direct coding result on channel resolvability which states that there is only a doubly exponentially small probability that a standard random codebook does not solve the channel resolvability problem for the cq channel. Semantic security has strong operational implications, essentially meaning that the eavesdropper cannot use its quantum observation to gather any meaningful information about the transmitted signal. We also discuss the connections between semantic security and various other established notions of secrecy.

We propose and implement an interpretable machine learning classification model for Explainable AI (XAI) based on expressive Boolean formulas. Potential applications include credit scoring and diagnosis of medical conditions. The Boolean formula defines a rule with tunable complexity (or interpretability), according to which input data are classified. Such a formula can include any operator that can be applied to one or more Boolean variables, thus providing higher expressivity compared to more rigid rule-based and tree-based approaches. The classifier is trained using native local optimization techniques, efficiently searching the space of feasible formulas. Shallow rules can be determined by fast Integer Linear Programming (ILP) or Quadratic Unconstrained Binary Optimization (QUBO) solvers, potentially powered by special purpose hardware or quantum devices. We combine the expressivity and efficiency of the native local optimizer with the fast operation of these devices by executing non-local moves that optimize over subtrees of the full Boolean formula. We provide extensive numerical benchmarking results featuring several baselines on well-known public datasets. Based on the results, we find that the native local rule classifier is generally competitive with the other classifiers. The addition of non-local moves achieves similar results with fewer iterations, and therefore using specialized or quantum hardware could lead to a speedup by fast proposal of non-local moves.
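The flavour of a formula-based classifier can be conveyed with a deliberately simplified sketch. This is our own toy code, not the paper's classifier: the native local optimizer and the ILP/QUBO-powered non-local moves are replaced here by crude random search over small Boolean expression trees, on a synthetic dataset whose label is itself a Boolean formula of the features.

```python
import itertools
import random

random.seed(0)

# Toy dataset: label y = (x0 AND x1) OR x2 over 4 binary features (x3 is noise).
data = [(bits, int((bits[0] and bits[1]) or bits[2]))
        for bits in itertools.product((0, 1), repeat=4)]

# A formula is a nested tuple: ('var', i), ('not', f), ('and', f, g), ('or', f, g).
def evaluate(f, x):
    if f[0] == 'var':
        return x[f[1]]
    if f[0] == 'not':
        return 1 - evaluate(f[1], x)
    a, b = evaluate(f[1], x), evaluate(f[2], x)
    return int(a and b) if f[0] == 'and' else int(a or b)

def accuracy(f):
    return sum(evaluate(f, x) == y for x, y in data) / len(data)

def random_formula(depth=2):
    if depth == 0 or random.random() < 0.3:
        return ('var', random.randrange(4))
    op = random.choice(['and', 'or', 'not'])
    if op == 'not':
        return ('not', random_formula(depth - 1))
    return (op, random_formula(depth - 1), random_formula(depth - 1))

def search(iters=2000):
    """Crude stochastic search: repeatedly propose a fresh small formula and
    keep the most accurate one seen so far (a stand-in for local moves)."""
    best = random_formula()
    best_acc = accuracy(best)
    for _ in range(iters):
        cand = random_formula()
        acc = accuracy(cand)
        if acc > best_acc:
            best, best_acc = cand, acc
    return best, best_acc

best, best_acc = search()
print(best, best_acc)
```

The returned tuple is directly readable as a classification rule, which is the interpretability property the paper trades off against formula complexity.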

The p-symmetry of the hole wavefunction is associated with a weaker hyperfine interaction compared to electrons, making hole spin qubits attractive candidates for implementing long-coherence quantum processors. However, recent studies demonstrated that hole qubits in planar germanium (Ge) heterostructures are still very sensitive to the nuclear spin bath. These observations highlight the need to develop nuclear spin-free Ge qubits to suppress this decoherence channel and evaluate its impact. With this perspective, this work demonstrates the epitaxial growth of $^\text{73}$Ge-depleted isotopically enriched $^\text{70}$Ge/SiGe quantum wells. The growth was achieved by reduced pressure chemical vapor deposition using isotopically purified monogermane $^\text{70}$GeH$_\text{4}$ and monosilane $^\text{28}$SiH$_\text{4}$ with isotopic purities higher than 99.9 $\%$ and 99.99 $\%$, respectively. The quantum wells consist of a series of $^\text{70}$Ge/SiGe heterostructures grown on Si wafers using a Ge virtual substrate and a graded SiGe buffer layer. The isotopic purity is investigated using atom probe tomography following an analytical procedure that addresses the discrepancies in the isotopic content caused by the overlap of isotope peaks in mass spectra. The nuclear spin background in the quantum wells was found to be sensitive to the growth conditions. The lowest concentration of the nuclear spinful isotopes $^\text{73}$Ge and $^\text{29}$Si in the heterostructure was established at 0.01 $\%$ in the Ge quantum well and SiGe barriers. The measured average distance between nuclear spins reaches 3-4 nm in $^\text{70}$Ge/$^\text{28}$Si$^\text{70}$Ge, which is an order of magnitude larger than in natural Ge/SiGe heterostructures.

We show that the internal quality factors of high impedance superconducting resonators made of granular aluminum can be improved by coating them with micrometric films of solid para-hydrogen molecular crystals. We attribute the average measured $\approx 8\%$ reduction in dissipation to absorption of stray terahertz radiation at the crystal-resonator interface and the subsequent dissipation of its energy in the form of phonons below the pair-breaking gap. Our results prove that, contrary to expectations, replacing the vacuum dielectric atop a superconducting resonator can be beneficial, thanks to the added protection against Cooper pair-breaking terahertz radiation. Moreover, at the level of internal quality factors in the $10^5$ range, the hydrogen crystal does not introduce additional losses, which is promising for embedding impurities to couple to superconducting thin-film devices in hybrid quantum architectures.

Non-KAM (Kolmogorov-Arnold-Moser) systems, when perturbed by weak time-dependent fields, offer a fast route to classical chaos through an abrupt breaking of the invariant phase space tori. However, such behavior is not ubiquitous but rather contingent on whether the total system is in resonance. The resonances are usually determined by the ratios of characteristic frequencies associated with the system and the perturbation. Under the resonance condition, the classical dynamics are highly susceptible to variations in the system parameters. In this work, we employ out-of-time-order correlators (OTOCs) to study the dynamical sensitivity of a perturbed non-KAM system in the quantum limit as the parameter that characterizes the resonances and non-resonances is slowly varied. For this purpose, we consider a quantized kicked harmonic oscillator (KHO) model with the kick being the external time-dependent perturbation. Although the Lyapunov exponent of the KHO at resonances remains close to zero in the weak perturbative regime, making the system weakly chaotic in the conventional sense, the classical phase space undergoes significant structural changes. Motivated by this, we study the OTOCs when the system is in resonance and contrast the results with the non-resonant case. At resonances, we observe that the asymptotic dynamics of the OTOCs are sensitive to these structural changes, growing quadratically as opposed to the linear or stagnant growth at non-resonances. On the other hand, our findings suggest that the short-time dynamics remain relatively more stable to the variations in the parameter. We support our results with analytical expressions for the OTOCs in a few special cases, and we extend our findings concerning the non-resonant cases to a broad class of KAM systems.
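For readers unfamiliar with the quantity, the infinite-temperature OTOC used in such studies can be computed exactly for any finite-dimensional Hamiltonian. The sketch below is a generic illustration (a two-level toy system, not the kicked harmonic oscillator of the paper): it evaluates $C(t) = \mathrm{Tr}\big([W(t),V]^\dagger [W(t),V]\big)/d$ by exact diagonalization.

```python
import numpy as np

def otoc(H, W, V, t):
    """Infinite-temperature OTOC C(t) = Tr([W(t),V]^dag [W(t),V]) / d
    for a finite-dimensional Hermitian Hamiltonian H."""
    d = H.shape[0]
    evals, U = np.linalg.eigh(H)
    # time-evolution operator e^{-iHt} from the spectral decomposition
    Ut = U @ np.diag(np.exp(-1j * evals * t)) @ U.conj().T
    Wt = Ut.conj().T @ W @ Ut            # Heisenberg-picture W(t)
    comm = Wt @ V - V @ Wt
    return np.trace(comm.conj().T @ comm).real / d

# toy check: H = sigma_z, W = V = sigma_x gives C(t) = 4 sin^2(2t)
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
print(otoc(sz, sx, sx, np.pi / 4))  # approximately 4.0 = 4 sin^2(pi/2)
```

For a kicked model one would replace `Ut` by the one-period Floquet operator and iterate it, but the commutator-norm structure of the diagnostic is the same.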

We introduce a novel approach to solving dynamic programming problems, such as those in many economic models, on a quantum annealer, a specialized device that performs combinatorial optimization. Quantum annealers attempt to solve an NP-hard problem by starting in a quantum superposition of all states and generating candidate global solutions in milliseconds, irrespective of problem size. Using existing quantum hardware, we achieve an order-of-magnitude speed-up in solving the real business cycle model over benchmarks in the literature. We also provide a detailed introduction to quantum annealing and discuss its potential use for more challenging economic problems.
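The combinatorial landscape an annealer explores can be made concrete with a minimal classical sketch (a hypothetical toy problem, not the paper's real business cycle model): the device seeks the binary vector minimizing a QUBO objective $E(x) = x^\top Q x$, which we find here by exhaustive enumeration.

```python
import itertools
import numpy as np

def qubo_bruteforce(Q):
    """Exhaustively minimise the QUBO objective E(x) = x^T Q x over binary
    vectors x -- the same landscape a quantum annealer searches in
    superposition, but enumerated classically in O(2^n) time."""
    n = Q.shape[0]
    best_x, best_e = None, np.inf
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        e = x @ Q @ x
        if e < best_e:
            best_x, best_e = bits, float(e)
    return best_x, best_e

# toy QUBO: diagonal terms reward picking an option, the off-diagonal
# penalty discourages picking both -- optimum selects exactly one
Q = np.array([[-1.0, 2.0],
              [0.0, -1.0]])
print(qubo_bruteforce(Q))  # a single-pick solution with energy -1.0
```

Casting a dynamic programming problem for annealing amounts to encoding the policy choices as such binary variables and the Bellman objective and constraints as the entries of $Q$.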

In machine learning and particularly in topological data analysis, $\epsilon$-graphs are important tools but are generally hard to compute, as the distance calculation between $n$ points takes $O(n^2)$ time classically. Recently, quantum approaches for calculating distances between $n$ quantum states have been proposed, taking advantage of quantum superposition and entanglement. We investigate the potential for quantum advantage in the case of quantum distance calculation for computing $\epsilon$-graphs. We show that, relying on existing quantum multi-state SWAP-test-based algorithms, the query complexity for correctly identifying (with a given probability) that two points are not $\epsilon$-neighbours is at least $O(n^3/\ln n)$, showing that this approach, if used directly for $\epsilon$-graph construction, does not bring a computational advantage when compared to a classical approach.
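The classical baseline being compared against can be sketched in a few lines (a generic brute-force illustration, not code from the paper): an $\epsilon$-graph joins every pair of points whose distance is at most $\epsilon$, which costs $O(n^2)$ distance evaluations.

```python
import numpy as np

def epsilon_graph(points, eps):
    """Build the epsilon-neighbourhood graph by brute force:
    all O(n^2) pairwise Euclidean distances are computed classically,
    and an edge (i, j) is kept whenever dist(i, j) <= eps."""
    n = len(points)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) <= eps:
                edges.append((i, j))
    return edges

pts = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0]])
print(epsilon_graph(pts, 0.5))  # only the two nearby points are joined
```

The abstract's point is that the quoted $O(n^3/\ln n)$ query complexity of the SWAP-test route exceeds this classical $O(n^2)$ cost.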

We study the controllable output field generation from a cavity magnomechanical resonator system that consists of two coupled microwave resonators. The first cavity interacts with a ferromagnetic yttrium iron garnet (YIG) sphere, providing the magnon-photon coupling. Under the passive-cavity configuration, the system displays high absorption, prohibiting output transmission even though the dispersive response is anomalous. We replace the second passive cavity with an active one to overcome the high absorption, producing an effective gain in the system. We show that the deformation of the YIG sphere retains the anomalous dispersion. Further, tuning the exchange interaction strength between the two resonators controls the system's effective gain and dispersive response. As a result, the advancement associated with the amplification of the probe pulse can be controlled in the close vicinity of the magnomechanical resonance. Furthermore, we find an upper bound for the intensity amplification and the advancement of the probe pulse that follows from the stability condition. These findings may find applications in controlling light propagation in cavity magnomechanics.

Non-Hermitian topological systems have attracted considerable interest due to the unique topological properties that arise when the non-Hermitian skin effect (NHSE) appears. However, the experimental realization of the NHSE conventionally requires non-reciprocal couplings, which are compatible with only a limited set of systems. Here we propose a mechanism of loss-induced Floquet NHSE, where the loss provides the basic source of non-Hermiticity and the Floquet engineering brings about Floquet-induced complex next-nearest-neighbor couplings. We also extend the generalized Brillouin zone theory to nonequilibrium systems to describe the Floquet NHSE. Furthermore, we show that this mechanism can realize the second-order NHSE when generalized to two-dimensional systems. Our proposal can be realized in photonic lattices with helical waveguides and other related systems, which opens the door for the study of topological phases in Floquet non-Hermitian systems.

The Blahut-Arimoto algorithm is a well-known method to compute classical channel capacities and rate-distortion functions. Recent works have extended this algorithm to compute various quantum analogs of these quantities. In this paper, we show how these Blahut-Arimoto algorithms are special instances of mirror descent, which is a well-studied generalization of gradient descent for constrained convex optimization. Using new convex analysis tools, we show how relative smoothness and strong convexity analysis recovers the known sublinear and linear convergence rates for Blahut-Arimoto algorithms. This mirror descent viewpoint allows us to derive related algorithms with similar convergence guarantees to solve problems in information theory for which Blahut-Arimoto-type algorithms are not directly applicable. We apply this framework to compute energy-constrained classical and quantum channel capacities, classical and quantum rate-distortion functions, and approximations of the relative entropy of entanglement, all with provable convergence guarantees.
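For context, the classical Blahut-Arimoto iteration that these works generalize is short enough to state in full. The sketch below (a standard textbook implementation, not the paper's quantum extension) alternates between updating the posterior $q(x|y) \propto p(x)W(y|x)$ and the input distribution $p(x) \propto \exp\big(\sum_y W(y|x)\log q(x|y)\big)$, converging to the channel capacity.

```python
import numpy as np

def blahut_arimoto(W, iters=500):
    """Classical Blahut-Arimoto for channel capacity.
    W[x, y] = P(Y = y | X = x); returns the capacity in bits."""
    n_x = W.shape[0]
    p = np.full(n_x, 1.0 / n_x)               # initial input distribution
    for _ in range(iters):
        q = p[:, None] * W                    # unnormalised posterior q(x|y)
        q /= q.sum(axis=0, keepdims=True)
        # alternating update: p(x) proportional to exp(sum_y W(y|x) log q(x|y)),
        # masking log(0) terms, which carry zero weight W(y|x) = 0 anyway
        log_q = np.log(q, where=q > 0, out=np.zeros_like(q))
        p = np.exp(np.sum(W * log_q, axis=1))
        p /= p.sum()
    # capacity = mutual information at the optimised input distribution
    py = p @ W
    ratio = np.where(W > 0, W / py, 1.0)
    return np.sum(p[:, None] * W * np.log2(ratio))

# binary symmetric channel, crossover 0.1: C = 1 - H(0.1), about 0.531 bits
W = np.array([[0.9, 0.1], [0.1, 0.9]])
print(round(blahut_arimoto(W), 3))
```

In the mirror descent view described in the abstract, each such update is a mirror step with respect to a relative-entropy Bregman divergence, which is what opens the door to the paper's broader family of algorithms.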