Any type of spectroscopy can be efficiently simulated on a quantum computer


Abstract

Spectroscopy is the most important method for probing the structure of molecules. However, predicting molecular spectra on classical computers is computationally expensive, with the most accurate methods having a cost that grows exponentially with molecule size. Quantum computers have been shown to simulate simple types of optical spectroscopy efficiently—with a cost polynomial in molecule size—using methods such as time-dependent simulations of photoinduced wavepackets. Here, we show that any type of spectroscopy can be efficiently simulated on a quantum computer using a time-domain approach, including spectroscopies of any order, any frequency range, and involving both electric and magnetic transitions. Our method works by computing any spectroscopic correlation function based on the corresponding double-sided Feynman diagram, the canonical description of spectroscopic interactions. The approach can be used to simulate spectroscopy of both closed and open molecular systems using both digital and analog quantum computers.

The simulation of quantum systems was the first proposed application of quantum computers [1], [2]. Since then, quantum algorithms have been developed [3]–[13], and experimentally demonstrated [11], [14]–[21], for many problems in quantum chemistry, promising the solution of previously intractable computational problems.

A promising application for quantum computing is predicting spectra; spectroscopy is the most important experimental probe of molecular structure and being able to predict and interpret spectra is critical to understanding chemical behaviour. Spectroscopic techniques can be characterized by their frequencies and the order of light-matter interactions involved. Spectroscopy encompasses the entire electromagnetic spectrum, from gamma-ray Mössbauer spectroscopy to radio-wave nuclear magnetic resonance (NMR) spectroscopy. Within this range, X-ray, UV-visible, infrared, and microwave spectroscopies probe different molecular degrees of freedom. The simplest, linear, spectroscopies measure light absorption, while nonlinear spectroscopies such as Raman, pump-probe, and 2D spectroscopies involve higher-order interactions, probing chemical dynamics. Finally, differential spectroscopy probes differences in response between two spectra. For example, dichroism spectroscopy measures the differential absorption of two light polarizations to examine anisotropic or chiral molecules.

However, accurate prediction of spectroscopic signals is computationally challenging. This challenge takes different forms depending on whether the computations are carried out in the frequency domain or in the time domain. In the frequency domain, spectra are generated by calculating eigenstates and eigenvalues to give individual spectral peaks. This approach is limited to small molecules and certain spectroscopies; in particular, when electronic and nuclear degrees of freedom are coupled (e.g., in vibronic spectroscopies), the number of relevant transitions grows exponentially with molecular size [22]–[26]. The alternative, which avoids this scaling problem, is simulating spectroscopy in the time domain, where the computational cost is determined not by the number of eigenstates but by the spectral resolution desired [22]–[26]. The time-domain approach describes spectra as Fourier transforms of correlation functions, changing spectroscopy from an eigenvalue problem to a dynamics problem. However, this approach also quickly becomes computationally expensive because it requires simulating quantum molecular dynamics, a task for which state-of-the-art methods such as multi-configuration time-dependent Hartree have costs that can grow exponentially with molecule size [27], [28]. Therefore, there remain spectroscopic problems that are computationally intractable using either approach.

Quantum computers could simulate spectroscopy efficiently (in polynomial time), but algorithms for doing so have been developed only for certain types of spectroscopy. As in classical computing, the earliest computational spectroscopy on quantum computers was in the frequency domain. In particular, many algorithms have been developed to simulate purely electronic spectroscopy, whether linear [29]–[31] or higher-order [32], [33]. Frequency-domain algorithms have also been developed to simulate vibrational and vibronic spectroscopies [34]–[39]. However, quantum frequency-domain calculations suffer from the same poor scaling as their classical counterparts because the number of transitions to be calculated scales exponentially with system size. By contrast, because dynamics can be simulated efficiently on quantum computers, the time-domain approach can be translated into efficient quantum spectroscopy algorithms, as has been done for linear [18] and 2D [40]–[42] electronic and vibronic spectroscopy as well as for NMR [43] and X-ray spectroscopy [44]. Nevertheless, these advances have dealt with specific types of spectroscopy, making it unclear whether non-optical, higher-order, or differential spectroscopies could be simulated.

A further problem for most existing time-domain methods is that they assume that the transition-dipole-moment operator \(\mu\) can be implemented on a quantum computer using a linear combination of polynomially many (in system size) unitaries. This assumption is true for electronic spectroscopies, where often a small, constant number of electronic states is relevant. However, it fails for spectroscopies involving continuous degrees of freedom, such as vibronic spectroscopies. In that case, depending on how the continuous degree of freedom is encoded on the quantum computer, known decompositions of \(\mu\) can require exponentially many unitaries.

Table 1: Molecular spectroscopies categorized by the order of the interaction involved, which determines the correlation functions to be computed. \(\mu\) denotes the electric transition-dipole moment and \(m\) the magnetic one. CD: circular dichroism; MCD: magnetic circular dichroism; CARS: coherent anti-Stokes Raman spectroscopy; CSRS: coherent Stokes Raman spectroscopy; 2DCD: two-dimensional circular dichroism.
| Order | Example | Representative correlation function |
|---|---|---|
| 1 | Linear absorption (incl. UV-vis, infrared, X-ray, Mössbauer) | \(\langle \mu(t_1)\mu(0)\rho(0) \rangle\) |
| 1 | CD and MCD spectroscopy | \(\langle m(t_1)\mu(0)\rho(0) \rangle\) |
| 2 | Sum/difference frequency generation | \(\langle \mu(t_2)\mu(t_1)\mu(0)\rho(0) \rangle\) |
| 3 | Pump-probe spectroscopy (incl. transient absorption) | \(\langle \mu(0)\mu(t_1)\mu(t_2)\mu(0)\rho(0) \rangle\) |
| 3 | 2D spectroscopy (incl. electronic, infrared, and NMR) | \(\langle \mu(t_3)\mu(t_2)\mu(t_1)\mu(0)\rho(0) \rangle\) |
| 3 | Raman spectroscopy (incl. CARS and CSRS) | \(\langle \mu(0)\mu(t_1)\mu(t_1)\mu(0)\rho(0) \rangle\) |
| 3 | 2DCD spectroscopy | \(\langle m(0)\mu(t_1)\mu(t_3)\mu(t_2)\rho(0) \rangle\) |
| 4 | Four-wave mixing | \(\langle \mu(t_4)\mu(t_3)\mu(t_2)\mu(t_1)\mu(0)\rho(0) \rangle\) |
| 5 | Fifth-order Raman spectroscopy | \(\langle \mu(t_5)\mu(t_4)\mu(t_3)\mu(t_2)\mu(t_1)\mu(0)\rho(0) \rangle\) |
| \(n\) | Any \(n\)th-order spectroscopy | \(\langle \mu(t_{n})\cdots\mu(t_1)\mu(0)\rho(0) \rangle\) |

Here, we show that any type of spectroscopy can be efficiently simulated on a quantum computer using the time-domain representation. Our method achieves its generality by directly constructing quantum circuits based on double-sided Feynman diagrams, the canonical representations of spectroscopic interactions to any order [45]–[47]. In addition, it sidesteps the problem of inefficient unitary decompositions of \(\mu\) by not requiring them at all; instead, it computes correlation functions as derivatives of the outputs of circuits that use explicitly unitary operators of the form \(e^{-i\mu t}\) [48].

Our approach includes all previous time-domain techniques as special cases, including purely electronic and purely vibrational ones. The algorithm’s efficiency stems from the time-domain approach, meaning that it inherits the polynomial-time scaling of standard quantum algorithms for simulating time evolution [2], [4], [6]–[9], [13], [49]–[51]. The method is equally applicable to electric-field spectroscopies and those that involve magnetic interactions, such as circular dichroism. Finally, the method can simulate spectroscopy in open systems to account for the sensitive dependence of spectra on the molecular environment. All of these features can be implemented whether the dynamics is simulated using digital or analog quantum computers.

1 Spectroscopic correlation functions

Time-domain computational spectroscopy is based on correlation functions that emerge from the perturbative expansion of the light-matter interaction [45]. All types of spectra can be written as Fourier transforms of sums of correlation functions, with examples given in Table 1. Light-matter interaction is described by the Hamiltonian \[\require{upgreek} \label{eq:hamiltonian} H(t) = H_0 - \boldsymbol{\mathrm{\upmu}} \cdot \boldsymbol{\mathrm{E}}(t) - \boldsymbol{\mathrm{m}} \cdot \boldsymbol{\mathrm{B}}(t),\tag{1}\] where \(H_0\) is the molecular Hamiltonian, \(\require{upgreek} \boldsymbol{\mathrm{\upmu}}\) is the electric-dipole operator, \(\boldsymbol{\mathrm{E}}(t)\) is the electric field, \(\boldsymbol{\mathrm{m}}\) is the magnetic-dipole operator, and \(\boldsymbol{\mathrm{B}}(t)\) is the magnetic field. Electric transitions are almost always stronger than magnetic ones (\(\require{upgreek} \boldsymbol{\mathrm{\upmu}}\cdot \boldsymbol{\mathrm{E}}(t) \gg \boldsymbol{\mathrm{m}}\cdot \boldsymbol{\mathrm{B}}(t)\)), so it is common to neglect the latter. For now, we will consider only the electric perturbation, and return to spectroscopies with magnetic effects below. For simplicity we will also assume the light is linearly polarized, \(\require{upgreek} \boldsymbol{\mathrm{\upmu}}\cdot \boldsymbol{\mathrm{E}}(t)=\mu E(t)\).

Spectroscopy in the time domain is well understood through perturbation theory, which describes a sequence of light-matter interactions [45], [46]. The total spectroscopic signal is proportional to the molecular polarization \(P(t)\). In the perturbative expansion, the \(n\)th-order term in \(P(t)\) is a convolution of the \(n\)th-order response function \(R^{(n)}(t, t_n, \ldots, t_1)\) with the electric field [45]: \[\begin{gather} P^{(n)}(t) = \int_{0}^{t} \mathop{}\!dt_n \cdots \int_{0}^{t_2} \mathop{}\!dt_1 E(t_n)\cdots E(t_1) \\ \times R^{(n)}(t, t_n, \ldots, t_1), \label{eq:convolution} \end{gather}\tag{2}\] where the interaction-picture response function is \[\begin{gather} \label{eq:responsefunctionnestedcommutator} R^{(n)}(t,t_{n},\ldots,t_1) = \\ \langle \mu(t)[\mu(t_{n}),\cdots [\mu(t_2),[\mu(t_1), \rho(0)]]\cdots] \rangle, \end{gather}\tag{3}\] and where we have assumed that \(E(t)=0\) for \(t<0\).

Because \(R^{(n)}\) is composed of \(n\) nested commutators, it is a sum of \(2^n\) terms, \(R^{(n)} = \sum_{j=1}^{2^n} R^{(n)}_j\), where each is of the form \[\begin{gather} \label{eq:bothsidescorrelationfunction} R^{(n)}_j(t,t_{n},\ldots,t_1) = \\ \langle \mu(t_{K_k})\cdots\mu(t_{K_1}) \rho(0) \mu(t_{B_1})\cdots \mu(t_{B_b}) \rangle, \end{gather}\tag{4}\] representing a molecule interacting with the field \(k\) times on the ket side and \(b\) times on the bra side, where \(t_{K_1} < \cdots < t_{K_k} = t\), \(t_{B_1} < \cdots < t_{B_b}\), i.e., the two sets of interaction times are time-ordered. The total number of interactions is one more than the order of the response function, \(k+b=n+1\); the final, (\(n+1\))th \(\mu\) always acts on the ket and occurs because the \(R^{(n)}_j\) are expectation values of \(\mu\) at the final time.

Each \(R^{(n)}_j\) corresponds to one of the possible double-sided Feynman diagrams, which are widely used for diagrammatically cataloging spectroscopic interactions [45], [46]. Identifying and constructing the relevant double-sided Feynman diagrams for a given type of spectroscopy makes it clear which correlation functions contribute to the molecular response and thus the spectrum. Figure 2a gives an example of a double-sided Feynman diagram.

The response function \(R\) suffices to calculate any spectrum, whether it is resolved in time or frequency. While time-resolved spectra are proportional to \(P(t)\) (itself a function of \(R\)), frequency-resolved spectra are its Fourier transforms. For example, the frequency-resolved linear absorption spectrum is \[\label{eq:linearabsorption} \sigma(\omega) \propto \int_{-\infty}^{\infty} dt\; e^{i\omega t} P^{(1)}(t).\tag{5}\] In the simplest case of an electric field consisting of a delta-function pulse, \(E(t_1)= E\delta(t_1)\), we get \[\begin{align} \sigma (\omega ) &\propto \int_{-\infty}^{\infty}dt\; e^{i\omega t} R^{(1)}(t,0) \\ &\propto \int_{-\infty}^{\infty} dt\; e^{i\omega t} \langle \mu(t) [\mu(0), \rho(0)] \rangle. \end{align}\]
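As a concrete illustration, the following sketch evaluates this delta-pulse correlation function classically for a toy three-level system and Fourier transforms it into a linear absorption spectrum. The Hamiltonian, dipole matrix, and damping are illustrative assumptions; on a quantum computer, the correlation function itself would instead be obtained from the circuits described below.

```python
import numpy as np

# Classical reference calculation of the delta-pulse linear response
# R^(1)(t, 0) = Tr[ mu(t) [mu(0), rho(0)] ] for a toy three-level system,
# followed by the Fourier transform of Eq. (5). All parameters are illustrative.
H = np.diag([0.0, 1.0, 2.3])                    # eigenenergies (hbar = 1)
mu = np.array([[0.0, 1.0, 0.2],
               [1.0, 0.0, 0.8],
               [0.2, 0.8, 0.0]])                # Hermitian transition-dipole operator
rho0 = np.diag([1.0, 0.0, 0.0])                 # initial (ground-state) density matrix

dt, n_steps = 0.05, 4096
times = dt * np.arange(n_steps)

def mu_heisenberg(t):
    """mu(t) = U^dag(t) mu U(t) with U(t) = exp(-i H t); H is diagonal here."""
    U = np.diag(np.exp(-1j * np.diag(H) * t))
    return U.conj().T @ mu @ U

R1 = np.array([np.trace(mu_heisenberg(t) @ (mu @ rho0 - rho0 @ mu)) for t in times])

damping = np.exp(-times / 50.0)                 # phenomenological linewidth
# ifft implements the e^{+i w t} convention of Eq. (5), up to overall prefactors
sigma = np.fft.ifft(R1 * damping) * n_steps * dt
omegas = 2 * np.pi * np.fft.fftfreq(n_steps, d=dt)
# |sigma| peaks at the transition frequencies out of the ground state (here ±1.0 and ±2.3)
```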

In multi-pulse experiments, the electric field is a sum of individual pulses, meaning that spectra can be reported in terms of time delays between pulses. For example, the signal in transient-absorption spectroscopy depends on the delay between the pump and probe pulses, which are often assumed to be delta functions in time. If Fourier transforms are taken of some of the delays, one obtains multi-dimensional frequency-resolved spectra. For example, in 2D spectroscopy, it is common to take Fourier transforms of the delays between both the first and the last pairs of pulses, which, in the limit of delta-function pulses gives [46] \[\begin{gather} \sigma(\omega_3, \tau_2, \omega_1) \propto \int_{0}^{\infty} \int_{0}^{\infty} d\tau_1 \mathop{}\!d\tau_3\; e^{i\omega_1 \tau_1} e^{i\omega_3 \tau_3} \\ \times \sum_{n=1}^8 R_n^{(3)}(\tau_3+\tau_2 + \tau_1, \tau_2+\tau_1, \tau_1, 0), \label{eq:fourier} \end{gather}\tag{6}\] where the first pulse occurs at \(t_1=0\) and where \(\tau_i=t_{i+1}-t_i\) is the delay between pulses \(i\) and \((i+1)\).
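A minimal sketch of this double Fourier transform, assuming the third-order response has already been sampled on a uniform grid of delays (for example, from quantum-computed correlation functions), could look as follows; the array layout and normalization are illustrative choices.

```python
import numpy as np

# Sketch of Eq. (6): double Fourier transform of a sampled third-order response
# over the first and last pulse delays at fixed tau_2. R3 is assumed to be an
# (N x N) array with R3[i, j] = sum_n R_n^(3) evaluated at tau_3 = i*dt, tau_1 = j*dt.
def spectrum_2d(R3, dt):
    N = R3.shape[0]
    # ifft2 implements the e^{+i w tau} sign convention of Eq. (6)
    S = np.fft.ifft2(R3) * (N * dt) ** 2
    omegas = 2 * np.pi * np.fft.fftfreq(N, d=dt)
    return omegas, S    # S[k, l] ~ sigma(omega_3 = omegas[k], tau_2, omega_1 = omegas[l])
```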

Also expressible using correlation functions are differential spectroscopies, which measure differences between spectroscopic signals. For example, linear dichroism spectroscopy measures the difference in absorption between two linear polarizations of light, \[\begin{align} \sigma_{\mathrm{LD}}(\omega) &= \sigma_{\parallel}(\omega) - \sigma_{\perp}(\omega), \end{align}\] where \(\sigma_{\parallel, \perp}(\omega)\) are given by Eq. (5) for incident light polarized in two perpendicular directions.

The approach above extends to correlation functions of arbitrary combinations of electric- and magnetic-dipole operators, obtained by replacing \(\mu(t)\) with \(m(t)\) as required. For example, linear absorption due only to magnetic-dipole transitions involves the correlation function \(\langle m(t)m(0)\rho(0) \rangle\). Because magnetic interactions are weak relative to electric ones, the most common interactions involve mixed correlation functions of \(\mu(t)\) with \(m(t)\) that are linear in \(m(t)\). These include circular dichroism spectroscopy and its two-dimensional variant [52], [53]. Circular dichroism is the relative absorption of left- and right-circularly polarized light [54], \[\begin{align} \sigma_{\mathrm{CD}} (\omega) &= \sigma_{\mathrm{L}}(\omega) -\sigma_{\mathrm{R}}(\omega) \\ & \propto \int_{-\infty}^{\infty} dt \; e^{i\omega t} \nonumber \\ & \quad \big( \langle \mu(t)m^{*}(0)\rho(0) \rangle_\mathrm{L} - \langle \mu(t)m^{*}(0)\rho(0) \rangle_\mathrm{R} \nonumber \\ & \quad + \langle m(t)\mu(0)\rho(0) \rangle_\mathrm{L} - \langle m(t)\mu(0)\rho(0) \rangle_\mathrm{R} \big) \label{eq:circulardichroism} \end{align}\tag{7}\] For magnetic circular dichroism, the sample is subject to a static magnetic field, which can be modelled by adding a field-dependent term \(H_0(\boldsymbol{\mathrm{B}})\) to the molecular Hamiltonian. The resulting Zeeman splitting can break molecular symmetries and allow circular dichroism measurements of achiral molecules [55].

Figure 1: Hadamard test for a two-point correlation function of unitary operators \(A\) and \(B\) [56]. \(G(t) = \bra{\psi} A(t) B(0) \ket{\psi}\) can be computed from measured expectation values on the ancilla qubit, \(G(t)=\langle \sigma_x \rangle+i\langle \sigma_y \rangle\).
Figure 2: Constructing a quantum circuit for a double-sided Feynman diagram. (a) Example of a double-sided Feynman diagram corresponding to a high-order correlation function. Time increases upwards and the dipole operator acts at the specified times. Additional light-matter interactions may occur between times \(t_2\) and \(t_i\). Absorption is shown by arrows towards either the bra or the ket and emission by arrows away from them. The dashed arrow indicates the final dipole operator, which produces the observed signal. (b) The corresponding Hadamard test showing how the double-sided Feynman diagram maps onto a simple quantum circuit, with ket-side operators \(K\) controlled off the \(\ket{1}\) state of the ancilla (solid circle) and the bra-side operators \(B\) controlled off \(\ket{0}\) (hollow circle). \(K\) and \(B\) both comprise sequences of time evolution and applications of the exponentiated dipole operators, \(M(F_j)=\exp(-i\mu F_j)\); here, \(K=M(F_{i+1}) U(\tau_{i}) \cdots U(\tau_1) M(F_1)\) and \(B=U(\tau_{i})M(F_i) \cdots M(F_2) U(\tau_1)\). (c) The expanded quantum circuit, with operations applied in the same order as in the Feynman diagram. As in Fig. 1, measuring the expectation values \(\langle \sigma_x \rangle\) and \(\langle \sigma_y \rangle\) produces a correlation function involving \(M\), which can then be differentiated to give the desired correlation function involving \(\mu\).

2 Quantum circuit

Quantum computers can simulate time-domain spectroscopy because of their ability to efficiently simulate chemical dynamics. Here, we show how this ability can be used to calculate the response functions \(R_j^{(n)}\), after which it is straightforward to classically carry out the convolutions in Eq. (2) or the Fourier transforms in, for example, Eqs. (5)–(7). Our approach to calculating \(R_j^{(n)}\) is to convert any double-sided Feynman diagram into the corresponding quantum circuit. In the following examples, we give expressions for correlation functions involving \(\mu\), but the algorithm works for any sequence of Hermitian observables, including cross-correlations with \(m\).

Our work builds on existing quantum algorithms for calculating correlation functions. Early papers [56]–[59] showed that two-point correlation functions \[G(t) = \bra{\psi} A(t) B(0) \ket{\psi}\] of unitary operators \(A\) and \(B\) can be calculated using the Hadamard test, a form of ancilla-qubit interferometry (see Fig. 1) [56]. Later, the Hadamard test was generalized to \(n\)-point correlation functions [48], [60].

The challenge with the Hadamard test is that, in its original form, it requires the operators to be unitary. This is sometimes the case, as in spin systems where observables of interest are Pauli matrices (or products of them), which are both Hermitian and unitary. The Hadamard test could be used on non-unitary observables if they could be decomposed into linear combinations of unitaries, each of which could then be evaluated separately and the results summed. However, efficient decompositions of this kind are known in only some cases. This problem is acute in molecular spectroscopy, where correlation functions involve non-unitary observables such as \(\mu\) and \(m\). The exception is two-level systems, where both \(\mu\) and \(m\) are also unitary Pauli operators.

To compute spectroscopic correlation functions of non-unitary operators, we use the method of [48], which uses unitaries of the form \(\exp(-i\mu t)\) and extracts correlation functions through differentiation. For each Hermitian operator \(\mu\) in the desired correlation function, we define the unitary operator that is to be implemented on the quantum computer, \[M(F_j)=\exp \left(-i\mu F_j \right),\] as well as its Heisenberg-picture version, \(M(t;F_j) = U^{\dagger}(t) M(F_j)U(t)\).

Spectroscopic correlation functions can then be recovered as derivatives of the correlation functions of the corresponding \(M\) operators. First, \(\mu\) can be obtained from a derivative of each \(M(F_j)\), \[\label{eq:MtoMu} \mu=i \frac{\partial}{\partial F_j} M(F_j) \;\biggl|_{F_j=0}.\tag{8}\] Therefore, the simplest spectroscopic correlation function is the derivative \[\bra{\psi}\mu(t)\mu(0)\ket{\psi} =i^2 \frac{\partial^2}{\partial{F_1} \partial{F_2}} Q(0,t,F_1,F_2)\;\biggl|_{F_1=0, F_2=0}\] of the correlation function \(Q(0,t,F_1,F_2)= \bra{\psi}M(t;F_2)M(0;F_1)\ket{\psi}\). More generally, for all response functions, \[\begin{gather} \label{eq:muTom} R_j^{(n)} (t_1,\ldots,t_n,t) = i^{n+1} \frac{\partial^{n+1}}{\partial F_1\cdots\partial F_{n+1}} \\Q_j^{(n)}(t_1,\ldots,t_n,t,F_1,\ldots,F_{n+1}), \end{gather}\tag{9}\] where the derivatives are evaluated at \(F_1=\cdots=F_{n+1}=0\) and where \(Q_j^{(n)}\) has the same form as \(R_j^{(n)}\), except that all \(\mu\) operators are replaced with the corresponding \(M\) operators.
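The following sketch verifies the derivative trick numerically on a randomly chosen Hermitian \(H\), \(\mu\), and state (all illustrative assumptions): it evaluates \(Q(0,t,F_1,F_2)\) exactly and recovers \(\bra{\psi}\mu(t)\mu(0)\ket{\psi}\) from a central finite difference.

```python
import numpy as np
from scipy.linalg import expm

# Check of the derivative trick on a random example (H, mu, and psi are
# illustrative): Q(F1, F2) = <psi| M(t;F2) M(0;F1) |psi> with
# M(t;F) = U(t)^dag exp(-i mu F) U(t), and <psi| mu(t) mu(0) |psi> follows
# from the mixed second derivative of Q at F1 = F2 = 0.
rng = np.random.default_rng(1)
dim = 4
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2                        # random Hermitian Hamiltonian
C = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
mu = (C + C.conj().T) / 2                       # random Hermitian "dipole" operator
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)

t = 0.7
U = expm(-1j * H * t)

def M(t_op, F):
    Ut = expm(-1j * H * t_op)
    return Ut.conj().T @ expm(-1j * mu * F) @ Ut

def Q(F1, F2):
    return psi.conj() @ M(t, F2) @ M(0.0, F1) @ psi

# Central finite difference for the mixed partial derivative at F1 = F2 = 0
d = 1e-3
d2Q = (Q(d, d) - Q(d, -d) - Q(-d, d) + Q(-d, -d)) / (4 * d**2)
estimate = -d2Q                                 # i^(n+1) = i^2 = -1 for n = 1

exact = psi.conj() @ (U.conj().T @ mu @ U) @ mu @ psi
print(abs(estimate - exact))                    # small; limited by the O(d^2) error
```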

To simulate spectroscopy on a quantum computer, our goal becomes constructing quantum circuits for computing \(Q_j^{(n)}\), which can then be differentiated classically to obtain \(R_j^{(n)}\). Our algorithm for calculating the \(Q_j^{(n)}\), illustrated in Fig. 2, is a Hadamard test involving operators \(B\) and \(K\) that group the interactions acting on the bra and the ket sides of \(\rho(0)\), respectively. First, we convert the most general response function, Eq. (4), into the corresponding \(Q_j^{(n)}\),

\[Q_j^{(n)}(t_1,\ldots,t_n,t,F_1,\ldots,F_{n+1}) = \langle M(t_{K_k};F_{K_k})\cdots M(t_{K_1};F_{K_1}) \rho(0) M(t_{B_1};F_{B_1})\cdots M(t_{B_b};F_{B_b}) \rangle,\] where, as before, \(k+b=n+1\). Rearranging using the cyclic property of the trace and moving to the Schrödinger picture, which corresponds to evolution on a quantum computer, this becomes \[\begin{gather} Q_j^{(n)}(t_1,\ldots,t_n,t,F_1,\ldots,F_{n+1}) = \langle \underbrace{ U^\dagger(t_{B_1})M(F_{B_1}) U(t_{B_1}) \cdots U^\dagger(t_{B_b}) M(F_{B_b}) U(t_{B_b}) U^\dagger(t_{K_k})}_{B^\dagger} \times \\ \underbrace{ M(F_{K_k}) U(t_{K_k}) \cdots U^\dagger(t_{K_1})M(F_{K_1}) U(t_{K_1}) }_{K} \rho(0) \rangle, \label{eq:BandK} \end{gather}\tag{10}\] with \(U^\dagger(t_{K_k})\) grouped into \(B^{\dagger}\) so that both \(K\) and \(B\) are time-evolved for the same total duration.

The presence of \(U^\dagger\) in Eq. (10) might suggest that it is necessary to simulate reverse time evolution, which would prevent a generalization to open quantum systems. However, this is not the case because the time-evolution operators can be telescoped to forward time evolution only, using \(U(t_m) U^\dagger(t_n) = U(t_m-t_n)\) if \(t_m>t_n\): \[\begin{gather} Q_j^{(n)}(t_1,\ldots,t_n,t,F_1,\ldots,F_{n+1}) = \langle \underbrace{ U^\dagger(t_{B_1})M(F_{B_1}) U^{\dagger}(t_{B_2}-t_{B_1}) \cdots M(F_{B_b}) U^{\dagger}(t_{K_k}-t_{B_b})}_{B^\dagger} \times \\ \underbrace{ M(F_{K_k}) U(t_{K_k}-t_{K_{k-1}}) \cdots U(t_{K_2}-t_{K_1})M(F_{K_1}) U(t_{K_1}) }_{K} \rho(0) \rangle . \label{eq:BandK_telescoped} \end{gather}\tag{11}\]

An example of the algorithm is given in Fig. 2 for computing a specific correlation function \[\begin{align} Q^{(i)}_a &= \langle M(t;F_{i+1})\cdots M(t_1;F_1)\rho(0)M(t_2;F_2)\cdots M(t_i;F_i) \rangle \\ &= \langle M(t_2;F_2)\cdots M(t_i;F_i)M(t;F_{i+1})\cdots M(t_1;F_1)\rho(0) \rangle, \end{align}\]

where the ellipses indicate that interactions may occur at intervening times. Partitioning this expression into \(B\) and \(K\) factors, we get \[\begin{align} B &= \hphantom{M(F_{j+1})}U(\tau_i) M(F_i) \cdots M(F_2) U(\tau_1), \tag{12} \\ K &= M(F_{i+1}) U(\tau_i)\hphantom{M(F_j)} \cdots \hphantom{M(F_j)} U(\tau_1) M(F_1), \tag{13} \end{align}\] where there is only forward time evolution in both \(B\) and \(K\), punctuated by applications of \(M(F_j)\) as required, and \(\tau_j=t_{j+1}-t_j\) is the time interval between interactions \(j\) and \(j+1\).
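As a sketch of how such side products could be assembled classically for testing (the algorithm itself applies them as controlled gates on the quantum computer), the helper below builds one side's operator sequence from its interaction times; the function name and interface are illustrative, not from the paper.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative helper: forward evolution under H punctuated by
# M(F_j) = exp(-i mu F_j) at one side's interaction times, as in Eqs. (12)-(13).
def side_product(H, mu, times, Fs, t_final=None):
    """times: increasing interaction times for this side; Fs: matching F_j values.
    If t_final is given, evolve onwards to t_final after the last interaction
    (needed for the bra side, which ends at the final ket-side time)."""
    op = np.eye(H.shape[0], dtype=complex)
    t_prev = 0.0
    for t_j, F_j in zip(times, Fs):
        op = expm(-1j * mu * F_j) @ expm(-1j * H * (t_j - t_prev)) @ op
        t_prev = t_j
    if t_final is not None:
        op = expm(-1j * H * (t_final - t_prev)) @ op
    return op

# For a pure state, Q_j^(n) = <psi0| B^dag K |psi0>, with, for example,
# K = side_product(H, mu, ket_times, ket_Fs)
# B = side_product(H, mu, bra_times, bra_Fs, t_final=ket_times[-1])
```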

The quantum circuit for the Hadamard test with \(B\) and \(K\) is shown in Fig. 2b. For a system initially in pure state \(\ket{\psi_0}\), the final state of the quantum circuit is \[\label{eq:finalstate} \ket{\Psi} = \frac{1}{\sqrt{2}} \bigl( \ket{0}\otimes B\ket{\psi_0} + \ket{1}\otimes K\ket{\psi_0} \bigr).\tag{14}\] Then, as in Fig. 1, \(Q^{(i)}_a\) can be calculated as the sum of two expectation values, \(\langle \sigma_x \rangle+i\langle \sigma_y \rangle\). The experiment must be repeated until the shot noise decreases below the desired error threshold. For a mixed initial state \(\rho(0)\), all of the results above hold by linearity, and would allow the simulation, for example, of spectroscopy on thermal initial states.
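A small statevector check of this identity, with random unitaries standing in for \(B\) and \(K\) (an illustrative assumption), confirms that the ancilla expectation values reproduce \(\bra{\psi_0}B^{\dagger}K\ket{\psi_0}\):

```python
import numpy as np

# Statevector check of the Hadamard test in Fig. 2b: for
# |Psi> = (|0> (x) B|psi0> + |1> (x) K|psi0>)/sqrt(2), the ancilla expectation
# values give <sigma_x> + i<sigma_y> = <psi0| B^dag K |psi0>.
rng = np.random.default_rng(7)
dim = 8

def random_unitary(d):
    X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    Qmat, _ = np.linalg.qr(X)
    return Qmat

Bop, Kop = random_unitary(dim), random_unitary(dim)
psi0 = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi0 /= np.linalg.norm(psi0)

# Final state of the circuit (ancilla tensor system)
Psi = (np.kron([1, 0], Bop @ psi0) + np.kron([0, 1], Kop @ psi0)) / np.sqrt(2)

sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
I_sys = np.eye(dim)
exp_x = (Psi.conj() @ np.kron(sx, I_sys) @ Psi).real
exp_y = (Psi.conj() @ np.kron(sy, I_sys) @ Psi).real

overlap = psi0.conj() @ Bop.conj().T @ Kop @ psi0
print(abs((exp_x + 1j * exp_y) - overlap))      # agreement to machine precision
```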

At this point, we have calculated \(Q_j^{(n)}\) and need to numerically differentiate it to recover \(R_j^{(n)}\). Classical numerical differentiation methods are well-studied [61], with the simplest being the two-point finite difference, \[\begin{align} \mu(t) &\approx i\left(\frac{M(t;\delta)-M(t;-\delta)}{2\delta} \right), \end{align}\] which has a leading-order error of \(\delta^2/6\times\partial_{F_j}^3 M(t;F_j)|_{F_j=0}\) [61]. If the two-point finite difference is not sufficiently accurate, other numerical differentiation methods can be used, such as fitting the data points to a function whose derivative can be calculated analytically [61]. Numerical differentiation to compute each \(\mu(t)\) requires evaluating the corresponding \(M(t;F)\) for at least two values of \(F\), which increases the final cost of the algorithm; we return to this cost in Section 4. Numerical differentiation can be avoided in the special case of a two-level system, where \(\mu=\sigma_x\) is both Hermitian and unitary and can be used to compute the correlation functions directly.
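For higher-order correlation functions, the same two-point rule is applied along each of the \(n+1\) coordinates \(F_j\); a minimal sketch (with an illustrative interface) is:

```python
import numpy as np
from itertools import product

# Mixed (n+1)-th partial derivative of Q at F = 0 from two-point central
# differences along every coordinate, using 2^(n+1) evaluations of Q
# (two per coordinate). Q is any callable Q(F_1, ..., F_{n+1}).
def mixed_central_difference(Q, n_plus_1, delta=1e-3):
    total = 0.0
    for signs in product((+1, -1), repeat=n_plus_1):
        total += np.prod(signs) * Q(*(s * delta for s in signs))
    return total / (2 * delta) ** n_plus_1
```

For \(n = 1\), this reduces to the four-term central difference used in the sketch above.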

The full quantum circuit for our algorithm is shown in Fig. 2c, where time evolution of the system is punctuated by controlled applications of \(M\), as required. Importantly, the time evolution itself need not be controlled because, as shown in Eqs. (12) and (13), the bra and the ket components evolve together between applications of \(M\). With this observation, there is a clear correspondence between the double-sided Feynman diagram in Fig. 2a and the resulting circuit in Fig. 2c.

3 Open systems

Our algorithm can also be used to simulate spectroscopy in open quantum systems, where time evolution is non-unitary. Doing so is essential for producing spectra in a wide range of chemical and physical environments, which can significantly affect spectral linewidths, intensities, and frequencies.

In open systems, Eqs. (2)–(4) still hold [45], [46]. However, in an open system, the time evolution of operators is generated by the total system-bath Hamiltonian and is no longer unitary when restricted to the system. Therefore, in Eqs. (2)–(4), instances of the unitary gate \(U(t)\) need to be replaced with the open-system dynamics \(\mathcal{U}(t)\), defined through \[\rho(t)=\mathcal{U}(t)\rho(0).\] It might seem that, for a two-point correlation function such as \[\langle\mu(t)\mu(0)\rho(0)\rangle=\langle U^{\dagger}(t)\mu U(t)\rho(0)\rangle,\] this approach leads to \(\langle\mathcal{U}^{\dagger}(t)\mu\mathcal{U}(t)\rho(0)\rangle\), which is unphysical because backwards time evolution \(\mathcal{U}^{\dagger}(t)\) is undefined for an open system. However, this difficulty is resolved by the formulation in Eqs. (12) and (13), where the same correlation function can be simulated using only \(\mathcal{U}(t)\) by making the substitution \[\begin{align} B=U(t)\mu \quad &\to \quad B=\mathcal{U}(t)\mu \\ K = \mu U(t) \quad &\to \quad K=\mu \mathcal{U}(t). \end{align}\] Therefore, the quantum circuits in Fig. 2, which implement Eqs. (12) and (13), also remain unchanged apart from the replacement \(U\to \mathcal{U}\).

The specific implementation of \(\mathcal{U}(t)\) depends on the model being used to describe the open-system dynamics. There are two ways that quantum circuits can be adapted to simulate open dynamics, whether on digital or analog quantum computers, as shown in Fig. 3. First (Fig. 3b), a bath could be explicitly represented or modelled using additional ancilla qubits [41], [62] or analog equivalents [63]. In that case, simulating the unitary system-bath evolution would result in non-unitary reduced system dynamics. Alternatively (Fig. 3c), engineered noise could be injected into the quantum simulation to model dissipative effects. The latter approach is more natural in analog simulators, where protocols have been developed for injecting chemically relevant noise into simulations of vibronic dynamics [63], [64].

For example, if we wished to simulate the dynamics of a system interacting with a Markovian bath, the open-system dynamics \(\mathcal{U}(t)=e^{\mathcal{L}t}\) is given in terms of the Lindbladian \[\mathcal{L}\rho=-i[H,\rho]+\sum_k \gamma_k(L_k\rho L_k^{\dagger}-\tfrac{1}{2}\{ L_k^{\dagger}L_k,\rho\}),\] with jump operators \(L_k\) acting on the system with rates \(\gamma_k\). There are established approaches for simulating dynamics of this type on both digital [65]–[67] and analog [64] quantum computers.
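For reference, the sketch below constructs \(\mathcal{U}(t)=e^{\mathcal{L}t}\) classically by vectorizing the Lindblad equation for a two-level toy model; the Hamiltonian, jump operator, and rate are illustrative choices, not a prescription for the quantum hardware.

```python
import numpy as np
from scipy.linalg import expm

# Classical reference for the open-system propagator U(t) = exp(L t):
# vectorize the Lindblad equation (column-stacking rho) so L becomes a matrix.
def lindbladian_matrix(H, jumps, rates):
    d = H.shape[0]
    Id = np.eye(d)
    L = -1j * (np.kron(Id, H) - np.kron(H.T, Id))           # -i[H, rho]
    for Lk, gk in zip(jumps, rates):
        LdL = Lk.conj().T @ Lk
        L += gk * (np.kron(Lk.conj(), Lk)
                   - 0.5 * np.kron(Id, LdL)
                   - 0.5 * np.kron(LdL.T, Id))
    return L

# Two-level example: decay from the excited state at rate gamma
H = np.array([[0.0, 0.0], [0.0, 1.0]])
L1 = np.array([[0.0, 1.0], [0.0, 0.0]])
gamma = 0.1

Lmat = lindbladian_matrix(H, [L1], [gamma])
rho0 = np.array([[0.0, 0.0], [0.0, 1.0]])                   # start in the excited state
t = 5.0
rho_t = (expm(Lmat * t) @ rho0.reshape(-1, order="F")).reshape(2, 2, order="F")
# The excited-state population relaxes as rho_t[1, 1] ~ exp(-gamma * t)
```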

Figure 3: Simulating open systems. (a) The circuit elements of the closed-system simulation from Fig. 2, which can be modified to simulate open systems. (b) Modification for simulating an open system with an explicit bath, where the system and the bath are governed by global unitary time evolution and the bath is traced out at the end. (c) Modification for simulating an open system using engineered noise, where experimental noise is used to mimic the effects of open systems. With either modification, the circuits retain the property that the ancilla qubit only controls the application of \(M\) and not the time evolution itself.

4 Cost

Our algorithm is efficient for any spectroscopy, with both its memory and time costs scaling polynomially with molecular size and with the relevant properties of the desired spectrum.

The memory cost is equal to the number of qubits (or other computational quantum resources) required to encode the system wavefunction, plus one ancilla qubit. The cost of the wavefunction encoding depends on the particular quantum simulation algorithm being used to simulate the dynamics. For a molecule with \(\eta\) atoms, this cost is generally \(O(\eta)\) [2]. This linear scaling is achieved on digital quantum computers in both first-quantized [4], [9], [13] and second-quantized [3], [7] representations, as well as on analog quantum simulators [8].

The time cost, measured in the number of required quantum gates, can be determined by multiplying the cost of each run of the circuit in Fig. 2c by the number of necessary repetitions, \[\label{eq:Ncost} \mathrm{Cost} = C_U N_{\mathrm{corr}} N_{\mathrm{samples}} N_\mathrm{shots} N_\mathrm{deriv},\tag{15}\] where the factors depend on the type of spectroscopy and the desired properties of the spectrum: \(C_U\) is the cost of each run of the circuit; \(N_{\mathrm{corr}}\) is the number of correlation functions \(Q_j^{(n)}\) to be calculated for an \(n\)th-order spectroscopy; \(N_{\mathrm{samples}}\) is the number of points used to sample each correlation function \(Q_j^{(n)}\); \(N_{\mathrm{shots}}\) is the number of times the circuit must be repeated for each sampling point so that the expectation values \(\langle \sigma_x \rangle\) and \(\langle \sigma_y \rangle\) converge to a desired accuracy; and \(N_{\mathrm{deriv}}\) is the number of different values of \(F_j\) for which \(Q^{(n)}_j\) needs to be calculated so that the corresponding \(R^{(n)}_j\) can be determined by numerical differentiation. Each of these factors is determined by the desired properties of the final spectrum, namely its range \(\omega_{\mathrm{max}}\), resolution \(\Delta \omega\), and accuracy \(\varepsilon\).

We estimate \(C_U\) based on the fact that each run of the circuit imposes only a small overhead compared to simulating the corresponding molecular dynamics. The circuit in Fig. 2c is a straightforward time evolution, punctuated by applications of the controlled-\(M\) operators. The additional cost of the controlled operations is minimal compared to the cost of the dynamics, and we neglect it in the following analysis. In general, the gate cost of quantum-simulation algorithms is polynomial in system size, i.e., \(O(\mathrm{poly}(\eta))\). The order of this polynomial depends on the type of simulation; for example, for grid-based, first-quantized chemical dynamics, it is \(O(\eta^2)\) [4], [9], [13]. In terms of the scaling with the total simulation time \(T\) and simulation error \(\varepsilon_\mathrm{sim}\), the asymptotically best known gate cost is \[\label{eq:CU} C_U\in O(T\,\mathrm{poly}(\eta)+\log\varepsilon_\mathrm{sim}^{-1}),\tag{16}\] achieved by qubitization [9], [50], [51].

Naively, \(N_{\mathrm{corr}}\) would equal \(2\times2^n\) if the real and imaginary components were measured for each of the \(2^n\) correlation functions \(Q_j^{(n)}\). However, this value can be reduced by a factor of four using the symmetries of the problem. In particular, the hermiticity of the dipole-moment operator implies that the response function \(R^{(n)}\) is a sum of conjugate pairs of the \(Q_j^{(n)}\) [32], [45], [46]. Each of those pairs can therefore be evaluated using only the real or imaginary component of one of the correlation functions in the pair, as opposed to requiring both the real and imaginary components of both. For example, the first-order response function can be simplified to \[\begin{align} R^{(1)}(t,0) &= \langle \mu(t) [\mu(0), \rho(0)] \rangle\\ &= \langle \mu(t)\mu(0)\rho(0) \rangle - \langle \mu(t)\rho(0) \mu(0) \rangle \label{eq:fourparts}\\ &= \langle \mu(t)\mu(0)\rho(0) \rangle - \langle \mu(t)\mu(0) \rho(0) \rangle^{*}\\ &= 2i\,\mathrm{Im}\langle \mu(t)\mu(0)\rho(0) \rangle, \end{align}\tag{17}\] meaning that linear-absorption spectroscopy can be computed using a single quantity, \(\mathrm{Im}\,\langle \mu(t)\mu(0)\rho(0) \rangle\), as opposed to the four real and imaginary parts in Eq. (17). This simplification reduces \(N_\mathrm{corr}\) to \[N_{\mathrm{corr}}=2^{n-1}.\] For specific spectroscopies, \(N_{\mathrm{corr}}\) may be less than this maximum because the rotating-wave approximation or phase matching can result in certain Feynman diagrams not contributing to the signal [45], [46]. For example, in pump-probe spectroscopy, phase matching reduces \(N_{\mathrm{corr}}\) from \(2^{3-1}=4\) to only 3 [45].

\(T\) and \(N_{\mathrm{samples}}\) are governed by their Fourier counterparts \(\omega_{\mathrm{max}}\) and \(\Delta \omega\). To achieve a spectral resolution \(\Delta\omega\), the corresponding time series must be of duration \(T=2\pi/\Delta\omega\), transforming Eq. (16) into \[C_U \in O(\Delta\omega^{-1} \mathrm{poly}(\eta) + \log\varepsilon_\mathrm{sim}^{-1}).\] Similarly, for the spectrum to reach maximum frequency \(\omega_{\mathrm{max}}\), the time series must have resolution \(\Delta t = \pi/\omega_{\mathrm{max}}\). This requires \(T/\Delta t = 2\omega_\mathrm{max}/\Delta\omega\) sampling points along each dimension, or a total number of samples \[N_{\mathrm{samples}}=(T/\Delta t)^n = (2\omega_\mathrm{max}/\Delta\omega)^n.\]

\(N_{\mathrm{shots}}\) is set by the maximum error \(\varepsilon_\mathrm{shot}\) in the computed spectrum. Each measurement of \(\sigma_x\) and \(\sigma_y\) returns 0 or 1, and needs to be repeated \[N_{\mathrm{shots}}\in O(\varepsilon_\mathrm{shot}^{-2})\] times to ensure the averages \(\langle \sigma_x \rangle\) and \(\langle \sigma_y \rangle\) are computed to within a standard error \(\varepsilon_\mathrm{shot}\). This remains the standard error of every point in the frequency-domain spectrum because the Fourier transform is linear. If desired, the shot-noise scaling could be improved to the Heisenberg limit \(N_{\mathrm{shots}}\in O(\varepsilon_\mathrm{shot}^{-1})\) using quantum amplitude estimation [68].
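A quick classical illustration of this scaling, with an assumed expectation value of 0.3:

```python
import numpy as np

# Shot-noise scaling: estimating an expectation value from N single-shot (+1/-1)
# outcomes has standard error ~ 1/sqrt(N), so reaching error eps needs N ~ 1/eps^2.
rng = np.random.default_rng(0)
true_exp = 0.3                                  # assumed expectation value
p_plus = (1 + true_exp) / 2
for N in (10**2, 10**4, 10**6):
    shots = rng.choice([+1, -1], size=N, p=[p_plus, 1 - p_plus])
    print(N, abs(shots.mean() - true_exp))      # error shrinks roughly as 1/sqrt(N)
```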

Finally, \(N_{\mathrm{deriv}}\) depends on the method of numerical differentiation. The simplest case would be two-point finite difference, where \[N_{\mathrm{deriv}}=2^{n+1},\] because two points are required for the derivative of \(Q_j^{(n)}\) along each of the \((n+1)\) coordinates \(F_j\).

Overall, using two-point finite difference for the derivatives, we arrive at the total gate cost of \[\label{eq:finalcost} \mathrm{Cost} \in O\left(\frac{2^{2n}}{\varepsilon^2 \Delta \omega} \left(\frac{2\omega_{\mathrm{max}}}{\Delta\omega}\right)^n\mathrm{poly}(\eta) \right),\tag{18}\] given maximum error \(\varepsilon\), which we assume is dominated by the quadratic \(\varepsilon_\mathrm{shot}\), neglecting the logarithmic \(\varepsilon_\mathrm{sim}\). This cost is polynomial in system size and, for any particular spectroscopy (i.e., a given \(n\)), in \(\varepsilon_\mathrm{shot}\), \(\omega_{\mathrm{max}}\), and \(\Delta\omega\). In principle, the cost grows exponentially in \(n\); however, in practice, \(n\) is a small constant, rarely greater than 3 and almost never greater than 5, because high-order spectroscopy is rarely carried out.
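The factors in Eq. (15), with the expressions derived above, can be combined into a rough cost estimate; the numbers in the example call below are arbitrary placeholders, and \(C_U\) is left as a user-supplied per-run estimate.

```python
# Rough repetition count from Eq. (15), using the factors derived in the text:
# N_corr = 2^(n-1), N_samples = (2*w_max/dw)^n, N_shots ~ 1/eps^2, N_deriv = 2^(n+1).
def total_cost(n, w_max, dw, eps, C_U):
    N_corr = 2 ** (n - 1)
    N_samples = (2 * w_max / dw) ** n
    N_shots = 1 / eps ** 2
    N_deriv = 2 ** (n + 1)
    return C_U * N_corr * N_samples * N_shots * N_deriv

# Example: third-order (2D) spectroscopy with illustrative resolution targets
print(f"{total_cost(n=3, w_max=10.0, dw=0.1, eps=0.01, C_U=1e6):.3e}")
```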

5 Discussion

We have shown that any double-sided Feynman diagram can be converted into a quantum circuit to compute the corresponding correlation function. Therefore, because of the generality of the correlation-function approach to spectroscopy, our algorithm can simulate any type of spectroscopy, regardless of the order of the interactions, whether they are electric or magnetic, whether the system is isolated or open, or whether static external fields are included in the molecular Hamiltonian.

The efficiency of our algorithm relies on the efficiency of Hamiltonian simulation and is independent of any particular properties of the molecular Hamiltonian. Because it is possible to simulate molecular dynamics in polynomial time in system size [2], [4], [6]–[9], [13], [49]–[51], the same polynomial scaling applies to spectroscopy simulation. Our approach does not constrain how the molecular system is encoded on the quantum computer or how the simulation is carried out; in particular, it is compatible with both digital [4] and analog [8] molecular simulation, and it works whether or not the molecule is represented in the Born-Oppenheimer approximation [4]. We neglect the cost of preparing the initial state because it is a constant overhead that would be necessary in any approach to simulating spectroscopy.

Because it uses the time-domain approach to spectroscopy, our method offers an exponential improvement in scaling over frequency-domain approaches. Its cost, given in Eq. (18), is determined by the desired spectral properties and is polynomial in the system size \(\eta\). By contrast, frequency-domain approaches scale exponentially with \(\eta\): for an \(\eta\)-atom molecule where each of the \(3\eta-6\) vibrational modes is assumed to have \(b\) accessible states, the number of spectroscopic peaks (transitions between all pairs of states) that need to be calculated in the frequency-domain approach grows exponentially as \(O\left((b^{3\eta-6})^2\right)\) [18].

Because spectroscopy is the most direct way to probe the quantum-mechanical features of molecules, we anticipate that early applications of quantum computers in chemistry will involve the prediction and interpretation of spectroscopic signals that might be too challenging for classical computers.

We thank Ryan MacDonell, Patrick Sinnott, and Arkin Tikku for valuable discussions. We were supported by the Australian Research Council (FT230100653), by the United States Office of Naval Research Global (N62909-24-1-2083), by the Wellcome Leap Quantum for Bio program, and by the Australian Government Research Training Program.

References

[1]
R. P. Feynman, Simulating physics with computers, https://doi.org/10.1007/bf02650179.
[2]
S. Lloyd, Universal quantum simulators, https://doi.org/10.1126/science.273.5278.1073.
[3]
A. Aspuru-Guzik, A. D. Dutoi, P. J. Love, and M. Head-Gordon, Simulated quantum computation of molecular energies, https://doi.org/10.1126/science.1113479.
[4]
I. Kassal, S. P. Jordan, P. J. Love, M. Mohseni, and A. Aspuru-Guzik, Polynomial-time quantum algorithm for the simulation of chemical dynamics, https://doi.org/10.1073/pnas.0808245105.
[5]
M. Reiher, N. Wiebe, K. M. Svore, D. Wecker, and M. Troyer, Elucidating reaction mechanisms on quantum computers, https://doi.org/10.1073/pnas.1619152114.
[6]
Y. Cao, J. Romero, J. P. Olson, M. Degroote, P. D. Johnson, M. Kieferová, I. D. Kivlichan, T. Menke, B. Peropadre, N. P. D. Sawaya, S. Sim, L. Veis, and A. Aspuru-Guzik, Quantum chemistry in the age of quantum computing, https://doi.org/10.1021/acs.chemrev.8b00803.
[7]
S. McArdle, S. Endo, A. Aspuru-Guzik, S. C. Benjamin, and X. Yuan, Quantum computational chemistry, https://doi.org/10.1103/RevModPhys.92.015003.
[8]
R. J. MacDonell, C. E. Dickerson, C. J. Birch, A. Kumar, C. L. Edmunds, M. J. Biercuk, C. Hempel, and I. Kassal, Analog quantum simulation of chemical dynamics, https://doi.org/10.1039/D1SC02142G.
[9]
Y. Su, D. W. Berry, N. Wiebe, N. Rubin, and R. Babbush, Fault-tolerant quantum simulations of chemistry in first quantization, https://doi.org/10.1103/PRXQuantum.2.040332.
[10]
R. Babbush, W. J. Huggins, D. W. Berry, S. F. Ung, A. Zhao, D. R. Reichman, H. Neven, A. D. Baczewski, and J. Lee, Quantum simulation of exact electron dynamics can be more efficient than classical mean-field methods, https://doi.org/10.1038/s41467-023-39024-0.
[11]
P. Richerme, M. C. Revelle, C. G. Yale, D. Lobser, A. D. Burch, S. M. Clark, D. Saha, M. A. Lopez-Ruiz, A. Dwivedi, J. M. Smith, S. A. Norrell, A. Sabry, and S. S. Iyengar, Quantum computation of hydrogen bond dynamics and vibrational spectra, https://doi.org/10.1021/acs.jpclett.3c01601.
[12]
S. M. Young, H. Häffner, and M. Sarovar, Quantum simulation of weak-field light-matter interactions, https://doi.org/10.1103/PhysRevResearch.5.013027.
[13]
F. H. da Jornada, M. Lostaglio, S. Pallister, B. Şahinoğlu, and K. I. Seetharam, A comprehensive framework to simulate real-time chemical dynamics on a fault-tolerant quantum computer(2025), https://arxiv.org/abs/2504.06348.
[14]
B. P. Lanyon, J. D. Whitfield, G. G. Gillet, M. E. Goggin, M. P. Almeida, I. Kassal, J. D. Biamonte, M. Mohseni, B. J. Powell, M. Barbieri, A. Aspuru-Guzik, and A. G. White, Towards quantum chemistry on a quantum computer, https://doi.org/10.1038/nchem.483.
[15]
A. Kandala, A. Mezzacapo, K. Temme, M. Takita, M. Brink, J. M. Chow, and J. M. Gambetta, Hardware-efficient Variational Quantum Eigensolver for Small Molecules and Quantum Magnets, https://doi.org/10.1038/nature23879.
[16]
F. Arute, K. Arya, R. Babbush, D. Bacon, J. C. Bardin, R. Barends, S. Boixo, M. Broughton, B. B. Buckley, D. A. Buell, B. Burkett, N. Bushnell, Y. Chen, Z. Chen, B. Chiaro, R. Collins, W. Courtney, S. Demura, A. Dunsworth, E. Farhi, A. Fowler, B. Foxen, C. Gidney, M. Giustina, R. Graff, S. Habegger, M. P. Harrigan, A. Ho, S. Hong, T. Huang, W. J. Huggins, L. Ioffe, S. V. Isakov, E. Jeffrey, Z. Jiang, C. Jones, D. Kafri, K. Kechedzhi, J. Kelly, S. Kim, P. V. Klimov, A. Korotkov, F. Kostritsa, D. Landhuis, P. Laptev, M. Lindmark, E. Lucero, O. Martin, J. M. Martinis, J. R. McClean, M. McEwen, A. Megrant, X. Mi, M. Mohseni, W. Mruczkiewicz, J. Mutus, O. Naaman, M. Neeley, C. Neill, H. Neven, M. Y. Niu, T. E. O’Brien, E. Ostby, A. Petukhov, H. Putterman, C. Quintana, P. Roushan, N. C. Rubin, D. Sank, K. J. Satzinger, V. Smelyanskiy, D. Strain, K. J. Sung, M. Szalay, T. Y. Takeshita, A. Vainsencher, T. White, N. Wiebe, Z. J. Yao, P. Yeh, and A. Zalcman, Hartree-Fock on a superconducting qubit quantum computer, https://doi.org/10.1126/science.abb9811.
[17]
W. J. Huggins, B. A. O’Gorman, N. C. Rubin, D. R. Reichman, R. Babbush, and J. Lee, Unbiasing fermionic quantum monte carlo with a quantum computer, https://doi.org/10.1038/s41586-021-04351-z.
[18]
R. J. MacDonell, T. Navickas, T. F. Wohlers-Reichel, C. H. Valahu, A. D. Rao, M. J. Millican, M. A. Currington, M. J. Biercuk, T. R. Tan, C. Hempel, and I. Kassal, Predicting molecular vibronic spectra using time-domain analog quantum simulation, https://doi.org/10.1039/D3SC02453A.
[19]
C. H. Valahu, V. C. Olaya-Agudelo, R. J. MacDonell, T. Navickas, A. D. Rao, M. J. Millican, J. B. Pérez-Sánchez, J. Yuen-Zhou, M. J. Biercuk, C. Hempel, T. R. Tan, and I. Kassal, Direct observation of geometric-phase interference in dynamics around a conical intersection, https://doi.org/10.1038/s41557-023-01300-3.
[20]
C. S. Wang, N. E. Frattini, B. J. Chapman, S. Puri, S. M. Girvin, M. H. Devoret, and R. J. Schoelkopf, Observation of wave-packet branching through an engineered conical intersection, https://doi.org/10.1103/PhysRevX.13.011008.
[21]
T. Navickas, R. J. MacDonell, C. H. Valahu, V. C. Olaya-Agudelo, F. Scuccimarra, M. J. Millican, V. G. Matsos, H. L. Nourse, A. D. Rao, M. J. Biercuk, C. Hempel, I. Kassal, and T. R. Tan, Experimental quantum simulation of chemical dynamics, https://doi.org/10.1021/jacs.5c03336.
[22]
R. G. Gordon, Molecular Motion in Infrared and Raman Spectra, https://doi.org/10.1063/1.1696920.
[23]
L. S. Cederbaum and W. Domcke, A many-body approach to the vibrational structure in molecular electronic spectra. I. Theory, https://doi.org/10.1063/1.432250.
[24]
E. J. Heller, Quantum corrections to classical photodissociation models, https://doi.org/10.1063/1.436029.
[25]
E. J. Heller, The semiclassical way to molecular spectroscopy, https://doi.org/10.1021/ar00072a002.
[26]
E. J. Heller, R. Sundberg, and D. Tannor, Simple aspects of Raman scattering, https://doi.org/10.1021/j100207a018.
[27]
W. Domcke, D. R. Yarkony, and H. Köppel, https://doi.org/10.1142/5406(World Scientific, Singapore, 2004).
[28]
G. A. Worth, H.-D. Meyer, H. Köppel, L. S. Cederbaum, and I. Burghardt, Using the MCTDH wavepacket propagation method to describe multimode non-adiabatic dynamics, https://doi.org/10.1080/01442350802137656.
[29]
C.-K. Lee, C.-Y. Hsieh, S. Zhang, and L. Shi, Simulation of condensed-phase spectroscopy with near-term digital quantum computers, https://doi.org/10.1021/acs.jctc.1c00849.
[30]
K. Huang, X. Cai, H. Li, Z.-Y. Ge, R. Hou, H. Li, T. Liu, Y. Shi, C. Chen, D. Zheng, K. Xu, Z.-B. Liu, Z. Li, H. Fan, and W.-H. Fang, Variational quantum computation of molecular linear response properties on a superconducting quantum processor, https://doi.org/10.1021/acs.jpclett.2c02381.
[31]
T. J. von Buchwald, K. M. Ziems, E. R. Kjellgren, S. P. A. Sauer, J. Kongsted, and S. Coriani, Reduced density matrix formulation of quantum linear response, https://doi.org/10.1021/acs.jctc.4c00574.
[32]
T. Kharazi, T. F. Stetina, L. Ko, G. H. Low, and K. B. Whaley, An efficient quantum algorithm for ab initio approximations of non-linear response functions, https://doi.org/10.1038/s41534-025-01026-9.
[33]
I. Loaiza, D. Motlagh, K. Hejazi, M. S. Zini, A. Delgado, and J. M. Arrazola, Nonlinear spectroscopy via generalized quantum phase estimation(2024), https://arxiv.org/abs/2405.13885.
[34]
J. Huh, G. G. Guerreschi, B. Peropadre, J. R. McClean, and A. Aspuru-Guzik, Boson sampling for molecular vibronic spectra, https://doi.org/10.1038/nphoton.2015.153.
[35]
J. Huh and M.-H. Yung, Vibronic boson sampling: Generalized Gaussian boson sampling for molecular vibronic spectra at finite temperature, https://doi.org/10.1038/s41598-017-07770-z.
[36]
Y. Shen, Y. Lu, K. Zhang, J. Zhang, S. Zhang, J. Huh, and K. Kim, Quantum optical emulation of molecular vibronic spectroscopy using a trapped-ion device, https://doi.org/10.1039/C7SC04602B.
[37]
N. P. D. Sawaya and J. Huh, Quantum algorithm for calculating molecular vibronic spectra, https://doi.org/10.1021/acs.jpclett.9b01117.
[38]
C. S. Wang, J. C. Curtis, B. J. Lester, Y. Zhang, Y. Y. Gao, J. Freeze, V. S. Batista, P. H. Vaccaro, I. L. Chuang, L. Frunzio, L. Jiang, S. M. Girvin, and R. J. Schoelkopf, Efficient multiphoton sampling of molecular vibronic spectra on a superconducting bosonic processor, https://doi.org/10.1103/PhysRevX.10.021060.
[39]
H. Jnane, N. P. D. Sawaya, B. Peropadre, A. Aspuru-Guzik, R. Garcia-Patron, and J. Huh, Analog quantum simulation of non-Condon effects in molecular spectroscopy, https://doi.org/10.1021/acsphotonics.1c00059.
[40]
M. Bruschi, F. Gallina, and B. Fresch, A quantum algorithm from response theory: Digital quantum simulation of two-dimensional electronic spectroscopy, https://doi.org/10.1021/acs.jpclett.3c03499.
[41]
F. Gallina, M. Bruschi, R. Cacciari, and B. Fresch, Simulating non-markovian dynamics in multidimensional electronic spectroscopy via quantum algorithm, https://doi.org/10.1021/acs.jctc.4c01204.
[42]
J. D. Guimarães, J. Lim, M. I. Vasilevskiy, S. F. Huelga, and M. B. Plenio, Accelerating two-dimensional electronic spectroscopy simulations with a probe qubit protocol, https://doi.org/10.1103/PhysRevResearch.7.023130.
[43]
K. Seetharam, D. Biswas, C. Noel, A. Risinger, D. Zhu, O. Katz, S. Chattopadhyay, M. Cetina, C. Monroe, E. Demler, and D. Sels, Digital quantum simulation of NMR experiments, https://doi.org/10.1126/sciadv.adh2594.
[44]
S. Fomichev, K. Hejazi, I. Loaiza, M. S. Zini, A. Delgado, A.-C. Voigt, J. E. Mueller, and J. M. Arrazola, Simulating X-ray absorption spectroscopy of battery materials on a quantum computer(2024), https://arxiv.org/abs/2405.11015.
[45]
S. Mukamel, Principles of Nonlinear Optical Spectroscopy(Oxford University Press, USA, 1995).
[46]
P. Hamm and M. Zanni, https://doi.org/10.1017/CBO9780511675935(Cambridge University Press, 2011).
[47]
J. Yuen-Zhou, J. J. Krich, I. Kassal, A. Johnson, and A. Aspuru-Guzik, https://doi.org/10.1088/978-0-750-31062-8(Institute of Physics Publishing, Bristol, 2014).
[48]
J. S. Pedernales, R. D. Candia, I. L. Egusquiza, J. Casanova, and E. Solano, Efficient Quantum Algorithm for Computing n-time Correlation Functions, https://doi.org/10.1103/physrevlett.113.020505.
[49]
C. Zalka, Simulating quantum systems on a quantum computer, https://doi.org/10.1098/rspa.1998.0162.
[50]
G. H. Low and I. L. Chuang, Hamiltonian Simulation by Qubitization, https://doi.org/10.22331/q-2019-07-12-163.
[51]
R. Babbush, N. Wiebe, J. McClean, J. McClain, H. Neven, and G. K.-L. Chan, Low-depth quantum simulation of materials, https://doi.org/10.1103/PhysRevX.8.011044.
[52]
Z. Liu, A. Jha, X.-T. Liang, and H.-G. Duan, Transient chiral dynamics revealed by two-dimensional circular dichroism spectroscopy, https://doi.org/10.1103/PhysRevE.107.054119.
[53]
Z. Liu, P. Zhang, C. Mei, X.-T. Liang, A. Jha, and H.-G. Duan, Transient chiral dynamics in the Fenna–Matthews–Olson complex revealed by two-dimensional circular dichroism spectroscopy, https://doi.org/10.1021/acs.jpclett.4c01179.
[54]
B. Nordén, A. Rodger, and T. Dafforn, Linear Dichroism and Circular Dichroism: A Textbook on Polarized-Light Spectroscopy(Royal Society of Chemistry, 2010).
[55]
L. D. Barron, https://doi.org/10.1017/CBO9780511535468, 2nd ed. (Cambridge University Press, 2004).
[56]
R. D. Somma, G. Ortiz, E. H. Knill, and J. Gubernatis, Quantum simulations of physics problems, in https://doi.org/10.1117/12.487249, Vol. 5105, edited by E. Donkor, A. R. Pirich, and H. E. Brandt, International Society for Optics and Photonics(SPIE, 2003) pp. 96 – 103.
[57]
B. M. Terhal and D. P. DiVincenzo, Problem of equilibration and the computation of correlation functions on a quantum computer, https://doi.org/10.1103/physreva.61.022301.
[58]
G. Ortiz, J. E. Gubernatis, E. Knill, and R. Laflamme, Quantum algorithms for fermionic simulations, https://doi.org/10.1103/physreva.64.022319.
[59]
R. Somma, G. Ortiz, J. E. Gubernatis, E. Knill, and R. Laflamme, Simulating physical phenomena by quantum networks, https://doi.org/10.1103/physreva.65.042323.
[60]
L. Del Re, B. Rost, M. Foss-Feig, A. F. Kemper, and J. K. Freericks, Robust measurements of \(n\)-point correlation functions of driven-dissipative quantum systems on a digital quantum computer, https://doi.org/10.1103/PhysRevLett.132.100601.
[61]
W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes 3rd Edition: The Art of Scientific Computing, 3rd ed. (Cambridge University Press, USA, 2007).
[62]
H. Wang, S. Ashhab, and F. Nori, Quantum algorithm for simulating the dynamics of an open quantum system, https://doi.org/10.1103/physreva.83.062317.
[63]
C. W. Kim, J. M. Nichol, A. N. Jordan, and I. Franco, Analog quantum simulation of the dynamics of open quantum systems with quantum dots and microelectronic circuits, https://doi.org/10.1103/PRXQuantum.3.040308.
[64]
V. C. Olaya-Agudelo, B. Stewart, C. H. Valahu, R. J. MacDonell, M. J. Millican, V. G. Matsos, F. Scuccimarra, T. R. Tan, and I. Kassal, Simulating open-system molecular dynamics on analog quantum computers, https://doi.org/10.1103/PhysRevResearch.7.023215.
[65]
R. Sweke, I. Sinayskiy, D. Bernard, and F. Petruccione, Universal simulation of Markovian open quantum systems, https://doi.org/10.1103/PhysRevA.91.062308.
[66]
A. W. Schlimgen, K. Head-Marsden, L. M. Sager, P. Narang, and D. A. Mazziotti, Quantum simulation of the Lindblad equation using a unitary decomposition of operators, https://doi.org/10.1103/PhysRevResearch.4.023216.
[67]
Z. Ding, X. Li, and L. Lin, Simulating open quantum systems using Hamiltonian simulations, https://doi.org/10.1103/PRXQuantum.5.020332.
[68]
G. Brassard, P. Høyer, M. Mosca, and A. Tapp, Quantum amplitude amplification and estimation, in https://doi.org/10.1090/conm/305/05215, Contemporary Mathematics, Vol. 305 (American Mathematical Society, 2002) pp. 53–74.