Optimal control of engineered swift equilibration of nanomechanical oscillators


Abstract

We propose a reformulation of the problem of optimally controlled transitions in stochastic thermodynamics. We impose that any terminal cost specified by a thermodynamic functional should depend only on state variables and not on control protocols, according to the canonical Bolza form. In this way, we can unambiguously discriminate between transitions at minimum dissipation between genuine equilibrium states, and transitions at minimum work driving a system from a genuine equilibrium to a non-equilibrium state. For underdamped dynamics subject to a mechanical force, genuine equilibrium means a Maxwell-Boltzmann probability distribution defining a vanishing current velocity. Transitions at minimum dissipation between equilibria are a model of optimal swift engineered equilibration. Remarkably, we show that transitions at minimum work do not directly imply explicit boundary conditions on terminal values of parameters of the mechanical force and on control protocols. Thus, the problem often discussed in the literature, that optimal protocols need terminal jumps to satisfy boundary conditions, completely disappears. The quantitative properties of optimal controls are entirely determined by the form of the penalty modelling an experimental setup. More generally, we use centre manifold theory to analytically account for the tendency of optimal controls to exhibit a turnpike property: optimal protocols in the bulk of the control horizon tend to converge to a universal centre manifold determined only by the running cost. Exponential deviations from the centre manifold occur at the ends of the control horizon in order to satisfy the boundary conditions. Our findings are supported numerically.

For illustration, our analysis is performed on the experimentally relevant case of nano-oscillators, but extension of our approach to feedback control in the presence of anharmonic forces is possible, albeit at the price of a considerably increased computational cost.

Keywords: Nonequilibrium statistical mechanics, stochastic thermodynamics, optimal control, shortcut-to-adiabaticity, micromechanical and nanomechanical oscillators.

1 Introduction

Existing nano-manipulation techniques make it possible to build machines that operate accurately at length scales of the order of 10 nm–100 nm, e.g. [1]–[5]. Real-time measurement and control of thermodynamic transitions between target states over time horizons of 10 μs–100 μs has been demonstrated in multiple experimental setups, e.g. [6]–[8]. These developments motivate interest in finding protocols to robustly steer a nano-machine to accomplish precise tasks while satisfying criteria of thermodynamic efficiency. Optimal control theory [9], [10] provides a natural mathematical framework for finding efficient protocols. In particular, if efficiency is defined by minimisation of the entropy production, optimal feedback control relates the second law of thermodynamics [11] for open classical systems obeying a Langevin-Smoluchowski (overdamped) dynamics to optimal mass transport [12].

In many applications, protocols are needed to implement fast transitions between equilibrium states [13]–[17]. One example is the design of a heat nano-engine that can reach the thermodynamic limit of efficiency at finite cycle time, by means of protocols that reproduce the same output as in a quasistatic process [3]. More generally, transitions between equilibrium end-states avoid the onset of relaxation dynamics outside the control horizon. Stability of the end-states is particularly desirable to reduce power requirements for monitoring mesoscopic chemical or biological processes such as those reported in [18].

1.1 Optimal vs Engineered Swift Equilibration Protocols

In much of the existing literature, optimal and engineered swift equilibration protocols rely on the highly stylised hypothesis that instantaneous manipulations of the mechanical potential are possible. Although this hypothesis leads to welcome mathematical simplifications, it also introduces some inherent shortcomings that prevent the model from consistently describing some relevant aspects of actual physical processes. We devote a few words here to motivate our claim.

To start with, it is important to emphasize the conceptual difference between the optimal control and the engineered swift equilibration control models. Both optimal and engineered swift equilibration aim at constructing a thermodynamic transition defined by the assignment of system state indicators at each end of the control horizon. For feedback optimal control (closed-loop; where the control is dependent on the current system state), this transition can be formulated as a generalised Schrödinger bridge problem [19], in which the end values of the probability density of the open system are the boundary data. Assuming all necessary smoothness, the probability density of the open system obeys a Fokker-Planck equation.

Optimal feedback control of generalised Schrödinger bridges minimises an assigned average thermodynamic cost. As the dynamics is modelled by a Markov process, optimisation is attained by minimising the instantaneous average cost rate [9], [10]. The cost is a function of the system phase space coordinates and satisfies a dynamic programming equation. The Fokker-Planck and dynamic programming equations are first order in time and are coupled by a stationary condition on the drift that does not involve time derivatives. The solution is thus fully specified by the assignment of two boundary conditions in time: the end values of the system probability density. These boundary conditions cannot generically describe equilibria, even if they take the form of Maxwell-Boltzmann distributions. Requiring that the probability distributions at the end of the control horizon be equilibria (stationary solutions of the Fokker-Planck equation) entails assigning additional boundary conditions in time, namely the vanishing of the current velocity, which would overdetermine the problem. In the literature, one often encounters the suggestion that, in order to satisfy the equilibrium requirement, discontinuities at the ends of the control horizon should be appended to optimal protocols solving otherwise well-posed problems. Accordingly, the mechanical potential “jumps” to the value needed for the probability distributions at each end of the control horizon to be in equilibrium, without affecting the cost of the transition or the probability distribution of the system.

Mathematically, the end-of-horizon discontinuity idea may be upheld by invoking generalised criteria for the existence of solutions to differential equations (see e.g. [9]). Physically, the jump idea is an attempt to model dynamics on time scales that are not resolved in the formulation of the generalised Schrödinger bridge problem. It is, however, hardly a tenable solution, for several reasons. As pointed out in [20], assuming a jump is of little help from the experimental point of view, as it does not translate into any well-defined operational protocol. In addition, we argue that the self-consistency of this idea is questionable. Underdamped dynamics, and a fortiori overdamped dynamics, model a classical open system under the assumption of a strong separation between its time scales and those of the environment, subsumed in a white-noise force [21]. Within this framework, it is questionable to justify temporal variations of the system’s drift on scales faster than white noise as elements of admissible control protocols. Similar concerns regarding the consistency of control protocols with the derivation of the dynamics from microscopic models are often raised in quantum control theory [22]. Further doubts are also fuelled by the reformulation of the problem in terms of jump processes. The corresponding optimal protocols admit as scaling limits those of the overdamped dynamics only under additional fine-tuning assumptions [23].

The upshot of this discussion is that optimal control protocols are adapted either to establish universal lower bounds, as for Landauer’s principle [24], or to describe a physical process in which equilibration of the end states is not a requirement. This is the case, for example, when transitions constitute branches of a Stirling engine cycle as in [25], [26].

Within the control horizon, engineered swift equilibration protocols only need to satisfy the Fokker-Planck equation. Hence the requirement that the assigned boundary conditions at each end of the control horizon be equilibria does not produce an overdetermined problem. Engineered swift equilibration protocols are then constructed based on the simplicity of their implementation [27]. Whilst this is of practical advantage, it opens the question of how to conceptualise optimal transitions between equilibria. More precisely, the question is how to resolve the neglected time scales in generalised Schrödinger bridges.

1.2 Motivation of the present work

The motivation of this work is to address the mathematical and physical difficulties discussed above. From the mathematical point of view, our perspective is that boundary conditions should be imposed on system state variables rather than controls [9] to avoid overdetermining an optimal control problem already at the formulation level. In other words, problems are well posed when they are amenable to the canonical Bolza form of optimal control theory [9]. From the physical point of view, implementing protocols without bandwidth limitations may cause signal distortions in high-accuracy experiments of quantum control [28]. As a step towards a more realistic model, we instead consider the microscopic mechanical potential as an element of the system state onto which boundary conditions can be imposed. Correspondingly, we identify controls with (the parameters of) a force governing the variation in time of the mechanical potential acting on the open system. Here, we mainly conceptualise this force as the result of macroscopic manipulations whose cost may depend on the setup.

In order to simplify the discussion, we focus on the dynamics of a nano-oscillator in one-dimensional configuration space. In other words, we restrict attention to open systems whose probability distribution under the action of a quadratic mechanical potential is Gaussian at any time. Although restrictive, this assumption corresponds to many experimental setups [1]–[3], [7], [8], [29]. We note that when dealing with Gaussian states, the distinction between feedback and open-loop control (where the control is exerted by means of parameters independent of the current system state) is somewhat blurred, as the set of controls is finite.

1.3 Structure of the work and main findings

The structure of this work is the following. In Section 2, we define the nano-oscillator control model. Drawing from [2], we suppose that it is possible to experimentally determine whether, at the beginning and at the end of the control horizon, the system is in equilibrium. Correspondingly, we specify the state of the system by assigning first and second order cumulants and the mechanical force on the system at both ends of the control horizon. We identify controls with the time derivative of the parameters of the mechanical force.

In Section 3, we show that we can unambiguously discriminate engineered swift equilibration transitions at minimal dissipation from transitions connecting target states by performing a minimum amount of thermodynamic work on the system. In this latter case, target states are described by probability distributions that are not necessarily in equilibrium with the microscopic mechanical force exerted on an open system. This distinction also allows us to characterise the relaxation to equilibrium process that may eventually take place after the end of the control, which is done in Section 9.

In Section 4, we describe the set of admissible controls and use Pontryagin’s maximum principle to formulate first order optimality conditions for engineered swift equilibration at minimum dissipation and minimum work transitions in Sections 5 and 6. From a physics point of view, the most natural formulation is to identify the cost of a transition with the work done on the system. Controls are then restricted to a compact set. However, due to difficulties in implementing this numerically, which we discuss, we describe how self-concordant barriers can be used in their place. Self-concordant barriers are used as a tool to apply methods of singular perturbation theory [30]–[32] for the reduction of first order conditions for optimality to a universal normal form in Section 7. In Section 8 we describe their overdamped limit in the bulk of the control horizon.

In Section 10, we contrast numerical solutions of the equations expressing first order conditions for optimality (indirect optimisation) and their normal form obtained in Section 7 with numerical methods of direct optimisation [33], [34].

The last section is devoted to conclusions and outlook.

2 Open system dynamics

We consider a one-dimensional underdamped linear dynamics in non-dimensional variables \[\begin{align} & \mathrm{d}\mathscr{q}_{t}=\varepsilon \,\mathscr{p}_{t} \mathrm{d}t \\ & \mathrm{d}\mathscr{p}_{t}=-\left( \mathscr{p}_{t}-\varepsilon\,\mathscr{u}_{t}+\varepsilon\,\mathscr{k}_{t}\,\mathscr{q}_{t}\right)\mathrm{d}t+\sqrt{2}\,\mathrm{d}\mathscr{w}_{t} \end{align} \label{dyn:ud}\tag{1}\] where \(\varepsilon\) denotes the order parameter of the overdamped expansion. We relate non-dimensional variables to dimensional ones by identifying \[\begin{align} \varepsilon=\sqrt{\frac{\tau^{2}}{m\,\beta\,\ell^2}} \nonumber \end{align}\] where \(\tau\) is the Stokes time, \(m\) the mass of the system, \(\beta\) the inverse temperature of the environment, and \(\ell\) the typical length scale of the problem.

In (1 ), the drift on momentum increments is the sum of a linear friction term and the gradient of the quadratic potential \[\begin{align} U_{t}(q)=\frac{\mathscr{k}_{t}\,(q -\mathscr{c}_{t})^{2}}{2} \nonumber \end{align}\] characterised by a time-dependent stiffness \(\mathscr{k}_{t}\) and stationary point \(\mathscr{c}_{t}\). The product of these quantities \[\begin{align} \mathscr{u}_{t}=\mathscr{k}_{t}\,\mathscr{c}_{t} \nonumber \end{align}\] determines the position-independent term in (1 ). In what follows, we find it expedient to regard \(\mathscr{u}_{t}\) as an independent dynamical variable.
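For concreteness, trajectories of (1 ) can be sampled with a standard Euler-Maruyama scheme. The sketch below is purely illustrative (the function name, default parameters, and the choice of scheme are ours, not part of the analysis that follows):

```python
import numpy as np

def simulate_underdamped(k, u, eps=0.1, t_f=10.0, n_steps=10_000,
                         q0=0.0, p0=0.0, rng=None):
    """Euler-Maruyama integration of the non-dimensional dynamics (1).

    k, u : callables t -> stiffness k_t and u_t = k_t * c_t.
    Returns an (n_steps + 1, 2) array of (q, p) samples."""
    rng = np.random.default_rng() if rng is None else rng
    dt = t_f / n_steps
    q, p = q0, p0
    traj = np.empty((n_steps + 1, 2))
    traj[0] = q, p
    for i in range(n_steps):
        t = i * dt
        dw = rng.normal(0.0, np.sqrt(dt))        # Wiener increment
        q_new = q + eps * p * dt
        p_new = p - (p - eps * u(t) + eps * k(t) * q) * dt + np.sqrt(2.0) * dw
        q, p = q_new, p_new
        traj[i + 1] = q, p
    return traj
```

Averaging over many such trajectories reproduces the cumulant dynamics derived in Section 2.2.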

2.1 Control equations

We assume that the stiffness and the centre obey the first order differential equations \[\begin{align} & \dot{\mathscr{k}}_{t}=\lambda_{t} \\ & \dot{\mathscr{u}}_{t}=\gamma_{t} \end{align} \label{dyn:ceqs}\tag{2}\] We identify the pair of deterministic time-dependent functions \((\lambda_{t},\gamma_{t})\) with the control quantities of the dynamics. We require the controls to be essentially bounded [35]. Physically, (2 ) states that the microscopic mechanical potential varies only in response to external actions, with a delay modelled by the solution of differential equations.

2.2 Cumulant dynamics

As is well known, the transition probability of a system of linear stochastic differential equations with additive noise is Gaussian at any time. Correspondingly, the Fokker-Planck equation reduces to a solvable finite hierarchy of differential equations for first and second order cumulants. Hence, if we define \[\begin{align} & \begin{aligned} & \mathscr{x}^{\scriptscriptstyle{(1)}}_{t}=\operatorname{E}\left(\mathscr{q}_{t}-\mathscr{x}^{\scriptscriptstyle{(5)}}_{t}\right)^{2} \\ & \mathscr{x}^{\scriptscriptstyle{(2)}}_{t}=\operatorname{E}\left(\mathscr{p}_{t}-\mathscr{x}^{\scriptscriptstyle{(6)}}_{t}\right)\left(\mathscr{q}_{t}-\mathscr{x}^{\scriptscriptstyle{(5)}}_{t}\right) \\ & \mathscr{x}^{\scriptscriptstyle{(3)}}_{t}=\operatorname{E}\left(\mathscr{p}_{t}-\mathscr{x}^{\scriptscriptstyle{(6)}}_{t}\right)^{2} \\ & \mathscr{x}^{\scriptscriptstyle{(4)}}_{t}=\mathscr{k}_{t} \end{aligned} && \text{and} && \begin{aligned} &\mathscr{x}^{\scriptscriptstyle{(5)}}_{t}= \operatorname{E}\mathscr{q}_{t} \\ &\mathscr{x}^{\scriptscriptstyle{(6)}}_{t}= \operatorname{E}\mathscr{p}_{t} \\ & \mathscr{x}^{\scriptscriptstyle{(7)}}_{t}= \mathscr{u}_{t} \end{aligned} \nonumber \end{align}\] then the set of cumulant equations complemented by the control equations (2 ) \[\begin{align} \begin{aligned} & \dot{\mathscr{x}}_{t}^{\scriptscriptstyle{(1)}}=2 \,\varepsilon\, \mathscr{x}_{t}^{\scriptscriptstyle{(2)}} \\ & \dot{\mathscr{x}}_{t}^{\scriptscriptstyle{(2)}}=- \mathscr{x}_{t}^{\scriptscriptstyle{(2)}}-\varepsilon \left(\mathscr{x}_{t}^{\scriptscriptstyle{(4)}}\mathscr{x}_{t}^{\scriptscriptstyle{(1)}} -\mathscr{x}_{t}^{\scriptscriptstyle{(3)}}\right) \\ & \dot{\mathscr{x}}_{t}^{\scriptscriptstyle{(3)}}=2 \left(1-\mathscr{x}_{t}^{\scriptscriptstyle{(3)}}-\varepsilon \,\mathscr{x}_{t}^{\scriptscriptstyle{(4)}}\, \mathscr{x}_{t}^{\scriptscriptstyle{(2)}} \right) \\ & \dot{\mathscr{x}}_{t}^{\scriptscriptstyle{(4)}}= \lambda_{t} \end{aligned} \label{dyn:2cum} \end{align}\tag{3}\] and \[\begin{align} 
\begin{aligned} & \dot{\mathscr{x}}_{t}^{\scriptscriptstyle{(5)}}=\varepsilon\,\mathscr{x}_{t}^{\scriptscriptstyle{(6)}} \\ & \dot{\mathscr{x}}_{t}^{\scriptscriptstyle{(6)}}= -\mathscr{x}_{t}^{\scriptscriptstyle{(6)}}+\varepsilon\left(\mathscr{x}_{t}^{\scriptscriptstyle{(7)}}-\mathscr{x}_{t}^{\scriptscriptstyle{(4)}} \,\mathscr{x}_{t}^{\scriptscriptstyle{(5)}} \right) \\ & \dot{\mathscr{x}}_{t}^{\scriptscriptstyle{(7)}}=\gamma_{t} \end{aligned} \label{dyn:1cum} \end{align}\tag{4}\] fully characterise the statistics of (1 ). In particular, the system of equations (3 ) governing the evolution of second order cumulants is independent of the system (4 ). In an ideal experiment, it is thus possible to control an open system with Gaussian statistics by squeezing or spreading the distribution while holding the mean value fixed or, conversely, by shifting the mean value without affecting the average size of the fluctuations around it.
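The cumulant hierarchy (3 )–(4 ) is a closed system of ordinary differential equations and can be integrated with any standard scheme. A minimal sketch with a classical fourth-order Runge-Kutta step follows (function names and step counts are illustrative choices of ours):

```python
import numpy as np

def cumulant_rhs(t, x, lam, gam, eps):
    """Right-hand side of the cumulant equations (3)-(4).

    x = (x1, ..., x7); lam, gam : callables t -> control values."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return np.array([
        2.0 * eps * x2,                     # x1': position variance
        -x2 - eps * (x4 * x1 - x3),         # x2': cross correlation
        2.0 * (1.0 - x3 - eps * x4 * x2),   # x3': momentum variance
        lam(t),                             # x4': stiffness
        eps * x6,                           # x5': mean position
        -x6 + eps * (x7 - x4 * x5),         # x6': mean momentum
        gam(t),                             # x7': u = k * c
    ])

def integrate_cumulants(x0, lam, gam, eps, t_f, n_steps=1000):
    """Classical Runge-Kutta (RK4) integration of (3)-(4) over [0, t_f]."""
    x = np.asarray(x0, dtype=float)
    dt = t_f / n_steps
    for i in range(n_steps):
        t = i * dt
        k1 = cumulant_rhs(t, x, lam, gam, eps)
        k2 = cumulant_rhs(t + dt / 2, x + dt / 2 * k1, lam, gam, eps)
        k3 = cumulant_rhs(t + dt / 2, x + dt / 2 * k2, lam, gam, eps)
        k4 = cumulant_rhs(t + dt, x + dt * k3, lam, gam, eps)
        x = x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x
```

With vanishing controls and equilibrium initial data the right-hand side vanishes identically, so the state stays stationary; this provides a simple consistency check of the implementation.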

2.3 Gaussian boundary conditions for the control problem

We are interested in transitions between Gaussian distributions of Maxwell-Boltzmann form at both ends of a control horizon \([0\,,t_{\mathfrak{f}}]\). We thus always require \[\begin{align} \mathscr{x}_{0}^{\scriptscriptstyle{(2)}}=\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(2)}}=\mathscr{x}_{0}^{\scriptscriptstyle{(6)}}=\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(6)}}=0 \label{bc:MB} \end{align}\tag{5}\] meaning that the position-momentum cross correlation and the mean value of the momentum process vanish at both ends of the control horizon. We identify the open system temperature with the variance of the momentum process. Hence, a unit value at both ends of the control horizon \[\begin{align} \mathscr{x}_{0}^{\scriptscriptstyle{(3)}}=\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(3)}}=1 \label{bc:isoth} \end{align}\tag{6}\] entails that the system is at the same temperature as the surrounding bath at the beginning and end of the transition.

In order to fully specify a transition governed by (1 ) and (2 ), we need to assign the value of the mechanical force acting on the system at both ends of the control horizon. Concretely, we need to assign \(\mathscr{x}_{t}^{\scriptscriptstyle{(4)}}\) and \(\mathscr{x}_{t}^{\scriptscriptstyle{(7)}}\) at \(t=0,t_{\mathfrak{f}}\). This is mathematically equivalent to the specification of the probability current [36]. When the Gaussian Maxwell-Boltzmann conditions (5 ), (6 ) hold, engineered swift equilibration transitions correspond to the requirements \[\begin{align} & \mathscr{x}_{0}^{\scriptscriptstyle{(1)}}=\frac{1}{\mathscr{x}_{0}^{\scriptscriptstyle{(4)}}} && \mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(1)}}=\frac{1}{\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(4)}}} \label{bc:sc2} \end{align}\tag{7}\] and \[\begin{align} & \mathscr{x}_{0}^{\scriptscriptstyle{(5)}}=\frac{\mathscr{x}_{0}^{\scriptscriptstyle{(7)}}}{\mathscr{x}_{0}^{\scriptscriptstyle{(4)}}} && \mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(5)}}=\frac{\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(7)}}}{\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(4)}}} \label{bc:sc1} \end{align}\tag{8}\] In this framework, transitions between or towards non-equilibrium states are those where the boundary conditions (7 ) and (8 ) are not satisfied.
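In this parametrisation, a Maxwell-Boltzmann equilibrium compatible with (5 )–(8 ) is fixed by the pair \((\mathscr{k},\mathscr{u})\) alone. A small helper (hypothetical naming, for numerical boundary data only) makes this explicit:

```python
import numpy as np

def equilibrium_state(k, u):
    """Cumulant vector (x1, ..., x7) of a Maxwell-Boltzmann equilibrium
    with stiffness k and position-independent force parameter u = k * c.

    x1 = 1/k and x5 = u/k encode (7)-(8); x2 = x6 = 0 and x3 = 1
    encode the conditions (5)-(6)."""
    return np.array([1.0 / k, 0.0, 1.0, k, u / k, 0.0, u])
```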

3 Work done on the system as a thermodynamic cost functional

The average work done on the system during a stochastic thermodynamic transition in the horizon \([0\,,t_{\mathfrak{f}}]\) is [37] \[\begin{align} \mathcal{W}_{t_{\mathfrak{f}}}=\operatorname{E}\int_{0}^{t_{\mathfrak{f}}}\mathrm{d}s\,(\partial_{s}U_{s})(\mathscr{q}_{s}) \nonumber \end{align}\] where the partial derivative only acts on the explicit time dependence of the mechanical potential \(U\). A straightforward application of Itô’s lemma shows that, for any sufficiently regular but not necessarily harmonic mechanical potential, the mean work is amenable to the form \[\begin{align} \mathcal{W}_{t_{\mathfrak{f}}}=\operatorname{E} \left(U_{t_{\mathfrak{f}}}(\mathscr{q}_{t_{\mathfrak{f}}})-U_{0}(\mathscr{q}_{0})+\frac{\mathscr{p}_{t_{\mathfrak{f}}}^{2} -\mathscr{p}_{0}^{2} }{2}\right)+\operatorname{E}\int_{0}^{t_{\mathfrak{f}}}\mathrm{d}t\,\mathscr{p}_{t}^{2}-t_{\mathfrak{f}} \nonumber \end{align}\] According to the first law of thermodynamics, the mean heat released by the system is \[\begin{align} \mathcal{Q}_{t_{\mathfrak{f}}}= \mathcal{W}_{t_{\mathfrak{f}}}-\operatorname{E} \left(U_{t_{\mathfrak{f}}}(\mathscr{q}_{t_{\mathfrak{f}}})-U_{0}(\mathscr{q}_{0})+\frac{\mathscr{p}_{t_{\mathfrak{f}}}^{2} -\mathscr{p}_{0}^{2} }{2}\right) \nonumber \end{align}\] By Clausius’ law, the mean heat released is the entropy variation of the environment in non-dimensional units. The total entropy variation during a transition is the sum of the entropy variation of the environment and the system. 
The system entropy variation is specified by the Gibbs-Shannon entropy of the probability distribution \(\mathrm{f}_{t}\) of the system \[\begin{align} \mathcal{S}_{t}= -\int_{\mathbb{R}^2}\mathrm{d}p\mathrm{d}q\,\mathrm{f}_{t}(p,q)\ln \mathrm{f}_{t}(p,q)=-\operatorname{E}\ln \mathrm{f}_{t}(\mathscr{p}_{t},\mathscr{q}_{t}) \nonumber \end{align}\] We thus arrive at the expression of the second law of thermodynamics \[\begin{align} \mathcal{E}_{t_{\mathfrak{f}}}=\mathcal{S}_{t_{\mathfrak{f}}}-\mathcal{S}_{0}+\mathcal{Q}_{t_{\mathfrak{f}}}=\operatorname{E}\int_{0}^{t_{\mathfrak{f}}}\mathrm{d}t\,\Big{(}\mathscr{p}_{t} +\partial_{\mathscr{p}_{t}}\ln \mathrm{f}_{t}(\mathscr{p}_{t},\mathscr{q}_{t})\Big{)}^2\geq\,0 \label{td:ep} \end{align}\tag{9}\] which holds in this form for any mechanical potential [37]. We refer to [38] for the derivation of (9 ) from fluctuation relations.

When the system dynamics is linear and the probability distributions at both ends of the control horizon are Gaussian and of Maxwell-Boltzmann type, i.e. (5 ) and (6 ) hold true, we can further exhibit the dependence of the work on the cumulants \[\begin{align} \mathcal{W}_{t_{\mathfrak{f}}}&= \frac{\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(4)}}}{2}\left (\mathscr{x}^{\scriptscriptstyle{(1)}}_{t_{\mathfrak{f}}}+\left (\mathscr{x}^{\scriptscriptstyle{(5)}}_{t_{\mathfrak{f}}} - \frac{\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(7)}}}{\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(4)}}}\right )^{2}\right )-\frac{\mathscr{x}_{0}^{\scriptscriptstyle{(4)}}}{2} \left (\mathscr{x}^{\scriptscriptstyle{(1)}}_{0}+\left (\mathscr{x}^{\scriptscriptstyle{(5)}}_{0} - \frac{\mathscr{x}_{0}^{\scriptscriptstyle{(7)}}}{\mathscr{x}_{0}^{\scriptscriptstyle{(4)}}}\right )^{2}\right ) \nonumber\\ &+\int_{0}^{t_{\mathfrak{f}}}\mathrm{d}t\,\left(\mathscr{x}^{\scriptscriptstyle{(3)}}_{t}+\mathscr{x}^{\scriptscriptstyle{(6)}}_{t}{}^{2}\right)-t_{\mathfrak{f}} \label{td:work} \end{align}\tag{10}\] When we control the transition via (2 ), all quantities appearing in the boundary terms of the work are state variables of the system. In other words, (10 ) is the total transition cost: the sum of a running cost, produced by the integral of the average entropy production rate, and a terminal cost in standard Bolza form (see e.g. [9]). We can therefore unambiguously discriminate between at least two classes of transitions.

  1. Transitions between equilibrium states: in such a case the average work coincides with the mean heat release \(\mathcal{Q}_{t_{\mathfrak{f}}}\) : \[\begin{align} \mathcal{W}_{t_{\mathfrak{f}}}= \mathcal{Q}_{t_{\mathfrak{f}}}=\int_{0}^{t_{\mathfrak{f}}}\mathrm{d}t\,\left(\mathscr{x}^{\scriptscriptstyle{(3)}}_{t}+\mathscr{x}^{\scriptscriptstyle{(6)}}_{t}{}^{2}\right)-t_{\mathfrak{f}} \label{work:ep} \end{align}\tag{11}\] The corresponding value of the mean entropy production is \[\begin{align} \mathcal{E}_{t_{\mathfrak{f}}}= \mathcal{Q}_{t_{\mathfrak{f}}}+\frac{1}{2}\ln \frac{\mathscr{x}^{\scriptscriptstyle{(1)}}_{t_{\mathfrak{f}}}}{\mathscr{x}^{\scriptscriptstyle{(1)}}_{0}}\ge 0 \label{work:ep2} \end{align}\tag{12}\] We readily see that (11 ), (12 ) vanish if the system is maintained at Maxwell-Boltzmann equilibrium \[\begin{align} & \begin{cases} \mathscr{x}^{\scriptscriptstyle{(1)}}_{t}=\mathrm{const.} \\ \mathscr{x}^{\scriptscriptstyle{(3)}}_{t}=1 & \& \mathscr{x}^{\scriptscriptstyle{(6)}}_{t}=0 \end{cases} & \forall\,t\,\in\,[0,t_{\mathfrak{f}}] \nonumber \end{align}\] Transitions in this class are models of optimal engineered swift equilibration.

  2. Minimum work transitions to a target state \[\begin{align} \mathcal{W}_{t_{\mathfrak{f}}}= \frac{\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(4)}}}{2}\left (\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(1)}}+\left (\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(5)}} - \frac{\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(7)}}}{\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(4)}}}\right )^{2}\right )-\frac{1}{2} +\int_{0}^{t_{\mathfrak{f}}}\mathrm{d}t\,\left(\mathscr{x}^{\scriptscriptstyle{(3)}}_{t}+\mathscr{x}^{\scriptscriptstyle{(6)}}_{t}{}^{2}\right)-t_{\mathfrak{f}} \label{work:terminal} \end{align}\tag{13}\]

The upshot is that, when thermodynamic transitions connect genuine equilibrium states, work optimisation is equivalent to the minimisation of mean heat release. In addition, when the control is exerted via (2 ), transitions at minimum work (13 ) become clearly distinct from those minimising the entropy production.
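For numerical checks, the decomposition (10 ) can be evaluated directly on a discretised cumulant trajectory. A minimal sketch (illustrative naming; trapezoidal quadrature for the running term) is:

```python
import numpy as np

def mean_work(ts, xs):
    """Mean work (10) along a cumulant trajectory.

    ts : (N,) increasing times; xs : (N, 7) array whose rows are
    (x1, ..., x7) along the protocol."""
    def boundary(x):
        # terminal/initial term  (x4 / 2) * (x1 + (x5 - x7/x4)^2)
        x1, x2, x3, x4, x5, x6, x7 = x
        return 0.5 * x4 * (x1 + (x5 - x7 / x4) ** 2)

    # running term: trapezoidal integral of x3 + x6^2, minus the horizon length
    f = xs[:, 2] + xs[:, 5] ** 2
    running = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(ts)))
    return boundary(xs[-1]) - boundary(xs[0]) + running - (ts[-1] - ts[0])
```

On a trajectory held at Maxwell-Boltzmann equilibrium the work vanishes, in agreement with (11 )–(12 ).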

4 Formulation of the optimal control problems

The work functional (10 ) is not convex in the controls \(\lambda_{t}\) and \(\gamma_{t}\). In order to ensure that the optimisation problem is well posed, we need either to refine the space of admissible controls or to introduce a convex penalty on protocols that can be enacted to steer the dynamics of the system.

4.1 Penalty on admissible controls

We model the total cost of controlling the transition as \[\begin{align} \mathcal{C}_{t_{\mathfrak{f}}}=\mathcal{W}_{t_{\mathfrak{f}}}+\int_{0}^{t_{\mathfrak{f}}}\mathrm{d}t\, \left(g\,V(\lambda_{t})+\tilde{g}\,\tilde{V}(\gamma_{t})\right) \label{penalty:general} \end{align}\tag{14}\] The “penalty parameters” \(g\) and \(\tilde{g}\) couple a cost term weighing the controls exerted on the mechanical potential felt by the nano-particle to the thermodynamic functional. We consider three cases qualitatively describing analogous physical setups, but with mathematical properties that may be more adapted either to numerical or analytical inquiry.

Because the dynamics of the second order cumulants fully decouples from that of the first order cumulants, we restrict attention to transitions that only change the position variance of the system.

4.1.1 Hard penalty

We suppose that admissible controls are time dependent functions that are confined to a compact set, i.e. satisfying for any \(t\in [0\,,t_{\mathfrak{f}}]\) \[\begin{align} & |\lambda_{t}|\,\leq\,\Lambda\,<\infty \label{penalty:hard} \end{align}\tag{15}\]

Finite-dimensional approximations reduce the minimisation of a cost functional to finding the minima of a multidimensional function on a compact set. The Weierstrass extreme value theorem [9] then guarantees the existence of such minima. Physically, (15 ) directly models constraints on the intensity of controls that admit straightforward motivation in actual experimental setups: for instance, constraints on the power range of an optical device used to confine a colloidal particle [39]. Furthermore, the assumption (15 ) is appealing because, as long as the controls satisfy it, the optimisation problem is well posed by setting \[\begin{align} V(\lambda_{t})=0 \nonumber \end{align}\] in (14 ). Correspondingly, the cost function preserves the universal form prescribed by stochastic thermodynamics.

4.1.2 Logarithmic penalty

Direct methods of optimal control [40], [41] approximate the total cost by means of a multidimensional objective function. Optimisation is thus recast into a minimisation problem to be tackled by non-linear programming. In such a setup, constraints such as (15 ) are “softened” by couching them in the form of potential barriers. Major developments in optimisation theory over the last thirty years led to the discovery [42] and refinement [43], [44] of interior point methods, giving rise to algorithms capable of solving convex programming problems in polynomial time. The key ingredient is the adoption of “self-concordant” barriers to model constraints. In one dimension, the defining properties of these barriers are:

  1. the second derivative of the barrier is a Lipschitz function with universal constant equal to \(2\) \[\begin{align} \left |\frac{\mathrm{d}^{3}f}{\mathrm{d} x^{3}}(x)\right |\,\leq\,2 \left(\frac{\mathrm{d}^{2}f}{\mathrm{d} x^{2}}(x)\right)^{3/2} \label{penalty:scf} \end{align}\tag{16}\]

  2. the first derivative is a Lipschitz function with tunable constant \(\vartheta\): \[\begin{align} \left |\frac{\mathrm{d}f}{\mathrm{d} x}(x)\right |\,\leq\,\vartheta^{1/2} \left(\frac{\mathrm{d}^{2}f}{\mathrm{d} x^{2}}(x)\right)^{1/2} \label{penalty:scb} \end{align}\tag{17}\]

Any univariate function satisfying (16 ) is called self-concordant. An objective constrained by self-concordant barriers attains a unique absolute minimum depending on the penalty parameter. As the latter varies, the minimum traces a “central path” asymptotically tending to the solution of the hard-barrier problem. Self-concordance ensures that numerical computation of the limit takes polynomial time.

Having in mind the above considerations, a suitable choice for the numerical analysis of (14 ) is the self-concordant potential \[\begin{align} V(z)= \begin{cases} -\ln \left(1-\dfrac{z^{2}}{\Lambda^{2}}\right) & |z|\,\leq\,\Lambda \\[0.3cm] +\infty& |z|\,>\,\Lambda \end{cases} \label{penalty:log} \end{align}\tag{18}\] Complementary to direct methods are indirect methods based on the solution of extremal equations that optimal controls necessarily satisfy. The Legendre transform of the barrier then enters the formulation of Pontryagin’s maximum principle determining the extremal equations. For (18 ) we find \[\begin{align} \arg\max_{z \in \mathbb{R}}\left(y\,z -g\,V(z)\right)=\frac{g}{y}\left(\sqrt{1+\frac{\Lambda^{2}\,y^{2}}{g^{2}}}-1\right) \nonumber \end{align}\]
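The closed-form maximiser above can be verified against a brute-force grid search; a quick numerical sanity check (parameter values arbitrary, and the closed form assumes \(y\neq 0\)):

```python
import numpy as np

def argmax_log_barrier(y, g=1.0, Lam=1.0):
    """Closed-form maximiser of z -> y*z - g*V(z) for the logarithmic
    barrier (18); the expression assumes y != 0."""
    return (g / y) * (np.sqrt(1.0 + (Lam * y / g) ** 2) - 1.0)

# Brute-force comparison on a fine grid strictly inside (-Lam, Lam),
# where the barrier is finite.
y, g, Lam = 0.7, 1.0, 1.0
z = np.linspace(-Lam + 1e-6, Lam - 1e-6, 200_001)
objective = y * z + g * np.log(1.0 - (z / Lam) ** 2)
z_star = z[np.argmax(objective)]
```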

4.1.3 Harmonic penalty

Many qualitative features of optimal control problems subject to a “hard penalty” can be captured by considering a harmonic model of penalty such as \[\begin{align} V(z)=\Bigl(\frac{z}{\Lambda}\Bigr)^{2} \label{penalty:harmonic} \end{align}\tag{19}\] which also enjoys the self-concordant property. A harmonic potential approximates (18 ) near the minimum, i.e. for control actions that have a low impact on the total cost (14 ). In Section 7, we use (19 ) to apply centre manifold analysis and derive the universal form of the first order conditions for the optimal control problems associated with the cumulant dynamics equations (3 ), (4 ).

4.2 Effective cost

Based on the discussion in Section 3 above, we set out to analyse the following mathematical problems.

4.2.1 [C1]: Transitions between equilibrium states.

The quantity to minimise is \[\begin{align} \mathcal{C}_{t_{\mathfrak{f}}}=\int_{0}^{t_{\mathfrak{f}}}\mathrm{d}t \,\left(\mathscr{x}_{t}^{\scriptscriptstyle{(3)}}+g\,V(\lambda_{t})\right) \label{ep:cost} \end{align}\tag{20}\] where \(g=0\) if the admissible control is subject to the constraint (15 ) and \(g>0\) if instead we penalise large values of the controls with a confining potential such as (18 ) or (19 ).

The cost functional only consists of a time integral and is thus in Lagrange form [9]. To apply Pontryagin’s maximum principle in its most general form, we write the total cost in the equivalent Mayer form. To this end, we introduce the auxiliary state variable \[\begin{align} \mathscr{x}_{t}^{\scriptscriptstyle{(0)}}=\int_{0}^{t}\mathrm{d}s \,\left(\mathscr{x}_{s}^{\scriptscriptstyle{(3)}}+g\,V(\lambda_{s})\right) \nonumber \end{align}\] and define \[\begin{align} \tilde{\mathcal{C}}_{t_{\mathfrak{f}}}=\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(0)}} \nonumber \end{align}\] subject to the dynamical constraint \[\begin{align} \dot{\mathscr{x}}_{t}^{\scriptscriptstyle{(0)}}=\mathscr{x}_{t}^{\scriptscriptstyle{(3)}}+g\,V(\lambda_{t}) \label{ep:Mayer} \end{align}\tag{21}\] in addition to (3 ).
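
The state-augmentation trick behind the Mayer form is easy to sketch numerically. In the following toy example, `f` and `L` are hypothetical one-dimensional stand-ins for the cumulant dynamics (3 ) and for the running cost in (20 ), not the paper's model; the auxiliary coordinate is integrated alongside the state, so the total cost is read off as a terminal value:

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(x, lam):
    return -x + lam                 # placeholder relaxation dynamics

def L(x, lam, g=0.1):
    return x + g * lam**2           # placeholder running cost

def augmented(t, state, lam):
    x, x0 = state
    # the auxiliary coordinate x0 integrates the running cost, as in (21)
    return [f(x, lam), L(x, lam)]

lam = 0.5
sol = solve_ivp(augmented, (0.0, 1.0), [1.0, 0.0], args=(lam,),
                rtol=1e-10, atol=1e-12)
print(sol.y[1, -1])  # terminal value of x0 = total cost in Mayer form
```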

4.2.2 [C2]: Transitions at minimum work.

The cost functional is \[\begin{align} \mathcal{C}_{t_{\mathfrak{f}}}= \frac{\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(4)}}\,\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(1)}}}{2}+\int_{0}^{t_{\mathfrak{f}}}\mathrm{d}s\,\left(\mathscr{x}_{s}^{\scriptscriptstyle{(3)}}+g\,V(\lambda_{s})\right) \label{mw:cost} \end{align}\tag{22}\] with an additional terminal cost now specified by the final value of the position correlation and of the stiffness of the microscopic mechanical potential felt by the system. Applying (21 ), we get the equivalent Mayer form \[\begin{align} \tilde{\mathcal{C}}_{t_{\mathfrak{f}}}= \frac{\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(4)}}\,\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(1)}}}{2}+\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(0)}} \nonumber \end{align}\]

5 First Order Optimality Conditions

Pontryagin’s maximum principle identifies first order, i.e. necessary, conditions for optimality versus “needle” or McShane-Pontryagin variations of controls [9]. It thus extends the calculus of variations to trajectories that are also stationary with respect to discontinuous controls. To this end, the maximum principle applies to the Mayer form of a total cost functional. Once we impose the dynamics through Lagrange multipliers, we obtain an action functional of the form \[\begin{align} \mathcal{A}_{t_{\mathfrak{f}}}=c\,\frac{\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(4)}}\,\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(1)}}}{2}+\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(0)}}+ \int_{0}^{t_{\mathfrak{f}}}\mathrm{d}t\left(\sum_{\mathfrak{i}=\mathfrak{0}}^{\mathfrak{4}}\mathscr{y}_{t}^{\scriptscriptstyle{(i)}}\dot{\mathscr{x}}_{t}^{\scriptscriptstyle{(i)}}-\tilde{H}(\boldsymbol{\mathscr{x}}_{t},\boldsymbol{\mathscr{y}}_{t},\lambda_{t})\right) \label{MP:action} \end{align}\tag{23}\] where \(\tilde{H}\) is the “pre-Hamiltonian” \[\begin{align} \tilde{H}(\boldsymbol{\mathscr{x}}_{t},\boldsymbol{\mathscr{y}}_{t},\lambda_{t})&= \mathscr{y}_{t}^{\scriptscriptstyle{(0)}} \left(\mathscr{x}_{t}^{\scriptscriptstyle{(3)}}+g \,V(\lambda_{t})\right)+2 \,\varepsilon\,\mathscr{y}_{t}^{\scriptscriptstyle{(1)}}\, \mathscr{x}_{t}^{\scriptscriptstyle{(2)}} + \mathscr{y}_{t}^{\scriptscriptstyle{(2)}}\left(- \mathscr{x}_{t}^{\scriptscriptstyle{(2)}}-\varepsilon \left(\mathscr{x}_{t}^{\scriptscriptstyle{(4)}}\mathscr{x}_{t}^{\scriptscriptstyle{(1)}} -\mathscr{x}_{t}^{\scriptscriptstyle{(3)}}\right) \right) \nonumber\\ & +2\,\mathscr{y}_{t}^{\scriptscriptstyle{(3)}}\, \left(1-\mathscr{x}_{t}^{\scriptscriptstyle{(3)}}-\varepsilon \,\mathscr{x}_{t}^{\scriptscriptstyle{(4)}}\, \mathscr{x}_{t}^{\scriptscriptstyle{(2)}} \right) +\mathscr{y}_{t}^{\scriptscriptstyle{(4)}}\lambda_{t} \label{MP:pH} \end{align}\tag{24}\] and \[\begin{align} c = \begin{cases} 0 & \text{case \ref{C1}} \\ 1 & \text{case \ref{C2}} \end{cases} \nonumber \end{align}\]

An important consequence of needle variations of the action functional in Mayer form (23 ) is that extremals can occur either when the Lagrange multiplier \(\mathscr{y}_{t}^{\scriptscriptstyle{(0)}}\) is a non-vanishing constant or when it vanishes almost everywhere during the control horizon. When \[\mathscr{y}_{t}^{\scriptscriptstyle{(0)}}=0\] almost everywhere, we encounter abnormal extremals that are independent of the form of the cost functional. Hence, the first step in determining necessary conditions for optimality is usually to assess the existence of abnormal extremals [35]. From the physical point of view, the cost functional discriminates between distinct thermodynamic transitions. We thus expect optimal controls to be determined by normal extremals corresponding to the condition \[\begin{align} \mathscr{y}_{t}^{\scriptscriptstyle{(0)}}=-1 \label{MP:normal} \end{align}\tag{25}\] recovering the Lagrange (case [C1]) or Bolza (case [C2]) forms of the cost functional. We defer a proof of this statement to section 13 and instead focus here on the analysis of normal extremals.

5.1 Normal extremals

Normal extremals obey the Hamilton equations \[\begin{align} & \begin{aligned} & \dot{\mathscr{x}}_{t}^{\scriptscriptstyle{(i)}}=(\partial_{\mathscr{y}_{t}^{\scriptscriptstyle{(i)}}}\tilde{H})(\boldsymbol{x}_{t},\boldsymbol{y}_{t},\lambda_{t}) \\ & \dot{\mathscr{y}}_{t}^{\scriptscriptstyle{(i)}}=-(\partial_{\mathscr{x}_{t}^{\scriptscriptstyle{(i)}}}\tilde{H})(\boldsymbol{x}_{t},\boldsymbol{y}_{t},\lambda_{t}) \end{aligned} && \mathfrak{i}=\mathfrak{1},\dots,\mathfrak{4} \label{MP:Heqs} \end{align}\tag{26}\] with the maximum condition \[\begin{align} \lambda_{t}= \arg \max_{l \in \mathrm{A}} \tilde{H}(\boldsymbol{x}_{t},\boldsymbol{y}_{t}, l) \label{MP:argmax} \end{align}\tag{27}\] where \(\mathrm{A}\) is the set of admissible controls. The terminal cost determines the boundary conditions of (26 ), (27 ) needed to fully specify the extremals.

5.1.1 [C1]: Transitions between equilibrium states

The mean heat release (11 ) does not include any terminal cost. The cumulants of the Gaussian Maxwell-Boltzmann distributions at the boundaries are assigned by (5 ), (6 ). Transitions between equilibrium states also satisfy (7 ), which gives the boundary conditions for the stiffness of the mechanical drift. In such a case, the numerical value of the mean heat dissipation also coincides with that of the mean work. If boundary values of the stiffness violate (7 ), the terminal states are not equilibria. This happens if we identify the control with the drift as done, for example, in [11], [45], [46].

5.1.2 [C2]: Transitions at minimum work.

We minimise the mean work (22 ) while again assigning all cumulants of the Gaussian Maxwell-Boltzmann end-of-horizon distributions in agreement with (5 ), (6 ). As in section 5.1.1, we assume that the system is in equilibrium at time \(t=0\). In this case, the minimisation is also performed over the terminal value of the stiffness. Stationary variation then corresponds to the condition \[\begin{align} \mathscr{y}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(4)}}=-\dfrac{\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(1)}}}{2} \label{noneq95bc} \end{align}\tag{28}\] We discuss the interpretation and consequences of this condition in section 10.2 below.

6 Analysis of regular extremals

We now describe the explicit form of the stationary equations arising from each class of admissible controls and penalty potentials.

6.1 Normal extremals with controls subject to a convex penalty

If the penalty potential is a convex function of its argument like in (18 ) or (19 ), enforcing the maximum condition (27 ) is equivalent to taking the Legendre transform of the pre-Hamiltonian with respect to the control [47]. The result is a Hamiltonian in the classical mechanics sense. Regardless of how we proceed, the optimal evolution necessarily satisfies the Hamiltonian system of differential equations \[\begin{align} & \begin{cases} \dot{\mathscr{x}}_{t}^{\scriptscriptstyle{(1)}}=2 \,\varepsilon\, \mathscr{x}_{t}^{\scriptscriptstyle{(2)}} \\ \dot{\mathscr{x}}_{t}^{\scriptscriptstyle{(2)}}=- \mathscr{x}_{t}^{\scriptscriptstyle{(2)}}-\varepsilon \left(\mathscr{x}_{t}^{\scriptscriptstyle{(4)}}\mathscr{x}_{t}^{\scriptscriptstyle{(1)}} -\mathscr{x}_{t}^{\scriptscriptstyle{(3)}}\right) \\ \dot{\mathscr{x}}_{t}^{\scriptscriptstyle{(3)}}=2 \left(1-\mathscr{x}_{t}^{\scriptscriptstyle{(3)}}-\varepsilon \,\mathscr{x}_{t}^{\scriptscriptstyle{(4)}}\, \mathscr{x}_{t}^{\scriptscriptstyle{(2)}} \right) \\ \dot{\mathscr{x}}_{t}^{\scriptscriptstyle{(4)}}= b(\mathscr{y}_{t}^{\scriptscriptstyle{(4)}}) \end{cases} && \begin{cases} \dot{\mathscr{y}}_{t}^{\scriptscriptstyle{(1)}}=\varepsilon\, \mathscr{y}_{t}^{\scriptscriptstyle{(2)}} \,\mathscr{x}_{t}^{\scriptscriptstyle{(4)}} \\ \dot{\mathscr{y}}_{t}^{\scriptscriptstyle{(2)}}=-2\,\varepsilon\,\mathscr{y}_{t}^{\scriptscriptstyle{(1)}}+\mathscr{y}_{t}^{\scriptscriptstyle{(2)}}+2\,\varepsilon\, \mathscr{y}_{t}^{\scriptscriptstyle{(3)}} \,\mathscr{x}_{t}^{\scriptscriptstyle{(4)}} \\ \dot{\mathscr{y}}_{t}^{\scriptscriptstyle{(3)}}=1-\varepsilon\, \mathscr{y}_{t}^{\scriptscriptstyle{(2)}}+2\,\mathscr{y}_{t}^{\scriptscriptstyle{(3)}} \\ \dot{\mathscr{y}}_{t}^{\scriptscriptstyle{(4)}}= \varepsilon\, \mathscr{y}_{t}^{\scriptscriptstyle{(2)}}\,\mathscr{x}_{t}^{\scriptscriptstyle{(1)}} +2\,\varepsilon \,\mathscr{y}_{t}^{\scriptscriptstyle{(3)}}\, \mathscr{x}_{t}^{\scriptscriptstyle{(2)}} \end{cases} \label{ext:eqs} \end{align}\tag{29}\] 
The drift steering the stiffness is \[\begin{align} b(y)= \begin{cases} \dfrac{g}{y}\left(\sqrt{1+\dfrac{\Lambda^{2}\,y^{2}}{g^{2}}}-1\right) & \text{for the logarithmic penalty } (\ref{penalty:log}) \\[0.4cm] \dfrac{\Lambda^2\, y}{2\,g} & \text{for the harmonic penalty } (\ref{penalty:harmonic}) \end{cases} \label{ext:Hvf} \end{align}\tag{30}\] We thus obtain a system of eight equations with eight boundary conditions determined either by section 5.1.1 or by section 5.1.2.
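
The adjoint equations in (29 ) can be checked mechanically by differentiating the pre-Hamiltonian (24 ). A SymPy sketch (with \(\mathscr{y}^{(0)}=-1\) for normal extremals, the penalty kept as an unspecified function of \(\lambda\), and \(\dot{\mathscr{x}}^{(4)}=\lambda\) before the maximum condition substitutes \(\lambda\to b(\mathscr{y}^{(4)})\)):

```python
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1:5')
y1, y2, y3, y4 = sp.symbols('y1:5')
lam, eps, g = sp.symbols('lambda epsilon g')
V = sp.Function('V')(lam)     # penalty kept as an unspecified function

# pre-Hamiltonian (24) evaluated on normal extremals, y0 = -1
H = (-(x3 + g * V) + 2 * eps * y1 * x2
     + y2 * (-x2 - eps * (x4 * x1 - x3))
     + 2 * y3 * (1 - x3 - eps * x4 * x2)
     + y4 * lam)

xdot = [sp.diff(H, y) for y in (y1, y2, y3, y4)]   # state equations
ydot = [-sp.diff(H, x) for x in (x1, x2, x3, x4)]  # adjoint equations
print(ydot)   # matches the right-hand column of (29)
```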

6.2 Controls in a compact set with no penalty

We now analyse (23 ) when \(g\) in the pre-Hamiltonian is equal to zero and controls are confined to a compact set as in (15 ). In such a case, optimal controls are generically obtained by combining two classes of extremal conditions [47].

6.2.1 Controls on the boundary of the compact set

The pre-Hamiltonian satisfies a stationary condition when evaluated on controls belonging to the boundary of the compact set (15 ). Correspondingly \[\begin{align} \arg \max_{l \in \mathrm{A}} \tilde{H}(\boldsymbol{x}_{t},\boldsymbol{y}_{t}, l)=\operatorname{sgn}(\mathscr{y}_{t}^{\scriptscriptstyle{(4)}}) \Lambda \label{push:bv} \end{align}\tag{31}\] specifies the drift \(b(\mathscr{y}_{t}^{\scriptscriptstyle{(4)}})\) in (29 ). An optimal control always satisfying (31 ) is commonly referred to as “bang-bang”.

6.2.2 Controls stemming from the differential stationary condition.

It is still, however, possible that the pre-Hamiltonian is stationary on controls belonging to the interior of the admissible set. In such a case \[\begin{align} (\partial_{\lambda_{t}}\tilde{H})(\boldsymbol{x}_{t},\boldsymbol{y}_{t},\lambda_{t})=0 \label{noact:a1} \end{align}\tag{32}\] implies \[\begin{align} \mathscr{y}_{t}^{\scriptscriptstyle{(4)}}=0 \label{noact:c1} \end{align}\tag{33}\] This is only an implicit condition on the control. The condition (33 ) may only hold temporarily during the control horizon, as we discuss below.

6.2.3 Synthesis

In general, optimal paths are obtained by gluing trajectories obeying (31 ) and (33 ) over sub-intervals of the control horizon. This operation is called synthesis and is required when (33 ) turns out to be more cost-efficient than a pure bang-bang control. Yet, (33 ) is not generically sufficient to ensure the fulfillment of the boundary conditions. To substantiate this last claim we proceed as in [39], [48]–[51]. In order for (33 ) to hold during a finite time interval, it must be a conservation law of the Hamiltonian dynamics (26 ) with control \(\lambda_{t}\) so far unspecified. We obtain \[\begin{align} 0= \dot{\mathscr{y}}_{t}^{\scriptscriptstyle{(4)}}= \varepsilon\, \mathscr{y}_{t}^{\scriptscriptstyle{(2)}}\,\mathscr{x}_{t}^{\scriptscriptstyle{(1)}} +2\,\varepsilon \,\mathscr{y}_{t}^{\scriptscriptstyle{(3)}}\, \mathscr{x}_{t}^{\scriptscriptstyle{(2)}} \label{noact:a2} \end{align}\tag{34}\] As the position variance \(\mathscr{x}_{t}^{\scriptscriptstyle{(1)}}\) is non-vanishing (a condition always satisfied by smooth densities), we can solve (34 ) for the Lagrange multiplier \(\mathscr{y}_{t}^{\scriptscriptstyle{(2)}}\). We find \[\begin{align} \mathscr{y}_{t}^{\scriptscriptstyle{(2)}}=-\frac{2\,\mathscr{y}_{t}^{\scriptscriptstyle{(3)}}\, \mathscr{x}_{t}^{\scriptscriptstyle{(2)}} }{\mathscr{x}_{t}^{\scriptscriptstyle{(1)}}} \label{noact:c2} \end{align}\tag{35}\] The condition (34 ) does not directly involve the control \(\lambda_{t}\). Again, it can hold on a finite time interval only if it is preserved by the dynamics. 
Upon differentiating it along trajectories satisfying (26 ) and using (35 ), we arrive at \[\begin{align} 0=-2\,\varepsilon\,\mathscr{y}_{t}^{\scriptscriptstyle{(1)}}\,\mathscr{x}_{t}^{\scriptscriptstyle{(1)}} +2\,\mathscr{x}_{t}^{\scriptscriptstyle{(2)}}+2\,\varepsilon\,\mathscr{y}_{t}^{\scriptscriptstyle{(3)}}\,\mathscr{x}_{t}^{\scriptscriptstyle{(3)}} \label{noact:a3} \end{align}\tag{36}\] We solve for the Lagrange multiplier \[\begin{align} \mathscr{y}_{t}^{\scriptscriptstyle{(1)}}=\frac{\mathscr{x}_{t}^{\scriptscriptstyle{(2)}}+\varepsilon\,\mathscr{y}_{t}^{\scriptscriptstyle{(3)}}\,\mathscr{x}_{t}^{\scriptscriptstyle{(3)}}}{\varepsilon\,\mathscr{x}_{t}^{\scriptscriptstyle{(1)}}} \label{noact:c3} \end{align}\tag{37}\] Again, (36 ) does not yield any explicit condition on the control. We differentiate it again along trajectories satisfying (26 ). Upon inserting (35 ) and (37 ) into the result, we get \[\begin{align} 0 =\mathscr{x}_{t}^{\scriptscriptstyle{(1)}} \left(\mathscr{x}_{t}^{\scriptscriptstyle{(2)}}-2\,\varepsilon \, \left( \mathscr{x}_{t}^{\scriptscriptstyle{(3)}}+ \,\mathscr{y}_{t}^{\scriptscriptstyle{(3)}}\right)\right)+\varepsilon \, \mathscr{x}_{t}^{\scriptscriptstyle{(4)}} \mathscr{x}_{t}^{\scriptscriptstyle{(1)}}{}^2+2\, \varepsilon \, \mathscr{x}_{t}^{\scriptscriptstyle{(2)}}{}^2 \label{noact:a4} \end{align}\tag{38}\] Solving for the Lagrange multiplier, we get \[\begin{align} \mathscr{y}_{t}^{\scriptscriptstyle{(3)}}=\frac{1}{2} \left(\frac{\mathscr{x}_{t}^{\scriptscriptstyle{(2)}}}{\varepsilon\, }+\frac{2 \mathscr{x}_{t}^{\scriptscriptstyle{(2)}}{}^2}{\mathscr{x}_{t}^{\scriptscriptstyle{(1)}}}-2 \mathscr{x}_{t}^{\scriptscriptstyle{(3)}}+\mathscr{x}_{t}^{\scriptscriptstyle{(1)}} \mathscr{x}_{t}^{\scriptscriptstyle{(4)}}\right) \label{noact:c4} \end{align}\tag{39}\] Repeating this reasoning one last time: differentiating (38 ), and using (35 ), (37 ), and (39 ) yields \[\begin{align} 0=-\frac{8\, \varepsilon^2 
\,\mathscr{x}_{t}^{\scriptscriptstyle{(2)}}{}^3}{\mathscr{x}_{t}^{\scriptscriptstyle{(1)}}}+\varepsilon\, \mathscr{x}_{t}^{\scriptscriptstyle{(1)}}{}^2 \left(\lambda_{t} -3 \,\mathscr{x}_{t}^{\scriptscriptstyle{(4)}}\right)+2 \varepsilon \mathscr{x}_{t}^{\scriptscriptstyle{(2)}} \left(4 \,\varepsilon \,\mathscr{x}_{t}^{\scriptscriptstyle{(3)}}-5 \mathscr{x}_{t}^{\scriptscriptstyle{(2)}}\right)+\mathscr{x}_{t}^{\scriptscriptstyle{(1)}} \left(9 \,\varepsilon \mathscr{x}_{t}^{\scriptscriptstyle{(3)}}-3 \mathscr{x}_{t}^{\scriptscriptstyle{(2)}}-6 \varepsilon \right) \nonumber \end{align}\] which can be finally solved for the control \[\begin{align} \lambda_{t}=\frac{8 \,\varepsilon \, \mathscr{x}_{t}^{\scriptscriptstyle{(2)}}{}^3}{\mathscr{x}_{t}^{\scriptscriptstyle{(1)}}{}^3}+\frac{2 \,\mathscr{x}_{t}^{\scriptscriptstyle{(2)}} \left(5 \,\mathscr{x}_{t}^{\scriptscriptstyle{(2)}}-4 \,\varepsilon \, \mathscr{x}_{t}^{\scriptscriptstyle{(3)}}\right)}{\mathscr{x}_{t}^{\scriptscriptstyle{(1)}}{}^2}+\frac{3 \,\mathscr{x}_{t}^{\scriptscriptstyle{(2)}}-9\,\varepsilon \, \mathscr{x}_{t}^{\scriptscriptstyle{(3)}}+6\,\varepsilon }{\varepsilon \,\mathscr{x}_{t}^{\scriptscriptstyle{(1)}}}+3 \mathscr{x}_{t}^{\scriptscriptstyle{(4)}} \nonumber \end{align}\] The upshot is that the stationary condition (32 ) yields the differential system \[\begin{align} \begin{aligned} & \dot{\mathscr{x}}_{t}^{\scriptscriptstyle{(1)}}=2 \,\varepsilon\, \mathscr{x}_{t}^{\scriptscriptstyle{(2)}} \\ & \dot{\mathscr{x}}_{t}^{\scriptscriptstyle{(2)}}=- \mathscr{x}_{t}^{\scriptscriptstyle{(2)}}-\varepsilon \left(\mathscr{x}_{t}^{\scriptscriptstyle{(4)}}\mathscr{x}_{t}^{\scriptscriptstyle{(1)}} -\mathscr{x}_{t}^{\scriptscriptstyle{(3)}}\right) \\ & \dot{\mathscr{x}}_{t}^{\scriptscriptstyle{(3)}}=2 \left(1-\mathscr{x}_{t}^{\scriptscriptstyle{(3)}}-\varepsilon \,\mathscr{x}_{t}^{\scriptscriptstyle{(4)}}\, \mathscr{x}_{t}^{\scriptscriptstyle{(2)}} \right) \\ & \dot{\mathscr{x}}_{t}^{\scriptscriptstyle{(4)}}= 
\dfrac{8 \,\varepsilon \, \mathscr{x}_{t}^{\scriptscriptstyle{(2)}}{}^3}{\mathscr{x}_{t}^{\scriptscriptstyle{(1)}}{}^3}+\dfrac{2 \,\mathscr{x}_{t}^{\scriptscriptstyle{(2)}} \left(5 \,\mathscr{x}_{t}^{\scriptscriptstyle{(2)}}-4 \,\varepsilon \, \mathscr{x}_{t}^{\scriptscriptstyle{(3)}}\right)}{\mathscr{x}_{t}^{\scriptscriptstyle{(1)}}{}^2}+\dfrac{3 \,\mathscr{x}_{t}^{\scriptscriptstyle{(2)}}-9\,\varepsilon \, \mathscr{x}_{t}^{\scriptscriptstyle{(3)}}+6\,\varepsilon }{\varepsilon \,\mathscr{x}_{t}^{\scriptscriptstyle{(1)}}}+3 \mathscr{x}_{t}^{\scriptscriptstyle{(4)}} \end{aligned} \label{noact:reqs} \end{align}\tag{40}\] complemented by the algebraic equations (33 ), (35 ), (37 ), and (39 ). The system (40 ) alone can only satisfy four out of the eight boundary conditions specifying an extremal, hence the need for synthesis.
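
The chain of conservation-law conditions above lends itself to symbolic verification. The following SymPy sketch differentiates the quantity appearing in (34 ) along the flow (29 ) (with the stiffness drift left as a free control), imposes (35 ), and recovers (36 ):

```python
import sympy as sp

x1, x2, x3, x4, y1, y2, y3, y4, lam, eps = sp.symbols('x1:5 y1:5 lambda epsilon')

# Hamiltonian dynamics (29); the stiffness drift is left as the free control λ
xdot = {x1: 2*eps*x2,
        x2: -x2 - eps*(x4*x1 - x3),
        x3: 2*(1 - x3 - eps*x4*x2),
        x4: lam}
ydot = {y1: eps*y2*x4,
        y2: -2*eps*y1 + y2 + 2*eps*y3*x4,
        y3: 1 - eps*y2 + 2*y3,
        y4: eps*y2*x1 + 2*eps*y3*x2}

def dt(expr):
    # total time derivative along the flow (29)
    return sum(sp.diff(expr, v) * rhs for v, rhs in {**xdot, **ydot}.items())

f2 = y2*x1 + 2*y3*x2                 # (34): the vanishing factor of ẏ(4)
df2 = dt(f2).subs(y2, -2*y3*x2/x1)   # differentiate, then impose (35)
print(sp.simplify(df2 - (-2*eps*y1*x1 + 2*x2 + 2*eps*y3*x3)))  # → 0, i.e. (36)
```

Iterating the same computation reproduces (38 ) and the explicit expression for \(\lambda_{t}\) quoted above.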

7 Centre manifold analysis

Physical intuition suggests that the optimal control models in Section 6 describe qualitatively equivalent experimental setups. Our aim here is to quantitatively corroborate this intuition by studying the limit of vanishing \(g\). To this end, we draw from the supplementary material of [45]. There, centre (also known as invariant) manifold analysis [52], [53] is used to identify universal properties of mean dissipation minimising protocols in underdamped transitions between Gaussian states when the control is the mechanical potential.

The starting observation is that, as \(g\) tends to zero, (30 ) is not sensitive to the penalty mechanism as long as \[\begin{align} \left|\mathscr{y}_{t}^{\scriptscriptstyle{(4)}}\right| \,\lesssim\, g \nonumber \end{align}\] In this regime and within leading order in \(g\), the dynamics stays close to a centre manifold approximately corresponding to (33 ). The manifold is a centre manifold: it has an equal number of exponentially stable and unstable directions along which it can be reached or left by extremals in order to satisfy the boundary conditions. The universal properties we set out to exhibit emerge when the time evolution mostly occurs close to the centre manifold. Overall, for \(g\) tending to zero the dynamics becomes slow-fast, swiftly connecting the centre manifold either to the boundary conditions or to the penalty-dependent regimes qualitatively similar to those produced by (31 ).
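
The two drifts in (30 ) coincide to leading order in the region where \(\mathscr{y}^{(4)}\) is small compared to \(g\), while well outside it the logarithmic drift saturates at the bang-bang value \(\Lambda\operatorname{sgn}(y)\) of (31 ). A small numerical sketch (illustrative values, with \(\Lambda=\sqrt{2}\) as used for the harmonic penalty in section 7.1):

```python
import numpy as np

LAM = np.sqrt(2.0)   # Λ = √2, the value adopted for the harmonic penalty

def b_log(y, g):
    # drift (30) for the logarithmic penalty (18)
    return (g / y) * (np.sqrt(1.0 + (LAM * y / g) ** 2) - 1.0)

def b_harm(y, g):
    # drift (30) for the harmonic penalty (19)
    return LAM ** 2 * y / (2.0 * g)

g = 1e-3
print(b_log(0.1 * g, g), b_harm(0.1 * g, g))   # agree for |y| below g
print(b_log(0.3, g), LAM)                      # saturation at Λ sgn(y) above g
```

Conversely, for \(|y|\gg g\) the harmonic drift grows without bound: the off-manifold behaviour is where the choice of penalty is felt.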

7.1 Dynamics in Fenichel’s coordinates

The first step is to introduce Fenichel’s coordinates [30] (see [31], [32] for recent reviews) in order to simply characterise the centre manifold. Based on the analysis of section 6.2.2, we express the extremals in Fenichel’s coordinates by setting \[\begin{align} & \mathscr{f}_{t}^{\scriptscriptstyle{(1)}}=\mathscr{y}_{t}^{\scriptscriptstyle{(4)}} \nonumber\\ & \mathscr{f}_{t}^{\scriptscriptstyle{(2)}}=\mathscr{y}_{t}^{\scriptscriptstyle{(2)}}\,\mathscr{x}_{t}^{\scriptscriptstyle{(1)}} +2 \,\mathscr{y}_{t}^{\scriptscriptstyle{(3)}}\, \mathscr{x}_{t}^{\scriptscriptstyle{(2)}} \nonumber \end{align}\] and \[\begin{align} \mathscr{f}_{t}^{\scriptscriptstyle{(3)}}&= \mathscr{x}_{t}^{\scriptscriptstyle{(2)}}+\varepsilon\,\mathscr{y}_{t}^{\scriptscriptstyle{(3)}}\,\mathscr{x}_{t}^{\scriptscriptstyle{(3)}}-\varepsilon\,\mathscr{y}_{t}^{\scriptscriptstyle{(1)}}\,\mathscr{x}_{t}^{\scriptscriptstyle{(1)}} \nonumber\\ \mathscr{f}_{t}^{\scriptscriptstyle{(4)}}&=\mathscr{x}_{t}^{\scriptscriptstyle{(1)}} \left(\mathscr{x}_{t}^{\scriptscriptstyle{(2)}}-2 \,\varepsilon \, \left( \mathscr{x}_{t}^{\scriptscriptstyle{(3)}}+\mathscr{y}_{t}^{\scriptscriptstyle{(3)}}\right)\right)+\varepsilon \,\mathscr{x}_{t}^{\scriptscriptstyle{(4)}} \mathscr{x}_{t}^{\scriptscriptstyle{(1)}}{}^2+2\, \varepsilon \, \mathscr{x}_{t}^{\scriptscriptstyle{(2)}}{}^2 \nonumber \end{align}\] and solving for the Lagrange multipliers. In doing so, we take advantage of the fact that the Lagrange multipliers obey linear dynamics by construction. 
After straightforward calculations we arrive at the set of equations: \[\begin{align} \begin{aligned} \dot{\mathscr{f}}_{t}^{\scriptscriptstyle{(1)}}&=\varepsilon\, \mathscr{f}_{t}^{\scriptscriptstyle{(2)}} \\ \dot{\mathscr{f}}_{t}^{\scriptscriptstyle{(2)}}&=\mathscr{f}_{t}^{\scriptscriptstyle{(2)}}+2\, \mathscr{f}_{t}^{\scriptscriptstyle{(3)}} \\ \dot{\mathscr{f}}_{t}^{\scriptscriptstyle{(3)}}&=-\frac{\varepsilon\, \left(\varepsilon\, \mathscr{f}_{t}^{\scriptscriptstyle{(2)}} \left(\mathscr{x}_{t}^{\scriptscriptstyle{(3)}}+\mathscr{x}_{t}^{\scriptscriptstyle{(1)}} \mathscr{x}_{t}^{\scriptscriptstyle{(4)}}\right)-2 \,\mathscr{f}_{t}^{\scriptscriptstyle{(3)}} \mathscr{x}_{t}^{\scriptscriptstyle{(2)}}\right)+\mathscr{f}_{t}^{\scriptscriptstyle{(4)}}}{\mathscr{x}_{t}^{\scriptscriptstyle{(1)}}} \\ \dot{\mathscr{f}}_{t}^{\scriptscriptstyle{(4)}}&=\frac{\varepsilon\, \mathscr{f}_{t}^{\scriptscriptstyle{(1)}} \mathscr{x}_{t}^{\scriptscriptstyle{(1)}}{}^2}{g}+\varepsilon\, \delta_{t}\,\mathscr{x}_{t}^{\scriptscriptstyle{(1)}}{}^2+\mathscr{f}_{t}^{\scriptscriptstyle{(4)}} \left(\frac{4 \,\varepsilon\, \mathscr{x}_{t}^{\scriptscriptstyle{(2)}}}{\mathscr{x}_{t}^{\scriptscriptstyle{(1)}}}+2\right)+2 \,\varepsilon^2 \,\mathscr{f}_{t}^{\scriptscriptstyle{(2)}} \\ & +2\,\varepsilon\, \mathscr{x}_{t}^{\scriptscriptstyle{(2)}}\left(4\,\varepsilon\,\mathscr{x}_{t}^{\scriptscriptstyle{(3)}} -5\,\mathscr{x}_{t}^{\scriptscriptstyle{(2)}}\right) +\mathscr{x}_{t}^{\scriptscriptstyle{(1)}}\left(9\,\varepsilon\,\mathscr{x}_{t}^{\scriptscriptstyle{(3)}}-3\,\mathscr{x}_{t}^{\scriptscriptstyle{(2)}}-6\,\varepsilon\right)-3\, \varepsilon\, \mathscr{x}_{t}^{\scriptscriptstyle{(1)}}{}^2 \,\mathscr{x}_{t}^{\scriptscriptstyle{(4)}} -\frac{8\, \varepsilon^2 \,\mathscr{x}_{t}^{\scriptscriptstyle{(2)}}{}^3}{\mathscr{x}_{t}^{\scriptscriptstyle{(1)}}} \end{aligned} \label{centre:f} \end{align}\tag{41}\] and \[\begin{align} \begin{aligned} \dot{\mathscr{x}}_{t}^{\scriptscriptstyle{(1)}}&=2 
\,\varepsilon\, \mathscr{x}_{t}^{\scriptscriptstyle{(2)}} \\ \dot{\mathscr{x}}_{t}^{\scriptscriptstyle{(2)}}&=\varepsilon\, \left(\mathscr{x}_{t}^{\scriptscriptstyle{(3)}}-\mathscr{x}_{t}^{\scriptscriptstyle{(1)}} \mathscr{x}_{t}^{\scriptscriptstyle{(4)}}\right)-\mathscr{x}_{t}^{\scriptscriptstyle{(2)}} \\ \dot{\mathscr{x}}_{t}^{\scriptscriptstyle{(3)}}&=-2 \left(\varepsilon\, \mathscr{x}_{t}^{\scriptscriptstyle{(2)}} \mathscr{x}_{t}^{\scriptscriptstyle{(4)}}+\mathscr{x}_{t}^{\scriptscriptstyle{(3)}}-1\right) \\ \dot{\mathscr{x}}_{t}^{\scriptscriptstyle{(4)}}&= \frac{\mathscr{f}_{t}^{\scriptscriptstyle{(1)}}}{g} +\delta_{t} \end{aligned} \label{centre:x} \end{align}\tag{42}\] In (41 ), (42 ) we define \[\begin{align} \delta_{t}=b(\mathscr{f}_{t}^{\scriptscriptstyle{(1)}})-\frac{\mathscr{f}_{t}^{\scriptscriptstyle{(1)}}}{g} \nonumber \end{align}\] This quantity is responsible for non-linear corrections to the dynamics in the subspace complementary to the centre manifold. For this reason, in what follows, we focus only on the harmonic penalty (19 ) with \(\Lambda=\sqrt{2}\), for which \(\delta_t=0\). The aim is to reduce (41 ), (42 ) to a simpler slow-fast system of equations that, while still satisfying arbitrary boundary conditions, captures universal traits of the dynamics independently of the penalty imposed on the controls. The reader not interested in the technical details of the analysis may want to proceed directly to section 7.5.

7.2 Scaling analysis

The system of differential equations formed by (41 ), (42 ) is singular as \(g\) tends to zero. In order to study the limit, we need to find a proper rescaling of time and dynamical variables which allows us to expand the dynamics around \(g\) equal to zero. To this end, insight from [45] suggests identifying \[\begin{align} h=g^{1/4} \nonumber \end{align}\] as the order parameter of the expansion, and describing the evolution in terms of \[\begin{align} & \begin{aligned} & \mathscr{f}_{t}^{\scriptscriptstyle{(i)}}=h^{5-i}\phi_{\mathscr{t}}^{\scriptscriptstyle{(i)}} \\ & \mathscr{x}_{t}^{\scriptscriptstyle{(i)}}=\xi_{\mathscr{t}}^{\scriptscriptstyle{(i)}} \end{aligned} &&i=1,\dots,4 \nonumber \end{align}\] depending on the “fast” time variable \[\begin{align} \mathscr{t}=\frac{t}{h} \nonumber \end{align}\] The starting point of the expansion becomes the system of equations \[\begin{align} & \frac{\mathrm{d}\phi_{\mathscr{t}}^{\scriptscriptstyle{(1)}}}{\mathrm{d}\mathscr{t}} =\varepsilon\, \phi_{\mathscr{t}}^{\scriptscriptstyle{(2)}} \nonumber\\ & \frac{\mathrm{d}\phi_{\mathscr{t}}^{\scriptscriptstyle{(2)}}}{\mathrm{d}\mathscr{t}}=h \phi_{\mathscr{t}}^{\scriptscriptstyle{(2)}}+2 \phi_{\mathscr{t}}^{\scriptscriptstyle{(3)}} \nonumber\\ & \frac{\mathrm{d}\phi_{\mathscr{t}}^{\scriptscriptstyle{(3)}}}{\mathrm{d}\mathscr{t}}=-\frac{h^2 \varepsilon\, ^2 \left(\xi_{\mathscr{t}}^{\scriptscriptstyle{(3)}}+\xi_{\mathscr{t}}^{\scriptscriptstyle{(1)}} \xi_{\mathscr{t}}^{\scriptscriptstyle{(4)}}\right) \phi_{\mathscr{t}}^{\scriptscriptstyle{(2)}}}{\xi_{\mathscr{t}}^{\scriptscriptstyle{(1)}}}+\frac{2 h \varepsilon\, \xi_{\mathscr{t}}^{\scriptscriptstyle{(2)}} \phi_{\mathscr{t}}^{\scriptscriptstyle{(3)}}}{\xi_{\mathscr{t}}^{\scriptscriptstyle{(1)}}}-\frac{\phi_{\mathscr{t}}^{\scriptscriptstyle{(4)}}}{\xi_{\mathscr{t}}^{\scriptscriptstyle{(1)}}} \nonumber\\ & \frac{\mathrm{d}\phi_{\mathscr{t}}^{\scriptscriptstyle{(4)}}}{\mathrm{d}\mathscr{t}}= \varepsilon\,
\xi_{\mathscr{t}}^{\scriptscriptstyle{(1)}}{}^2 \,\phi_{\mathscr{t}}^{\scriptscriptstyle{(1)}}+ 2\, h^3\, \varepsilon\, ^2 \phi_{\mathscr{t}}^{\scriptscriptstyle{(2)}}+2 \,h \,\phi_{\mathscr{t}}^{\scriptscriptstyle{(4)}} \left(\frac{2 \varepsilon\, \xi_{\mathscr{t}}^{\scriptscriptstyle{(2)}}}{\xi_{\mathscr{t}}^{\scriptscriptstyle{(1)}}}+1\right) \nonumber\\ & +2\,\varepsilon\,\xi_{\mathscr{t}}^{\scriptscriptstyle{(2)}}\left(4\,\varepsilon\,\xi_{\mathscr{t}}^{\scriptscriptstyle{(3)}}-5\,\xi_{\mathscr{t}}^{\scriptscriptstyle{(2)}}\right)+\xi_{\mathscr{t}}^{\scriptscriptstyle{(1)}} \left(9 \varepsilon\, \xi_{\mathscr{t}}^{\scriptscriptstyle{(3)}}-3 \xi_{\mathscr{t}}^{\scriptscriptstyle{(2)}}-6 \varepsilon\, \right)-3\,\varepsilon\, \xi_{\mathscr{t}}^{\scriptscriptstyle{(1)}}{}^2\, \xi_{\mathscr{t}}^{\scriptscriptstyle{(4)}}-\frac{8\, \varepsilon\, ^2 \xi_{\mathscr{t}}^{\scriptscriptstyle{(2)}}{}^3}{\xi_{\mathscr{t}}^{\scriptscriptstyle{(1)}}} \nonumber \end{align}\] and \[\begin{align} & \frac{\mathrm{d}\xi_{\mathscr{t}}^{\scriptscriptstyle{(1)}}}{\mathrm{d}\mathscr{t}}=2\, h \,\varepsilon\, \xi_{\mathscr{t}}^{\scriptscriptstyle{(2)}} \nonumber\\ & \frac{\mathrm{d}\xi_{\mathscr{t}}^{\scriptscriptstyle{(2)}}}{\mathrm{d}\mathscr{t}}=h \left(\varepsilon\, \big{(}\xi_{\mathscr{t}}^{\scriptscriptstyle{(3)}}-\xi_{\mathscr{t}}^{\scriptscriptstyle{(1)}} \xi_{\mathscr{t}}^{\scriptscriptstyle{(4)}}\big{)}-\xi_{\mathscr{t}}^{\scriptscriptstyle{(2)}}\right) \nonumber\\ &\frac{\mathrm{d}\xi_{\mathscr{t}}^{\scriptscriptstyle{(3)}}}{\mathrm{d}\mathscr{t}}=-\,2 \,h\, \left(\varepsilon\, \xi_{\mathscr{t}}^{\scriptscriptstyle{(2)}} \xi_{\mathscr{t}}^{\scriptscriptstyle{(4)}}+\xi_{\mathscr{t}}^{\scriptscriptstyle{(3)}}-1\right) \nonumber\\ & \frac{\mathrm{d}\xi_{\mathscr{t}}^{\scriptscriptstyle{(4)}}}{\mathrm{d}\mathscr{t}}=h\, \phi_{\mathscr{t}}^{\scriptscriptstyle{(1)}} \nonumber \end{align}\] We look for a solution of the above system of differential equations in the form 
\[\begin{align} & \begin{aligned} & \phi_{\mathscr{t}}^{\scriptscriptstyle{(i)}}=\phi_{\mathscr{t},t}^{\scriptscriptstyle{(i:0)}}+h\,\phi_{\mathscr{t},t}^{\scriptscriptstyle{(i:1)}}+O(h^{2}) \nonumber\\ &\xi_{\mathscr{t}}^{\scriptscriptstyle{(i)}}=\xi_{\mathscr{t},t}^{\scriptscriptstyle{(i:0)}}+h\,\xi_{\mathscr{t},t}^{\scriptscriptstyle{(i:1)}}+O(h^{2}) \end{aligned} && i=1,\dots,4 \nonumber \end{align}\] The time derivative with respect to \(\mathscr{t}\) acts on the expansion as \[\begin{align} \frac{\mathrm{d}}{\mathrm{d} \mathscr{t}}=\partial_{\mathscr{t}}+h \,\partial_{t} \nonumber \end{align}\] According to multiscale perturbation theory [54], we determine the dependence upon the “slow time” \(t\) by subtracting resonant terms from the regular expansion in powers of \(h\).

7.3 Zero order solution

At zero order, we get \[\begin{align} & \partial_{\mathscr{t}} \xi_{\mathscr{t},t}^{\scriptscriptstyle{(i:0)}}=0 && i=1,\dots,4 \nonumber \end{align}\] which shows that the expansion of the cumulants starts with pure functions of the “slow” time \(t\): \[\begin{align} & \xi_{\mathscr{t},t}^{\scriptscriptstyle{(i:0)}}=X_{t}^{\scriptscriptstyle{(i)}} && i=1,\dots,4 \nonumber \end{align}\] As expected from our qualitative analysis, only Fenichel’s coordinates depend on the “fast” time \(\mathscr{t}\) and satisfy the linear non-homogeneous equation \[\begin{align} \partial_{\mathscr{t}}\begin{bmatrix} \phi_{\mathscr{t},t}^{\scriptscriptstyle{(1:0)}} \\ \phi_{\mathscr{t},t}^{\scriptscriptstyle{(2:0)}} \\ \phi_{\mathscr{t},t}^{\scriptscriptstyle{(3:0)}} \\ \phi_{\mathscr{t},t}^{\scriptscriptstyle{(4:0)}} \end{bmatrix} =\mathsf{A}^{\scriptscriptstyle{(0)}} \begin{bmatrix} \phi_{\mathscr{t},t}^{\scriptscriptstyle{(1:0)}} \\ \phi_{\mathscr{t},t}^{\scriptscriptstyle{(2:0)}} \\ \phi_{\mathscr{t},t}^{\scriptscriptstyle{(3:0)}} \\ \phi_{\mathscr{t},t}^{\scriptscriptstyle{(4:0)}} \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 0 \\ F_{t}^{\scriptscriptstyle{(0:0)}} \end{bmatrix} \label{center:zero} \end{align}\tag{43}\] where the \(4\,\times\,4\) real matrix \[\begin{align} \mathsf{A}^{\scriptscriptstyle{(0)}}= \begin{bmatrix} 0 & \varepsilon & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & -\,\frac{1}{X_{t}^{\scriptscriptstyle{(1)}}} \\ \varepsilon\, X_{t}^{\scriptscriptstyle{(1)}}{}^2 & 0 & 0 & 0 \end{bmatrix} \nonumber \end{align}\] and the non-homogeneous term \[\begin{align} F_{t}^{\scriptscriptstyle{(0:0)}}&=2\,\varepsilon\,X_{t}^{\scriptscriptstyle{(2)}}\,\left(4\,\varepsilon\,X_{t}^{\scriptscriptstyle{(3)}}-5\,X_{t}^{\scriptscriptstyle{(2)}}\right) \nonumber\\ & +X_{t}^{\scriptscriptstyle{(1)}} \left(9\, \varepsilon\, X_{t}^{\scriptscriptstyle{(3)}}-3\, X_{t}^{\scriptscriptstyle{(2)}}-6\, \varepsilon\, \right) -3 \,\varepsilon\,
X_{t}^{\scriptscriptstyle{(1)}}{}^2 \,X_{t}^{\scriptscriptstyle{(4)}} -\frac{8\, \varepsilon\, ^2 X_{t}^{\scriptscriptstyle{(2)}}{}^3}{X_{t}^{\scriptscriptstyle{(1)}}} \label{center:nh} \end{align}\tag{44}\] are constant with respect to the fast time \(\mathscr{t}\). The system of differential equations (43 ) is thus time-autonomous and we can integrate it by diagonalizing \(\mathsf{A}^{\scriptscriptstyle{(0)}}\). The roots of the fourth-order characteristic polynomial occur in pairs of complex conjugate eigenvalues that fortunately admit an elementary expression \[\begin{align} \operatorname{Sp}\mathsf{A}^{\scriptscriptstyle{(0)}}=a\,\left\{ - (1+\imath)\,, -(1-\imath), 1-\imath\,, 1+\imath\right\} \nonumber \end{align}\] with \[\begin{align} a= \left (\frac{\varepsilon^{2}\,X_{t}^{\scriptscriptstyle{(1)}}}{2}\right )^{1/4} \nonumber \end{align}\] corresponding to an exponentially stable and an exponentially unstable focus. We define \[\begin{align} \begin{aligned} & \phi_{\mathscr{t}}^{\scriptscriptstyle{(1)}}=e^{-a\,(1+\imath)\,\mathscr{t}}\left(\Phi_{0,t}^{\scriptscriptstyle{(1)}}-\frac{F_{t}^{\scriptscriptstyle{(0:0)}}}{a\,(1+\imath)}\right)+\frac{F_{t}^{\scriptscriptstyle{(0:0)}}}{a\,(1+\imath)} \\ &\phi_{\mathscr{t}}^{\scriptscriptstyle{(2)}}=e^{a\,(1-\imath)\,\mathscr{t}}\left(\Phi_{0,t}^{\scriptscriptstyle{(2)}}+\frac{F_{t}^{\scriptscriptstyle{(0:0)}}}{a\,(1-\imath)}\right)-\frac{F_{t}^{\scriptscriptstyle{(0:0)}}}{a\,(1-\imath)} \end{aligned} \label{center:zerosol} \end{align}\tag{45}\] and, after standard calculations, we find \[\begin{align} \phi_{\mathscr{t},t}^{\scriptscriptstyle{(1:0)}}&=2^{3/4}\frac{\operatorname{Im}\left(\Phi_{t}^{\scriptscriptstyle{(1)}}+\Phi_{t}^{\scriptscriptstyle{(2)}}\right)-\operatorname{Re}\left(\Phi_{t}^{\scriptscriptstyle{(1)}}-\Phi_{t}^{\scriptscriptstyle{(2)}}\right) }{\varepsilon^{1/2} X_{t}^{\scriptscriptstyle{(1)}}{}^{7/4}} \nonumber\\ \phi_{\mathscr{t},t}^{\scriptscriptstyle{(2:0)}}& =\frac{2^{3/2}}{\varepsilon X_{t}^{\scriptscriptstyle{(1)}}{}^{3/2}}\operatorname{Im} \left(\phi_{\mathscr{t}}^{\scriptscriptstyle{(2)}}-\phi_{\mathscr{t}}^{\scriptscriptstyle{(1)}}\right) \nonumber\\ \phi_{\mathscr{t},t}^{\scriptscriptstyle{(3:0)}}&=-\frac{2^{1/4}\,\operatorname{Im}\left(\Phi_{t}^{\scriptscriptstyle{(1)}}+\Phi_{t}^{\scriptscriptstyle{(2)}}\right)}{\varepsilon^{1/2} X_{t}^{\scriptscriptstyle{(1)}}{}^{5/4}\,\imath} \nonumber\\ \phi_{\mathscr{t},t}^{\scriptscriptstyle{(4:0)}}& =2 \,\operatorname{Re}\left(\phi_{\mathscr{t}}^{\scriptscriptstyle{(1)}}+\phi_{\mathscr{t}}^{\scriptscriptstyle{(2)}}\right) \nonumber \end{align}\] Inspection of (45 ) shows that the conditions \[\begin{align} \begin{aligned} & \Phi_{0,t}^{\scriptscriptstyle{(1:0)}}-\frac{F_{t}^{\scriptscriptstyle{(0:0)}}}{a\,(1+\imath)}=0 \\ &\Phi_{0,t}^{\scriptscriptstyle{(2:0)}}+\frac{F_{t}^{\scriptscriptstyle{(0:0)}}}{a\,(1-\imath)}=0 \end{aligned} \label{center:sm} \end{align}\tag{46}\] specify a centre (or slow) manifold with respect to the fast time \(\mathscr{t}\). On the slow manifold, the solution of (43 ) reduces to \[\begin{align} \phi_{\mathscr{t},t}^{\scriptscriptstyle{(1:0)}}& =-\frac{F_{t}^{\scriptscriptstyle{(0:0)}}}{\varepsilon\,X_{t}^{\scriptscriptstyle{(1)}}{}^{2}} \nonumber\\ \phi_{\mathscr{t},t}^{\scriptscriptstyle{(2:0)}}&= \phi_{\mathscr{t},t}^{\scriptscriptstyle{(3:0)}} = \phi_{\mathscr{t},t}^{\scriptscriptstyle{(4:0)}}=0 \nonumber \end{align}\] i.e. the stationary solution (fixed point) of (43 ).
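The spectrum quoted above follows from the cyclic structure of \(\mathsf{A}^{\scriptscriptstyle{(0)}}\): every eigenvalue satisfies \(\lambda^{4}=-2\,\varepsilon^{2}\,X_{t}^{\scriptscriptstyle{(1)}}\). A minimal numerical sketch of this check (the values of \(\varepsilon\) and \(X_{t}^{\scriptscriptstyle{(1)}}\) are illustrative):

```python
import numpy as np

# Illustrative values for the inverse damping eps and the slow variance X1.
eps, X1 = 1.0, 1.5

# The 4x4 matrix A^(0) driving the fast Fenichel coordinates, Eq. (43).
A0 = np.array([
    [0.0,          eps, 0.0,  0.0],
    [0.0,          0.0, 2.0,  0.0],
    [0.0,          0.0, 0.0, -1.0 / X1],
    [eps * X1**2,  0.0, 0.0,  0.0],
])

# A^(0) is a weighted cyclic shift, so every eigenvalue obeys
#   lambda^4 = eps * 2 * (-1/X1) * (eps * X1^2) = -2 * eps^2 * X1,
# solved by lambda in a * {-(1+i), -(1-i), 1-i, 1+i} with
# a = (eps^2 * X1 / 2)**(1/4).
a = (eps**2 * X1 / 2.0) ** 0.25
key = lambda z: (round(z.real, 8), z.imag)  # robust ordering of conjugates
expected = sorted(a * np.array([-(1 + 1j), -(1 - 1j), 1 - 1j, 1 + 1j]), key=key)
computed = sorted(np.linalg.eigvals(A0), key=key)
assert np.allclose(computed, expected)
```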

7.4 First order solution↩︎

First order corrections to cumulants and Fenichel variables are determined by the solution of an \(8\,\times\,8\) system of differential equations in the fast time: \[\begin{align} \partial_{\mathscr{t}}\begin{bmatrix} \phi_{\mathscr{t},t}^{\scriptscriptstyle{(1:1)}} \\ \phi_{\mathscr{t},t}^{\scriptscriptstyle{(2:1)}} \\ \phi_{\mathscr{t},t}^{\scriptscriptstyle{(3:1)}} \\ \phi_{\mathscr{t},t}^{\scriptscriptstyle{(4:1)}} \\ \xi_{\mathscr{t},t}^{\scriptscriptstyle{(1:1)}}\\ \xi_{\mathscr{t},t}^{\scriptscriptstyle{(2:1)}}\\ \xi_{\mathscr{t},t}^{\scriptscriptstyle{(3:1)}}\\ \xi_{\mathscr{t},t}^{\scriptscriptstyle{(4:1)}} \end{bmatrix} =\mathsf{A}^{\scriptscriptstyle{(1)}} \begin{bmatrix} \phi_{\mathscr{t},t}^{\scriptscriptstyle{(1:1)}} \\ \phi_{\mathscr{t},t}^{\scriptscriptstyle{(2:1)}} \\ \phi_{\mathscr{t},t}^{\scriptscriptstyle{(3:1)}} \\ \phi_{\mathscr{t},t}^{\scriptscriptstyle{(4:1)}} \\ \xi_{\mathscr{t},t}^{\scriptscriptstyle{(1:1)}}\\ \xi_{\mathscr{t},t}^{\scriptscriptstyle{(2:1)}}\\ \xi_{\mathscr{t},t}^{\scriptscriptstyle{(3:1)}}\\ \xi_{\mathscr{t},t}^{\scriptscriptstyle{(4:1)}} \end{bmatrix} + \begin{bmatrix} \dots \\ \dots \\ \dots \\ \dots \\ 2\, \varepsilon\, X_{t}^{\scriptscriptstyle{(2)}}-\partial_{t}X_{t}^{\scriptscriptstyle{(1)}} \\ \left(\varepsilon\, \big{(}X_{t}^{\scriptscriptstyle{(3)}}-X_{t}^{\scriptscriptstyle{(1)}} X_{t}^{\scriptscriptstyle{(4)}}\big{)}-X_{t}^{\scriptscriptstyle{(2)}}\right) -\partial_{t}X_{t}^{\scriptscriptstyle{(2)}} \\ -\,2 \,\left(\varepsilon\, X_{t}^{\scriptscriptstyle{(2)}} X_{t}^{\scriptscriptstyle{(4)}}+X_{t}^{\scriptscriptstyle{(3)}}-1\right) -\partial_{t}X_{t}^{\scriptscriptstyle{(3)}} \\ \phi_{\mathscr{t},t}^{\scriptscriptstyle{(1:0)}} - \partial_{t}X_{t}^{\scriptscriptstyle{(4)}} \end{bmatrix} \label{center:one} \end{align}\tag{47}\] The \(8\,\times\,8\) real matrix takes the block structure \[\begin{align} \mathsf{A}^{\scriptscriptstyle{(1)}}=\begin{bmatrix} \mathsf{A}^{\scriptscriptstyle{(1:1,1)}} & \mathsf{A}^{\scriptscriptstyle{(1:1,2)}} \\ \mathsf{0}_{4} &\mathsf{0}_{4} \end{bmatrix} \label{center:A1} \end{align}\tag{48}\] with \(\mathsf{0}_{4}\) denoting a \(4\,\times\,4\) matrix with null entries. Neither the explicit form of the square \(4\,\times\,4\) blocks \(\mathsf{A}^{\scriptscriptstyle{(1:1,1)}}\), \(\mathsf{A}^{\scriptscriptstyle{(1:1,2)}}\) nor the components of the non-homogeneous term in (47 ) denoted by \(\dots\) play a role in determining the universal form of the slow-fast dynamics produced by the limit \(g=h^{4}\) tending to zero. The universal form of the slow-fast dynamics in this limit depends only on the solvability condition of the stationary form of (47 ). The solvability condition, as usual in multi-scale perturbation theory, fixes the dynamics of the slow time [55].

7.4.1 Solvability condition↩︎

We construct perturbatively the centre manifold as the stationary solution of the dynamics with respect to the fast time. We observe that, by the Fredholm alternative, the algebraic equation \[\begin{align} \mathsf{A}^{\scriptscriptstyle{(1)}}\boldsymbol{v}+\boldsymbol{f}=0 \nonumber \end{align}\] admits a solution only if \(\boldsymbol{f}\) is orthogonal to the kernel \(\operatorname{ker}\mathsf{A}^{\scriptscriptstyle{(1)}}{}^{\top}\) of the adjoint of \(\mathsf{A}^{\scriptscriptstyle{(1)}}\). In other words, for any \(\boldsymbol{\tilde{v}}\) satisfying \[\begin{align} \mathsf{A}^{\scriptscriptstyle{(1)}}{}^{\top}\boldsymbol{\tilde{v}}=0 \nonumber \end{align}\] it is necessary that \[\begin{align} \left \langle\,\boldsymbol{\tilde{v}}\,,\mathsf{A}^{\scriptscriptstyle{(1)}}\boldsymbol{v}+\boldsymbol{f}\,\right\rangle= \left \langle\,\boldsymbol{\tilde{v}}\,,\boldsymbol{f}\,\right\rangle=0 \nonumber \end{align}\] The kernel of the adjoint of (48 ) is spanned by \(\left\{ \boldsymbol{e}_{i} \right\}_{i=5}^{8}\) where the \(\boldsymbol{e}_{i}\)’s are the elements of the canonical basis of \(\mathbb{R}^{8}\). 
Projecting the non-homogeneous term in (47 ) on the span of \(\operatorname{ker}\mathsf{A}^{\scriptscriptstyle{(1)}}{}^{\top}\) and imposing the zero order slow manifold conditions (46 ) yields \[\begin{align} &\partial_{t}X_{t}^{\scriptscriptstyle{(1)}}=2\,\varepsilon\, X_{t}^{\scriptscriptstyle{(2)}} \nonumber\\ &\partial_{t}X_{t}^{\scriptscriptstyle{(2)}}= \varepsilon\, \big{(}X_{t}^{\scriptscriptstyle{(3)}}-X_{t}^{\scriptscriptstyle{(1)}} X_{t}^{\scriptscriptstyle{(4)}}\big{)}-X_{t}^{\scriptscriptstyle{(2)}} \nonumber \\ &\partial_{t}X_{t}^{\scriptscriptstyle{(3)}}= -\,2 \,\left(\varepsilon\, X_{t}^{\scriptscriptstyle{(2)}} X_{t}^{\scriptscriptstyle{(4)}}+X_{t}^{\scriptscriptstyle{(3)}}-1\right) \nonumber \\ &\partial_{t}X_{t}^{\scriptscriptstyle{(4)}}= -\frac{F_{t}^{\scriptscriptstyle{(0:0)}}}{\varepsilon\,X_{t}^{\scriptscriptstyle{(1)}}{}^{2}} \nonumber \end{align}\] where the right hand side of the last equation is given by (44 ). We thus recover the system of differential equations (40 ) stemming from the stationary condition (32 ), using centre manifold analysis and multiscale perturbation theory.
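The role of the Fredholm alternative for a matrix with the block structure (48 ) can be illustrated numerically. In the following sketch the blocks are filled with random entries, as stand-ins for the explicit \(\mathsf{A}^{\scriptscriptstyle{(1:1,1)}}\), \(\mathsf{A}^{\scriptscriptstyle{(1:1,2)}}\), and solvability of the linear system is probed through the least-squares residual:

```python
import numpy as np

rng = np.random.default_rng(0)

# An 8x8 matrix with the block structure of Eq. (48): random top blocks,
# identically vanishing bottom rows.
A11 = rng.standard_normal((4, 4))
A12 = rng.standard_normal((4, 4))
A1 = np.block([[A11, A12], [np.zeros((4, 4)), np.zeros((4, 4))]])

# The last four rows of A1 vanish, so the canonical vectors e_5,...,e_8
# lie in the kernel of the adjoint A1^T.
assert np.allclose(A1.T @ np.eye(8)[:, 4:], 0.0)

def residual(f):
    # Least-squares residual of A1 v = -f: zero iff A1 v + f = 0 is solvable.
    v = np.linalg.lstsq(A1, -f, rcond=None)[0]
    return np.linalg.norm(A1 @ v + f)

# f orthogonal to ker(A1^T) (vanishing last four components): solvable.
f_ok = np.concatenate([rng.standard_normal(4), np.zeros(4)])
assert residual(f_ok) < 1e-8

# A generic f admits no solution: the residual equals the norm of the
# projection of f onto ker(A1^T), here ||(1,1,1,1)|| = 2.
f_bad = np.ones(8)
assert abs(residual(f_bad) - 2.0) < 1e-6
```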

7.5 Uniform slow-fast approximation↩︎

Within leading-order accuracy as \(g\) tends to zero, we may summarise the result of the invariant (centre) manifold analysis in a uniform slow-fast approximation to the dynamics. We get \[\begin{align} \begin{aligned} & g^{1/4}\,\partial_{t}\mathscr{F}_{t}^{\scriptscriptstyle{(1)}}=\varepsilon\, \mathscr{F}_{t}^{\scriptscriptstyle{(2)}} \\ & g^{1/4}\,\partial_{t}\mathscr{F}_{t}^{\scriptscriptstyle{(2)}}=2 \,\mathscr{F}_{t}^{\scriptscriptstyle{(3)}} \\ & g^{1/4}\,\partial_{t}\mathscr{F}_{t}^{\scriptscriptstyle{(3)}}=-\frac{\mathscr{F}_{t}^{\scriptscriptstyle{(4)}}}{X_{t}^{\scriptscriptstyle{(1)}}} \\ & g^{1/4}\,\partial_{t}\mathscr{F}_{t}^{\scriptscriptstyle{(4)}}= \varepsilon\, X_{t}^{\scriptscriptstyle{(1)}}{}^2 \,\mathscr{F}_{t}^{\scriptscriptstyle{(1)}} \end{aligned} \label{sf:nonuniv} \end{align}\tag{49}\] and \[\begin{align} \begin{aligned} & \partial_{t}X_{t}^{\scriptscriptstyle{(1)}}=2\, \varepsilon\, X_{t}^{\scriptscriptstyle{(2)}} \\ & \partial_{t}X_{t}^{\scriptscriptstyle{(2)}}= \varepsilon\, \big{(}X_{t}^{\scriptscriptstyle{(3)}}-X_{t}^{\scriptscriptstyle{(1)}} X_{t}^{\scriptscriptstyle{(4)}}\big{)}-X_{t}^{\scriptscriptstyle{(2)}} \\ & \partial_{t}X_{t}^{\scriptscriptstyle{(3)}}=-\,2 \, \left(\varepsilon\, X_{t}^{\scriptscriptstyle{(2)}} X_{t}^{\scriptscriptstyle{(4)}}+X_{t}^{\scriptscriptstyle{(3)}}-1\right) \\ & \partial_{t}X_{t}^{\scriptscriptstyle{(4)}}= \mathscr{F}_{t}^{\scriptscriptstyle{(1)}} \\ &-\frac{ 2\,\varepsilon\,X_{t}^{\scriptscriptstyle{(2)}}\left(4\,\varepsilon\,X_{t}^{\scriptscriptstyle{(3)}}-5\,X_{t}^{\scriptscriptstyle{(2)}}\right)+X_{t}^{\scriptscriptstyle{(1)}} \left(9 \varepsilon\, X_{t}^{\scriptscriptstyle{(3)}}-3 X_{t}^{\scriptscriptstyle{(2)}}-6 \varepsilon\, \right)-3\,\varepsilon\, X_{t}^{\scriptscriptstyle{(1)}}{}^2\, X_{t}^{\scriptscriptstyle{(4)}}-\frac{8\, \varepsilon\, ^2 X_{t}^{\scriptscriptstyle{(2)}}{}^3}{X_{t}^{\scriptscriptstyle{(1)}}}}{\varepsilon\,X_{t}^{\scriptscriptstyle{(1)}}{}^{2}} \end{aligned} \label{sf:univ} 
\end{align}\tag{50}\] In section 10, we numerically compare the predictions of the above “universal normal form” of the non-linear dynamics with the exact stationary conditions obtained for the penalty models (15 ), (18 ) and (19 ).

8 Overdamped expansion↩︎

We exemplify the overdamped expansion in the case of the mean heat release (dissipation) (9 ) between equilibrium states. In other words, we consider the boundary conditions (5 ), (6 ) and (7 ). In such a setup, the overdamped expansion provides asymptotics somewhat distinct from those of the centre manifold. In fact, we can systematically carry out the overdamped expansion once we hypothesise that the control coupling constant in (29 ), (30 ) scales with the small parameter \(\varepsilon\) \[\begin{align} g=\frac{\tilde{h}}{\varepsilon^{2}} \label{od:op} \end{align}\tag{51}\] and we hold \(\tilde{h}\) fixed.

We infer that the overdamped asymptotics admit the multiscale form \[\begin{align} & \begin{aligned} X_{t}^{\scriptscriptstyle{(i)}}=\mathscr{x}_{t,\varepsilon^{2}t}^{\scriptscriptstyle{(i:0)}}+\varepsilon\,\mathscr{x}_{t,\varepsilon^{2}t}^{\scriptscriptstyle{(i:1)}}+\varepsilon^{2}\,\mathscr{x}_{t,\varepsilon^{2}t}^{\scriptscriptstyle{(i:2)}}+O(\varepsilon^{3}) \\ \mathscr{y}_{t}^{\scriptscriptstyle{(i)}}=\mathscr{y}_{t,\varepsilon^{2}t}^{\scriptscriptstyle{(i:0)}}+\varepsilon\,\mathscr{y}_{t,\varepsilon^{2}t}^{\scriptscriptstyle{(i:1)}}+\varepsilon^{2}\,\mathscr{y}_{t,\varepsilon^{2}t}^{\scriptscriptstyle{(i:2)}}+O(\varepsilon^{3}) \end{aligned} &&i=1,\dots,4 \nonumber \end{align}\] In particular, the scaling relation (51 ) readily imposes \[\begin{align} & X_{t}^{\scriptscriptstyle{(i)}}=\mathscr{X}_{\varepsilon^{2}t}^{\scriptscriptstyle{(i)}}+O(\varepsilon^{2}) && i=1,\dots,4 \nonumber \end{align}\] Again, we determine the functional dependence on the “slow” time \[\begin{align} \mathscr{t}_{\mathfrak{2}}=\varepsilon^{2}\,t \nonumber \end{align}\] in order to subtract resonant terms from the expansion [54]. As for the centre manifold, we focus on the case of a harmonic penalty on controls. The main results of the overdamped analysis are the cell equations (52 ), (53 ). The reader familiar with multiscale analysis may want to proceed directly to section 8.4.

8.1 Zero order↩︎

The overdamped limit presupposes fast thermalisation of the momentum process. Indeed, upon setting \(\varepsilon\) equal to zero, we verify that \[\begin{align} &\mathscr{x}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(1:0)}}=\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(1:0)}} && \mathscr{x}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(2:0)}}=0 & \mathscr{x}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(3:0)}}=1 &&\mathscr{x}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:0)}}=\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:0)}} \nonumber \end{align}\] and \[\begin{align} &\mathscr{y}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(1:0)}}=\mathscr{y}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(1:0)}} && \mathscr{y}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(2:0)}}=\mathscr{y}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(2:0)}}\,e^{t} & \mathscr{y}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(3:0)}}=-\frac{1}{2} && \mathscr{y}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:0)}}=\mathscr{y}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:0)}} \nonumber \end{align}\] satisfy the necessary conditions for optimality.

8.2 First order↩︎

The expansion around the zero order solution produces the equations \[\begin{align} & \begin{cases} \partial_{t}\mathscr{x}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(1:1)}}=0 \\ \partial_{t}\mathscr{x}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(2:1)}}=- \mathscr{x}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(2:1)}}- \left(\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:0)}}\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(1:0)}} -1\right) \\ \partial_{t}\mathscr{x}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(3:1)}}=-2 \,\mathscr{x}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(3:1)}} \\ \partial_{t}\mathscr{x}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:1)}}=0 \end{cases} && & \begin{cases} \partial_{t}\mathscr{y}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(1:1)}}=0 \\ \partial_{t}\mathscr{y}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(2:1)}}=-2\,\mathscr{y}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(1:0)}}+\mathscr{y}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(2:1)}}-\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:0)}} \\ \partial_{t}\mathscr{y}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(3:1)}}=2\,\mathscr{y}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(3:1)}} \\ \partial_{t}\mathscr{y}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:1)}}= \mathscr{y}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(2:0)}}\,\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(1:0)}}\,e^{t} \end{cases} \nonumber \end{align}\] Self consistency of the expansion requires \[\begin{align} \partial_{t}\mathscr{y}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:1)}}=0 \nonumber \end{align}\] which entails \[\begin{align} \mathscr{y}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(2:0)}}=0 \nonumber \end{align}\] We then obtain \[\begin{align} 
&\mathscr{x}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(1:1)}}=\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(1:1)}} && \mathscr{x}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(2:1)}}=-(1-e^{-t})\left(\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:0)}}\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(1:0)}} -1\right) & \mathscr{x}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(3:1)}}=0 \nonumber \end{align}\] and \[\begin{align} &\mathscr{y}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(1:1)}}=\mathscr{y}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(1:1)}} && \mathscr{y}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(2:1)}}=\mathscr{y}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(2:1)}}\,e^{t}-(e^{t}-1)\,\left(\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:0)}}+2\, \mathscr{y}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(1:0)}}\right) & \mathscr{y}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(3:1)}}=\mathscr{y}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(3:1)}}\,e^{2\,t} \nonumber \end{align}\] First order corrections to the stiffness and the corresponding drift are constant with respect to the faster time scale \[\begin{align} & \mathscr{x}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:1)}}=\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:1)}} && \mathscr{y}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:1)}}=\mathscr{y}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:1)}} \nonumber \end{align}\] We enforce equilibrium boundary conditions by requiring \[\begin{align} \mathscr{x}_{0,0}^{\scriptscriptstyle{(4:0)}}\mathscr{x}_{0,0}^{\scriptscriptstyle{(1:0)}} = \mathscr{x}_{0,\varepsilon^{2}\,t_{\mathfrak{f}}}^{\scriptscriptstyle{(4:0)}}\mathscr{x}_{0,\varepsilon^{2}\,t_{\mathfrak{f}}}^{\scriptscriptstyle{(1:0)}} =1 \nonumber \end{align}\] which also ensures the vanishing of the position-momentum cross-correlation at the end of the control horizon.

8.3 Second order: cell equations↩︎

The perturbative expansion produces resonances at second order. The differential equations in the slow time \(\mathscr{t}_{\mathfrak{2}}\), obtained by subtracting secular terms arising from resonances, recover the overdamped dynamics. In the multiscale literature (see e.g. [55]), these equations are usually referred to as “cell equations”. Specifically, the regular expansion in \(\varepsilon\) yields \[\begin{align} \begin{cases} \partial_{t}\mathscr{x}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(1:2)}}+\partial_{\mathscr{t}_{\mathfrak{2}}}\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(1:0)}}= -2 \,(1-e^{-t})\,\left(\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:0)}}\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(1:0)}} -1\right) \\ \partial_{t}\mathscr{x}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(2:2)}}=- \mathscr{x}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(2:2)}}-\left( \mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:0)}}\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(1:1)}} +\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:1)}}\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(1:0)}}\right) \\ \partial_{t}\mathscr{x}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(3:2)}}=-2 \left(\mathscr{x}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(3:2)}}+\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:0)}} (1-e^{-t})(\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:0)}}\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(1:0)}} -1)\right) \\ \partial_{t}\mathscr{x}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:2)}}+\partial_{\mathscr{t}_{\mathfrak{2}}}\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:0)}}=\dfrac{\mathscr{y}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:0)}}}{\tilde{h}} \end{cases} \nonumber \end{align}\] Resonances correspond to solutions of regular perturbation theory linearly growing in the fast time variable \(t\). Their presence leads to a breakdown of the expansion as \(t\) becomes of order \(\varepsilon^{-1}\). In the limit of infinite separation between the fast and slow time scales, we cancel such terms by requiring \[\begin{align} \begin{aligned} & \partial_{\mathscr{t}_{\mathfrak{2}}}\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(1:0)}}=-2 \,\left(\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:0)}}\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(1:0)}} -1\right) \\ &\partial_{\mathscr{t}_{\mathfrak{2}}}\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:0)}}=\dfrac{\mathscr{y}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:0)}}}{\tilde{h}} \end{aligned} \label{od:cell1} \end{align}\tag{52}\] Furthermore, by setting \[\begin{align} \mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:1)}}=\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(1:1)}}=0 \nonumber \end{align}\] we are in a position to derive the prediction \[\begin{align} &\mathscr{x}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(2:1)}}=-(1-e^{-t})\left(\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:0)}}\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(1:0)}} -1\right) \nonumber\\ & \mathscr{x}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(3:2)}}=\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:0)}} (\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:0)}}\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(1:0)}} -1) \nonumber \end{align}\] describing, within the accuracy of the expansion, the evolution of the cumulants involving the momentum process. 
In order to fully specify the solution, we analyze the equations satisfied by the Lagrange multipliers \[\begin{align} \begin{cases} \partial_{t}\mathscr{y}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(1:2)}}+\partial_{\mathscr{t}_{\mathfrak{2}}}\mathscr{y}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(1:0)}} =\mathscr{y}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(2:1)}} \,\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:0)}} \\ \partial_{t}\mathscr{y}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(2:2)}}=-2\,\mathscr{y}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(1:1)}}+\mathscr{y}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(2:2)}}+2\,\mathscr{y}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(3:1)}} \,\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:0)}} \\ \partial_{t}\mathscr{y}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(3:2)}}+\partial_{\mathscr{t}_{\mathfrak{2}}}\mathscr{y}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(3:0)}}=- \mathscr{y}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(2:1)}}+2\,\mathscr{y}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(3:2)}} \\ \partial_{t}\mathscr{y}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:2)}}+\partial_{\mathscr{t}_{\mathfrak{2}}}\mathscr{y}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:0)}} =\mathscr{y}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(2:1)}}\,\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(1:0)}}-\mathscr{x}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(2:1)}} \end{cases} \nonumber \end{align}\] Self consistency of the expansion requires \[\begin{align} \partial_{t}\mathscr{y}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:2)}}=0 \nonumber \end{align}\] whence it follows \[\begin{align} \partial_{\mathscr{t}_{\mathfrak{2}}}\mathscr{y}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:0)}} 
=\mathscr{y}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(2:1)}}\,\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(1:0)}}-\mathscr{x}_{t,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(2:1)}} \nonumber \end{align}\] The right hand side must be a pure function of the slow time \(\mathscr{t}_{\mathfrak{2}}\). This latter condition is fulfilled by setting \[\begin{align} & \mathscr{y}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(2:1)}}=\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:0)}}+2\, \mathscr{y}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(1:0)}} \nonumber \\ & \mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(2:1)}}=-\,\left(\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:0)}}\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(1:0)}}-1\right ) \nonumber \end{align}\] We thus obtain the cell problem of the adjoint dynamics \[\begin{align} \begin{aligned} & \partial_{\mathscr{t}_{\mathfrak{2}}}\mathscr{y}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:0)}}=2\,\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(1:0)}}\,\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:0)}}+2\,\mathscr{y}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(1:0)}}\,\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(1:0)}}-1 \\ &\partial_{\mathscr{t}_{\mathfrak{2}}}\mathscr{y}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(1:0)}}=\left(\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:0)}}+2\,\mathscr{y}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(1:0)}}\right)\mathscr{x}_{0,\mathscr{t}_{\mathfrak{2}}}^{\scriptscriptstyle{(4:0)}} \end{aligned} \label{od:cell2} \end{align}\tag{53}\]
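As a sanity check on the cell equations (52 ), (53 ), note that the one-parameter family of equilibrium states \(\mathscr{x}^{\scriptscriptstyle{(1:0)}}\mathscr{x}^{\scriptscriptstyle{(4:0)}}=1\), completed by \(\mathscr{y}^{\scriptscriptstyle{(4:0)}}=0\) and \(\mathscr{y}^{\scriptscriptstyle{(1:0)}}=-1/(2\,\mathscr{x}^{\scriptscriptstyle{(1:0)}})\), consists of fixed points of the coupled slow dynamics as written. A short numerical sketch (the value of \(\tilde{h}\) is illustrative):

```python
import numpy as np

h_tilde = 1.0  # scaled control coupling of Eq. (51); illustrative value

def cell_rhs(state):
    """Right-hand side of the coupled cell equations (52)-(53).

    state = (x1, x4, y1, y4): slow position variance, stiffness and the
    corresponding Lagrange multipliers."""
    x1, x4, y1, y4 = state
    return np.array([
        -2.0 * (x4 * x1 - 1.0),                # d x1 / dt2, Eq. (52)
        y4 / h_tilde,                          # d x4 / dt2, Eq. (52)
        (x4 + 2.0 * y1) * x4,                  # d y1 / dt2, Eq. (53)
        2.0 * x1 * x4 + 2.0 * y1 * x1 - 1.0,   # d y4 / dt2, Eq. (53)
    ])

# Every equilibrium state x1 * x4 = 1, with y4 = 0 and y1 = -1/(2 x1),
# is a fixed point of the cell dynamics.
for x1 in (0.5, 1.0, 2.0):
    fixed = (x1, 1.0 / x1, -1.0 / (2.0 * x1), 0.0)
    assert np.allclose(cell_rhs(fixed), 0.0)
```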

8.4 Comparison with overdamped dynamics↩︎

The second cumulant of an overdamped Gaussian dynamics satisfies \[\begin{align} \dot{\mathscr{X}}_{t}^{\scriptscriptstyle{(1)}}=-2\,(\mathscr{X}_{t}^{\scriptscriptstyle{(1)}}\mathscr{X}_{t}^{\scriptscriptstyle{(2)}}-1) \nonumber \end{align}\] where \(\mathscr{X}_{t}^{\scriptscriptstyle{(2)}}\) is the stiffness of the mechanical potential acting on the micro-particle. The corresponding mean dissipation functional is amenable to the form \[\begin{align} \mathcal{E}_{t_{\mathfrak{f}}}=\int_{0}^{t_{\mathfrak{f}}}\mathrm{d}t\,\mathscr{X}_{t}^{\scriptscriptstyle{(2)}}(\mathscr{X}_{t}^{\scriptscriptstyle{(1)}}\mathscr{X}_{t}^{\scriptscriptstyle{(2)}}-1) \nonumber \end{align}\] A straightforward calculation then proves that the cell equations (52 ), (53 ) indeed recover the overdamped optimal control conditions for a harmonic penalty on stiffness manipulations. We emphasize that the coincidence of the equations obtained by first optimizing and then expanding with those describing the inverse order of the operations is a non-trivial result, generally not verified by applications of multiscale analysis to optimal control theory, see e.g. discussion in [56]. Physically, the above result follows from considering a dynamics enjoying detailed balance [57].
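The straightforward calculation can be sketched as follows. Writing the overdamped problem in Pontryagin form with control \(u_{t}=\partial_{t}\mathscr{X}_{t}^{\scriptscriptstyle{(2)}}\) and a harmonic penalty \(\tilde{h}\,u_{t}^{2}/2\) (the normalisation of the penalty and the sign conventions for the multipliers \(p_{t}\), \(q_{t}\) are fixed here for definiteness), the Pontryagin functional reads

```latex
H=\mathscr{X}_{t}^{\scriptscriptstyle{(2)}}
\left(\mathscr{X}_{t}^{\scriptscriptstyle{(1)}}\mathscr{X}_{t}^{\scriptscriptstyle{(2)}}-1\right)
+\frac{\tilde{h}}{2}\,u_{t}^{2}
-2\,p_{t}\left(\mathscr{X}_{t}^{\scriptscriptstyle{(1)}}\mathscr{X}_{t}^{\scriptscriptstyle{(2)}}-1\right)
+q_{t}\,u_{t}
```

Stationarity with respect to \(u_{t}\) yields \(u_{t}=-q_{t}/\tilde{h}\), while the costate equations are \(\dot{p}_{t}=-\partial_{\mathscr{X}^{\scriptscriptstyle{(1)}}}H=-\mathscr{X}_{t}^{\scriptscriptstyle{(2)}}{}^{2}+2\,p_{t}\,\mathscr{X}_{t}^{\scriptscriptstyle{(2)}}\) and \(\dot{q}_{t}=-\partial_{\mathscr{X}^{\scriptscriptstyle{(2)}}}H=-\left(2\,\mathscr{X}_{t}^{\scriptscriptstyle{(1)}}\mathscr{X}_{t}^{\scriptscriptstyle{(2)}}-1\right)+2\,p_{t}\,\mathscr{X}_{t}^{\scriptscriptstyle{(1)}}\). Under these conventions, the identifications \(\mathscr{y}^{\scriptscriptstyle{(1:0)}}=-p\) and \(\mathscr{y}^{\scriptscriptstyle{(4:0)}}=-q\) reproduce the cell equations (52 ), (53 ) term by term.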

9 Uncontrolled relaxation to equilibrium↩︎

If we suppose that, for times larger than the upper end \(t_{\mathfrak{f}}\) of the control horizon, the mechanical potential is frozen at its value at \(t_{\mathfrak{f}}\), the dynamics of second order cumulants obeys (3 ) with \(\lambda\) equal to zero, and \[\begin{align} & \mathscr{x}_{t}^{\scriptscriptstyle{(4)}}=\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(4)}} && \forall\,t\,\geq\,t_{\mathfrak{f}} \nonumber \end{align}\] The dynamics is then exactly solvable, see e.g. [58]. Relaxation to thermal equilibrium is governed by the decay rates \[\begin{align} \mathscr{r}_{\pm}= \begin{cases} \dfrac{1\pm\sqrt{1-4\,\varepsilon^{2}\,\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(4)}}}}{2} & \text{if}\,\,0<\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(4)}}\,< \,\dfrac{1}{4\,\varepsilon^{2}} \\[0.3cm] \dfrac{1}{2}& \text{if}\,\,\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(4)}}\,\geq\, \dfrac{1}{4\,\varepsilon^{2}} \end{cases} \nonumber \end{align}\] The mean heat release during the thermalisation process is \[\begin{align} \mathcal{Q}_{T,t_{\mathfrak{f}}}=\int_{t_{\mathfrak{f}}}^{T}\mathrm{d}t\,\left(1+\frac{\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(4)}} \, \varepsilon ^2 \left(\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(1)}}\,\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(4)}}-1\right) e^{-2\,\mathscr{r}_{-}\,(t-t_{\mathfrak{f}}) }\, \left(1-e^{-(\mathscr{r}_{+}-\mathscr{r}_{-})\,(t-t_{\mathfrak{f}}) }\right)^2}{(\mathscr{r}_{+}-\mathscr{r}_{-}) ^2} \right)-(T-t_{\mathfrak{f}}) \nonumber \end{align}\] The integral converges if \[\begin{align} \mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(4)}}\,>\,0 \label{relax:confining} \end{align}\tag{54}\] This condition is clearly always satisfied by transitions between equilibrium states. Physically, 54 means that once the dynamics is no longer controlled, the system is left under the action of a confining mechanical potential \(U_{t_{\mathfrak{f}}}\). 
For \(t\,\geq\,t_{\mathfrak{f}}\), the system will undergo a process of thermal relaxation to reach equilibrium with \(U_{t_{\mathfrak{f}}}\). However, if 54 does not hold and the stiffness is negative at \(t_{\mathfrak{f}}\), then the system is left in an unstable state and its position probability distribution spreads without bound. When the stiffness is the control, negative values may be present in theoretical optimal protocols for transitions at minimum dissipation between assigned states. Reference [3] describes a method to implement a potential with negative stiffness by means of optical feedback traps.

When the stiffness is positive at the end of a transition at minimum work between non-equilibrium states, i.e. (54 ) holds, the mean heat release during the relaxation is \[\begin{align} \mathcal{Q}_{\star}=\lim_{T\nearrow\infty} \mathcal{Q}_{T,t_{\mathfrak{f}}}=\frac{\varepsilon ^2}{2\,\mathscr{r}_{-}\,\mathscr{r}_{+}} \,\frac{\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(4)}} \, \left(\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(1)}}\,\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(4)}}-1\right) }{\mathscr{r}_{+}+\mathscr{r}_{-}} = \,\frac{\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(1)}}\,\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(4)}}-1}{2} \nonumber \end{align}\] which is positive as long as \[\begin{align} \mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(1)}}\,\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(4)}}\,\geq\,1 \label{relax:contraction} \end{align}\tag{55}\] As no work is done on the system during the relaxation process and the temperature of the system at the end of the control horizon equals that of the environment, the first law of thermodynamics implies \[\begin{align} 0\,\leq\,\lim_{T\nearrow\infty}\mathcal{Q}_{T,t_{\mathfrak{f}}}= \operatorname{E}U_{t_{\mathfrak{f}}}(\mathscr{q}_{t_{\mathfrak{f}}})-\lim_{T\nearrow\infty}\operatorname{E}U_{t_{\mathfrak{f}}}(\mathscr{q}_{T}) \nonumber \end{align}\] The system therefore releases heat while contracting toward an equilibrium state described by a more concentrated probability distribution than the one reached at the end of the control horizon.
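The closed form of \(\mathcal{Q}_{\star}\) can be checked by numerical quadrature of the relaxation integrand. A short sketch (the terminal values of the variance and of the stiffness are illustrative, with the stiffness below the threshold \(1/(4\,\varepsilon^{2})\) so that both decay rates are real):

```python
import numpy as np
from scipy.integrate import quad

# Illustrative terminal data: position variance x1 and stiffness x4 at the
# end of the control horizon.
eps, x1, x4 = 1.0, 10.0, 0.2

disc = np.sqrt(1.0 - 4.0 * eps**2 * x4)
r_minus, r_plus = (1.0 - disc) / 2.0, (1.0 + disc) / 2.0

def integrand(tau):
    # Heat-release rate during uncontrolled relaxation, tau = t - t_f.
    return (x4 * eps**2 * (x1 * x4 - 1.0)
            * np.exp(-2.0 * r_minus * tau)
            * (1.0 - np.exp(-(r_plus - r_minus) * tau))**2
            / (r_plus - r_minus)**2)

q_numeric, _ = quad(integrand, 0.0, np.inf)
q_closed = (x1 * x4 - 1.0) / 2.0   # closed form of Q_star
assert abs(q_numeric - q_closed) < 1e-7
```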

Independently of the value of the stiffness at the end of the control horizon, the process satisfies the second law of thermodynamics (9 ). Namely, upon including the Gibbs-Shannon entropy, we get \[\begin{align} \lim_{T\nearrow\infty} \mathcal{E}_{T,t_{\mathfrak{f}}}=\lim_{T\nearrow\infty}\operatorname{E} U_{t_{\mathfrak{f}}}(\mathscr{q}_{T}) -\operatorname{E} U_{t_{\mathfrak{f}}}(\mathscr{q}_{t_{\mathfrak{f}}})+\mathcal{Q}_{\star}=\frac{1-\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(1)}}\,\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(4)}}}{2}+\frac{\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(1)}}\,\mathscr{x}_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(4)}}-1}{2}=0 \nonumber \end{align}\]

10 Numerical Analysis↩︎

Numerical methods allow us to illustrate engineered swift equilibration protocols for some examples and to contrast them with protocols implementing transitions at minimum work. There are two ways in which the problem can be approached numerically: directly, by minimising the cost functional, and indirectly, by solving the first order optimality conditions. Direct methods parametrise the control: the control and state variables are discretised in time and space. Inequality constraints on states (with the associated Karush-Kuhn-Tucker conditions, see e.g. [59]), such as those given by 15 , are enforced by logarithmic barriers. Sophisticated numerical algorithms, such as Interior Point Optimisation (IPOpt) [33], [34], then solve a linearised system to find an optimal direction and make gradient-descent-like updates to find an optimal solution. A drawback of direct methods is that, since they require discretisation of the time and space coordinates and computation of gradients, they suffer from the curse of dimensionality: increasing the number of dimensions of the problem significantly increases the computational cost.
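A toy illustration of direct transcription, unrelated to the oscillator dynamics above: minimising \(\int_{0}^{1}u_{t}^{2}\,\mathrm{d}t\) subject to \(\dot{x}_{t}=u_{t}\), \(x_{0}=0\), \(x_{1}=1\), whose optimum is the constant control \(u\equiv 1\) with cost \(1\). The discretisation and solver choice below are illustrative (forward Euler, SLSQP instead of an interior point method):

```python
import numpy as np
from scipy.optimize import minimize

# Direct transcription: discretise the control on N intervals of width dt.
N = 50
dt = 1.0 / N

def cost(u):
    # Riemann-sum approximation of the running cost int u^2 dt.
    return np.sum(u**2) * dt

def terminal_constraint(u):
    # Forward Euler propagation of dx/dt = u from x(0) = 0; enforce x(1) = 1.
    return np.sum(u) * dt - 1.0

res = minimize(cost, x0=np.zeros(N),
               constraints=[{"type": "eq", "fun": terminal_constraint}])
assert res.success
assert np.allclose(res.x, 1.0, atol=1e-3)   # optimal control u = 1
assert abs(cost(res.x) - 1.0) < 1e-3        # optimal cost 1
```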

The indirect approach solves the necessary conditions for optimality derived from Pontryagin’s maximum principle. The necessary conditions typically take the form of a two-point boundary value problem with equations for the co-state variables, such as those in 29 . To solve these boundary value problems, we can use, for example, shooting methods, where the problem is integrated numerically forward from the given initial data and the trajectories are then adjusted to bring them closer to the terminal conditions. Clearly, such a method is very sensitive to the initial guess. Another, generally more robust, option is collocation, where the solution is approximated in a finite-dimensional function space, e.g. by polynomials evaluated on a fixed set of points, called the collocation points [60].
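A minimal sketch of the shooting method on a toy two-point boundary value problem (not the optimality system of the paper): \(\ddot{y}=-y\) with \(y(0)=0\), \(y(\pi/2)=1\), whose exact solution \(y=\sin t\) corresponds to the initial slope \(s=1\):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def terminal_mismatch(s):
    # Integrate forward from the guessed initial data (y, y') = (0, s)
    # and return the defect at the right boundary.
    sol = solve_ivp(lambda t, z: [z[1], -z[0]], (0.0, np.pi / 2), [0.0, s],
                    rtol=1e-10, atol=1e-10)
    return sol.y[0, -1] - 1.0

# Adjust the free initial slope until the terminal condition is satisfied;
# the mismatch changes sign between s = 0 and s = 2.
slope = brentq(terminal_mismatch, 0.0, 2.0)
assert abs(slope - 1.0) < 1e-6
```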

10.1 Engineered swift equilibration↩︎

When the stiffness is part of the system state, a transition between two equilibrium states in the control horizon \([0,t_{\mathfrak{f}}]\) is defined by the boundary conditions \[\begin{align} & \begin{cases} x_{0}^{\scriptscriptstyle{(1)}}=\sigma^2_0 & \\ x_{0}^{\scriptscriptstyle{(2)}}=0 & \\ x_{0}^{\scriptscriptstyle{(3)}}=1 & \\ x_{0}^{\scriptscriptstyle{(4)}}=1/\sigma^2_0 & \end{cases} && \begin{cases} x_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(1)}}=\sigma^2_{t_{\mathfrak{f}}}& \\ x_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(2)}}=0 & \\ x_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(3)}}=1 & \\ x_{t_{\mathfrak{f}}}^{\scriptscriptstyle{(4)}}=1/\sigma^2_{t_{\mathfrak{f}}} & \end{cases} \label{ep95num:bc} \end{align}\tag{56}\] We can compute the moments of the position and momentum processes, the stiffness, and the control of the engineered swift equilibration using both indirect and direct numerical methods. The indirect method consists of finding a numerical solution to the boundary value problem given by the set of differential equations 29 with boundary conditions 56 . The direct method minimises the cost 20 directly while obeying the dynamical constraints and the boundary conditions on the cumulants. Because in this case the cost 20 is a strictly convex function of the control, we expect that solutions of the boundary value problem specifying the stationary value of the Pontryagin functional do not develop conjugate points. This expectation is supported by the good agreement of the different integration methods, shown in Fig. 1 for an expansion with the harmonic penalty 19 , in Fig. 2 with the logarithmic 18 and hard 15 penalties, and in Fig. 3 for a contraction with the harmonic penalty. Fig. 1 also shows the agreement of the centre manifold of Sec. 7.5 with the direct method of optimisation. The direct method imposes constraints via self-concordant barrier functions, similar to the logarithmic penalty given by 18 , and Fig. 2 demonstrates the similarity between the qualitative behaviour of the two penalties, particularly for increasing values of \(\Lambda\).
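The structure of the direct method can be illustrated on a toy problem. The Python sketch below (illustrative only; our actual computations use InfiniteOpt.jl with IPOpt) transcribes the minimisation of a quadratic running cost \(\int_0^{t_{\mathfrak{f}}}u_t^2\,\mathrm{d}t\) for the scalar dynamics \(\dot{x}_t=u_t\) with fixed endpoints into an equality-constrained quadratic programme; the values of the endpoints and step sizes are arbitrary assumptions.

```python
import numpy as np

# Direct transcription of a toy problem: minimise sum_k u_k^2 * dt
# subject to x_{k+1} = x_k + u_k * dt, x_0 = 1, x_N = 2 (illustrative values).
N, tf = 200, 3.0
dt = tf / N
x0, xf = 1.0, 2.0

# The dynamics collapse to one linear equality constraint on the controls:
# dt * sum_k u_k = xf - x0.  Minimising the convex quadratic cost under this
# constraint is an equality-constrained QP; solve its KKT system directly.
# KKT: 2*dt*u_k + lam*dt = 0 for each k,  dt*sum(u) = xf - x0.
A = np.zeros((N + 1, N + 1))
A[:N, :N] = 2.0 * dt * np.eye(N)   # Hessian of the discretised cost
A[:N, N] = dt                      # gradient of the equality constraint
A[N, :N] = dt
b = np.zeros(N + 1)
b[N] = xf - x0
sol = np.linalg.solve(A, b)
u = sol[:N]

# Convexity guarantees a unique extremal: the constant control (xf - x0)/tf.
print(u[0], (xf - x0) / tf)
```

For this convex toy problem the direct and indirect routes agree exactly, mirroring the agreement we observe numerically for the cost 20 .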

Figure 1: Engineered swift equilibration minimising the entropy production 20 , subject to the harmonic penalty 19 . The solution is computed using a direct optimisation of the cost functional 20 (blue) with InfiniteOpt.jl [61], solved using the Interior Point Optimisation package IPOpt [33] with g=0.01. The centre manifold solution is shown for decreasing values of g (g=0.01 orange, g=0.001 green, g=0.0001 maroon) and is found by numerically integrating the set of differential equations 49 , 50 with a collocation method from the package DifferentialEquations.jl [62], [63]. We fix \Lambda=\sqrt{2},\,\varepsilon=1,\,t_{\mathfrak{f}}=3; the boundary conditions are described in 56 , with \sigma^2_0 = 1 and \sigma^2_{t_{\mathfrak{f}}}=2. Results produced using the code available at [64].
Figure 2: Engineered swift equilibration minimising the entropy production 20 . We find the solution by a direct optimisation of the cost functional with a hard penalty 15 (blue) and as the solution of the system of differential equations specifying the first order optimality conditions 29 with a logarithmic penalty 18 (orange, dashed). We fix \Lambda=9,\,\varepsilon=1,\,t_{\mathfrak{f}}=3 and use g=0.001 and g=0 for the logarithmic and hard penalties respectively. The boundary conditions are given by 56 , with \sigma^2_0 = 1 and \sigma^2_{t_{\mathfrak{f}}}=2. Numerical methods are as described in Fig. 1.
Figure 3: Engineered swift equilibration minimising the entropy production 20 for a contraction. We use a harmonic penalty 19 and find the solution with a direct optimisation of the cost functional (blue) and as a solution of the first order conditions (orange, dashed). We fix \Lambda=\sqrt{2},\,\varepsilon=1,\,t_{\mathfrak{f}}=3 and g=0.01. Boundary conditions are given by 56 , with \sigma^2_0 = 1 and \sigma^2_{t_{\mathfrak{f}}}=1/2. Numerical methods are as described in Fig. 1.

10.1.1 Comparison with minimum entropy production transitions when stiffness is the control↩︎

We now contrast two situations:

  1. Engineered swift equilibration with an additional constraint that confines the stiffness to a compact interval \([\kappa_{m},\kappa_{M}]\).

  2. Direct minimisation of the entropy production using the stiffness as the control, subject to a hard penalty that confines it to \([\kappa_{m},\kappa_{M}]\).

Case ([S2]) is the Gaussian version of the problem studied in [45], [46], [65], where we have replaced the harmonic penalty with a hard penalty. By considering the hard penalty, we can think of case ([S2]) as the limit of case ([S1]) when no penalty is imposed on the time differential of the stiffness. As a consequence, we expect the entropy production in case ([S2]) to always provide a lower bound on the entropy production computed in ([S1]).

We compare the entropy production as a function of the time horizon in both cases in Fig. 4. We vary the value of \(\Lambda\) for the hard penalty, effectively reducing the effect of the penalty, and the interval to which the stiffness \(\mathscr{k}_{t}\) is constrained. A larger value of \(\Lambda\) is used in panels (b) and (d), where the difference between the entropy production for cases ([S1]) and ([S2]) becomes smaller than that shown in panels (a) and (c). In panels (a) and (b), we constrain the value of \(\mathscr{k}_{t}\) to the interval \([0.2,1.2]\); this choice of interval is motivated by [39]. In (c) and (d), we constrain \(\mathscr{k}_{t}\) to the interval \([-0.1,1.2]\).

As the final time \(t_{\mathfrak{f}}\) increases, in both cases the entropy production \(\mathcal{E}\) 9 tends towards the value of the squared Wasserstein distance in configuration space (i.e. position) divided by the time horizon. The squared Wasserstein-2 distance is given by \[\begin{align} \mathcal{W}^2_2(\rho_0,\rho_{t_{\mathfrak{f}}}) &= \inf_{\pi}\int \mathrm{d} \pi (x,y) \, (x-y)^2 \end{align}\] where \(\rho_0\) and \(\rho_{t_{\mathfrak{f}}}\) denote the assigned initial and final position marginals, and the infimum is taken over joint distributions \(\pi(x,y)\) whose \(x\)-marginal is \(\rho_0\) and whose \(y\)-marginal is \(\rho_{t_{\mathfrak{f}}}\). The squared Wasserstein-2 distance divided by the final time specifies the minimum value of the entropy production in the overdamped (Langevin-Smoluchowski) limit [11] and provides a lower bound on the entropy production of the underdamped dynamics [46].
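For the Gaussian marginals considered here the infimum is available in closed form: in one dimension the optimal coupling is monotone, so for centred Gaussians \(\mathcal{W}^2_2=(\sigma_0-\sigma_{t_{\mathfrak{f}}})^2\). The Python sketch below is an illustrative check of this closed form against a sorted-sample estimate; it is not the estimator used for Fig. 4, which instead uses simulated evolutions of the controlled dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma0, sigmaf = 1.0, np.sqrt(2.0)   # position variances 1 and 2, as in Fig. 1

# Closed form for centred 1-D Gaussians: the optimal (monotone) coupling
# gives W_2^2 = (sigma0 - sigmaf)^2.
w2_exact = (sigma0 - sigmaf) ** 2

# Empirical estimate: in 1-D the optimal transport plan matches sorted
# samples, so W_2^2 is approximated by the mean squared difference of
# the order statistics of two independent samples.
n = 20_000
x = np.sort(rng.normal(0.0, sigma0, n))
y = np.sort(rng.normal(0.0, sigmaf, n))
w2_empirical = np.mean((x - y) ** 2)

# W_2^2 / t_f lower-bounds the entropy production over a horizon t_f.
tf = 3.0
print(w2_exact, w2_empirical, w2_exact / tf)
```

The lower bound \(\mathcal{W}^2_2/t_{\mathfrak{f}}\) is the dashed reference curve that the entropy production approaches in Fig. 4 as the horizon grows.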

The cumulants of a transition where the stiffness is a state and where it is a control are shown in Fig. 5 and Fig. 6. We notice that when the stiffness is the control, the results for the stiffness are noisier. In fact, as the data is computed using a direct method, we obtain a solution that jumps between the limits of the constraining interval of \(\mathscr{k}_{t}\), in this case \(0.2\) and \(1.2\). Applying a standard mean filter convolution to the data uncovers the centre manifold, which is in good agreement with case [S1], where the stiffness is a state. The direct method attempts to perform synthesis on the interval but, lacking sufficient accuracy due to discretisation, can only find the turnpike manifold “on average”. As the value of \(\Lambda\) gets larger, i.e. as the penalty on \(\lambda_t\) becomes weaker, the behaviour of the cumulants becomes more similar to the case where the stiffness is the control. The effect of constraining \(\mathscr{k}_{t}\) to a larger interval which also includes negative values is shown in Fig. 6. From the physical point of view, temporarily negative values of the stiffness of the driving potential are realisable in the laboratory [3], since inside the control horizon this quantity is not related to the variance of the Langevin particle position process. Negative values of the stiffness simply correspond to an acceleration pushing the particle away from the origin.
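The smoothing used in Figs. 5 and 6 is a plain box (mean) filter. The Python sketch below shows, on an illustrative chattering signal (not our actual data), how convolution with a normalised box kernel recovers the average of a control that jumps between the bounds 0.2 and 1.2; the signal length and window width are arbitrary assumptions.

```python
import numpy as np

# Illustrative chattering control: rapid jumps between the bounds of the
# constraining interval [0.2, 1.2], spending equal time at each bound so
# that the running average sits at the midpoint 0.7.
n = 3000
k = np.where(np.arange(n) % 2 == 0, 0.2, 1.2)

# Box (mean) filter: convolution with a normalised rectangular kernel.
w = 101
kernel = np.ones(w) / w
k_smooth = np.convolve(k, kernel, mode="same")

# Away from the edges the filtered signal recovers the average value,
# i.e. the turnpike behaviour "on average".
print(k_smooth[n // 2])
```

The same operation applied to the direct-method output for \(\mathscr{k}_{t}\) uncovers the centre manifold in Figs. 5 and 6.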

Figure 4: Entropy production as a function of the time horizon t_{\mathfrak{f}} where the stiffness is a state [S1] (blue, triangle) and where the stiffness is a control [S2] (orange, circle). When the stiffness is a state, we use a hard penalty -\Lambda\leq\lambda_t\leq\Lambda. Panels (a) and (b) constrain the stiffness \mathscr{k}_{t} in the interval 0.2\leq\mathscr{k}_{t}\leq 1.2. Panels (c) and (d) have the constraint -0.1\leq\mathscr{k}_{t}\leq 1.2, allowing for negative values of \mathscr{k}_{t}. To model [S2], only \mathscr{k}_{t} is constrained and there is no constraint on \lambda_t (its time derivative). Results are computed by a direct optimisation of the cost functional. For all plots we use g=0 and \varepsilon=1; in (a) and (c) we set the size of the hard penalty \Lambda=1; in (b) and (d) we set \Lambda=10. The squared Wasserstein-2 distance \mathcal{W}^2_2 between the initial and final position marginals (blue, dashed) is estimated numerically using n=20\,000 independent samples from simulated evolutions of the dynamics controlled by \mathscr{k}_t. The blue shaded region indicates the standard error of the estimator. The numerical methods used are as those in Fig. 1.
Figure 5: Comparison of engineered swift equilibration at minimum entropy production when the stiffness is the control [S2] (orange) and when the stiffness is a state [S1] (blue, green). The stiffness \mathscr{k}_{t} is constrained in the interval 0.2\leq\mathscr{k}_{t}\leq 1.2 in both cases. We use a hard penalty 15 to model the case when the stiffness is a state, with \Lambda=1 (blue) and \Lambda=10 (green). Results are computed by a direct optimisation (see Fig. 1) with parameters \varepsilon=1 and g=0. We use boundary conditions 56 when the stiffness is a state (case [S1]) and remove the boundary conditions on \mathscr{k}_{t} when the stiffness is the control (case [S2]). We use \sigma^2_0 = 1 and \sigma^2_{t_{\mathfrak{f}}}=2. Final quantities are smoothed by a convolution with a box filter over an interval of approximately 0.067.
Figure 6: Comparison of engineered swift equilibration at minimum entropy production when the stiffness is the control [S2] (orange) and when the stiffness is a state [S1] (blue, green). The stiffness \mathscr{k}_{t} is constrained in the interval -0.1\leq\mathscr{k}_{t}\leq 1.2 in both cases, allowing for negative values. We use a hard penalty 15 to model the case when the stiffness is a state, with \Lambda=1 (blue) and \Lambda=10 (green). Results are computed by a direct optimisation (see Fig. 1) with parameters and boundary conditions as in Fig. 5. Final quantities displayed are smoothed by a convolution with a box filter over an interval of size approximately 0.067.

10.2 Minimum Work Transitions↩︎

Minimum work transitions of the type [C2] minimise the mean work 11 in a transition to a target state that is generically out of equilibrium. For such transitions, the stationary condition 28 relates the co-state of the stiffness to the position variance. This condition implicitly determines the terminal value of the stiffness minimising the thermodynamic work during the transition. In other words, the terminal value of the mechanical potential, i.e. of the internal energy, is not assigned as a boundary condition but derived as an element of the solution of the optimisation problem. This is different from the optimal control problem introduced in [66] with the interpretation of work minimisation and then also studied in [67]. The optimal control considered in these papers is equivalent to the minimisation of the work functional 22 when we assign the value of the potential stiffness at the end of the control horizon. Furthermore, if we identify the stiffness as a control as in case [S2] above, there is no reason why its end-of-horizon value determined by the stationary condition should equal the value of the quantity interpreted as stiffness in the terminal cost. This is because of the over-determination issue discussed in the introduction. The bottom line is that by formulating work minimisation in a proper Bolza form, not only do we avoid discontinuities of the internal energy, a feat that can always be accomplished by regularisation [68], but we are also in a position to recognise that the physically relevant boundary conditions are not those specified by the assignment of the final value of the mechanical force.

Figure 7: Expansion at minimum work. We use a harmonic penalty and find the solution using a direct optimisation (blue) and as the solution of the first order conditions (orange, dashed). We use t_{\mathfrak{f}}=3,\, \varepsilon=1,\, g=0.01 and \Lambda=\sqrt{2}. We use boundary conditions 56 and replace the boundary condition for x_{t_{\mathfrak{f}}}^{(4)} with 28 , where \sigma^2_0 = 1 and \sigma^2_{t_{\mathfrak{f}}}=2. Numerical methods are as described in Fig. 1.
Figure 8: Contraction at minimum work. We use a harmonic penalty and find the solution using a direct optimisation (blue) and as the solution of the first order conditions (orange, dashed). We use t_{\mathfrak{f}}=7,\, \varepsilon=1,\, g=0.1 and \Lambda=\sqrt{2}. We use boundary conditions 56 and replace the boundary condition for x_{t_{\mathfrak{f}}}^{(4)} with 28 , where \sigma^2_0 = 4 and \sigma^2_{t_{\mathfrak{f}}}=1. Numerical methods are as described in Fig. 1.
Figure 9: Thermodynamic costs of an expansion for engineered swift equilibration (a) and at minimum work (b). We show the mean work (\mathcal{W}_{t_{\mathfrak{f}}} 10 , blue triangle); mean heat release (\mathcal{Q}_{t_{\mathfrak{f}}}, orange cross); mean entropy production (\mathcal{E}_{t_{\mathfrak{f}}} 9 , green dot) as functions of the time horizon. The inset in panel (a) shows the difference in entropy production between the engineered swift equilibration and the minimum work transition, demonstrating that the entropy production is always higher for transitions at minimum work than for transitions between equilibrium states. Costs are computed using solutions of the first order conditions with a harmonic penalty, see Fig. 1, with parameters \varepsilon=1,\, g=0.01 and \Lambda=\sqrt{2}. We use boundary conditions 56 in panel (a); in panel (b) we replace the boundary condition for x_{t_{\mathfrak{f}}}^{(4)} with 28 . We use \sigma^2_0 = 1 and \sigma^2_{t_{\mathfrak{f}}}=2 in both cases.
Figure 10: Thermodynamic costs of a contraction for engineered swift equilibration (a) and at minimum work (b). We use parameters and boundary conditions as in Fig. 9, and set \sigma^2_0 = 1 and \sigma^2_{t_{\mathfrak{f}}}=0.5 to model the contraction.

We illustrate an expansion at minimum work in Fig. 7 and a contraction in Fig. 8. Minimum work transitions prove to be numerically less stable than transitions between equilibria, which leads to small discrepancies between the direct and indirect methods, particularly for the contraction shown in Fig. 8.

We compare the thermodynamic costs between equilibrium and non-equilibrium transitions: in Fig. 9 for an expansion, and in Fig. 10 for a contraction. In general, entropy production is a measure of how far away a system is from equilibrium. Therefore we expect that engineered swift equilibration should give a smaller entropy production, which is indeed the case in both expansion and contraction (see insets of Figs. 9 and 10).

10.3 Turnpike behaviour↩︎

In all examples, we notice that optimal protocols exhibit a tendency to align with a universal behaviour in the bulk of the control horizon. There, the cumulant dynamics remains close to the values determined by the centre manifold identified in section 7. This universal behaviour becomes more pronounced as \(g\), the order parameter of the penalty on the control potential 14 , tends to zero, see Figs. 1 and 2. In the case of the hard penalty 15 , this limit corresponds to larger values of the bound \(\Lambda\). We interpret the existence of a universal regime of the dynamics as the so-called “turnpike” phenomenon, which is often observed in optimal control problems. The name turnpike, an American English term for highway, originates in econometric studies of neo-classical models of optimal capital growth such as the Ramsey problem of optimal saving [69]. In the econometric context, the turnpike property refers to the tendency of the time evolution of capital to remain in a neighbourhood of a “bliss” point specifying the maximum of a strictly concave consumption utility function in the static case. An alternative terminology designating the same phenomenon is that of a “von Neumann path”, to honour the groundbreaking work [70] containing one of the very first applications of linear programming to prove the existence of an equilibrium point in neoclassical economic growth models.

More generally [71], the turnpike property stipulates that the qualitative behaviour of optimally controlled dynamics can be subdivided into three regions. Two of them are located at the beginning and the end of the control horizon. There, the optimal dynamics is strongly affected by inherently non-universal constraints imposed by the boundary conditions: the form of the initial state of the system, its final target state or, alternatively, the presence of a terminal cost, together with the penalty on the choice of controls that adapts to match these conditions. When the control horizon becomes large, or the intensity of the penalty on controls, in our case measured by \(g\), becomes smaller, these regions are compressed to boundary layers, and the behaviour in the bulk converges exponentially to the turnpike. There, the form of the running cost dominates the dynamics.

In our setup, the running cost is specified by the momentum variance. The numerical integration of the centre manifold equations (49 ), (50 ) in Fig. 1 shows how the turnpike behaviour corresponds to maintaining the momentum variance close to a constant value near its Maxwell-Boltzmann equilibrium value, equal to unity in our non-dimensional units.
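The exponential boundary layers characteristic of turnpike behaviour already appear in the simplest linear-quadratic toy model, minimising \(\int_0^{t_{\mathfrak{f}}}(x_t^2+u_t^2)\,\mathrm{d}t\) subject to \(\dot{x}_t=u_t\) with fixed endpoints; this model and its parameters are illustrative assumptions, unrelated to the oscillator dynamics, but display the same structure. The Euler-Lagrange equation \(\ddot{x}_t=x_t\) gives the extremal \(x_t=\big(a\sinh(t_{\mathfrak{f}}-t)+b\sinh t\big)/\sinh t_{\mathfrak{f}}\), which hugs the turnpike \(x=0\) in the bulk and deviates exponentially near the endpoints, as the Python sketch below checks.

```python
import numpy as np

# Closed-form extremal of the toy LQ problem: minimise the integral of
# x^2 + u^2 with dx/dt = u, x(0) = a, x(tf) = b (illustrative values).
a, b, tf = 1.0, 2.0, 20.0
t = np.linspace(0.0, tf, 2001)
x = (a * np.sinh(tf - t) + b * np.sinh(t)) / np.sinh(tf)

# In the bulk the solution is exponentially close to the turnpike x = 0,
# while boundary layers of width O(1) match the endpoint conditions.
print(abs(x[len(t) // 2]))   # mid-horizon value, of order exp(-tf/2)
print(x[0], x[-1])           # boundary values a and b are matched exactly
```

In our problem the role of the turnpike \(x=0\) is played by the centre manifold, along which the momentum variance stays near its equilibrium value.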

11 Conclusions and outlooks↩︎

Optimal control theory problems of the form considered in this paper make a neat distinction between the control horizon (when the system is externally steered between target states) and the time interval when the system evolution is only monitored. This distinction unavoidably implies the existence of non-analytic mathematical indicators that jump from one value to another when the control is switched on or off. The problem is therefore how to properly interpret the role of indicators, when comparing predictions of mathematical optimal control models to experimental results.

Our interpretation is that thermodynamic quantities at the end of an optimal control horizon are only unambiguously defined when they are pure functions of the system’s state variables. In agreement with standard formulations of the theory [9], excluding an explicit dependence on control protocols avoids identifying, by construction, terminal costs with markers of intrinsic discontinuity in mathematical models. For instance, the work done on the system is well defined within an optimal control model if the internal energy is a (pure function of the) system’s state variables at the beginning and end of the control horizon. This motivates a dynamic model for the evolution of the mechanical potential acting on the system. In doing so, we also address the issue that in actual laboratory implementations control protocols cannot be switched on and off instantaneously; ignoring this may lead to experimental setups that incur unwanted consequences. For instance, [28] warns that neglecting the finite response time of reactive elements of tuned circuits for resonant driving of electric and magnetic fields leads to signal distortions.

In addressing these difficulties, we need to consider higher order Markov decision processes, where the thermodynamic quantities do not specify a running cost that is convex in the controls. A solution with a straightforward physical justification [39] is to restrict admissible controls to take values on a compact set. Mathematically, the resulting formulation requires the construction of optimal controls via a synthesis procedure [47]. Actual computation of synthesised trajectories may, however, become challenging. Based on the theory of self-concordant functions in direct optimal control methods [42]–[44], we demonstrate that, for practical purposes, modifying thermodynamic cost functionals to include a convex penalty on controls produces qualitatively and, often, quantitatively equivalent results.
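The relation between hard confinement and smooth barriers can be illustrated on a scalar toy problem. In the Python sketch below (an illustrative example, not our cost functional) the logarithmic barrier \(-g\log(\Lambda^2-u^2)\) keeps the minimiser strictly inside \([-\Lambda,\Lambda]\), and as \(g\to 0\) the minimiser converges to the hard-constrained optimum.

```python
import numpy as np

# Toy problem: minimise f(u) = (u - 2)^2 over the hard constraint |u| <= 1.
# The hard-constrained optimum sits at the boundary u = 1.
Lam = 1.0
u = np.linspace(-Lam + 1e-6, Lam - 1e-6, 400_001)

def barrier_minimiser(g):
    # Smooth surrogate: f(u) - g * log(Lam^2 - u^2).  The barrier diverges
    # at the boundary, so the minimiser is strictly interior for g > 0.
    cost = (u - 2.0) ** 2 - g * np.log(Lam**2 - u**2)
    return u[np.argmin(cost)]

mins = [barrier_minimiser(g) for g in (1e-1, 1e-2, 1e-3, 1e-4)]
print(mins)   # monotonically approaches the hard-constrained optimum u = 1
```

This is the mechanism by which interior-point solvers such as IPOpt handle confinement in a compact set, and it mirrors the convergence of the logarithmic penalty 18 towards the hard penalty 15 observed in Fig. 2.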

This can be explained in two ways. From the numerical point of view, the most efficient available algorithms for direct optimisation actually model confinement in a compact set by means of self-concordant smooth barriers [33]. From the analytic point of view, we demonstrate by asymptotic analysis that, independently of the penalty imposed on admissible protocols, in the bulk of the control horizon optimal evolution tends to remain close to a universal centre manifold, described by fewer degrees of freedom. The system then deviates from the centre manifold only at the beginning and end of the horizon, to match boundary conditions.

This behaviour is the turnpike phenomenon often observed in optimal control [71]. The phenomenon explains the observed thermodynamic efficiency of heuristic protocols in engineered swift equilibration [5], [13]. It also clarifies, in our view conclusively, that the predictive power of models identifying the control with the microscopic potential (see e.g. [46] and references within) indeed refers to the turnpike or bulk time interval. The idea of optimal protocols with jumps at the boundaries should be discarded as a potentially misleading metaphor for unresolved time scales in a model.

In the present paper we restricted our study to Gaussian models with linear dynamics. Similar ideas can be also applied to feedback control for transitions between states described by more general distributions in phase space, such as those entering Landauer’s model of bit erasure [11]. We plan this extension to be the subject of forthcoming work.

12 Acknowledgments↩︎

The authors gratefully acknowledge Marco Baldovin, Dario Lucente and Jukka Pekola for many discussions and insights. The authors wish to acknowledge CSC – IT Center for Science, Finland, for computational resources. JS acknowledges financial support from the Doctoral Programme in Mathematics and Statistics at the University of Helsinki and the centre of Excellence FiRST of the Research Council of Finland (funding decision number: 346305).

13 Absence of abnormal extremals↩︎

Abnormal extremals are non-trivial stationary conditions occurring if we set \[\mathscr{y}_{t}^{\scriptscriptstyle{(0)}}=0\] in (24 ). If we now undertake the analysis of section 6.2 under this additional condition, we verify that (35 ) is not affected, and (37 ) becomes \[\begin{align} \mathscr{y}_{t}^{\scriptscriptstyle{(1)}}=\frac{\mathscr{y}_{t}^{\scriptscriptstyle{(3)}}\,\mathscr{x}_{t}^{\scriptscriptstyle{(3)}}}{\mathscr{x}_{t}^{\scriptscriptstyle{(1)}}} \nonumber \end{align}\] whence instead of (38 ), we only arrive at \[\begin{align} 0=2\,\varepsilon\,\mathscr{x}_{t}^{\scriptscriptstyle{(1)}}\,\mathscr{y}_{t}^{\scriptscriptstyle{(3)}} \nonumber \end{align}\] which yields an empty condition on the controls. We thus conclude as in [39], [50] that the thermodynamic cost minimisation cannot be obtained by means of abnormal extremals.

References↩︎


[1]
Martínez I A, Roldán E, Dinis L, Petrov D, Parrondo J M R and Rica R A 2016 Nature Physics12 67–70 https://doi.org/10.1038/nphys3518.
[2]
Martínez I A, Petrosyan A, Guéry-Odelin D, Trizac E and Ciliberto S 2016 Nature Physics12 843–846 ISSN 1745-2481 https://doi.org/10.1038/nphys3758.
[3]
Albay J A C, Lai P Y and Jun Y 2020 Applied Physics Letters116 ISSN 1077-3118 http://dx.doi.org/10.1063/1.5143602.
[4]
Gonzalez-Ballestero C, Aspelmeyer M, Novotny L, Quidant R and Romero-Isart O 2021 Science374 ISSN 1095-9203 http://dx.doi.org/10.1126/science.abg3027.
[5]
Dago S, Ciliberto S and Bellon L 2023 Proceedings of the National Academy of Sciences120 ISSN 1091-6490 http://dx.doi.org/10.1073/pnas.2301742120.
[6]
Cunuder A L, Martinez I, Petrosyan A, Guéry-Odelin D, Trizac E and Ciliberto S 2016 Applied Physics Letters109 113502 https://doi.org/10.1063/1.4962825.
[7]
Barros N, Ciliberto S and Bellon L 2024 Physical Review Letters133 057101 ISSN 1079-7114 https://doi.org/10.1103/PhysRevLett.133.057101.
[8]
Raynal D, de Guillebon T, Guéry-Odelin D, Trizac E, Lauret J S and Rondin L 2023 Physical Review Letters131 087101 https://doi.org/10.1103/PhysRevLett.131.087101.
[9]
Liberzon D 2012 Calculus of Variations and Optimal Control Theory. A Concise Introduction(Princeton, New Jersey: Princeton University Press) ISBN 978-0-691-15187-8 http://press.princeton.edu/titles/9760.html.
[10]
Bechhoefer J 2021 Control Theory for Physicists (Cambridge: Cambridge University Press) ISBN 9781009028493 https://www.cambridge.org/gb/academic/subjects/physics/mathematical-methods/control-theory-physicists?format=AR.
[11]
Aurell E, Gawȩdzki K, Mejía-Monasterio C, Mohayaee R and Muratore-Ginanneschi P 2012 Journal of Statistical Physics147 487–505 https://doi.org/10.1007/s10955-012-0478-x.
[12]
Villani C 2009 Optimal transport: old and new(Grundlehren der mathematischen Wissenschaften vol 338) (Springer) https://link.springer.com/book/10.1007/978-3-540-71050-9.
[13]
Chupeau M, Ciliberto S, Guéry-Odelin D and Trizac E 2018 New Journal of Physics20 075003 ISSN 1367-2630 http://dx.doi.org/10.1088/1367-2630/aac875.
[14]
Li G, Quan H T and Tu Z C 2017 Physical Review E96 ISSN 2470-0053 http://dx.doi.org/10.1103/PhysRevE.96.012144.
[15]
Frim A G, Zhong A, Chen S F, Mandal D and DeWeese M R 2021 Physical Review E103 ISSN 2470-0053 http://dx.doi.org/10.1103/PhysRevE.103.L030102.
[16]
Guéry-Odelin D, Jarzynski C, Plata C A, Prados A and Trizac E 2023 Reports on Progress in Physics86 035902 ISSN 1361-6633 http://dx.doi.org/10.1088/1361-6633/acacad.
[17]
Barros N, Whitelam S, Ciliberto S and Bellon L 2025 Physical Review E111 044114 ISSN 2470-0053 (Preprint) https://doi.org/10.1103/PhysRevE.111.044114.
[18]
Collin D, Ritort F, Jarzynski C, Smith S B, Jr I T and Bustamante C 2005 Nature437 231–234 https://doi.org/10.1038/nature04061.
[19]
Chetrite R, Muratore-Ginanneschi P and Schwieger K 2021 The European Physical Journal H46 1–29 https://doi.org/10.1140/epjh/s13129-021-00032-7.
[20]
Plata C A, Prados A, Trizac E and Guéry-Odelin D 2021 Physical Review Letters127 ISSN 1079-7114 http://dx.doi.org/10.1103/PhysRevLett.127.190605.
[21]
Pavliotis G A 2014 Stochastic Processes and Applications: Diffusion Processes, the Fokker-Planck and Langevin Equations(New York: Springer New York) ISBN 978-1-4939-1322-0 https://link.springer.com/book/10.1007/978-1-4939-1323-7.
[22]
Schmidt R, Carusela M F, Pekola J P, Suomela S and Ankerhold J 2015 Physical Review B91 224303 https://doi.org/10.1103/PhysRevB.91.224303.
[23]
Muratore-Ginanneschi P, Mejía-Monasterio C and Peliti L 2013 Journal of Statistical Physics150 181–203 https://link.springer.com/article/10.1007/s10955-012-0676-6.
[24]
Lent C S, Orlov A O, Porod W and Snider G L (eds) 2019 Energy Limits in Computation(Cham: Springer International Publishing).
[25]
Schmiedl T and Seifert U 2008 EPL (Europhysics Letters)81 20003 ISSN 1286-4854 https://iopscience.iop.org/article/10.1209/0295-5075/81/20003.
[26]
Muratore-Ginanneschi P and Schwieger K 2015 EPL (Europhysics Letters)112 20002 https://iopscience.iop.org/article/10.1209/0295-5075/112/20002.
[27]
Guéry-Odelin D, Ruschhaupt A, Kiely A, Torrontegui E, Martı́nez-Garaot S and Muga J G 2019 Review of Modern Physics91 045001 https://link.aps.org/doi/10.1103/RevModPhys.91.045001.
[28]
Hirose M and Cappellaro P 2018 Quantum Information Processing17 1–17 (Preprint) https://doi.org/10.1007/s11128-018-1845-6.
[29]
Chupeau M, Besga B, Guéry-Odelin D, Trizac E, Petrosyan A and Ciliberto S 2018 Physical Review E98 ISSN 2470-0053 http://dx.doi.org/10.1103/PhysRevE.98.010104.
[30]
Fenichel N 1979 Journal of Differential Equations31 53–98 ISSN 0022-0396 https://doi.org/10.1016/0022-0396(79)90152-9.
[31]
Guckenheimer J and Kuehn C 2009 SIAM Journal on Applied Dynamical Systems8 854–879 ISSN 1536-0040 https://doi.org/10.1137/080741999.
[32]
Cardin P T and Teixeira M A 2017 SIAM Journal on Applied Dynamical Systems16 1425–1452 ISSN 1536-0040 http://dx.doi.org/10.1137/16M1067202.
[33]
Wächter A 2009 Short Tutorial: Getting Started With Ipopt in 90 Minutes Combinatorial Scientific Computing(Dagstuhl Seminar Proceedings (DagSemProc) vol 9061) ed Naumann U, Schenk O, Simon H D and Toledo S (Dagstuhl, Germany: Schloss Dagstuhl – Leibniz-Zentrum für Informatik) pp 1–17 ISSN 1862-4405 https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.09061.16.
[34]
Tasseff B, Coffrin C, Wächter A and Laird C 2019 eprint arXiv:1909.08104https://doi.org/10.48550/arXiv.1909.08104.
[35]
Boscain U, Sigalotti M and Sugny D 2021 PRX Quantum2 030203 https://doi.org/10.1103/PRXQuantum.2.030203.
[36]
Nelson E 2001 Dynamical Theories of Brownian Motion 2nd ed (Princeton University Press) ISBN-13: 978-0-691-07950-9 https://web.math.princeton.edu/~nelson/books.html.
[37]
Peliti L and Pigolotti S 2020 Stochastic Thermodynamics(Princeton and Oxford: Princeton University Press) ISBN 9780691215525 https://press.princeton.edu/books/ebook/9780691215525/stochastic-thermodynamics.
[38]
Chétrite R and Gawȩdzki K 2008 Communications in Mathematical Physics282 469–518 (Preprint) https://doi.org/10.1007/s00220-008-0502-9.
[39]
Baldovin M, Ben Yedder I, Plata C A, Raynal D, Rondin L, Trizac E and Prados A 2024 Physical Review Letters135 097102 (Preprint) https://doi.org/10.1103/pss4-b69w.
[40]
Rao A V 2009 A Survey of Numerical Methods for Optimal Control (AAS 09-334) Astrodynamics 2009: proceedings of the AAS/AIAA Astrodynamics Specialist Conference held August 9-13 2009, Pittsburgh, Pennsylvania(Advances in Astronautical Sciences no 135) (American Astronautical Society / American Institute of Aeronautics and Astronautics) pp 497–528 ISBN 9780877035572 https://www.anilvrao.com/Publications/ConferencePublications/trajectorySurveyAAS.pdf.
[41]
Teo K L, Li B, Yu C and Rehbock V 2021 Applied and Computational Optimal Control(Springer Optimization and Its Applications vol 171) (Springer) ISBN 978-3-030-69913-0 https://doi.org/10.1007/978-3-030-69913-0.
[42]
Karmarkar N 1984 Combinatorica4 373–395 ISSN 1439-6912 http://dx.doi.org/10.1007/BF02579150.
[43]
Nesterov Y and Nemirovskii A 1994 Interior-Point Polynomial Algorithms in Convex Programming(Society for Industrial and Applied Mathematics) ISBN 9781611970791 http://dx.doi.org/10.1137/1.9781611970791.
[44]
Nemirovski A S and Todd M J 2008 Acta Numerica17 191–234 ISSN 1474-0508 http://dx.doi.org/10.1017/S0962492906370018.
[45]
Muratore-Ginanneschi P and Schwieger K 2014 Physical Review E90 060102(R) https://doi.org/10.1103/PhysRevE.90.060102.
[46]
Sanders J, Baldovin M and Muratore-Ginanneschi P 2024 Journal of Statistical Physics191 117 https://doi.org/10.1007/s10955-024-03320-w.
[47]
Boscain U and Piccoli B 2005 An Introduction to Optimal Control Contrôle non linéaire et applications: Le cours du CIMPA ed Sari T (Hermann) pp 19–66 http://www.cmapx.polytechnique.fr/~boscain/.
[48]
Gomez-Marin A, Schmiedl T and Seifert U 2008 The Journal of Chemical Physics129 024114 https://doi.org/10.1063/1.2948948.
[49]
Hegerfeldt G C 2013 Physical Review Letters111 260501 ISSN 1079-7114 https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.111.260501.
[50]
Muratore-Ginanneschi P and Schwieger K 2017 Entropy19 379 https://doi.org/10.3390/e19070379.
[51]
Lucente D, Manacorda A, Plati A, Sarracino A and Baldovin M 2025 Entropy27 268 ISSN 1099-4300 (Preprint) http://dx.doi.org/10.3390/e27030268.
[52]
Tin S K, Kopell N and Jones C K R T 1994 SIAM Journal on Numerical Analysis31 1558–1576 ISSN 1095-7170 https://doi.org/10.1137/0731081.
[53]
Hayes M, Kaper T J, Kopell N and Ono K 1998 International Journal of Bifurcation and Chaos08 189–209 ISSN 1793-6551 https://doi.org/10.1142/S0218127498000140.
[54]
Verhulst F 2005 Methods and applications of singular perturbations: boundary layers and multiple timescale dynamics Texts in applied mathematics (New York: Springer) ISBN 978-0387-22966-9 https://doi.org/10.1007/0-387-28313-7.
[55]
Pavliotis G A and Stuart A M 2008 Multiscale methods: averaging and homogenization(Texts in applied mathematics vol 53) (New York: Springer) ISBN 978-0-387-73828-4 https://doi.org/10.1007/978-0-387-73829-1.
[56]
Hartmann C, Latorre J C, Zhang W and Pavliotis G A 2014 Journal of Computational Dynamics1 279–306 ISSN 2158-2491 https://www.aimsciences.org/article/doi/10.3934/jcd.2014.1.279.
[57]
Bo S and Celani A 2017 Physics Reports670 1–59 ISSN 0370-1573 http://dx.doi.org/10.1016/j.physrep.2016.12.003.
[58]
Risken H 1996 The Fokker-Planck equation: methods of solution and applications 3rd ed (Series in Synergetics vol 18) (Springer) ISBN 978-3-540-61530-9 https://doi.org/10.1007/978-3-642-61544-3.
[59]
Halperin I and Agranovich G 2014 Functional Differential Equations21 119–136 https://www.functionaldifferentialequations.com/index.php/fde/article/view/570.
[60]
Ascher U 2015 Collocation Methods(Berlin, Heidelberg: Springer Berlin Heidelberg) p 224–227 ISBN 978-3-540-70529-1 https://doi.org/10.1007/978-3-540-70529-1_102.
[61]
Pulsipher J L, Zhang W, Hongisto T J and Zavala V M 2022 Computers & Chemical Engineering156 107567 ISSN 0098-1354 https://doi.org/10.1016/j.compchemeng.2021.107567.
[62]
Rackauckas C and Nie Q 2017 Journal of Open Research Software5 15 https://openresearchsoftware.metajnl.com/articles/10.5334/jors.151.
[63]
Jay L O 2015 Lobatto Methods(Springer Berlin Heidelberg) p 817–826 ISBN 9783540705291 http://dx.doi.org/10.1007/978-3-540-70529-1_123.
[64]
Sanders J 2025 GitHub repositoryhttps://github.com/julia-sand/swift_equilibrium.
[65]
Sanders J, Baldovin M and Muratore-Ginanneschi P 2025 Physical Review E111 ISSN 2470-0053 http://dx.doi.org/10.1103/PhysRevE.111.034127.
[66]
Schmiedl T and Seifert U 2007 Physical Review Letters98 108301 (Preprint) https://doi.org/10.1103/PhysRevLett.98.108301.
[67]
Aurell E, Mejía-Monasterio C and Muratore-Ginanneschi P 2011 Physical Review Letters106 250601 https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.106.250601.
[68]
Aurell E, Mejía-Monasterio C and Muratore-Ginanneschi P 2012 Physical Review E85(2) 020103(R) (Preprint).
[69]
Samuelson P A 1965 The American Economic Review55 486–496 https://www.jstor.org/stable/1814560.
[70]
Von Neumann J 1945 The Review of Economic Studies13 1 ISSN 0034-6527 http://www.jstor.org/stable/2296111.
[71]
Trélat E and Zuazua E 2015 Journal of Differential Equations258 81–114 ISSN 0022-0396 http://dx.doi.org/10.1016/j.jde.2014.09.005.