Perfecting Periodic Trajectory Tracking:
Model Predictive Control with a Periodic Observer (\(\Pi\)-MPC)

Luis Pabon\(^{1}\), Johannes Köhler\(^{2}\), John Irvin Alora\(^{1}\), Patrick Benito Eberhard\(^{2}\), Andrea Carron\(^{2}\),
Melanie N. Zeilinger\(^{2}\), Marco Pavone\(^{1}\)


Abstract

In Model Predictive Control (MPC), discrepancies between the actual system and the predictive model can lead to substantial tracking errors and significantly degrade performance and reliability. While such discrepancies can be alleviated with more complex models, this often complicates controller design and implementation. By leveraging the fact that many trajectories of interest are periodic, we show that perfect tracking is possible when incorporating a simple observer that estimates and compensates for periodic disturbances. We present the design of the observer and the accompanying tracking MPC scheme, proving that their combination achieves zero tracking error asymptotically, regardless of the complexity of the unmodelled dynamics. We validate the effectiveness of our method, demonstrating asymptotically perfect tracking on a high-dimensional soft robot with nearly 10,000 states and a fivefold reduction in tracking errors compared to a baseline MPC on small-scale autonomous race car experiments.

Supplementary Material

Code: https://github.com/StanfordASL/Pi-MPC

Video: https://youtu.be/vBgiodXCQVQ

1 Introduction

Model Predictive Control (MPC) [1] is a state-of-the-art method for reference tracking due to its effectiveness in enforcing safety constraints and handling nonlinear models. However, tracking performance is limited by the accuracy of the nominal model used to predict the actual system’s behavior. This prediction model is typically an approximation of the true and unknown underlying dynamics. The resulting model mismatch can lead to significant tracking errors and makes achieving perfect tracking with MPC challenging, if not impossible.

Recent works have addressed this issue by learning the residual dynamics of a system with deep neural networks [2], Gaussian processes [3], [4], or other data-driven methods. With enough data, these approaches can achieve high tracking accuracy. However, optimizing over complex, expressive, nonlinear models increases the computational burden and can complicate the controller design and implementation.

Repetitive tasks, and hence periodic trajectory tracking, play a crucial role across a broad spectrum of applications in robotics and control. Representative examples include legged locomotion [5], industrial manipulation [6], and autonomous racing [7]. Leveraging the periodic nature of these tasks presents an opportunity to achieve perfect tracking without the need for complex, data-driven models.

A similar observation has been made in the context of setpoint tracking. Offset-free MPC schemes [8]–[11] ‘learn’ only what is necessary to achieve the control task. The key idea is to augment the model and use a disturbance observer that estimates a constant offset to account for steady-state error. Thus, despite using a simplified model, the MPC can achieve exact convergence – but only to a desired setpoint.

Statement of Contributions: In this work, we propose an extension of offset-free MPC that asymptotically achieves zero tracking error for general periodic reference signals, i.e., perfect tracking, despite a large model mismatch. Our contributions are as follows:

  1. We present the design of a linear observer to estimate a periodic disturbance that captures model mismatch throughout the period. The observer ensures that the model’s output predictions match measurements from the real system upon convergence.

  2. We incorporate these estimated disturbances in a simple tracking MPC and provide sufficient conditions to theoretically guarantee that the scheme achieves zero tracking error asymptotically.

  3. While the initial presentation considers linear prediction models, we show how the method can be applied to nonlinear models and formulate a simple nonlinear MPC scheme that also achieves exact periodic tracking.

  4. Lastly, we validate our approach through

    1. Finite Element Method (FEM) simulations on a 9768-dim. soft robot using an MPC based on a learned 6-dim. linear model (Fig. [fig:titlefig]A),

    2. hardware experiments on a miniature race car using nonlinear MPC based on a simple kinematic bicycle model (Fig. [fig:titlefig]B).

    Despite the use of simple models, our method consistently yields minimal tracking errors in both scenarios.

Related Work: The field of control theory has extensively explored the estimation and rejection of periodic disturbances. Frameworks like Iterative Learning Control (ILC) [12] and Repetitive Control (RC) [13] can improve tracking accuracy by ‘learning’ from past errors [14]. ILC is tailored for scenarios where systems undergo a state reset with each new operation cycle, whereas RC is designed for systems continuously transitioning across cycles.

Following the Internal Model Principle [15], RC incorporates a periodic signal generator in the controller, allowing it to reject periodic disturbances. Formulations combining RC and MPC were proposed in [16], [17], showing success on periodically time-varying linear systems. However, RC and ILC directly utilize the measured error from the last cycle to update the prediction model, which can lead to poor performance due to non-repeating errors, e.g., from measurement noise [18].

In contrast, offset-free MPC methods [8]–[11] avoid such pitfalls by using a more general disturbance observer to filter deterministic disturbances caused by model mismatch. The design ensures zero steady-state error and can balance noise suppression and convergence rate by tuning the observer. While this method is successfully used in many implementations [19], [20], its focus is mainly on setpoint tracking.

From a technical perspective, our work is related to [21], which generalizes offset-free MPC methods to references generated by arbitrary, unstable dynamics. By focusing on periodic problems, we can provide a simpler parametrization and design for the observer (cf. 4 and [21]) and an effective MPC design for nonlinear systems (Sec. 5). The problem of periodic optimal control with inexact models is also addressed in [22] using periodic disturbance observers and modifier adaptation. However, this implementation utilizes knowledge of gradients of the actual system.

Our approach merges the principles of disturbance observers and repetitive control. By augmenting the MPC nominal model with a lifted periodic disturbance, our approach extends RC to nonlinear systems with constraints. Using an observer (instead of direct updates) allows users to balance noise reduction and convergence speed. By incorporating offset-free MPC techniques and design principles into RC, we ensure perfect asymptotic tracking of periodic signals despite model mismatches.

Outline: We begin by describing the problem setup (Sec. 2). Then, we present the proposed periodic disturbance observer (Sec. 3) and the corresponding periodic tracking MPC (Sec. 4), including convergence guarantees (Thm. 1). While this exposition considers linear prediction models for simplicity, we also discuss how the method naturally generalizes to nonlinear prediction models (Sec. 5). Lastly, we provide results from simulation and hardware experiments (Sec. 6) and present our conclusions (Sec. 7).

2 Problem Setup

Notation: We denote the quadratic norm with respect to a positive definite matrix \(Q = Q^\top\) by \(\| x \|_Q^2\vcentcolon=x^\top Qx\). Non-negative integers are denoted by \(\mathbb{Z}_{\geq 0}\), positive integers by \(\mathbb{Z}_{>0}\), and integers in the interval \([a,b]\) by \(\mathbb{Z}_{[a,b]}\). The \(n \times n\) identity matrix is denoted by \(I_n\). The spectrum of a matrix \(A\) is denoted by \(\sigma(A)\). The Kronecker product between matrices \(A\) and \(B\) is denoted by \(A \otimes B\).

We consider a discrete-time nonlinear system \[\begin{align} \label{eq:plant} \begin{aligned} & x^{\mathrm{f}}(t+1)=f^{\mathrm{f}}\left(x^{\mathrm{f}}(t), u(t)\right),\\ & y^{\mathrm{f}}(t)=g^{\mathrm{f}}\left(x^{\mathrm{f}}(t)\right), \\ & z(t) = Hy^{\mathrm{f}}(t), \end{aligned} \end{align}\tag{1}\] where \(x^{\mathrm{f}}\in \mathbb{R}^{n}\) represents the system state, \(u \in \mathbb{R}^{n_u}\) the control input, and \(y^{\mathrm{f}}\in \mathbb{R}^{n_y}\) the output measured at each time \(t \in \mathbb{Z}_{\geq 0}\). The functions \(f^{\mathrm{f}}\) and \(g^{\mathrm{f}}\) are assumed to be unknown. The controlled variable, \(z \in \mathbb{R}^{n_{r}}\), is a linear combination of the measured outputs where, without loss of generality, we assume \(H\) has full row rank (\(n_r \leq n_y\)).

The primary objective for the controlled variable \(z\) is to track a periodic reference signal \(r(t) \in \mathbb{R}^{n_r}\) for all time \(t \in \mathbb{Z}_{\geq 0}\). We denote the reference period as \(N \in \mathbb{Z}_{>0}\) such that periodicity of \(r\) implies \(r(t+N)=r(t)\).

We consider a linear time-invariant (LTI) nominal model \[\begin{align} \label{eq:nominal} \begin{aligned} &x(t+1) = A x(t) + B u(t),\\ &y(t) = C x(t), \end{aligned} \end{align}\tag{2}\] with output \(y \in \mathbb{R}^{n_y}\) and state \(x \in \mathbb{R}^{n_x}\), allowing for \(n_x\) to be different from \(n\). We assume that \((A, B)\) is controllable, \((A, C)\) is observable, and \(C\) has full row rank. The model is subject to the constraints \[x(t) \in \mathcal{X}, \; u(t) \in \mathcal{U}, \quad \forall t \in\mathbb{Z}_{\geq 0},\] where the sets \(\mathcal{X}\) and \(\mathcal{U}\) are assumed to be compact.

The goal is to design an MPC scheme where the controlled variable asymptotically converges to the periodic reference, i.e., \[\lim _{t \rightarrow \infty}\|z(t) - r(t)\|=0.\] Hence, we assume there exist control inputs such that the system 1 can track the reference while satisfying constraints.

To this end, we design a linear observer that estimates periodic disturbances (Sec. 3). We then combine it with a tracking MPC formulation and establish convergence guarantees (Sec. 4). While we initially consider an LTI model to streamline the exposition, we also extend the method for application with nonlinear models (Sec. 5).

3 Periodic Disturbance Observer Design

In this section, we introduce a simple linear observer 6 to estimate periodic disturbances that account for the model mismatch. In particular, we first present an augmented model 4 , discuss its observability (Prop. 1), and end by characterizing the observer’s convergence properties (Prop. 2).

To capture the model mismatch of the true system 1 with respect to the nominal model 2 throughout the period \(N\), we estimate a ‘lifted’ disturbance \(\mathbf{d}\in \mathbb{R}^{n_yN}\). Specifically, the lifted disturbance corresponds to \(N\) disturbances \[\label{eq:Ndists} \mathbf{d}(t) = \begin{bmatrix} {d_{0}(t)}^\top & {d_1(t)}^\top & \cdots & {d_{N-1}(t)}^\top \end{bmatrix}^\top,\tag{3}\] where each \(d_{k}(t) \in \mathbb{R}^{n_y}\) represents the disturbance prediction computed at time \(t\) for the expected disturbance at \(k\) time steps in the future, for \(t \in \mathbb{Z}_{\geq 0}, \, k\in \mathbb{Z}_{[0, N - 1]}\).

Our goal will be to augment the nominal model 2 with these periodic disturbances in such a way that the augmented model is observable, i.e., we can estimate both the state and disturbance vectors online. For this, we introduce the matrices \(\overline{B}\in \mathbb{R}^{n_x \times n_y}\) and \(\overline{C}\in \mathbb{R}^{n_y \times n_y}\) as design choices for how the disturbances should act on the state and output, respectively. Augmenting the nominal model 2 with the lifted disturbance 3 yields \[\begin{align} \label{eq:LDO} \begin{aligned} \begin{bmatrix} x(t+1) \\ \mathbf{d}(t+1) \end{bmatrix} = \begin{bmatrix} A & \overline{B}S_{\mathrm{sel}}\\ 0 & S_d \end{bmatrix} &\begin{bmatrix} x(t) \\ \mathbf{d}(t) \end{bmatrix} + \begin{bmatrix} B \\ 0 \end{bmatrix} u(t),\\ y(t) = \begin{bmatrix} C & \overline{C}S_{\mathrm{sel}} \end{bmatrix} &\begin{bmatrix} x(t) \\ \mathbf{d}(t) \end{bmatrix} , \end{aligned} \end{align}\tag{4}\] where \(S_{\mathrm{sel}}= \begin{bmatrix} I_{n_y} & 0 & \cdots & 0 \end{bmatrix} \in \mathbb{R}^{n_y\times n_yN}\) is a selection matrix that picks out the current (first) disturbance, and \(S_d\) advances the disturbance prediction by one time step using the cyclic forward shift matrix \(S \in \mathbb{R}^{N \times N}\), defined as \[\label{eq:sd} S= \begin{bmatrix} 0 & 1 & \cdots & 0 \\ \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \\ 1 & 0 & \cdots & 0 \end{bmatrix}, \quad S_d= S \otimes I_{n_y}.\tag{5}\] In particular, we have that \(S_d^N = I_{n_yN}\). Due to the block structure of matrix \(S_d\), all of its eigenvalues \(\lambda_k=e^{i 2 \pi k / N}, \, k\in \mathbb{Z}_{[0, N - 1]}\) lie on the unit circle, i.e., \(\vert \lambda_k\vert =1\), and have algebraic and geometric multiplicity of \(n_y\). The case \(N=1\) corresponds to a constant disturbance, which recovers the offset-free MPC disturbance model [8]–[11].
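
To make this construction concrete, the following minimal Python sketch builds \(S\), \(S_d\), \(S_{\mathrm{sel}}\), and the augmented model 4 for the output-disturbance choice of Remark 1; all dimensions and the nominal matrices below are arbitrary placeholders, not values from the paper's experiments.

```python
# Minimal sketch of the lifted disturbance model (4)-(5); dimensions and nominal model are placeholders.
import numpy as np

n_x, n_u, n_y, N = 4, 2, 2, 50            # state/input/output dimensions and period length
rng = np.random.default_rng(0)
A = 0.9 * np.eye(n_x) + 0.05 * rng.standard_normal((n_x, n_x))   # placeholder nominal model (2)
B = 0.1 * rng.standard_normal((n_x, n_u))
C = np.hstack([np.eye(n_y), np.zeros((n_y, n_x - n_y))])

# Cyclic forward shift (5) and its lifted versions
S = np.roll(np.eye(N), 1, axis=1)          # S[i, i+1] = 1 and S[N-1, 0] = 1
S_d = np.kron(S, np.eye(n_y))              # advances the lifted disturbance by one step
S_sel = np.hstack([np.eye(n_y), np.zeros((n_y, n_y * (N - 1)))])  # selects d_0

# Disturbance model choice (Remark 1): pure output disturbance
B_bar = np.zeros((n_x, n_y))
C_bar = np.eye(n_y)

# Augmented model (4)
A_aug = np.block([[A, B_bar @ S_sel],
                  [np.zeros((n_y * N, n_x)), S_d]])
B_aug = np.vstack([B, np.zeros((n_y * N, n_u))])
C_aug = np.hstack([C, C_bar @ S_sel])

assert np.allclose(np.linalg.matrix_power(S_d, N), np.eye(n_y * N))   # S_d^N = I
```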

Next, we design an observer to estimate the state \(x\) and the disturbance \(\mathbf{d}\) of the augmented model 4 . The following proposition clarifies when this model is observable.

Proposition 1. The augmented system 4 is observable if and only if \[\label{eq:obsRank} \operatorname{rank} \begin{bmatrix} A - \lambda I_{n_x} & \overline{B}\\ C & \overline{C} \end{bmatrix} = n_x + n_y, \quad \text{for all } \lambda \in \sigma(S_d).\qquad{(1)}\]

The proof is provided in the appendix.

When Prop. 1 holds, i.e., 4 is observable, the periodic disturbance \(\mathbf{d}\) and the state \(x\) can be uniquely reconstructed from a trajectory of \(y\) and \(u\). This ensures that a linear observer can be designed to estimate \(\mathbf{d}\), \(x\). In turn, these estimates will be used in the ensuing predictions of the model to enable the estimated output to converge to the true output.

Therefore, we must choose \(\overline{B},\overline{C}\) such that the observability condition ?? holds. The following remark discusses how to select disturbance models to satisfy this condition.

Remark 1. Suppose for simplicity that the eigenvalues of \(A\) and \(S_d\) are distinct4. Then a simple choice of a disturbance model consists of an output disturbance, \(\overline{B}=0\), \(\overline{C}=I_{n_y}\), which satisfies condition ?? (cf. [10]). Alternatively, a pure input disturbance can be modeled by choosing \(\overline{C}=0\), and condition ?? reduces to choosing \(\overline{B}\) such that \(\det(C(A - \lambda I_{n_x})^{-1}\overline{B})\neq 0\) \(\forall \lambda\in\sigma(S_d)\).

The special case of full state measurement, i.e., \(C=I_{n_x}\), enables a simple design using \(\overline{B}=I_{n_x},\, \overline{C}=0\), which can even be directly applied to nonlinear models – see Sec. 5.

More guidelines and existing results for the choice of a disturbance model can be found in [8]–[10].
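
As a sanity check, the rank condition of Prop. 1 can be verified numerically for a candidate disturbance model. The sketch below builds on the matrices from the previous snippet; the function name and tolerance are ours.

```python
# Numerically check the rank condition of Prop. 1 for a given (A, C, B_bar, C_bar, N).
import numpy as np

def satisfies_rank_condition(A, C, B_bar, C_bar, N, tol=1e-9):
    n_x, n_y = A.shape[0], C.shape[0]
    for k in range(N):
        lam = np.exp(1j * 2 * np.pi * k / N)          # eigenvalues of S_d
        M = np.block([[A - lam * np.eye(n_x), B_bar],
                      [C, C_bar]])
        if np.linalg.matrix_rank(M, tol=tol) < n_x + n_y:
            return False
    return True
```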

To estimate the state and disturbance vectors online, we design a simple Luenberger observer: \[\begin{align} \label{eq:estimator} \begin{aligned} \begin{bmatrix} \hat{x}(t+1) \\ \mathbf{\hat{d}}(t+1) \end{bmatrix} &= \begin{bmatrix} A & \overline{B}S_{\mathrm{sel}}\\ 0 & S_d \end{bmatrix} \begin{bmatrix} \hat{x}(t) \\ \mathbf{\hat{d}}(t) \end{bmatrix} + \begin{bmatrix} B \\ 0 \end{bmatrix} u(t)\\ & + \begin{bmatrix} L_x \\ L_d \end{bmatrix} \left(-y^{\mathrm{f}}(t) + C \hat{x}(t) + \overline{C}S_{\mathrm{sel}}\mathbf{\hat{d}}(t) \right). \end{aligned} \end{align}\tag{6}\] Given observability, we can design a stable estimator 6 using standard techniques, e.g., pole placement or Kalman filtering. The design of \(L_x, L_d\) allows users to balance noise reduction against faster estimator convergence. When the input and output signals become periodic5, the observer converges to a periodic trajectory, characterized in the following proposition.
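
One possible gain design is a steady-state Kalman filter for the augmented model, sketched below in Python; the noise weights Q_e and R_e are tuning knobs we introduce purely for illustration, the augmented matrices are those constructed earlier, and the sign of the gain is chosen to match the innovation convention in 6 .

```python
# Minimal sketch: steady-state Kalman gain for the augmented model, used as L_x, L_d in (6).
import numpy as np
from scipy.linalg import solve_discrete_are

Q_e = np.eye(n_x + n_y * N)        # 'process noise' weight: larger entries -> faster adaptation
R_e = 1e2 * np.eye(n_y)            # 'measurement noise' weight: larger -> more noise filtering

P_est = solve_discrete_are(A_aug.T, C_aug.T, Q_e, R_e)          # filtering Riccati equation
K = A_aug @ P_est @ C_aug.T @ np.linalg.inv(C_aug @ P_est @ C_aug.T + R_e)
L_aug = -K                          # minus sign: (6) corrects with (y_hat - y_meas), not (y_meas - y_hat)
L_x, L_d = L_aug[:n_x], L_aug[n_x:]

def observer_step(x_hat, d_hat, u, y_meas):
    """One update of the Luenberger observer (6)."""
    innov = -y_meas + C @ x_hat + C_bar @ S_sel @ d_hat
    x_next = A @ x_hat + B_bar @ S_sel @ d_hat + B @ u + L_x @ innov
    d_next = S_d @ d_hat + L_d @ innov
    return x_next, d_next
```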

Proposition 2. Suppose the input and output signal are asymptotically \(N\)-periodic, i.e., \(u(t+N) = u(t)\) and \(y(t+N) = y(t)\) for \(t\rightarrow\infty\). Then, the estimator 6 converges to periodic trajectories \(\mathbf{\hat{x}}\), \(\mathbf{\hat{d}}\) that satisfy \[\begin{align} \label{eq:ObsSSR} \begin{bmatrix} A_N - S_x& B_N \\ C_N & 0 \end{bmatrix} \begin{bmatrix} \mathbf{\hat{x}}(t) \\ \mathbf{u}(t) \end{bmatrix} = \begin{bmatrix} - {\overline{B}}_N \mathbf{\hat{d}}(t)\\ \mathbf{y^{\mathrm{f}}}(t) - {\overline{C}}_N \mathbf{\hat{d}}(t) \end{bmatrix}, \end{align}\qquad{(2)}\] where we denote \(M_N \vcentcolon=I_N \otimes M\) for any matrix \(M\), define \(S_x\vcentcolon=S \otimes I_{n_x}\) as the block-cyclic permutation matrix, and introduce \(u_k(t) \vcentcolon=u(t+k)\) and \(y^{\mathrm{f}}_k(t) \vcentcolon=y^{\mathrm{f}}(t+k)\) to express the periodic trajectories in the limit \(t \to \infty\) as \[\begin{align} \mathbf{u}(t) &= \begin{bmatrix} u_0(t)^\top & u_{1}(t)^\top & \cdots & u_{N-1}(t)^\top \end{bmatrix}^\top,\\ \mathbf{y^{\mathrm{f}}}(t) &= \begin{bmatrix} {y^{\mathrm{f}}_0}(t)^\top & {y^{\mathrm{f}}_{1}}(t)^\top & \cdots & {y^{\mathrm{f}}_{N-1}}(t)^\top \end{bmatrix}^\top,\\ \mathbf{\hat{x}}(t) &= \begin{bmatrix} {\hat{x}_0}(t)^\top & {\hat{x}_{1}}(t)^\top & \cdots & {\hat{x}_{N-1}}(t)^\top \end{bmatrix}^\top. \end{align}\]

Periodic input and output with a stable observer 6 implies that \(\hat{x}\) and \(\mathbf{\hat{d}}\) asymptotically converge to a periodic trajectory with the same period [23]. Thus, as \(t \to \infty\), we have \(\mathbf{\hat{d}}(t+N) = \mathbf{\hat{d}}(t) = S_d^N \mathbf{\hat{d}}(t)\).

Defining \(e(t) = -y^{\mathrm{f}}(t) + C \hat{x}(t) + \overline{C}\hat{d}_0(t)\) and focusing on the bottom row of 6 we obtain \[\begin{align} &\mathbf{\hat{d}}(t+N) = S_d^N \mathbf{\hat{d}}(t) + \sum_{j=0}^{N-1} S_d^{N-1 - j} L_d e(t+j)\\ &\iff 0= \begin{bmatrix} S_d^{N-1} L_d & \cdots & S_dL_d & L_d \end{bmatrix} \begin{bmatrix} e(t)\\ \vdots \\ e(t+N-1) \end{bmatrix}. \end{align}\] The \(N n_y\times N n_y\) matrix above is the controllability matrix for \((S_d, L_d)\), which has full rank since the observer 6 is stable – see Prop. 3 in the appendix. Inverting, we have \[\begin{align} 0 &= e(i) = -y^{\mathrm{f}}(i) + C \hat{x}(i) + \overline{C}\hat{d}(i), \; \forall i \in\mathbb{Z}_{[t, t+N-1]}. \label{eq:zero95err} \end{align}\tag{7}\] Substituting 7 into the top row of 6 , we obtain \[\begin{align} \label{eq:per95state} \hat{x}(i+1) = A \hat{x}(i) + \overline{B}\hat{d}(i) + B u(i). \end{align}\tag{8}\] Combining 7 and 8 leads to ?? .

Prop. 2 generalizes the theoretical results in [10] and allows us to provide convergence guarantees for the MPC in the next section.

4 Periodic Model Predictive Control (\(\Pi\)-MPC)

We now present an MPC scheme that leverages the observer 6 to asymptotically achieve zero tracking error for periodic references. The observer provides estimates \(\mathbf{\hat{d}}\) that are used to compute targets \(\mathbf{\bar{x}}\), \(\mathbf{\bar{u}}\) 9 for the state and input, respectively. The MPC problem 11 is formulated to minimize the deviation from these targets and ensure that the estimated controlled variable converges to the reference.

4.1 Target computation

We compute the state and input targets \[\begin{align} \mathbf{\bar{x}}(t) &= \begin{bmatrix} \bar{x}_{0}(t)^\top & \bar{x}_{1}(t)^\top & \cdots & \bar{x}_{N-1}(t)^\top \end{bmatrix}^\top, \\ \mathbf{\bar{u}}(t) &= \begin{bmatrix} \bar{u}_{0}(t)^\top & \bar{u}_{1}(t)^\top & \cdots & \bar{u}_{N-1}(t)^\top \end{bmatrix}^\top, \end{align}\] at time \(t\) using\[\label{eq:ref95sel} \begin{bmatrix} A_N - S_x& B_N \\ H_N C_N & 0 \end{bmatrix} \begin{bmatrix} \mathbf{\bar{x}}(t) \\ \mathbf{\bar{u}}(t) \end{bmatrix} = \begin{bmatrix} - {\overline{B}}_N \mathbf{\hat{d}}(t)\\ \mathbf{r}(t) - H_N {\overline{C}}_N \mathbf{\hat{d}}(t) \end{bmatrix},\tag{9}\] where \(H_N = I_N \otimes H\). The targets correspond to the trajectory \(\mathbf{\bar{x}}, \mathbf{\bar{u}}\) that achieves reference tracking for a given disturbance estimate \(\mathbf{\hat{d}}(t)\), analogous to ?? in Prop. 2.

Consequently, we assume that \[\label{eq:wellposed} \operatorname{rank} \begin{bmatrix} A - \lambda I_{n_x} & B \\ HC & 0 \end{bmatrix}= n_x + n_r, \quad \forall \lambda \in \sigma(S_d).\tag{10}\] This condition6 ensures that the target computation 9 is feasible for any disturbance estimate \(\mathbf{\hat{d}}\) and any reference \(\mathbf{r}\), see [21]. Condition 10 requires the transmission zeros from \(u\) to \(z\) to be distinct from \(\sigma(S_d)\), which holds generically for random matrices and is a necessary condition for tracking and disturbance rejection for the LTI system  [24].
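
In practice, 9 is one large (sparse) linear system in the stacked targets. A minimal dense Python sketch is given below, using a minimum-norm least-squares solve as mentioned in footnote 6; the function name and interface are ours, and the matrices are those defined above.

```python
# Minimal sketch of the target computation (9); d_hat is the stacked disturbance estimate (3)
# and r_stack the stacked N-step reference, both aligned with the current time t.
import numpy as np

def compute_targets(d_hat, r_stack, A, B, C, H, B_bar, C_bar, N):
    n_x, n_u = A.shape[0], B.shape[1]
    I_N = np.eye(N)
    S = np.roll(np.eye(N), 1, axis=1)
    A_N, B_N, C_N = np.kron(I_N, A), np.kron(I_N, B), np.kron(I_N, C)
    H_N, Bbar_N, Cbar_N = np.kron(I_N, H), np.kron(I_N, B_bar), np.kron(I_N, C_bar)
    S_x = np.kron(S, np.eye(n_x))

    lhs = np.block([[A_N - S_x, B_N],
                    [H_N @ C_N, np.zeros((H_N.shape[0], B_N.shape[1]))]])
    rhs = np.concatenate([-Bbar_N @ d_hat, r_stack - H_N @ Cbar_N @ d_hat])
    sol = np.linalg.lstsq(lhs, rhs, rcond=None)[0]     # minimum-norm solution if under-determined
    x_bar = sol[: N * n_x].reshape(N, n_x)
    u_bar = sol[N * n_x:].reshape(N, n_u)
    return x_bar, u_bar
```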

Having to compute the targets \(\mathbf{\bar{x}}(t), \mathbf{\bar{u}}(t)\) at each time step may be computationally expensive or undesirable in practice. This limitation can be addressed using the alternative \(\Pi\)-MPC formulation 17 discussed in Sec. 5.

4.2 \(\Pi\)-MPC formulation

We now formulate the MPC with horizon length \(L\) as \[\tag{11} \begin{align} \min_{u_0, \ldots, u_{L-1}} \,& \| x_L - \bar{x}_{L}(t) \|_P^2\\ &+ \sum_{k=0}^{L-1} \| x_{k} - \bar{x}_{k}(t) \|_Q^2 + \| u_{k} - \bar{u}_{k}(t) \|_R^2 \nonumber\\ \text{s.t.} \quad & x_{k+1} = A x_{k} + B u_{k} + \overline{B}\hat{d}_k (t), \quad k \in\mathbb{Z}_{[0,L-1]}, \nonumber\\ & x_0 = \hat{x}(t), \nonumber\\ & x_{k} \in \mathcal{X}, \quad k \in\mathbb{Z}_{[1,L]}, \tag{12}\\ & u_{k} \in \mathcal{U}, \quad k \in\mathbb{Z}_{[0,L-1]}, \tag{13} \end{align}\] where \(Q \succeq 0\), \(R \succ 0\), \((A, Q)\) is detectable, the terminal cost \(P\) is chosen using a linear quadratic regulator (LQR), and we assume \(L<N\) for simplicity. At each time \(t\), the observer provides the state estimate \(\hat{x}(t)\) and the estimates \(\hat{d}_k (t)\) for the expected disturbance \(k\) time steps ahead. We denote the optimal solution to 11 with a star (\(^\star\)).
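
For illustration, a minimal CVXPY sketch of 11 is given below, with simple box constraints standing in for the generic sets \(\mathcal{X}\), \(\mathcal{U}\); the function name is ours, and the arrays x_bar, u_bar, and d_seq hold the \(L+1\) state targets, \(L\) input targets, and \(L\) disturbance predictions aligned with the current time step.

```python
# Minimal sketch of the tracking QP (11); box constraints replace the generic sets X and U.
import cvxpy as cp

def pi_mpc_step(A, B, B_bar, x_hat, d_seq, x_bar, u_bar, Q, R, P, L, x_max, u_max):
    n_x, n_u = B.shape
    x = cp.Variable((L + 1, n_x))
    u = cp.Variable((L, n_u))

    cost = cp.quad_form(x[L] - x_bar[L], P)                 # terminal cost
    constraints = [x[0] == x_hat]                           # initial condition from the observer
    for k in range(L):
        cost += cp.quad_form(x[k] - x_bar[k], Q) + cp.quad_form(u[k] - u_bar[k], R)
        constraints += [x[k + 1] == A @ x[k] + B @ u[k] + B_bar @ d_seq[k]]
        constraints += [cp.abs(x[k + 1]) <= x_max, cp.abs(u[k]) <= u_max]

    cp.Problem(cp.Minimize(cost), constraints).solve()
    return u.value[0]                                       # apply only the first input
```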

The following algorithm recaps the offline design of the periodic MPC scheme (\(\Pi\)-MPC), which includes the disturbance model, the observer, and the LQR design.

Figure 1: \(\Pi\)-MPC (Offline Design)

The following algorithm summarizes the closed-loop control of 1 with the observer 6 and MPC 9 , 11 .

Figure 2: \(\Pi\)-MPC (Online Operation)
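
A minimal closed-loop sketch of this online loop is given below; it reuses the helper functions from the previous snippets and replaces the unknown system 1 with a toy plant (the nominal model plus an unmodeled nonlinearity) purely for illustration. None of the numbers are taken from the experiments in Sec. 6.

```python
# Minimal closed-loop sketch of Alg. 2, reusing the snippets above; the 'true' plant is a toy stand-in.
import numpy as np
from scipy.linalg import solve_discrete_are

L, T_sim = 10, 5 * N
H = np.eye(n_y)                                    # track all measured outputs (n_r = n_y)
Q, R = np.eye(n_x), 0.1 * np.eye(n_u)
P = solve_discrete_are(A, B, Q, R)                 # LQR terminal cost (Alg. 1)
P = 0.5 * (P + P.T)
x_max, u_max = 10.0, 5.0
r_traj = 0.5 * np.column_stack([np.sin(2 * np.pi * np.arange(N) / N),
                                np.cos(2 * np.pi * np.arange(N) / N)])   # N x n_r periodic reference

def plant_step(x_f, u):
    # Stand-in for the unknown system (1): nominal model plus an unmodeled nonlinearity.
    return A @ x_f + B @ u + 0.05 * np.sin(x_f)

x_f, x_hat, d_hat = np.zeros(n_x), np.zeros(n_x), np.zeros(n_y * N)
for t in range(T_sim):
    # 1) targets and disturbance predictions aligned with the current phase of the period
    r_stack = np.concatenate([r_traj[(t + k) % N] for k in range(N)])
    x_bar, u_bar = compute_targets(d_hat, r_stack, A, B, C, H, B_bar, C_bar, N)
    d_seq = d_hat.reshape(N, n_y)[:L]

    # 2) solve the tracking QP (11) and apply the first input to the plant
    u = pi_mpc_step(A, B, B_bar, x_hat, d_seq, x_bar[:L + 1], u_bar[:L], Q, R, P, L, x_max, u_max)
    x_f = plant_step(x_f, u)

    # 3) measure and update the periodic disturbance observer (6)
    x_hat, d_hat = observer_step(x_hat, d_hat, u, C @ x_f)
```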

The considered problem could also be addressed with the disturbance observer design from [21], which decomposes general linear reference and disturbance signals into eigenmodes. Naively applying such an eigenmode decomposition to periodic problems fails to exploit their sparse structure, complicating the observer design and the MPC target calculations, whose complexity then scales with the period length. In contrast, the proposed periodic disturbance observer parametrizes the disturbances sequentially in time, yielding a simple and efficient MPC implementation.

4.3 Convergence analysis

The following theorem provides the main theoretical result of this paper, showing that the closed-loop system resulting from Alg. 2 asymptotically achieves zero tracking error.

Theorem 1. Assume that the MPC problem 11 is feasible for all \(t \in \mathbb{Z}_{\geq 0}\), the constraints 12–13 are inactive after some time \(t \geq j\) with \(j \in \mathbb{Z}_{\geq 0}\), and the closed-loop system in Alg. 2 converges to a periodic trajectory denoted by \(\mathbf{u}(t), \mathbf{y^{\mathrm{f}}}(t)\). Then zero tracking error is achieved asymptotically, i.e., \(\|z(t) - r(t)\| \to 0\) as \(t\rightarrow \infty\).

The following proof extends the arguments in [10] to the periodic reference tracking case.

As \(t \to \infty\), Prop. 2 implies that the \(N\)-periodic sequences \(\mathbf{u}\), \(\mathbf{y^{\mathrm{f}}}\), \(\mathbf{\hat{d}}\), and \(\mathbf{\hat{x}}\) satisfy ?? . By definition, the targets \(\mathbf{\bar{x}}\) and \(\mathbf{\bar{u}}\) satisfy 9 . Left multiplying the second row of ?? by \(H_N\), and then subtracting 9 from ?? we obtain \[\begin{align} \label{eq:errorSSR} \begin{bmatrix} A_N - S_x& B_N \\ H_N C_N & 0 \end{bmatrix} \begin{bmatrix} \mathbf{\hat{x}}(t) - \mathbf{\bar{x}}(t) \\ \mathbf{u}(t) - \mathbf{\bar{u}}(t) \end{bmatrix} &= \begin{bmatrix} 0 \\ H_N \mathbf{y^{\mathrm{f}}}(t) - \mathbf{r}(t) \end{bmatrix}. \end{align}\tag{14}\]

Consider a change of variables in the MPC problem 11 to \(\delta x_k = x_k - \bar{x}_k(t)\) and \(\delta u_k = u_k - \bar{u}_k(t)\). Since the constraints are inactive for \(t \geq j\), the MPC 11 is equivalent to \[\begin{align} \min_{\delta u_0, \ldots, \delta u_{L-1}} \, & \| \delta x_L \|_P^2 + \sum_{k=0}^{L-1} \| \delta x_{k} \|_Q^2 + \| \delta u_{k} \|_R^2 \nonumber\\ \text{s.t.} \quad & \delta x_{k+1} = A \delta x_{k} + B \delta u_{k}, \quad k \in\mathbb{Z}_{[0,L-1]}, \nonumber\\ & \delta x_0 = \hat{x}(t) - \bar{x}_{0}(t). \end{align}\]

Since the terminal cost \(P\) is chosen based on the LQR, the optimal input is given by the unconstrained optimal LQR, i.e., \(\delta u_0^\star(t) = K \delta x(t) = K(\hat{x}(t) - \bar{x}_{0}(t))\). Furthermore, given \((A,B)\) stabilizable and \((A,Q)\) detectable, \(A+BK\) is (Schur) stable. Thus, as \(t\to \infty\), the top row of 14 implies \[\begin{align} \left( (A + B K)_N - S_x\right) \left( \mathbf{\hat{x}}(t) - \mathbf{\bar{x}}(t) \right) &= 0, \end{align}\] where the left matrix is invertible by stability of \(A + B K\). Finally, since \(\mathbf{\hat{x}}(t) - \mathbf{\bar{x}}(t) = 0\), the bottom row of 14 yields \[\lim_{t\rightarrow\infty}\|Hy(t)-r(t)\|=\lim_{t\rightarrow\infty}\|z(t)-r(t)\|=0. \qedhere\]

Figure 3: Left: Picture of the Diamond robot mesh. The reference figure-eight trajectory for the tip is shown with a dashed black line. The red arrows indicate the actuator inputs, i.e., applied forces at the elbows. Right: Simulation results illustrate tracking performance over ten periods for a high-frequency, 2D figure-eight trajectory. We compare a standard MPC scheme (left), offset-free MPC (center), and our proposed MPC with a periodic disturbance observer (\(\Pi\)-MPC) (right), all shown in blue. The bottom plots show the tracking error over time. The shading indicates time progression, with lighter shades representing earlier portions of the trajectory. The dashed black line represents the reference trajectory.

5 Implementation and convergence for nonlinear MPC

In the following, we discuss how to generalize the presented design and analysis to more practical nonlinear MPC problems. Specifically, we discuss convergence guarantees with general nonlinear disturbance observers (Sec. 5.1), a nonlinear MPC implementation which does not require the explicit computation of the targets \(\mathbf{\bar{x}}\), \(\mathbf{\bar{u}}\) (Sec. 5.2), and simple designs in case of state measurement (Sec. 5.3).

5.1 Nonlinear disturbance observer - convergence analysis

The augmented linear model 4 is generalized to a nonlinear model \[\begin{align} \label{eq:NDO} \begin{aligned} \begin{bmatrix} x(t+1) \\ \mathbf{d}(t+1) \end{bmatrix} &= \begin{bmatrix} f(x(t),u(t),S_{\mathrm{sel}}\mathbf{d}(t))\\ S_d\mathbf{d}(t) \end{bmatrix},\\ y(t) &= h(x(t),S_{\mathrm{sel}}\mathbf{d}(t)). \end{aligned} \end{align}\tag{15}\] The observer 6 generalizes to \[\begin{align} \label{eq:estimator95nonlinear} \begin{aligned} \begin{bmatrix} \hat{x}(t+1)\\ \mathbf{\hat{d}}(t+1) \end{bmatrix} =& \begin{bmatrix} f(\hat{x}(t),u(t),S_{\mathrm{sel}}\mathbf{\hat{d}}(t))\\ S_d\mathbf{\hat{d}}(t) \end{bmatrix}\\ &+ \ell( y^{\mathrm{f}}(t), \hat{x}(t),S_{\mathrm{sel}}\mathbf{\hat{d}}(t) ). \end{aligned} \end{align}\tag{16}\] The analysis of this nonlinear periodic disturbance model is based on a combination of the arguments for nonlinear disturbance observers in [25] and the proposed periodic disturbance observer (Sec. 3–4). Similar to Prop. 2, if a stable observer is designed and the input and output trajectories converge to a periodic trajectory, then the converged estimates satisfy \(y^{\mathrm{f}}(t)=h(\hat{x}(t),\hat{d}_0(t))\) (cf. [25]). Asymptotic tracking of the reference is then guaranteed if the MPC design ensures \(\lim_{t\rightarrow\infty}\|H h(\hat{x}(t),\hat{d}_0(t))-r(t)\|=0\) whenever the prediction model is exact.

5.2 Nonlinear MPC design

Application of the MPC scheme (Alg. 2) with the nonlinear model 15 is complicated by two factors: (i) computing a periodic target \(\mathbf{\bar{x}},\mathbf{\bar{u}}\) 9 is computationally expensive; (ii) relating the MPC scheme to an LQR or designing a suitable terminal penalty \(P\) becomes non-trivial. Instead, the following MPC formulation from [26] provides a simple solution: \[\begin{align} \label{eq:MPC95alternative95nonlinear} \begin{aligned} \min_{u_0, \ldots, u_{L-1}} \quad & \sum_{k=0}^{L-1} \| z_{k} - r_{k} \|_{Q_z}^2 + \| u_{k} - u_{k-N} \|_R^2 \\ \text{s.t.} \quad & x_{k+1} = f(x_{k},u_{k},\hat{d}_k (t)), \quad x_0 = \hat{x} (t), \\ & y_{k} = h(x_k,\hat{d}_k (t)),\\ & z_{k} = H y_k, \quad k \in\mathbb{Z}_{[0,L-1]},\\ & x_{k} \in \mathcal{X}, \quad k \in\mathbb{Z}_{[1,L]}, \\ & u_{k} \in \mathcal{U}, \quad k \in\mathbb{Z}_{[0,L-1]}, \end{aligned} \end{align}\tag{17}\] with \(Q_z\) positive definite. This formulation directly minimizes the error with respect to the reference and regularizes the input by penalizing non-periodicity. Under suitable technical conditions (involving stabilizability, detectability, and non-resonance), this MPC scheme satisfies the desired tracking properties, i.e., the reference is asymptotically tracked when the prediction model is exact if the horizon \(L\) is chosen sufficiently large, see [26] for details.
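
As an illustration, a compact CasADi sketch of 17 is given below; the discrete-time model, horizon, weights, and bounds are placeholders we introduce (a double integrator with an additive disturbance as in Sec. 5.3), not the models used in Sec. 6.

```python
# Minimal sketch of the nonlinear formulation (17) using CasADi's Opti stack; all numbers are placeholders.
import casadi as ca
import numpy as np

n_x, n_u, n_z, L, dt = 4, 2, 2, 20, 0.04

def f_disc(x, u):
    # placeholder prediction model: two decoupled double integrators
    return ca.vertcat(x[0] + dt * x[2], x[1] + dt * x[3], x[2] + dt * u[0], x[3] + dt * u[1])

opti = ca.Opti()
X = opti.variable(n_x, L + 1)
U = opti.variable(n_u, L)
x0 = opti.parameter(n_x)              # current state estimate \hat{x}(t)
D = opti.parameter(n_x, L)            # disturbance predictions \hat{d}_k(t) (additive, Sec. 5.3)
Rref = opti.parameter(n_z, L)         # reference for the controlled variables z
Uprev = opti.parameter(n_u, L)        # inputs applied one period ago, for the ||u_k - u_{k-N}|| term

cost = 0
opti.subject_to(X[:, 0] == x0)
for k in range(L):
    opti.subject_to(X[:, k + 1] == f_disc(X[:, k], U[:, k]) + D[:, k])
    cost += ca.sumsqr(X[:n_z, k] - Rref[:, k]) + 0.1 * ca.sumsqr(U[:, k] - Uprev[:, k])
    opti.subject_to(opti.bounded(-1.0, U[:, k], 1.0))      # input constraints U
opti.minimize(cost)
opti.solver("ipopt")

# at run time: set the parameters from the observer and solve
opti.set_value(x0, np.zeros(n_x))
opti.set_value(D, np.zeros((n_x, L)))
opti.set_value(Rref, np.zeros((n_z, L)))
opti.set_value(Uprev, np.zeros((n_u, L)))
u_apply = opti.solve().value(U[:, 0])                       # first input of the optimal sequence
```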

5.3 Nonlinear observer design

The design of nonlinear observers with guaranteed stability is generally challenging, but promising results can often be obtained using a simple extended Kalman filter (EKF). In the special case of full state measurements \(y(t)=x(t)\), a simple linear observer can be designed for an additive disturbance model \(x(t+1)=f(x(t),u(t))+S_{\mathrm{sel}}\mathbf{d}(t)\). The disturbance observer is given by \[\begin{align} \label{eq:nonlinear95observer95simple} \begin{aligned} &\mathbf{\hat{d}}(t+1)=S_d\mathbf{\hat{d}}(t)\\ &+S_{\mathrm{sel}}^\top L_d(f(x(t),u(t))+ \hat{d}_0(t) -x(t+1)), \end{aligned} \end{align}\tag{18}\] which essentially corresponds to an update of \(\hat{d}_0(t)\) based on the difference between the prediction and the new measured state. Here, the matrix \(L_d\) should be chosen such that \(I+L_d\) is Schur stable, e.g., \(L_d=-\lambda I_{n_y}\), \(\lambda\in(0,1)\).
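
In Python, this update is only a few lines. The sketch below assumes the additive state disturbance described above, with f_nom denoting whatever nominal discrete-time model is used (e.g., a discretized bicycle model as in Sec. 6.2) and the scalar gain choice \(L_d=-\lambda I\); the function name is ours.

```python
# Minimal sketch of the state-measurement disturbance observer (18) with L_d = -lam * I.
import numpy as np

def observer18_step(d_hat, x_prev, u_prev, x_now, f_nom, lam=0.2):
    """Shift the lifted disturbance estimate and correct it with the one-step prediction error."""
    n_x = x_now.shape[0]
    N = d_hat.shape[0] // n_x
    S_d = np.kron(np.roll(np.eye(N), 1, axis=1), np.eye(n_x))    # cyclic shift on the lifted vector
    pred_err = f_nom(x_prev, u_prev) + d_hat[:n_x] - x_now       # prediction error at the last step
    d_next = S_d @ d_hat
    d_next[:n_x] += -lam * pred_err                              # I + L_d Schur stable for lam in (0, 1)
    return d_next
```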

Overall, the design from Sec. 3–4 is naturally extended to nonlinear MPC while maintaining the simplicity and theoretical guarantees for asymptotically perfect tracking.

6 Experiments

We demonstrate our approach’s broad applicability on two challenging robotic systems with significant model mismatch, leveraging the MPC outlined in Sec. 5.2. First, we validate the \(\Pi\)-MPC scheme in simulation on an underactuated soft robot with nearly 10,000 degrees of freedom, where we use a simple linear 6-dimensional prediction model. Next, we apply our method to a real-world miniature race car and show its ability to achieve near-perfect tracking of a given reference with a simple kinematic bicycle model.

Figure 4: Experimental results illustrate tracking with a race car. We compare a standard MPC scheme and the same MPC with the proposed periodic disturbance observer (\(\Pi\)-MPC), both shown in blue. The shading indicates time progression, with lighter shades representing earlier portions of the trajectory. The dashed black line represents the reference trajectory.

6.1 Soft robot finite element simulation

We now apply our approach in simulation to the ‘Diamond’ soft robot (shown in Fig. 3). We compare:

  1. an MPC scheme using only the nominal model

  2. the same MPC equipped with a constant disturbance observer (Offset-free MPC [8]–[11])

  3. the same MPC with the proposed periodic disturbance observer (\(\Pi\)-MPC).

We demonstrate that \(\Pi\)-MPC asymptotically eliminates tracking error, achieving perfect tracking on a challenging, high-frequency periodic control task.

We conduct simulations through the SOFA finite-element-based physics simulator [27]. The mesh used to represent the Diamond robot is available in the SoftRobots plugin [28]. The Diamond robot features four actuators, shown in red in Fig. 3, each pulling at an elbow. The robot’s physical parameters match those reported in [29], with Young’s modulus of \(E = 175\) MPa, Poisson ratio of \(\nu = 0.45\), and Rayleigh damping parameters \(\alpha = 2.5\) and \(\beta = 0.01\), where the damping matrix is defined as \(C_{\mathrm{damp.}} = \alpha \mathbf{M} + \beta \mathbf{K}\). The output measurement, \(y^\text{f}\), is the \(xyz\) position of the robot’s tip (see Fig. 3) at time \(k\) and at time \(k-1\). A continuous-time, 6-dimensional linear model is learned using the framework outlined in [29] by specifying a first-order polynomial basis.

To build the MPC, we use a time-discretization of the model of \(dt = 0.01\) s and an MPC prediction horizon of \(L = 3\). Since the learned model’s \(A\) matrix has no eigenvalues in common with \(S_d\), we choose a disturbance model with \(\overline{B}=0\), \(\overline{C}=I\), see Remark 1. Finally, the observer gains in 6 are calculated using a Kalman filter, with resulting magnitudes of closed-loop eigenvalues between \(0.68\) and \(0.98\).

The control task has the robot’s end effector tracing a figure-eight along the \(xy\)–plane, with freedom in the \(z\)-direction. The figure-eight trajectory has an amplitude of \(35\) mm and a frequency of \(2\) Hz, corresponding to a period length of \(N = 50\). We note that no random noise is added to the simulation. Fig. 3 shows the superior tracking performance of our method compared to the baselines over ten periods. The average tracking error in the last period is \(32.66\) mm for standard MPC, \(18.59\) mm for Offset-free MPC, and \(0.25\) mm for the proposed \(\Pi\)-MPC. The standard MPC scheme exhibits poor closed-loop tracking performance and fails to track the desired high-frequency trajectory, as the learned linear model does not give accurate predictions of the full nonlinear FEM system. The offset-free disturbance observer improves performance but still exhibits large tracking errors as it tries to estimate a constant disturbance, despite the model mismatch being time-varying. Instead, our approach properly considers the disturbances at each point throughout the trajectory. As expected from the presented theory, \(\Pi\)-MPC ensures that the tracking error decays to zero asymptotically despite significant model discrepancy. In fact, after 50 periods, the error reduces below \(1 \times 10^{-2}\) mm.

6.2 Miniature race car experiments

In the following, we showcase the practicality of the proposed approach in realistic conditions through hardware experiments – see Fig. [fig:titlefig]B. The experiments were conducted on a miniature RC car (scale 1:28) in combination with the CRS software framework; for details on the overall code framework and the involved hardware see [30]. The MPC uses a simple kinematic bicycle model for the car [31]: \[\begin{align} \dot{p}_{\mathrm{x}}=&v\cos(\psi+\beta),\quad \dot{p}_{\mathrm{y}}=v\sin(\psi+\beta)\\ \dot{\psi}=&v/l_{\mathrm{r}} \sin(\beta),\quad \dot{v}=a,\quad \beta=\arctan\left(\frac{l_{\mathrm{r}}}{l_{\mathrm{r}}+l_{\mathrm{f}}} \tan(\delta)\right)\\ x=&[p_{\mathrm{x}},p_{\mathrm{y}},\psi,v]^\top,\quad u=[\delta,a]^\top,\quad z=[p_{\mathrm{x}},p_{\mathrm{y}}]^\top, \end{align}\] where \(p_{\mathrm{x}/\mathrm{y}}\) are the positions, \(\psi\) is the heading angle, \(v\) the velocity, \(\beta\) the slip angle, \(\delta\) the steering angle, and \(a\) the acceleration. The state \(x\) is measured using a Qualisys motion capture system. As discussed in Sec. 5, we use the observer 18 , and the design consists of choosing a gain matrix \(L_d\in\mathbb{R}^{4\times 4}\) that determines convergence speed. Experiments are conducted with \(L_d = -\mathrm{diag}(0.1,0.1,0.2,0.2)\). The periodic reference is chosen as a physically feasible trajectory on the racetrack based on past experiments. When solving 17 , we penalize non-periodicity of \(du/dt\) instead of \(u\) to yield smoother operation. The MPC problem 17 is solved online using acados [32]. The overall implementation considers a prediction horizon of \(L=40\), a period length of \(N=231\), and a sampling period of \(40\) ms.
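
For reference, a minimal Python sketch of this kinematic bicycle model and a standard RK4 discretization is given below; the geometric parameters are placeholders rather than the actual car's values, and such a discrete model would play the role of the nominal dynamics in 17 and 18 .

```python
# Minimal sketch: kinematic bicycle model and RK4 discretization; l_f, l_r are placeholder values.
import numpy as np

l_f, l_r, dt = 0.05, 0.05, 0.04      # front/rear lengths [m] (placeholders) and 40 ms sampling period

def bicycle_ct(x, u):
    p_x, p_y, psi, v = x
    delta, a = u
    beta = np.arctan(l_r / (l_r + l_f) * np.tan(delta))     # slip angle
    return np.array([v * np.cos(psi + beta),
                     v * np.sin(psi + beta),
                     v / l_r * np.sin(beta),
                     a])

def bicycle_rk4(x, u):
    # classical fourth-order Runge-Kutta step with step size dt
    k1 = bicycle_ct(x, u)
    k2 = bicycle_ct(x + 0.5 * dt * k1, u)
    k3 = bicycle_ct(x + 0.5 * dt * k2, u)
    k4 = bicycle_ct(x + dt * k3, u)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
```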

In the experiments, we compare a naïve MPC implementation, using only the model, with the proposed \(\Pi\)-MPC approach, which additionally uses the periodic disturbance observer. The experimental results are illustrated in Fig. 4. Both MPC formulations provide identical results in the first lap. After ten laps, the MPC baseline has an average error of \(6.22\) cm, while \(\Pi\)-MPC has an average error of only \(1.26\) cm, five times lower. Over the course of sixteen laps, the proposed formulation reduces the peak tracking error in a lap to \(2.8\) cm while the naïve MPC implementation continues to show peak errors over \(14\) cm. Hence, as seen in Fig. 4, we reduce the tracking error by a factor of five after only a few laps. Although the theory suggests convergence to zero error, the presence of non-deterministic effects, such as noise and delays, may result in small fluctuations.

Overall, the baseline MPC exhibits significant tracking errors and oscillations caused by the model mismatch. In contrast, the proposed formulation achieves almost perfect tracking after a few periods. Notably, the proposed approach has minimal design complexity and is implemented in a modular way in addition to an existing MPC implementation.

7 Conclusion

Our work shows that including a periodic disturbance observer in MPC is a simple and effective method to remove tracking errors for periodic references. Specifically, we have shown that the proposed \(\Pi\)-MPC is

  • easily implemented on top of an existing MPC scheme,

  • characterized with theoretical guarantees, and

  • validated numerically and experimentally to achieve minimal tracking errors, even with significant model-system mismatches.

Appendix

Proof of Prop. 1: This proof extends the results in [10] to periodic disturbances. From the Hautus observability condition [33], the observability of system 4 is equivalent to \[\label{eq:augRank} \operatorname{rank} \begin{bmatrix} A-\lambda I_{n_x} & \overline{B}S_{\mathrm{sel}}\\ 0 & S_d- \lambda I_{n_yN} \\ C & \overline{C}S_{\mathrm{sel}} \end{bmatrix} = n_x + n_yN,\tag{19}\] for all \(\lambda \in \mathbb{C}\).

When \(\lambda \notin \sigma(S_d)\), we have that \(S_d- \lambda I_{n_yN}\) is full rank. Furthermore, since \((A,C)\) is observable, the Hautus condition on 2 implies \(\operatorname{rank} \left( \begin{bmatrix} (A-\lambda I_{n_x})^\top & C^\top \end{bmatrix}^\top \right) = n_x\). Thus, the left and right sides of the matrix contribute \(n_x\) and \(n_yN\) independent columns, respectively, and 19 holds.

When \(\lambda \in \sigma(S_d)\), we have \(\lambda = \lambda_k=e^{i 2 \pi k / N}, \, k\in \mathbb{Z}_{[0, N - 1]}\). Since the geometric multiplicity of each \(\lambda_k\) is \(n_y\), the dimension of the null-space of \(S_d- \lambda_k I_{n_yN}\) is \(n_y\). The rank-nullity theorem implies that \(\operatorname{rank}\left(S_d- \lambda_k I_{n_yN} \right)= n_y(N - 1)\). These \(n_y(N - 1)\) columns are clearly independent from the left side of the matrix and can be removed from the Hautus condition 19 , yielding \[\operatorname{rank} \begin{bmatrix} A-\lambda_k I_{n_x} & \overline{B}S_{\mathrm{sel}}\\ C & \overline{C}S_{\mathrm{sel}} \end{bmatrix} = n_x + n_y.\] Disregarding the additional zero columns introduced by multiplying \(\overline{B}\) and \(\overline{C}\) with \(S_{\mathrm{sel}}\) yields the rank condition ?? . Thus, condition 19 is equivalent to ?? .

Proposition 3. Assume the observer 6 is stable. Then the controllability matrix for the pair \((S_d, L_d)\) is full row rank.

By stability of the observer 6 , we have that \[\begin{align} \det \left( \begin{bmatrix} A - \lambda I_{n_x} + L_x C & (\overline{B}+ L_x \overline{C}) S_{\mathrm{sel}}\\ L_d C & S_d- \lambda I_{n_yN} + L_d \overline{C}S_{\mathrm{sel}} \end{bmatrix} \right) \neq 0, \end{align}\] for all unstable eigenvalues, \(|\lambda| \geq 1\). Hence, for all \(\lambda \in \sigma(S_d)\), the bottom \(n_yN\) rows must be full row rank, i.e., \[\begin{align} n_yN &= \operatorname{rank} \left( \begin{bmatrix} L_d C & S_d- \lambda I_{n_yN} + L_d \overline{C}S_{\mathrm{sel}} \end{bmatrix} \right),\\ &= \operatorname{rank} \left( \begin{bmatrix} L_d & S_d- \lambda I_{n_yN} \end{bmatrix} \begin{bmatrix} C & \overline{C}S_{\mathrm{sel}}\\ 0 & I_{n_yN} \end{bmatrix} \right),\\ &= \operatorname{rank} \left( \begin{bmatrix} L_d & S_d- \lambda I_{n_yN} \end{bmatrix} \right). \end{align}\] The last equality leverages the full row rank of \(C\), ensuring the upper triangular matrix also has full row rank. The claim now follows from the Hautus Lemma [33].

References

[1]
J. Rawlings, D. Mayne, and M. Diehl, Model Predictive Control: Theory, Computation, and Design. Nob Hill Publishing, 2017.
[2]
G. Shi, X. Shi, M. O’Connell, R. Yu, K. Azizzadenesheli, A. Anandkumar, Y. Yue, and S.-J. Chung, “Neural lander: Stable drone landing control using learned dynamics,” in Proc. International Conference on Robotics and Automation (ICRA), pp. 9784–9790, 2019.
[3]
J. Kabzan, L. Hewing, A. Liniger, and M. N. Zeilinger, “Learning-based model predictive control for autonomous racing,” IEEE Robotics and Automation Letters, vol. 4, no. 4, pp. 3363–3370, 2019.
[4]
G. Torrente, E. Kaufmann, P. Föhn, and D. Scaramuzza, “Data-driven MPC for quadrotors,” IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 3769–3776, 2021.
[5]
P. Holmes, R. J. Full, D. Koditschek, and J. Guckenheimer, “The dynamics of legged locomotion: Models, analyses, and challenges,” SIAM Review, vol. 48, no. 2, pp. 207–304, 2006.
[6]
C. Cosner, G. Anwar, and M. Tomizuka, “Plug in repetitive control for industrial robotic manipulators,” in Proc. IEEE International Conference on Robotics and Automation, pp. 1970–1975 vol.3, 1990.
[7]
A. Romero, S. Sun, P. Foehn, and D. Scaramuzza, “Model predictive contouring control for time-optimal quadrotor flight,” IEEE Transactions on Robotics, vol. 38, no. 6, pp. 3340–3356, 2022.
[8]
T. Badgwell and K. Muske, “Disturbance model design for linear model predictive control,” in Proc. American Control Conference, vol. 2, pp. 1621–1626 vol.2, May 2002.
[9]
G. Pannocchia and J. B. Rawlings, “Disturbance models for offset-free model-predictive control,” AIChE Journal, vol. 49, no. 2, pp. 426–437, 2003.
[10]
U. Maeder, F. Borrelli, and M. Morari, “Linear offset-free Model Predictive Control,” Automatica, vol. 45, pp. 2214–2222, Oct. 2009.
[11]
G. Pannocchia, M. Gabiccini, and A. Artoni, “Offset-free MPC explained: Novelties, subtleties, and applications,” IFAC-PapersOnLine, vol. 48, pp. 342–351, Jan. 2015.
[12]
H.-S. Ahn, Y. Chen, and K. L. Moore, “Iterative learning control: Brief survey and categorization,” IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 37, no. 6, pp. 1099–1121, 2007.
[13]
L. Cuiyan, Z. Dongchun, and Z. Xianyi, “A survey of repetitive control,” in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), vol. 2, pp. 1160–1166 vol.2, 2004.
[14]
Y. Wang, F. Gao, and F. J. Doyle, “Survey on iterative learning control, repetitive control, and run-to-run control,” Journal of Process Control, vol. 19, no. 10, pp. 1589–1600, 2009.
[15]
B. A. Francis and W. M. Wonham, “The internal model principle for linear multivariable regulators,” Applied Mathematics and Optimization, vol. 2, pp. 170–194, June 1975.
[16]
J. H. Lee, S. Natarajan, and K. S. Lee, “A model-based predictive control approach to repetitive control of continuous processes with periodic operations,” Journal of Process Control, vol. 11, no. 2, pp. 195–207, 2001.
[17]
R. Cao and K.-S. Low, “A Repetitive Model Predictive Control Approach for Precision Tracking of a Linear Motion System,” IEEE Transactions on Industrial Electronics, vol. 56, pp. 1955–1962, June 2009.
[18]
M. Li, T. Yan, C. Mao, L. Wen, X. Zhang, and T. Huang, “Performance-enhanced iterative learning control using a model-free disturbance observer,” IET Control Theory & Applications, vol. 15, no. 7, pp. 978–988, 2021.
[19]
A. Carron, E. Arcari, M. Wermelinger, L. Hewing, M. Hutter, and M. N. Zeilinger, “Data-driven model predictive control for trajectory tracking with a robotic arm,” IEEE Robotics and Automation Letters, vol. 4, no. 4, pp. 3758–3765, 2019.
[20]
J. Chen, Y. Dang, and J. Han, “Offset-free model predictive control of a soft manipulator using the Koopman operator,” Mechatronics, vol. 86, p. 102871, 2022.
[21]
U. Maeder and M. Morari, “Offset-free reference tracking with model predictive control,” Automatica, vol. 46, pp. 1469–1476, Sept. 2010.
[22]
V. Mirasierra and D. Limon, “Modifier-adaptation for real-time optimal periodic operation,” arXiv preprint arXiv:2309.09680, 2023.
[23]
S. W. Haddleton, “Steady state performance of discrete linear time-invariant systems,” Master’s thesis, Rochester Institute of Technology, 1994.
[24]
E. Davison, “The robust control of a servomechanism problem for linear time-invariant multivariable systems,” IEEE Transactions on Automatic Control, vol. 21, no. 1, pp. 25–34, 1976.
[25]
M. Morari and U. Maeder, “Nonlinear offset-free model predictive control,” Automatica, vol. 48, pp. 2059–2067, Sept. 2012.
[26]
J. Köhler, M. A. Müller, and F. Allgöwer, “Constrained nonlinear output regulation using model predictive control – extended version,” IEEE Transactions on Automatic Control, vol. 67, pp. 2419–2434, May 2022.
[27]
J. Allard, S. Cotin, F. Faure, P.-J. Bensoussan, F. Poyer, C. Duriez, H. Delingette, and L. Grisoni, “SOFA - an open source framework for medical simulation,” in MMVR 15-Medicine Meets Virtual Reality, vol. 125, pp. 13–18, IOS Press, 2007.
[28]
E. Coevoet, T. Morales-Bieze, F. Largilliere, Z. Zhang, M. Thieffry, M. Sanz-Lopez, B. Carrez, D. Marchal, O. Goury, J. Dequidt, et al., “Software toolkit for modeling, simulation, and control of soft robots,” Advanced Robotics, vol. 31, no. 22, pp. 1208–1224, 2017.
[29]
J. I. Alora, M. Cenedese, E. Schmerling, G. Haller, and M. Pavone, “Data-driven spectral submanifold reduction for nonlinear optimal control of high-dimensional robots,” in Proc. IEEE International Conference on Robotics and Automation (ICRA), pp. 2627–2633, IEEE, 2023.
[30]
A. Carron, S. Bodmer, L. Vogel, R. Zurbrügg, D. Helm, R. Rickenbach, S. Muntwiler, J. Sieber, and M. N. Zeilinger, “Chronos and CRS: Design of a miniature car-like robot and a software framework for single and multi-agent robotics and control,” in Proc. IEEE International Conference on Robotics and Automation (ICRA), pp. 1371–1378, IEEE, 2023.
[31]
R. Rajamani, Vehicle dynamics and control. Springer Science & Business Media, 2011.
[32]
R. Verschueren, G. Frison, D. Kouzoupis, J. Frey, N. v. Duijkeren, A. Zanelli, B. Novoselnik, T. Albin, R. Quirynen, and M. Diehl, “acados—a modular open-source framework for fast embedded optimal control,” Mathematical Programming Computation, vol. 14, no. 1, pp. 147–183, 2022.
[33]
E. D. Sontag, Mathematical Control Theory: Deterministic Finite Dimensional Systems. Texts in Applied Mathematics, New York: Springer, second ed., 1998.

  1. \(^{1}\)Department of Aeronautics and Astronautics, Stanford University, Stanford, CA (e-mail: {lpabon, jjalora, pavone}@stanford.edu).

  2. \(^{2}\)Institute for Dynamic Systems and Control, ETH Zürich, Zürich CH-8092, Switzerland (e-mail: {jkoehle, peberhard, carrona, mzeilinger}@ethz.ch).

  3. Johannes Köhler was supported by the Swiss National Science Foundation under NCCR Automation (grant agreement 51NF40 180545).

  4. Disjoint spectra \(\sigma(A) \cap \sigma(S_d) = \emptyset\) are expected in general for random matrices \(A\). Otherwise, \(\overline{B}\) should be chosen such that \(A-\overline{B}C\) has eigenvalues distinct from those of \(S_d\), e.g., using pole placement.

  5. This behavior is expected in cases where the MPC yields bounded closed-loop trajectories (see Sec. 6).

  6. Condition 10 implies \(n_r \leq n_u\). When \(n_r < n_u\), 9 is under-determined and we use the minimal norm solution.