PINN-based viscosity solution of HJB equation


Abstract

This paper proposes a novel PINN-based viscosity solution of Hamilton-Jacobi-Bellman (HJB) equations for both infinite-horizon and finite-horizon optimal control problems. Although PINN-based work on solving HJB equations exists, to the best of our knowledge, none of it gives the solution in the viscosity sense. This paper reveals that, by using a (partially) convex neural network, the viscosity solution can be guaranteed. We also propose a new training method, called the strip training method, through which the neural network (NN) converges to the true viscosity solution of the HJB equation regardless of the initial parameters of the NN, and local minima are effectively avoided during training.

1 Introduction↩︎

The Physics-Informed Neural Network (PINN) is a powerful tool for solving Partial Differential Equations (PDEs) and even for identifying missing parameters. Since the concept of the PINN was first proposed by M. Raissi et al. in [1], many PINN-based applications have been created and have demonstrated great power in solving PDEs arising from various practical problems, e.g. fluid mechanics and medical diagnosis [2], [3]. Among all these problems, the one that draws our attention is the solution of the Hamilton-Jacobi (HJ) equation. It is worth noticing that the HJ equation has a deep connection with the canonical equations of classical mechanics. On the one hand, the HJ equation arises from a transformation of the Hamiltonian used to solve the canonical equations of a Hamiltonian system; on the other hand, the canonical equations are the characteristic curves of the HJ equation.

Another important application of the HJ equation originates from optimal control, where the Hamiltonian takes the form of a minimum of a family of linear functions. In this case the HJ equation is called the HJB equation, where the B stands for Bellman, in honor of his contribution to the principle of optimality. With the HJB equation, it is easy to find the optimal cost for a finite- or infinite-horizon optimal control problem. In fact, the difference between the HJB approach and popular reinforcement learning techniques lies in the knowledge of the system or environment model, and in whether the initial condition is known when learning starts. With an accurate model of the system, we can in theory solve for the minimum cost and the corresponding optimal control very easily.

But there is a huge gap between reality and theory: the difficulty of solving the HJB equation with traditional numerical methods such as the Finite Difference (FD) method and the Semi-Lagrangian (SL) method increases exponentially with the state dimension of the system. This problem is called the "curse of dimensionality" [4]. Since a PINN neither discretizes the derivatives (FD method) nor searches step by step through discrete dynamic programming (SL method), it theoretically performs better, especially on high-dimensional systems. Some existing work using PINNs for solving HJB equations can be found in [5]–[7].

Among all this research, a common problem is that none of it considers the solution in the viscosity sense. In fact, there exists more than one locally Lipschitz continuous function that satisfies the HJB equation almost everywhere. The key point is to find the viscosity solution, which is the one that corresponds to the optimal control. Otherwise, if the PINN is trained directly with the loss term from the HJB equation, the neural network will converge to another solution instead of the viscosity one, unless the starting point of training happens to be close to the answer. The existing remedy is to first train the NN towards a "guess" solution, usually based on a quadratic form of the system states. However, this guess method obviously cannot be guaranteed to be correct every time.

In this paper, two PINN-based viscosity solutions for the HJB equation are proposed. One adds an extra term to the loss function; the other directly uses a convex network. Section 2 and Section 3.2 give the motivation of this paper and a basic introduction to the viscosity solution. Section 4 proves the equivalence between the unique solution of the HJB equation and the convexity of the solution. Section 5 provides two methods to guarantee the uniqueness of the solution obtained from the neural network, together with a constructive proof for the convex PINN. Section 6 gives the simulation results of both methods.

2 Motivation↩︎

Before formally introducing the scheme of the viscosity solution of the HJB equation, it is important to explain the motivation for studying the solution in the viscosity sense instead of the classical sense. On the one hand, requiring differentiability at every point of the domain is too strong a condition. In fact, the boundary value problem for a first-order P.D.E. does not admit a global \(C^1\) solution in general. Hence it is of interest to study the solution in a more general sense, i.e., differentiable except at finitely many points. On the other hand, there can be more than one solution even if a smooth solution exists. An example is given below.

\[\begin{align} \dot{x} &= ax+bu \\ V^{*} &= \min_{u \in U} \int_{0}^{\infty} \left( x^2 +u^2\right) dt \end{align}\] where \(x\), \(u\) are the system state and control input in \(\mathbb{R}^{1}\). The corresponding HJB equation is \[H=\min_{u \in U}\left\{ V_x\dot{x}+x^2 +u^2 \right\}=0\]

with the boundary condition \(V(0)=0\). Minimizing over \(u\) (the minimizer is \(u^*=-\frac{b}{2}V_x\)), \(H\) can be rewritten as \[H=-\frac{1}{4} {V_x}^2b^2 + axV_x + x^2=0\]

From the above equation, it can be seen that there are at least two analytical solutions. Indeed, these solutions are

\[\begin{align} V_x &=2\frac{a \pm \sqrt{a^2+b^2}}{b^2}x \\ V(x) &= \frac{a \pm \sqrt{a^2+b^2}}{b^2} x^2 \end{align}\] Both solutions satisfy the boundary condition, which raises a natural question: which one is the true solution corresponding to our optimal control problem? Note that of these two solutions, one is convex and the other is concave. In our example, it is easy to prove that the convex one, \[\begin{align} V(x) &= \frac{a + \sqrt{a^2+b^2}}{b^2} x^2 \end{align}\] is the true solution.
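This can be verified numerically. The sketch below (our illustration, with the hypothetical concrete values \(a=b=1\)) checks that both quadratic candidates satisfy the HJB equation pointwise, yet only the convex one yields a stabilizing optimal feedback \(u^*=-\frac{b}{2}V_x\):

```python
import math

# Hypothetical concrete values for the scalar example x' = a*x + b*u.
a, b = 1.0, 1.0

# Candidate solutions V(x) = c*x^2 come from the roots of -c^2*b^2 + 2*a*c + 1 = 0
# (substitute V_x = 2*c*x into the HJB equation).
c_convex = (a + math.sqrt(a**2 + b**2)) / b**2   # convex candidate  (c > 0)
c_concave = (a - math.sqrt(a**2 + b**2)) / b**2  # concave candidate (c < 0)

def hjb_residual(c, x):
    """H = -0.25*Vx^2*b^2 + a*x*Vx + x^2 with Vx = 2*c*x."""
    vx = 2 * c * x
    return -0.25 * vx**2 * b**2 + a * x * vx + x**2

# Both candidates satisfy the HJB equation pointwise ...
for c in (c_convex, c_concave):
    assert all(abs(hjb_residual(c, x)) < 1e-10 for x in (-2.0, 0.5, 3.0))

# ... but only the convex one gives a stable closed loop x' = (a - b^2*c) x
# under the optimal feedback u* = -0.5*b*Vx = -b*c*x.
print(a - b**2 * c_convex)   # negative -> stable: the true optimal cost
print(a - b**2 * c_concave)  # positive -> unstable: a spurious solution
```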

For a more complex multi-dimensional and nonlinear problem, the solution of the HJB equation can be far more complicated. In fact, for a nonlinear system, the generalized solution, i.e. a locally Lipschitz continuous solution satisfying the HJB equation almost everywhere, is in general neither unique nor stable [8]. The next two sections show that the optimal control problem, in the context of a concave Hamiltonian, leads to a unique semiconvex solution of the equation, just as the example shows.

3 Viscosity solution of HJB↩︎

3.1 HJB for optimal control problem↩︎

Consider an affine system \[\begin{align} \label{affine32system} \dot{x} &= a(x)x+b(x)u, \quad u(t)\in U, \quad t\in \left[0,T\right] \end{align}\tag{1}\] with the cost function

\[\begin{align} \label{cost95function} V(t,x) &= \inf_{u \in U} \int_{t}^{T} \left(x^TQx +u^TRu\right) dt +\psi(x(T))=\inf_{u \in U} \int_{t}^{T} L(x,u) dt +\psi(x(T)) \end{align}\tag{2}\] where \(x \in R^n\) and \(u \in R^m\), \(Q,R\) are positive definite matrices, and the set \(U\) is the set of all admissible controls defined below.

\[\begin{align} U:= \{ u: \mathbb{R} \to \mathbb{R}^m \;\text{measurable}, \;u(t)\in U \;\text{for a.e.}\; t \} \end{align}\]

Given the initial data \(x(s)=y\), consider a control \(u\) and the corresponding trajectory \(x\); then we have the following theorem.

Theorem 1. Given the affine system in eq.@eq:affine32system and the optimal cost function defined in eq.@eq:cost95function , \(V^{*}(t,x)\) satisfies the following HJB equation \[\begin{align} \label{HJB95finite} V_t + \inf_{u \in U} \{ L(x,u) + V_x\dot{x}\} = V_t + H(x,u,V_x) = 0 \end{align}\qquad{(1)}\] where \(H(x,u,V_x)\) is usually called the Hamiltonian.

Proof. First define a function \(\Phi^u(t)\) as \[\begin{align} \Phi^u(t) := \int_s^t L(x(t),u(t))dt+V^*(t,x(t)), \quad t\in \left[ s,T\right] \end{align}\] From the dynamic programming principle \[\begin{align} V^*(t_1,x(t_1)) =\inf_{u \in U} \left\{ \int_{t_1}^{t_2} L(x(t),u(t))dt+V^*(t_2,x(t_2)) \right\} \end{align}\] it is easy to see that

\[\begin{align} \Phi^u(t_1) - \Phi^u(t_2) = V^*(t_1,x(t_1)) -\left[\int_{t_1}^{t_2} L(x(t),u(t))dt+V^*(t_2,x(t_2)) \right]\leq 0 \end{align}\] Thus the function \(\Phi^u(t)\) is non-decreasing, and we have \[\begin{align} \frac{d}{d\tau } \left[\int_{s}^{\tau} L(x(t),u(t))dt+V^*(\tau,x(\tau)) \right] & \geq 0 \\ V^*_t + \{ L(x,u) + V_x\dot{x}\} & \geq 0 \end{align}\] Notice that the control \(u\) is optimal if and only if the cost attains the minimum \(V^*(s,y)\); hence the function \(\Phi^u(t)\) is constant (equal to the optimal cost) if \(u = u^*\), and \(\forall t \in [s,T]\) we have \[\begin{align} \frac{d}{d\tau } \left[\int_{s}^{\tau} L(x(t),u(t))dt+V^*(\tau,x(\tau)) \right] & = 0 \\ V^*_t + \{ L(x,u^*) + V_x(a(x)x+b(x)u^*)\} & = 0 \end{align}\]

From the above equations, it is easy to see that, at any point \((s,y)\) where it is differentiable, the value function satisfies the HJB equation in eq.@eq:HJB95finite  ◻

Remark 1. It is worth mentioning that in some of the literature, e.g. [9], [10], the HJB equation is given in the form \[\begin{align} -\{V_t + \inf_{u \in U} \{ L(x,u) + V_x\dot{x}\}\} = -V_t + \sup_{u \in U} \{ -V_x\dot{x} -L(x,u)\} = 0 \end{align}\] Although this HJB differs from eq.@eq:HJB95finite only by a minus sign in front of the whole equation, the corresponding optimal cost functions (viscosity solutions) are totally different [8], [9]. The reason can be seen clearly from Definition 1 in the next subsection.

It is worth mentioning that the solution of the HJB equation in eq.@eq:HJB95finite not only carries the meaning of the optimal cost in our problem, but can also serve as a Lyapunov function for the infinite-horizon optimization problem, when \(Q\) and \(R\) in the cost function 2 are positive definite and positive semidefinite, respectively. For example, it is easy to see that in the infinite-horizon optimization problem, \(V(x)\) is greater than or equal to 0 and attains its global minimum at \(V(0)=0\). Furthermore, \[\begin{align} V^*_x\dot{x}+L(x,u^*) =0& \\ \frac{dV}{dt} = V^*_x\dot{x} = - L(x,u^*)<0& \end{align}\]

3.2 Viscosity solution↩︎

Definition 1. Given a general nonlinear first order PDE \[\begin{align} \label{HJB} F(x, V(x), DV(x)) & =0 & & \text{ in } {R}^n \times[0, \infty) \\ V & =g & & \text{ on } {R}^n \times\{t=T\} \nonumber \end{align}\qquad{(2)}\] the solution \(V(x,t)\) is called the viscosity solution of eq.@eq:HJB if

\[\begin{align} \label{viscosity} V(x,t):=\left\{ \left.\lim_{\epsilon \to 0}V^{\epsilon}\right| V^{\epsilon}_t+H(V^{\epsilon}_x,x) =\epsilon \triangle V^{\epsilon} \right\} \end{align}\qquad{(3)}\] where \(\triangle\) is the Laplacian operator, and \(\epsilon\) is a small positive number.

Some useful properties of viscosity solution are listed below:

  • Existence. For a nonlinear first order PDE in eq.@eq:HJB , the viscosity solution exists.

  • Uniqueness. For such a problem, there exists at most one viscosity solution.

  • Stability. The viscosity solution is stable in the following sense
    for any disturbance \(d>0\), let \(V^d\) be a viscosity solution of \[\begin{align} \label{HJBd} V^d_t+H(V_x^d, x) & =0 & & \text{ in } {R}^n \times(0, \infty) \\ V^d & =g^d & & \text{ on } {R}^n \times\{t=T\} \nonumber \end{align}\tag{3}\] If the parameters of eq.@eq:HJBd converge to those of eq.@eq:HJB , e.g. \(H^d \to H\) and \(g^d \to g\), and correspondingly \(V^d \to V\), then \(V\) is the viscosity solution of eq.@eq:HJB .

The proofs of the above properties can be found in many textbooks [8], [9], [11].

Remark 2. The name viscosity solution comes from the right-hand side of the equation above: the vanishing viscosity term. For the quasilinear elliptic equation above, for any \(\epsilon>0\), eq.@eq:viscosity has a unique and bounded solution \(V^{\epsilon} \in C^2(R^n)\) [8]. As the viscosity term vanishes, i.e. \(\epsilon \to 0^+\), \(V^{\epsilon}\) converges to the unique viscosity solution. Apart from singling out a unique solution, the viscosity solution is also a generalized solution that satisfies the equation almost everywhere.

It is worth mentioning that another characterization of the viscosity solution, which is more commonly used for analysis, is given below.

Theorem 2. A function \(V(t,x) \in C(\Omega)\) is a viscosity solution [9] of eq.@eq:HJB if \[\begin{align} \label{subsolution} V(T,x)=g(x) & &\forall x \in R^n \\ q+H(p, x,u) & \leq0 & \forall (t,x) \in (0,T)\times \Omega, (q,p) \in D^+V(t,x) \\ q+H(p, x,u) & \geq0 & \forall (t,x) \in (0,T)\times \Omega, (q,p) \in D^-V(t,x) \end{align}\qquad{(4)}\] where \(D_{t,x}^{1,+}V(t,x)\) and \(D_{t,x}^{1,-}V(t,x)\) are the superdifferential and subdifferential of V at (t,x), given by \[\begin{align} \label{supersolution} D_{t,x}^{1,+}V(t,x) =\left\{ \begin{bmatrix} q\\p \end{bmatrix} \in R\times R^n \left. \right| \varlimsup_{ \scriptsize { \begin{array}{c} y \to x \\ s \to t, s \in (0,T) \\ \end{array}} } \frac{V(s,y)-V(t,x)-q(s-t)-\left\langle p,y-x\right\rangle}{\left|y-x\right|+\left|s-t\right|}\leq 0\right\} \end{align}\qquad{(5)}\] \[\begin{align} \label{vnmajorg} D_{t,x}^{1,-}V(t,x) =\left\{ \begin{bmatrix} q\\p \end{bmatrix} \in R\times R^n \left. \right| \varliminf_{ \scriptsize { \begin{array}{c} y \to x \\ s \to t, s \in (0,T) \\ \end{array}} } \frac{V(s,y)-V(t,x)-q(s-t)-\left\langle p,y-x\right\rangle}{\left|y-x\right|+\left|s-t\right|}\geq 0\right\} \end{align}\qquad{(6)}\] Furthermore, if \(s \to t^+\), we obtain the right super- and subdifferentials \(D_{t^+,x}^{1,+}V(t,x)\) and \(D_{t^+,x}^{1,-}V(t,x)\). Similarly, if \(s \to t^-\), we obtain the left super- and subdifferentials \(D_{t^-,x}^{1,+}V(t,x)\) and \(D_{t^-,x}^{1,-}V(t,x)\).

Remark 3. From the definition, it can be seen that a vector \(\begin{bmatrix} q^T & p^T \end{bmatrix}^T\) is a subdifferential if and only if the hyperplane \((s,y) \to V(t,x)+q(s-t)+\left\langle p,y-x\right\rangle\) is tangent from below to the graph of V at the point (t,x). The superdifferential is similar but in the opposite direction. It is also worth mentioning that the viscosity solution defined this way is equivalent to Definition 1.
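As a brief illustration of these sets (an example we add for intuition, not part of the original analysis), consider the time-independent function \(V(x)=|x|\) on \(\mathbb{R}\) at \(x=0\). For \(p \in \mathbb{R}\), \[\frac{V(y)-V(0)-p(y-0)}{\left|y-0\right|} = 1 - p\,\mathrm{sign}(y),\] which is nonnegative for all \(y\) near 0 if and only if \(|p|\leq 1\), and nonpositive for no \(p\). Hence \(D^-V(0)=[-1,1]\) and \(D^+V(0)=\emptyset\), while at any \(x\neq 0\) both sets reduce to the classical gradient \(\{\mathrm{sign}(x)\}\), consistent with Lemma 2 below.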

4 Uniqueness of semiconvex solution in viscosity sense↩︎

This section proves the uniqueness of a semiconvex solution satisfying eq.@eq:HJB , under the assumption that the Hamiltonian is concave.

4.1 Proof of concave Hamiltonian↩︎

Lemma 1. For the affine system in eq.@eq:affine32system with the cost function in eq.@eq:cost95function , the Hamiltonian is given by \[\begin{align} \label{Hamiltonian} \min_{u \in U} H(V_x, x, u) = \min_{u \in U}\left\{ V_x \dot{x} + x^TQx +u^TRu \right\} \end{align}\qquad{(7)}\] When the expression is unbounded below in \(u\), the Hamiltonian takes the value \(-\infty\). Since the Hamiltonian is the pointwise minimum of a family of affine functions of \(V_x\), it is concave, even when problem eq.@eq:cost95function is not convex [12].

Proof. For each fixed \(u \in U\), the function \(h_u(V_x) := V_x\dot{x} + x^TQx + u^TRu\) is affine in \(V_x\). Hence for any \(V_{x_1}, V_{x_2}\) and \(t \in [0,1]\), \[\begin{align} H\left((1-t)V_{x_1}+tV_{x_2}\right) = \min_{u \in U}\left\{ (1-t)h_u(V_{x_1})+t\, h_u(V_{x_2}) \right\} \geq (1-t)H(V_{x_1})+tH(V_{x_2}) \end{align}\] since the minimum of a sum is no less than the sum of the minima. Therefore \(H\) is concave in \(V_x\). ◻

4.2 Equivalence of convex solution and viscosity solution↩︎

Most references give the proof with the opposite assumptions and conclusions [8], [10], e.g. when the Hamiltonian is convex, the unique viscosity solution is concave. This subsection gives the opposite version of the theorem, which is more familiar to the control community; the proof follows the techniques in [10].

Definition 2. Let \(V(t,x)\): \([0,T]\times R^n \to R\) be locally Lipschitz. A vector \([q^T \; p^T]^T \in R\times R^n\) is called a reachable gradient of \(V(t,x)\) at \((t,x) \in [0,T]\times R^n\) if there exists a sequence \(\left\{(t_k, x_k)\right\} \subset ([0,T]\times R^n) \setminus \left\{(t,x)\right\}\) such that V is differentiable at \((t_k,x_k)\) for each \(k \in N\), and \[\lim_{k \to \infty} \begin{bmatrix} t_k \\ x_k \end{bmatrix} = \begin{bmatrix} t \\ x \end{bmatrix} , \lim_{k \to \infty} DV(t_k,x_k) = \begin{bmatrix} q \\ p \end{bmatrix}.\] The set of all reachable gradients of V at (t,x) is denoted by \(D^*V(t,x)\)

Lemma 2. Let \(V(t,x)\): \([0,T]\times R^n \to R\), with \(V(t,\cdot)\) a convex function. Then the following property holds:

\(D_{t,x}^{1,+}V(t,x)\) is nonempty if and only if V is differentiable at (t,x), in which case \[D_{t,x}^{1,+}V(t,x)=D_{t,x}^{1,-}V(t,x)=DV(t,x)\]

The proofs are given in [9] and [10].

Lemma 3. Let \(V(t,x)\): \([0,T]\times R^n \to R\), with \(V(t,\cdot)\) a convex function, and let \(x \in A\). Then \[D_{t,x}^{1,-}V(t,x)=CH[D^*V(t,x)]\] where CH denotes the convex hull.

Proof. The proof follows [10], applied to \(-V(t,x)\), which is semiconcave. ◻

Remark 4. It is easy to see that \(D^*V(x)\) is a compact set; it is bounded since \(V(t,\cdot)\) is taken to be Lipschitz. These propositions show not only that \(D^*V(x) \subset D^-V(x)\), but also that all reachable gradients are boundary points of \(D^-V(x)\)

Theorem 3. Given the system in eq.@eq:affine32system , consider the finite-horizon cost function in eq.@eq:cost95function and the HJB equation in eq.@eq:HJB95finite . \(V(t,x)\) is the viscosity solution of the above HJB equation if

  • \(V(t,x)\) satisfies the eq.@eq:HJB95finite almost everywhere

  • \(V(t,\cdot)\) is locally Lipschitz convex and \(H(x,u,\cdot)\) is locally Lipschitz concave

  • \(V(\cdot,x) \in C^1\)

Proof. First we observe that for all \((t,x) \in (0,T)\times R^n\) \[q + H(x,u,p) = 0, \quad \forall [q^T,p^T]^T \in D^*V(t,x)\] This follows directly from the definition of \(D^*V(t,x)\), the condition \(V(\cdot,x) \in C^1\), and the continuity of H. Since by Lemma 3, \(D^-V(t,x)\) is the convex hull of \(D^*V(t,x)\), and \(H(x,u,\cdot)\) is concave by Lemma 1, it is easy to see that \[\begin{align} q + H(p, x,u) & \geq0 & \forall (t,x) \in (0,T) \times R^n, [q^T p^T]^T \in D^-V(t,x) \end{align}\] The next step is to check that \[\begin{align} q + H(p, x,u) & \leq0 & \forall (t,x) \in (0,T) \times R^n, [q^T p^T]^T \in D^+V(t,x) \end{align}\] If there exists some \([q^T p^T]^T \in D^+V(t,x)\), then according to Lemma 2, V(t,x) must be differentiable there and \([q^T p^T]^T = DV(t,x)\). Thus the inequality holds as an equality. Hence \(V(t,x)\) is the viscosity solution of eq.@eq:HJB , and uniqueness is ensured. ◻

Corollary 1. Given the system in eq.@eq:affine32system , consider the infinite-horizon optimal control problem: \[\begin{align} V(x) &= \inf_{u \in U} \int_{0}^{\infty} \left(x^TQx +u^TRu\right) dt \end{align}\] where the HJB equation becomes \[\begin{align} \label{infinite95HJB} \min_{u \in U} H(V_x, x, u) = \min_{u \in U}\left\{ V_x \dot{x} + x^TQx +u^TRu \right\}=0 \end{align}\qquad{(8)}\]

Let \(V(x)\) be a locally Lipschitz convex function satisfying eq.@eq:infinite95HJB almost everywhere. If \(H(\cdot,x,u)\) is concave, then \(V(x)\) is the viscosity solution of eq.@eq:HJB ; thus \(V(x)\) is the unique optimal cost function for eq.@eq:infinite95HJB

Proof. We consider \(D^*V(x)\) as the special case of \(D^*V(t,x)\), and \(D^-V(x)\) as the special case of \(D_{t,x}^{1,-}V(t,x)\). It follows directly from the definition of \(D^*V(x)\) and the continuity of H that the HJB equation holds for all \(p \in D^*V(x)\). Since \(D^-V(x)\) is the convex hull of \(D^*V(x)\), and \(H(\cdot,x,u)\) is concave, it is easy to see that \[\begin{align} H(p, x,u) & \geq0 & \forall x \in R^n, p \in D^-V(x) \end{align}\] For the superdifferentials, \[\begin{align} H(p, x,u) & \leq0 & \forall x \in R^n, p \in D^+V(x) \end{align}\] If there exists some \(p \in D^+V(x)\), then as in Theorem 3, \(p = DV(x)\). Thus inequality@eq:subsolution holds as an equality. Hence \(V(x)\) is the viscosity solution of eq.@eq:HJB . ◻

5 PINN-based numerical viscosity solution of HJB↩︎

This section introduces two PINN-based methods to ensure the convex structure of the solution.

5.1 Method 1: Adding a convexity term to the loss function↩︎

A sufficient condition for the function \(V(t,\cdot)\) to be convex is that its Hessian with respect to \(x\) is positive definite, which by Sylvester's criterion is equivalent to all leading principal minors being positive: \[\begin{align} Hess(V_x)=\begin{bmatrix} \frac{\partial^2V(t,x)}{\partial x_1^2} & \cdots & \frac{\partial^2V(t,x)}{\partial x_1 \partial x_n}\\ \vdots & \ddots & \vdots\\ \frac{\partial^2V(t,x)}{\partial x_n \partial x_1} & \cdots & \frac{\partial^2V(t,x)}{\partial x_n^2} \end{bmatrix}>0 \\ \iff \left|Hess(V_x)_{i,i}\right|>0, \quad i=1,\dots,n \end{align}\] where \(\left|Hess(V_x)_{i,i}\right|\) is the \(i\)th leading principal minor.

Hence, the following loss function is proposed for training: \[\begin{align} Loss = \frac{1}{N_i} \sum_{j=1}^{N_i}\left| V_{tj}+H(x_j,V_{xj}) \right|^2 +\frac{1}{N_b} \sum_{j=1}^{N_b} \left|V(t_j,x_j)-V_{NN}(t_j,x_j)\right|^2 + \frac{1}{N_H} \sum_{j=1}^{N_H}\left|\min (0,\left|Hess(V_x)_{j,j} \right|)\right|^2 \end{align}\] where \(N_i\) is the number of interior points, \(N_b\) is the number of boundary points, and \(N_H\) is the number of points at which the Hessian matrix is evaluated. It is worth mentioning that the tanh activation function is used in the neural network, so the smoothness of the solution is guaranteed. Once the positive definiteness of the Hessian is verified, the uniqueness of the solution follows from Theorem 3.
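The structure of this loss can be sketched numerically. The following is our own minimal illustration, not the paper's implementation: it uses the scalar infinite-horizon example of Section 2 (so the \(V_t\) term drops out), with the hypothetical values \(a=b=1\), central finite differences in place of automatic differentiation, and a hand-picked candidate function in place of the network:

```python
# Sketch of the Method-1 loss for the scalar example x' = x + u (a = b = 1).
# Infinite horizon, so the V_t term is absent; the "Hessian" is the scalar V_xx.
def loss(V, xs, h=1e-4):
    Vx  = lambda x: (V(x + h) - V(x - h)) / (2 * h)          # first derivative
    Vxx = lambda x: (V(x + h) - 2 * V(x) + V(x - h)) / h**2  # second derivative
    pde  = sum((-0.25 * Vx(x)**2 + x * Vx(x) + x**2)**2 for x in xs) / len(xs)
    bnd  = V(0.0)**2                                         # boundary V(0) = 0
    conv = sum(min(0.0, Vxx(x))**2 for x in xs) / len(xs)    # convexity penalty
    return pde + bnd + conv

xs = [-1.0 + 0.1 * i for i in range(21)]
c_convex, c_concave = 1 + 2**0.5, 1 - 2**0.5
# The convex root gives (near-)zero loss; the concave root also satisfies the
# PDE, but is rejected by the added Hessian-positivity term.
print(loss(lambda x: c_convex * x**2, xs))   # ~0
print(loss(lambda x: c_concave * x**2, xs))  # > 0, from the convexity penalty
```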

5.2 Method 2: Design of input convex neural network↩︎

Another method is to directly construct a partially input convex neural network to ensure the convexity of the solution required in Theorem 3.

To first build a fully input convex neural network, it is necessary to review some basic properties of convex functions.

Proposition 1. If \(f(x)\) and \(g(x)\) are both convex functions, then

  • \(m(x)=\max \left\{ f(x), g(x)\right\}\) and \(h(x)=w_1f(x)+w_2g(x)\) are both convex functions \(\forall w_1>0,w_2>0\)

  • An affine transform does not change convexity, e.g. \(h(p)=f(Ap+b)\) is also a convex function

  • \(f(g(x))\) is also convex, provided that \(f\) is monotonically non-decreasing

From the proposition above, it is easy to prove that the following neural network is a convex function:

\[\begin{align} y &= W_1\sigma(W_0x+b_0) + b_1\\ &\text{where } w_i>0 \;\;\forall w_i \in W_1, \quad \sigma(x)=\max(0,x) \nonumber \end{align}\] It is clear that this single-hidden-layer network represents a sum of convex functions. To improve the approximation ability, we add a feed-forward channel to the output, so that the single-layer NN becomes \[\begin{align} \label{ConvexNN} y &= W_1\sigma(W_0x+b_0) + b_1 + fx\\ &\text{where } w_i>0 \;\;\forall w_i \in W_1 \nonumber \end{align}\tag{4}\] The structure is shown in Fig.1 and the following theorem holds.

Figure 1: Convex Neural Network

Theorem 4. Given any convex function \(f(x)\), and \(\forall C>0\),\(\exists y_{NN}(x)\) in the form eq.@eq:ConvexNN , such that \[\begin{align} \label{approxmationtheorem} \| f(x)-y_{NN}(x) \|<C \end{align}\qquad{(9)}\] where \(\| f(x)-y_{NN}(x) \|\) is the \(L_\infty\) norm.

Proof. Eq.@eq:ConvexNN can be rewritten in the following form \[\begin{align} \label{NNOneChannel} y_{NN}(x)= \Sigma_{i=1}^{n} w_{1,i} \{ \max \{ w_{0,i}x +b_{0,i}, \;0\}+\frac{b_{1,i}}{w_{1,i}} +\frac{f_{i}}{w_{1,i}}x \} \end{align}\tag{5}\] where \(i\) is the index of the \(i\)th neuron in the hidden layer, \(w_{0,i}, w_{1,i}\) are the weights connected to that hidden neuron, and \(b_{0,i}\) is the corresponding bias. Furthermore, \(b_{1,i}, f_{i}\) are parameters that satisfy \[\begin{align} \Sigma_{i=1}^{n} b_{1,i} = b_1, \Sigma_{i=1}^{n} f_{i} = f \nonumber \end{align}\] It is easy to see that eq.@eq:NNOneChannel represents the effect of one channel through a single hidden neuron, and it is equivalent to the following sum of convex two-piece linear functions \[\begin{align} \tag{6} y_{NN}(x)&= \Sigma_{i=1}^{n} w_{1,i} \{ \max \{ \left(w_{0,i} + \frac{f_{i}}{w_{1,i}}\right)x +b_{0,i} +\frac{b_{1,i}}{w_{1,i}}, \; \frac{f_{i}}{w_{1,i}}x + \frac{b_{1,i}}{w_{1,i}}\}\} \\ \tag{7} &= \Sigma_{i=1}^{n} \max \{ m_1x +n_{1}, \; m_2x +n_{2}\} \end{align}\] It is obvious that any convex piecewise linear function \(h(x)\) can be represented as a sum of convex two-piece linear functions, e.g. eq.@eq:TwoPieceLinear . A constructive proof can start from the outermost two-piece linear function and add a function of the form \[\begin{align} h_j(x)= \max \{ m_jx +n_{j}, 0\} \end{align}\] at each nonsmooth point. Hence \(y_{NN}(x)\) can represent any convex piecewise linear function. It is also known that convex piecewise linear functions are dense in the space of continuous convex functions [13], [14]; hence eq.@eq:approxmationtheorem is proved. ◻
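The architecture of eq.@eq:ConvexNN can be sketched in a few lines. The following is our own illustration, with random weights and the convexity of the forward map checked numerically through the midpoint inequality (it is not a trained network):

```python
import random

random.seed(0)
n_hidden = 8

# Random parameters; only W1 must be elementwise positive (Proposition 1).
W0 = [random.uniform(-1, 1) for _ in range(n_hidden)]
b0 = [random.uniform(-1, 1) for _ in range(n_hidden)]
W1 = [random.uniform(0.01, 1) for _ in range(n_hidden)]  # w_i > 0 required
b1 = random.uniform(-1, 1)
f  = random.uniform(-1, 1)   # unconstrained feed-forward (skip) weight

def y_nn(x):
    """y = W1 * relu(W0*x + b0) + b1 + f*x -- convex in x by Proposition 1."""
    return sum(w1 * max(0.0, w0 * x + b)
               for w0, b, w1 in zip(W0, b0, W1)) + b1 + f * x

# Numerical midpoint-convexity check: y((u+v)/2) <= (y(u) + y(v))/2.
pairs = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(1000)]
ok = all(y_nn(0.5 * (u + v)) <= 0.5 * (y_nn(u) + y_nn(v)) + 1e-9
         for u, v in pairs)
print(ok)  # True
```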

Remark 5. It is worth pointing out that the reason we do not directly use the maximum of a family of linear functions is that, when such a network is used for solving PDEs, which involves extensive computation of derivatives, it causes oscillation during training, and it is almost impossible to converge to the global optimum. On the contrary, when approximating a convex function directly, the maximum of a family of linear functions shows good performance.

The next phase is to build the partially convex network. A partially convex neural network is proposed as shown in Fig.2, where the input-output relation is given as

\[\begin{align} y &= W_2\sigma(W_0x + W_1{\mathcal{F}}_{{NN}_2}(\hat{y}) + b_0) + b_1\\ \hat{y} &= {\mathcal{F}}_{{NN}_1}(t) \end{align}\] where \(w_i>0 \;\forall w_i \in W_2\), \(\sigma(x)=\max(0,x)\), and \({\mathcal{F}}_{{NN}_1},{\mathcal{F}}_{{NN}_2}\) are standard fully connected networks.

On the one hand, for a given \(t\), \(V(t,\cdot)\) can be seen as the convex neural network in Fig.1 with a particular bias determined by the input \(t\); hence \(V(t,\cdot)\) is a convex neural network. On the other hand, for a given \(x\), \(W_0x\) can be regarded as a particular bias for the network in \(t\). It is worth mentioning that, although \(W_2\) must be element-wise positive, there exists a network \({\mathcal{F}}_{{NN}_2}\) with enough layers such that the channel from \(\hat{y}\) to the output \(y\) is sufficiently expressive. Hence, given a standard fully connected network \({\mathcal{F}}_{{NN}_1}\) with enough layers, the representability from the input \(t\) to the output \(y\) is no worse than that of a standard fully connected neural network.

Figure 2: Partial Convex Neural Network
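The partially convex forward pass can be sketched in the same style. This is again our own illustration: the subnetworks \({\mathcal{F}}_{{NN}_1},{\mathcal{F}}_{{NN}_2}\) are replaced by simple stand-in functions, and convexity in \(x\) for each fixed \(t\) is checked via the midpoint inequality:

```python
import math
import random

random.seed(1)
n = 8
W0 = [random.uniform(-1, 1) for _ in range(n)]
W1 = [random.uniform(-1, 1) for _ in range(n)]    # unconstrained
b0 = [random.uniform(-1, 1) for _ in range(n)]
W2 = [random.uniform(0.01, 1) for _ in range(n)]  # must be elementwise positive
b1 = random.uniform(-1, 1)

def f_nn1(t):     # stand-in for the fully connected network F_NN1
    return math.tanh(t)

def f_nn2(yhat):  # stand-in for F_NN2
    return math.tanh(2 * yhat)

def v(t, x):
    """y = W2 * relu(W0*x + W1*F_NN2(F_NN1(t)) + b0) + b1."""
    z = f_nn2(f_nn1(t))
    return sum(w2 * max(0.0, w0 * x + w1 * z + b)
               for w0, w1, b, w2 in zip(W0, W1, b0, W2)) + b1

# For each fixed t, the t-dependent term only shifts the bias, so V(t,.) is convex.
ok = all(v(t, 0.5 * (u + w)) <= 0.5 * (v(t, u) + v(t, w)) + 1e-9
         for t in (0.0, 0.5, 1.0)
         for u, w in [(random.uniform(-5, 5), random.uniform(-5, 5))
                      for _ in range(300)])
print(ok)  # True
```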

5.3 Strip training method↩︎

A key issue during the training of a PINN is how to find the global minimum. For HJB-based problems, we found an efficient training method that can theoretically avoid local minima; we call it the strip training method, which is similar to curriculum learning in reinforcement learning.

For a given system \[\begin{align} \dot{x} &= a(x)x+b(x)u, \quad u(t)\in U, \quad t\in \left[0,T\right] \end{align}\] with the cost function \[\begin{align} V(t,x) &= \inf_{u \in U} \int_{t}^{T} \left(x^TQx +u^TRu\right) dt +\psi(x(T))=\inf_{u \in U} \int_{t}^{T} L(x,u) dt +\psi(x(T)) \end{align}\] the terminal cost \(V(T,x)=\psi(x)\) at the end time \(T\) is known. The corresponding HJB equation is given as

\[\begin{align} V_t + \inf_{u \in U} \{ L(x,u) + V_x\dot{x}\} = V_t + H(x,u,V_x) = 0 \end{align}\]

The strip training algorithm is given in Algorithm 4. It is worth mentioning that Method 2 performs better than Method 1 with less computation time in practice, so the strip training method is studied based on Method 2, although Method 1 could be implemented in a similar way. A sketch is shown in Fig.3

Figure 3: Strip training procedure

The main idea of strip training is as follows: the optimal cost function \(V(t,x)\) is continuous, so if we start from the boundary and take \(\Delta t\) small enough, the neural network can approximate the true solution as closely as we want. The training procedure first fits the boundary points; then, while preserving the shape at the boundary points (realized by a large weight \(\lambda\)), the HJB residual is added to the loss (step 2.2). The trained area is then treated as a known area, so after a new strip is added to the training area (step 3), the large \(\lambda\) is used to maintain the shape over the known area. The procedure is repeated until the time reaches 0.
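The backward strip schedule above can be sketched as follows. This is our own skeleton, not the paper's code: the function names and the training step are placeholders for the actual PINN update, and only the strip bookkeeping is real:

```python
def strips(T, dt):
    """Partition [0, T] into strips [T - k*dt, T - (k-1)*dt], walked backward from T."""
    out, t_hi = [], T
    while t_hi > 1e-12:
        t_lo = max(0.0, t_hi - dt)
        out.append((t_lo, t_hi))
        t_hi = t_lo
    return out

def strip_train(T, dt, lam=100.0):
    # Step 1: fit the terminal boundary V(T, x) = psi(x) first (placeholder).
    known = []  # strips whose shape must be preserved, weighted by lam
    for t_lo, t_hi in strips(T, dt):
        # Step 2.2: minimise  lam * (shape loss over `known`) + HJB residual
        #           on the new strip [t_lo, t_hi]  (placeholder for PINN update).
        known.append((t_lo, t_hi))  # Step 3: absorb the strip into the known area
    return known

covered = strip_train(1.0, 0.3)
print(covered[0], covered[-1])  # first strip ends at t = T; last starts at t = 0
```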

Figure 4: Strip training method

6 Simulation results↩︎

Here we provide three nonlinear examples and one linear example.

6.1 Example 1↩︎

\[V^{*} = \min_{u \in U} \int_{0}^{\infty} \left(x^Tx +u^2\right) dt =0.5x_1^2+x_2^2\] where \[\begin{align} \dot{x_1} &= -x_1+x_2 \\ \dot{x_2} &= -0.5x_1-0.5x_2+0.5x_1^2x_2 +x_1u \\ \end{align}\] The corresponding HJB equation is \[\begin{align} x_1^2+x_2^2+(-x_1+x_2)V_{x_1}+(-0.5x_1-0.5x_2+0.5x_1^2x_2)V_{x_2} -0.25x_1^2V_{x_2}^2=0\\ V(0)=0 \end{align}\]
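As a sanity check (our addition, with the gradient of the claimed solution written out by hand), the analytical solution can be verified against this HJB equation pointwise:

```python
import random

def hjb1(x1, x2):
    """Residual of the Example-1 HJB at (x1, x2) for V = 0.5*x1^2 + x2^2."""
    vx1, vx2 = x1, 2 * x2          # hand-computed gradient of V
    return (x1**2 + x2**2 + (-x1 + x2) * vx1
            + (-0.5 * x1 - 0.5 * x2 + 0.5 * x1**2 * x2) * vx2
            - 0.25 * x1**2 * vx2**2)

random.seed(0)
pts = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(100)]
print(max(abs(hjb1(x1, x2)) for x1, x2 in pts))  # ~0 up to rounding
```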

The training data are sampled over the range \([-1,-1]\) to \([1,1]\). Results with both methods are shown in Fig.5. The orange surface is the analytical solution, while the blue surface is the prediction of the neural network. It can be seen that in this case Method 2 has better accuracy.

Figure 5: Example 1 by method 1

6.2 Example 2↩︎

\[V^{*} = \min_{u \in U} \int_{0}^{\infty} \left(x^Tx +u^2\right) dt =0.5x_1^2+x_2^2\] where \[\begin{align} \dot{x_1} &= -x_1+x_2 \\ \dot{x_2} &= -0.5x_1-0.5x_2(1-(\cos(2x_1)+2)^2)+(\cos(2x_1)+2)u \\ \end{align}\] The corresponding HJB equation is \[\begin{align} x_1^2+x_2^2+(-x_1+x_2)V_{x_1}+(-0.5x_1-0.5x_2(1-(\cos(2x_1)+2)^2))V_{x_2} -0.25(\cos(2x_1)+2)^2V_{x_2}^2=0\\ V(0)=0 \end{align}\]

The training data are sampled over the range \([-1,-1]\) to \([1,1]\). Results with both methods are shown in Fig.6. The orange surface is the analytical solution, while the blue surface is the prediction of the neural network. In this case, because of the nonlinearity of the system and the convexity term in the loss, Method 1 could not find the global optimum, while Method 2 finds the solution directly.

Figure 6: Example 2 by method 1

6.3 Example 3↩︎

\[V^{*} = \min_{u \in U} \int_{0}^{\infty} \left(x_2^2+u^2\right) dt =x_1^2\left(\frac{\pi}{2}+\arctan(5x_1)\right)+x_2^2\] where \[\begin{align} \dot{x_1} &= x_2 \\ \dot{x_2} &= -x_1\left(\frac{\pi}{2}+\arctan(5x_1)\right)-\frac{5x_1^2}{2+50x_1^2}+4x_2 +3u \\ \end{align}\] The corresponding HJB equation is \[\begin{align} x_2^2+x_2V_{x_1}+\left(-x_1\left(\frac{\pi}{2}+\arctan(5x_1)\right)-\frac{5x_1^2}{2+50x_1^2}+4x_2\right)V_{x_2} -\frac{9}{4} V_{x_2}^2=0\\ V(0)=0 \end{align}\] This example is the hardest one to train. The results are given in Fig.7. To show that we do not start at a point that is already close to the true solution, the initial function has a really large distance from the true solution; with Method 2, we can see that the PINN converges to the true solution, although the accuracy is not as good as in the two examples before.

Figure 7: Example 3 by method 1

6.4 Example 4↩︎

For a linear system \[\begin{align} \dot{x}(t) &= x(t) + u(t) \end{align}\] the finite-horizon quadratic cost function is given as \[V^{*}(t,x) = \min_{u \in U}V(t,x)= x^2(T) + \int_{t}^{T} u^2 dt=\frac{2x^2}{1+ e^{2t-2T}}\] where \(T= 10\) in this example, and the corresponding HJB equation is \[\begin{align} V_t - 0.25V^2_x + V_xx = 0 \\ V(10,x)=x^2 \end{align}\]
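The closed-form solution can again be checked against the HJB equation and the terminal condition (our addition; the derivatives of \(V\) are written out by hand):

```python
import math

T = 10.0

def v(t, x):
    """Claimed optimal cost V(t,x) = 2x^2 / (1 + e^{2t-2T})."""
    return 2 * x**2 / (1 + math.exp(2 * t - 2 * T))

def residual(t, x):
    """V_t - 0.25*V_x^2 + V_x*x with hand-computed derivatives of V."""
    s = math.exp(2 * t - 2 * T)
    vt = -4 * x**2 * s / (1 + s)**2
    vx = 4 * x / (1 + s)
    return vt - 0.25 * vx**2 + vx * x

pts = [(t, x) for t in (0.0, 2.5, 5.0, 9.9) for x in (-2.0, 0.3, 1.5)]
print(max(abs(residual(t, x)) for t, x in pts))  # ~0: V satisfies the HJB
print(abs(v(T, 1.7) - 1.7**2))                   # terminal cost V(T,x) = x^2
```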

The corresponding training result is shown in Fig.8. It can be seen that, for the finite-horizon case, Method 2 approximates the true solution of the HJB equation well.

Figure 8: Example 4 by method 2

References↩︎

[1]
M. Raissi, P. Perdikaris, and G. E. Karniadakis, “Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations,” Journal of Computational Physics, vol. 378, pp. 686–707, 2019.
[2]
F. Sahli Costabal, Y. Yang, P. Perdikaris, D. E. Hurtado, and E. Kuhl, “Physics-informed neural networks for cardiac activation mapping,” Frontiers in Physics, vol. 8, p. 42, 2020.
[3]
S. Cai, Z. Mao, Z. Wang, M. Yin, and G. E. Karniadakis, “Physics-informed neural networks (pinns) for fluid mechanics: A review,” Acta Mechanica Sinica, vol. 37, no. 12, pp. 1727–1738, 2021.
[4]
J. Rust, “Using randomization to break the curse of dimensionality,” Econometrica: Journal of the Econometric Society, pp. 487–516, 1997.
[5]
R. Furfaro, A. D’Ambrosio, E. Schiassi, and A. Scorsoglio, “Physics-informed neural networks for closed-loop guidance and control in aerospace systems,” in AIAA SCITECH 2022 Forum, 2022, p. 0361.
[6]
A. Mukherjee and J. Liu, “Bridging physics-informed neural networks with reinforcement learning: Hamilton-jacobi-bellman proximal policy optimization (hjbppo),” arXiv preprint arXiv:2302.00237, 2023.
[7]
D. He, S. Li, W. Shi, X. Gao, J. Zhang, J. Bian, L. Wang, and T.-Y. Liu, “Learning physics-informed neural networks without stacked back-propagation,” in International Conference on Artificial Intelligence and Statistics. PMLR, 2023, pp. 3034–3047.
[8]
M. Bardi, I. C. Dolcetta et al., Optimal control and viscosity solutions of Hamilton-Jacobi-Bellman equations. Springer, 1997, vol. 12.
[9]
A. Bressan, “Viscosity solutions of hamilton-jacobi equations and optimal control problems,” Lecture notes, 2011.
[10]
P. Cannarsa and C. Sinestrari, Semiconcave functions, Hamilton-Jacobi equations, and optimal control. Springer Science & Business Media, 2004, vol. 58.
[11]
L. C. Evans, Partial differential equations. American Mathematical Society, 2022, vol. 19.
[12]
S. P. Boyd and L. Vandenberghe, Convex optimization. Cambridge University Press, 2004.
[13]
M. G. Cox, “An algorithm for approximating convex functions by means of first degree splines,” The Computer Journal, vol. 14, no. 3, pp. 272–275, 1971.
[14]
M. M. Gavrilović, “Optimal approximation of convex curves by functions which are piecewise linear,” Journal of Mathematical Analysis and Applications, vol. 52, no. 2, pp. 260–282, 1975.