September 30, 2025
This paper is dedicated to the analysis of forward-backward stochastic differential equations driven by a Lévy process. We assume that the generator and the terminal condition are path-dependent and satisfy a local Lipschitz condition. We study
solvability and Malliavin differentiability of such BSDEs.
The proof of existence and uniqueness is done in three steps. First, we truncate and localize the terminal condition and the generator. Then we use an iteration argument to obtain bounds for the solutions of the truncated BSDE (independent of the
level of truncation). Finally, we let the level of truncation tend to infinity. A stability result concludes the proof. The Malliavin differentiability result is based on a recent characterisation of the Malliavin Sobolev space \(\mathbb{D}_{1,2}\) by S. Geiss and Zhou.
Keywords: Lévy driven BSDEs, path dependent and locally Lipschitz parameters, Malliavin differentiability of BSDEs, existence and uniqueness of solutions to BSDEs
Backward stochastic differential equations (BSDEs) have been studied for many years, since the seminal papers of Pardoux and Peng ([1] in 1990 and [2] in 1992) in the Brownian motion setting. This class of equations was then extended to the setting of random measures associated with a Lévy process by Barles, Buckdahn, and Pardoux ([3] in 1997). Given that BSDEs have numerous applications, for example in optimal stochastic control, mathematical finance, semilinear PDEs or stochastic differential games, existence and uniqueness of solutions have been a central topic, as have Malliavin differentiability, the connection to semilinear PDEs for forward-backward SDEs, and numerical simulation. We refer to [4] for an extensive presentation of results and applications in the Brownian setting and to [5] for a comprehensive literature review concerning the jump framework.
To match the settings arising in applications, weaker assumptions on the coefficients are often needed; for example, optimal stochastic control problems frequently require quadratic or super-quadratic behavior in the control variable instead of Lipschitz continuity. In the Brownian setting, the quadratic case has been studied intensively since the seminal paper [6], and it is known to require rather strong assumptions on the terminal condition and the random part of the generator: in [7] and [8] existence and uniqueness were shown when these satisfy an exponential moment condition. We also refer to [9], where a method to prove existence of solutions to quadratic BSDEs with unbounded terminal conditions using comparison theorems is developed. By contrast, the super-quadratic setting is much less studied. First results were obtained in [10] for the bounded case and later in [11] for a framework where the control variable is bounded. The unbounded framework (for the terminal condition, the random part of the generator and the control variable) was then tackled in [12].
In the Lévy setting, the quadratic-exponential framework - the name refers to the fact that it allows exponential growth with respect to the jump part of the control variable - has been studied intensively due to its applications to exponential utility maximization problems, see e.g. [13]–[16] or [17] in the bounded case. The unbounded case was studied in [18] and in the preprints [19] and [20]. Still in the unbounded case, we also refer to the preprint [21] for the reflected setting and to [22] for the setting of marked point processes. As in the Brownian setting, the super-quadratic framework has received much less attention: see [23].
In the present paper we consider the FBSDE given by 2 and 3 below. These equations are driven by a Brownian motion and/or an independent compensated Poisson random measure. We treat a path-dependent framework (for the SDE’s drift and for the dependence of the FBSDE on the SDE), but let us emphasize that our setting includes the Markovian case: see Remark 1 and Example 1.
Our main result, Theorem 2, requires Assumptions 1 and 2 and states existence and uniqueness of solutions which are bounded by an expression involving the \(\sup\)-norm of the forward process. It also states that the solutions are Malliavin differentiable. In order to highlight the novelty of our results, it is important to put our assumptions into perspective with regard to the current state of the art.
If \(\tt r=0\) (\(\tt r\) is introduced in Assumption 2), our results generalize [23] since we do not need any boundedness of the terminal condition and of the random part of the generator. They also extend some Brownian framework results of [11] to the path-dependent setting.
In the pure Brownian setting, we do not cover the results of [12] since we need to assume \(0 \leq {\tt r} \leq \frac{1}{2\ell}\), while in [12] only \(0 \leq {\tt r} \leq \frac{1}{\ell}\) is needed. This limitation may seem surprising at first sight, since our proof strategy is inspired by [12], but it is easily explained. Indeed, the technical Lemma 4, which is essential in our proof, gives us that \[\mathbb{E}[e^{c\sup_{0 \leq t \leq T} |X_t|^p}]<+\infty\] for all \(c > 0\) and all \(p \leq 1\). It is easy to show that this estimate fails for \(p>1\) by considering a standard Poisson process. Nevertheless, in the Brownian setting this estimate can be extended to \(p=2\) and \(c\) small enough: note that this is obvious for the standard Brownian motion and see e.g. part 5 in [24] for the general case.
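To make the Poisson counterexample mentioned above explicit (a short computation, included only for the reader's convenience): if \(N\) is a standard Poisson process, then for every \(c>0\) and \(p>1\) \[\mathbb{E}\big[e^{c\sup_{0 \leq t \leq T}|N_t|^p}\big] \ge \mathbb{E}\big[e^{cN_T^p}\big] = e^{-T}\sum_{n= 0}^\infty e^{cn^p}\frac{T^n}{n!} = +\infty,\] since \(cn^p\) grows faster than \(\log(n!)\sim n\log n\), so the terms of the series do not even tend to zero.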
In the quadratic framework (\(\ell=1\)), we obtain an existence and uniqueness result in some cases where the terminal condition is not exponentially integrable. Indeed, taking \({\tt r}=1/2\) and \(|X_T|^{3/2}\) as terminal condition, any Lévy process that is not purely Brownian satisfies \(\mathbb{E}\left[e^{c|X_T|^{3/2}}\right]=+\infty\) for all \(c>0\). This does not contradict known results for unbounded quadratic-exponential BSDEs, where exponential integrability is required and cannot be relaxed for the classical entropic risk measure, since our framework does not cover the quadratic-exponential setting.
We assume in Assumption 1 that \(\sigma\) and \(\rho\) are deterministic. This strong assumption is essential in our proof and cannot easily be relaxed to a setting where \(\sigma\) and \(\rho\) are functions of the process (and even less so for functions of the process path). Indeed, in the Brownian setting one can show that the estimate for \(Z\) in Theorem 2 is still valid in the quadratic framework, for the bounded setting or \({\tt r}=0\): see [12] and [25], where BMO tools are essential. Let us remark that in the quadratic framework these estimates are interesting for some applications, for example the analysis of numerical schemes, but are not useful for proving existence and uniqueness results. Finally, to the best of our knowledge, nothing is known in the super-quadratic framework, where no BMO tools are available. In order to preserve the readability of this article, and since only the quadratic framework could be tackled, we will not treat here the extension of [12] and [25] to the Lévy setting when \(\sigma\) and \(\rho\) are not deterministic.
For the Malliavin differentiability of SDEs there is a very general result in [26]. However, it does not cover our specific path-dependent setting, which permits a pointwise estimate for the Malliavin derivative. Our result on the Malliavin differentiability of BSDEs with jumps generalizes [27] and [17] to the unbounded case.
Let us now give some details concerning the proof strategy and tools used.
The starting point of the proof of the existence and uniqueness result is to truncate/localize the terminal condition \(g\) and the generator \(({\boldsymbol{x}},z,u) \mapsto f(.,{\boldsymbol{x}},.,z,u)\) in order to reduce the problem to the classical Lipschitz framework. Then, a crucial step is Proposition 14 where, as in [12], an iteration argument is applied to obtain bounds for the solution of the truncated BSDE which are independent of the level of truncation. Finally, we let the truncation level tend to infinity and conclude thanks to the stability result given by Lemma 18.
Concerning the Malliavin differentiability result, a recent characterisation of the Malliavin Sobolev space \({\mathbb{D}_{1,2}}\) from [28] (see Theorem 5 below) enables us to obtain Malliavin differentiability in the Gaussian setting of the path-dependent SDE and BSDE very easily. Another characterisation of \({\mathbb{D}_{1,2}}\) goes back to Sugita ([29]) and was rediscovered in recent years as a useful tool ([26], [27], [30]). In combination with results of Janson [31], we use it here, for instance, to estimate Malliavin derivatives of path-dependent functionals in the Gaussian setting, or in the ‘Gaussian direction’ of the derivative in the Lévy setting. Let us emphasize that these estimates are crucial to obtain the bounds on the truncated solution in Proposition 14.
Eventually, we highlight that the estimates of Theorem 2, in particular those on \(Z\) and \(U\), can be used to tackle interesting applications. For example, a comparison result can be derived easily thanks to these estimates and the classical linearization technique. This comparison result can then be used to prove that the unique solution of our FBSDE provides a viscosity solution to the associated integro-PDE. It should also be possible to prove the (classical) differentiability of the solution under appropriate assumptions, by following the same ideas as in [32]. Finally, these estimates allow one to obtain an upper bound on the truncation error between the original BSDE and the one where the dependence of the generator with respect to \((z,u)\) is bounded in order to get a classical Lipschitz generator. This kind of result is important if we want to have a numerical scheme for solving the BSDE: indeed, a natural approach is to use a classical scheme on the Lipschitz approximated BSDE (e.g. [33]) and then bound the global error by the truncation error and the discretization error for the truncated BSDE. Obviously, the two terms depend strongly on the truncation level. Nevertheless, in the Brownian setting it is possible to choose it in such a way that the global error is reasonable, see [12], and it should be possible to use the same approach in the Lévy setting.
In the remainder, the paper is organized as follows: Section 2 contains the assumptions and notations needed to understand the underlying problem. Section 3 presents our main result on existence, uniqueness and Malliavin differentiability of super-quadratic BSDEs, together with remarks, explanations and examples. The sections that follow are preliminaries for the proof of the main result but contain results of independent interest: in Section 4 we show that the forward process admits all exponential moments, and Section 5 presents tools for Malliavin calculus in the Lévy setting. Finally, in Section 6 bounds for the BSDE’s solution processes are obtained and in Section 7, together with an a priori estimate, the proof of the main result is completed. Some auxiliary results are postponed to the Appendix.
Let \(\mathcal{L}=\left( \mathcal{L}_t\right)_{t\in{[0,T]}}\) (for \(T>0\) fixed) be a càdlàg Lévy process on a complete probability space \((\Omega,\mathcal{F},\mathbb{P})\) with Lévy measure \(\nu\) on \(\mathbb{R}_0:= \mathbb{R}\!\setminus\!\{0\}\) (in particular, \(\nu\equiv 0\) is permitted). We will denote the augmented natural filtration of \(\mathcal{L}\) by \(\left({\mathcal{F}_t}\right)_{t\in{[0,T]}}\) and assume that
\(\mathcal{F}=\mathcal{F}_T.\)
The Lévy-Itô decomposition of a Lévy process \(\mathcal{L}\) can be written as \[\begin{align}
\mathcal{L}_t = \text{b} t + \varsigma W_t + \int_{{]0,t]}\times \{ |x|\le1\}} x\tilde{\mathcal{N}}(ds,dx) + \int_{{]0,t]}\times \{ |x|> 1\}} x \mathcal{N}(ds,dx),
\end{align}\] where \(\text{b}\in \mathbb{R}, \varsigma\geq 0\), \(W\) is a Brownian motion and \(\mathcal{N}\) (\(\tilde{\mathcal{N}}\)) is the (compensated) Poisson random measure associated with \(\mathcal{L}\).
Let \(D{[0,T]}\) denote the space of càdlàg functions on the interval \({[0,T]}\). For \({\boldsymbol{x}}\in D[0,T]\) let \(|{\boldsymbol{x}}|_\infty := \sup_{0\le t\le T} |{\boldsymbol{x}}_t|.\)
We equip \(D[0,T]\) with the (modified) Skorokhod \(J_1\)-metric \(d_{J_1}\) (see [34]) given by \[\begin{align} d_{J_1}({\boldsymbol{x}},{\boldsymbol{y}})=\inf_\lambda\bigg\{|{\boldsymbol{x}}\circ\lambda-{\boldsymbol{y}}|_\infty + \sup_{s\neq t}\Big|\log\Big(\frac{\lambda(s)-\lambda(t)}{s-t}\Big)\Big|\bigg\}, \end{align}\] where the infimum is taken over all increasing bijections \(\lambda\colon [0,T]\to[0,T]\). Note that \(d_{J_1}({\boldsymbol{x}},{\boldsymbol{y}})\leq|{\boldsymbol{x}}-{\boldsymbol{y}}|_\infty\). For properties of \((D{[0,T]},d_{J_1})\) and further reading see [34]–[37], for instance.
Let \(\mathcal{B}(D{[0,T]})\) be the Borel \(\sigma\)-algebra generated by the open sets of \(D{[0,T]}.\) This \(\sigma\)-algebra coincides with the \(\sigma\)-algebra generated by the coordinate projections from \(D{[0,T]}\) to \(\mathbb{R}\) given by \(\;{\boldsymbol{x}}\mapsto {\boldsymbol{x}}_t,\; t\in [0,T]\) (see [35]).
For \({\boldsymbol{x}}\in D{[0,T]}\) and \(t\in [0,T]\) we set \[{\boldsymbol{x}}^t:=({\boldsymbol{x}}_{t\wedge s})_{s\in{[0,T]}}\] and define \[D{[0,t]}:=\left\{{\boldsymbol{x}}\in D{[0,T]} : {\boldsymbol{x}}^t={\boldsymbol{x}}\right\}.\] By this identification we define a filtration on this space by \[\begin{align} \label{filtrationG-t} \mathcal{G}_t:=\sigma\left(\mathcal{B}\left(D{[0,t]}\right)\cup \operatorname{N}_{\mathcal{L}}{[0,T]}\right), \quad 0\leq t\leq T, \end{align}\tag{1}\] where \(\operatorname{N}_{ \mathcal{L}}{[0,T]}\) denotes the \(\mathbb{P}_{\mathcal{L}}\)- null sets of \(\mathcal{B}\left(D{[0,T]}\right)\) where \(\mathbb{P}_{\mathcal{L}}\) is the image measure of the Lévy process \({\mathcal{L}}\colon\Omega\to D{[0,T]},\omega \mapsto { \mathcal{L}}(\omega).\)
For \(p> 0\), a measure space \((M,\Sigma,\mu)\) and a Banach space \(\mathrm{E}\), as usual, let \[\begin{align} &L^p(M,\Sigma,\mu;\mathrm{E})\\&:=\Big\{f\colon M \rightarrow \mathrm{E} : f\text{ is measurable and }\int_M\|f\|_{\mathrm{E}}^p d\mu<\infty\Big\}\big/ N_\mu, \end{align}\] where \(N_\mu\) are the measurable functions that are zero \(\mu\)-a.e. For the boundary cases, \(L^0(M,\Sigma,\mu;\mathrm{E})\) is the space of measurable functions \(f\colon M\to \mathrm{E}\) (up to functions being zero \(\mu\)-a.e.) and \(L^\infty(M,\Sigma,\mu;\mathrm{E})\) is the space of measurable functions such that \(\|f\|_{\mathrm{E}}\) is essentially bounded (again up to \(\mu\)-a.e. zero functions). With a slight abuse of notation, depending on the context, when taking a function out of such a space we work with a representative of its equivalence class.
Further, if the underlying measure space and/or \(\sigma\)-algebra is clear, we will simply drop them in the notation; if moreover \(\mathrm{E}=\mathbb{R}\), we just write \(L^p(\mu)\).
For the particular case \(\mathrm{E}=\mathbb{R}\), we let \(L^p([0,T]):=L^p([0,T],\mathcal{B}([0,T]), \lambda),\) where \(\lambda\) stands for the Lebesgue measure.
For \(t\in [0,T]\) and a probability measure \(\mathbb{Q}\) on \((\Omega, \mathcal{F})\) we denote \[\mathbb{E}^\mathbb{Q}_t \cdot = \mathbb{E}^\mathbb{Q}[\cdot |\mathcal{F}_t] \quad \text{and} \quad \mathbb{E}^\mathbb{Q}_{t-} \cdot = \mathbb{E}^\mathbb{Q}\Big[\cdot \Big | \sigma \Big (\bigcup_{s<t } \mathcal{F}_s \Big) \Big].\]
We denote by \(\operatorname{Prog}\) the \(\sigma\)-algebra of \((\mathcal{F}_t)_t\)-progressively measurable sets on \(\Omega\times[0,T]\) and by \(\mathcal{P}\) the one of \((\mathcal{F}_t)_t\)-predictable sets generated by the left-continuous \((\mathcal{F}_t)\)-adapted processes thereon.
For \(1\le p \le \infty\) let \(\mathcal{S}^p\) denote the space of all \(\operatorname{Prog}\)-measurable and càdlàg processes \(Y\colon\Omega\times{[0,T]} \rightarrow \mathbb{R}\) such that \[\begin{align} \left\|Y\right\|_{\mathcal{S}^p}:=\||Y|_\infty \|_{L^p} <\infty. \end{align}\]
\(L^p(W)\) denotes the space of all \(\operatorname{Prog}\)-measurable processes \(Z\colon \Omega\times{[0,T]}\rightarrow \mathbb{R}\) such that \[\begin{align} \|Z \|_{L^p( \mathbb{P}\otimes\lambda)} <\infty. \end{align}\]
We define \(L^2(\tilde{\mathcal{N}})\) as the space of all random fields \(U\colon \Omega\times{[0,T]}\times{\mathbb{R}_0}\rightarrow \mathbb{R}\) which are measurable with respect to \(\mathcal{P}\otimes\mathcal{B}(\mathbb{R}_0)\) such that \[\begin{align} \left\|U\right\|_{L^2(\tilde{\mathcal{N}}) }^2:=\mathbb{E}\int_{{[0,T]}\times{\mathbb{R}_0}}\left|U_s(x)\right|^2 ds \nu(dx)<\infty. \end{align}\]
We consider for \(t \in [0,T]\) \[\begin{align} \tag{2} X_t = \, & x + \int_0^t b(s, (X_r^{s})_{r\in [0,T]}) ds + \int_0^t \sigma(s) dW_s + \int_{{]0,t]}\times{\mathbb{R}_0}} \rho(s,v) \tilde{\mathcal{N}}(ds,dv), \\ \tag{3} Y_t= \,&g((X_s)_{s\in [0,T]})+\int_t^T f\left( s, (X_r^{s})_{r\in [0,T]},Y_s, Z_s, H_s \right)ds \notag \\ & - \int_t^T Z_{s} dW_s -\int_{{]t,T]}\times{\mathbb{R}_0}}U_{s}(v) \tilde{\mathcal{N}}(ds,dv), \end{align}\] with \(H_s := \int_{\mathbb{R}_0} h(s, U_s(v)) \kappa (v) \nu(dv),\) and \(\kappa (v) := 1\wedge |v|.\) Recall that \(X_r^s = X_{s \wedge r}\).
The structure of the dependence on \(U\) through a \(\nu(dv)\)-integral functional is an explicit way to employ a locally Lipschitz dependence in this variable. With this specific form, we can rely on earlier BSDE results, e.g. those in [17], [27], [38], especially concerning existence and comparison theorems.
Definition 1. We say that \((Y,Z,U)\) is a solution to 3 if the triplet of processes satisfies 3 and belongs to \(\mathcal{S}^2\times L^2(W)\times L^2(\tilde{\mathcal{N}})\).
We impose the following assumptions on the coefficients.
Assumption 1 (for \(X\)). The functions \(b: [0,T]\times D[0,T] \to \mathbb{R},\) \(\sigma: [0,T]\to \mathbb{R},\) and \(\rho: [0,T] \times \mathbb{R}\to \mathbb{R}\) are jointly measurable and satisfy
(1) \(|b(t,0)| \le K_b\), \(|b(t,{\boldsymbol{x}})-b(t,{\boldsymbol{x}}')|\le L_b |{\boldsymbol{x}}-{\boldsymbol{x}}'|_\infty,\) for all \((t,{\boldsymbol{x}})\in [0,T] \times D[0,T],\)
(2) \(|\sigma(t)| \le K_\sigma,\) for all \(t\in [0,T]\),
(3) \(|\rho(t,z)| \le \kappa_{\rho}(z)\) for all \((t,z)\in [0,T] \times \mathbb{R}\) with \(\kappa_{\rho} \in L^2(\nu)\cap L^\infty(\nu)\).
We denote \(\kappa_{\rho,2}:=\|\kappa_{\rho}\|_{L^2(\nu)}\) and \(\kappa_{\rho,\infty}:=\|\kappa_{\rho}\|_{L^{\infty}(\nu)}\).
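As a simple illustration (not needed in the sequel), any jump coefficient with \(|\rho(t,v)|\le 1\wedge |v|\), e.g. \(\rho(t,v)=(-1)\vee(v\wedge 1)\), satisfies Assumption 1 (3) with \(\kappa_\rho(v)=1\wedge|v|\), since \[\int_{\mathbb{R}_0}\kappa_{\rho}(v)^2\,\nu(dv)=\int_{\mathbb{R}_0}(1\wedge|v|^2)\,\nu(dv)<\infty \quad\text{and}\quad \kappa_{\rho,\infty}\le 1\] by the defining property of the Lévy measure \(\nu\).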
Remark 1.
Note that in Assumption 1 (1), even if there is no dependence on \(t\), the measurability of \(b\) w.r.t. \(\mathcal{B}(D[0,T])\) needs to be required, as it does not follow from Lipschitz continuity w.r.t. \(|\cdot|_\infty\) (in contrast to the situation on the space of continuous functions \(C[0,T]\)). See the counterexample in Remark 3.
The mapping \(b\colon (t,{\boldsymbol{x}})\mapsto \tilde{b}(t, {\boldsymbol{x}}_T)\) for a measurable function \(\tilde{b}\colon [0,T]\times\mathbb{R}\to\mathbb{R}\) which is Lipschitz in the second variable fits our setting. Since \[\tilde{b}(t,X_t)=\tilde{b}(t,X^t_T)=b(t,X^t),\] the Markovian setting is included in ours.
The SDE 2 has a unique solution under these assumptions. This can be seen following the proof in [39], where Picard iterations are constructed that converge to a solution. For another approach in the continuous setting, see [40]. Exactly the same arguments as in [39] apply in our situation (even more generally, with an additional Lipschitz dependence of \(\sigma\) and \(\rho\) on the path). The limit of the Picard iterations in \(|\cdot|_\infty\) is also a limit in \(d_{J_1}\) (the coarser topology); therefore, measurability in \(D[0,T]\) is preserved.
Assumption 2 (for \((Y,Z,U)\)). For a measurable function \[f: [0,T]\times D{[0,T]} \times \mathbb{R}\times \mathbb{R}\times \mathbb{R}\to \mathbb{R}\] there exists a function \(k_f \in L^1([0,T])\) such that for all \(t \in [0,T]\) it holds \[\begin{align} |f(t,0,0,0,0)| \le k_f(t). \end{align}\]
Moreover, we assume that there exist \(c>0\), \(\ell \ge 1, 0\le {\tt r} \le \frac{1}{2\ell}, \alpha\ge0, \beta\ge 0, \gamma\ge 0,\) \(L_{f,{\rm y}}\ge 0\) and \(m_1, m_2 \ge 0\), \(m_1+ m_1m_2+m_2 \le \ell\) such that
(i) \(g: D{[0,T]} \to \mathbb{R}\) is \(\mathcal{B}(D[0,T])-\mathcal{B}(\mathbb{R})\)-measurable and
\(|g({\boldsymbol{x}})-g({\boldsymbol{x}}') | \le (c+ \frac{\alpha}{2} (|{\boldsymbol{x}}|^{{\tt r}}_\infty+ |{\boldsymbol{x}}'|^{\tt r}_\infty))|{\boldsymbol{x}}-{\boldsymbol{x}}'|_\infty\) ,
(ii) for \(f\) we assume that, for almost all \(s\),
\(|f(s,{\boldsymbol{x}}, y, z, u)-f(s,{\boldsymbol{x}}', y,z,u) | \le (c+ \frac{\beta}{2}(|{\boldsymbol{x}}|^{\tt r}_\infty + |{\boldsymbol{x}}'|^{\tt r}_\infty))|{\boldsymbol{x}}-{\boldsymbol{x}}'|_\infty,\)
(iii) \(|f(s,{\boldsymbol{x}}, y, z, u)-f(s,{\boldsymbol{x}}, y', z, u) | \le L_{f,{\rm y}} |y- y'|\),
(iv) \(|f(s,{\boldsymbol{x}},y, z, u)-f(s,{\boldsymbol{x}}, y, z', u) | \le \left (c+ \frac{\gamma}{2} (|z|^\ell + |z'|^\ell) \right )| z- z'|\),
(v) \(|f(s,{\boldsymbol{x}}, y, z, u)-f(s,{\boldsymbol{x}}, y, z,u') | \le \left(c+ \frac{\gamma}{2} (| u|^{m_1} + |u'|^{m_1}) \right )|u-u'|\),
(vi) \(h: [0,T]\times \mathbb{R}\to \mathbb{R}\) is continuous and satisfies
\(|h(s,u)-h(s,u') | \le \left (c+ \frac{\gamma}{2} (|u|^{m_2}+ |u'|^{m_2}) \right )|u-u'| , \quad h(s,0)=0,\)
(vii) \(f_{\rm u}(s,{\boldsymbol{x}}, y, z, u) h_{\rm u}(s, u') \ge -1\) where \(f_{\rm u}\) and \(h_{\rm u}\) stand for the weak derivatives.
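To illustrate these conditions (two simple special cases, not used later): the terminal condition \(g({\boldsymbol{x}})=|{\boldsymbol{x}}_T|^{3/2}\) mentioned in the introduction satisfies (i) with \({\tt r}=1/2\) and \(\alpha=3\), since by the mean value theorem \[\big||{\boldsymbol{x}}_T|^{3/2}-|{\boldsymbol{x}}'_T|^{3/2}\big| \le \tfrac{3}{2}\max(|{\boldsymbol{x}}_T|,|{\boldsymbol{x}}'_T|)^{1/2}\,\big|{\boldsymbol{x}}_T-{\boldsymbol{x}}'_T\big| \le \tfrac{3}{2}\big(|{\boldsymbol{x}}|^{1/2}_\infty+|{\boldsymbol{x}}'|^{1/2}_\infty\big)|{\boldsymbol{x}}-{\boldsymbol{x}}'|_\infty,\] and it is \(\mathcal{B}(D[0,T])\)-measurable as a function of the coordinate projection \({\boldsymbol{x}}\mapsto{\boldsymbol{x}}_T\). Similarly, \(h(s,u)=u\) fulfils (vi) with \(m_2=0\); in this case \(H_s=\int_{\mathbb{R}_0}U_s(v)\kappa(v)\nu(dv)\) and (vii) reduces to \(f_{\rm u}\ge -1\).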
Our main result is the following theorem.
Theorem 2. Let \((g,f,h)\) be such that Assumptions 1 and 2 are satisfied. Then, there exists a solution \((Y,Z,U)\) such that \[\mathbb{E}|Y|_\infty^2+\mathbb{E}\int_0^T|Z_s|^2ds +\mathbb{E}\int_0^T \|U_s\|_{L^2(\nu)}^2ds<\infty\] and there exist \(a,b,c>0\) such that \[\begin{align} \label{bd-cond-x} |Y_t | \le c(1 + |X^t|^{{\tt r}+1}_\infty), \,\,\,\;|Z_t | \le a + b |X^t|^{\tt r} _\infty \,\,\,\text{and}\,\,\, |U_t(v) | \le \kappa_{\rho}(v)(a + b |X^t|^{\tt r}_\infty ). \end{align}\qquad{(1)}\] Moreover, the solution is unique among the set of solutions satisfying these bounds, and the processes are Malliavin differentiable, i.e. \[\begin{align} Y, Z \in L^2([0,T];{\mathbb{D}_{1,2}}), \quad U\in L^2([0,T]\times\mathbb{R}_0;{\mathbb{D}_{1,2}}) \end{align}\] and for \(t \le u \le T\) \[\begin{align} \label{eqn95DZ95DU} D_{t,x} Y_u= \,& D_{t,x}g(X)+ D_{t,x} \int_u^T f \left( s, X^s, Y_s,Z_s,H_s \right)ds \notag \\ & - \int_u^TD_{t,x} Z_{s} dW_s -\int_{{]u,T]}\times{\mathbb{R}_0}} D_{t,x} U_{s}(v) \tilde{\mathcal{N}}(ds,dv). \end{align}\qquad{(2)}\]
We also have the following bounds for \(x\neq 0\) and for a.a. \(0 \le s \le t\le T\) \[\begin{align} \label{bound95DZ95DU} |D_{s,x}Z_t| \le C(1+ a + b |X^t|_\infty^{{\tt r}}) \quad\text{and}\quad |D_{s,x} U_t(v) | \le \kappa_{\rho}(v) C(1+ a + b |X^t|_\infty^{{\tt r}}), \end{align}\qquad{(3)}\] where \(C\) depends on \(\kappa_{\rho}\) through \(\kappa_{\rho,\infty}\) and \(\kappa_{\rho,2}\).
Remark 3.
A function \(g: D{[0,T]} \to \mathbb{R}\) satisfying only \[\begin{align} \label{uniform-lip} |g({\boldsymbol{x}})-g({\boldsymbol{y}})| \le L_g |{\boldsymbol{x}} -{\boldsymbol{y}}|_\infty \end{align}\tag{4}\] is in general not \(\mathcal{G}_T\)- and hence also not \(\mathcal{B}(D{[0,T]})\)-measurable:
In the case of \(\mathcal{B}(D[0,T])\), consider a set \(A\) which is closed w.r.t. the \(\sup\)-norm but not Borel measurable w.r.t. the Skorokhod metric. Such a set must exist because otherwise the Borel \(\sigma\)-algebras generated by the \(\sup\)-norm and the Skorokhod metric would coincide, which is not the case, as proved in [35], for example. For a general Lévy process such a set can also be found if we take the completion \(\mathcal{G}_T\) instead of \(\mathcal{B}(D[0,T])\). Take for example \(\mathcal{L}\) to be a standard Poisson process. Then, consider the map \[\phi\colon ([0,T],\mathrm{Leb}([0,T]))\to (D[0,T],\mathcal{G}_T,\mathbb{P}_\mathcal{L}),\quad t\mapsto \mathbb{1}_{[t,T]},\] where \(\mathrm{Leb}([0,T])\) denotes the Lebesgue \(\sigma\)-algebra on \([0,T]\). The same map, considered with the \(\sigma\)-algebras \(\mathcal{B}([0,T])\) and \(\mathcal{B}(D[0,T])\), is measurable, see [35]. Each set in \(\mathcal{G}_T\) can be written as the symmetric difference \(B\triangle M\) with \(B\in \mathcal{B}(D[0,T])\) and \(M\in N_\mathcal{L}[0,T]\). Hence, the preimage \(\phi^{-1}(B\triangle M)\) is given by \(\phi^{-1}(B)\triangle\phi^{-1}(M)\). The set \(\phi^{-1}(B)\) is measurable and \(\phi^{-1}(M)\) is a subset of a Lebesgue null set in \([0,T]\) (the distribution of the jump position of \(\mathcal{L}\) in \([0,T]\) and the Lebesgue measure are equivalent) and therefore contained in \(\mathrm{Leb}([0,T])\). Thus \(\phi\) is also \(\mathrm{Leb}([0,T])\)-\(\mathcal{G}_T\)-measurable. Take a non-Lebesgue-measurable set \(H\subseteq[0,T]\) and set \(U:=\bigcup_{t\in H} \{{\boldsymbol{y}}\in D[0,T]: |\phi(t)-{\boldsymbol{y}}|_\infty<\tfrac{1}{2}\}\). Then, \(U\) is open w.r.t. \(|\cdot|_\infty\) and \(\phi^{-1}(U)=H\notin \mathrm{Leb}([0,T])\). Therefore, \(U\) is not \(\mathcal{G}_T\)-measurable and neither is its complement \(A:=D[0,T]\setminus U\). Define the map \[g({{\boldsymbol{x}}}) = \inf_{ {\boldsymbol{y}} \in A} |{{\boldsymbol{x}}} - {\boldsymbol{y}}|_\infty,\] which satisfies 4 . But we have the pre-image \(g^{-1}(\{0\}) =A\), hence \(g\) is not \(\mathcal{G}_T\)-measurable.
We need \(\kappa (x) := 1 \wedge |x|\) and Assumption 2 (vii) because we want the Doléans-Dade exponentials given in 21 to be non-negative, and for a comparison theorem to hold which is applied in the proof of Proposition 12 below.
Example 1.
(a) Our setting covers the standard Markovian setting. Functions of the type \[\begin{align} g({\boldsymbol{x}})=\tilde{g}({\boldsymbol{x}}_T),\quad f(s,{\boldsymbol{x}}^s,y,z,u)=\tilde{f}(s,{\boldsymbol{x}}^s_s,y,z,u)=\tilde{f}(s,{\boldsymbol{x}}^s_T,y,z,u), \end{align}\] where \(\tilde{g}\) and \(\tilde{f}\) are (locally) Lipschitz in the second (real) variable, meet our conditions: by the (local) Lipschitz condition, they are (locally) Lipschitz w.r.t. \(|\cdot|_\infty\). Since the projection \({\boldsymbol{x}}\mapsto {\boldsymbol{x}}_T\) is measurable, the requirements are met.
(b) The supremum functional \({\boldsymbol{x}}\mapsto|{\boldsymbol{x}}|_\infty\) fits in our setting: clearly it is Lipschitz (with constant \(1\)) w.r.t. \(|\cdot|_\infty\). For the continuity, and hence measurability, w.r.t. \(d_{J_1}\), assume a sequence \(({\boldsymbol{x}}_n)_{n\geq 1}\) with \(d_{J_1}({\boldsymbol{x}}_n,{\boldsymbol{x}})\to 0\), which means that there is a sequence of time distortions \((\lambda_n)_{n\geq 1}\) such that \(|\lambda_n-\mathrm{id}|_\infty+|{\boldsymbol{x}}_n\circ \lambda_n-{\boldsymbol{x}}|_\infty\to 0\). Since for all admissible \(\lambda\) and \({\boldsymbol{x}}\in D[0,T]\) we have that \(|{\boldsymbol{x}}|_\infty=|{\boldsymbol{x}}\circ\lambda|_\infty\), it follows that \[\begin{align} \big||{\boldsymbol{x}}_n|_\infty-|{\boldsymbol{x}}|_\infty\big|=\big||{\boldsymbol{x}}_n\circ\lambda_n|_\infty-|{\boldsymbol{x}}|_\infty\big|\leq|{\boldsymbol{x}}_n\circ\lambda_n-{\boldsymbol{x}}|_\infty, \end{align}\] from which continuity follows.
(c) In a similar way, the (signed) size of the ‘first’ jump \[\Delta {\boldsymbol{x}}_\tau:=\begin{cases}{\boldsymbol{x}}_{\tau}-{\boldsymbol{x}}_{\tau-},& \tau:=\inf\{t\in [0,T]: |{\boldsymbol{x}}_t-{\boldsymbol{x}}_{t-}|>0\}>0,\\ 0,& \tau=0\text{ or }\tau=\inf\emptyset,\end{cases}\] of a trajectory \({\boldsymbol{x}}\) is Lipschitz in the \(|\cdot|_\infty\)-norm with constant 2 and continuous w.r.t. \(d_{J_1}\). Continuity, and therefore measurability, holds since the size of the first jump does not change under time distortions \(\lambda\), and the idea from the previous example can be carried out again. This also works for other functionals that do not depend on a time distortion \(\lambda\), as the following example shows.
(d) The maximal jump \(j({\boldsymbol{x}})\) of a trajectory \({\boldsymbol{x}}\) satisfies our assumptions for \(g\): it is Lipschitz with constant \(2\) w.r.t. \(|\cdot|_\infty\) and continuous w.r.t. \(d_{J_1}\) (see [35]).
(e) The jump size at a fixed position \(s\in [0,T]\), \({\boldsymbol{x}}\mapsto \Delta{\boldsymbol{x}}_s\), is also Lipschitz with respect to \(|\cdot|_\infty\). It is measurable (since it is a limit of differences of projections), but, in contrast to the previous example, not continuous with respect to \(d_{J_1}\). Take for example \({\boldsymbol{x}}_n=\mathbb{1}_{[\frac{1}{2}-\frac{1}{n},1]}\) and \({\boldsymbol{x}}=\mathbb{1}_{[\frac{1}{2},1]}\). Then \(d_{J_1}({\boldsymbol{x}}_n,{\boldsymbol{x}})\to0\) but \(|\Delta({\boldsymbol{x}}_{n})_{\frac{1}{2}}-\Delta{\boldsymbol{x}}_{\frac{1}{2}}|=1\).
(f) The integration functional \(g({\boldsymbol{x}}) = \int_0^T {\boldsymbol{x}}_tdt\) satisfies our assumptions, since it is Lipschitz w.r.t. \(|\cdot|_\infty\) (it is even continuous w.r.t. \(d_{J_1}\)). Another admissible type of functional is the point evaluation \(g({\boldsymbol{x}})={\boldsymbol{x}}_{t^*}\), with \(t^*\in [0,T]\) (in this case it is not continuous w.r.t. \(d_{J_1}\)). Both types of functionals are special cases of the next item.
(g) A continuous linear mapping \(\mathcal{T}\colon(D[0,T],|\cdot|_\infty)\to\mathbb{R}\) has the form \[\begin{align} \mathcal{T}({\boldsymbol{x}})=\int_{[0,T]}{\boldsymbol{x}}_t dm(t)+\sum_{t\in [0,T]}\Delta{\boldsymbol{x}}_tM(t), \end{align}\] where \(m\) is a finite signed Borel measure and \(M\colon[0,T]\to\mathbb{R}\) is such that \(\sum_{t\in[0,T]}|M(t)|<\infty\). This decomposition is from [41], where the measurability of \(\mathcal{T}\) w.r.t. \((D[0,T],\mathcal{B}(D[0,T]))\) is also shown. Therefore, it satisfies our assumptions. Note that when inserting the solution process \(X\), the second summand containing \(M\) is zero a.s., since a Lévy process a.s. has no jump at any fixed time.
(h) By the above example, also \({\boldsymbol{x}}\mapsto G_1\bigg(\int_0^TG_2({\boldsymbol{x}}_t)dm(t)\bigg)\) for Lipschitz real functions \(G_1,G_2\) and \(m\) as in the previous example is a functional matching our conditions.
(i) All the above examples that are formulated in \(g({\boldsymbol{x}})\) can naturally be turned into functions of type \(f(s,{\boldsymbol{x}},y,z,u)\) having a similar structure in the \({\boldsymbol{x}}\)-variable.
Under our assumptions we have exponential moments for \(|X|_\infty\).
Lemma 4. If Assumption 1 holds then for all \(c>0\) the solution \((X_t)_{t\in [0,T]}\) to 2 satisfies \[\mathbb{E}\exp( c|X|_\infty ) < \infty.\]
Proof. We first investigate \(\mathbb{E}\sup_{0\leq t\leq T}e^{cX_t}\) for all \(c>0\). As \(\mathbb{E}\sup_{0\leq t\leq T}e^{-cX_t}\) can be treated in the same way, the finiteness for the modulus follows.
Itô’s formula (see [42]; note that the assumptions of [42] are satisfied thanks to our Assumption 1 (3)) yields \[\begin{align} &e^{c{X}_t}=e^{cx}+\int_0^t e^{c{X}_{s}}\Big(cb(s,X^{s})+\frac{c^2}{2}\sigma(s)^2\Big)ds+\int_0^te^{c{X}_{s}}c\sigma(s)dW_s\\ &\quad+\int_{]0,t]\times\mathbb{R}_0}\bigg(e^{c{X}_{s-}+c\rho(s,z)}-e^{c{X}_{s-}} \bigg) \tilde{\mathcal{N}}(ds, dz)\\ &\quad+\int_0^t\int_{\mathbb{R}_0}\bigg(e^{c{X}_{s-}+c\rho(s,z)}-e^{c{X}_{s-}}-c \rho(s,z)e^{c{X}_{s-}}\bigg)\nu(dz)ds\\ &=e^{cx}+\int_0^t e^{c{X}_{s}}\Big(cb(s, X^{s})+\frac{c^2}{2}\sigma(s)^2\Big)ds+\int_0^te^{c{X}_{s}}c\sigma(s)dW_s\\ &\quad+\int_{]0,t]\times\mathbb{R}_0}e^{c{X}_{s-}}\bigg(e^{c\rho(s,z)}-1\bigg) \tilde{\mathcal{N}}(ds, dz)\\ &\quad+\int_0^t\int_{\mathbb{R}_0}e^{c{X}_{s-}}\bigg(e^{c \rho(s,z)}-1-c \rho(s,z)\bigg)\nu(dz)ds. \end{align}\] So, the process \(\mathfrak{X}_t=e^{c{X}_t}\) solves the SDE (for integration w.r.t. \(dt\) and \(dW_t\) we may use \(\mathfrak{X}_{t}\) instead of \(\mathfrak{X}_{t-}\)) \[\begin{align} d\mathfrak{X}_t = \mathfrak{b}(t,\mathfrak{X}_{t})dt+\mathfrak{s}(t,\mathfrak{X}_{t})dW_t+\int_\mathbb{R}\mathfrak{c}(t,\mathfrak{X}_{t-},z)\tilde{\mathcal{N}}(dt,dz),\quad \mathfrak{X}_0=e^{cx} \end{align}\] with the coefficients \[\begin{align} &\mathfrak{b}(t,x)=x\bigg(cb(t,X^t)+\frac{c^2}{2}\sigma(t)^2+\int_{\mathbb{R}_0}\bigg(e^{c\rho(t,z)}-1-c\rho(t,z)\bigg)\nu(dz)\bigg),\\ & \mathfrak{s}(t,x)=xc\sigma(t),\\ &\mathfrak{c}(t,x,z)=x\bigg(e^{c\rho(t,z)}-1\bigg). \end{align}\] By Assumption 1, we have for some \(K>0\), \[\begin{align} |\mathfrak{b}(t,x)|&\leq \bigg[cK_b+\frac{c^2K_\sigma^2}{2}+\int_\mathbb{R}\bigg(e^{c\kappa_{\rho}(z)}-1-c\kappa_{\rho}(z)\bigg)\nu(dz)+cL_b|X^t|_\infty\bigg]|x|\\ & \le K(|X^t |_\infty+1)|x| \end{align}\] so that \[\begin{align} \mathfrak{X}_t \le & e^{cx}+\int_0^t K \mathfrak{X}_s\big( | \log(\mathfrak{X}^s)|_\infty+1\big) ds+ \int_0^t \mathfrak{s}(s,\mathfrak{X}_{s})dW_s\\ &+\int_{{]0,t]}\times{\mathbb{R}_0}}\mathfrak{c}(s,\mathfrak{X}_{s-},z)\tilde{\mathcal{N}}(ds,dz), \end{align}\] and further, estimating \(\mathfrak{X}\) by its supremum and using that \(x (|\log x|+1)\le 2(x+2)\log(x+2)\), \[\begin{align} \mathfrak{X}_t +2 \le &2+ e^{cx} +\int_0^t 2K ( |\mathfrak{X}^s+2|_\infty)\log(|\mathfrak{X}^s+2|_\infty) ds\\ &+ \int_0^t \mathfrak{s}(s,\mathfrak{X}_{s})dW_s +\int_{{]0,t]}\times{\mathbb{R}_0}}\mathfrak{c}(s,\mathfrak{X}_{s-},z)\tilde{\mathcal{N}}(ds,dz). \end{align}\] By a stochastic Bihari-LaSalle inequality (Gronwall-type Theorem 19) we have, setting \(\eta(x)=x| \log x|, c_0=1, A_t =2Kt\) and \(H_t =2+ e^{cx}\), \(M_t = \int_0^t \mathfrak{s}(s,\mathfrak{X}_{s})dW_s+\int_{ ]0,t] \times \mathbb{R}_0}\mathfrak{c}(s,\mathfrak{X}_{s-},z)\tilde{\mathcal{N}}(ds,dz)\) for any \(p \in (0,1)\) that \[\begin{align} \mathbb{E}\Big|G^{-1}\Big(G(|\mathfrak{X}+2|_\infty)-\frac{2K}{1-p}T\Big)\Big |^p\leq \frac{(2 +e^{cx})^p }{1-p}<\infty, \end{align}\] where \(G(x)=\int_r^x\frac{ds}{s\log(s)}=\log\log(x)-\log\log(r)\) for some \(r>1\), and thus \(G^{-1}(x)=e^{\log(r)e^x}\); hence \[G^{-1}\big(G(|\mathfrak{X}+2|_\infty)-2K(1-p)^{-1}T\big)=|\mathfrak{X}+2|_\infty^{e^{-2K(1-p)^{-1}T}}.\] Therefore, \[\mathbb{E}e^{cpe^{-2K(1-p)^{-1} T}\sup_{0\leq t\leq T }X_t} = \mathbb{E}|\mathfrak{X}^{pe^{-2K(1-p)^{-1}T}}|_\infty \leq \mathbb{E}|\mathfrak{X}+2|_\infty^{pe^{-2K(1-p)^{-1}T}}<\infty.\] Since \(c\) can be chosen arbitrarily large, the assertion follows. ◻
We use \((\Omega,\mathcal{F},\mathbb{P})\), \(W,\) \(\tilde{\mathcal{N}}\) etc as introduced in Section 2. Setting \[\mu(dx):=\delta_0(dx)+\nu(dx) \quad \text{and} \quad \mathbb{m}(dt,dx) :=(\lambda\otimes\mu) (dt,dx),\] we define an independently scattered random measure \(\mathcal{M}\) by \[\begin{align} \label{measureM} \mathcal{M}(dt,dx):= dW_t\delta_0(dx) +\tilde{\mathcal{N}}(dt,dx) \end{align}\tag{5}\] on sets \(B \!\in \!\mathcal{B}([0,T]\times\mathbb{R})\) with \(\mathbb{m}(B) < \infty\). It holds \(\mathbb{E}\mathcal{M}(B)^2 = \mathbb{m}(B).\) According to [43] there exists for any \(\xi \in L^2(\Omega,\mathcal{F},\mathbb{P})\) a unique chaos expansion \[\begin{align} \label{chaos} \xi=\sum_{n=0}^\infty I_n(\tilde{f}_n), \end{align}\tag{6}\] where \(f_n \in L^n_2:=L^2(([0,T]\times\mathbb{R})^n,\mathcal{B}(([0,T]\times\mathbb{R})^n), \mathbb{m}^{\otimes n}),\) and the function \(\tilde{f}_n((t_1,x_1),...,(t_n,x_n))\) is the symmetrization of \(f_n((t_1,x_1),...,(t_n,x_n))\) w.r.t. the \(n\) pairs of variables. The multiple integrals \(I_n\) are built with the random measure \(\mathcal{M}\) from 5 . For their definition and properties see [43] or [44]. Let \({\mathbb{D}_{1,2}}\) be the space of all random variables \(\xi \in L^2(\Omega,\mathcal{F},\mathbb{P})\) such that \[\begin{align} \|\xi\|^2_{{\mathbb{D}_{1,2}}}:= \sum_{n=0}^\infty (n+1)!\left\|\tilde{f}_n\right\|_{L^n_2}^2<\infty. \end{align}\] For \(\xi \in {\mathbb{D}_{1,2}}\) the Malliavin derivative is defined by \[D_{t,x}\xi:=\sum_{n=1}^\infty nI_{n-1}\left(\tilde{f}_n\left((t,x),\;\cdot\;\right)\right),\] for \(\mathbb{P}\otimes\mathbb{m}\)-a.a. \((\omega,t,x)\in\Omega\times{[0,T]}\times\mathbb{R}\). It holds \(D \xi \in L^2(\mathbb{P}\otimes\mathbb{m})\).
Defining \[\begin{align} \mathbb{D}_{1,2}^0&:= &\bigg \{\xi=\sum_{n=0}^\infty I_n(\tilde{f}_n) \in L^2(\Omega,\mathcal{F},\mathbb{P})\colon f_n \in L_2^n, n\in \mathbb{N},\\ && \quad \quad \quad \quad \quad \quad\quad \quad \quad\sum_{n=1}^\infty (n+1)! \int_0^T \|\tilde{f}_n((t,0),\cdot) \|_{L^{n-1}_2}^2 dt < \infty \bigg \} \end{align}\] and \[\begin{align} \mathbb{D}_{1,2}^{\mathbb{R}_0}&:= & \bigg \{\xi=\sum_{n=0}^\infty I_n(\tilde{f}_n) \in L^2(\Omega,\mathcal{F},\mathbb{P})\colon f_n \in L_2^n, n\in \mathbb{N}, \\ && \quad \quad \quad \sum_{n=1}^\infty (n+1)! \int_{[0,T]\times \mathbb{R}_0} \|\tilde{f}_n((t,x),\cdot) \|_{\mathrm{L}^{n-1}_2}^2 \mathbb{m}( dt,dx) < \infty \bigg \}. \end{align}\] we get that \({\mathbb{D}_{1,2}}= \mathbb{D}_{1,2}^0\cap \mathbb{D}_{1,2}^{\mathbb{R}_0}.\)
The next result was proved in [45] in the Wiener space setting. We use similar ideas to show that it holds also for \(\mathbb{D}_{1,2}^0\) and \(\mathbb{D}_{1,2}^{\mathbb{R}_0}.\)
Lemma 1 ([45]). Let
(i) \((\xi_N)_{N=1}^\infty \subseteq \mathbb{D}_{1,2}^0\) with \(\sup_N \mathbb{E}\int_0^T| D_{t,0} \xi_N| ^2 dt < \infty,\)
(ii) or \((\xi_N)_{N=1}^\infty \subseteq \mathbb{D}_{1,2}^{\mathbb{R}_0}\) with \(\sup_N \mathbb{E}\int_{[0,T]\times \mathbb{R}_0} |D_{t,x} \xi_N| ^2\mathbb{m}( dt,dx) < \infty.\)
If \(\xi_N \to \xi\) in \(L^2(\Omega,\mathcal{F},\mathbb{P}),\) then assumption (i) implies that \(\xi \in \mathbb{D}_{1,2}^0\), and assumption (ii) implies that \(\xi \in \mathbb{D}_{1,2}^{\mathbb{R}_0}.\)
Proof. Assume that \(\xi_N=\sum_{n=0}^\infty I_n(\tilde{f}^N_n).\) Then \(\mathbb{E}|\xi_N|^2 = \sum_{n=0}^\infty n!\big\|\tilde{f}^N_n\big\|_{L^n_2}^2\) and \[D_{t,x} \xi_N = \sum_{n=1}^\infty nI_{n-1}\left(\tilde{f}^N_n\left((t,x),\;\cdot\;\right)\right).\] To show the assertion under assumption (i) we put \(A=[0,T]\times \{0\}\). Hence by the orthogonality of the \(I_{n-1}\left(\tilde{f}^N_n\left((t,x),\;\cdot\;\right)\right)\), \[\begin{align} \mathbb{E}\int_0^T| D_{t,0} \xi_N| ^2 dt &=\mathbb{E}\int_A |D_{t,x}\xi_N|^2 \mathbb{m}( dt,dx) \\ &= \mathbb{E}\! \int_{[0,T]\times \mathbb{R}} \! \big | \sum_{n=1}^\infty nI_{n-1}\left(\tilde{f}^N_n\left((t,x),\;\cdot\;\right)\right) \mathbb{1}_A(t,x) \big |^2 \! \mathbb{m}( dt,dx) \\ &= \sum_{n=1}^\infty n n! \int_0^T \|\tilde{f}^N_n((t,0),\cdot) \|_{L^{n-1}_2}^2 dt. \end{align}\] By assumption, we have that for all \(K, N\in \mathbb{N}\) \[\sum_{n=1}^K n n! \int_0^T \|\tilde{f}^N_n((t,0),\cdot) \|_{L^{n-1}_2}^2 dt \le \mathbb{E}\int_0^T| D_{t,0} \xi_N| ^2 dt \le C.\] On the other hand, \(\int_0^T \|\tilde{f}^N_n((t,0),\cdot) \|_{L^{n-1}_2}^2 dt \to \int_0^T \|\tilde{f}_n((t,0),\cdot) \|_{L^{n-1}_2}^2 dt\) since \(\xi_N \to \xi\) in \(L^2(\Omega,\mathcal{F},\mathbb{P})\). Therefore, for all \(K\in \mathbb{N}\) \[\sum_{n=1}^K n n! \int_0^T \|\tilde{f}_n((t,0),\cdot) \|_{L^{n-1}_2}^2 dt \le C\] which means \[\mathbb{E}\int_0^T| D_{t,0} \xi| ^2 dt \le C.\] By replacing \(A=[0,T]\times \{0\}\) by \(A= [0,T]\times \mathbb{R}_0\) one shows in the same way that assumption (ii) implies \(\xi \in \mathbb{D}_{1,2}^{\mathbb{R}_0}.\) ◻
If \(W\) and \(W'\) are independent Brownian motions on \((\Omega,\mathcal{F},\mathbb{P})\) we define a ‘coupled’ Brownian motion \(W^\varphi\) by setting \[W^\varphi := \sqrt{1-\varphi^2} W + \varphi W',\] where \(\varphi \in [0,1]\) is a fixed constant. If \(\xi \in L^2(\Omega,\mathcal{F},\mathbb{P})\) is given as in 6 then \(\xi^\varphi\) is built by the same kernels but with respect to the random measure \[\mathcal{M}^\varphi(dt,dx):= dW^\varphi_t\,\delta_0(dx) +\tilde{\mathcal{N}}(dt,dx).\] Using a functional representation of \(\xi \in L^2(\Omega,\mathcal{F},\mathbb{P})\), i.e. \(\xi=\Xi(\mathcal{L})\), with measurable \(\Xi\colon D[0,T]\to\mathbb{R}\), and the Lévy - Itô decomposition \(\mathcal{L}=\varsigma W+ \mathcal{L}-\varsigma W\), the ‘coupled’ random variable is then given by \(\xi^\varphi=\Xi(\varsigma W^\varphi+\mathcal{L}-\varsigma W)\) (see [46]).
In [28] the following criterion for \(\mathbb{D}_{1,2}^0\) was shown (the result is also true for Hilbert space valued random variables):
Theorem 5 ( [28] ). Let \(\xi \in L^2(\Omega,\mathcal{F},\mathbb{P})\). Then \[\xi \in \mathbb{D}_{1,2}^0 \iff \sup_{0<\varphi\le 1} \frac{\|\xi -\xi^\varphi\|_{L^2}}{\varphi} < \infty.\] Moreover, if \(\xi \in \mathbb{D}_{1,2}^0\) then one has \[\frac{1}{2} \left (\mathbb{E}\int_0^T| D_{t,0}\xi |^2 dt \right)^\frac{1}{2} \le \sup_{0<\varphi\le 1} \frac{\|\xi -\xi^\varphi\|_{L^2}}{\varphi} \le \left (\mathbb{E}\int_0^T| D_{t,0}\xi |^2 dt \right )^\frac{1}{2} .\]
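As a simple illustration of how this criterion is used below, take \(\xi=W_T\), so that \(\xi^\varphi=W^\varphi_T\); using \(1-\sqrt{1-\varphi^2}\le \varphi\) we get \[\|W_T-W^\varphi_T\|_{L^2} = \Big(\big((1-\sqrt{1-\varphi^2})^2+\varphi^2\big)T\Big)^{\frac{1}{2}} \le \sqrt{2T}\,\varphi,\] so the supremum in Theorem 5 is finite and \(W_T\in\mathbb{D}_{1,2}^0\). The same mechanism, with the estimate 7 replacing the explicit computation, yields \(X_t\in\mathbb{D}_{1,2}^0\) in the proof of Proposition 9 below.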
If \(g: D{[0,T]} \to \mathbb{R}\) is \(\mathcal{G}_T\)-measurable (recall 1 ), one can compute the Malliavin derivative in the jump direction, i.e. \(D_{t,x}\) for \(x \neq 0\), using the following result:
Lemma 6 ([46], [27] ). If \(g: D{[0,T]} \to \mathbb{R}\) is \(\mathcal{G}_T\) measurable and \(g(\mathcal{L}) = g(( \mathcal{L}_t)_{t \in [0,T]}) \in L^2(\Omega,\mathcal{F},\mathbb{P})\) then \[\begin{align} g(\mathcal{L}) \in \mathbb{D}_{1,2} ^{\mathbb{R}_0} \iff g(\mathcal{L}+x\mathbb{1}_{[t,T]})-g( \mathcal{L}) \in L^2(\mathbb{P}\otimes\mathbb{m}), \end{align}\] and it holds then for \(\mathbb{P}\otimes\mathbb{m}\)-a.e. \((t,x) \in [0,T]\times \mathbb{R}_0\) that \[\label{xieq} D_{t,x} g(\mathcal{L}) = g( \mathcal{L}+x\mathbb{1}_{[t,T]})-g(\mathcal{L}) .\qquad{(4)}\]
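For example, assuming \(\int_{\{|x|>1\}}x^2\nu(dx)<\infty\) so that \(\mathcal{L}_T\in L^2(\Omega,\mathcal{F},\mathbb{P})\), the choice \(g(\mathcal{L})=\mathcal{L}_T\) gives \(g(\mathcal{L}+x\mathbb{1}_{[t,T]})-g(\mathcal{L})=x\in L^2(\mathbb{P}\otimes\mathbb{m})\), hence \[D_{t,x}\mathcal{L}_T = x \quad \text{for } \mathbb{m}\text{-a.e. } (t,x)\in[0,T]\times\mathbb{R}_0.\]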
Corollary 7. Let \(X\) be a strong solution to 2 and assume that \(X_T \in \mathbb{D}_{1,2} ^{\mathbb{R}_0}\). If \(F: \mathbb{R}\to \mathbb{R}\) is Borel measurable and \(F(X_T + D_{t,x}X_T) -F(X_T) \in L^2(\mathbb{P}\otimes\mathbb{m})\), then for \(\mathbb{P}\otimes\mathbb{m}\)-a.e. \((t,x) \in [0,T]\times \mathbb{R}_0\) \[D_{t,x} F(X_T) = F(X_T + D_{t,x}X_T) -F(X_T).\]
Proof. If \(X\) is a strong solution we can represent \(X_T\) as \(X_T = G(( \mathcal{L}_t)_{t\in [0,T]})\) a.s., where \(G:D{[0,T]} \to \mathbb{R}\) is \(\mathcal{G}_T\) measurable (see [47]), and get by ?? in Lemma 6 \[\begin{align} & D_{t,x} F(X_T) = D_{t,x} F\Big(G ( \mathcal{L})\Big)= F(G( \mathcal{L}+x\mathbb{1}_{[t,T]}))-F(G( \mathcal{L}))\\ &=F( G(\mathcal{L})+D_{t,x}G( \mathcal{L}) )-F(G(\mathcal{L})) = F(X_T +D_{t,x} X_T) -F(X_T). \end{align}\] ◻
We recall another result we will use:
Lemma 8 ([27] ). Let \(\Lambda\in \mathcal{G}_T\) be a set with \(\mathbb{P}\left(\left\{ \mathcal{L}\in \Lambda\right\}\right)=0\). Then \[\mathbb{P} \otimes\mathbb{m}\left(\left\{(\omega,s,x)\in \Omega\times{[0,T]}\times\mathbb{R}_0:\mathcal{L}(\omega)+x\mathbb{1}_{[s,T]}\in \Lambda\right\}\right)=0.\]
Proposition 9. Let Assumption 1 hold. Then for the solution \(X\) to 2 it holds that \(\, X_t \in \mathbb{D}_{1,2} \,\) for all \(t \in [0,T].\) Moreover,
(1) for \(\nu\)-a.a. \(x\in \mathbb{R}_0\) and for a.a. \(s\leq t\leq T\) we have \[\begin{align} D_{s,x} X_t &=& \rho(s,x) + \int_s^t \big(b(r,X^r+ D_{s,x} X^r ) - b(r,X^r )\big)dr, \end{align}\]
(2) and for \(x = 0\) and a.a. \(s \leq t \leq T\), \[\begin{align} D_{s,0} X_t &=& \sigma(s) + \int_s^t D_{s,0} \, b(r,X^r) dr. \end{align}\] Moreover, we have for a.a. \(s \leq t \leq T\) \[\label{upperbound-derivative-X} |D_{s,x} X_t| \le e^{L_b (t-s)} (\kappa_\rho(x) \mathbb{1}_{x \neq 0} + K_{\sigma} \mathbb{1}_{x = 0}).\qquad{(5)}\]
Proof. \(\boxed{ \, X_t \in \mathbb{D}_{1,2}^0}:\)
By Theorem 5 we need to estimate \(\|X_t -X^\varphi_t\|_{L^2},\) where \[\begin{align}
X^\varphi_t &= \, x + \int_0^t b(s,X^{s,\varphi}) ds + \int_0^t \sigma(s) d(\sqrt{1-\varphi^2} W_s + \varphi W'_s) \\
& \quad + \int_{{]0,t]}\times{\mathbb{R}_0}} \rho(s,v) \tilde{\mathcal{N}}(ds,dv).
\end{align}\] For later use we compute here \(\| |X^t -X^{t,\varphi}|_\infty \|_{L^p}\) for \(p\ge 2.\) By the Burkholder-Davis-Gundy inequality, \[\begin{align}
\||X^t -X^{t,\varphi} |_\infty\|_{L^p} &\le \int_0^t \|b(s,X^s) - b(s,X^{s,\varphi})\|_{L^p} ds \notag\\
& \quad + C_p (1- \sqrt{1-\varphi^2} +\varphi ) \left ( \int_0^t \sigma(s)^2 ds \right)^\frac{1}{2}.
\end{align}\] Using \(\|b(s,X^s) - b(s,X^{s,\varphi})\|_{L^p}\le L_b \||X^s - X^{s, \varphi}|_\infty\|_{L^p}\), Gronwall’s inequality and \(1- \sqrt{1-\varphi^2} \le \varphi\) imply
\[\begin{align} \label{the-p-estimate}
\||X^t -X^{t,\varphi} |_\infty\|_{L^p} &\le c\varphi
\end{align}\tag{7}\] which in particular means \(\, X_t \in \mathbb{D}_{1,2}^0\) if \(p=2.\)
\(\boxed{\, X_t \in \mathbb{D}_{1,2}^{\mathbb{R}_0}}:\)
If \(X\) is a strong solution [47] gives us the representation \(X_t=G(t,\mathcal{L}^{ t
})\) for all \(t \in [0,T],\) a.s. where \(G(t, \cdot) :D{[0,T]} \to \mathbb{R}\) is \(\mathcal{G}_t\)-measurable. Then by Lemma 6 we need \((\omega,r,v) \mapsto ( G(t, \mathcal{L}^t + v\mathbb{1}_{[r,t]} )- G(t, \mathcal{L}^t ) ) \in L^2(\mathbb{P}\otimes\mathbb{m}).\) Abbreviating \(\mathcal{X}^{r,v}_t := G(t, \mathcal{L}^t +
v\mathbb{1}_{[r,t]} )\) we conclude from 2 that \[\begin{align} \mathcal{X}^{r,v}_t &= x+ \int_0^t b\left (s,(\mathcal{X}^{r,v})^s \right ) ds + \int_0^t \sigma(s) dW_s \\
& \quad + \rho(r,v)\mathbb{1}_{{]0,t]}\times{\mathbb{R}_0}}(r,v)+ \int_{{]0,t]}\times{\mathbb{R}_0}} \rho(s,z) \tilde{\mathcal{N}}(ds,dz) ,
\end{align}\] where we used Lemma 6 again and the fact that \(D_{r,v} \int_{{]0,t]}\times{\mathbb{R}_0}} \rho(s,z) \tilde{\mathcal{N}}(ds,dz) = \rho(r,v)\) since the integral belongs to the first chaos. Approximating
\(\mathcal{X}^{r,v}_t\) as well as \(X_t\) via Picard iteration starting with \(\mathcal{X}^{r,v,0}_t := 0\) and \(X^0_t=0\)
one derives then \[\begin{align} \mathcal{X}^{r,v,n+1}_t - X_t^{n+1} =& \int_0^t \Big( b\left (s,(\mathcal{X}^{r,v,n})^s \right ) - b \left (s,(X^{n})^s\right ) \Big) ds \\ &+
\rho(r,v)\mathbb{1}_{{]0,t]}\times{\mathbb{R}_0}}(r,v),
\end{align}\] and by the Burkholder-Davis-Gundy inequality, \[\begin{align}
& \| (\omega,r,v) \mapsto |(\mathcal{X}^{r,v,n+1})^t - (X^{n+1})^t|_\infty ) \|_{ L^2(\mathbb{P}\otimes\mathbb{m})} \\
& \le \int_0^t \| (\omega,r,v) \mapsto ( b\left (s,(\mathcal{X}^{r,v,n})^s \right ) - b \left (s,(X^{n})^s\right ) ) \|_{ L^2(\mathbb{P}\otimes\mathbb{m})} ds + C_2 \kappa_{\rho,2} \\
&\le L_b \int_0^t \| (\omega,r,v) \mapsto |(\mathcal{X}^{r,v,n})^s - (X^{n})^s |_\infty \|_{ L^2(\mathbb{P}\otimes\mathbb{m})} ds + C_2 \kappa_{\rho,2}.
\end{align}\] Since \(\| (\omega,r,v) \mapsto |(\mathcal{X}^{r,v,1})^t - (X^{1})^t|_\infty ) \|_{ L^2(\mathbb{P}\otimes\mathbb{m})} \le C_p\, \kappa_{\rho,2}<\infty\) one can derive by Gronwall’s inequality from
this relation that \[\| (\omega,r,v) \mapsto (G(t, \mathcal{L}^t + v\mathbb{1}_{[r,t]} ) - G(t, \mathcal{L}^t )) \|_{ L^2(\mathbb{P}\otimes\mathbb{m})}< \infty.\] For \(x\neq 0\) we get
the asserted bound, since by Assumption 1 \[\begin{align}
|D_{s,x} X_t| & \le |D_{s,x} X^t|_\infty \le \kappa_\rho(x) + \int_s^t L_b |D_{s,x} X^r |_\infty dr,
\end{align}\] from which the assertion follows by Gronwall’s inequality. ◻
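As a quick consistency check of the bound above (a special case, not needed later): if \(b\equiv 0\), the representations in Proposition 9 give \(D_{s,x}X_t=\rho(s,x)\) for \(x\neq 0\) and \(D_{s,0}X_t=\sigma(s)\), so that \[|D_{s,x} X_t| \le \kappa_\rho(x) \mathbb{1}_{x \neq 0} + K_{\sigma} \mathbb{1}_{x = 0},\] which is exactly the asserted estimate with \(L_b=0\).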
We prove a representation for the Malliavin derivative \(D_{t,0}\) of the generator of the BSDE. In our setting the generator is a function depending pathwise on the process \(X\) and on random variables. The proof uses a characterisation of \(\mathbb{D}_{1,2}^0\) due to Sugita [29]; it is postponed to Appendix 8.2.
Lemma 10. Suppose Assumption 1 holds and let \(X\) be the solution to 2 .
(1) For a.e. \(s \in [0,t]\) it holds \(|D_{s,0} X_t| \le e^{L_b t} K_{\sigma}.\)
(2) Assume that \(\mathbf{f}: [0,T]\times D[0,T] \times \mathbb{R}^d \to \mathbb{R}\) is measurable and satisfies \[\begin{align} | \mathbf{f}(t,0,0)|& \le k_f(t),\\ | \mathbf{f}(t,{\boldsymbol{x}}, y)-\mathbf{f}(t,{\boldsymbol{x}}', y) | &\le L_{\boldsymbol{x}}\,\, (c+ \frac{\beta}{2}(|{\boldsymbol{x}}|^ {\tt r}_\infty + |{\boldsymbol{x}}'|^ {\tt r}_\infty)) |{\boldsymbol{x}}-{\boldsymbol{x}}'|_\infty, \\ | \mathbf{f}(t,{\boldsymbol{x}}, y)- \mathbf{f}(t,{\boldsymbol{x}}, y') | &\le \sum_{k=1}^d L_{y_k} \, \, |y_k- y_k'|, \end{align}\]
for some \(L_{\boldsymbol{x}}, L_{y_1},...,L_{y_d} \ge 0\) and \(k_f \in L^1([0,T]).\) If \(Y_1, ...,Y_d \in \mathbb{D}_{1,2}^{0},\) then there exist measurable \(G_1, ... ,G_d : \Omega \times [0,T] \to \mathbb{R}\) which are bounded, \(|G_k| \le L_{y_k},\) such that it holds a.s. and for a.a. \(s\in [0,T]\) that \[\begin{align} \label{chain-rule} D_{s,0} \mathbf{f}\left( t, X^t, (Y_1, ..., Y_d) \right) &= ( D_{s,0} \mathbf{f}( t, X^t,y) ) |_{y=(Y_1, ... ,Y_d )} \notag\\ &\quad + G_1(t) \, D_{s,0} Y_1 +... + G_d(t) \, D_{s,0} Y_d \end{align}\] and, for all \(y \in \mathbb{R}^d\), \[\begin{align} \label{Df-estimate} \| D_{\cdot,0} \mathbf{f}( t, X^t,y) \|_{L^\infty[0,T]} \le L_{\boldsymbol{x}}K_\sigma \big ( c+ \beta |X^t|^ {\tt r}_\infty \big) e^{L_b T}. \end{align}\]
We show now that the truncated version of the BSDE 3 is Malliavin differentiable.
For \(M >0\) let \(b_M :\mathbb{R}\to [-M,M]\) be a smooth monotone function such that \(0\le b'_M(x) \le 1\) and \[\begin{align}
b_M(x) := \left \{ \begin{array}{cl} M, &\,\, x>M+1,\\
x, & \,\, |x| \le M -1, \\
-M, &\,\, x< -M-1.
\end{array} \right .
\end{align}\] Note that \(|b_M(x)|\le |x|\wedge M\). By a slight abuse of notation, we also define, for all \({\boldsymbol{x}}\in D[0,T]\), \(b_M({\boldsymbol{x}})\) as the element of \(D[0,T]\) given by \(t \mapsto b_M({\boldsymbol{x}}_t)\).
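One possible way to obtain such a truncation function (a construction sketch; any \(b_M\) with the stated properties works) is to mollify the hard truncation \(c_M(x):=(-M)\vee(x\wedge M)\): for \(M\ge 1\) set \[b_M:=c_M*\phi, \qquad b_M(x)=\int_{\mathbb{R}}c_M(x-y)\phi(y)\,dy,\] where \(\phi\) is a smooth, symmetric probability density supported in \([-\tfrac{1}{2},\tfrac{1}{2}]\). Then \(b_M\) is smooth and non-decreasing with \(0\le b_M'\le 1\), \(b_M(x)=x\) for \(|x|\le M-\tfrac{1}{2}\), \(b_M(x)=\pm M\) for \(\pm x\ge M+\tfrac{1}{2}\), and \(|b_M(x)|\le |x|\wedge M\) because \(b_M\) is odd and \(c_M(u)\le u\) for \(u\ge -M\).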
We set \[\begin{align} \label{data-cut-off} g^M({\boldsymbol{x}})&:= g(b_M({\boldsymbol{x}})) \notag \\ f^M(s, {\boldsymbol{x}},y, z, u )&:= f\Big( s, b_M({\boldsymbol{x}}) ,y,
b_M(z), b_M(u) \Big).
\end{align}\tag{8}\]
Theorem 5 implies Malliavin differentiability of the truncated terminal condition:
Corollary 11. Let the Assumptions 1 and 2 hold. Then it holds for \(\varphi \in [0,1]\) that \[\|g^M(X)- g^M(X^\varphi) \|_{L^2} \le c L_{g^M} \, \varphi,\] where \(L_{g^M} = (c + \alpha |M|^{\tt r}).\)
Proof. By the Burkholder-Davis-Gundy inequality we get, similarly to the proof of Proposition 9, that \(\| |X-X^\varphi |_\infty\|_{L^2} \le c\varphi.\) Then by Assumption 2 (i), \[\begin{align} \|g^M(X)- g^M(X^\varphi) \|_{L^2} &\le (c + \alpha |M|^{\tt r})\| |b_M(X)-b_M(X^\varphi) |_\infty\|_{L^2} \le c L_{g^M} \, \varphi. \end{align}\] ◻
Proposition 12. Let the Assumptions 1 and 2 hold. There exists a unique solution \((Y^M,Z^M,U^M) \in \mathcal{S}^2 \times L^2 (W)\times L^2(\tilde{\mathcal{N}})\) to \[\begin{align} \label{BSDE-truncated} Y^M_t= \,&g^M(X)+\int_t^T f^M \left( s, X^s,\Theta^M_s \right)ds \notag \\ & - \int_t^T Z^M_{s} dW_s -\int_{{]t,T]}\times{\mathbb{R}_0}}U^M_{s}(v) \tilde{\mathcal{N}}(ds,dv) \end{align}\qquad{(6)}\] where \(\Theta^M_s= (Y^M_s, Z^M_s, H^M_s) ,\) and \[\begin{align} \label{the-H} H^M_s = \int_{\mathbb{R}_0} h(s, b_M ( U^M_s(v))) \kappa (v) \nu(dv) \end{align}\qquad{(7)}\] Moreover, the solution processes \((Y^M,Z^M, U^M)\) are Malliavin differentiable, i.e. \[Y^M, Z^M \in L^2([0,T];{\mathbb{D}_{1,2}}), \quad U^M\in L^2([0,T]\times\mathbb{R}_0;{\mathbb{D}_{1,2}}),\] and for \(t \le u\le T\) we have that \[\begin{align} \label{Diff-BSDE} D_{t,x} Y^M_u= \,& D_{t,x}g^M(X)+\int_u^TD_{t,x} f^M \left( s, X^s, \Theta^M_s \right)ds \notag \\ & - \int_u^TD_{t,x} Z^M_{s} dW_s -\int_{{]u,T]}\times{\mathbb{R}_0}} D_{t,x} U^M_{s}(v) \tilde{\mathcal{N}}(ds,dv). \end{align}\qquad{(8)}\]
Proof. The existence and uniqueness of the solution is clear. Since we are in the Lipschitz case with bounded terminal condition, \[\begin{align} Y^M, Z^M \in L^2([0,T]; \mathbb{D}_{1,2}^{\mathbb{R}_0}), \quad U^M\in L^2([0,T]\times\mathbb{R}_0; \mathbb{D}_{1,2}^{\mathbb{R}_0}) \end{align}\] is shown in [23]. However, for \(\mathbb{D}_{1,2}^0\) an additional regularity condition from [23] is used, which we do not want to assume here. Instead we will repeatedly apply Theorem 5. We consider \[\begin{align} Y^{M,\varphi}_t= \,&g^M((X)^\varphi)+\int_t^T f^M \left( s, (X^s)^\varphi, \Theta^{M,\varphi}_s \right)ds \notag \\ & - \int_t^T Z^{M,\varphi}_{s} d (\sqrt{1-\varphi^2} W_s + \varphi W'_s) -\int_{{]t,T]}\times{\mathbb{R}_0}}U^{M,\varphi}_{s}(v) \tilde{\mathcal{N}}(ds,dv). \end{align}\]
We introduce the notation \(( \Delta \xi, \Delta Y_t, \Delta Z_t, \Delta U_t ):=( g^M(X)-g^M((X)^\varphi), Y^M_t - Y^{M,\varphi}_t, Z^M_t - Z^{M,\varphi}_t,U^M_t - U^{M,\varphi}_t).\) From [48] and [3] one concludes an a priori estimate: there is a \(C>0\) such that for all \(t\in [0,T]\), \[\begin{align} \label{apriori-levy-Ito} &\mathbb{E}\sup_{s\in[t,T]}|\Delta Y_s|^2+ (1-\sqrt{1-\varphi^2}) \mathbb{E}\int_t^T|Z_s^{ M}|^2ds + \mathbb{E}\int_t^T|\Delta Z_s|^2ds \notag \\ & \quad+\mathbb{E}\int_t^T \|\Delta U_s\|_{L^2(\nu)}^2ds \notag\\ &\leq C\mathbb{E}\bigg[|\Delta \xi|^2+\bigg(\int_t^T |f^M \left( s, X^s, \Theta^M_s \right)- f^M \left( s, (X^s)^\varphi,\Theta^M_s \right)|ds\bigg)^{2}\bigg].\nonumber \\ &\leq C( L_{g^M}^2 + (T-t)^2 L_{ f^M}^2) \,\, c^2\, \varphi^2 \end{align}\tag{9}\] with \(\Theta^M_s:= (Y^M_s, Z^M_s, H^M_s ).\) From this we immediately derive that \[Y^M_t , \,\,\, \int_t^T Z^M_{s} dW_s , \,\,\, \int_{{]t,T]}\times{\mathbb{R}_0}}U^M_{s}(v) \tilde{\mathcal{N}}(ds,dv) \in \mathbb{D}_{1,2}^0.\] We also have \(Z^M \in L_2([0,T]; \mathbb{D}_{1,2}^{0})\) since \[\int_t^T \|Z^M_s - Z^{M,\varphi}_s\|^2_{L^2} ds = \int_t^T \mathbb{E}|\Delta Z_s|^2ds\] so that we again can use 9 and Theorem 5. By the same argument we also have \(U^M\in L^2([0,T]\times\mathbb{R}_0;\mathbb{D}_{1,2}^{0})\). From the last inequality in 9 one also concludes that \(\int_t^T f^M \left( s, X^s, \Theta^M_s \right)ds \in \mathbb{D}_{1,2}^0.\) By [38] we can interchange \(D_{t,x}\) with integrals so that \[\begin{align} D_{t,x} Y^M_u= \,& D_{t,x}g^M(X)+\int_u^TD_{t,x} f^M \left( s, X^s, \Theta^M_s \right)ds \notag \\ & - \int_u^TD_{t,x} Z^M_{s} dW_s -\int_{{]u,T]}\times{\mathbb{R}_0}} D_{t,x} U^M_{s}(v) \tilde{\mathcal{N}}(ds,dv). \end{align}\] ◻
Based on a comparison theorem, we get a first preliminary bound which depends on the truncation level \(M\).
Proposition 13. Suppose Assumptions 1 and 2 hold. For fixed \(M>0\) there is a constant \(a_0^M>0\) such that \(\mathbb{P}\) -a.s. we have that for \(\lambda\)-a.e. \(t \in [0,T]\) \[|Z^M_t| \le a_0^M\] and for \(dt\otimes \nu\) a.e. \((t,x) \in [0,T]\times \mathbb{R}_0\) \[|U^M_t(x)| \le \kappa_{\rho}(x)a_0^M.\]
Proof. By Proposition 12 we know that \((Y^M,Z^M,U^M)\) is Malliavin differentiable, and we have the representation of the Malliavin derivatives given there.
Since \((\lim_{u \downarrow t} D_{t,0} Y^M_u)_{t\in [0,T]}\) and \(( \lim_{u \downarrow t} D_{t,x} Y^M_u)_{t\in [0,T]}\) for \(x\neq 0\) have càdlàg adapted
versions, we get by [49] that the predictable projections coincide with the processes themselves up to sets of measure \(dt \otimes
\mathbb{P}\) zero, which implies \[\begin{align} \label{ZandUlimDY} Z^M_t= \lim_{u \downarrow t} D_{t,0} Y^M_u \,\, a.s. \quad \text{and} \quad U^M_t(x) = \lim_{u \downarrow t}
D_{t,x} Y^M_u\,\, a.s.
\end{align}\tag{10}\] for \(\nu\) a.a. \(x \in \mathbb{R}_0.\)
By assumption we have that \[|g^M({\boldsymbol{x}})- g^M({\boldsymbol{x}}')| \le ( c +\alpha M^{\tt r}) \, d_{J_1}({\boldsymbol{x}},{\boldsymbol{x}}').\] Then Lemma 10 (seeing \(g^M\) as a special case of \(\mathbf{f}\)) implies \[\begin{align} |D_{t,x}
g^M(X)|& \le( c +\alpha M^{\tt r}) |D_{t,x } X|_\infty \le ( c +\alpha M^{\tt r}) e^{L_b T} (\kappa_\rho(x) \mathbb{1}_{x \neq 0} + K_{\sigma} \mathbb{1}_{x = 0}) \\ &=: \xi_{M,x}
\end{align}\] In the same way we get setting \(H(s,{\boldsymbol{u}}) := \int_{\mathbb{R}_0} h(s, b_M ( {\boldsymbol{u}}(v))) \kappa (v) \nu(dv)\) from \[\begin{align}
& | f^M(s,{\boldsymbol{x}}^s, y,z, H(s,{\boldsymbol{u}}) ) - f^M(s,({\boldsymbol{x}}')^s, y',z', H(s,{\boldsymbol{u}}') ) | \\ & \le (c +\beta M^{\tt r}) \, d_{J_1}({\boldsymbol{x}},{\boldsymbol{x}}') + L_{f ,{\rm y}} |y-y'|
+ (c +\gamma M^\ell)|z-z'| \\ &\quad + (c+ \gamma M^{m_1}) \left | \int_{\mathbb{R}_0} ( h(s, b_M ( {\boldsymbol{u}}(v)))-h(s, b_M ( {\boldsymbol{u}}'(v)))) \kappa (v) \nu(dv) \right | \\ & \le (c +\beta M^{\tt r}) \,
d_{J_1}({\boldsymbol{x}},{\boldsymbol{x}}') + L_{f ,{\rm y}} |y-y'| + (c +\gamma M^\ell)|z-z'| \\ &\quad + (c+ \gamma M^{m_1}) (c +\gamma M^{m_2}) \|{\boldsymbol{u}}-{\boldsymbol{u}}'\|_{L^2(\nu)} \, \| \kappa (v)\|_{L^2(\nu)}
\end{align}\] that \[|D_{t,x} f^M(s,X^s, y,z, H(s,{\boldsymbol{u}}) )| \le (c +\beta M^{\tt r}) e^{L_b T} (\kappa_\rho(x) \mathbb{1}_{x \neq 0} + K_{\sigma} \mathbb{1}_{x = 0}) =: c_{M,x}\] and for \(x \in \mathbb{R}\) we have \[\begin{align} |D_{t,x} f^M(s,X^s, \Theta^M)| &\le c_{M, x} + L_{f ,{\rm y}} | D_{t,x} Y^M_s | + (c +\gamma M^\ell)| D_{t,x} Z^M_s |
\notag\\ & \quad + (c+ \gamma M^{m_1}) (c +\gamma M^{m_2}) \int_{\mathbb{R}_0} | D_{t,x} U^M_s(v)| \kappa (v) \nu(dv) \notag \\ &=: f_{M,x}^+ (s,D_{t,x} Y^M_s, D_{t,x} Z^M_s, D_{t,x} U^M_s) \label{eq1}
\end{align}\tag{11}\]
In [23], using a comparison result, it is shown that if \(\pm \overline{\mathcal{Y}}^{t,0}\) denote the solution processes corresponding to the data \((\pm \xi_{M,0}, \pm f_{M,0}^+)\) and \(\pm \overline{\mathcal{Y}}^{t,x}\) those corresponding to \((\pm \xi_{M,x}, \pm f_{M,x}^+)\) (\(x\neq 0\)), then for \(0\le t\le u\le T\) \[- \overline{\mathcal{Y}}^{t,0}_u \le D_{t,0} Y^M_u \le \overline{\mathcal{Y}}^{t,0}_u\] and \[- \overline{\mathcal{Y}}^{t,x}_u \le D_{t,x} Y^M_u \le \overline{\mathcal{Y}}^{t,x}_u.\] Moreover, since \(\xi_{M,0}\) is just a constant, and from the structure of \(f_{M,0}^+\) given in 11 , one concludes by [23] that there is a unique solution to
\[\begin{align} \overline{\mathcal{Y}}^{t,0}_u &= \xi_{M,0} +\int_u^T f_{M,0}^+ (s, \overline{\mathcal{Y}}^{t,0}_s , \overline{\mathcal{Z}}^{t,0}_s, \overline{\mathcal{U}}^{t,0}_s)ds \\ & \quad - \int_u^T\overline{\mathcal{Z}}^{t,0}_{s} dW_s -\int_{{]u,T]}\times{\mathbb{R}_0}} \overline{\mathcal{U}}^{t,0}_{s}(v) \tilde{\mathcal{N}}(ds,dv), \end{align}\] whose solution is given by \((\overline{\mathcal{Y}}^{t,0}, 0,0).\) Hence \[\begin{align} Z^M_t= \lim_{u \downarrow t} D_{t,0} Y^M_u \le \overline{\mathcal{Y}}^{t,0}_t = \xi_{M,0} \, e^{L_{f ,{\rm y}}(T-t)} +\int_t^T c_{M,0} \, e^{L_{f ,{\rm y}}(s-t)} ds \end{align}\] with \(\xi_{M,0} = ( c +\alpha M^{\tt r}) e^{L_b T} K_{\sigma}\) and \(c_{M,0}= (c +\beta M^{\tt r}) e^{L_b T} K_{\sigma}.\) Similarly, \[\begin{align} \overline{\mathcal{Y}}^{t,x}_u &= \xi_{M,x} +\int_u^T f_{M,x}^+ (s, \overline{\mathcal{Y}}^{t,x}_s , \overline{\mathcal{Z}}^{t,x}_s, \overline{\mathcal{U}}^{t,x}_s)ds \\ &\quad- \int_u^T\overline{\mathcal{Z}}^{t,x}_{s} dW_s -\int_{{]u,T]}\times{\mathbb{R}_0}} \overline{\mathcal{U}}^{t,x}_{s}(v) \tilde{\mathcal{N}}(ds,dv) \end{align}\] implies \[\begin{align} U^M_t(x)= \lim_{u \downarrow t} D_{t,x} Y^M_u \le \overline{\mathcal{Y}}^{t,x}_t = \xi_{M,x} e^{L_{f ,{\rm y}}(T-t)} +\int_t^T c_{M,x} e^{L_{f ,{\rm y}}(s-t)} ds \end{align}\] with \(\xi_{M,x} = ( c +\alpha M^{\tt r}) e^{L_b T} \kappa_\rho(x)\) and \(c_{M,x}= (c +\beta M^{\tt r}) e^{L_b T} \kappa_\rho(x),\) so that we can choose \[a_0^M := ( \xi_{M,0} \, e^{L_{f ,{\rm y}}T} +T \, c_{M,0} \, e^{L_{f ,{\rm y}}T} ) (1+ K_{\sigma} ).\] ◻
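For the reader’s convenience, here is a quick check (a side calculation, using only the structure of \(f_{M,0}^+\) displayed in 11 with vanishing \(z\)- and \(u\)-arguments, so that \(f_{M,0}^+(s,y,0,0)=c_{M,0}+L_{f ,{\rm y}}\, y\) for \(y\ge 0\)) that the deterministic function above indeed solves this equation: setting \[\begin{align} y(u) := \xi_{M,0}\, e^{L_{f ,{\rm y}}(T-u)} +\int_u^T c_{M,0}\, e^{L_{f ,{\rm y}}(s-u)} ds, \end{align}\] we have \(y(T)=\xi_{M,0}\) and \(y'(u) = -L_{f ,{\rm y}}\, y(u) - c_{M,0}\), hence \[\begin{align} y(u) = \xi_{M,0} + \int_u^T \big( c_{M,0} + L_{f ,{\rm y}}\, y(s)\big) ds, \end{align}\] so \((y,0,0)=(\overline{\mathcal{Y}}^{t,0},0,0)\) solves the equation, in accordance with the uniqueness statement quoted from [23].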
We continue with the main result of this section which gives some uniform bounds that will be crucial to obtain our uniqueness and existence result for the BSDE 3 .
Proposition 14. Suppose Assumptions 1 and 2 hold. Then there exist \(a_\infty >0\) and \(b_\infty>0\) (not depending on \(M\)) such that \[\begin{align} \label{Z-and-U-pi-bound} |Z^M_t| & \le a_\infty + b_\infty |X^t|_\infty^{{\tt r}} \notag \\ |U^M_t(x)| &\le \kappa_{\rho}(x)(a_\infty+ b_\infty |X^t|_\infty^{{\tt r}} ). \end{align}\qquad{(9)}\]
As a consequence we also get that there exists a \(c_\infty >0\) (not depending on \(M\)) such that \[\begin{align} \label{Y-pi-bound} |Y^M_t| \le c_\infty(1 + |X^t|_\infty^{{\tt r}+1}). \end{align}\qquad{(10)}\]
The proof uses several technical lemmata to which we turn next. We derive representations for \(Z^M\) and \(U^M\) from ?? and 10 by writing \[\begin{align} \label{Z-as-DY} Z^M_t \mathbb{1}_{x = 0} + U^M_t(x) \mathbb{1}_{x \neq 0} &= D_{t,x} g^M(X^T) + \int_t^T D_{t,x} f^M (s, X^s, \Theta^M_s )ds \notag \\ & - \int_t^T D_{t,x} Z^M_s dW_s -\int_{{]t,T]}\times{\mathbb{R}_0}} D_{t,x} U^M_s(v) \tilde{\mathcal{N}}(ds,dv), \end{align}\tag{12}\] where \(\Theta^M_s= (Y^M_s, Z^M_s, H^M_s),\) and \(H^M_s = \int_{\mathbb{R}_0} h(s, b_M ( U^M_s(v))) \kappa (v) \nu(dv).\)
By the chain rule ?? denoting \[\begin{align} \label{fhat-zero-terms} \widehat f^0(s, D_{t,0} X^s)&:= D_{t,0} f^M(s,X^s ,y,z,u)|_{(y,z,u)=(Y^M_s,Z^M_s,H^M_s)} \notag \\ (\widehat f_{{\rm y}}^0, \widehat f_{{\rm z}}^0, \widehat f_{{\rm u}}^0)&:=(G_1, G_2, G_3) \end{align}\tag{13}\] we have \[\begin{align} \label{the32x61032equation} D_{t,0} f^M (s, X^s, \Theta^M_s ) &= \widehat f^0(s, D_{t,0} X^s) \notag\\ &\quad + \widehat f_{{\rm y}}^0(s) \, D_{t,0} Y^M_s + \widehat f_{{\rm z}}^0(s) D_{t,0} Z^M_s +\widehat f_{{\rm u}}^0(s) \, D_{t,0} H^M_s \end{align}\tag{14}\] and furthermore, \(D_{t,0} H^M_s = \int_{\mathbb{R}_0} \widehat h^0_{{\rm u}}(s) \, D_{t,0} U^M_s(v) \kappa (v) \nu(dv).\)
To write the counterpart of the above relation for a.a. \((t,x) \in [0,T]\times \mathbb{R}_0\) we use that there exist, by Corollary 7 and the mean value theorem, measurable functions \(\eta :\Omega\times [0,T]\times \mathbb{R}\times [0,T] \to [0,1]^{3}\), \(\eta=\eta(\omega,t,x,s),\) and \(\vartheta:\Omega\times [0,T]\times \mathbb{R}_0\times [0,T]\times \mathbb{R}_0 \to [0,1]\) with \(\vartheta=\vartheta(\omega,t,x,s,v)\) such that the following holds (the notation \(\widehat f^x_{\rm y}(s)\) etc. indicates that these are not partial derivatives of \(f\) but of the truncated \(f^M\), taken in the jump case at an intermediate point):
\[\begin{align} \label{f-variables} \widehat f^x(s, D_{t,x}X^s) &:= f^M(s, X^s + D_{t,x} X^s, \Theta^M_s + D_{t,x}\Theta^M_s) \notag \\& \quad - f^M(s, X^s, \Theta^M_s+ D_{t,x}\Theta^M_s) \notag \\ \widehat f_{\rm y}^x(s)&:= \partial_{y} f^M(s, X^s, Y^M_s + \eta_1 D_{t,x} Y^M_s\!\!, (Z^M_s\!\!, H^M_s) + D_{t,x} (Z^M_s, H^M_s) ) \notag\\ \widehat f_{\rm z}^x(s)&:= \partial_{z} f^M(s, X^s, Y^M_s, Z^M_s +\eta_2 D_{t,x} Z^M_s, H^M_s + D_{t,x} H^M_s ) , \notag\\ \widehat f_{\rm u}^x(s)&:= \partial_{u} f^M(s, X^s, Y^M_s, Z^M_s , H^M_s +\eta_3 D_{t,x} H^M_s ) , \notag\\ \widehat h_{\rm u}^x(s)&:= \partial_u h(s, U^M_s(v) + \vartheta D_{t,x}U^M_s(v)) \end{align}\tag{15}\]
By using a telescopic sum for \(x \in \mathbb{R}_0\) and 14 for \(x=0\) we have \[\begin{align} \label{mean-value-term-f} &D_{t,x} f^M(s, X^s, \Theta^M_s) = \widehat f^x(s, D_{t,x} X^s)+\widehat f^x_{{\rm y}}(s)D_{t,x}Y^M_s \notag\\ & \quad \quad \quad+\widehat f^x_{{\rm z}}(s)D_{t,x}Z^M_s+\widehat f^x_{{\rm u}}(s) \int_{\mathbb{R}_0} \widehat h^x_{{\rm u}}(s) \, D_{t,x} U^M_s(v) \kappa (v) \nu(dv). \end{align}\tag{16}\]
We rewrite 12 as \[\begin{align} \label{U-pi-M-ep-repr} & Z^M_t \mathbb{1}_{x = 0} + U^M_t(x) \mathbb{1}_{x \neq 0} \notag \\ & = D_{t,x} g^M(X^T) + \int_t^T \widehat f^x(s, D_{t,x} X^s) \notag + \widehat f^x_{{\rm y}}(s) D_{t,x}Y^M_s ds \notag \\ &- \int_t^T D_{t,x} Z^M_s (dW_s- \widehat f^x_{{\rm z}} (s)ds) \notag \\ & -\int_{{]t,T]}\times{\mathbb{R}_0}} D_{t,x} U^M_s(v) \, (\tilde{\mathcal{N}}(ds,dv)- \widehat f^x_{{\rm u}}(s) \widehat h^x_{{\rm u}} (s) \,\, \kappa (v) \,\,ds \nu(dv) ), \end{align}\tag{17}\] and consider the adjoint equation to 12 \[\begin{align} \label{the-process-Gamma} \Gamma^x_{t,u} &= 1 + \int_t^u \widehat f^x_{{\rm y}}(s) \Gamma^x_{t,s} ds + \int_t^u \widehat f^x_{{\rm z}}(s) \Gamma^x_{t,s} dW_s \notag\\ & \quad+ \int_t^u \int_{\mathbb{R}_0} \widehat f^x_{{\rm u}} (s) \widehat h^x_{{\rm u}}(s) \,\, \kappa (v) \,\, \Gamma^x_{t,s-} \tilde{\mathcal{N}}(ds,dv), \,\,u \in [t,T]. \end{align}\tag{18}\]
Lemma 15. Suppose Assumptions 1 and 2 hold and that there exist constants \(a, b \geq 1,\) (possibly depending on \(M\)) such that, for a.a. \(t \in [0,T]\), \(x \in \mathbb{R}_0\), \[|Z^M_t| \le a + b |X^t|_\infty^{{\tt r}} \quad \text{ and } \quad |U^M_t(x)| \le \kappa_{\rho}(x)(a+ b|X^{t}|_\infty^{{\tt r}} ).\]
(1) Then it holds for a.a. \(s \in [0, t]\) (recall ?? ) \[\begin{align} |D_{s,x}Z^M_t| &\le C(1+ a + b |X^t|_\infty^{{\tt r}}) , \quad x\neq 0, \\ |D_{s,x} U^M_t(v) |& \le \kappa_{\rho}(v) C(1+ a + b |X^t|_\infty^{{\tt r}}), \quad x\neq 0, \\ |D_{s,x} H^M_t| &\le C(1 + a^{m_2+1}+ b^{m_2+1} |X^t|_\infty^{{\tt r}(m_2+1)}) , \quad x\neq 0, \end{align}\] where the constants \(C\) depend on \(\kappa_{\rho}\) through \(\kappa_{\rho,2}\) and \(\kappa_{\rho,\infty}\).
(2) For the expressions defined in 13 and 15 we have for a.a. \(t \in [0,s]\) \[\begin{align} |\widehat f^x(s, D_{t,x}X^s)| &\le C (1+ |X^s|_\infty^{\tt r} ), \\ |\widehat f^x_{{\rm y}}(s)| &\le L_{f,{{\rm y}}}, \\ |\widehat f^x_{{\rm z}}(s)| & \le C (1+ a^{\ell} + b^{\ell} |X^s|_\infty^{{\tt r} \ell}), \\ | \widehat f^x_{{\rm u}}(s) \widehat h^x_{{\rm u}} (s)| &\le C (1+ a^\ell + b^\ell |X^s|_\infty^{{\tt r} \ell}). \end{align}\]
Proof. Let \(x \neq 0.\) Since \(Z^M_t,\) \(U^M_t\) and \(H^M_t\) are \(\mathcal{F}_t\)-measurable, where \((\mathcal{F}_t)_{t\in [0,T]}\) is the augmented natural filtration generated by \((\mathcal{L}_s )_{s\in [0,T]}\), we can, by [46], represent them for fixed \(t\) as functions of the Lévy process \(\mathcal{L}\), denoting \(\mathcal{L}^t :=(\mathcal{L}_{s \wedge t})_{s\in [0,T]}\): \[Z^M_t = F_Z( \mathcal{L}^t), \quad U^M_t(\cdot) = F_U(\cdot, \mathcal{L}^t), \quad \text{and} \quad H^M_t = F_H( \mathcal{L}^t) \quad a.s.\] Similarly, \(a + b |X^t|_\infty^{{\tt r}} = F_X( \mathcal{L}^t).\) Here \(F_Z,F_H,F_X\) are \(\mathcal{G}_t\)-measurable and \(F_U\) is \(\mathcal{B}(\mathbb{R}_0) \otimes \mathcal{G}_t\)-measurable. The lemma below uses the assumptions on \(X\) and provides the bounds for item [derivative-estimate]:
Lemma 16. Assume \(F:D([0,T]) \to \mathbb{R}\) is \(\mathcal{G}_t\)-measurable. Then, for a.a. \(s \in [0, t]\) and \(x\neq 0\) and any constants \(A,B \ge 0\) it holds that \[|F(\mathcal{L}^t)| \le A + B |X^t|_\infty^{{\tt r}} \,\, \text{ implies} \,\, |D_{s,x} F(\mathcal{L}^t)| \le 2A +2 B |X^t|_\infty^{{\tt r}} + BT^{{\tt r}} e^{L_b {\tt r}T}\kappa_{\rho,\infty}^{{\tt r}}\] for \(\mathbb{P}\otimes \mathbb{m}\)-a.e. \((\omega, s,x).\)
Proof. Since we have an a.s. representation \(F_X( \mathcal{L}^t) = A + B |X^t|_\infty^{{\tt r}}\), we get by our assumption that setting \[\Lambda := \{ {\tt x}\in D[0,T] : F( {\tt x}^t) > F_X( {\tt x}^t) \}\] leads to \(\mathbb{P}( \mathcal{L}\in \Lambda ) = 0.\) Then by Lemma 8, \[\begin{align} \label{shift-relation} \mathbb{P}\otimes \mathbb{m}\Big( (\omega, s,x): F( \mathcal{L}^t + x\mathbb{1}_{[s,t]}) > F_X(\mathcal{L}^t + x\mathbb{1}_{[s,t]} ) \Big ) = 0. \end{align}\tag{19}\] By Lemma 6, Corollary 7 and Proposition 9, \[\begin{align} F_X( \mathcal{L}^t + x\mathbb{1}_{[s,t]} ) &= D_{s,x} (A + B |X^t|_\infty^{{\tt r}}) + A + B |X^t|_\infty^{{\tt r}} \\ &= B| D_{s,x} X^t +X^t |_\infty^{{\tt r}} - B |X^t|_\infty^{{\tt r}} + A + B |X^t|_\infty^{{\tt r}} \\ &= A + B| D_{s,x} X^t +X^t |_\infty^{{\tt r}} \\ &\le A + B |X^t|_\infty^{{\tt r}} + B T^{{\tt r}} e^{L_b {\tt r}T} \kappa_\rho(x)^{{\tt r}}. \end{align}\] Since we have \(D_{s,x} F( \mathcal{L}^t) +F(\mathcal{L}^t) = F( \mathcal{L}^t + x\mathbb{1}_{[s,t]})\) it follows by 19 that \[\begin{align} D_{s,x} F( \mathcal{L}^t) +F( \mathcal{L}^t) & \le F_X(\mathcal{L}^t + x\mathbb{1}_{[s,t]} ) \\ &\le A + B |X^t|_\infty^{{\tt r}} + B T^{{\tt r}} e^{L_b {\tt r}T} \kappa_\rho(x)^{{\tt r}}. \end{align}\]
Since we also have by assumption that \(- F( \mathcal{L}^t) \le A + B |X^t|_\infty^{{\tt r}}= F_X(\mathcal{L}^t),\) we can repeat the above arguments to get \[\begin{align} - D_{s,x} F( \mathcal{L}^t) -F(\mathcal{L}^t) &= -F(\mathcal{L}^t + x\mathbb{1}_{[s,t]} ) \le F_X( \mathcal{L}^t + x\mathbb{1}_{[s,t]} ). \end{align}\] Combining both estimates, we eventually arrive at \[\begin{align} |D_{s,x} F(\mathcal{L}^t) | &\le |D_{s,x} F( \mathcal{L}^t) +F(\mathcal{L}^t)| + |F( \mathcal{L}^t)| \\ &\le 2( A + B |X^t|_\infty^{{\tt r}} )+ B T^{{\tt r}} e^{L_b {\tt r}T} \kappa_\rho(x)^{{\tt r}}. \end{align}\] ◻
We continue with the proof of Lemma 15. The estimates in item [derivative-estimate] follow now immediately from Lemma 16 together with the assumed bounds on \(Z^M_t\) and \(U^M_t.\) By Assumption 2 [h-assumption] we have \(|h(s,u)| \le \left (c+ \frac{\gamma}{2} |u|^{m_2} \right )|u|\) so that the assumption \(|U^M_t(x)| \le \kappa_{\rho}(x)(a+ b|X^{t}|_\infty^{{\tt r}} )\) implies \[\begin{align} \label{inner-product-path} & |H^M_t | = \left | \int_{\mathbb{R}_0} h(t, b_M ( U^M_t(v))) \, \kappa (v) \nu(dv)\right | \notag \\ &\le \int_{\mathbb{R}_0} \left (c+ \frac{\gamma}{2} |U^M_t(v)|^{m_2} \right )|U^M_t(v)| \, \kappa (v) \nu(dv) \notag\\ &\le \int_{\mathbb{R}_0} \kappa(v) \kappa_{\rho}(v) \nu(dv) \,\, \left (c+ \frac{\gamma}{2} \left [\kappa_{\rho,\infty}(a+ b|X^{t}|_\infty^{{\tt r}} )\right ]^{m_2}\right ) (a+ b|X^t|_\infty^{{\tt r}} ) \notag \\ & \le C(1 + a^{m_2+1}+ b^{m_2+1} |X^t|_\infty^{{\tt r}(m_2+1)} ), \end{align}\tag{20}\] where \(C\) depends on \(\gamma,\kappa_{\rho,2}\) and \(\kappa_{\rho,\infty}\).
We show item [f-estimate]. For \(x \in \mathbb{R}_0,\) by Assumption 2, Proposition 9, 15 and item [derivative-estimate] we get \[\begin{align} |\widehat f^x(s, D_{t,x}X^s)| & = |f^M(s, X^s + D_{t,x} X^s, \Theta^M_s + D_{t,x}\Theta^M_s) \\&- f^M(s, X^s, \Theta^M_s+ D_{t,x}\Theta^M_s)|\\ &\le \left (c+ \frac{\beta}{2}( 2|X^s|_\infty^{\tt r} + | D_{t,x}X^s|_\infty^{\tt r}) \right) | D_{t,x}X^s|_\infty \\ &\le C (1+ |X^s|_\infty^{\tt r} ) \\ & \text{ with } \,\, C= C(L_b, {\tt r}, T, \kappa_{\rho,\infty}, \beta),\\ |\widehat f^x_{{\rm y}}(s)| &\le L_{f,{{\rm y}}}, \\ |\widehat f^x_{{\rm z}}(s)| &\le \left (c+\gamma 2^{\ell} (|Z^M_s|^\ell + | D_{t,x}Z^M_s |^\ell) \right ) \le C (1+ a^{\ell} + b^{\ell} |X^s|_\infty^{{\tt r}\ell}), \\ |\widehat f^x_{{\rm u}}(s)| &\le \left (c+\gamma 2^{m_1} \Big (\big |H^M_s\big|^{m_1} + \big|D_{t,x} H^M_s\big|^{m_1} \Big)\right ), \\ |\widehat f^x_{{\rm u}}(s) \widehat h^x_{{\rm u}}(s) | &\le \left ( c +\gamma 2^{m_1} \Big( \big|H^M_s\big|^{m_1} + \big| D_{t,x}H^M_s\big|^{m_1}\Big ) \right ) \;\; \\ & \quad\times ( c+ \gamma 2^{m_2} (|U^M_s(v)|^{m_2} + |D_{t,x} U^M_s(v)|^{m_2}) ). \end{align}\]
Then by 20 and item [derivative-estimate] \[\begin{align} | \widehat f^x_u (s) \widehat h^x_u(s) | &\le C( c + a^{(m_2+1)m_1} + b^{(m_2+1)m_1} |X^s|_\infty^{{\tt r}(m_2+1)m_1} )\\ & \quad \times (c+ (\kappa_{\rho}(v) C(1+ a + b |X^s|_\infty^{{\tt r}}) )^{ m_2})\\ &\le C\left (\!1+ a^{(m_1 + m_1m_2 +m_2)}\!+ b^{(m_1 + m_1m_2 +m_2)} |X^s|_\infty^{{\tt r}(m_1 + m_1m_2 +m_2)} \!\right ) \\ &\le C\left (1+ a^{\ell}+ b^{\ell} |X^s|_\infty^{{\tt r}\ell} \right ) \end{align}\] where \(C\) depends on \(\gamma,\kappa_{\rho,2}\) and \(\kappa_{\rho,\infty}\).
For \(x=0\) we get the same bounds by similar computations. ◻
For a.a. \((s,x) \in [0,T]\times\mathbb{R}\), the process \((G^x_{s,t})_{t\in[s,T]}\) defined by \[\begin{align} \label{the-process-G} G^x_{s,t} &= 1 + \int_s^t \widehat f^x_{{\rm z}}(r) \, G^x_{s,r} dW_r + \int_{]s,t]\times \mathbb{R}_0} \widehat f^x_{{\rm u}}(r) \widehat h^x_{{\rm u}}(r) \,\, \kappa (v) \,\, G^x_{s,r} \tilde{\mathcal{N}}(dr,dv), \end{align}\tag{21}\] is a local martingale, but the next lemma shows that under our conditions it is a true martingale.
Lemma 2. Suppose Assumptions 1 and 2 hold, and that there exist constants \(a, b \geq 1,\) (possibly depending on \(M\)) such that for a.a. \((t, x) \in [0,T]\times \mathbb{R}_0\), \[|Z^M_t| \le a + b |X^t|^{{\tt r}}_\infty \quad \text{ and } \quad |U^M_t(x)| \le \kappa_{\rho}(x)(a+ b |X^t|^{{\tt r}}_\infty ).\] Then, the process \((G^x_{s,t}) _{t\in [s,T]}\) given in 21 is a uniformly integrable martingale and we have for the solution to 17 the representation \[\begin{align} & Z^M_t \mathbb{1}_{x = 0} + U^M_t(x) \mathbb{1}_{x \neq 0} \notag \\ &=\mathbb{E}^{x\,'}_t \Big [ e^{\int_t^T \widehat f^x_{{\rm y}} (r) dr}D_{t,x} g^M(X) + \int_t^T e^{\int_t^v \widehat f^x_{{\rm y}} (r) dr} \widehat f^x(v, D_{t,x} X^v) dv \Big ], \end{align}\] where \(d {\mathbb{P}}^{x \, '} = G^x_{t,T} d\mathbb{P}.\)
Proof. Since \((G^x_{s,t} )_{t\in [s,T]}\) is a locally square integrable martingale, by Novikov’s condition given by [50] it suffices to check if \[\mathbb{E}\exp \left (\frac{1}{2} \int_s^T |\widehat f_{{\rm z}}^x(u) |^2du+ \int_s^T |\widehat f_{{\rm u}}^x(u)|^2 \,\,\int_{\mathbb{R}_0} | \widehat h^x_{{\rm u}}|^2 \kappa^2 (v) \nu(dv) du \right )<\infty.\] This follows from Lemma 4 since by Lemma 15 we have \[\begin{align} &\int_s^T \left ( \frac{1}{2}|\widehat f_{{\rm z}}^x(u) |^2+ |\widehat f_{{\rm u}}^x(u)|^2 \,\,\int_{\mathbb{R}_0} | \widehat h^x_{{\rm u}}|^2 \kappa^2 (v) \nu(dv)\right ) du \\ & \le \int_s^T C^2 (1+a^\ell + b^\ell |X^u|_\infty^{{\tt r}\ell} )^2 + C^2 (1+ a^\ell + b^\ell|X^u|_\infty^{{\tt r} \ell})^2du \\ & \le \int_s^T C'(1+ |X^u|_\infty) du \end{align}\] (note that by assumption we have \(2 {\tt r}\ell \le 1\) and \(\kappa \in L^2(\nu)\)). The second part of the Lemma follows from the proof of [51]. It holds \(\Gamma^x_{s,t} = e^{\int_s^t \widehat f^x_{{\rm y}}(r) dr } G^x_{s,t}\) (see 21 and 18 ). ◻
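To make the last relation transparent, note that it is a one-line application of the product rule (the exponential factor being continuous and of finite variation, no covariation term appears): with \(E_u := e^{\int_s^u \widehat f^x_{{\rm y}}(r) dr}\) one gets \[\begin{align} d\big(E_u G^x_{s,u}\big) &= \widehat f^x_{{\rm y}}(u)\, E_u G^x_{s,u}\, du + \widehat f^x_{{\rm z}}(u)\, E_u G^x_{s,u}\, dW_u \\ & \quad + \int_{\mathbb{R}_0} \widehat f^x_{{\rm u}}(u) \widehat h^x_{{\rm u}}(u) \,\kappa(v)\, E_u G^x_{s,u-}\, \tilde{\mathcal{N}}(du,dv), \qquad E_s G^x_{s,s}=1, \end{align}\] which is exactly the dynamics 18 of \(\Gamma^x_{s,\cdot}\).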
Before stating the next result, let us prove an auxiliary lemma.
Lemma 3. Suppose Assumptions 1 and 2 hold, and that there exist constants \(a, b \geq 1\) (possibly depending on \(M\)) such that for a.a. \((t, x) \in [0,T]\times \mathbb{R}_0\), \[|Z^M_t| \le a + b |X^t|^{{\tt r}}_\infty \quad \text{ and } \quad |U^M_t(x)| \le \kappa_{\rho}(x)(a+ b |X^t|^{{\tt r}}_\infty ).\] Then for all \(0\le s \le t \le T\) we have \[\begin{align} \mathbb{E}^{x\, '}_s |X^t|^{\tt r}_\infty&\le C(1+a^{{\tt r }\ell}+b+|X^s|^{\tt r}_\infty). \end{align}\]
Proof. Since \[\begin{align} \mathbb{E}^{x\, '}_s|X^t|_\infty^{\tt r}\le |X^s |_\infty^{\tt r} + \mathbb{E}^{x\, '}_s|X^t -X^s|_\infty^{\tt r}, \end{align}\] recalling \({\tt r}<1\), it suffices to estimate the term \[\mathbb{E}^{x\, '}_s|X^t -X^s|_\infty^{\tt r} = \mathbb{E}^{x\, '}_s \sup_{s\le u\le t} |X_u-X_s|^{\tt r}.\] The process \(X\) for \(t \in [s,T]\) is given by \[\begin{align} X_t &= X_s + \int_s^t b(u,X_u) du + \int_s^t \sigma(u) dW'_u + \int_s^t \sigma(u) \widehat f^x_{\rm z}(u) du \\ & \quad + \int_{{]s,t]}\times{\mathbb{R}_0}} \rho(s,v) \tilde{\mathcal{N}}'(du,dv) + \int_{{]s,t]}\times{\mathbb{R}_0}} \rho(s,v) \widehat f^x_{\rm u}(u) \widehat h^x_{\rm u}(u) \,\, \kappa (v) \,\,du \nu(dv). \end{align}\]
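Here \(W'\) and \(\tilde{\mathcal{N}}'\) denote, as is standard, the drivers associated with the measure \(\mathbb{P}^{x\,'}\) from Lemma 2, that is, \[\begin{align} dW_u = dW'_u + \widehat f^x_{{\rm z}}(u)\, du, \qquad \tilde{\mathcal{N}}(du,dv) = \tilde{\mathcal{N}}'(du,dv) + \widehat f^x_{{\rm u}}(u) \widehat h^x_{{\rm u}}(u)\, \kappa(v)\, du\, \nu(dv); \end{align}\] by Girsanov’s theorem, \(W'\) is a \(\mathbb{P}^{x\,'}\)-Brownian motion and \(\tilde{\mathcal{N}}'\) the corresponding compensated jump measure, and inserting this decomposition into the forward equation for \(X\) yields the displayed dynamics.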
By Itô’s formula it holds for \(0 \le s < t \le T\) that \[\begin{align} & (X_t-X_s)^2 \\ &= 2 \int_s^t (X_u-X_s)( b(u,X_u) + \sigma(u) \widehat f^x_{\rm z}(u) ) du + 2 \int_s^t (X_u-X_s)\sigma(u) dW'_u \\ & \quad + \int_s^t \sigma^2(u) du \\ & \quad + \int_{{]s,t]}\times{\mathbb{R}_0}} \big( 2(X_u -X_s) \rho(s,v) + \rho(s,v)^2 \big) \tilde{\mathcal{N}}'(du,dv) \\ & \quad + 2 \int_{{]s,t]}\times{\mathbb{R}_0}} (X_u -X_s) \rho(s,v) \widehat f^x_{\rm u}(u) \widehat h^x_{\rm u}(u) \,\, \kappa (v) \,\,du \nu(dv) \\ &\quad+ \int_{{]s,t]}\times{\mathbb{R}_0}} \rho(s,v) ^2\,\,du \nu(dv). \end{align}\] To shorten the notation we will refer to the sum of the stochastic integrals on the r.h.s. as the ‘local martingale’. The two non-random terms are bounded by \(( K^2_\sigma+\kappa_{\rho, 2})(T-s).\) Using the bounds for \(|\widehat f^x_{\rm z}(u) |\) and \(| \widehat f^x_{\rm u}(u) \widehat h^x_{\rm u}(u)|\) given in Lemma 15 we continue to estimate the remaining integrands as follows: \[\begin{align} & | (X_u-X_s)( b(u,X_u) + \sigma(u) \widehat f^x_{\rm z}(u)) | \\& \quad + \left | \int_{{\mathbb{R}_0}} (X_u -X_s) \rho(s,v) \widehat f^x_{\rm u}(u) \widehat h^x_{\rm u}(u) \, \kappa (v) \nu(dv) \right | \\ & \le C |X_u-X_s| (|X_u|+1+ a^{\ell} + b^{\ell} |X^u|_\infty^{{\tt r}\ell}), \end{align}\] where \(C\) depends on \(L_b, K_\sigma, K_b, \kappa_{\rho,2}\) and \(\kappa_{\rho,\infty}.\) Since \({\tt r}\ell\le \frac{1}{2}\) we may increase the exponent of \(|X^u|_\infty^{{\tt r}\ell}\) to \(\frac{1}{2}\) if we add \(1\), and then we apply Young’s inequality to get \[b^{\ell} |X^u|_\infty^{{\tt r}\ell} \le b^{\ell} (1+ |X^u|_\infty^ \frac{1}{2}) \le b^{\ell} + \frac{b^{2\ell}}{2} + \frac{ |X^u|_\infty}{2} \le 2b^{2\ell} + |X^u|_\infty.\] This gives \[\begin{align} &|X_u-X_s| (|X_u|+1+ a^{\ell} + b^{\ell} |X^u|_\infty^{{\tt r}\ell}) \\ & \le |X^u-X^s|_\infty (2 |X^u|_\infty +1+ a^{\ell} + 2b^{2\ell} ) \\ & \le |X^u-X^s|_\infty (2 |X^u-X^s|_\infty + 2 |X^s|_\infty+1+ a^{\ell} + 2b^{2\ell}) \\ &\le 3 |X^u-X^s|_\infty^2 + (|X^s|_\infty+1+ a^{\ell} + b^{2\ell})^2. \end{align}\] Summarising, we have \[\begin{align} (X_t-X_s)^2 \le 6 C \int_s^t \sup_{s\le v\le u} |X_v-X_s|^2 du + H + \,\, \text{ local martingale} \end{align}\] where \[\begin{align} H &= \left (K^2_\sigma +\kappa_{\rho, 2} + C (|X^s|_\infty+1+ a^{\ell} + b^{2\ell})^2 \right )T. \end{align}\] Then, by relation ?? of Theorem 19 (for \(0< p= \frac{\tt r}{2}<1\) ) \[\begin{align} \mathbb{E}^{x\, '}_s\sup_{s \le u\le t}|X_u-X_s |^{\tt r} &\le \frac{1}{(1-p) } H^{{\tt r}/2} \,\, e^{\frac{p}{1-p} 6C (T- s)}\\ &\le C'(1+a^{{\tt r }\ell}+b+|X^s|^{\tt r}_\infty), \end{align}\] where we used that \(b^{2{\tt r} \ell} \le b.\) The constant \(C'\) depends on \(L_b, K_\sigma, K_b, \kappa_{\rho,2}, \kappa_{\rho,\infty}\) and \({\tt r}\). ◻
Lemma [estimates-depending-on-M] presented below gives us some bounds on \(Z^M\) and \(U^M\) that cannot be used directly to prove the existence of a solution for 3 since they strongly depend on \(M\) through the constants \(a=a_M\) and \(b=b_M\). It is however an important step in the proof of Proposition 14.
Lemma 17. Suppose Assumptions 1 and 2 hold. Then the following holds for a solution \((Y^M, Z^M, U^M)\) to ?? : Assume that there exist constants \({a, b} \geq 1\) which may depend on \(M\) such that for a.a. \((t, x) \in [0,T]\times \mathbb{R}_0\), \[|Z^M_t| \le a + b |X^t|^{{\tt r}}_\infty \quad \text{and} \quad |U^M_t(x)| \le \kappa_{\rho}(x)(a+ b |X^t|^{{\tt r}}_\infty ).\] Then there is a \(C>0\) which does not depend on \(a\), \(b\), \(M\) such that \[|Z^M_t | \le C(( 1+ a^{r \ell}+b) + | X^t |^{\tt r}_\infty )\] and \[|U^M_t(x)| \le C \kappa_\rho(x) (( 1+ a^{r \ell}+b) + | X^t |^{\tt r}_\infty ).\]
Proof. We start by proving the upper bound for \(|U^M_t(x)|\) by using Lemma 2.
From Assumption 2 and Proposition 9 we derive that for \(x \neq 0\) \[\begin{align} \label{U-estimate-termA} & \Big | \mathbb{E}^{x\, '}_t \Big [ e^{\int_t^T \widehat f^x_y (r) dr}D_{t,x} g^M(X) \Big ] \Big | \notag \\ & \le e^{TL_{f,{\rm y}}} \mathbb{E}^{x\, '}_t \Big [ \left (c+ \alpha (|X|_\infty^{\tt r} + |D_{t,x} X|_\infty^{\tt r}) \right )|D_{t,x} X|_\infty \Big ] \notag \\ & \le e^{TL_{f,{\rm y}}} T e^{L_b T} \kappa_\rho(x) \mathbb{E}^{x\, '}_t\left (c+ \alpha (|X|_\infty^{\tt r} + | e^{L_b T} \kappa_\rho(x) |^{\tt r}) \right ) \notag\\ &\le \kappa_\rho(x)C ( 1 + \mathbb{E}^{x\, '}_t|X|_\infty^{\tt r} ). \end{align}\tag{22}\]
Hence from 22 and Lemma 3 we get that there is a \(C>0\) such that we have \[\begin{align} && \Big | \mathbb{E}^{x\, '}_t \Big [ e^{\int_t^T \widehat f^x_y(r) dr}D_{t,x} g^M(X) \Big ] \Big | \le C \kappa_\rho(x) ( 1+ a^{r \ell} +b + | X^t|_\infty^{\tt r}), \end{align}\]
Applying Lemma 15 we get as in 22 that \[\begin{align} &\Big | \mathbb{E}^{x\, '}_t \Big [ \int_t^T e^{\int_t^v \widehat f^x_{{\rm y}}(r) dr} \widehat f^x(v, D_{t,x}X^v) dv \Big ] \Big |\\ & \le C \int_t^T \mathbb{E}^{x\, '}_t \Big [ (1 +e^{L_b \tt r T} \kappa_{\rho}(x)^{\tt r} +| X^v |_\infty^{\tt r}) \, e^{L_bT}\kappa_{\rho}(x) \,\, \Big ] dv . \end{align}\]
By Lemma 3 we have \[\begin{align} &\Big | \mathbb{E}^{x\, '}_t \Big [ \int_t^T e^{\int_t^v \widehat f^x_{{\rm y}}(r) dr} \widehat f^x(v, D_{t,x}X^v) dv \Big ] \Big |\le C \kappa_\rho(x) ( 1+ a^{r\ell} + b + | X^t |_\infty^{\tt r}). \end{align}\] ◻
We have now all the tools needed in order to prove Proposition 14.
Proof of Proposition 14. Let us start by proving ?? . By Proposition 13 there exists an \(a_0>0\), potentially depending on \(M\), such that, for a.a. \((t,x) \in [0,T] \times \mathbb{R}\), \[\begin{align}
\label{step0} &|Z^M_t| \le a_0,\notag\\ &|U^M_t(x)| \le a_0 \kappa_{\rho}(x) .
\end{align}\tag{23}\] By Lemma 17 we have \[|Z^M_t | \le C( 1+ a_0^{{\tt r} \ell}) + C| X^t |_\infty^{\tt r}\] \[|U^M_t(x)| \le \kappa_\rho(x) (C( 1+ a_0^{{\tt r} \ell}) + C| X^t |_\infty^{\tt r} ),\] where \(C\) does not depend on \(a_0\) and \(M\). Let \(a_1:=C(1+ a_0^{{\tt r} \ell}).\) Applying again Lemma 17 gives \[|Z^M_t | \le C( 1+ a_1^{{\tt r} \ell} +C) + C| X^{t} |_\infty^{\tt r} \le (C+1)^2(1+a_1^{{\tt r} \ell})+ C| X^{t} |_\infty^{\tt r}\] \[|U^M_t(x)| \le C\kappa_\rho(x) (( 1+ a_1^{{\tt r} \ell}+C) + | X^{t} |_\infty^{\tt r} )\le \kappa_\rho(x) ((C+1)^2(1+a_1^{{\tt r} \ell})+ C| X^{t} |_\infty^{\tt r}).\] Let \(a_{n+1}:= (C+1)^2(1+a_n^{{\tt r}\ell})\). Lemma 17 gives for all \(n\) \[|Z^M_t | \le (C+1)^2(1+a_{n+1}^{{\tt r}\ell})+ C| X^{t} |_\infty^{\tt r}\] \[|U^M_t(x)|\le \kappa_\rho(x) ((C+1)^2(1+a_{n+1}^{{\tt r} \ell})+ C| X^{t} |_\infty^{\tt r}).\] We can remark that \(a_{\infty}:=\lim_{n \rightarrow +\infty}a_n\) exists since \({\tt r}\ell<1\), and it depends only on \(C\) and \({\tt r}\ell\). Then we just have to set \(b_{\infty}:=C\) in order to get the announced result.
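For completeness, one way to see that this limit exists is the following elementary fixed-point argument: the map \[\phi(a) := (C+1)^2\big(1+a^{{\tt r}\ell}\big), \qquad a>0,\] is increasing, and \(\phi(a)-a\) is concave, positive for small \(a\) and tends to \(-\infty\) as \(a\to\infty\) (this is where \({\tt r}\ell<1\) enters), so \(\phi\) has a unique fixed point \(a^*\ge (C+1)^2\). Since \(a_{n+1}=\phi(a_n)\) and \(\phi\) is increasing, the sequence \((a_n)_{n\ge 1}\) is monotone and converges to \(a^*\), which depends only on \(C\) and \({\tt r}\ell\), but neither on \(a_0\) nor on \(M\).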
We now show ?? . Since \[\begin{align} Y^M_t &= \mathbb{E}_t g^M(X) + \mathbb{E}_t \int_t^T f^M\left( s, X^s, \Theta^M_s \right)ds
\end{align}\] where \(\Theta^M_s:= (Y^M_s, Z^M_s, H^M_s )\), we get from ?? and Assumption 2 \[\begin{align} |Y^M_t |&\le | g^M(0)| + \mathbb{E}_t |g^M(X) -g^M(0)| \\ & \quad + \int_t^T | f^M\left( s, 0,0,0,0 \right)|ds \\ &\quad+\mathbb{E}_t \int_t^T | f^M\left( s, X^s, \Theta^M_s \right)-f^M \left( s, 0,0,0,0 \right)| ds \\ &\le C( g(0)) + C \mathbb{E}_t (1+ |X|_\infty^{{\tt r} +1} ) \\ & \quad + C \int_t^T |f(s,0,0,0,0)|ds +\mathbb{E}_t \int_t^T \big [C(1+ |X^s|_\infty^{{\tt r} +1} ) + L_{f, {\rm y}} |Y^M_s | + \\ & \quad + C(1+ |Z^M_s |^{\ell+1} ) + C(1+ | H^M_s |^{m_1+1} ) \big ]ds \\ &\le C + C \mathbb{E}_t |X|_\infty^{{\tt r} +1} + C \int_t^T \mathbb{E}_t |X^s|_\infty^{{\tt r} +1} ds + L_{f, {\rm y}} \mathbb{E}_t \int_t^T |Y^M_s | ds \\ &\le C + C |X^t|_\infty^{{\tt r} +1} + C\mathbb{E}_t |X - X^t |_\infty^{{\tt r} +1} \\ & \quad + C \int_t^T \mathbb{E}_t |X^s - X^t |_\infty^{{\tt r} +1} ds + L_{f, {\rm y}} \mathbb{E}_t \int_t^T |Y^M_s | ds. \end{align}\] Here the \(Z^M\)- and \(H^M\)-terms have been absorbed into the \(|X^s|_\infty^{{\tt r}+1}\)-terms by means of the already established bound ?? and 20 .
Then, by similar computations as in the proof of Lemma 3 (here we use \(\mathbb{E}\) instead of \(\mathbb{E}'\)) one gets that \[|Y^M_t | \le C + C |X^t|_\infty^{{\tt r} +1} + L_{f, {\rm y}} \mathbb{E}_t \int_t^T |Y^M_s | ds\] and the assertion follows by Gronwall’s lemma. ◻
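For the reader’s convenience, the Gronwall step can be spelled out as follows: fix \(t\), set \(v(s) := \mathbb{E}_t |Y^M_s|\) for \(s \in [t,T]\), and take \(\mathbb{E}_t\) in the above inequality written at time \(s\). Using the tower property and, as before, \(\mathbb{E}_t |X^s|_\infty^{{\tt r}+1} \le C(1+|X^t|_\infty^{{\tt r}+1})\), this yields \[\begin{align} v(s) \le C\big(1+|X^t|_\infty^{{\tt r}+1}\big) + L_{f, {\rm y}} \int_s^T v(r)\, dr, \qquad s \in [t,T], \end{align}\] so the backward Gronwall lemma gives \(v(s) \le C(1+|X^t|_\infty^{{\tt r}+1})\, e^{L_{f, {\rm y}}(T-s)}\); evaluating at \(s=t\) yields the asserted bound, e.g. with \(c_\infty := C e^{L_{f, {\rm y}}T}\).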
Before we show the main theorem, we prove a stability result tailored to our setting. We introduce the notation \[f_h(s,{\boldsymbol{x}}^s, y,z, {\boldsymbol{u}}) := f \left (s,{\boldsymbol{x}}^s, y,z, \int_{\mathbb{R}_0} h(s, {\boldsymbol{u}}(v)) \kappa (v) \nu(dv) \right).\]
Lemma 18. Let \((\xi,f,h), (\xi',f',h')\) be data for BSDEs with \(\xi,\xi'\in L^2\) and \(f,f',h,h'\)
satisfy Assumption 2 (iii)–(vi). Assume two respective solutions \((Y,Z,U), (Y',Z',U')\)
exist and are such that there are constants \(a,b>0\) \[|Z_t |\vee |Z'_t| \le a + b |X^t|^{\tt r}_\infty \quad\text{and}\quad |U_t(x) | \vee|U'_t(x)| \le \kappa_{\rho}(x)(a + b
|X^t|^{\tt r}_\infty ).\] Then there is a constant \(C=C(a,b,c,T,L_{f,{\rm y}},\gamma,{\tt r},l, \kappa_{\rho,\infty}, \|\kappa\|_{L^2(\nu)})>0\) such that for all \(n\in
\mathbb{N}\), \[\begin{align}
\label{apriori95est} &\mathbb{E}|Y-Y'|_\infty^2 + \mathbb{E}\int_0^T|Z_t- Z_t'|^2dt +\mathbb{E}\int_0^T \|U_t-U_t'\|_{L^2(\nu)}^2dt\notag \\ &\leq
Ce^{Cn}\bigg(\mathbb{E}|\xi-\xi'|^2+\mathbb{E}\bigg(\int_0^T|f_h(t,Y_t,Z_t,U_t)-f'_{h'}(t,Y_t,Z_t,U_t)|dt\bigg)^2\bigg) \notag\\ &\quad+e^{-n}C\mathbb{E}\left[e^{(T+1)^2|X|_\infty}\right].
\end{align}\qquad{(11)}\] Moreover, the difference of the generators may also be taken using the solution \((Y',Z',U')\).
Proof. We denote differences of the solutions by \(\Delta Y_t:=Y_t-Y'_t\) etc. and the one of the terminal conditions by \(\Delta\xi:=\xi-\xi'\). We will here use the notation \(\Theta_t: =( Y_t,Z_t,U_t)\) and \(\Theta'_t :=( Y'_t,Z'_t,U'_t).\)
By standard arguments, using Itô’s formula for \(|\Delta Y_t|^2\) together with Doob’s \(L^p\)- and Young’s inequalities, one obtains \(C>0\) (we will keep the constant’s name \(C\) in all subsequent estimates, but its value may change as we proceed) such that for all \(t\in [0,T]\), \[\begin{align} \label{eq:apriori-1} &\mathbb{E}\sup_{s\in[t,T]}|\Delta Y_s|^2+\mathbb{E}\int_t^T|\Delta Z_s|^2ds +\mathbb{E}\int_t^T \|\Delta U_s\|_{L^2(\nu)}^2ds\\ &\leq C\mathbb{E}\bigg[|\Delta \xi|^2+\bigg(\int_t^T |\Delta Y_s||f_h(s,\Theta_s)-f'_{h'}(s,\Theta'_s)|ds\bigg)\bigg].\nonumber \end{align}\tag{24}\]
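For orientation, the identity behind 24 is the standard one (stated here without the localization details): since \(d\Delta Y_s = -\big(f_h(s,\Theta_s)-f'_{h'}(s,\Theta'_s)\big)ds + \Delta Z_s\, dW_s + \int_{\mathbb{R}_0}\Delta U_s(v)\,\tilde{\mathcal{N}}(ds,dv)\), Itô’s formula gives \[\begin{align} &|\Delta Y_t|^2 + \int_t^T |\Delta Z_s|^2 ds + \int_{]t,T]\times\mathbb{R}_0} |\Delta U_s(v)|^2\, \mathcal{N}(ds,dv) \\ &= |\Delta\xi|^2 + 2\int_t^T \Delta Y_s\big(f_h(s,\Theta_s)-f'_{h'}(s,\Theta'_s)\big)ds \\ &\quad - 2\int_t^T \Delta Y_{s-}\,\Delta Z_s\, dW_s - 2\int_{]t,T]\times\mathbb{R}_0} \Delta Y_{s-}\,\Delta U_s(v)\,\tilde{\mathcal{N}}(ds,dv), \end{align}\] and one takes expectations (the compensator of \(\mathcal{N}\) being \(ds\,\nu(dv)\)) before applying Doob’s and Young’s inequalities to the remaining terms.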
It holds using Assumption 2 on \(f'\) and \(h'\) that \[|f'_{h'}(s,\Theta_s)-f'_{h'}(s,\Theta'_s)|\le C\Big (|\Delta Y_s|+(1+|X^s|_\infty^{{\tt r}\ell} )\big (|\Delta Z_s|+\|\Delta U_s\|_{L^2(\nu)} \big)\Big).\] Here the bound \(C (1+|X^s|_\infty^{{\tt r}\ell}) \|\Delta U_s\|_{L^2(\nu)}\) is coming from Assumption 2 (v), the bound on \(U\) and the fact that \[\begin{align} &\left| \int_{\mathbb{R}_0} (h(s, U_s(v)) - h(s, U'_s(v)) ) \kappa (v) \nu(dv)\right | \\ &\le C(1+ |X^s|_\infty^{{\tt r}m_2} ) \int_{\mathbb{R}_0} |U_s(v) - U'_s(v)| \kappa (v) \nu(dv) \\ &\le C(1+ |X^s|_\infty^{{\tt r}m_2} ) \, \|\kappa\|_{L^2(\nu)}\,\|\Delta U_s\|_{L^2(\nu)} . \end{align}\] Now let us introduce \(Q_{n,t}=\{(\omega,t): T\sup_{0\le s \le t}|X_s|>n\}\). By splitting up the domain and using Assumption 2 and the bounds in terms of \(X\) for \(Z\) and \(U\), we obtain \[\begin{align} \label{Y-timesDiff-f} &|\Delta Y_s||f_h(s,\Theta_s)-f'_{h'}(s,\Theta'_s)| \notag\\ &\leq C|\Delta Y_s|\Big (|\Delta Y_s|+(1+ |X^s|_\infty^{{\tt r}\ell})\big (|\Delta Z_s|+\|\Delta U_s\|_{L^2(\nu)} \big) +|(f_h -f'_{h'})(s,\Theta_s)|\Big )\nonumber\\ &\leq C(|\Delta Y_s|^2 +|\Delta Y_s||(f_h -f'_{h'})(s,\Theta_s)|)\nonumber\\ &\quad+C|\Delta Y_s|(1+ |X^s|_\infty^{{\tt r}\ell})(|\Delta Z_s|+\|\Delta U_s\|_{L^2(\nu)})(\mathbb{1}_{Q^c_{n,s}}+\mathbb{1}_{Q_{n,s}})\nonumber\\ &\leq C(|\Delta Y_s|^2 +|\Delta Y_s||(f_h -f'_{h'})(s,\Theta_s)|)\nonumber\\ &\quad+C|\Delta Y_s|(1+ |X^s|_\infty^{ {\tt r}\ell} ) ( |\Delta Z_s|+\|\Delta U_s\|_{L^2(\nu)})\mathbb{1}_{Q^c_{n,s}} \nonumber\\ &\quad + C|\Delta Y_s|(1+|X^s|_\infty^{{1+ \tt r}})\mathbb{1}_{Q_{n,s}}. \end{align}\tag{25}\]
With the help of Young’s inequality \(yz\leq \frac{cy^2}{2}+\frac{z^2}{2c}\) for arbitrary \(c>0\), bounding \(|\Delta Y|\) by its supremum, we may estimate the term \[\begin{align} \label{eq:apriori-two} &|\Delta Y_s||f_h(s,\Theta_s)-f'_{h'}(s,\Theta'_s)|\\ &\leq C|\Delta Y_s|^2+\frac{3Cc}{2}|\Delta Y_s|^2 (1+|X^s|_\infty^{2{\tt r}\ell})\mathbb{1}_{Q_{n,s}^c}+\frac{C}{2c}(|\Delta Z_s|^2+\|\Delta U_s\|_{L^2(\nu)}^2)\nonumber\\ &\quad+|X^s|_\infty^{2{\tt r}+2}\mathbb{1}_{Q_{n,s}}+\sup_{r\in [t,T]}|\Delta Y_r||(f_h -f'_{h'})(s,\Theta_s)|.\nonumber \end{align}\tag{26}\] Note that on \(Q_{n,s}^c\), we can estimate \(1+|X^s|_\infty^{2{\tt r}\ell}\leq 1+n.\) With the above estimates, using again Young’s inequality and choosing \(c\) in an adequate way, we infer from 24 that \[\begin{align} &\mathbb{E}\sup_{s\in[t,T]}|\Delta Y_s|^2+\mathbb{E}\int_t^T|\Delta Z_s|^2ds +\mathbb{E}\int_t^T \|\Delta U_s\|_{L^2(\nu)}^2ds\\ &\leq C\mathbb{E}\bigg[|\Delta \xi|^2+(n+2)\int_t^T|\Delta Y_s|^2ds\nonumber \\ &\quad\quad+\bigg(\int_t^T|(f_h -f'_{h'})(s,\Theta_s)|ds\bigg)^2+\int_t^T|X^s|_\infty^{2 {\tt r}+2}\mathbb{1}_{Q_{n,s}}ds\bigg].\nonumber \end{align}\] So, replacing \(|\Delta Y_s|^2\) on the right hand side by \(\sup_{r\in [s,T]}|\Delta Y_r|^2+\int_s^T(|\Delta Z_r|^2+\|\Delta U_r\|_{L^2(\nu)}^2)dr\), we can use Gronwall’s inequality to arrive at
\[\begin{align} &\mathbb{E}\sup_{s\in[t,T]}|\Delta Y_s|^2+\mathbb{E}\int_t^T|\Delta Z_s|^2ds +\mathbb{E}\int_t^T \|\Delta U_s\|_{L^2(\nu)}^2ds\\ &\leq Ce^{(n+2)(T-t)}\mathbb{E}\bigg[|\Delta \xi|^2+\bigg(\int_t^T |(f_h -f'_{h'})(s,\Theta_s)|ds\bigg)^2\\ &\quad \quad\quad+\int_t^T|X^s|_\infty^{2{\tt r}+2}\mathbb{1}_{Q_{n,s}}ds\bigg]. \end{align}\] Since \[\begin{align} e^{nT} |X^s|_\infty^{2{\tt r}+2}\mathbb{1}_{Q_{n,s}}&\le e^{T^2 \sup_{0\le u \le s}|X_u| } (1+ |X^s|_\infty^{3})\mathbb{1}_{Q_{n,s}}\\ &\le e^{T^2 \sup_{0\le u \le s}|X_u| } 2e^{|X^s|_\infty}e^{-n}e^{T \sup_{0\le u \le s}|X_u|}\\ &\le e^{-n}2e^{(T+1)^2 |X|_\infty }, \end{align}\] we conclude that there is a \(C>0\) such that \[\begin{align} &\mathbb{E}\sup_{s\in[t,T]}|\Delta Y_s|^2+\mathbb{E}\int_t^T|\Delta Z_s|^2ds +\mathbb{E}\int_t^T \|\Delta U_s\|_{L^2(\nu)}^2ds \notag\\ &\leq Ce^{Cn}\mathbb{E}\bigg[|\Delta \xi|^2+\bigg(\int_t^T |(f_h -f'_{h'})(s,\Theta_s)|ds\bigg)^2\bigg] \notag\\ &\quad +e^{-n}C \mathbb{E}\left[ e^{(T+1)^2 |X|_\infty }\right] \end{align}\] and we just have to take \(t=0\) in the previous inequality. ◻
The uniqueness follows from Lemma 18. It remains to show existence. To that end, consider the terminal condition \(\xi^M :=g^M(X)\) and the generator \(f^M\) as given in 8 . By Proposition 14, a solution \((Y^M,Z^M,U^M)\) exists which obeys the condition ?? with the same \(a,b,c\) for any \(M\). Let us show by Lemma 18 that \((Y^M,Z^M,U^M)_{M=1}^\infty\) is a Cauchy sequence in \(\mathcal{S}^2\times L_2(W)\times L_2(\tilde{\mathcal{N}})\). We note that by choosing \(n\) large the third term of the r.h.s. of ?? can be made arbitrarily small. So it suffices to show that the other two expressions on the r.h.s. of ?? are small for large \(M, \overline{M}\). We show that \(\xi^M\) converges to \(\xi^{\overline{M}}\) in \(L^2.\) By Hölder’s and Markov’s inequality and Lemma 4, if \(M\le \;\overline{M},\) \[\begin{align} \mathbb{E}|\xi^M- \xi^{\overline{M}}|^2&=\mathbb{E}|g^M(X)- g^{\overline{M}}(X)|^2\\ &\le \mathbb{E}\Big (( c+\frac{\alpha}{2}(|X^M|_\infty^{\tt r}+|X^{\overline{M}}|_\infty^{\tt r})) |X^M- X^{\overline{M}}|_{\infty} \Big)^2\\ & \le \mathbb{E}\Big (( c+\alpha|X|_\infty^{\tt r}) |X|_\infty\mathbb{1}_{|X|_\infty\ge M} \Big)^2\\ &\le C( 1+ ( \mathbb{E}|X|_\infty^{4({\tt r}+1)})^{\frac{1}{2}} ) \big (\mathbb{E}\mathbb{1}_{|X|_\infty\ge M} \Big)^{\frac{1}{2}}\le C' \frac{1}{M}. \end{align}\]
We continue with
\[\begin{align} \label{the-int} \mathbb{E}\bigg(&\int_0^T|f^M_h( s, X^s, \Theta^M_s) -f^{\overline{M}}_{h'}( s,X^s,\Theta^M_s) |ds\bigg)^2, \end{align}\tag{27}\] where \[f_h^M(s,{\boldsymbol{x}}, y,z, {\boldsymbol{u}}) := f\!\! \left (\!s,b_M({\boldsymbol{x}}), y, b_M(z), b_M \Big( \!\int_{\mathbb{R}_0} h(s, b_M ( {\boldsymbol{u}}(v)) ) \kappa (v) \nu(dv) \Big )\!\right)\] and \[f_{h'}^{\overline{M}}(s,{\boldsymbol{x}}\!, y,z, {\boldsymbol{u}}) := f \!\!\left (\!s,b_{\overline{M}}({\boldsymbol{x}}), y,b_{\overline{M}}(z), b_{\overline{M}} \Big( \int_{\mathbb{R}_0} h(s, b_{\overline{M}} ( {\boldsymbol{u}}(v))) \kappa (v) \nu(dv) \! \Big ) \! \right).\]
Since Assumptions 2-(i) and (ii) are of the same type, the \(x\) difference of \(f\) behaves like \(\xi^M-\xi^{\overline{M}}\) so we get for \(M\le \;\overline{M}\) the upper bound \[\begin{align} & |f_h( s, b_M( X^s), \Theta^M_s) -f_h( s,b_{\overline{M}}(X^s),\Theta^M_s) | \\ & \le ( c+\frac{\beta}{2}(|X^M|_\infty^{\tt r}+|X^{\overline{M}}|_\infty^{\tt r})) \, |X^M- X^{\overline{M}}|_{\infty} \\ & \le ( c+\beta|X|_\infty^{\tt r}) |X|_\infty\mathbb{1}_{|X|_\infty\ge M}. \end{align}\] Similarly, we can treat \(f_h^M - f^{\overline{M}}_{h'}\) concerning \(Z_s^M\) and \(U_s^M.\) For example, by Assumption 2(iv) and Proposition 14, \[\begin{align} & \left (c+ \frac{\gamma}{2} (| b_M( Z_s^M) |^\ell + |b_{\overline{M}}( Z_s^M) |^\ell) \right ) | b_M( Z_s^M) - b_{\overline{M}}( Z_s^M)| \\ & \le \left (c+ \gamma ( a_\infty + b_\infty |X^t|_\infty^{{\tt r}} )^\ell \right ) (a_\infty + b_\infty |X^t|_\infty^{{\tt r}} ) \, \mathbb{1}_{(a_\infty + b_\infty |X^t|_\infty^{{\tt r}}) \ge M}. \end{align}\]
Continuing like for \(\xi^M\) we see that the integral term 27 is arbitrarily small for large \(M\le \;\overline{M}.\)
This ends the proof that \((Y^M,Z^M,U^M)_M\) is a Cauchy sequence in \(\mathcal{S}^2\times L_2(W)\times L_2(\tilde{\mathcal{N}}).\) We call the limit \((Y,Z,U)\).
From Proposition 14 we conclude that \(Z\) and \(U\) satisfy ?? setting \(a:=a_\infty\) and \(b:=b_\infty\). The bound for \(Y\) follows in the same way from the convergence of \(Y^M\) to \(Y\) in \(\mathcal{S}^2\) and the bound in ?? .
Looking at ?? , we get that the left hand side \(Y_t^M\) converges in \(L^2\) to \(Y_t\), and the stochastic integrals \(\int_t^T
Z^M_sdW_s, \int_{]t,T]\times\mathbb{R}}U^M_s(x)\tilde{\mathcal{N}}(ds,dx)\) converge in \(L^2\) to the respective terms \(\int_t^T Z_sdW_s,
\int_{]t,T]\times\mathbb{R}}U_s(x)\tilde{\mathcal{N}}(ds,dx)\). It remains to show that also \(I^M:=\int_t^Tf^M_{h}(s,X^s,Y^M_s,Z^M_s,U^M_s)ds\) converges.
Since \(I^M\) is a Cauchy sequence in \(L^2\) it converges in \(L^2\), and there exists a subsequence such that \(I^{M_k}\) converges a.s. Moreover, the estimates ?? and ?? imply that \[\begin{align} |f^M_h(s,X^s,\Theta^M_s)| \le C(1+ |X|_\infty^{1+{\tt r}}), \end{align}\] which is an integrable bound. Then by the dominated convergence theorem \[\begin{align} \lim_{k\rightarrow +\infty} I^{M_k}=\int_t^T f_h \Big(s,X^s,\Theta_s\Big)ds. \end{align}\]
Finally, to get that the solution is Malliavin differentiable, we may apply Lemma 1 using that we have \(L^2\) convergence of \((Y^M_s,Z^M_s, U^M_s)\) to \((Y_s,Z_s, U_s).\) For \(x \neq 0\) the uniform bound for the \(\mathbb{D}_{1,2}^{\mathbb{R}_0}\)
norms follows immediately from Proposition 14 and Lemma 16.
To get a uniform bound for the \(\mathbb{D}_{1,2}^0\) norms if \(x=0\) we reuse the proof of Lemma 18: We
set \[\begin{align}
&(Y,Z,U) := (0,0,0), \quad (Y',Z',U') := (D_{s,0}Y^M, D_{s,0}Z^M,D_{s,0}U^M), \\
&\xi := 0, \quad \xi' :=D_{s,0} g^M(X),\quad f = 0, \quad
f_h' = D_{s,0} f^M \left( t, X^t, \Theta^M_t \right).
\end{align}\] Clearly, the assumptions on \(Z\) and \(U\) of Lemma 18 are not satisfied for \(D_{t,0}Z^M\) and \(D_{t,0}U^M\), but it is possible to derive the a priori estimate as follows: Note that Lemma 10 implies that \[\begin{align}
& |D_{s,0} f^M \left( t, X^t, \Theta^M_t \right) |\\& \le | ( D_{s,0} f^M ( t, X^t,y) ) |_{y=(Y^M_t,Z^M_t,H^M_t)} |\notag\\ &\quad + L_{f ,{\rm y}} \,| D_{s,0} Y^M_t| + (c+ |Z^M_t|^{\ell} ) |D_{s,0} Z^M_t | + (c+ |H^M_t|^{m_1} ) |D_{s,0} H^M_t |.
\end{align}\] Using ?? , the counterpart of 25 reads as \[\begin{align}
&| D_{s,0} Y^M_t| | D_{s,0} f^M\left( t, X^t, \Theta^M_t \right) | \\
&\le | D_{s,0} Y^M_t| | D_{s,0} f^M\left( t, X^t, \Theta^M_t \right) -D_{s,0} f^M(t,X^t, 0) |\\
& \quad + | D_{s,0} Y_t| |D_{s,0} f^M(t,X^t, 0) | \\ &\leq C(| D_{s,0} Y^M_t|^2 +| D_{s,0} Y^M_t| \, |D_{s,0} f^M(t,X^t, 0) |)\nonumber\\ &\quad+C| D_{s,0} Y^M_t|(1+ |X^s|_\infty^{ {\tt r}l} ) ( |D_{s,0}
Z_t^M|+\|D_{s,0}U^M_t\|_{L^2(\nu)})\mathbb{1}_{Q^c_{n,s}} \nonumber\\ &\quad + C|D_{s,0} Y_t^M|(1+|X^s|_\infty^{{\tt r+1}})\mathbb{1}_{Q_{n,s}}.
\end{align}\] From here we can just proceed like in the proof of Lemma 18 to get \[\begin{align}
\label{apriori95est-2} &\mathbb{E}|D_{s,0} Y^M|_\infty^2 + \mathbb{E}\int_0^T|D_{s,0} Z^M_t|^2dt +\mathbb{E}\int_0^T \| D_{s,0} U^M_t\|_{L^2(\nu)}^2dt\notag \\ &\leq Ce^{Cn}\bigg(\mathbb{E}|D_{s,0} g^M(X)|^2+\mathbb{E}\bigg(\int_0^T| D_{s,0}
f^M(t,X^t,0)|dt\bigg)^2\bigg) \notag\\ &\quad+e^{-n}C\mathbb{E}\left[e^{(T+1)^2|X|_\infty}\right].
\end{align}\tag{28}\] This provides a uniform bound for the \(\mathbb{D}_{1,2}^0\) norms, which can be derived using ?? combined with Lemma 4. Thanks to the convergence in \(L^2\) of \((Y^M_t, Z^M_t, U^M_t)\) to \((Y_t, Z_t, U_t)\), we get by Lemma 1 that \(Y_t, Z_t, U_t(v) \in \mathbb{D}_{1,2}^0\) and also that \(\int_t^T Z_{s} dW_s\) and \(\int_{{]t,T]}\times{\mathbb{R}_0}}U_{s}(v) \tilde{\mathcal{N}}(ds,dv)\) are in \(\mathbb{D}_{1,2}^0.\) Then we know that also \(\int_t^T f\left( s, X^s,Y_s, Z_s, H_s \right)ds \in \mathbb{D}_{1,2}^0.\) ◻
The article [52] contains stochastic Gronwall and Bihari-LaSalle inequalities for many different settings. For the convenience of the reader we cite one special case here ([52] Theorem 3.1(a)) which suits our situation.
Theorem 19. Assume that \(( {\sf X}_t)_{t \ge 0}\) is an adapted, càdlàg, non-negative process satisfying the inequality \[\begin{align} \label{x-inequality2} {\sf X}_t \le \int_{]0,t]} \eta\big(\sup_{0 \le u < s} {\sf X}_{u}\big) dA_s + M_t + H_t, \end{align}\qquad{(12)}\] where for \(T>0\) and \(0<p<1\) it holds
There is a \(c_0 \ge 0\) such that \({\sf X}_t \ge c_0\) for all \(t\ge 0\).
\(\eta\colon]c_0,\infty[\to]0,\infty[\) is continuous, non-decreasing and convex, and such that \[\eta^{(p)}(x):= \frac{p}{1-p}\eta(x^{1/p})x^{1-1/p}\] is convex and \(C^1,\) with \(\lim_{x\searrow c_0}\eta^{(p)}(x)=0.\)
\(( A_t)_{t \ge 0}\) is a predictable, non-decreasing càdlàg process with \(A_0 = 0\).
\(( H_t)_{t \ge 0}\) is an adapted, non-negative, non-decreasing càdlàg process with \(\mathbb{E}H_T < \infty\).
\(( M_t)_{t \ge 0}\) is a càdlàg local martingale with \(M_0 = 0.\)
Then one has for \(0<p<1\) \[\begin{align} \label{Bihari} \mathbb{E}_0 \big[ G^{-1}\big(G(|{\sf X^T}|_\infty)-(1-p)^{-1}A_T \big)^p\big] \le \frac{1}{(1-p)} (\mathbb{E}_0 [ H_T ] )^p \end{align}\qquad{(13)}\] where \(G(x):=\int_r^x\frac{1}{\eta(u)}du\) for some \(r>c_0\).
In the special case where \(\eta(x) =x\) and \(( A_t)_{t \ge 0}\) is deterministic we have for \(0<p<1\) \[\begin{align} \label{stochGronwall} \mathbb{E}_0 [ |{\sf X^T}|_\infty^p \, ] \le \frac{1}{1-p} \, (\mathbb{E}_0 [ H_T ] )^p \, e^{\frac{p}{1-p} A_T} . \end{align}\qquad{(14)}\]
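Indeed, 14 follows from 13 by a direct computation: for \(\eta(x)=x\) and any fixed \(r>c_0\), \[\begin{align} G(x)=\int_r^x\frac{du}{u}=\log\frac{x}{r}, \qquad G^{-1}(y)= r e^{y}, \qquad G^{-1}\Big(G(|{\sf X^T}|_\infty)-\tfrac{A_T}{1-p}\Big) = |{\sf X^T}|_\infty \, e^{-\frac{A_T}{1-p}}, \end{align}\] so that, if \(A_T\) is deterministic, the factor \(e^{-\frac{p}{1-p}A_T}\) can be pulled out of the conditional expectation in 13 , and rearranging gives 14 .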
We will use Sugita’s approach to define \(\mathbb{D}_{1,2}\), which is based on a stochastic Gâteaux derivative. We need some notation: First of all, we exploit the Lévy-Itô decomposition, especially the independence of the Gaussian term and the jump term, to interpret the probability space \((\Omega,\mathcal{F},\mathbb{P})\) as the completion of \((\Omega^W\times\Omega^J, \mathcal{F}^W\otimes\mathcal{F}^J, \mathbb{P}^W\otimes\mathbb{P}^J),\) where \((\Omega^W, \mathcal{F}^W, \mathbb{P}^W)\) is the canonical Wiener space and \((\Omega^J,\mathcal{F}^J,\mathbb{P}^J)\) carries the pure jump process; we denote by \(E\) the associated separable Hilbert space \(E:=L^2(\Omega^J,\mathcal{F}^J,\mathbb{P}^J)\) (for more details see [27]). Following Janson ([31]) we let \[H:=\Big \{ \int_0^T h(s) dW_s: h \in L^2[0,T] \Big \}\] be the Hilbert space with the inner product \(\langle \eta_1, \eta_2 \rangle := \mathbb{E}\eta _1 \eta_2 .\) For \(h \in L^2([0,T])\), we define \[g_h(t):= \int_0^t h(s)ds\] and the Cameron-Martin shift \[\rho_h \xi(\omega^W):= \xi( \omega^W + g_h)\] for any random variable \(\xi \in L^0(\mathbb{P}^W; E).\) We will denote by \(HS(L^2([0,T]);E)\) the space of Hilbert-Schmidt operators between \(L^2([0,T])\) and \(E.\)
(i) A random variable \(\xi \in L^0(\mathbb{P}^W; E)\) is absolutely continuous along \(h \in L^2([0,T])\) (\(h\)-a.c.) if there exists a random variable \(\xi^h \in L^0(\mathbb{P}^W; E)\) such that \(\xi^h = \xi \,\,\, a.s.\) and for all \(\omega^{_W\!\!} \in \Omega^W\) the map \[u \mapsto \xi^h(\omega^{_W\!\!} + u \, g_h)\] is absolutely continuous on bounded intervals of \(\mathbb{R}.\)
(ii) \(\xi \in L^0(\mathbb{P}^W; E)\) is ray absolutely continuous (r.a.c.) if \(\xi\) is \(h\)-a.c. for every \(h \in L^2([0,T]).\)
(iii) For \(\xi \in L^0(\mathbb{P}^W; E)\) and \(h \in L^2([0,T])\) we say the directional derivative \(\partial_h \xi \in L^0(\Omega^W; E)\) exists if \[\begin{align} \frac{ \rho_{u h} (\xi) - \xi}{u} \to^{\mathbb{P}^W } \, \partial_h \xi, \quad u \to 0. \end{align}\]
(iv) \(\xi \in L^0(\mathbb{P}^W; E)\) is called stochastically Gâteaux differentiable (s.G.d.) if \(\partial_h \xi\) exists for every \(h \in L^2([0,T])\) and there exists an \(HS(L^2([0,T]);E)\)-valued random variable denoted by \(\tilde{\mathcal{D}}\xi\) such that for every \(h \in L^2([0,T])\) \[\begin{align} \partial_h \xi= \langle \tilde{\mathcal{D}}\xi ,h\rangle_{L^2([0,T])}, \quad \mathbb{P}^W\text{-} a.s. \end{align}\]
The next theorem does in fact hold for \(p\ge 1\) while we only need here the version for \(p=2\).
Theorem 20 ( [29]). The space \(\mathbb{D}_{1,2}^0\) can be identified with \[\begin{align} \mathbb{D}^W_{1,2}(E) := \!\{\xi \in L^2(\mathbb{P}^W\!; E)\colon \! \xi &\text{ is r.a.c., s.G.d. and }\\ & \tilde{\mathcal{D}}\xi \in L^2(\mathbb{P}^W\!\!; H\!S(L^2([0,T]);\!E)) \}, \end{align}\] and for \(\xi \in \mathbb{D}^W_{1,2}(E)\) it holds that \(D_{\cdot,0} \xi = \tilde{\mathcal{D}}\xi\) a.s.
Proof of Lemma 10. We start with item [X-estimate]. Since \[\begin{align} \rho_{u h} X_t -X_t = \int_0^t b(r, \rho_{u h}X^r) - b(r, X^r) dr + u \int_0^t \sigma(r) h(r) dr. \end{align}\] it holds \[\begin{align} |\rho_{u h} X_t -X_t| &\le L_b \int_0^t | \rho_{u h}X^r - X^r |_\infty dr + u K_\sigma \int_0^t | h(r) | dr, \end{align}\] so that by Gronwall’s inequality, \[\begin{align} \label{Cam-Martin-shift-of-X} |\rho_{u h} X^t -X^t|_\infty \le u K_\sigma \| h \|_{L^1[0,T]} \,\, e^{L_b t}. \end{align}\tag{29}\] From Proposition 9 we know that \(X_t\) is Malliavin differentiable. Now [27] implies that in probability, \[\begin{align} \langle D_{\cdot,0} X_t ,h\rangle_{L^2([0,T])} &\le \lim_{u \downarrow 0} \frac{ | \rho_{u h} X_t - X_t | }{u} \le K_\sigma \| h \|_{L^1[0,T]} \,\, e^{L_b t}, \end{align}\] for any \(h \in L^2([0,T])\) with \(\int_0^T |h(r)|dr \le1.\) Hence for a.e. \(s \in [0,t],\) \[|D_{s,0} X_t | \le K_\sigma \,\, e^{L_b t}.\] In the same way one can prove item [f-estimate]: It holds that \(\mathbf{f}\left( t, X^t, (Y_1, ..., Y_d)\right ),\) \(\mathbf{f}\left( t, {\boldsymbol{x}}^t, (Y_1, ..., Y_d)\right ), \mathbf{f}\left( t, X^t, (y_1, ..., y_d)\right ) \in \mathbb{D}_{1,2}^0.\) This follows by Theorem 5, for example for the first term, from the estimate \[\begin{align} & \| \mathbf{f}\left( t, X^t, Y \right ) - \mathbf{f}\left( t, X^{t,\varphi}, Y^\varphi\right )\|_{L^2} \\ & \le L_{\boldsymbol{x}}\,\, \| \Big (c+ \frac{\beta}{2}(|X^t|^ {\tt r}_\infty + |X^{t,\varphi}|^ {\tt r}_\infty) \Big) |X^t-X^{t,\varphi}|_\infty \|_{L^2} \\ & \quad + \sum_{k=1}^d L_{y_k} \, \left \| \, Y_k- Y_k^\varphi \right\|_{L^2} \le c \varphi. \end{align}\] Indeed, by Theorem 5 we have that \(\sup_{0<\varphi \le 1} \frac{\left \| \, Y_k- Y_k^\varphi \right\|_{L^2}}{\varphi} <\infty.\) Moreover, thanks to the Cauchy-Schwarz inequality and 7 , \[\begin{align} & \Big \| \Big (c+ \frac{\beta}{2}(|X^t|^ {\tt r}_\infty + |X^{t,\varphi}|^ {\tt r}_\infty) \Big) |X^t-X^{t,\varphi}|_\infty \Big \|_{L^2}\\ & \le( c+ \beta \||X^t|^ {\tt r}_\infty \|_{L^4}) \, \||X^t-X^{t,\varphi}|_\infty \|_{L^4} \le C \varphi. \end{align}\] Especially (see [27] ) we have that in probability, \[\begin{align} &\lim_{u \downarrow 0} \frac{ \rho_{u h} \mathbf{f}\left( t, X^t, (Y_1, ..., Y_d)\right ) - \mathbf{f}\left( t, X^t, (Y_1, ..., Y_d)\right )}{u} \\ & = \langle D_{\cdot,0} \mathbf{f}\left( t, X^t, (Y_1, ..., Y_d)\right ) ,h\rangle_{L^2([0,T])}. \end{align}\] On the other hand, \[\begin{align} & |\rho_{u h} \mathbf{f}\left( t, X^t, y\right ) - \mathbf{f}\left( t, X^t, y\right ) | \\ & \le L_{\boldsymbol{x}}\, \Big ( c+ \frac{\beta}{2}(|\rho_{u h} X^t|^ {\tt r}_\infty + |X^t|^ {\tt r}_\infty) \Big) \, | \rho_{u h} X^t -X^t |_\infty \\ & \le L_{\boldsymbol{x}}\, \Big ( c+ \frac{\beta}{2}(|\rho_{u h} X^t|^ {\tt r}_\infty + |X^t|^ {\tt r}_\infty) \Big) \, u K_\sigma \| h \|_{L^1[0,T]} \,\, e^{L_b t}, \end{align}\] where we used 29 . Hence we get for any \(h \in L^2([0,T])\) with \(\int_0^T |h(r)|dr \le 1\) \[\begin{align} \langle D_{\cdot,0} \mathbf{f}\left( t, X^t, y)\right ) ,h\rangle_{L^2([0,T])} \le L_{\boldsymbol{x}}K_\sigma \big ( c+ \beta |X^t|^ {\tt r}_\infty \big) e^{L_b T}. 
\end{align}\] This gives \[\| D_{\cdot,0} \mathbf{f}\left( t, X^t, y)\right )\|_{L^\infty[0,T]} \le L_{\boldsymbol{x}}K_\sigma \big ( c+ \beta |X^t|^ {\tt r}_\infty \big) e^{L_b T}.\] We will show that in probability, \[\begin{align} &\lim_{u \downarrow 0} \frac{ \rho_{u h} \mathbf{f}\left( t, X^t, (Y_1, ..., Y_d)\right ) - \mathbf{f}\left( t, X^t, (Y_1, ..., Y_d)\right )}{u} \\ &= \lim_{u \downarrow 0} \frac{ \mathbf{f}\left( t, \rho_{u h}X^t, \rho_{u h}( Y_1, ..., Y_d)\right ) - \mathbf{f}\left( t, X^t, \rho_{u h}( Y_1, ..., Y_d)\right )}{u} \\ & \quad+ \sum_{k=1}^d \lim_{u \downarrow 0} \bigg ( \frac{ \mathbf{f}\left( t, X^t, \rho_{u h}(Y_1, ...,Y_{d+1-k}),..., Y_d\right )}{u} \\ & \quad\quad \quad \quad \quad \quad - \frac{ \mathbf{f}\left( t, X^t, \rho_{u h}(Y_1, ...,Y_{d-k}),...,Y_d\right )}{u} \bigg ) \\ & = \langle D_{\cdot,0} \mathbf{f}\left( t, X^t, y\right ) ,h\rangle_{L^2([0,T])} \mid_{y =Y} \\ & \quad + \sum_{k=1}^d \langle D_{\cdot,0} \mathbf{f}\left( t, {\boldsymbol{x}}^t, y_1,..., Y_k,...,y_d\right ) ,h\rangle_{L^2([0,T])} \mid_{{\boldsymbol{x}}^t =X^t, y_i =Y_i, i\neq k}. \end{align}\] From this ?? follows. The existence of the \(G_1,...,G_d\) we get from Nualart [45]. To see that the above limits work, we use [31] which states that the Cameron Martin shift is continuous as a map \(L^2([0,T])\times L^0(\mathbb{P}^W; E) \to L^0(\mathbb{P}^W; E)\) given by \((\eta, \xi) \mapsto \rho_{\eta} \xi.\) Moreover, \(\lim_{u \downarrow 0} \rho_{u h}\xi =\xi\) in probability, for any \(\xi \in L^0(\mathbb{P}^W; E).\) Clearly, it holds \[\begin{align} & \lim_{u \downarrow 0} \frac{ \mathbf{f}\left( t, \rho_{u h}X^t, \rho_{u h}( Y_1, ..., Y_d)\right ) - \mathbf{f}\left( t, X^t, \rho_{u h}( Y_1, ..., Y_d)\right )}{u} \\ &= \lim_{u \downarrow 0} \rho_{u h} \left ( \frac{ \mathbf{f}\left( t, X^t,(Y_1, ..., Y_d)\right ) - \mathbf{f}\left( t, \rho_{-u h}X^t, ( Y_1, ..., Y_d)\right )}{u} \right ). \end{align}\] Now, considering the map \((\eta, \xi) \mapsto \rho_{\eta} \xi\) for \[(\eta, \xi):= \Big (uh, \frac{ \mathbf{f}\left( t, X^t,(Y_1, ..., Y_d)\right ) - \mathbf{f}\left( t, \rho_{-u h}X^t, ( Y_1, ..., Y_d)\right )}{u} \Big )\] we get the convergence to \(\langle D_{\cdot,0} \mathbf{f}\left( t, X^t, y\right ) ,h\rangle_{L^2([0,T])} \mid_{y =Y}\) thanks to the joint continuity and the convergence in probability \[\begin{align} & \lim_{u \downarrow 0} \frac{ \mathbf{f}\left( t, X^t,(Y_1, ..., Y_d)\right ) - \mathbf{f}\left( t, \rho_{-u h}X^t, ( Y_1, ..., Y_d)\right )}{u} \\ &= \langle D_{\cdot,0} \mathbf{f}\left( t, X^t, y\right ) ,h\rangle_{L^2([0,T])} \mid_{y =Y}. \end{align}\] ◻