EMORF-II: Adaptive EM-Based Outlier-Robust Filtering With Correlated Measurement Noise


Abstract

We present a learning-based outlier-robust filter for a general setup where the measurement noise can be correlated. Since it is an enhanced version of the EM-based outlier-robust filter (EMORF), we call it EMORF-II. Equipped with the additional ability to learn the outlier characteristics during inference alongside outlier detection, EMORF-II has improved outlier-mitigation capability. Numerical experiments confirm accuracy gains over state-of-the-art methods at the cost of increased computational overhead. However, the computational complexity order remains on par with other practical methods, making EMORF-II a useful choice for diverse applications.

State-space models, Bayesian Inference, Outliers, Statistical Filtering, Expectation-Maximization.

1 Introduction

Bayesian filtering entails online estimation of the latent states of dynamical systems using noisy data. This challenge remains central to numerous disciplines, e.g., robotics, sensor fusion, and target tracking [1]–[3]. The nominal measurement noise is generally correlated in a variety of applications, including Real-Time Kinematic (RTK) systems, sensor networks, and time-difference-of-arrival (TDOA)-based systems [4], [5]. Assuming perfect a priori knowledge of the noise statistics, standard filters can handle these general scenarios [6].

In several real-world applications the data can be corrupted with unaccounted outliers, crippling the standard filtering paradigm. This calls for outlier-robust filtering solutions. Traditionally, robustness is achieved by relying on predefined assumptions about the measurement noise statistics [7], assuming fixed prior models for the residual errors between predicted and actual sensor data [8], or applying thresholding techniques [9]. However, such techniques necessitate user tuning, and the performance of these algorithms becomes sensitive to the chosen parameters [10]. This challenge motivates the design of learning-based, tuning-free methods that assume a prior structure of the measurement model while dynamically learning its parameters. The state and the parameters describing the data (including outliers) are estimated jointly, thereby reducing the dependence on user-set parameters [5], [11].

Due to the promise offered by learning-based outlier-robust filters, this approach to devising Bayesian filters has been actively adopted recently. Variational Bayesian (VB) techniques are typically favored over Particle Filters (PFs) in their designs as they are less computationally intensive and allow reuse of standard Gaussian filtering results. In this regard, we proposed the Selective Outlier Rejection (SOR) filter [11], which utilizes a vector-parameterized measurement covariance matrix to mitigate the impact of outliers in sensor measurements. While this approach has demonstrated superior performance compared to methods based on scalar-parameterized covariance matrices [12], it assumes that the noise in the measurement vector is uncorrelated, an assumption that may be violated in several real-world scenarios [13], [14]. To account for the effects of correlated noise in the measurements, we proposed another VB-based filter, namely the Expectation-Maximization (EM)-based outlier-robust filter (EMORF) [5]. This method assumes a prior model for measurements that accommodates correlated noise while simultaneously incorporating a vectorized outlier detection scheme. Owing to its model, EMORF has shown better results than the similar Variational Bayes Kalman Filters (VBKFs) [4] and is simpler to implement.

Built on a model inspired by the SOR filter, EMORF estimates the state and parameters to detect outliers during inference. However, it does not learn the outlier characteristics. This information can be useful and, if utilized properly, can result in better estimators, as demonstrated in our Adaptive Selective Outlier-Rejecting (ASOR) method [15]. With this motivation we present EMORF-II, which not only handles correlated measurement noise and detects outliers but also learns the characteristics of outliers. To this end, we take inspiration from the ASOR method to construct EMORF-II. Through numerical experiments, we demonstrate the performance gains of EMORF-II compared with similar state-of-the-art outlier-robust learning-based methods that address correlated measurement noise. We use the same notation as in EMORF. For completeness, we present the notation, details of derivations, and code in the supplementary material.

2 STATE-SPACE MODEL

Figure 1: Probabilistic graphical model for EMORF-II

To devise EMORF-II, we modify the state-space model (SSM) of EMORF as shown in the graphical model in Fig. 1. Our modification is inspired by the ASOR method, which introduces a hierarchical model to describe the system dynamics and characteristics of outliers [15]. The resulting model yields a state estimator that is able to learn the covariance of outliers along with their occurrence.

2.1 Proposed Model

The proposed SSM for EMORF-II is given as \[\begin{align} \boldsymbol{x}_{k} & =\boldsymbol{f}(\boldsymbol{x}_{k-1})+\boldsymbol{q}_{k-1} \tag{1} \\ \boldsymbol{y}_{k} & =\boldsymbol{h}(\boldsymbol{x}_{k})+\boldsymbol{r}_{k} \tag{2} \end{align}\] where \(\boldsymbol{x}_{k} \in \mathbb{R}^n\) is the state vector, \(\boldsymbol{y}_{k} \in \mathbb{R}^m\) is the measurement vector, \(\boldsymbol{f}(\cdot): \mathbb{R}^n \rightarrow \mathbb{R}^n\) is the process model and \(\boldsymbol{h}(\cdot): \mathbb{R}^n \rightarrow \mathbb{R}^m\) represents the measurement model. Furthermore, the process noise is normally distributed, as in the standard SSM, such that \(\boldsymbol{q}_k \sim \mathcal{N}(\boldsymbol{q}_k\big| \boldsymbol{0}, \boldsymbol{Q}_k)\).

Similar to EMORF, the measurement noise is supposed to follow a Gaussian sum distribution to capture the outlier effects such that \(\boldsymbol{r}_{k}\sim \sum_{\boldsymbol{\mathcal{I}}_k} \textit{p}\left(\boldsymbol{r}_k\big| \boldsymbol{\mathcal{I}}_k\right)\textit{p}\left(\boldsymbol{\mathcal{I}}_k\right)\) with \[\begin{align} \textit{p}\left(\boldsymbol{r}_k\big| \boldsymbol{\mathcal{I}}_k\right)=\mathcal{N}\left(\boldsymbol{r}_k \big| \boldsymbol{0}, \boldsymbol{R}_k\left(\boldsymbol{\mathcal{I}}_k\right)\right) \label{rkIk} \end{align}\tag{3}\] where, with \(\delta(\cdot)\) denoting the Dirac delta function, the measurement covariance matrix \(\boldsymbol{R}_k(\boldsymbol{\mathcal{I}}_k)\) is given as \[\begin{bmatrix} \dfrac{R_k^{1,1}}{\mathcal{I}_k^1} & \cdots & R_k^{1,m}\,\delta(\mathcal{I}_k^1-1)\,\delta(\mathcal{I}_k^m-1) \\ \vdots & \ddots & \vdots \\ R_k^{m,1}\,\delta(\mathcal{I}_k^m-1)\,\delta(\mathcal{I}_k^1-1) & \cdots & \dfrac{R_k^{m,m}}{\mathcal{I}_k^m} \end{bmatrix}\]
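As a concrete illustration, the following is a minimal sketch (in Python/NumPy rather than the MATLAB used for the experiments; the function name and tolerance are illustrative) of assembling \(\boldsymbol{R}_k(\boldsymbol{\mathcal{I}}_k)\) from the nominal covariance: diagonal entries are scaled by \(1/\mathcal{I}_k^i\), and an off-diagonal entry is retained only when both of its dimensions have \(\mathcal{I}_k^i = 1\).

```python
import numpy as np

def measurement_cov(R_nom, I, tol=1e-12):
    """Build R_k(I_k) from the nominal covariance R_nom and indicator vector I.

    Diagonal:      R[i, i] / I[i]
    Off-diagonal:  R[i, j] kept only if I[i] == 1 and I[j] == 1, else 0.
    """
    I = np.asarray(I, dtype=float)
    nominal = np.isclose(I, 1.0, atol=tol)                 # delta(I^i - 1) as a boolean mask
    R = np.where(np.outer(nominal, nominal), R_nom, 0.0)   # gate the off-diagonal entries
    np.fill_diagonal(R, np.diag(R_nom) / I)                # diagonal is always R^{ii} / I^i
    return R

# Example: dimension 2 flagged as an outlier with a small indicator value
R_nom = np.array([[2.0, 1.0, 1.0],
                  [1.0, 3.0, 1.0],
                  [1.0, 1.0, 2.5]])
I_hat = np.array([1.0, 1e-3, 1.0])
print(measurement_cov(R_nom, I_hat))
```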

As in EMORF, we model the presence (or absence) of outliers through an indicator random vector \(\boldsymbol{\mathcal{I}}_k \in \mathbb{R}^m\). \(\mathcal{I}_k^i \neq 1\) represents the presence of an outlier in the \(i\)-th dimension of measurement at time step \(k\) whereas \(\mathcal{I}_k^i = 1\) represents the absence of an outlier. For inferential tractability and due to lack of prior knowledge of correlations between outliers, we assume that outliers occur independently in each dimension as in EMORF. Similar to the ASOR method, we hierarchically model \(\boldsymbol{\mathcal{I}}_k\) as \[\begin{align} &p\bigl(\boldsymbol{\mathcal{I}}_k \big| b_k\bigr)=\prod_{i=1}^m p\bigl(\mathcal{I}_k^i \big| b_k\bigr) \label{modelI} \end{align}\tag{4}\] with \[\begin{align} p\bigl(\mathcal{I}_k^i\big| b_k\bigr) = (1-\theta_k^i)\overbrace {f(a_k,b_k)(\mathcal{I}_k^i)^{a_k-1}e^{-b_k\mathcal{I}_k^i}}^{Gamma(a_k,b_k)} +\delta\bigl(\mathcal{I}_k^i-1\bigr)\theta_k^i \label{pikbk} \nonumber \end{align}\tag{5}\] where the two components of \(\textit{p}\left(\mathcal{I}_k^i \big| b_k\right)\) are assumed disjoint with the Gamma density defined as zero for \(\mathcal{I}_k^i=1\). \(\theta_k^i\) is the prior probability of no outlier occurring in the \(i\)-th dimension. To model outliers, \(\mathcal{I}_k^i\) is supposed to obey a Gamma distribution supported on the set of positive real numbers i.e. \(\mathcal{I}_k^i \sim \text{Gamma}(a_k, b_k)\) with prior parameters \(a_k,b_k\) and \(f(a_k, b_k) = ({b_k^{a_k}}/{\Gamma(a_k)})\). The parameter \(b_k\) of the Gamma distribution captures the characteristics of outliers. Since the conjugate prior of \(b_k\) is also a Gamma distribution, we use this as the prior on \(b_k\) with parameters \(A_k\) and \(B_k\) given as \[\label{b95prior} \textit{p}(b_k) = f(A_k, B_k) b_k^{A_k-1} e^{-B_k b_k}\tag{6}\]
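For intuition, drawing from this hierarchical prior can be sketched as follows (illustrative helper, Python/NumPy): \(b_k\) is drawn from its conjugate Gamma prior, and each \(\mathcal{I}_k^i\) is either set to 1 (no outlier, with probability \(\theta_k^i\)) or drawn from \(\text{Gamma}(a_k, b_k)\). Note that NumPy parameterizes the Gamma distribution by shape and scale, so the rate parameters \(b_k\) and \(B_k\) enter as reciprocals.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_indicator_prior(m, theta, a_k, A_k, B_k):
    """Draw (I_k, b_k) from the hierarchical prior of Section 2.1 (illustrative)."""
    # b_k ~ Gamma(A_k, B_k) with rate B_k  ->  NumPy uses scale = 1 / rate
    b_k = rng.gamma(shape=A_k, scale=1.0 / B_k)
    I_k = np.ones(m)
    outlier = rng.random(m) > theta                      # outlier with probability 1 - theta^i
    # I_k^i ~ Gamma(a_k, b_k) on the outlier-affected dimensions
    I_k[outlier] = rng.gamma(shape=a_k, scale=1.0 / b_k, size=outlier.sum())
    return I_k, b_k

I_k, b_k = sample_indicator_prior(m=5, theta=0.5, a_k=1.0, A_k=10000.0, B_k=1000.0)
print(I_k, b_k)
```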

3 OUTLIER-ROBUST FILTERING

In Bayesian filtering, the goal is to compute \(\textit{p}(\boldsymbol{x}_{k} \big| \boldsymbol{y}_{1:k})\), i.e., the posterior distribution of the state given the set of observations. Using Bayes’ rule for our modeling setup, the joint posterior \(\textit{p}(\boldsymbol{x}_{k}, \boldsymbol{\mathcal{I}}_k, b_k \big| \boldsymbol{y}_{1:k})\) can be written as \[\textit{p}(\boldsymbol{x}_{k}, \boldsymbol{\mathcal{I}}_k, b_k \big| \boldsymbol{y}_{1:k}) \propto \textit{p}(\boldsymbol{y}_{k} \big| \boldsymbol{\mathcal{I}}_k, \boldsymbol{x}_k) \textit{p}(\boldsymbol{x}_{k}\big| \boldsymbol{y}_{1:k-1}) \textit{p}(\boldsymbol{\mathcal{I}}_k\big| b_k) \textit{p}(b_k) \label{joint}\tag{7}\]

Since marginalizing the joint posterior to obtain the posterior of the state is intractable, we employ the Expectation-Maximization (EM) algorithm to this end [16].

3.1 EM for inference

At each time instant, the E and M steps are invoked iteratively until convergence to approximate the state posterior. In particular, in the E-step we approximate the state distribution as

E-Step \[\textit{q}(\boldsymbol{x}_{{k}})=\textit{p}(\boldsymbol{x}_{k} \big| \boldsymbol{y}_{1: k}, \hat{\boldsymbol{\mathcal{I}}}_k,\hat{b}_k) \propto \textit{p}(\boldsymbol{x}_{k}, \hat{\boldsymbol{\mathcal{I}}}_k,\hat{b}_k \big| \boldsymbol{y}_{1: k}) \label{Estep}\tag{8}\]

The estimates of other parameters \(\hat{b}_k\) and \(\hat{\mathcal{I}}_k^i \;\forall \;i\) are obtained in the M-steps as

M-Steps \[\hat{\mathcal{I}}_k^i=\underset{\mathcal{I}_k^i}{\operatorname{argmax}}\big\langle\ln \textit{p}(\boldsymbol{x}_k, \mathcal{I}_k^i, \hat{\boldsymbol{\mathcal{I}}}_k^{i-},\hat{b}_k \big| \boldsymbol{y}_{1: k})\big \rangle_{\textit{q}(\boldsymbol{x}_{k})} \label{Mstep2}\tag{9}\]

\[\hat{b}_k=\underset{{b}_k}{\operatorname{argmax}}\big\langle\ln \textit{p}(\boldsymbol{x}_k, \hat{\boldsymbol{\mathcal{I}}}_k,{b}_k \big| \boldsymbol{y}_{1: k})\big \rangle_{\textit{q}(\boldsymbol{x}_{k})} \label{Mstep1}\tag{10}\] where \(\hat{\boldsymbol{\mathcal{I}}}_k^{i-}\) contains all the elements of the estimated vector \(\hat{\boldsymbol{\mathcal{I}}}_k\) except the \(i\)-th element.
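The overall recursion at one time step can be organized as in the following structural sketch (Python/NumPy, not the authors' MATLAB implementation; the callables `predict`, `update`, `estimate_indicators`, and `estimate_b` stand for the expressions derived in the remainder of this section, and the stopping rule is the normalized-change criterion described at the end of Section 3.3).

```python
import numpy as np

def emorf2_step(y_k, m_prev, P_prev, predict, update, estimate_indicators,
                estimate_b, b0, m_dim, tol=1e-4, max_iter=50):
    """One EMORF-II time step: alternate the E-step (8) and the M-steps (9)-(10)
    until the normalized change of the state estimate falls below `tol`."""
    m_minus, P_minus = predict(m_prev, P_prev)                   # Eqs. (12)-(13)
    I_hat, b_hat = np.ones(m_dim), b0                            # start from "no outliers"
    m_plus, P_plus = m_minus, P_minus
    for _ in range(max_iter):
        m_old = m_plus
        m_plus, P_plus = update(y_k, m_minus, P_minus, I_hat)    # E-step, Eqs. (15)-(16)
        I_hat = estimate_indicators(y_k, m_plus, P_plus, b_hat)  # M-step, Eq. (19)
        b_hat = estimate_b(I_hat)                                # M-step, Eq. (22)
        if np.linalg.norm(m_plus - m_old) / np.linalg.norm(m_old) < tol:
            break
    return m_plus, P_plus, I_hat, b_hat
```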

3.2 State Estimation

To invoke the E-step in 8 for approximating the posterior distribution of \(\boldsymbol{x}_{k}\), we need to evaluate the joint posterior in 7 . This requires approximating the predictive distribution \(\textit{p}(\boldsymbol{x}_{k} \big| \boldsymbol{y}_{1:k-1})\) at each time step. This is achieved through the prediction step described as follows.

Prediction: Using the process model and the estimated state posterior distribution from the previous time step \(\textit{p}(\boldsymbol{x}_{k-1} \big| \boldsymbol{y}_{1:k-1})\), we can estimate the predictive density. To leverage the standard filtering results, we assume the predictive distribution is Gaussian given as [6] \[\label{x95prior} \textit{p}(\boldsymbol{x}_{k} \big| \boldsymbol{y}_{1:k-1}) \approx \mathcal{N}(\boldsymbol{x}_{k} \big| \boldsymbol{m}_k^-, \boldsymbol{P}_k^-)\tag{11}\] where the parameters of the predictive distribution are \[\begin{align} \tag{12} \boldsymbol{m}_k^- = & \int \boldsymbol{f}(\boldsymbol{x}_{k-1}) \mathcal{N}(\boldsymbol{x}_{k-1} \big| \boldsymbol{m}_{k-1}^+, \boldsymbol{P}_{k-1}^+) \, d\boldsymbol{x}_{k-1}\\ \tag{13} \boldsymbol{P}_k^- = & \int (\boldsymbol{f}(\boldsymbol{x}_{k-1}) - \boldsymbol{m}_k^-)(\boldsymbol{f}(\boldsymbol{x}_{k-1}) - \boldsymbol{m}_k^-)^\top \nonumber \\ &\;\;\;\mathcal{N}(\boldsymbol{x}_{k-1} \big| \boldsymbol{m}_{k-1}^+, \boldsymbol{P}_{k-1}^+) \, d\boldsymbol{x}_{k-1} + \boldsymbol{Q}_{k-1} \end{align}\]
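As one concrete way to evaluate these Gaussian integrals, the sketch below uses plain Monte Carlo sampling for illustration (the experiments in Section 4 instead use the unscented transform); `f` is the process model and the function name is illustrative.

```python
import numpy as np

def predict_moments(f, m_post, P_post, Q, n_samples=10_000, rng=None):
    """Monte Carlo approximation of the predictive mean/covariance, Eqs. (12)-(13)."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.multivariate_normal(m_post, P_post, size=n_samples)  # samples of x_{k-1}
    fx = np.array([f(xi) for xi in x])                           # propagate through f
    m_minus = fx.mean(axis=0)
    d = fx - m_minus
    P_minus = d.T @ d / n_samples + Q
    return m_minus, P_minus
```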

Update: The joint distribution in 7 can be expressed as \[\begin{align} \label{post95ful} &\textit{p}(\boldsymbol{x}_{k}, \boldsymbol{\mathcal{I}}_k, b_k \big| \boldsymbol{y}_{1:k}) \propto \frac{\mathcal{N}(\boldsymbol{x}_{k} \big| \boldsymbol{m}_k^-, \boldsymbol{P}_k^-)}{\sqrt{(2\pi)^m \big| \boldsymbol{R}_k(\boldsymbol{\mathcal{I}}_k)\big| }} \exp\big( \text{-} \frac{1}{2} (\boldsymbol{y}_{k} - \boldsymbol{h}(\boldsymbol{x}_{k}))^\top \nonumber \\ & \boldsymbol{R}_k^{-1}(\boldsymbol{\mathcal{I}}_k) (\boldsymbol{y}_{k} - \boldsymbol{h}(\boldsymbol{x}_{k})) \big) \prod_i \left[(1-\theta_k^i)f(a_k, b_k) (\mathcal{I}_k^i)^{a_k-1} e^{-b_k\mathcal{I}_k^i} + \right. \nonumber \\ &\left. \theta_k^i \delta(\mathcal{I}_k^i - 1) \right] f(A_k, B_k) b_k^{A_k-1} e^{-B_k b_k} \end{align}\tag{14}\] where we have used the expressions for the prior distributions from 4 and 6 and the conditional measurement likelihood considering 2 and 3 . To approximate the posterior distribution for \(\boldsymbol{x}_{k}\) we use 8 and 14 to arrive at \[\begin{align} &\textit{q}(\boldsymbol{x}_{k}) \propto \exp\big( -\tfrac{1}{2} (\boldsymbol{y}_{k} - \boldsymbol{h}(\boldsymbol{x}_{k}))^\top \boldsymbol{R}_k^{-1}(\hat{\boldsymbol{\mathcal{I}}}_k) (\boldsymbol{y}_{k} - \boldsymbol{h}(\boldsymbol{x}_{k})) \nonumber \\ & -\tfrac{1}{2} (\boldsymbol{x}_{k} - \boldsymbol{m}_k^-)^\top (\boldsymbol{P}_k^-)^{-1} (\boldsymbol{x}_{k} - \boldsymbol{m}_k^-) \big) \end{align}\] where \(\boldsymbol{R}_k^{-1}(\hat{\boldsymbol{\mathcal{I}}}_k)\) is the matrix obtained by inverting \(\boldsymbol{R}_k(\hat{\boldsymbol{\mathcal{I}}}_k)\) after plugging in the estimates \(\hat{\boldsymbol{\mathcal{I}}}_k\) from the EM iterations. Finally, to approximate \(\textit{q}(\boldsymbol{x}_{k})\) as a normal distribution \(\mathcal{N}(\boldsymbol{x}_{k} \big| \boldsymbol{m}_k^+, \boldsymbol{P}_k^+)\) we employ the general Gaussian filtering results which provide [6] \[\begin{align} \tag{15} & \boldsymbol{m}_k^+ = \boldsymbol{m}_k^- + \boldsymbol{K}_k (\boldsymbol{y}_{k} - \boldsymbol{\mu}_k)\\ \tag{16} & \boldsymbol{P}_k^+ = \boldsymbol{P}_k^- - \boldsymbol{C}_k \boldsymbol{K}_k^\top \end{align}\] where \[\begin{align} & \boldsymbol{K}_k = \boldsymbol{C}_k (\boldsymbol{U}_k + \boldsymbol{R}_k(\hat{\boldsymbol{\mathcal{I}}}_k))^{-1}\\ & \boldsymbol{\mu}_k = \int \boldsymbol{h}(\boldsymbol{x}_{k}) \mathcal{N}(\boldsymbol{x}_{k} \big| \boldsymbol{m}_k^-, \boldsymbol{P}_k^-) \, d\boldsymbol{x}_{k}\\ & \boldsymbol{U}_k = \int (\boldsymbol{h}(\boldsymbol{x}_{k}) - \boldsymbol{\mu}_k)(\boldsymbol{h}(\boldsymbol{x}_{k}) - \boldsymbol{\mu}_k)^\top \mathcal{N}(\boldsymbol{x}_{k} \big| \boldsymbol{m}_k^-, \boldsymbol{P}_k^-) \, d\boldsymbol{x}_{k}\\ & \boldsymbol{C}_k = \int (\boldsymbol{x}_{k} - \boldsymbol{m}_k^-)(\boldsymbol{h}(\boldsymbol{x}_{k}) - \boldsymbol{\mu}_k)^\top \mathcal{N}(\boldsymbol{x}_{k} \big| \boldsymbol{m}_k^-, \boldsymbol{P}_k^-) \, d\boldsymbol{x}_{k} \end{align}\]
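A matching sketch of the update, again with Monte Carlo moment matching in place of the unscented transform used in the experiments; `h` is the measurement model, `R_I` is \(\boldsymbol{R}_k(\hat{\boldsymbol{\mathcal{I}}}_k)\) built as in Section 2.1, and all names are illustrative.

```python
import numpy as np

def update_moments(h, y, m_minus, P_minus, R_I, n_samples=10_000, rng=None):
    """Gaussian-filter measurement update, Eqs. (15)-(16), with MC moment matching."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.multivariate_normal(m_minus, P_minus, size=n_samples)
    hx = np.array([h(xi) for xi in x])
    mu = hx.mean(axis=0)                         # predicted measurement mu_k
    dh, dx = hx - mu, x - m_minus
    U = dh.T @ dh / n_samples                    # measurement covariance part U_k
    C = dx.T @ dh / n_samples                    # state-measurement cross-covariance C_k
    K = C @ np.linalg.inv(U + R_I)               # gain K_k with R_k(I_hat)
    m_plus = m_minus + K @ (y - mu)
    P_plus = P_minus - C @ K.T
    return m_plus, P_plus
```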

3.3 Parameter Estimation

We can obtain the estimates of all the elements of \(\boldsymbol{\hat{\mathcal{I}}}_k\), denoted \(\hat{\mathcal{I}}^{i}_{k}\), successively using the M-step in 9 , which leads to

\[\begin{align} &\hat{\mathcal{I}}^{i}_{k} = \operatorname*{argmax}_{\mathcal{I}^{i}_{k}} \Bigl\{ -\frac{1}{2} \operatorname{tr}\bigl(\boldsymbol{W}_{k}\boldsymbol{R}^{-1}_{k}(\mathcal{I}^{i}_{k},\hat{\boldsymbol{\mathcal{I}}}^{i-}_{k})\bigr) - \frac{1}{2}\ln\big| \boldsymbol{R}_{k}(\mathcal{I}^{i}_{k},\hat{\boldsymbol{\mathcal{I}}}^{i-}_{k})\big| \nonumber \\ &+ \ln\bigl((1-\theta_{k}^{i})f(a_k, \hat{b}_k) (\mathcal{I}_k^i)^{a_k-1} e^{-\hat{b}_k\mathcal{I}_k^i} + \theta_{k}^{i}\delta(\mathcal{I}^{i}_{k}-1)\bigr) + k_1 \Bigr\} \label{IKi1} \end{align}\tag{17}\] where we use the cyclic property of the trace operator, \(\operatorname{tr}(\boldsymbol{ABC}) = \operatorname{tr}(\boldsymbol{CAB})\), and \(k_1\) is a constant. \(\boldsymbol{R}_{k}(\mathcal{I}^{i}_{k},\hat{\boldsymbol{\mathcal{I}}}^{i-}_{k})\) denotes \(\boldsymbol{R}_{k}(\boldsymbol{\mathcal{I}}_{k})\) evaluated at \(\boldsymbol{\mathcal{I}}_{k}\) with its \(i\)-th element set to \(\mathcal{I}^{i}_{k}\) and the remaining entries set to \(\hat{\boldsymbol{\mathcal{I}}}^{i-}_{k}\), and \(\boldsymbol{W}_{k}\) is given as \[\label{W95est} \boldsymbol{W}_{k} = \int (\boldsymbol{y}_{k} - \boldsymbol{h}(\boldsymbol{x}_{k}))(\boldsymbol{y}_{k} - \boldsymbol{h}(\boldsymbol{x}_{k}))^{\top} \mathcal{N}(\boldsymbol{x}_{k}\big| \boldsymbol{m}^{+}_{k},\boldsymbol{P}^{+}_{k}) d\boldsymbol{x}_{k}\tag{18}\]
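The moment \(\boldsymbol{W}_{k}\) in 18 can be approximated in the same Monte Carlo fashion (illustrative sketch; a sigma-point computation would be used with the UKF of Section 4).

```python
import numpy as np

def residual_moment(h, y, m_plus, P_plus, n_samples=10_000, rng=None):
    """W_k = E[(y - h(x))(y - h(x))^T] under N(x | m_plus, P_plus), Eq. (18)."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.multivariate_normal(m_plus, P_plus, size=n_samples)
    r = y - np.array([h(xi) for xi in x])        # residuals y_k - h(x_k)
    return r.T @ r / n_samples
```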

By maximizing the objective function in 17 , we end up with the following decision criterion for all the \(\hat{\mathcal{I}}^{i}_{k}\) \[\begin{align} \label{Ik95est} & \hat{\mathcal{I}}_k^i = \begin{cases} 1 & \text{if } H_k^i/G_k^i \geq 1 \\ ({\alpha_k -1})/{\beta_k^i} & \text{if } H_k^i/G_k^i < 1 \end{cases} \end{align}\tag{19}\] where \(H_k^i\) and \(G_k^i\) are given as

\[\begin{align} \tag{20} H_k^i = &\exp\Bigl( -\tfrac{1}{2}\ln\bigl\lvert \boldsymbol{R}_{k}(\mathcal{I}_{k}^{i}=1,\hat{\boldsymbol{\mathcal{I}}}^{i-}_{k})\bigr\rvert\\ \nonumber & \quad -\tfrac{1}{2}\,\mathrm{tr}\bigl(\boldsymbol{W}_{k}\,\boldsymbol{R}_{k}^{-1}(\mathcal{I}_{k}^{i}=1,\hat{\boldsymbol{\mathcal{I}}}^{i-}_{k})\bigr) \Bigr)\,\theta_{k}^{i}\\ \tag{21} G_k^i = & \bigl(R_{k}^{ii}\bigr)^{-\tfrac{1}{2}} \bigl\lvert\hat{\boldsymbol{R}}_{k}^{-i,-i}\bigr\rvert^{-\tfrac{1}{2}} \exp\Bigl( -\tfrac{1}{2}\,\mathrm{tr}\bigl(\boldsymbol{W}_{k}^{-i,-i}\,(\hat{\boldsymbol{R}}_{k}^{-i,-i})^{-1}\bigr) \Bigr) \nonumber\\ &\quad (1-\theta_{k}^{i}) \frac{\Gamma(\alpha_{k})\,\hat{b}_k^{\,a_k}}{\Gamma(a_{k})\,(\beta_k^i)^{\alpha_k}} \end{align}\] where for any matrix \(\boldsymbol{R}\) we obtain the sub-matrix \(\boldsymbol{R}^{-i, -i}\) by removing its \(i\)-th column and row. \(\alpha_k\) and \(\beta_k^i\) are given as \[\begin{align} &\alpha_k = a_k + 0.5 \\ &\beta_k^i = \hat{b}_k + 0.5{W_{k}^{ii}}/{{R_{k}^{ii}}} \end{align}\]
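A log-domain sketch of this coordinate-wise decision is given below (Python/NumPy, illustrative function and variable names). It takes `theta` as the vector of prior no-outlier probabilities \(\theta_k^i\), restates the construction of \(\boldsymbol{R}_k(\boldsymbol{\mathcal{I}}_k)\) from Section 2.1 for self-containment, and interprets \(\hat{\boldsymbol{R}}_{k}\) as \(\boldsymbol{R}_{k}(\hat{\boldsymbol{\mathcal{I}}}_{k})\), consistent with EMORF; this reading of 21 is an assumption.

```python
import math
import numpy as np

def build_R(R_nom, I, tol=1e-12):
    """R_k(I_k): diag R^{ii}/I^i, off-diagonals kept only where both indicators equal 1."""
    nominal = np.isclose(I, 1.0, atol=tol)
    R = np.where(np.outer(nominal, nominal), R_nom, 0.0)
    np.fill_diagonal(R, np.diag(R_nom) / np.asarray(I, dtype=float))
    return R

def estimate_indicators(W, R_nom, I_hat, theta, a_k, b_hat):
    """Coordinate-wise M-step for the indicators, Eqs. (19)-(21), in the log domain."""
    m = len(I_hat)
    I_new = I_hat.copy()
    alpha = a_k + 0.5
    for i in range(m):
        # Hypothesis "no outlier": I^i = 1, remaining indicators at their current estimates
        I_nom = I_new.copy()
        I_nom[i] = 1.0
        R1 = build_R(R_nom, I_nom)
        logH = (-0.5 * np.linalg.slogdet(R1)[1]
                - 0.5 * np.trace(W @ np.linalg.inv(R1))
                + math.log(theta[i]))
        # Hypothesis "outlier": the i-th row/column decouples from the rest
        keep = np.delete(np.arange(m), i)
        R_sub = build_R(R_nom, I_new)[np.ix_(keep, keep)]
        beta = b_hat + 0.5 * W[i, i] / R_nom[i, i]
        logG = (-0.5 * math.log(R_nom[i, i])
                - 0.5 * np.linalg.slogdet(R_sub)[1]
                - 0.5 * np.trace(W[np.ix_(keep, keep)] @ np.linalg.inv(R_sub))
                + math.log(1.0 - theta[i])
                + math.lgamma(alpha) - math.lgamma(a_k)
                + a_k * math.log(b_hat) - alpha * math.log(beta))
        # Decision rule of Eq. (19): compare H and G in the log domain
        I_new[i] = 1.0 if logH >= logG else (alpha - 1.0) / beta
    return I_new
```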

Using the M-step in 10 , we can arrive at the following expression for estimating \(\hat{b}_k\) as

\[\begin{align} \label{b95est} \hat{b}_k = ({\overline{A}_k - 1})/{\overline{B}_k} \end{align}\tag{22}\] where \[\overline{A}_k = M_k a_k + A_k \quad \text{and} \quad \overline{B}_k = B_k + \sum_{\{i : \hat{\mathcal{I}}_k^i \neq 1\}} \hat{\mathcal{I}}_k^i\]

In this formulation, the summation defining \(\overline{B}_k\) includes only those indices \(i\) for which \(\hat{\mathcal{I}}_k^i\neq1\), thereby excluding any terms with \(\hat{\mathcal{I}}_k^i=1\). Further, \(M_k=\#\{\,i:\hat{\mathcal{I}}_k^i\neq1\}\) is the number of elements of \(\hat{\boldsymbol{\mathcal{I}}}_k\) not equal to one.
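As code, this update is a short function (sketch; the tolerance used to test \(\hat{\mathcal{I}}_k^i \neq 1\) numerically is an implementation choice).

```python
import numpy as np

def estimate_b(I_hat, a_k, A_k, B_k, tol=1e-12):
    """M-step for b_k, Eq. (22): b_hat = (A_bar - 1) / B_bar."""
    outlier = ~np.isclose(I_hat, 1.0, atol=tol)   # dimensions currently flagged as outliers
    A_bar = outlier.sum() * a_k + A_k              # M_k * a_k + A_k
    B_bar = B_k + I_hat[outlier].sum()             # B_k + sum of indicators over outlier dims
    return (A_bar - 1.0) / B_bar
```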

The pseudocode for the resulting algorithm EMORF-II is shown in Algorithm 2. For convergence, we check if the normalized \(l_2\) norm of changes in state estimates during EM iterations falls below a predefined threshold as proposed in EMORF.

Figure 2: EMORF-II

4 Numerical Experiments

4.1 Evaluation Setup

To demonstrate the capabilities of EMORF-II, we conduct a series of numerical experiments and compare the results with those of recent state-of-the-art VB-based outlier-robust filters. All experiments are performed using Matlab R2023b on a Windows 11 laptop equipped with an Intel i9-13900H processor and 32GB of RAM.

We assume that a target moves according to the following nonlinear state equation proposed in [17] \[\boldsymbol{x}_{k}= \begin{bmatrix} 1 & \dfrac{\sin(\omega_{k-1}\zeta)}{\omega_{k-1}} & 0 & \dfrac{\cos(\omega_{k-1}\zeta)-1}{\omega_{k-1}} & 0 \\ 0 & \cos(\omega_{k-1}\zeta) & 0 & -\sin(\omega_{k-1}\zeta) & 0 \\ 0 & \dfrac{1-\cos(\omega_{k-1}\zeta)}{\omega_{k-1}} & 1 & \dfrac{\sin(\omega_{k-1}\zeta)}{\omega_{k-1}} & 0 \\ 0 & \sin(\omega_{k-1}\zeta) & 0 & \cos(\omega_{k-1}\zeta) & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix} \boldsymbol{x}_{k-1} + \boldsymbol{q}_{k-1}\] where \(\zeta\) is the sampling period and the state vector \(\boldsymbol{x}_{k} = [x_{k},\,\dot{x}_{k},\,y_{k},\,\dot{y}_{k},\,\omega_{k}]^{\mathsf T}\) comprises the planar positions, the corresponding velocities, and the turn rate. The process noise is Gaussian, \(\boldsymbol{q}_{k-1}\!\sim\!\mathcal{N}(\boldsymbol{0},\boldsymbol{Q}_{k-1}),\) with covariance given as [17] \[\boldsymbol{Q}_{k-1} = \begin{bmatrix} \eta_{1}\boldsymbol{M} & \boldsymbol{0} & \boldsymbol{0}\\ \boldsymbol{0} & \eta_{1}\boldsymbol{M} & \boldsymbol{0}\\ \boldsymbol{0} & \boldsymbol{0} & \eta_{2} \end{bmatrix}, \qquad \boldsymbol{M} = \begin{bmatrix} \zeta^{3}/3 & \zeta^{2}/2\\ \zeta^{2}/2 & \zeta \end{bmatrix}\]
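For reference, a sketch of this process model and its noise covariance as code (Python/NumPy, illustrative names; the defaults are the values of Section 4.2, and the model assumes \(\omega_{k-1}\neq 0\)).

```python
import numpy as np

def f_turn(x, zeta=1.0):
    """Coordinated-turn process model; state x = [x, xdot, y, ydot, omega], omega != 0."""
    w = x[4]
    s, c = np.sin(w * zeta), np.cos(w * zeta)
    F = np.array([[1, s / w,       0, (c - 1) / w, 0],
                  [0, c,           0, -s,          0],
                  [0, (1 - c) / w, 1, s / w,       0],
                  [0, s,           0, c,           0],
                  [0, 0,           0, 0,           1]])
    return F @ x

def process_cov(zeta=1.0, eta1=0.1, eta2=1.75e-4):
    """Block-diagonal process noise covariance Q_{k-1}."""
    M = np.array([[zeta**3 / 3, zeta**2 / 2],
                  [zeta**2 / 2, zeta]])
    Q = np.zeros((5, 5))
    Q[0:2, 0:2] = eta1 * M      # (x, xdot) block
    Q[2:4, 2:4] = eta1 * M      # (y, ydot) block
    Q[4, 4] = eta2              # turn-rate noise
    return Q
```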

Similar to EMORF, Time-Difference-Of-Arrival (TDOA) readings are used to estimate the state of the target. To that end, a network of \(m\) Time Of Arrival (TOA)-based range sensors laid out in a zig-zag pattern is assumed for measurements. The \(i\)-th sensor is placed at \((\xi^{\rho_i},\eta^{\rho_i}) = \bigl(350(i-1),\,350\!\bigl((i-1)\bmod2\bigr)\bigr)\) for \(i=1,\dots,m\). Designating sensor 1 as reference produces \(m-1\) TDOA ranges. To emulate corrupted data, we add an outlier vector \(\boldsymbol{o}_{k}\) to the nominal noise as \[\begin{align} \label{y95out} & \boldsymbol{y}_{k} = \boldsymbol{h}(\boldsymbol{x}_{k})+\boldsymbol{r}_{k}+\boldsymbol{o}_{k} \end{align}\tag{23}\] where \[\begin{align} \nonumber & h^{j}(\boldsymbol{x}_{k}) = \sqrt{(x_{k}-\xi^{\rho_{1}})^{2}+(y_{k}-\eta^{\rho_{1}})^{2}} \;-\; \\ \label{h95x} & \sqrt{(x_{k}-\xi^{\rho_{j+1}})^{2}+(y_{k}-\eta^{\rho_{j+1}})^{2}} \quad j=1,\dots,m-1. \end{align}\tag{24}\]

Consequently, the nominal covariance of \(\boldsymbol{r}_{k}\) is fully populated \[\boldsymbol{R}_{k} = \begin{bmatrix} \sigma_{1}^{2}+\sigma_{2}^{2} & \cdots & \sigma_{1}^{2}\\ \vdots & \ddots & \vdots\\ \sigma_{1}^{2} & \cdots & \sigma_{1}^{2}+\sigma_{m}^{2} \end{bmatrix}\] where \(\sigma_{i}^{2}\) denotes the noise variance of sensor \(i\). Moreover, we assume that \(\boldsymbol{o}_{k}\) follows the distribution \[\label{pok} \textit{p}(\boldsymbol{o}_{k}) = \prod_{j=1}^{m-1} \mathcal{J}_{k}^{j}\, \mathcal{N}\!\bigl(o_{k}^{j}\,\big| \,0,\gamma(\sigma_{1}^{2}+\sigma_{j}^{2})\bigr)\tag{25}\] where \(\mathcal{J}_{k}^{j}\in\{0,1\}\) is a Bernoulli indicator marking an outlier in the \(j\)-th channel. Let \(\lambda\) be the probability that a single TOA measurement is corrupted. Since every TDOA involves the first TOA measurement as reference, the probability of observing no outlier in a given TDOA dimension is \((1-\lambda)^{2}\). The factor \(\gamma\) inflates the nominal variance to produce the effect of outliers.
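A sketch of this measurement setup as code (Python/NumPy, illustrative names). Following the statement that \(\gamma\) inflates the nominal variance, the outlier variance of the \(j\)-th TDOA channel is taken as \(\gamma\) times its nominal variance \(\sigma_1^2+\sigma_{j+1}^2\); this reading of 25 is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def sensor_positions(m):
    """Zig-zag layout: sensor i at (350(i-1), 350((i-1) mod 2)), i = 1..m."""
    i = np.arange(m)
    return np.stack([350.0 * i, 350.0 * (i % 2)], axis=1)

def h_tdoa(x, pos):
    """TDOA ranges w.r.t. sensor 1, Eq. (24); state x = [x, xdot, y, ydot, omega]."""
    p = np.array([x[0], x[2]])
    d = np.linalg.norm(p - pos, axis=1)     # TOA range to every sensor
    return d[0] - d[1:]                      # m - 1 TDOA measurements

def nominal_cov(sigma2):
    """Fully populated nominal R_k: sigma_1^2 everywhere plus sigma_{j+1}^2 on the diagonal."""
    return sigma2[0] + np.diag(sigma2[1:])

def sample_outliers(sigma2, lam, gamma):
    """Outlier vector o_k: each TOA channel is corrupted independently with probability lam."""
    toa_out = rng.random(len(sigma2)) < lam            # which TOA channels are corrupted
    J = toa_out[0] | toa_out[1:]                        # TDOA j is hit if sensor 1 or j+1 is
    o = np.zeros(len(sigma2) - 1)
    o[J] = rng.normal(0.0, np.sqrt(gamma * (sigma2[0] + sigma2[1:][J])))
    return o
```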

4.2 Comparative Methods and Parameters

To evaluate the performance, we compare our proposed EMORF-II against several baseline approaches. First, we consider a hypothetical outlier-robust Gaussian filter assuming perfect a priori knowledge of all outlier instances. Furthermore, we assess two variations of the generalized VBKF, namely Gen. VBKF with \(N = 10\) and Gen. VBKF with \(N = 1\) (\(N\) is a hyperparameter of the VBKF filter), together with the independent VBKF estimator (Ind. VBKF) from [4]. We also consider EMORF for comparison.

For simulations, we assume \(\boldsymbol{x}_{0}=[0,1,0,-1,-0.0524]^{\mathrm{T}}\), \(\zeta=1\), \(\eta_{1}=0.1\), \(\eta_{2}=1.75\times 10^{-4}\), \(\sigma_{j}^{2}=10\), and \(K = 100\). All estimators are initialized with \(\boldsymbol{m}_{0}^{+}\sim\mathcal{N}(\boldsymbol{x}_{0},\boldsymbol{P}_{0}^{+})\), \(\boldsymbol{P}_{0}^{+}=\boldsymbol{Q}_{k}\) and use the Unscented Kalman Filter (UKF) [18] as the core Gaussian filter. UKF parameters are set as \(\alpha_{UKF}=1\), \(\beta_{UKF}=2\), \(\kappa_{UKF}=0\). We use a convergence threshold of \(10^{-4}\) and set \(\theta_k^i=0.5\;\forall\;i,k\) as a neutral prior for the absence of outliers, as in EMORF. Other initialization parameters of EMORF-II are set as \(A_k=10000, B_k=1000, a_k=1, \hat{b}_k=10000\) as proposed in ASOR. The numerical experiments are performed for 100 Monte Carlo (MC) runs to capture the error statistics in each scenario.

4.3 Evaluation

Figure 3: RMSE comparison for \(m = 5\), \(\gamma = 1000\), and \(\lambda \in \{0, 0.1, \ldots, 0.6\}\)

First, we evaluate the accuracy of all the considered algorithms with increasing outlier occurrence probabilities denoted by \(\lambda\). We set the parameters \(m = 5\) and \(\gamma = 1000\) in 25 . The resulting error distributions considering the MC runs are visualized using box plots in Fig. 3. As expected, the ideal hypothetical UKF, having perfect knowledge of outlier occurrences, achieves the best performance. Notably, EMORF-II consistently results in lower Root-Mean-Squared-Error (RMSE) as compared to other practical methods as \(\lambda\) increases. This improvement stems from its ability to adaptively learn outlier characteristics and use this information for their mitigation. In contrast, EMORF exhibits progressively larger estimation errors as outliers become more frequent, highlighting its inability to dynamically learn the outlier characteristics. The performance of the other estimators gets even worse with increasing \(\lambda\) (see the experimental section of EMORF for more details).

Figure 4: RMSE comparison with \(\lambda = 0.4\), \(\gamma = 1000\) and \(m \in \{5, 10, 15, 20\}\).

In the next scenario, with \(\lambda = 0.4\) and \(\gamma = 1000\), we evaluate the error performance of each algorithm with increasing \(m\). The resulting RMSE values are presented as box plots in Fig. 4. The results demonstrate that while all algorithms result in reduced RMSE as \(m\) increases, EMORF-II consistently outperforms all the practical methods. This highlights the distinguishing feature of EMORF-II, i.e., its ability to adaptively learn outlier characteristics.

Figure 5: Execution time comparison with \(\lambda = 0.4\), \(\gamma = 1000\) and \(m \in \{5, 10, 15, 20\}\).

In the third setup, we evaluate the computational efficiency of each algorithm by measuring their running times with increasing \(m\), keeping the other simulation parameters identical to the previous experiment. The distributions of execution times across all methods are presented in Fig. 5. The results indicate that EMORF-II exhibits the second-highest computational cost, behind only Gen. VBKF (\(N = 10\)). The increased overhead, as compared to EMORF, is primarily due to the additional processing required to estimate \(\hat{b}_k\), which facilitates adaptive learning of outlier statistics. Nevertheless, the computational complexity remains of the same order, \(\mathcal{O}(m^4)\), as in EMORF and the Gen. VBKFs. This can be attributed to computing determinants and inverses of \(m \times m\) matrices with complexity \(\mathcal{O}(m^3)\) for each \(\mathcal{I}_k^i\) (for \(i \in \{1, \ldots, m\}\)).

5 Conclusion

We presented an outlier-robust filter, namely EMORF-II, for the general case where the measurement noise is correlated. Using insights from ASOR, we modify the structure of EMORF to devise EMORF-II, enabling it to learn outlier characteristics during inference along with outlier detection. Numerical experiments under different scenarios verify that EMORF-II is superior in terms of error performance compared with similar state-of-the-art methods. However, the improved performance comes at the price of increased computational overhead. Nevertheless, the complexity order remains \(\mathcal{O}(m^4)\), as exhibited by other algorithms that consider correlated measurement noise. This makes EMORF-II a useful candidate for a range of filtering applications.

References

[1]
Kimmo Suotsalo and Simo Särkkä, “On-line Bayesian parameter estimation in electrocardiogram state-space models,” in 2018 IEEE 28th International Workshop on Machine Learning for Signal Processing (MLSP), 2018, pp. 1–6.
[2]
A. Dubey, A. Santra, J. Fuchs, M. Lübke, R. Weigel, and F. Lurz, “Bayesradar: Bayesian metric-Kalman filter learning for improved and reliable radar target classification,” in 2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP), 2021, pp. 1–6.
[3]
Hang Geng, Zidong Wang, Yun Chen, Fuad E. Alsaadi, and Yuhua Cheng, “Multi-sensor filtering fusion with parametric uncertainties and measurement censoring: Monotonicity and boundedness,” IEEE Transactions on Signal Processing, vol. 69, pp. 5875–5890, 2021.
[4]
Haoqing Li, Daniel Medina, Jordi Vilà-Valls, and Pau Closas, “Robust variational-based Kalman filter for outlier rejection with correlated measurements,” IEEE Transactions on Signal Processing, vol. 69, pp. 357–369, 2020.
[5]
Aamir Hussain Chughtai, Muhammad Tahir, and Momin Uppal, “EMORF/S: EM-based outlier-robust filtering and smoothing with correlated measurement noise,” IEEE Transactions on Signal Processing, vol. 72, pp. 4318–4331, 2024.
[6]
Simo Särkkä and Lennart Svensson, Bayesian Filtering and Smoothing, vol. 17, Cambridge University Press, 2023.
[7]
Yulong Huang, Yonggang Zhang, Ning Li, and Jonathon Chambers, “Robust Student’s t–based nonlinear filter and smoother,” IEEE Transactions on Aerospace and Electronic Systems, vol. 52, pp. 2586–2596, Oct. 2016.
[8]
Christopher Karlgaard, “Nonlinear regression Huber–Kalman filtering and fixed-interval smoothing,” Journal of Guidance, Control, and Dynamics, vol. 38, pp. 322–330, Feb. 2015.
[9]
Shaobu Wang, Wenzhong Gao, and A.P. Meliopoulos, “An alternative method for power system dynamic state estimation based on unscented transform,” IEEE Transactions on Power Systems, vol. 27, pp. 942–950, May 2012.
[10]
Akio Nakabayashi and Genta Ueno, “Nonlinear filtering method using a switching error model for outlier-contaminated observations,” IEEE Transactions on Automatic Control, vol. 65, no. 7, pp. 3150–3156, 2019.
[11]
Aamir Hussain Chughtai, Muhammad Tahir, and Momin Uppal, “Outlier-robust filtering for Nonlinear systems with selective observations rejection,” IEEE Sensors Journal, vol. 22, no. 7, pp. 6887–6897, 2022.
[12]
Robert Piché, Simo Särkkä, and Jouni Hartikainen, “Recursive outlier-robust filtering and smoothing for nonlinear systems using the multivariate Student-t distribution,” in 2012 IEEE International Workshop on Machine Learning for Signal Processing, 2012, pp. 1–6.
[13]
Yunmin Zhu, Rick Blum, Zhi-Quan Luo, and Kon Wong, “Unexpected properties and optimum-distributed sensor detectors for dependent observation cases,” IEEE Transactions on Automatic Control, vol. 45, pp. 62–72, Feb. 2000.
[14]
R. Kaune, J. Hörst, and Wolfgang Koch, “Accuracy analysis for TDOA localization in sensor networks,” 2011, pp. 1–8.
[15]
Aamir Hussain Chughtai, Muhammad Tahir, and Momin Uppal, “Bayesian heuristics for robust spatial perception,” IEEE Transactions on Instrumentation and Measurement, vol. 73, pp. 1–11, 2024.
[16]
Kevin P. Murphy, Machine Learning: A Probabilistic Perspective, MIT Press, 2012.
[17]
Hongwei Wang, Hongbin Li, Jun Fang, and Heping Wang, “Robust Gaussian Kalman filter with outlier detection,” IEEE Signal Processing Letters, vol. 25, no. 8, pp. 1236–1240, 2018.
[18]
Eric A. Wan and Rudolph Van Der Merwe, “The unscented Kalman filter,” Kalman Filtering and Neural Networks, pp. 221–280, 2001.