This paper addresses the problem of protecting network information from system identification (SI) attacks when sharing cyber-physical system simulations. We model analyst observations of networked states as time-series outputs of a graph filter driven by differentially private (DP) nodal excitations, with the analyst aiming to infer the underlying graph shift operator (GSO). Unlike traditional SI, which estimates system parameters, we study the inverse problem: what assumptions prevent adversaries from identifying the GSO while preserving utility for legitimate analysis? We show that applying DP mechanisms to the inputs provides formal privacy guarantees for the GSO, linking the \((\epsilon,\delta)\)-DP bound to the spectral properties of the graph filter and the noise covariance. More precisely, for DP Gaussian signals, the spectral characteristics of both the filter and the noise covariance determine the privacy bound, with smooth filters and low-condition-number covariances yielding greater privacy.
Graph Filters, Graph Signal Processing (GSP), Differential Privacy (DP)
One classic problem in signal processing is system identification (SI). The goal, given observations of a system driven by an excitation signal, is to estimate information about the system [1]. Formally, we denote our observations by \({\boldsymbol{y}}_t\in\mathbb{R}^n\), our excitations by \({\boldsymbol{u}}_t\in\mathbb{R}^n\), and our system by \({\mathcal{H}}(\cdot;\boldsymbol{\theta})\), where \({\mathcal{H}}(\cdot;\boldsymbol{\theta})\) is a linear shift invariant (LSI) system with parametrization \(\boldsymbol{\theta}\in\mathbb{R}^d\). Using this notation we have the following input-output model \[\begin{align} {\boldsymbol{y}}_t={\mathcal{H}}({\boldsymbol{u}}_t;\boldsymbol{\theta}). \label{eq:gen_system} \end{align}\tag{1}\] Typically, one first fixes a model class for \({\mathcal{H}}(\cdot;\boldsymbol{\theta})\) and then applies known excitations in order to estimate the parameter vector \(\boldsymbol{\theta}\) that best explains the input–output observations [2]. For example, in Ho and Kalman's seminal work [3], the impulse response of the deterministic system is determined and a minimal state-space model is then reconstructed algebraically by constructing and factoring a Hankel matrix (a block matrix of the inputs and outputs). This approach was later extended to handle stochasticity in both N4SID [4] and MOESP [5]. While the literature on SI is quite mature and developed, we are specifically interested in systems comprised of graph filters.
A graph filter is an LSI map where the invariance is with respect to the graph shift operator (GSO), which is typically represented by the graph Laplacian or adjacency matrix. For example, the order \(K\) polynomial graph filter, which is popular in graph convolutional neural networks as the Chebyshev filter [6], [7], is given by \[\begin{align} {\boldsymbol{H}}({\boldsymbol{S}};{\boldsymbol{h}}):=\sum_{k=0}^K{\boldsymbol{h}}_k{\boldsymbol{S}}^k. \end{align}\] In fact, thinking of graph filters as polynomials of the GSO is particularly useful since, by applying the Cayley-Hamilton theorem [8], all graph filters can be represented by a sufficiently high, possibly infinite, degree polynomial of the GSO. Using this graph filter formulation and letting \(\boldsymbol{\theta}=[{\boldsymbol{S}},{\boldsymbol{h}}]\), 1 reduces to \[\begin{align} {\boldsymbol{y}}_t= {\boldsymbol{H}}({\boldsymbol{S}};{\boldsymbol{h}}){\boldsymbol{u}}_t. \label{eq:gf_setup} \end{align}\tag{2}\] Two classic examples of 2 include the diffusion dynamics in social networks and the DC power flow model. More generally, the typical SI task is to find the coefficients \({\boldsymbol{h}}\) given a known \({\boldsymbol{S}}\) and a known graph filter class. In this regime, if \({\boldsymbol{u}}_t\) is known, identifying \(\boldsymbol{\theta}\) reduces to a least-squares problem given the linearity. However, the problem is made more interesting by instead assuming \({\boldsymbol{S}}\) is unknown and/or by assuming \({\boldsymbol{u}}_t\) is unknown. This setting is referred to as blind SI and is considerably more challenging. In fact, if no assumptions about \({\boldsymbol{H}}\), \({\boldsymbol{S}}\), or \({\boldsymbol{u}}_t\) are made, the problem is ill-posed [9].
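To make the polynomial filter concrete, the following sketch (in Python/NumPy; the path-graph GSO, the taps, and all variable names are ours, purely for illustration) builds \({\boldsymbol{H}}({\boldsymbol{S}};{\boldsymbol{h}})\) by accumulating powers of \({\boldsymbol{S}}\) and applies it to an excitation as in 2.

```python
import numpy as np

def polynomial_graph_filter(S, h):
    """Order-K polynomial graph filter H(S; h) = sum_k h[k] * S^k."""
    n = S.shape[0]
    H = np.zeros((n, n))
    Sk = np.eye(n)              # S^0
    for hk in h:
        H += hk * Sk            # accumulate h_k * S^k
        Sk = Sk @ S             # next power of S
    return H

# y_t = H(S; h) u_t on a 3-node path graph (illustrative)
S = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])    # adjacency matrix as the GSO
h = [1.0, 0.5, 0.25]            # K = 2 filter taps
u_t = np.random.default_rng(0).normal(size=3)
y_t = polynomial_graph_filter(S, h) @ u_t
```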
We are most interested in the case studied by [9]–[11] where both \({\boldsymbol{S}}\) and \({\boldsymbol{u}}_t\) are unknown. Unlike prior work, our goal is to prevent an adversary from solving the identification problem rather than solving it ourselves. Suppose we want to release \({\boldsymbol{y}}_t\) to an analyst so that they can perform some data analysis on the graph signals. We want the analyst to obtain good results, in the Mean Square Error (MSE) sense, but we do not want them to infer information about \({\boldsymbol{S}}\). More precisely, we are interested in the scenario where the input signals have been made Differentially Private (DP) a priori. DP is a framework that adds noise to a release of statistics in a way that provides probabilistic guarantees that the releases of any statistic over two adjacent datasets are indistinguishable. Formally, we say a mechanism is DP if it satisfies the following definition.
Definition 1 (\((\epsilon,\delta)\)-DP [12]). Let \({\mathcal{M}}\) be a randomized algorithm for publishing a query defined over subsets of dataset \({\mathcal{D}}\). Then \(\forall {\boldsymbol{X}},{\boldsymbol{X}}'\subset{\mathcal{D}}\) such that \({\boldsymbol{X}}\) and \({\boldsymbol{X}}'\) are adjacent, \({\mathcal{M}}\) is \((\epsilon,\delta)\)-DP if \(\forall{\mathcal{S}}\subset\operatorname{Range}({\mathcal{M}})\) \[\begin{align} \Pr\left[{\mathcal{M}}({\boldsymbol{X}})\in{\mathcal{S}}\right]\leq e^\epsilon\Pr\left[{\mathcal{M}}({\boldsymbol{X}}')\in{\mathcal{S}}\right]+\delta. \label{eq:dp} \end{align}\qquad{(1)}\]
Using the notation in Definition 1, we assume each \({\boldsymbol{u}}_t={\mathcal{M}}({\boldsymbol{X}})\) is the output of the DP mechanism. A convenient property of DP is the conservation of privacy under post-processing [12]. That is, since \({\boldsymbol{y}}_t\) is a post-processing of \({\boldsymbol{u}}_t\), releasing \({\boldsymbol{y}}_t\) does not weaken the privacy guarantee on the input. However, we focus on the privacy of \({\boldsymbol{S}}\) rather than \({\boldsymbol{u}}_t\). The central question is: without altering \({\boldsymbol{H}}({\boldsymbol{S}})\), can the DP signals \({\boldsymbol{u}}_t\) protect \({\boldsymbol{S}}\) and limit an adversary's ability to identify it? We address this by deriving explicit conditions on the noise structure of \({\boldsymbol{u}}_t\) under a Gaussian prior. In fact, to the best of our knowledge, we are the first to provide results on protecting the privacy of a GSO in a graph filter given DP inputs. Before stating the formal problem, we introduce a motivating example.
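As an illustration of this pipeline, the sketch below (assuming the classical Gaussian mechanism calibration from [12], valid for \(\epsilon\leq 1\); the sensitivity, the placeholder filter, and the names are assumptions of ours) generates DP excitations and a released signal that is a post-processing of them.

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_mechanism(X, sensitivity, eps, delta):
    """Additive Gaussian mechanism with the classical calibration
    sigma = sensitivity * sqrt(2 ln(1.25/delta)) / eps (see [12])."""
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return X + rng.normal(scale=sigma, size=X.shape)

n, T = 7, 20
U = rng.normal(size=(n, T))                      # true (private) excitations
U_tilde = gaussian_mechanism(U, sensitivity=1.0, eps=1.0, delta=1e-5)

# an invertible placeholder filter; the release is post-processing of U_tilde
H = np.linalg.inv(np.eye(n) - 0.01 * rng.random((n, n)))
Y_tilde = H @ U_tilde                            # DP w.r.t. U by post-processing
```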
Graph Signal Processing (GSP) has been broadly applied to approximate the systems underlying many real-world datasets governed by network dynamics, as such data often exhibit low-pass graph-filter characteristics [13]. In power systems, we are interested in protecting both the system and the consumption data. The consumption data is \({\boldsymbol{u}}_t\), while \({\boldsymbol{y}}_t\) represents the voltage angles (states). These states are generated via a graph filter with the consumption data as input. Since customer consumption data is generally considered private, a mechanism is required to protect it. While many mechanisms exist, DP is the only mechanism with provable guarantees against an arbitrarily powerful adversary and has already seen success on consumption data [14], [15]. The question thus becomes: given \({\boldsymbol{u}}_t\) has been made DP, is the privacy of the grid structure also protected when we release the states?
Consider a graph \(\mathcal{G} = (\mathcal{V}, \mathcal{E})\) with \(n\) vertices and GSO denoted by \({\boldsymbol{S}}\). Let \({\boldsymbol{y}}_t\in\mathbb{R}^n\) be a graph signal, \({\boldsymbol{H}}({\boldsymbol{S}})\) a graph filter, and \({\boldsymbol{u}}_t\) an excitation. Combining these, we have the standard graph filter denoted as \({\boldsymbol{y}}_t={\boldsymbol{H}}({\boldsymbol{S}}){\boldsymbol{u}}_t.\) Furthermore, let \(\tilde{{\boldsymbol{u}}}_t\) be a DP excitation with distribution given by \(f_{{\boldsymbol{u}}}(\cdot;\boldsymbol{\phi})\) such that \(\tilde{{\boldsymbol{y}}}_t={\boldsymbol{H}}({\boldsymbol{S}})\tilde{{\boldsymbol{u}}}_t.\) Extending this to the matrix format, let \(\tilde{{\boldsymbol{Y}}},\tilde{{\boldsymbol{U}}}\in\mathbb{R}^{n\times T}\) be formed by column stacking \(T\) observations and excitations, yielding \(\tilde{{\boldsymbol{Y}}}={\boldsymbol{H}}({\boldsymbol{S}})\tilde{{\boldsymbol{U}}},\) where \(\tilde{{\boldsymbol{U}}}\)'s probability density function (pdf) is given by \(f_{{\boldsymbol{U}}}(\cdot;\boldsymbol{\phi})\). The goal is to show that given \(\tilde{{\boldsymbol{U}}}\) is DP, we get a DP guarantee, as in Definition 1, on the release \(\tilde{{\boldsymbol{Y}}}\) with respect to (w.r.t.) \({\boldsymbol{S}}\). Before continuing, we introduce a strictly tighter notion of DP known as probabilistic DP (PDP), where PDP \(\implies\) DP but DP \(\centernot\implies\) PDP [16].
Definition 2 (\((\epsilon,\delta)\)-PDP [17]). Let \({\boldsymbol{X}},{\boldsymbol{X}}'\subset\mathcal{D}\) be adjacent subsets of dataset \(\mathcal{D}\). Let \(\mathcal{M}\) denote a query mechanism over \(\mathcal{D}\) with output given by \(\tilde{q}\), where \(\tilde{q}\sim f(\tilde{q}|{\boldsymbol{X}})\). We say \(\mathcal{M}\) is PDP if \[\begin{align} \Pr \left(\left|\ln\dfrac{f(\tilde{q}|{\boldsymbol{X}})}{f(\tilde{q}|{\boldsymbol{X}}')}\right|>\epsilon\right)\leq \delta. \end{align}\]
Additionally, we need to define a notion of adjacency. We consider two GSOs \({\boldsymbol{S}}\) and \({\boldsymbol{S}}'\) to be adjacent if and only if \[\begin{align} \left\lVert{\boldsymbol{S}}-{\boldsymbol{S}}'\right\rVert_2\leq \Delta_{{\boldsymbol{S}}} ~~~\text{and}~~~\operatorname{rank}({\boldsymbol{S}}-{\boldsymbol{S}}')\leq 1. \label{eq:adjacent} \end{align}\tag{3}\] Equation 3 requires that the graphs differ by only a single edge change and that the spectrum of this change is bounded. In the traditional DP scenario, adjacent datasets are typically defined as having a single row change with bounded norm [12], and thus restricting a GSO to bounded single edge changes is the appropriate parallel [18], [19]. Finally, to align our scenario within the PDP framework, we let \({\boldsymbol{p}}(\operatorname{vec}\left({\boldsymbol{Y}}\right)|{\boldsymbol{S}};\boldsymbol{\phi})\) refer to the pdf of \({\boldsymbol{Y}}\) given GSO \({\boldsymbol{S}}\) and parameter vector \(\boldsymbol{\phi}\) for \(f_{{\boldsymbol{U}}}(\cdot;\boldsymbol{\phi})\). We then define \[\begin{align} {\mathcal{L}}_{{\boldsymbol{S}},{\boldsymbol{S}}'}\left(\operatorname{vec}\left({\boldsymbol{Y}}\right)\right):=\ln\left(\dfrac{{\boldsymbol{p}}(\operatorname{vec}\left({\boldsymbol{Y}}\right)|{\boldsymbol{S}};\boldsymbol{\phi})}{{\boldsymbol{p}}(\operatorname{vec}\left({\boldsymbol{Y}}\right)|{\boldsymbol{S}}';\boldsymbol{\phi})}\right). \end{align}\] Therefore, we want to verify that \[\begin{align} \Pr\left[{\mathcal{L}}_{{\boldsymbol{S}},{\boldsymbol{S}}'}\left(\operatorname{vec}\left({\boldsymbol{Y}}\right)\right)>\epsilon\right]\leq \delta. \label{eq:goal} \end{align}\tag{4}\] In the following section, we verify 4 and find explicit conditions assuming \(f_{{\boldsymbol{U}}}(\cdot;\boldsymbol{\phi})\) is Gaussian.
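Numerically, the adjacency notion in 3 is easy to check; the helper below (illustrative code of ours; the rank tolerance is an implementation choice) verifies the spectral-norm and rank conditions, and the usage shows that a single directed edge perturbation is a rank-one change.

```python
import numpy as np

def adjacent_gsos(S, Sp, Delta_S, tol=1e-10):
    """Check the adjacency notion: bounded spectral norm and rank <= 1 change."""
    D = S - Sp
    spec_ok = np.linalg.norm(D, 2) <= Delta_S     # ||S - S'||_2 <= Delta_S
    rank_ok = np.linalg.matrix_rank(D, tol=tol) <= 1
    return spec_ok and rank_ok

rng = np.random.default_rng(2)
n = 5
S = rng.random((n, n))
E = np.zeros((n, n))
E[1, 3] = 0.1                  # single (directed) edge perturbation, rank one
assert adjacent_gsos(S, S + E, Delta_S=0.2)
```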
For ease of exposition and analysis we assume, for now, that \({\boldsymbol{H}}({\boldsymbol{S}})\) is invertible. We will later show how to generalize the analytical results. Using the change-of-variables formula, and letting \({\boldsymbol{H}}_{{\boldsymbol{S}}}^{-1}:={\boldsymbol{I}}_T\otimes{\boldsymbol{H}}^{-1}({\boldsymbol{S}})\), we have \[\begin{align} &{\boldsymbol{p}}\left(\operatorname{vec}\left({\boldsymbol{Y}}\right)|{\boldsymbol{S}};\boldsymbol{\phi}\right)=\dfrac{1}{|\det{{\boldsymbol{H}}({\boldsymbol{S}})}|^T}f_{{\boldsymbol{U}}}({\boldsymbol{H}}_{{\boldsymbol{S}}}^{-1}\operatorname{vec}\left({\boldsymbol{Y}}\right);\boldsymbol{\phi}).\nonumber \end{align}\] This yields an explicit log-likelihood ratio of \[\begin{align} &{\mathcal{L}}_{{\boldsymbol{S}},{\boldsymbol{S}}'}(\operatorname{vec}\left({\boldsymbol{Y}}\right)):= \nonumber\\ &~~~T\ln\left(\left|\dfrac{\det{{\boldsymbol{H}}({\boldsymbol{S}}')}}{\det{{\boldsymbol{H}}({\boldsymbol{S}})}}\right|\right)+\ln\left(\dfrac{f_{{\boldsymbol{U}}}\left({\boldsymbol{H}}_{{\boldsymbol{S}}}^{-1}\operatorname{vec}\left({\boldsymbol{Y}}\right);\boldsymbol{\phi}\right)}{f_{{\boldsymbol{U}}}\left({\boldsymbol{H}}_{{\boldsymbol{S}}'}^{-1}\operatorname{vec}\left({\boldsymbol{Y}}\right);\boldsymbol{\phi}\right)}\right).\label{eq:log-lik} \end{align}\tag{5}\] We therefore have the following theorem.
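For Gaussian inputs, 5 can be sanity-checked numerically: since \(\operatorname{vec}(\tilde{{\boldsymbol{Y}}})={\boldsymbol{H}}_{{\boldsymbol{S}}}\operatorname{vec}(\tilde{{\boldsymbol{U}}})\), the change of variables is equivalent to evaluating the Gaussian density of \(\operatorname{vec}(\tilde{{\boldsymbol{Y}}})\) directly. A minimal sketch, with illustrative filters and an identity input covariance of our choosing:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(3)
n, T = 4, 3

# two nearby invertible filters standing in for H(S) and H(S')
HS = np.eye(n) + 0.1 * rng.random((n, n))
HSp = HS.copy()
HSp[0, 1] += 0.05

M = rng.normal(size=(n, T))                   # mean of U
Sigma = np.eye(n * T)                         # covariance of vec(U)
H_big = np.kron(np.eye(T), HS)                # H_S = I_T (x) H(S)
Hp_big = np.kron(np.eye(T), HSp)

m_vec = M.flatten(order='F')                  # vec(M), column stacking
y = H_big @ (m_vec + rng.normal(size=n * T))  # one draw of vec(Y)

def log_p(y, Hb):
    # p(vec(Y)|S) = N(Hb vec(M), Hb Sigma Hb^T): the pushforward density,
    # identical to the change-of-variables expression above
    return multivariate_normal.logpdf(y, Hb @ m_vec, Hb @ Sigma @ Hb.T)

L_ratio = log_p(y, H_big) - log_p(y, Hp_big)  # the ratio in (5)
```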
Theorem 1. Let, for \(\alpha>1\), \(D_{\alpha}(P||Q)\) be the order \(\alpha\) Rényi Divergence for distributions \(P\) and \(Q\). Then, for the log-likelihood ratio \({\mathcal{L}}_{{\boldsymbol{S}},{\boldsymbol{S}}'}(\cdot)\) defined in 5 and where \[\begin{align} P:=\dfrac{f_{{\boldsymbol{U}}}\left({\boldsymbol{H}}_{{\boldsymbol{S}}}^{-1}\operatorname{vec}\left({\boldsymbol{Y}}\right);\boldsymbol{\phi}\right)}{|\det{{\boldsymbol{H}}({\boldsymbol{S}})}|^T}~~Q:=\dfrac{f_{{\boldsymbol{U}}}\left({\boldsymbol{H}}_{{\boldsymbol{S}}'}^{-1}\operatorname{vec}\left({\boldsymbol{Y}}\right);\boldsymbol{\phi}\right)}{|\det{{\boldsymbol{H}}({\boldsymbol{S}}')}|^T}\nonumber \end{align}\]we have \[\begin{align} &\Pr\left({\mathcal{L}}_{{\boldsymbol{S}},{\boldsymbol{S}}'}(\cdot)>\epsilon\right)\leq \inf_{\alpha>1}\exp\left\{ (\alpha-1)D_{\alpha}\left(\left.P\right|\left|Q\right.\right)-\alpha \epsilon\right\}.\label{eq:renyi_final} \end{align}\qquad{(2)}\]
Proof. Given that \(D_{\alpha}(P||Q)=\dfrac{1}{\alpha-1}\log\int P^{\alpha}Q^{1-\alpha}\), we can apply the Chernoff bound to 5, yielding \[\begin{align} \inf_{\alpha>1}e^{-\alpha\epsilon}\mathbb{E}\left[e^{\alpha{\mathcal{L}}_{{\boldsymbol{S}},{\boldsymbol{S}}'}(\cdot)}\right]=\inf_{\alpha>1}\exp\left\{ (\alpha-1)D_{\alpha}\left(\left.P\right|\left|Q\right.\right)-\alpha \epsilon\right\}. \nonumber \end{align}\] ◻
Next, we introduce the following corollary when \(\tilde{{\boldsymbol{U}}}\) is given by a Gaussian DP mechanism. We do not assume our DP mechanism is i.i.d. In fact, returning to the power systems example, if \({\boldsymbol{S}}\) represents the power grid and \({\boldsymbol{U}}\) represents the active power, then \({\boldsymbol{U}}\) would have both spatial and temporal correlations as a result of weather patterns, and thus the DP mechanism must account for these correlations as well.
Corollary 1. In the case when \(\operatorname{vec}\left(\tilde{{\boldsymbol{U}}}\right)\sim{\mathcal{N}}\left(\operatorname{vec}\left({\boldsymbol{M}}\right),\boldsymbol{\Sigma}_{Tn}\right)\), where \(\boldsymbol{\Sigma}_{Tn}\) is assumed to be invertible, denote \({\boldsymbol{H}}_{{\boldsymbol{S}}}:={\boldsymbol{I}}_T\otimes{\boldsymbol{H}}({\boldsymbol{S}})\) and let \[\begin{align} \boldsymbol{\mu}:= \operatorname{vec}\left({\boldsymbol{H}}({\boldsymbol{S}}){\boldsymbol{M}}\right) \quad \boldsymbol{\mu}':=\operatorname{vec}\left({\boldsymbol{H}}({\boldsymbol{S}}'){\boldsymbol{M}}\right), \label{eq:cor_means} \end{align}\qquad{(3)}\] \[\begin{align} {\boldsymbol{K}}'&:={\boldsymbol{H}}_{{\boldsymbol{S}}'}\boldsymbol{\Sigma}_{Tn}{\boldsymbol{H}}_{{\boldsymbol{S}}'}^\top~~ {\boldsymbol{K}}:={\boldsymbol{H}}_{{\boldsymbol{S}}}\boldsymbol{\Sigma}_{Tn}{\boldsymbol{H}}_{{\boldsymbol{S}}}^\top \label{eq:cor_cov} \end{align}\qquad{(4)}\] and \[\begin{align} {\boldsymbol{B}}_{\alpha}=(1-\alpha)({\boldsymbol{K}}')^{-1}+\alpha({\boldsymbol{K}})^{-1}. \end{align}\] Then the probability bound in 2 is bounded by the exponential of \[\begin{align} &\inf_{\alpha>1}\left\{\dfrac{1}{2(\alpha-1)}\left[\ln\left(\dfrac{\det({\boldsymbol{B}}_{\alpha}^{-1})}{\left(\det({\boldsymbol{K}}')\right)^{1-\alpha}\left(\det({\boldsymbol{K}})\right)^{\alpha}}\right)\right]\right.\nonumber\\ &~~~~~~\left.+\dfrac{\alpha}{2(\alpha-1)}(\boldsymbol{\mu}-\boldsymbol{\mu}')^\top{\boldsymbol{B}}_{\alpha}(\boldsymbol{\mu}-\boldsymbol{\mu}')-\alpha\epsilon\right\}. \label{eq:cor} \end{align}\qquad{(5)}\]
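A direct numerical transcription of this bound (the \(\alpha\)-grid, the positive-definiteness guard on \({\boldsymbol{B}}_{\alpha}\), the cap at 1, and all names are our implementation choices):

```python
import numpy as np

def corollary1_bound(mu, mup, K, Kp, eps, alphas=np.linspace(1.01, 50.0, 500)):
    """Evaluate the exponent of Corollary 1 over a grid of alpha and
    return the tightest resulting probability bound."""
    d = mu - mup
    Kinv, Kpinv = np.linalg.inv(K), np.linalg.inv(Kp)
    _, logdet_K = np.linalg.slogdet(K)
    _, logdet_Kp = np.linalg.slogdet(Kp)
    best = np.inf
    for a in alphas:
        B = (1.0 - a) * Kpinv + a * Kinv
        # B_alpha must be positive definite for the expression to be finite
        if np.min(np.linalg.eigvalsh((B + B.T) / 2.0)) <= 0:
            continue
        _, logdet_B = np.linalg.slogdet(B)
        term1 = (-logdet_B - ((1.0 - a) * logdet_Kp + a * logdet_K)) \
            / (2.0 * (a - 1.0))
        term2 = a / (2.0 * (a - 1.0)) * d @ B @ d
        best = min(best, term1 + term2 - a * eps)
    return float(min(1.0, np.exp(best)))   # a probability bound never exceeds 1
```

Plugging in the \(\boldsymbol{\mu},\boldsymbol{\mu}'\) and \({\boldsymbol{K}},{\boldsymbol{K}}'\) of 3 and 4 then returns the tightest bound over the grid.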
In the following theorem we provide explicit conditions on the covariance matrix \(\boldsymbol{\Sigma}_{Tn}\).
Theorem 2. Using the notation in Corollary 1, we further assume, by Lipschitz continuity of the graph filter, that for some constant \(\kappa\) \[\begin{align} \left\lVert{\boldsymbol{H}}({\boldsymbol{S}})-{\boldsymbol{H}}({\boldsymbol{S}}')\right\rVert\leq \kappa \Delta_{{\boldsymbol{S}}}, \end{align}\] and that, uniformly over all \({\boldsymbol{S}}\), \[\begin{align} \gamma\leq \sigma_{\min}({\boldsymbol{H}}({\boldsymbol{S}})) \quad \left\lVert{\boldsymbol{H}}({\boldsymbol{S}})\right\rVert\leq \Gamma. \end{align}\] Then, we define \[\begin{align} \omega:=\gamma^2\lambda_{\min}(\boldsymbol{\Sigma}_{Tn}) ~~~~ \Omega:=\Gamma^2\lambda_{\max}(\boldsymbol{\Sigma}_{Tn})\\ C_{\alpha}(\boldsymbol{\Sigma}_{Tn}):=\Omega^{-1}\left(\alpha-(\alpha-1)\dfrac{\Omega}{\omega}\right), \end{align}\] which allows us to define the following upper bound \(D_{\alpha}\) on the Rényi divergence between the releases under \({\boldsymbol{S}}\) and \({\boldsymbol{S}}'\): \[\begin{align} &D_{\alpha}:=\nonumber\\ &\dfrac{Tn}{2(\alpha-1)}\left[\ln\dfrac{\Omega^{\alpha-1}}{C_{\alpha}(\boldsymbol{\Sigma}_{Tn})\omega^{-\alpha}}\right]+\dfrac{\alpha(2\alpha-1)\left(\kappa\Delta_{{\boldsymbol{S}}}\left\lVert{\boldsymbol{M}}\right\rVert\right)^2}{2(\alpha-1)\omega^{2}}.\nonumber \end{align}\] Finally, we have that if \[\begin{align} \dfrac{\alpha-1}{\alpha}D_{\alpha}+\dfrac{1}{\alpha}\ln\left(\frac{1}{\delta}\right)\leq \epsilon, \label{eq:th_2_final} \end{align}\qquad{(6)}\] then the release of \({\boldsymbol{Y}}\) is \((\epsilon,\delta)\)-DP w.r.t. \({\boldsymbol{S}}\).
Due to space constraints we omit the proof and provide a sketch.
Proof. The key idea for bounding the Rényi divergence is to utilize the spectral bounds assumed on \({\boldsymbol{H}}({\boldsymbol{S}})\). That is, we want to bound \(\lambda_{\min}({\boldsymbol{K}})\) and \(\lambda_{\max}({\boldsymbol{K}})\) in terms of the assumptions and the spectrum of \(\boldsymbol{\Sigma}_{Tn}\). Then, we construct \({\boldsymbol{R}}:={\boldsymbol{K}}^{1/2}({\boldsymbol{K}}')^{-1}{\boldsymbol{K}}^{1/2}\) and bound \(\lambda_{\max}({\boldsymbol{R}})\) and \(\lambda_{\min}({\boldsymbol{R}})\). Next, we can rewrite \({\boldsymbol{B}}_{\alpha}\) in terms of \({\boldsymbol{R}}\) and bound the determinant and mean terms using the spectral bounds we derived. ◻
The key insight from Theorem 2 is the dependence on the ratio \(\frac{\Omega}{\omega}\). This term is determined by the condition number of \(\boldsymbol{\Sigma}_{Tn}\) and the spectral bounds of \({\boldsymbol{H}}({\boldsymbol{S}})\). The upshot is that one can design \(\boldsymbol{\Sigma}_{Tn}\) so that the ratio \(\frac{\Omega}{\omega}\) is close to 1, a property that does not necessarily depend on the scale of the noise. This means that if a graph filter is particularly smooth and the input signals have a covariance matrix with a low condition number, then a small \(\epsilon\) can be achieved w.r.t. \({\boldsymbol{S}}\) for free! There is no free lunch, however: in typical DP scenarios a low condition number might be too restrictive, especially when the correlation through time is high. Nevertheless, we can still get privacy for free when we consider non-invertible graph filters.
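A numerical sketch of the certificate 6 (the \(\alpha\)-grid, the guard \(C_{\alpha}>0\), the flooring of \(D_{\alpha}\) at zero since a divergence is nonnegative, and all parameter values are assumptions of ours):

```python
import numpy as np

def theorem2_eps(T, n, gamma, Gamma, lam_min, lam_max, kappa, Delta_S,
                 M_norm, delta, alphas=np.linspace(1.001, 5.0, 4000)):
    """Smallest eps satisfying Theorem 2's condition over a grid of alpha."""
    omega = gamma**2 * lam_min
    Omega = Gamma**2 * lam_max
    best = np.inf
    for a in alphas:
        C = (a - (a - 1.0) * Omega / omega) / Omega
        if C <= 0:                      # spectral condition fails at this alpha
            continue
        # log of Omega^(a-1) / (C * omega^(-a)), computed in log space
        log_term = (a - 1.0) * np.log(Omega) - np.log(C) + a * np.log(omega)
        D = T * n / (2.0 * (a - 1.0)) * log_term \
            + a * (2.0 * a - 1.0) * (kappa * Delta_S * M_norm)**2 \
            / (2.0 * (a - 1.0) * omega**2)
        D = max(D, 0.0)                 # a Renyi divergence is nonnegative
        best = min(best, (a - 1.0) / a * D + np.log(1.0 / delta) / a)
    return best

# vary lam_max / lam_min to see the effect of the condition number of Sigma_Tn
print(theorem2_eps(T=20, n=7, gamma=0.9, Gamma=1.1, lam_min=1.0, lam_max=1.5,
                   kappa=1.0, Delta_S=0.1, M_norm=1.0, delta=1e-5))
```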
In the general case, where we have density \(f_{{\boldsymbol{U}}}(\cdot;\boldsymbol{\phi})\) and where \({\boldsymbol{H}}({\boldsymbol{S}})\) is singular, we start by taking the thin SVD of \({\boldsymbol{H}}_{{\boldsymbol{S}}}:={\boldsymbol{I}}_T\otimes {\boldsymbol{H}}({\boldsymbol{S}})\), yielding \({\boldsymbol{Q}}_r\boldsymbol{\Sigma}_r{\boldsymbol{V}}_r^{\top}={\boldsymbol{H}}_{{\boldsymbol{S}}}\). Then, if we let \({\boldsymbol{V}}_0\) denote the remaining right singular vectors, \({\boldsymbol{W}}:=\operatorname{span}({\boldsymbol{Q}}_r)\), and \(\bar{{\boldsymbol{y}}}:=\operatorname{vec}\left(\tilde{{\boldsymbol{Y}}}\right)\), we have, via the coarea formula for a linear map, \[\begin{align} &{\boldsymbol{p}}(\bar{{\boldsymbol{y}}}|{\boldsymbol{S}};\boldsymbol{\phi})\nonumber\\ &~~~~=\dfrac{{\boldsymbol{1}}_{\left\{\bar{{\boldsymbol{y}}}\in{\boldsymbol{W}}\right\}}}{\det{\boldsymbol{\Sigma}_r}}\int_{\mathbb{R}^{Tn-r}}f_{{\boldsymbol{U}}}\left({\boldsymbol{V}}_r\boldsymbol{\Sigma}_{r}^{-1}{\boldsymbol{Q}}_r^{\top}\bar{{\boldsymbol{y}}}+{\boldsymbol{V}}_0{\boldsymbol{w}};\boldsymbol{\phi}\right)d{\boldsymbol{w}}.\nonumber \end{align}\] Using this, one can again bound the log-likelihood with the Rényi divergence, and the existing proofs carry through with the appropriate projections. The more interesting case arises when we assume Gaussianity and \(\boldsymbol{\Sigma}_{Tn}\) is singular. Here, under specific conditions, we can collapse the divergence to 0! We introduce the following corollary.
Corollary 2. Let \({\boldsymbol{V}}:=\operatorname{range}(\boldsymbol{\Sigma}_{Tn})\), \({\boldsymbol{H}}_{{\boldsymbol{S}}}={\boldsymbol{I}}_T\otimes {\boldsymbol{H}}({\boldsymbol{S}})\), and \({\boldsymbol{H}}_{{\boldsymbol{S}}'}={\boldsymbol{I}}_T\otimes {\boldsymbol{H}}({\boldsymbol{S}}')\). Then, if \({\boldsymbol{V}}\subset \ker\left({\boldsymbol{H}}_{\boldsymbol{S}}-{\boldsymbol{H}}_{{\boldsymbol{S}}'}\right)\) and \(\operatorname{vec}\left({\boldsymbol{M}}\right)\in \ker\left({\boldsymbol{H}}_{\boldsymbol{S}}-{\boldsymbol{H}}_{{\boldsymbol{S}}'}\right)\), then the divergence is 0.
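The kernel condition is easy to verify numerically: with a rank-one filter perturbation, and with \(\boldsymbol{\Sigma}_{Tn}\) and \(\operatorname{vec}({\boldsymbol{M}})\) constructed inside \(\ker\left({\boldsymbol{H}}_{\boldsymbol{S}}-{\boldsymbol{H}}_{{\boldsymbol{S}}'}\right)\), the two pushforward Gaussians coincide exactly. A sketch with \(T=1\) so the Kronecker structure is trivial; all constructions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
HS = np.eye(n) + 0.1 * rng.random((n, n))
E = np.outer([1., 0., 0., 0.], [0., 0., 1., 0.])  # rank-one perturbation e_0 e_2^T
HSp = HS + 0.3 * E                                # H(S') = H(S) + 0.3 E

# ker(H_S - H_S') = {x : x[2] = 0}; build Sigma and vec(M) inside that kernel
A = rng.random((n, 2))
A[2, :] = 0.0
Sigma = A @ A.T                                   # range(Sigma) in ker(E), singular
m = rng.random(n)
m[2] = 0.0                                        # vec(M) in ker(E)

# identical means and covariances, so the divergence collapses to zero
assert np.allclose(HS @ m, HSp @ m)
assert np.allclose(HS @ Sigma @ HS.T, HSp @ Sigma @ HSp.T)
```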
However, in the situation where \(\operatorname{vec}\left({\boldsymbol{M}}\right)\) and \(\boldsymbol{\Sigma}_{Tn}\) do not live in the null space, we must apply a projection to the means and covariances defined in 3 and 4. Let \({\boldsymbol{W}}:=\operatorname{range}({\boldsymbol{H}}_{{\boldsymbol{S}}}\boldsymbol{\Sigma}^{1/2})\) and \({\boldsymbol{W}}':=\operatorname{range}({\boldsymbol{H}}_{{\boldsymbol{S}}'}\boldsymbol{\Sigma}^{1/2})\), where \({\boldsymbol{W}}={\boldsymbol{W}}'\). Then, if we let \({\boldsymbol{Q}}\) be an orthonormal basis for \({\boldsymbol{W}}\), the quantities in 3 and 4 become \[\begin{align} \boldsymbol{\mu}= {\boldsymbol{Q}}^\top\operatorname{vec}\left({\boldsymbol{H}}({\boldsymbol{S}}){\boldsymbol{M}}\right)~~{\boldsymbol{K}}={\boldsymbol{Q}}^\top{\boldsymbol{H}}_{\boldsymbol{S}}\boldsymbol{\Sigma}_{Tn}{\boldsymbol{H}}_{{\boldsymbol{S}}}^\top{\boldsymbol{Q}}, \end{align}\] (and analogously for \(\boldsymbol{\mu}'\) and \({\boldsymbol{K}}'\)), and the proofs carry through as before.
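One way to realize this projection (a sketch; the eigendecomposition square root and the SVD rank cut are our implementation choices):

```python
import numpy as np

def projected_stats(H_big, M_vec, Sigma, tol=1e-10):
    """Project the mean and covariance onto W = range(H_S Sigma^{1/2});
    Sigma may be singular."""
    w, V = np.linalg.eigh(Sigma)
    Sig_half = V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T
    U, s, _ = np.linalg.svd(H_big @ Sig_half)
    Q = U[:, s > tol]                  # orthonormal basis for W
    mu = Q.T @ (H_big @ M_vec)
    K = Q.T @ H_big @ Sigma @ H_big.T @ Q
    return mu, K
```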
In order to validate the analysis, we generated an Erdős–Rényi graph and a Gaussian input signal matrix \(\tilde{{\boldsymbol{U}}}\) and applied the diffusion-dynamics filter from [13]. Formally, \[\begin{align} \tilde{{\boldsymbol{Y}}}=({\boldsymbol{I}}-0.01{\boldsymbol{S}})^{-1}\tilde{{\boldsymbol{U}}}, \end{align}\] where our graph has 7 vertices and we collect \(T=20\) samples. We assumed that our input signal was given by some true signal \({\boldsymbol{U}}\) and was made DP using the additive Gaussian mechanism, where \(\operatorname{vec}\left(\tilde{{\boldsymbol{U}}}\right)\sim{\mathcal{N}}\left(\operatorname{vec}\left({\boldsymbol{U}}\right),\boldsymbol{\Sigma}_{Tn}\right)\). In order to generate a time-correlated input signal, we generated a temporal covariance matrix \(\boldsymbol{\Sigma}_T\) via an AR(1) process given by \([\boldsymbol{\Sigma}_{T}]_{ij}=\sigma^2\rho^{|i-j|}\), where \(\rho\) is the correlation and \(\sigma\) the noise scale. Then, we formed \(\boldsymbol{\Sigma}_{Tn}:=\boldsymbol{\Sigma}_{T}\otimes {\boldsymbol{I}}_{n}\). Finally, we compare the MSE obtained on \(\tilde{{\boldsymbol{Y}}}\) against both the \(\epsilon\) w.r.t. \({\boldsymbol{S}}\) and the \(\epsilon\) w.r.t. \({\boldsymbol{U}}\). The \(\epsilon\) w.r.t. \({\boldsymbol{S}}\) is computed using the analytical closed form developed in Theorem 2, while the \(\epsilon\) w.r.t. \({\boldsymbol{U}}\) is computed using the classical additive Gaussian mechanism [12]. The results are presented in Figure 1. As can be seen in the figure, the privacy of the GSO depends only loosely on the noise scale, and thus on the MSE of the input signal. If the input signal has little noise, then there is little privacy for the GSO. However, once there is a small to moderate amount of noise, the privacy of the GSO is not directly determined by the scale but rather by the structure. This can be seen in Figure 1, where the \({\boldsymbol{S}}\) curve levels off before the \({\boldsymbol{U}}\) curve. This matches the analytical results, which indicate that by carefully designing the covariance structure, the DP of the input signal is enough to protect the GSO.
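A minimal reproduction of this pipeline (the edge probability, \(\sigma\), \(\rho\), and the constants \(\kappa\) and \(\Delta_{{\boldsymbol{S}}}\) below are assumptions of ours; `theorem2_eps` refers to the sketch given after Theorem 2):

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, p = 7, 20, 0.5

# Erdős–Rényi GSO and the diffusion filter Y = (I - 0.01 S)^{-1} U from [13]
A = np.triu((rng.random((n, n)) < p).astype(float), k=1)
S = A + A.T
H = np.linalg.inv(np.eye(n) - 0.01 * S)

# AR(1) temporal covariance and Sigma_Tn = Sigma_T (x) I_n
sigma, rho = 0.5, 0.6
idx = np.arange(T)
Sigma_T = sigma**2 * rho ** np.abs(idx[:, None] - idx[None, :])
Sigma_Tn = np.kron(Sigma_T, np.eye(n))

# DP excitations vec(U~) ~ N(vec(U), Sigma_Tn) and the released states
U = rng.normal(size=(n, T))
noise = rng.multivariate_normal(np.zeros(n * T), Sigma_Tn)
U_tilde = U + noise.reshape((n, T), order='F')
Y_tilde = H @ U_tilde
mse = np.mean((Y_tilde - H @ U)**2)

# spectral quantities feeding Theorem 2's bound
svals = np.linalg.svd(H, compute_uv=False)
lam = np.linalg.eigvalsh(Sigma_Tn)
eps_S = theorem2_eps(T, n, svals.min(), svals.max(), lam.min(), lam.max(),
                     kappa=1.0, Delta_S=0.1, M_norm=np.linalg.norm(U),
                     delta=1e-5)
print(f"MSE on released states: {mse:.3g}, eps w.r.t. S: {eps_S:.3g}")
```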
In this paper, we addressed the inverse problem of traditional system identification: protecting the GSO from adversarial inference while maintaining signal utility. We established formal conditions under which DP input signals can guarantee privacy for the GSO. Our main contribution, Theorem 2, provides explicit bounds relating the privacy parameters \((\epsilon, \delta)\) to the spectral and continuity properties of the graph filter and the spectral properties of the input covariance. The analysis demonstrates that smoother filters with bounded spectrum are better suited for privacy. We can further improve privacy by carefully designing the structure of the covariance matrix to have a low condition number and/or to be indistinguishable with respect to the filter. The key takeaway is that if the graph filter is specially designed and we assume a Gaussian noise mechanism on the input signals, we can effectively get privacy for "free" on the graph itself.