Disentangling network dependence among multiple variables


Abstract

When two variables depend on the same or similar underlying network, their shared network dependence structure can lead to spurious associations. While statistical associations between two variables sampled from interconnected subjects are a common inferential goal across various fields, little research has focused on how to disentangle shared dependence for valid statistical inference. We revisit two different approaches from distinct fields that may address shared network dependence: the pre-whitening approach, commonly used in time series analysis to remove the shared temporal dependence, and the network autocorrelation model, widely used in network analysis often to examine or account for autocorrelation of the outcome variable. We demonstrate how each approach implicitly entails assumptions about how a variable of interest propagates among nodes via network ties given the network structure. We further propose adaptations of existing pre-whitening methods to the network setting by explicitly reflecting underlying assumptions about "level of interaction" that induce network dependence, while accounting for its unique complexities. Our simulation studies demonstrate the effectiveness of the two approaches in reducing spurious associations due to shared network dependence when their respective assumptions hold. However, the results also show the sensitivity to assumption violations, underscoring the importance of correctly specifying the shared dependence structure based on available network information and prior knowledge about the interactions driving dependence.

Autocorrelation, Pre-whitening, Network autocorrelation model

1 Introduction↩︎

Network dependence, or network autocorrelation—statistical dependence within a variable due to network ties—is prevalent across various studies where subjects are interconnected through network relationships [1][5]. Numerous statistical methods have been developed to either account for or assess autocorrelation within a single network-dependent variable, typically the primary outcome of interest [6], [7], often focusing on its relationship with a fixed covariate. However, in practice, it is common for both outcome and exposure variables to exhibit network dependence. Despite the widespread use of methods for correcting dependence in multiple variables in other settings, there has been little discussion of statistical inference when examining relationships among multiple network-dependent variables.

1.1 Spurious associations due to network dependence↩︎

It is well known that two independently generated time series can appear statistically correlated if they exhibit similar patterns of strong autocorrelation. This is also known as nonsense or volatile correlation in the econometrics literature [8], [9]. In fact, any form of autocorrelation shared by two or more variables can lead to such misleading associations. [5] demonstrate that two variables sampled from the same network, exhibiting a shared dependence structure, may appear statistically correlated even in the absence of a true association. Following [5], we refer to this phenomenon as spurious associations due to network dependence. Despite the shared underlying mechanisms behind spurious associations across different fields, statistical approaches to addressing this issue remain discipline-specific. To our knowledge, no existing literature comprehensively examines the applicability of existing methods for disentangling shared dependence in the context of network dependence.

While spurious associations due to network dependence have routinely gone unacknowledged in many human subjects studies, accounting for network dependence in statistical models has been widely practiced in network research, particularly when network data are explicitly collected. Such models primarily focus on assessing the presence and extent of autocorrelation [10], [11] or evaluating whether network relationships causally induce correlations among observations [12], [13]. These analyses do not necessarily require multiple variables beyond the primary outcome of interest, and even when additional variables are included, no further statistical assumptions about them are typically required. In this work, we focus on cases where the relationships between two variables sampled from the same network are of interest, assuming that the presence of autocorrelation in these variables is already known and thus not our focus.

1.2 Our contribution↩︎

In this work, we examine the utility of two widely used approaches for accounting for autocorrelation in addressing spurious associations due to network dependence. The first approach is pre-whitening methods, which are commonly applied in time series analysis but have seen limited use in network settings, and the second approach is the network autocorrelation model, which is not explicitly designed to correct for shared dependence among multiple variables, but rather to address autocorrelation in the outcome variable within regression models. While neither approach was originally developed for the present setting, we demonstrate that either of them can be effective in addressing shared network dependence across multiple variables—though only under specific circumstances related to the presumed network structure.

Our main contributions are as follows. First, we identify key differences between network dependence and dependence arising in other contexts, and we highlight important elements that should be considered when modeling network dependence. Second, we discuss how pre-whitening approaches can be adapted in network settings. We propose a general approach to constructing the dependence structure based on certain statistical assumptions and known network structure and introduce an estimation method that addresses computational challenges due to the complexity of network dependence, illustrated through a concrete example. Lastly, we examine the assumptions underlying network autocorrelation models and compare them with those used in the pre-whitening approach. Overall, our work highlights the importance of considering the level of interactions on a network that induces autocorrelation across multiple variables. This is because different approaches for addressing within-variable dependence implicitly assume different levels of interaction, which has important implications for both model specification and interpretation.

The remainder of this paper is organized as follows. In Section 2, we review autocorrelation in other settings and discuss the unique challenges in network dependence. Section 3 introduces the pre-whitening approach and examines its potential application to network dependence. Section 4 discusses the network autocorrelation model and examines its underlying assumptions regarding shared dependence. Section 5 presents a simulation study comparing the performance of these two approaches in correcting spurious associations due to network dependence. Finally, we discuss the results and their implications in Section 6.

2 Preliminaries↩︎

2.1 Structured autocorrelation in other contexts↩︎

Autocorrelation is common in temporal, spatial, genetic, and network spaces, where the degree of dependence between two observations depends on their respective distance measures. In temporal autocorrelation, distance is often defined by time lags on \(\mathbb{R}\) between observations, meaning that observations closer in time are more likely to be correlated than those farther apart [14]. In spatial autocorrelation, the Euclidean distance on \(\mathbb{R}^{2}\) is often used to quantify dependence between observations [15]. In both temporal and spatial autocorrelation, as the number of observations (\(n\)) increases, the distribution of temporal and spatial distances becomes more variable, leading to an exponential growth in the number of observation pairs with large (relative) distances. As a result, autocorrelation among most pairs decays. This scarcity of strongly autocorrelated pairs plays a crucial role in statistical inference by ensuring statistical independence among most observation pairs or even generating clusters across which no statistical correlations are assumed.

Genetic autocorrelation, or genetic dependence, is one of the most commonly considered forms of autocorrelation in human subjects research. In genome-wide association studies (GWAS), statistical associations between diseases and genetic variants are often of interest, and each is likely to be genetically autocorrelated. For example, parents and their children may have similar disease statuses (not solely due to genetics but also because they share health-related behaviors or environmental factors), and their genetic variants are inherently correlated. Failure to adjust for shared genetic autocorrelation between these two variables can lead to inflated false positives in association measures. This is also known as cryptic relatedness or population structure [16], [17]. To quantify genetic similarity or relatedness, the kinship matrix measures pairwise genetic relationships within a population [16], [18][22]. Higher values in the kinship matrix indicate closer genetic relationships, while lower values indicate less genetic relatedness. When distinct clusters are not present in the population, the kinship matrix is used to account for shared genetic dependencies among individuals [18], [22]. This approach assumes that the kinship matrix, often estimated using sample correlations among all single nucleotide polymorphisms (SNPs), directly captures the dependence structure that governs the autocorrelation of each variable (e.g., disease status and genetic variants) [23][25].

2.2 Network dependence and its unique challenges↩︎

In the contexts above, autocorrelation between two units is largely determined by their distance within each context: the farther apart they are, the less correlated they tend to be. Therefore, once distance is defined or estimated, the degree of autocorrelation can be inferred. In networks, however, while distance within the network space contributes to autocorrelation, there is often no clear consensus on how to define this distance. More importantly, distance alone is not the primary factor governing the degree of autocorrelation in network settings.

Let us first focus on measuring distance in a network. Similar to the kinship matrix, distance on a network space can be measured using an adjacency matrix, denoted by \(\mathbf{A} = [A_{ij}]\), where \(A_{ij} = 1\) indicates a tie (or edge) between nodes \(i\) and \(j\), and \(A_{ij} = 0\) otherwise, for \(i,j=1,2,\ldots, n\). One straightforward way to determine the dependence structure (e.g., variance-covariance matrix) is to define it as \(\text{Corr}(Y_{i}, Y_{j}) = \sigma A_{ij}\) when \(\mathbf{A}\) is symmetric, or \(\text{Corr}(Y_{i}, Y_{j}) = \sigma(A_{ij}+A_{ji})/2\) when \(\mathbf{A}\) is asymmetric, where \(0 \leq \sigma \leq 1\). This assumes the presence of dependence if and only if nodes \(i\) and \(j\) are adjacent (e.g., being friends with each other) in a network, with the strength of dependence potentially reflecting the reciprocity of their relationships. This approach is similar to adjustments using the kinship matrix in GWAS.
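As a sketch, the adjacency-based specification above can be coded directly; the 4-node directed network and the value of \(\sigma\) below are illustrative assumptions, not examples from the paper.

```python
import numpy as np

def adjacency_correlation(A, sigma):
    """Corr(Y_i, Y_j) = sigma * (A_ij + A_ji) / 2 off the diagonal, 1 on it.

    For a symmetric A this reduces to sigma * A_ij."""
    R = sigma * (A + A.T) / 2.0
    np.fill_diagonal(R, 1.0)
    return R

# Hypothetical 4-node directed network and strength parameter (assumed values).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 1, 1, 0]])
R = adjacency_correlation(A, sigma=0.5)
```

Under this construction, a reciprocated tie yields correlation \(\sigma\), an unreciprocated tie \(\sigma/2\), and non-adjacent pairs remain uncorrelated.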

Another way to define the distance between two nodes is by the geodesic distance, which counts the number of ties along the shortest paths between them from \(\mathbf{A}\), and to determine the degree of dependence as the inverse of their geodesic distance (e.g., [26], [27]). Under this definition, dependence exists for all adjacent pairs to the same degree, and as long as two nodes are connected, they are not statistically independent. This can result in a non-sparse variance-covariance structure, which, unlike temporal and spatial dependence structures—where distance is typically measured in Euclidean space—is not easily resolved by increasing the number of observations in network data. As researchers include a larger number of observations, pairs of nodes are more likely to be connected, and their distances in network space become shorter. This non-Euclidean nature renders network dependence more complicated, and it becomes challenging to guarantee asymptotic properties of many statistics developed under certain (conditional) independence structures. As a result, methods or models developed to accommodate temporal or spatial dependence may not be appropriate for, or at least cannot be directly adapted to, network dependence.
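The geodesic-distance specification can be sketched as follows, computing shortest-path lengths by breadth-first search and setting the correlation to the inverse distance; the network and all values are illustrative assumptions.

```python
import numpy as np
from collections import deque

def geodesic_distances(A):
    """All-pairs shortest-path lengths via BFS; np.inf for disconnected pairs."""
    n = A.shape[0]
    und = ((A + A.T) > 0).astype(int)          # treat ties as undirected
    D = np.full((n, n), np.inf)
    for s in range(n):
        D[s, s] = 0.0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in np.nonzero(und[u])[0]:
                if np.isinf(D[s, v]):
                    D[s, v] = D[s, u] + 1
                    queue.append(v)
    return D

def inverse_distance_correlation(D):
    """Corr(Y_i, Y_j) = 1 / geodesic distance; 0 for disconnected pairs."""
    with np.errstate(divide="ignore"):
        R = np.where(D > 0, 1.0 / D, 1.0)      # diagonal (D = 0) set to 1
    R[np.isinf(D)] = 0.0
    return R

# Illustrative network: a path of three nodes plus one isolated node.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 0]])
D = geodesic_distances(A)
R = inverse_distance_correlation(D)
```

Note that every connected pair receives a nonzero correlation, which is exactly the non-sparsity discussed above.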

Dependence induced by a network is often more complex than simply the strength of network ties dictated by \(\mathbf{A}\): unlike other dependence settings, where the degree of dependence tends to decay smoothly with increasing distance, network dependence can be arbitrary and context-specific. For example, compared to an adjacent pair \(i\) and \(j\) with \(A_{ij} =1\), a pair \(i\) and \(k\) with \(A_{ik} = 0\) may exhibit higher autocorrelation in certain variables if nodes \(i\) and \(k\) share a larger number of mutual friends than nodes \(i\) and \(j\) do. For this reason, unlike genetic dependence, whose structure is directly determined by the kinship matrix, network dependence is often not solely informed by an adjacency matrix but is also heavily influenced by the underlying transmission process through which a variable of interest propagates among nodes via network ties.

2.3 Transmission process through network ties↩︎

In addition to distance derived directly from \(\mathbf{A}\)—which is obtained independently of context related to the variables of interest—network dependence in each variable is influenced by the transmission or contagion process, which is heavily dependent on the type of variable. This process refers to the mechanism by which a variable of interest (e.g., opinions, behaviors, or attributes of nodes) propagates through network ties, thus generating autocorrelation among observations. Even when \(\mathbf{A}\) is correctly specified (i.e., context-free network distance is accurate), misspecifying the pattern through which variables spread across the network can lead to invalid statistical or causal inferences [28][30]. In this paper, we focus on the transmission processes within a variable to highlight the specific assumptions about the transmission process on a network space that underlie existing approaches for addressing autocorrelation. For simplicity, we illustrate the transmission process in \(\mathbf{Y}\), assuming the same process applies to other variables (e.g., \(\mathbf{X}\)) of interest. We also consider the case where the transmission process within one variable is not affected by others, and any associations between the variables are the focus of our inference and should not be adjusted.

Figure 1: Visualization for (a) a network with 10 nodes, (b) their adjacency matrix, (c) their geodesic distance matrix. Sample variance-covariance matrices based on 1000 independent replicates under the direct transmission process at (d) \(t=1\), (e) \(t=2\), and (f) \(t = 10000\).

We introduce one class of transmission processes to illustrate the potential discrepancy between the structure of \(\mathbf{A}\) and the dependence structure within one variable, as captured by the variance-covariance matrix \(\mathbf{V}\): the direct transmission process. This transmission process provides a general model in which each node’s value of a variable is influenced by its adjacent neighbors over “time” in a longitudinal way. Here, the time unit should be sufficiently small to capture the influence from one node to another, assuming that influence occurs only during discrete interactions [5], [31][33]. Let \(Y^0_i\) denote the outcome value of node \(i\) at baseline before initiating any interactions (\(i=1,\ldots, n\)). Since no interactions have occurred yet, these are independent across \(i\). At the subsequent time point (\(t=1\)), each node has the opportunity to influence its adjacent nodes. Therefore, \(Y^{1}_{i}\) may be influenced by the baseline values of its adjacent peers. As one example, \(Y^1_i\) can be represented as a weighted average of its adjacent peers’ baseline values, in addition to its own baseline value: \[\begin{align} Y^1_i = \kappa\sum_{j\neq i}A_{ij}Y^{0}_j + \alpha Y^0_i+ \epsilon_i, \end{align}\] where \(\kappa\) and \(\alpha\) are scalars, and \(\epsilon_i\) is independent random noise. As this process continues over discrete time points, generating dependencies through multiple paths between nodes, the amount of dependence between two connected nodes \(i\) and \(j\), e.g., \(\text{Corr}(Y^t_i,Y^t_j)\), reflects the cumulative influence from many different paths connecting \(i\) and \(j\).
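A minimal simulation sketch of this update rule (the network, \(\kappa\), and \(\alpha\) below are illustrative assumptions, not values used in the paper):

```python
import numpy as np

def direct_transmission(A, kappa, alpha, T, rng):
    """Evolve Y^t = kappa * A @ Y^{t-1} + alpha * Y^{t-1} + eps^t for T steps,
    starting from i.i.d. N(0, 1) baseline values Y^0."""
    n = A.shape[0]
    Y = rng.standard_normal(n)                 # Y^0: independent across nodes
    for _ in range(T):
        Y = kappa * (A @ Y) + alpha * Y + rng.standard_normal(n)
    return Y

# Illustrative undirected 4-node network and parameter values (assumptions).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
Y1 = direct_transmission(A, kappa=0.3, alpha=0.5, T=1, rng=rng)
```

Repeating this simulation many times and taking the sample covariance of the replicates reproduces dependence structures like those in Figure 1 (d)–(f).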

Such a direct transmission process that generates dependencies in the variable through network ties over discrete time points has been considered in various fields of studies, including infectious diseases [34], [35], economics [36], [37], public health [38], [39], and education [40], [41].

Figure 1 demonstrates network distances and the variance-covariance matrices induced by the direct transmission process at multiple values of \(t\), using a toy example of a network with ten nodes. Figures 1 (b) and 1 (c) present the adjacency matrix \(\mathbf{A}\) and pairwise geodesic distances, respectively. While each component of \(\mathbf{A}\) is binary, pairwise geodesic distances are more variable among the connected nodes; only an isolated node (node 10 in Figure 1 (a)) has an infinite distance to all other nodes. We generate a network-dependent vector \(\mathbf{Y}^t = (Y^t_{1}, \ldots, Y^t_{10})\) from the direct transmission process, where the initial values are \(Y^0_i \overset{\text{i.i.d}}{\sim}\mathcal{N}(0,1)\) for \(i = 1,\cdots,10\). Figures 1 (d), 1 (e), and 1 (f) present the sample variance-covariance matrix of \(\mathbf{Y}^t\) based on 1000 independent replicates for \({t=1}\), \({t=2}\), and \({t=10000}\) (denoted as \(t = \infty\)), respectively. At \(t=1\), in addition to adjacent pairs, nodes with a geodesic distance of \(2\) (e.g., nodes \(1\) and \(7\)) are correlated. At \(t=2\), where the influence propagates over two steps (from a node to a node to a node), almost all pairs of connected nodes exhibit dependence. After evolving for a long time (e.g., \(t=10000\) possible interactions through network ties), the dependence becomes unstructured, making it challenging to capture through \(\mathbf{A}\) alone. These results highlight the structural differences between \(\mathbf{A}\) and \(\mathbf{V}\), even in this simple case.

2.4 Temporal dynamics in network and time series data↩︎

The temporal dynamics that generate network dependence are fundamentally different from the dynamics that generate temporal autocorrelation in time series data. In network settings, researchers typically observe one (or at most, a few) snapshots of autocorrelated outcomes for \(n\) nodes in a network after the nodes have interacted over a specific time period, say \(t=t_{0}\), denoted as \(\mathbf{Y}^{t_{0}} = (Y^{t_{0}}_{1}, Y^{t_{0}}_{2}, \ldots, Y^{t_{0}}_{n})\). From now on, we refer to this duration of interactions as the level of interactions. Here, network autocorrelation refers to the correlation between two nodes, e.g., \(\text{Corr}(Y^{t_{0}}_{i}, Y^{t_{0}}_{j})\). In contrast, temporal data often involves the collection of time series \(\{ (Y^{t=1}_{i}, Y^{t=2}_{i}, \ldots, Y^{t=T}_{i}): i=1,2,\ldots,n \}\), where the autocorrelation between two time points, e.g., \(\text{Corr}(Y^{t}_{i}, Y^{t-k}_{i})\), refers to temporal autocorrelation, which can be inferred from observations across multiple subjects (\(n\)).

When modeling a temporal dependence pattern, stationarity—assuming consistent temporal dependence patterns over time—often plays a crucial role. It allows the dependence structure to be captured with relatively few parameters, typically as a function of temporal distance between two time points. For example, under stationarity, \(\text{Corr}(Y^t, Y^{t-k})\) depends only on the lag \(k\), not on the specific time point \(t\). In network settings, the dependence between two nodes can become (nearly) independent of the specific number of transmission or interaction steps (e.g., \(t\)) once that process reaches equilibrium. Equilibrium refers to the state in which the dependence structure among nodes stabilizes. At equilibrium, network dependence, like temporal dependence under stationarity, can often be modeled using the network structure (e.g., an adjacency matrix) and a relatively small number of parameters. While equilibrium and stationarity represent different concepts, both allow for simplified modeling of dependence structures through stable, underlying patterns in temporal dynamics. However, unlike time series, determining whether network data has reached equilibrium relies on the researcher’s conjecture or prior knowledge, as there is often insufficient data on network dynamics.

The implications of approaches for accounting for shared network dependence should critically depend on the assumed nature of the transmission process that underlies the dependence structure. We first introduce the pre-whitening approach, which can be applicable when the dependence can be captured through a specific level of interactions (e.g., a specific \(t_{0}\)). We then introduce the network autocorrelation model, which is more appropriate when the network is assumed to have reached an equilibrium state.

3 Pre-whitening approaches↩︎

3.1 Specification of the variance-covariance matrix in pre-whitening approach↩︎

The pre-whitening approach was developed specifically to address shared dependence between variables and has been used predominantly to address temporal autocorrelation [42][44]. Pre-whitening often involves estimating the variance-covariance matrix \(\mathbf{V}\) using (often many) variables that are believed to share the same dependence structure with \(\mathbf{X}\) and \(\mathbf{Y}\). Then each of the autocorrelated variables \(\mathbf{X}\) and \(\mathbf{Y}\) is transformed into \(\widehat\mathbf{V}^{-1/2}\mathbf{X}:= \mathbf{X}^*\) and \(\widehat\mathbf{V}^{-1/2}\mathbf{Y}:= \mathbf{Y}^*\), respectively. This whitening procedure can remove shared dependence in the data by rendering the elements of the pre-whitened variables \(\mathbf{X}^{*}\) and \(\mathbf{Y}^{*}\) (possibly asymptotically) statistically independent, provided that \(\mathbf{V}\) is correctly specified. With \(\mathbf{X}^*\) and \(\mathbf{Y}^*\), standard statistical analyses (e.g., fitting a simple linear regression) can be conducted.
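A minimal sketch of the whitening step, assuming \(\mathbf{V}\) (or its estimate) is already available; one common construction of \(\mathbf{V}^{-1/2}\) uses the symmetric inverse square root from an eigendecomposition, and the covariance matrix below is an illustrative assumption.

```python
import numpy as np

def inv_sqrt(V):
    """Symmetric inverse square root V^{-1/2} via eigendecomposition."""
    w, U = np.linalg.eigh(V)
    return U @ np.diag(1.0 / np.sqrt(w)) @ U.T

def prewhiten(V_hat, X, Y):
    """Return X* = V^{-1/2} X and Y* = V^{-1/2} Y."""
    W = inv_sqrt(V_hat)
    return W @ X, W @ Y

# Illustrative positive-definite covariance matrix (assumed, not estimated).
V = np.array([[1.0, 0.5, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.3, 1.0]])
```

After this transformation, any standard analysis (e.g., regressing \(\mathbf{Y}^*\) on \(\mathbf{X}^*\)) proceeds as if the observations were independent.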

The validity of the pre-whitening approach heavily depends on the specification of \(\mathbf{V}\). Researchers across various fields have developed different approaches based on the dependence structure they aim to address, including temporal dependence in the context of time series data in neuroimaging studies (e.g., [45], [46]) and genetic dependence in the context of GWAS (e.g., [20], [47]). These approaches may be adapted to network settings if their assumed dependence structures are considered appropriate for modeling network dependence.

Without specifying any transmission process or any level of interactions that result in the dependence structure, one may attempt to estimate \(\mathbf{V}\) directly through the data. However, in a network, estimating \(\mathbf{V}\) of one or a small number of variables is particularly challenging, especially when only one snapshot of autocorrelated variables is observed. A more practical way is to construct \(\mathbf{V}\) through an adjacency matrix \(\mathbf{A}\), which is similar to using the kinship matrix in GWAS. As non-adjacent pairs have zero elements in \(\mathbf{A}\), it captures only first-order network dependence. Here, “first-order” refers to the dependence resulting from paths of length one that connect two nodes (i.e., direct connection). The following sections discuss specifications of \(\mathbf{V}\) using higher-order (\(>\)​1) powers of \(\mathbf{A}\) to characterize dependence beyond direct adjacency.

3.2 Possible specifications under the direct transmission process↩︎

While the relationship between \(\mathbf{A}\) and \(\mathbf{V}\) can be generally very complex, we provide the following Theorem to establish their relationship under the given level of interactions in a very specific direct transmission process. The result can then be used to pre-whiten the variables in network settings.

Theorem 1 (Variance-covariance matrix in a direct transmission process). Suppose that (i) \(\mathbf{A}\) is symmetric, and (ii) \(Y^0_i\overset{\text{i.i.d}}{\sim}\mathcal{N}(0,1)\) for \(i=1,\ldots, n\). Then when \(\mathbf{Y}^t = (Y^t_1,\cdots,Y^t_n)\) follows the direct transmission process given by: \[~\label{directMatrix} \mathbf{Y}^t = \kappa\mathbf{A}\mathbf{Y}^{t-1} + \alpha \mathbf{Y}^{t-1} + \boldsymbol{\epsilon}^t, \quad t = 1, 2, \cdots, T,\tag{1}\] where \(\kappa \in \mathbb{R}\) and \(\alpha\in \mathbb{R}\), \(\boldsymbol{\epsilon}^{t} = (\epsilon^t_1,\cdots,\epsilon^t_n), \epsilon^t_i\overset{\text{i.i.d}}{\sim}\mathcal{N}(0,1)\), for \(i = 1, \cdots, n\), the variance-covariance matrix \(\mathbf{V}\) of \(\mathbf{Y}^{t=1}\) is given by: \[\label{eq:varTheorem1} \mathbf{V}= \text{Var}(\mathbf{Y}^{t=1}) = \kappa^2\mathbf{A}^2 + 2\alpha\kappa\mathbf{A}+ (\alpha^2+1)\mathbf{I},\tag{2}\] where \(\mathbf{I}\) is the \((n \times n)\) identity matrix.

While the direct transmission process at \(t=1\) in 1 appears to generate autocorrelation only between adjacent pairs, Theorem 1 demonstrates that \(\mathbf{V}\) is, in fact, related to \(\mathbf{A}\) up to its second power. Consider a network with three nodes, \(i,j\) and \(l\), where \(i\) and \(j\) are connected only through \(l\), i.e., \(A_{il}A_{lj} = 1\) but \(A_{ij} = 0\). At \(t= 1\) in the direct transmission process, although \(i\) and \(j\) have not had direct interaction, \(Y^{t=1}_i\) and \(Y^{t=1}_j\) are statistically correlated through \(Y^{t=1}_l\). In this way, the maximal order of \(\mathbf{A}\) involved in \(\mathbf{V}\) grows twice as fast as the number of interactions allowed between adjacent pairs. A proof of Theorem 1 is provided in Appendix 7.
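Theorem 1 can also be checked by Monte Carlo simulation. The sketch below uses an illustrative symmetric network and assumed values of \(\kappa\) and \(\alpha\), comparing the sample covariance of \(\mathbf{Y}^{t=1}\) across replicates with the closed form in 2 .

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative symmetric path network and assumed parameters.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
kappa, alpha = 0.4, 0.3
n, R = A.shape[0], 200_000

# R independent replicates of one step: Y^1 = kappa*A Y^0 + alpha*Y^0 + eps,
# with Y^0 and eps i.i.d. N(0, 1) across nodes and replicates.
Y0 = rng.standard_normal((n, R))
eps = rng.standard_normal((n, R))
Y1 = kappa * (A @ Y0) + alpha * Y0 + eps

V_sample = np.cov(Y1)                                  # empirical Var(Y^1)
V_theory = kappa**2 * (A @ A) + 2 * alpha * kappa * A + (alpha**2 + 1) * np.eye(n)
```

The sample and theoretical matrices agree up to Monte Carlo error, mirroring the agreement shown in Figure 2.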

Figure 2: (a) The variance-covariance matrix \(\mathbf{V}\); (b) the sample variance-covariance matrix under the direct transmission process at \(t = 1\).

Figure 2 (a) presents \(\mathbf{V}\) in 2 using the parameters that generate the direct transmission process for the toy example introduced in Figure 1. Figure 2 (b) presents the sample variance-covariance matrix of \(\mathbf{Y}^{t=1}\) based on 1000 independent replicates (same as Figure 1 (d)), which exhibits a dependence structure very similar to that specified in \(\mathbf{V}\) in Figure 2 (a). This agreement between theoretical and empirical variance-covariance matrices is consistent with our results in Theorem 1.

Remark 1. If the adjacency matrix is asymmetric, capturing the full dependence structure may require considering higher-order powers of the adjacency matrix, its transpose \(\mathbf{A}^\intercal\), and their products \(\mathbf{A}^{d_1}{\mathbf{A}^\intercal}^{d_2}\), where \(d_1\) and \(d_2\) are nonnegative integers. For example, when \(\mathbf{A}\) is asymmetric, the first term in the variance-covariance matrix in Theorem 1 becomes \(\kappa^2\mathbf{A}\mathbf{A}^\intercal\).

3.3 General specifications of the variance-covariance structure in a network↩︎

While the results in Theorem 1 are helpful for understanding the relationship between \(\mathbf{A}\) and \(\mathbf{V}\) within a certain model and a fixed level of interaction (\(t_{0} = 1\)), they may appear valid only under restrictive conditions. Without necessarily assuming a specific form of the transmission process, it is also an option to construct \(\mathbf{V}\) as a linear combination of powers of \(\mathbf{A}\) up to a certain order \(d\). For example: \[\label{eq:mixVariance} \mathbf{V}= \sigma^2_0 \mathbf{I}+ \sigma^2_1\mathbf{A}+ \cdots + \sigma^2_d \mathbf{A}^d,\tag{3}\] where \(\sigma^2_0\) represents the variance of the independent component, each term \(\mathbf{A}^m\) captures \(m\)-th order dependence resulting from length-\(m\) paths that connect two nodes, and \(\sigma^2_m\) indicates the strength of that dependence. The value \(\mathbf{A}^m_{ij} = g\) indicates that there exist \(g\) paths of length \(m\) connecting nodes \(i\) and \(j\). The variance-covariance matrix defined in 3 assumes that each length-\(m\) path connecting nodes \(i\) and \(j\) contributes to \(\text{Cov}(Y_{i}, Y_{j})\) with the same strength \(\sigma^2_m\). However, the impact of \(m\)-th order dependence on \(\mathbf{V}\) may vary across different orders \(m\) (\(=1,\ldots,d\)). In many cases, it is reasonable to conjecture that dependence induced along longer paths would be negligible [48][51].
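A sketch of constructing \(\mathbf{V}\) as in 3 ; the network and variance components below are assumed for illustration.

```python
import numpy as np

def network_cov(A, sigma2):
    """Build V = sigma2[0]*I + sigma2[1]*A + ... + sigma2[d]*A^d as in eq. (3)."""
    n = A.shape[0]
    V = np.zeros((n, n))
    A_power = np.eye(n)                  # A^0 = I
    for s2 in sigma2:
        V += s2 * A_power
        A_power = A_power @ A            # next power of A
    return V

# Illustrative 3-node path network and variance components (assumed values).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
V = network_cov(A, sigma2=(1.0, 0.5, 0.25))  # d = 2
```

In this toy example, nodes 1 and 3 are covariant only through the single length-2 path between them, so their covariance equals \(\sigma^2_2\).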

The choice of \(d\) is subjective, and researchers may determine the value of \(d\) based on their substantive knowledge of how information propagates through network ties and to what extent. With the underlying transmission process unknown, empirical evidence on social influence can guide the choice of \(d\) [48], [52][54]. The six degrees of separation rule [52], [55] suggests that any two individuals in a social network are connected through at most six steps, indicating that setting \(d\) to a finite number (e.g., 10) may be sufficient to capture indirect dependence in many networks, regardless of the dynamics of the underlying transmission process. Additionally, the three degrees of influence rule [48], [53] states that influence tends to diminish beyond three steps, implying that the dependence due to the transmission process may be significant only within \(t=3\). In the context of a direct transmission process, these two rules align with each other, as the variance structure at \(t=3\) involves adjacency relationships up to \(\mathbf{A}^6\), which exactly corresponds to the six degrees of separation. Furthermore, network studies specifically on social media suggest that the degrees of separation may be even lower (\(d=4\)) [54], [56]. This implies that if the underlying network structure is denser and the distribution of distances is more right-skewed, a smaller \(d\) may be more appropriate, although the specific transmission processes that result in \(\mathbf{V}\) in 3 for \(d >2\) remain unknown to us.

3.4 An example of pre-whitening using linear mixed models↩︎

In this section, we provide an example of the pre-whitening approach, where the variance-covariance matrix is estimated using a linear mixed model. To address shared genetic dependence, the linear mixed model has been used to represent the shared dependence structure through random effects in several studies [18], [47], [57][59]. More recently, it has also been applied specifically to address spurious associations due to network dependence [60], [61]. For example, the linear mixed model of \(\mathbf{X}\) and \(\mathbf{Y}\) can be given by: \[\label{eq:egmix} \mathbf{Y}= \beta \mathbf{X}+ \mathbf{u}+ \boldsymbol{\epsilon},\tag{4}\] where \(\mathbf{u}\) represents the random effect associated with network dependence, \(\boldsymbol{\epsilon}\sim \mathcal{N}(0,\sigma^2_0\mathbf{I})\) denotes the error term, and \(\beta\) is the fixed effect for the relationship between \(\mathbf{X}\) and \(\mathbf{Y}\). We assume \(\mathbf{u}\sim \mathcal{N}(0,\mathbf{V}_u)\), where \(\mathbf{V}_u = \sigma^2_1\mathbf{A}+\cdots + \sigma^2_d\mathbf{A}^d\) and hence \(\mathbf{Y}\sim \mathcal{N}(\beta \mathbf{X}, \mathbf{V})\), where \(\mathbf{V}\) is given in 3 . Although unstructured forms of \(\mathbf{V}_u\) and \(\mathbf{V}\) can be used in 4 , specifying \(\mathbf{V}\) as in 3 can substantially reduce the dimension of the parameter space and can also reflect substantive knowledge about transmission dynamics. To estimate the unknown parameters, namely the regression coefficient \(\beta\) and the variance components (\(\sigma^2_0,\cdots,\sigma^2_d\)), the maximum likelihood estimators can be obtained by maximizing the following log-likelihood function.
\[\label{eq:mixLM} \begin{align} & l(\mathbf{Y}, \mathbf{X},\beta,\sigma^2_0,\sigma^2_1, \cdots, \sigma^2_d) = -\frac{n}{2}\ln(2\pi)-\frac{1}{2}\ln|\mathbf{V}| \\&- \frac{1}{2} (\mathbf{Y}-\beta \mathbf{X})^\intercal\mathbf{V}^{-1}(\mathbf{Y}-\beta \mathbf{X}).\\ \end{align}\tag{5}\] Numerically maximizing this log-likelihood is often challenging, as it involves computing the inverse of the updated \(\mathbf{V}\) in each iteration. The following theorem simplifies the optimization procedure via spectral decomposition under certain conditions, using an approach similar to [58] but with the more general structure of \(\mathbf{V}\) in 3 .

Theorem 2 (Maximum likelihood estimation with efficient optimization). Assuming \(\mathbf{A}\) is symmetric and \(\mathbf{Y}\sim \mathcal{N}(\beta\mathbf{X}, \mathbf{V})\), where \(\mathbf{V}\) is given in 3 , the maximum likelihood estimators for \((\beta,\sigma^2_0,\sigma^2_1,\cdots,\sigma^2_d)\) in 4 are obtained by maximizing the following log-likelihood function: \[\label{eq:simmixLM} \begin{align} &l(\mathbf{Y}, \mathbf{X},\beta,\sigma^2_0,\sigma^2_1, \cdots, \sigma^2_d) = -\frac{n}{2}\ln(2\pi) \\&-\frac{1}{2}\sum^n_{i=1} \ln(\sigma^2_0 + \sigma^2_1 \lambda_i + \cdots + \sigma^2_d \lambda^d_i) \\ & - \frac{1}{2} \mathbf{Z}^\intercal(\sigma^2_0 \mathbf{I}+ \sigma^2_1\mathbf{D}+ \cdots + \sigma^2_d \mathbf{D}^d)^{-1}\mathbf{Z}, \end{align}\tag{6}\] where \(\mathbf{Z}= \mathbf{U}^\intercal(\mathbf{Y}-\beta\mathbf{X})\), \(\mathbf{A}= \mathbf{U}\mathbf{D}\mathbf{U}^\intercal\), \(\mathbf{U}\) is the orthogonal matrix of eigenvectors of \(\mathbf{A}\) and
\(\mathbf{D}=\text{diag}[\lambda_1,\lambda_2,\cdots,\lambda_n]\) is a diagonal matrix of eigenvalues of \(\mathbf{A}\).

A proof of Theorem 2 is provided in Appendix 7. The estimated coefficient \(\hat{\beta}\) represents the relationship between \(\mathbf{X}\) and \(\mathbf{Y}\), accounting for the shared network dependence. If we assume that \(\mathbf{V}\) is the variance-covariance matrix for both \(\mathbf{Y}\) and \(\mathbf{X}\), then \(\hat{\beta}\) is equivalent to the estimator obtained from linear regression of the whitened variables, \(\mathbf{X}^* = \widehat\mathbf{V}^{-1/2}\mathbf{X}\) and \(\mathbf{Y}^* =\widehat\mathbf{V}^{-1/2}\mathbf{Y}\).
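The computational benefit of Theorem 2 is that a single eigendecomposition of \(\mathbf{A}\) replaces repeated inversions of \(\mathbf{V}\). The following minimal numerical check (a sketch assuming NumPy; the toy network, seed, parameter values, and function names are ours) verifies that the spectral form 6 matches a direct evaluation of 5 .

```python
import numpy as np

def loglik_direct(Y, X, beta, sig2, A):
    """Log-likelihood (5) evaluated directly, forming V, |V|, and V^{-1} r.

    sig2 = (sigma_0^2, ..., sigma_d^2); V = sum_k sig2[k] * A^k.
    """
    n = len(Y)
    V = sum(s * np.linalg.matrix_power(A, k) for k, s in enumerate(sig2))
    r = Y - beta * X
    sign, logdet = np.linalg.slogdet(V)
    return (-0.5 * n * np.log(2 * np.pi) - 0.5 * logdet
            - 0.5 * r @ np.linalg.solve(V, r))

def loglik_spectral(Y, X, beta, sig2, A):
    """Log-likelihood (6): one eigendecomposition of A, no matrix inverse."""
    lam, U = np.linalg.eigh(A)                        # A = U diag(lam) U'
    z = U.T @ (Y - beta * X)
    w = sum(s * lam**k for k, s in enumerate(sig2))   # eigenvalues of V
    return (-0.5 * len(Y) * np.log(2 * np.pi)
            - 0.5 * np.sum(np.log(w)) - 0.5 * np.sum(z**2 / w))

rng = np.random.default_rng(0)
A = (rng.random((6, 6)) < 0.4).astype(float)
A = np.triu(A, 1); A = A + A.T                        # symmetric, no self-ties
X, Y = rng.standard_normal(6), rng.standard_normal(6)
print(np.isclose(loglik_direct(Y, X, 0.5, (1.0, 0.3, 0.2), A),
                 loglik_spectral(Y, X, 0.5, (1.0, 0.3, 0.2), A)))  # True
```

The two evaluations agree because \(\mathbf{V}= \mathbf{U}(\sigma^2_0\mathbf{I}+ \cdots + \sigma^2_d\mathbf{D}^d)\mathbf{U}^\intercal\); in an iterative optimizer, only the diagonal weights need updating per iteration.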

4 Network Autocorrelation Model↩︎

4.1 Model specification and estimation↩︎

The network autocorrelation model has been widely used to address or account for autocorrelation within a variable of interest in a network [6], [61][66]. The standard form of the network autocorrelation model is given by: \[\begin{align} \label{eq:nam} \mathbf{Y}= \rho \mathbf{W}\mathbf{Y}+ \beta_\text{NAM} \mathbf{X}+ \boldsymbol{\epsilon}, \end{align}\tag{7}\] where \(\boldsymbol{\epsilon}\sim \mathcal{N}(0, \sigma^2\mathbf{I})\) denotes the error terms.

As an extension of the standard linear regression model, the network autocorrelation model relaxes the assumption of independence in \(\mathbf{Y}\) by incorporating the term \(\rho\mathbf{W}\mathbf{Y}\), allowing \(\mathbf{Y}\) to exhibit network dependence. The coefficient \(\beta_\text{NAM}\) represents the linear relationship between \(\mathbf{Y}\) and \(\mathbf{X}\) after addressing the autocorrelation within \(\mathbf{Y}\) only, where \(\mathbf{X}\) is not necessarily network-dependent. In this sense, the network autocorrelation model may be appropriate for our target inference of examining the relationship between \(X_i\) and \(Y_i\), with the conditions for its applicability detailed in the next section.

Each element of the weight matrix \(\mathbf{W}\), \(W_{ij}\), represents the extent to which \(Y_i\) depends on \(Y_j\), while the scalar parameter \(\rho\) quantifies the strength of the network autocorrelation. The selection of \(\mathbf{W}\) is crucial, as it defines the network influence pattern and directly impacts parameter estimation and inference. Researchers typically determine \(\mathbf{W}\) based on their prior knowledge and available network information (e.g., \(\mathbf{A}\)). One common specification is the adjacency matrix \(\mathbf{A}\) itself (e.g., [63], [66]), and its row-normalized version, \(W_{ij} = A_{ij}/\sum_j A_{ij}\), is also commonly used (e.g., [11], [67]–[70]). The parameter space for \(\rho\) also depends on the choice of \(\mathbf{W}\). Rearranging 7 , we have the following representation: \(\mathbf{Y}= (\mathbf{I}- \rho \mathbf{W})^{-1} (\beta_\text{NAM}\mathbf{X}+ \boldsymbol{\epsilon})\), given that \((\mathbf{I}- \rho \mathbf{W})\) is non-singular. A detailed discussion on the selection of \(\mathbf{W}\) and the feasible range of \(\rho\) can be found in [6] and [71], respectively.
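For concreteness, the row-normalized specification can be sketched as follows (assuming NumPy; the function name and toy matrix are ours):

```python
import numpy as np

def row_normalize(A):
    """Row-normalized weight matrix: W_ij = A_ij / sum_j A_ij.

    Rows corresponding to isolated nodes (zero degree) are left all zero.
    """
    deg = A.sum(axis=1, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(deg > 0, A / deg, 0.0)

A = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [0., 0., 0.]])   # node 2 has no ties
W = row_normalize(A)
print(W)                       # rows of connected nodes sum to 1
```

With this normalization, every non-isolated row of \(\mathbf{W}\) sums to one, which is one reason the feasible range of \(\rho\) is often taken to lie below 1.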

Several methods have been developed to estimate parameters in the network autocorrelation model. [72] proposes a likelihood function for estimating the parameters \(\rho\) and \(\beta_\text{NAM}\), which has been predominantly used in subsequent studies [10], [65], [73]. Additionally, [74] propose a two-stage least squares method using an instrumental variable \(\mathbf{Z}:= (\mathbf{X}, \mathbf{W}\mathbf{Y})\): the first stage regresses \(\mathbf{X}\) on \(\mathbf{Z}\), and the second stage regresses \(\mathbf{Y}\) on the fitted values from the first stage. [68] provide a Bayesian estimation approach in which an informative prior for \(\rho\) is recommended. A comprehensive review of different estimation methods for the network autocorrelation model can be found in [67]. We present the likelihood-based estimation, where \(\widehat\beta_\text{NAM}\), \(\hat{\rho}\), and \(\hat{\sigma}^2\) are derived by maximizing the following log-likelihood: \[\label{mle32for32nam} \begin{align} & l(\mathbf{Y},\mathbf{X},\rho, \beta_\text{NAM},\sigma^2) \\ &= -\frac{n}{2}\ln(2\pi\sigma^2)+\ln(\det(\mathbf{I}-\rho \mathbf{W}))\\&-\frac{1}{2\sigma^2}((\mathbf{I}- \rho\mathbf{W})\mathbf{Y}- \beta_\text{NAM}\mathbf{X})^\intercal((\mathbf{I}- \rho\mathbf{W})\mathbf{Y}- \beta_\text{NAM}\mathbf{X}). \end{align}\tag{8}\] Optimizing 8 typically involves less computational burden than optimizing 5 in Section 3.4, as it avoids computing a matrix inverse in each iteration.
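A minimal sketch of likelihood-based estimation with a single covariate follows (assuming NumPy; we concentrate \(\beta_\text{NAM}\) and \(\sigma^2\) out of the likelihood and grid-search \(\rho\), a simplification of the optimizers used in practice; the seed and toy data are illustrative).

```python
import numpy as np

def fit_nam(Y, X, W, rho_grid):
    """Likelihood-based fit of Y = rho*W*Y + beta*X + eps (single covariate).

    Concentrates beta and sigma^2 out of the log-likelihood (8) and
    grid-searches the resulting profile likelihood over rho.
    """
    n = len(Y)
    best = None
    for rho in rho_grid:
        M = np.eye(n) - rho * W
        e = M @ Y                       # (I - rho W) Y
        beta = (X @ e) / (X @ X)        # OLS coefficient of e on X
        resid = e - beta * X
        sig2 = resid @ resid / n        # concentrated sigma^2
        sign, logdet = np.linalg.slogdet(M)
        ll = -0.5 * n * np.log(2 * np.pi * sig2) + logdet - 0.5 * n
        if best is None or ll > best[0]:
            best = (ll, rho, beta, sig2)
    return best  # (loglik, rho_hat, beta_hat, sig2_hat)

# Toy check on data simulated from the model itself.
rng = np.random.default_rng(1)
n = 200
A = (rng.random((n, n)) < 0.02).astype(float)
A = np.triu(A, 1); A = A + A.T
X = rng.standard_normal(n)
eps = 0.5 * rng.standard_normal(n)
Y = np.linalg.solve(np.eye(n) - 0.1 * A, 2.0 * X + eps)
ll, rho_hat, beta_hat, sig2_hat = fit_nam(Y, X, A, np.linspace(-0.15, 0.15, 61))
print(rho_hat, beta_hat)   # should be close to the true (0.1, 2.0)
```

The Jacobian term \(\ln\det(\mathbf{I}- \rho\mathbf{W})\) is what distinguishes this likelihood from an ordinary least squares criterion; omitting it biases \(\hat{\rho}\).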

4.2 Underlying assumptions↩︎

We have observed the possibility of directly using the network autocorrelation model to infer the statistical association between two network-dependent variables, even without assuming autocorrelation in the exposure variable \(\mathbf{X}\). Here, we elaborate on the assumptions required to use this model, particularly those related to the transmission process it implies. Specifically, similar to our discussion of the pre-whitening approach in Section 3.2, we demonstrate the connection between the direct transmission and the dependence structure implied by the model. For simplicity, consider using \(\mathbf{A}\) in place of \(\mathbf{W}\) and assume the following direct transmission process: \[~\label{direct:nam} \mathbf{Y}^{t} = \rho\mathbf{A}\mathbf{Y}^{t-1} + \beta_\text{NAM}\mathbf{X}+ \boldsymbol{\epsilon}, \quad t=1,2,\ldots, T\tag{9}\] with \(\mathbf{Y}^0 \sim \mathcal{N}(0,\mathbf{I})\). Then as the process evolves, we have the following form at \(t\): \[\mathbf{Y}^{t} = \rho^t \mathbf{A}^t\mathbf{Y}^{0} + \sum^{t-1}_{q=0}(\rho^q\mathbf{A}^q)(\beta_{\text{NAM}}\mathbf{X}+ \boldsymbol{\epsilon}).\] Under certain conditions on the eigenvalues of \(\mathbf{A}\), \(\rho^t\mathbf{A}^t\) vanishes as \(t \to \infty\) [75]. Therefore, \(\lim_{t\to \infty}\rho^t \mathbf{A}^t\mathbf{Y}^{0} = 0\). Then we have: \[\lim_{t\to \infty}\mathbf{Y}^{t} = (\mathbf{I}- \rho\mathbf{A})^{-1} (\beta_\text{NAM}\mathbf{X}+ \boldsymbol{\epsilon}),\] where \((\mathbf{I}- \rho\mathbf{A})^{-1} = \sum^{\infty}_{q=0}(\rho^{q}\mathbf{A}^q)\) by the Neumann series, assuming \((\mathbf{I}- \rho\mathbf{A})\) is non-singular [76]. With \(\mathbf{Y}:= \lim_{t\to\infty} \mathbf{Y}^{t}\) representing the outcome after a "long-run" process or an infinite level of interactions, we can interpret \(\mathbf{Y}\) in the network autocorrelation model 7 as the outcome at the equilibrium state reached following the long-term direct transmission process through network ties specified in \(\mathbf{A}\).
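The limiting argument above can be checked numerically: iterating 9 with a fixed \(\boldsymbol{\epsilon}\) converges to \((\mathbf{I}- \rho\mathbf{A})^{-1}(\beta_\text{NAM}\mathbf{X}+ \boldsymbol{\epsilon})\) whenever the spectral radius of \(\rho\mathbf{A}\) is below one. A sketch (assuming NumPy; the toy network, seed, and parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
A = (rng.random((n, n)) < 0.08).astype(float)
A = np.triu(A, 1); A = A + A.T                   # symmetric adjacency
# Choose rho so the spectral radius of rho*A is 0.8 (guarantees convergence).
rho = 0.8 / max(np.abs(np.linalg.eigvalsh(A)).max(), 1.0)
beta, X = 1.5, rng.standard_normal(n)
eps = 0.1 * rng.standard_normal(n)

# Iterate Y^t = rho*A*Y^{t-1} + beta*X + eps from a random start.
Y = rng.standard_normal(n)
for _ in range(200):
    Y = rho * A @ Y + beta * X + eps

# The equilibrium claimed in the text: (I - rho*A)^{-1} (beta*X + eps).
Y_eq = np.linalg.solve(np.eye(n) - rho * A, beta * X + eps)
print(np.max(np.abs(Y - Y_eq)))   # near zero: the iteration has converged
```

The influence of the initial state \(\mathbf{Y}^0\) decays geometrically at rate equal to the spectral radius of \(\rho\mathbf{A}\), so the equilibrium interpretation is a good approximation after moderately many interactions.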

The network autocorrelation model also dictates the explicit form of the variance-covariance matrix of \(\mathbf{Y}\) in 7 . With \(\mathbf{A}\) symmetric, \[~\label{namVar} \begin{align} \text{Var}(\mathbf{Y})& = (\mathbf{I}- \rho\mathbf{A})^{-1}\text{Var}(\beta_\text{NAM}\mathbf{X}+ \boldsymbol{\epsilon})(\mathbf{I}- \rho\mathbf{A})^{-1}\\& =\sigma^2(\mathbf{I}- \rho\mathbf{A})^{-1}(\mathbf{I}- \rho\mathbf{A})^{-1}, \end{align}\tag{10}\] where each factor satisfies \((\mathbf{I}- \rho\mathbf{A})^{-1} = \sum^{\infty}_{q=0}(\rho\mathbf{A})^q\) by the Neumann series, assuming \((\mathbf{I}- \rho\mathbf{A})\) is non-singular. For this reason, the network autocorrelation model can also be interpreted as a special form of pre-whitening applied only to the outcome variable \(\mathbf{Y}\), with the variance-covariance matrix \(\mathbf{V}= \sigma^2(\mathbf{I}- \rho\mathbf{A})^{-1}(\mathbf{I}- \rho\mathbf{A})^{-1}\).

Even though the network autocorrelation model uses only the first-order \(\mathbf{A}\) as an input (as the weight matrix \(\mathbf{W}\)), the variance-covariance matrix induced by the model depends on all higher-order terms \(\mathbf{A}^q\), for \(q=0,1,\ldots\): with symmetric \(\mathbf{A}\), \(\text{Var}(\mathbf{Y}) = \sigma^2(\mathbf{I}-\rho\mathbf{A})^{-1}(\mathbf{I}-\rho\mathbf{A})^{-1}\) expands by the Neumann series as \(\sigma^2\sum^{\infty}_{q=0}(q+1)\rho^q\mathbf{A}^q\). In other words, the model assumes non-zero autocorrelation between two nodes as long as they are connected. Assuming \(|\rho|<1\), this variance-covariance matrix enforces a decay pattern across orders of dependence: the coefficient \(\sigma^2(q+1)\rho^q\) attached to \(\mathbf{A}^q\) vanishes as \(q\) increases, so dependence induced by nodes farther apart diminishes. The strength of high-order dependence must eventually fall below that of low-order dependence. This decay pattern is similar to the autocorrelation structures typically assumed in temporal and spatial models. Consequently, while the model allows a parsimonious representation of \(\mathbf{V}\) via \(\mathbf{A}\), it is not flexible enough to accommodate settings where high-order dependence is as strong as, or stronger than, lower-order dependence.

5 Simulation↩︎

The simulation study aims to evaluate the effectiveness of the aforementioned two approaches in addressing spurious associations due to network dependence. We generate network-dependent data under different transmission processes and examine the performance of the two approaches.

5.1 Simulation settings↩︎

We first generate random networks from the Erdős–Rényi model [77], where each network tie is formed with a fixed probability, independently of other ties. We set the sample size to \(n = 500\) nodes and fix the number of ties at \(500\).
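This network generation step can be sketched as follows (assuming NumPy; `erdos_renyi_gnm` is our naming for the fixed-tie-count variant of the model, in which the \(m\) ties are drawn uniformly at random):

```python
import numpy as np

def erdos_renyi_gnm(n, m, rng):
    """Erdos-Renyi network with n nodes and exactly m ties.

    The m ties are drawn uniformly at random from all n(n-1)/2 pairs,
    i.e., the fixed-tie-count counterpart of forming each tie with a
    fixed probability.
    """
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    A = np.zeros((n, n))
    for k in rng.choice(len(pairs), size=m, replace=False):
        i, j = pairs[k]
        A[i, j] = A[j, i] = 1.0
    return A

rng = np.random.default_rng(3)
A = erdos_renyi_gnm(500, 500, rng)
print(int(A.sum()) // 2)   # 500 ties
```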

Given the network structure, we generate \(\mathbf{Y}= (Y_{1}, \ldots, Y_{n})\) through two transmission processes that induce network dependence: (a) the direct transmission process at \(t=1\), and (b) the equilibrium transmission process. For each process, we vary the strength of network dependence as weak, medium, or strong. We independently generate \(\mathbf{X}\) in the same way under each network dependence setting. For each simulation, with network-dependent \(\mathbf{X}\) and \(\mathbf{Y}\), we measure the statistical association between \(X_{i}\) and \(Y_{i}\) using the \(n\) observations and examine whether there is significant evidence of inflated type-I error based on the results of 500 replicates. We consider three association measures: (i) the coefficient \(\beta_\text{OLS}\) from a simple linear model, (ii) the distance correlation \(dCorr\), which captures relationships beyond linearity [78], and (iii) the linear coefficient \(\beta_{\text{NAM}}\) obtained directly from the network autocorrelation model 7 . More details about the data-generating models for the simulation studies can be found in Appendix 8.

We apply the two approaches to network-dependent (\(\mathbf{X}, \mathbf{Y}\)): (1) pre-whitening and (2) network autocorrelation model. To apply (1), we specify the variance-covariance matrix for both \(\mathbf{X}\) and \(\mathbf{Y}\) as follows. \[\label{eq:second} \mathbf{V}= \sigma^2_0 \mathbf{I}+ \sigma^2_1\mathbf{A}+ \sigma^2_2\mathbf{A}^2,\tag{11}\] which was proven to be the correct form of the variance-covariance matrix for the direct transmission process at \(t=1\), according to Theorem 1 in Section 3.2. Maximum likelihood estimation is then applied to each of \(\mathbf{X}\) and \(\mathbf{Y}\) separately to derive the estimated variances \(\widehat\mathbf{V}_X\) and \(\widehat\mathbf{V}_Y\). The pre-whitening then transforms (\(\mathbf{X}\),\(\mathbf{Y}\)) into network-independent versions (\(\mathbf{X}^*\),\(\mathbf{Y}^*\)) using \(\mathbf{X}^* = \widehat\mathbf{V}^{-1/2}_X\mathbf{X}\) and \(\mathbf{Y}^* = \widehat\mathbf{V}^{-1/2}_Y\mathbf{Y}\), respectively, each of which is used later to obtain association measures.
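The whitening transform itself can be computed from an eigendecomposition of the estimated \(\mathbf{V}\). A sketch (assuming NumPy; the toy network and variance parameters are illustrative, and the function name `whiten` is ours):

```python
import numpy as np

def whiten(X, V):
    """Pre-whitening transform V^{-1/2} X via the eigendecomposition of V.

    V must be symmetric positive definite, e.g., an estimate of (11).
    """
    lam, U = np.linalg.eigh(V)
    V_inv_sqrt = U @ np.diag(1.0 / np.sqrt(lam)) @ U.T
    return V_inv_sqrt @ X

rng = np.random.default_rng(4)
A = (rng.random((4, 4)) < 0.5).astype(float)
A = np.triu(A, 1); A = A + A.T                # toy symmetric adjacency
V = 1.0 * np.eye(4) + 0.3 * A + 0.2 * A @ A   # the form in (11)
M = whiten(np.eye(4), V)                      # M = V^{-1/2}
print(np.allclose(M @ V @ M.T, np.eye(4)))    # True: whitened covariance is I
```

If \(\mathbf{X}\sim \mathcal{N}(0, \mathbf{V})\), then \(\mathbf{X}^* = \mathbf{V}^{-1/2}\mathbf{X}\) has identity covariance, which is exactly the property the pre-whitened variables rely on.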

Table 1: Rejection rates for testing the null hypothesis of no statistical correlation between \(X_{i}\) and \(Y_{i}\), using different association measures for \((\mathbf{X}, \mathbf{Y})\), \((\mathbf{X}^*, \mathbf{Y}^*)\), and \((\mathbf{X}^*_{\text{NAM}}, \mathbf{Y}^{*}_{\text{NAM}})\) across (a) the direct transmission (\(t= 1\)) and (b) the equilibrium transmission processes, under three different strengths of network dependence for each process.
| Process | Strength | \((\mathbf{X},\mathbf{Y})\): \(\beta_{\text{OLS}}\) | \((\mathbf{X},\mathbf{Y})\): \(\text{dCorr}\) | \((\mathbf{X},\mathbf{Y})\): \(\beta_{\text{NAM}}\) | \((\mathbf{X}^*,\mathbf{Y}^*)\): \(\beta_{\text{OLS}}\) | \((\mathbf{X}^*,\mathbf{Y}^*)\): \(\text{dCorr}\) | \((\mathbf{X}^*_{\text{NAM}},\mathbf{Y}^*_{\text{NAM}})\): \(\beta_{\text{OLS}}\) | \((\mathbf{X}^*_{\text{NAM}},\mathbf{Y}^*_{\text{NAM}})\): \(\text{dCorr}\) |
|---|---|---|---|---|---|---|---|---|
| (a) Direct, \(t=1\) | Weak | 0.222 | 0.488 | 0.178 | 0.058 | 0.086 | 0.220 | 0.434 |
| | Medium | 0.194 | 0.624 | 0.194 | 0.058 | 0.064 | 0.212 | 0.610 |
| | Strong | 0.192 | 0.844 | 0.172 | 0.030 | 0.042 | 0.188 | 0.842 |
| (b) Equilibrium | Weak | 0.302 | 0.294 | 0.060 | 0.068 | 0.088 | 0.052 | 0.058 |
| | Medium | 0.452 | 0.528 | 0.057 | 0.236 | 0.204 | 0.042 | 0.056 |
| | Strong | 0.878 | 0.956 | 0.060 | 0.684 | 0.652 | 0.044 | 0.066 |

We fit the network autocorrelation model with \((\mathbf{X}, \mathbf{Y})\) for two purposes: first, to obtain the association measure \(\beta_{\text{NAM}}\) directly from the model; second, to obtain the “pre-whitened” \((\mathbf{X}^*_\text{NAM}, \mathbf{Y}^*_\text{NAM})\) through the variance estimated by the model. We fit the following model: \[\label{eq:nam95sim} \mathbf{Y}= \rho \mathbf{A}\mathbf{Y}+ \beta_{\text{NAM}}\mathbf{X}+ \boldsymbol{\epsilon}, \quad \boldsymbol{\epsilon}\sim \mathcal{N}(0,\sigma^2\mathbf{I}).\tag{12}\] For the first purpose, we use the estimated coefficient \(\widehat\beta_{\text{NAM}}\) to evaluate whether the network autocorrelation model can mitigate the issue of spurious associations. For the second purpose, since the network autocorrelation model specifies a particular variance-covariance structure for \(\mathbf{Y}\), given by \(\mathbf{V}_{Y,\text{NAM}} = \sigma^2(\mathbf{I}- \rho\mathbf{A})^{-1}(\mathbf{I}- \rho\mathbf{A})^{-1}\), we construct the estimated variance-covariance matrix \(\widehat\mathbf{V}_{Y,\text{NAM}}\) by plugging in the estimates \(\hat{\rho}\) and \(\hat{\sigma}^2\) from the network autocorrelation model fitted without any covariates. Similarly, assuming the same dependence structure for \(\mathbf{X}\) and \(\mathbf{Y}\), we obtain \(\widehat\mathbf{V}_{X,\text{NAM}}\) by using \(\mathbf{X}\) as the response variable in 12 , also without any covariates. We then pre-whiten each variable by transforming \(\mathbf{X}\) and \(\mathbf{Y}\) into \(\mathbf{X}^*_{\text{NAM}} = \widehat\mathbf{V}_{X,\text{NAM}}^{-1/2}\mathbf{X}\) and \(\mathbf{Y}^*_{\text{NAM}} = \widehat\mathbf{V}_{Y,\text{NAM}}^{-1/2}\mathbf{Y}\).
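A sketch of the NAM-based whitening step follows (assuming NumPy and a symmetric \(\mathbf{A}\); the function name is ours). Any matrix \(\mathbf{M}\) with \(\mathbf{M}\mathbf{V}\mathbf{M}^\intercal= \mathbf{I}\) whitens, and \(\mathbf{M}= (\mathbf{I}- \hat{\rho}\mathbf{A})/\hat{\sigma}\) is one such choice for this particular \(\mathbf{V}\).

```python
import numpy as np

def nam_whitening_matrix(A, rho_hat, sig2_hat):
    """A whitening matrix for V_NAM = sig2 (I - rho A)^{-1} (I - rho A)^{-1}.

    With symmetric A, M = (I - rho A) / sigma satisfies M V_NAM M' = I,
    so forming the inverse (I - rho A)^{-1} can be avoided for the
    transform itself.
    """
    n = A.shape[0]
    return (np.eye(n) - rho_hat * A) / np.sqrt(sig2_hat)

rng = np.random.default_rng(5)
A = (rng.random((5, 5)) < 0.5).astype(float)
A = np.triu(A, 1); A = A + A.T
rho_hat, sig2_hat = 0.1, 0.7
Minv = np.linalg.inv(np.eye(5) - rho_hat * A)
V = sig2_hat * Minv @ Minv.T                  # V_{Y,NAM} from the text
M = nam_whitening_matrix(A, rho_hat, sig2_hat)
print(np.allclose(M @ V @ M.T, np.eye(5)))    # True
```

Applying `nam_whitening_matrix(A, rho_hat, sig2_hat) @ Y` then yields \(\mathbf{Y}^*_{\text{NAM}}\) with (approximately) identity covariance under the fitted model.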

We evaluate the performance of each approach by (i) and (ii) for all network-dependent \((\mathbf{X},\mathbf{Y})\), as well as the pre-whitened variables, \((\mathbf{X}^*,\mathbf{Y}^*)\) and \((\mathbf{X}^*_{\text{NAM}},\mathbf{Y}^*_{\text{NAM}})\). For network-dependent \((\mathbf{X}, \mathbf{Y})\), we apply simple linear regression, calculate the distance correlation, and fit the network autocorrelation model 12 . For the pre-whitened variables \((\mathbf{X}^*,\mathbf{Y}^*)\) and \((\mathbf{X}^*_\text{NAM},\mathbf{Y}^*_\text{NAM})\), we apply simple linear regression and calculate the distance correlation.

5.2 Simulation Results↩︎

Table 1 presents the rejection rates of the null hypothesis of no association between \(X_{i}\) and \(Y_{i}\) across different transmission processes (indicated by each row) using different approaches and association measures (indicated by each column).

Without pre-whitening, the tests using network-dependent \((\mathbf{X},\mathbf{Y})\) produce inflated type-I errors, except in the case of \(\beta_{\text{NAM}}\) under (b) the equilibrium transmission process, where the dependence structure in \(\mathbf{Y}\) is correctly specified. This result suggests that the network autocorrelation model alone can be sufficient to address spurious associations if the dependence structure in the outcome variable is correctly specified (e.g., at the equilibrium state). However, the rejection rates of tests on \(\beta_\text{NAM}\) remain slightly higher than \(0.05\), suggesting that the network autocorrelation model at most partially accounts for spurious associations when both the response and covariate are network-dependent. These results align with recent literature [61], which reports that while the network autocorrelation model can mitigate false positive rates, it does not reduce them to the desired level (i.e., below 0.05). In general, the measure \(\text{dCorr}\) yields higher rejection rates than \(\beta_{\text{OLS}}\), as distance correlation captures both linear and nonlinear relationships, whereas the linear coefficient reflects only linear associations.

With the pre-whitened \((\mathbf{X}^*, \mathbf{Y}^*)\), the rejection rates using \(\beta_{\text{OLS}}\) and \(\text{dCorr}\) remain around the nominal value of 0.05 under (a) the direct transmission process at \(t=1\). This result corroborates our findings in Theorem 1. However, under (b) the equilibrium transmission process, the pre-whitening matrix \(\mathbf{V}\), which captures only local dependence, is misspecified, leading to inflated type-I errors, with rejection rates increasing as the strength of network dependence grows. With \((\mathbf{X}^*_{\text{NAM}}, \mathbf{Y}^*_{\text{NAM}})\) pre-whitened by the \(\mathbf{V}\) specified in the network autocorrelation model 12 without any covariates, the rejection rates are close to the nominal value, slightly above 0.05, under (b) the equilibrium transmission process, while they are substantially inflated under (a) the direct transmission process at \(t=1\). These results demonstrate that the network autocorrelation model can be effectively used to obtain pre-whitened variables, which can then be applied in separate models or measures to estimate the association between the variables. However, it may be challenging to compute the inverse matrix \((\mathbf{I}- \hat{\rho}\mathbf{A})^{-1}\) needed to obtain the pre-whitened variables \(\mathbf{X}^{*}_{\text{NAM}}\) and \(\mathbf{Y}^{*}_{\text{NAM}}\).

6 Discussion↩︎

In human subjects research, where a variable of interest represents behaviors or health outcomes, there is typically no inherent structure governing autocorrelation among individuals. Unlike other dependence settings, researchers usually must rely on prior knowledge to determine the level of interaction (e.g., the choice of \(t_{0}\)) that results in autocorrelation, especially when only a single snapshot of the data is available. If the observed autocorrelated outcomes in the network are believed to result from a relatively small number of interactions (e.g., a smaller value of \(t_0\)), then the variance-covariance matrix built from higher-order terms of the adjacency matrix, as in 3 , can be applied up to a moderate degree (e.g., \(d = 10\)) to pre-whiten the network-dependent variables. In contrast, if the autocorrelated outcomes are believed to remain relatively stable even as more interactions are allowed (i.e., the process is believed to have reached equilibrium), a network autocorrelation model and related recent methodologies can be used directly for regression or to obtain the variance-covariance matrix from the model.

Without specifying any level of interaction leading to autocorrelation, one may attempt to estimate the variance-covariance matrix \(\mathbf{V}\) directly from the data. However, estimating the variance-covariance matrix \(\mathbf{V}\) of one or a small number of variables is particularly challenging in a network. While one might infer the dependence structure through multiple covariates under the assumption that they share similar patterns of dependence, as in the case of genetic autocorrelation, the number of such covariates available is often quite limited. In such cases, researchers analyzing statistical associations between two variables sampled from interconnected subjects can proceed in two directions: (1) explicitly constructing \(\mathbf{V}\) based on available network information (e.g., \(\mathbf{A}\)) to pre-whiten each variable and then measuring their association, or (2) incorporating the structure within a regression model to jointly estimate the variance and regression coefficients. For the latter approach, we find that both linear mixed models used in GWAS and network autocorrelation models can help reduce spurious associations, provided that \(\mathbf{V}\) is correctly specified. However, the assumptions underlying each method regarding the transmission process that generates network dependence can be highly restrictive in practice. Moreover, the effectiveness of reducing spurious associations is sensitive to the specific assumptions implied in the transmission process.

Recently, there has been a growing number of attempts to model specific forms of dependence using Markov properties, allowing the joint density of \((\mathbf{Y}, \mathbf{X})\) to be factorized into a product of several conditionally independent components. In this framework, each variable (e.g., \(\mathbf{X}\) or \(\mathbf{Y}\)) is treated as a single realization of a particular type of Markov random field over the network, similar to certain equilibrium distributions [33], [79]–[82]. While this approach offers statistical (and also causal) advantages, it captures only highly specific dependence structures that may not hold in practice.

Our study opens up potential future directions. First, although it is common to assume that \(\mathbf{X}\) and \(\mathbf{Y}\) share the same network dependence structure, in practice their dependence structures may only partially overlap. In such cases, the extent of spurious associations is expected to decrease, and any remaining bias could be addressed more efficiently and effectively than in the case of full overlap. Second, similar to GWAS, if multiple variables are available, their sample variances could be utilized to obtain insights into \(\mathbf{Y}\). In network studies, an important avenue for exploration is how to leverage the typically small set of covariates, in addition to \(\mathbf{A}\), to better learn the shared dependence structure.

Acknowledgement↩︎

This research was supported by Grant Number 5P20GM103645 from the National Institute of General Medical Sciences.

Appendix↩︎

7 Proofs↩︎

Proof of Theorem 1. \(\mathbf{Y}^1\) is given by: \[\begin{align} \mathbf{Y}^1 = \kappa\mathbf{A}\mathbf{Y}^0 + \alpha\mathbf{Y}^0 + \boldsymbol{\epsilon}^1. \end{align}\] Its variance matrix is: \[\begin{align} \text{Var}(\mathbf{Y}^1) = \text{Var}((\kappa\mathbf{A}+ \alpha\mathbf{I})\mathbf{Y}^0 + \boldsymbol{\epsilon}^1). \end{align}\] Expanding, we have: \[\begin{align} \text{Var}(\mathbf{Y}^1) & = (\kappa\mathbf{A}+ \alpha\mathbf{I})\text{Var}(\mathbf{Y}^0) (\kappa\mathbf{A}+ \alpha\mathbf{I})^\intercal+ \text{Var}(\boldsymbol{\epsilon}^1). \end{align}\] Given \(\mathbf{Y}^0 \sim \mathcal{N}(0,\mathbf{I})\) and \(\boldsymbol{\epsilon}^1 \sim \mathcal{N}(0,\mathbf{I})\), we get: \[\begin{align} \text{Var}(\mathbf{Y}^1) = (\kappa\mathbf{A}+ \alpha\mathbf{I})(\kappa\mathbf{A}+ \alpha\mathbf{I})^\intercal+ \mathbf{I}. \end{align}\] Since \(\mathbf{A}\) is symmetric, we have: \[\begin{align} \text{Var}(\mathbf{Y}^1) = \kappa^2\mathbf{A}^2 + 2\alpha\kappa\mathbf{A}+ (\alpha^2 + 1)\mathbf{I}, \end{align}\] which completes the proof. ◻

Proof of Theorem 2. Given the eigendecomposition \(\mathbf{A}= \mathbf{U}\mathbf{D}\mathbf{U}^\intercal\), the higher-order power \(\mathbf{A}^m\) can be written as: \[\begin{align} \mathbf{A}^{m} &= (\mathbf{U}\mathbf{D}\mathbf{U}^\intercal)^m = \mathbf{U}\mathbf{D}^m\mathbf{U}^{\intercal}, \end{align}\] since \(\mathbf{U}^\intercal\mathbf{U}= \mathbf{I}\). Therefore, we can write the variance structure in 3 as: \[\label{proof:V} \begin{align} \mathbf{V}&= \sigma^2_0\mathbf{U}\mathbf{I}\mathbf{U}^\intercal+ \sigma^2_1\mathbf{U}\mathbf{D}\mathbf{U}^\intercal+ \cdots +\sigma^2_d\mathbf{U}\mathbf{D}^d\mathbf{U}^\intercal\\ & = \mathbf{U}(\sigma^2_0 \mathbf{I}+ \sigma^2_1 \mathbf{D}+ \cdots + \sigma^2_d\mathbf{D}^d)\mathbf{U}^\intercal, \end{align}\tag{13}\] where \(\mathbf{D}\) is diagonal with eigenvalues \((\lambda_1,\lambda_2,\cdots,\lambda_n)\) and the power \(\mathbf{D}^d\) is diagonal with \((\lambda^d_1,\lambda^d_2,\cdots,\lambda^d_n)\). The inverse of \(\mathbf{V}\) is: \[\label{proof:inverseV} \begin{align} \mathbf{V}^{-1} = \mathbf{U}(\sigma^2_0 \mathbf{I}+ \sigma^2_1 \mathbf{D}+ \cdots + \sigma^2_d\mathbf{D}^d)^{-1} \mathbf{U}^{\intercal}. \end{align}\tag{14}\] The matrix \((\sigma^2_0 \mathbf{I}+ \sigma^2_1 \mathbf{D}+ \cdots + \sigma^2_d\mathbf{D}^d)\) is diagonal, and its inverse is computed by taking the reciprocal of each diagonal element.

Next, we compute the determinant \(|\mathbf{V}|\) from  13 : \[\begin{align} |\mathbf{V}| & = |\mathbf{U}||\sigma^2_0 \mathbf{I}+ \sigma^2_1 \mathbf{D}+ \cdots + \sigma^2_d\mathbf{D}^d| |\mathbf{U}^\intercal|\\ & = |\sigma^2_0 \mathbf{I}+ \sigma^2_1 \mathbf{D}+ \cdots + \sigma^2_d\mathbf{D}^d|\\ & = \prod^n_{i=1}(\sigma^2_0 + \sigma^2_1\lambda_i + \cdots + \sigma^2_d\lambda^d_i), \end{align}\] since \(\mathbf{U}\) is an orthogonal matrix and \(|\mathbf{U}||\mathbf{U}^\intercal| = 1\). Taking the logarithm, we get: \[\label{proof:lnV} \begin{align} \ln|\mathbf{V}| = \sum^n_{i=1}\ln(\sigma^2_0 + \sigma^2_1\lambda_i + \cdots+ \sigma^2_d\lambda^d_i). \end{align}\tag{15}\]

Plugging 15 and 14 into 5 , we have: \[\begin{align} &l(\mathbf{Y}, \mathbf{X},\beta,\sigma^2_0,\sigma^2_1, \cdots, \sigma^2_d) = -\frac{n}{2}\ln(2\pi) \\&-\frac{1}{2}\sum^n_{i=1} \ln(\sigma^2_0 + \sigma^2_1 \lambda_i + \cdots + \sigma^2_d \lambda^d_i) \\ & - \frac{1}{2} \mathbf{Z}^\intercal(\sigma^2_0 \mathbf{I}+ \sigma^2_1\mathbf{D}+ \cdots + \sigma^2_d \mathbf{D}^d)^{-1}\mathbf{Z}, \end{align}\] which completes the proof. ◻

8 Additional Simulation Studies↩︎

We set the sample size to \(n = 500\) nodes with the number of ties fixed at \(500\). For (a) the direct transmission process at \(t = 1\), initial values \(\mathbf{Y}^0 = (Y^0_{1},\cdots, Y^0_n)\) are generated with \(Y^0_i \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0,1)\) for \(i = 1,\cdots,n\). The network-dependent vector \(\mathbf{Y}= \mathbf{Y}^{1} = (Y^{1}_1,\cdots,Y^1_n)\) is generated by: \[\begin{align} \mathbf{Y}^1 = \kappa \mathbf{A}\mathbf{Y}^0 + \alpha \mathbf{Y}^0 + \boldsymbol{\epsilon}, \quad \boldsymbol{\epsilon}= (\epsilon_1,\cdots,\epsilon_n), \quad \epsilon_i\overset{\text{i.i.d.}}{\sim}\mathcal{N}(0, 0.1^2). \end{align}\] We consider three specifications of \((\kappa,\alpha)\), namely \((0.7,0.3)\), \((0.8,0.2)\), and \((0.9,0.1)\), corresponding to weak, medium, and strong network dependence, respectively. We independently generate \(\mathbf{X}\) in the same way under each network dependence setting.
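This data-generating step can be sketched as follows (assuming NumPy; the seed and the smaller toy network are illustrative, not the \(n=500\) setting of the study; the function name is ours):

```python
import numpy as np

def gen_direct(A, kappa, alpha, rng):
    """One draw from process (a): Y = Y^1 = kappa*A*Y^0 + alpha*Y^0 + eps."""
    n = A.shape[0]
    Y0 = rng.standard_normal(n)            # Y^0_i ~ i.i.d. N(0, 1)
    eps = 0.1 * rng.standard_normal(n)     # eps_i ~ i.i.d. N(0, 0.1^2)
    return (kappa * A + alpha * np.eye(n)) @ Y0 + eps

rng = np.random.default_rng(6)
n = 100
A = (rng.random((n, n)) < 0.04).astype(float)
A = np.triu(A, 1); A = A + A.T             # toy stand-in for the Erdos-Renyi network
Y = gen_direct(A, 0.9, 0.1, rng)           # "strong" dependence setting
X = gen_direct(A, 0.9, 0.1, rng)           # independent draw, same structure
# With this noise scale, Var(Y) = kappa^2 A^2 + 2*alpha*kappa*A + (alpha^2 + 0.01) I,
# matching the form of Theorem 1 with Var(eps) = 0.01 I in place of I.
```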

For (b) the equilibrium transmission process, we generate the equilibrium state of the network-dependent data by the network autocorrelation model: \[\begin{align} \mathbf{Y}&= (\mathbf{I}- \rho\mathbf{A})^{-1}(\mathbf{Y}^0 + \boldsymbol{\epsilon}), \quad \boldsymbol{\epsilon}= (\epsilon_1, \cdots,\epsilon_n), \quad \epsilon_i \overset{\text{i.i.d.}}{\sim} \mathcal{N} (0,0.1^2), \end{align}\] where the initial values \(\mathbf{Y}^0 = (Y^0_{1},\cdots, Y^0_n)\) are generated with \(Y^0_i \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0,1)\) for \(i = 1,\cdots,n\). We set \(\rho = 0.25\), \(0.27\), and \(0.29\) to represent weak, medium, and strong network dependence, respectively. In the same way, under each network dependence setting, we independently generate \(\mathbf{X}\).

References↩︎

[1]
Dow, M. M., M. L. Burton, and D. R. White (1982). Network autocorrelation: A simulation study of a foundational problem in regression and survey research. Social Networks 4(2), 169–200.
[2]
Dow, M. M., M. L. Burton, D. R. White, and K. P. Reitz (1984). Galton’s problem as network autocorrelation. American Ethnologist 11(4), 754–770.
[3]
O’Malley, A. J. and P. V. Marsden (2008). The analysis of social networks. Health Services and Outcomes Research Methodology 8, 222–269.
[4]
Peeters, D. and I. Thomas (2009). Network autocorrelation. Geographical Analysis 41(4), 436–443.
[5]
Lee, Y. and E. L. Ogburn (2021). Network dependence can lead to spurious associations and invalid inference. Journal of the American Statistical Association 116(535), 1060–1074.
[6]
Leenders, R. T. A. (2002). Modeling social influence through network autocorrelation: constructing the weight matrix. Social networks 24(1), 21–47.
[7]
Christakis, N. A. (2004). Social networks and collateral health effects.
[8]
Yule, G. U. (1926). Why do we sometimes get nonsense-correlations between time-series?–a study in sampling and the nature of time-series. Journal of the Royal Statistical Society 89(1), 1–63.
[9]
Ernst, P. A., L. A. Shepp, and A. J. Wyner (2017). Yule’s “nonsense correlation” solved!
[10]
Wang, W., E. J. Neuman, and D. A. Newman (2014). Statistical power of the social network autocorrelation model. Social Networks 38, 88–99.
[11]
Dittrich, D., R. T. A. Leenders, and J. Mulder (2020). Network autocorrelation modeling: Bayesian techniques for estimating and testing multiple network autocorrelations. Sociological Methodology 50(1), 168–214.
[12]
Carr, C. T. and P. Zube (2015). Network autocorrelation of task performance via informal communication within a virtual world. Journal of Media Psychology.
[13]
De-Marcos, L., E. Garcı́a-López, A. Garcı́a-Cabot, J.-A. Medina-Merodio, A. Domı́nguez, J.-J. Martı́nez-Herráiz, and T. Diez-Folledo (2016). Social network analysis of a gamified e-learning course: Small-world phenomenon and network metrics as predictors of academic performance. Computers in Human Behavior 60, 312–321.
[14]
Makridakis, S. (1994). Time series prediction: Forecasting the future and understanding the past. International Journal of Forecasting 10(3), 463–466.
[15]
Legendre, P. (1993). Spatial autocorrelation: trouble or new paradigm? Ecology 74(6), 1659–1673.
[16]
Astle, W. and D. J. Balding (2009). Population structure and cryptic relatedness in genetic association studies.
[17]
Sillanpää, M. J. (2011). Overview of techniques to account for confounding due to population stratification and cryptic relatedness in genomic data association analyses. Heredity 106(4), 511–519.
[18]
Kang, H. M., N. A. Zaitlen, C. M. Wade, A. Kirby, D. Heckerman, M. J. Daly, and E. Eskin (2008). Efficient control of population structure in model organism association mapping. Genetics 178(3), 1709–1723.
[19]
Speed, D. and D. J. Balding (2015). Relatedness in the post-genomic era: is it still useful? Nature Reviews Genetics 16(1), 33–44.
[20]
Hoffman, G. E. (2013). Correcting for population structure and kinship using the linear mixed model: theory and extensions. PloS one 8(10), e75707.
[21] Li, M., X. Liu, P. Bradbury, J. Yu, Y.-M. Zhang, R. J. Todhunter, E. S. Buckler, and Z. Zhang (2014). Enrichment of statistical power for genome-wide association studies. BMC Biology 12, 1–10.
[22] Yu, J., G. Pressoir, W. H. Briggs, I. Vroh Bi, M. Yamasaki, J. F. Doebley, M. D. McMullen, B. S. Gaut, D. M. Nielsen, J. B. Holland, et al. (2006). A unified mixed-model method for association mapping that accounts for multiple levels of relatedness. Nature Genetics 38(2), 203–208.
[23] Milligan, B. G. (2003). Maximum-likelihood estimation of relatedness. Genetics 163(3), 1153–1167.
[24] Hayes, B. J., P. M. Visscher, and M. E. Goddard (2009). Increased accuracy of artificial selection by using the realized relationship matrix. Genetics Research 91(1), 47–60.
[25] Jiang, W., X. Zhang, S. Li, S. Song, and H. Zhao (2022). An unbiased kinship estimation method for genetic data analysis. BMC Bioinformatics 23(1), 525.
[26] Macdonald-Wallis, K., R. Jago, A. S. Page, R. Brockman, and J. L. Thompson (2011). School-based friendship networks and children’s physical activity: A spatial analytical approach. Social Science & Medicine 73(1), 6–12.
[27] Agneessens, F., S. P. Borgatti, and M. G. Everett (2017). Geodesic based centrality: Unifying the local and the global. Social Networks 49, 12–26.
[28] Hays, J. C., A. Kachi, and R. J. Franzese Jr (2010). A spatial model incorporating dynamic, endogenous network interdependence: A political science application. Statistical Methodology 7(3), 406–428.
[29] An, W. (2018). Causal inference with networked treatment diffusion. Sociological Methodology 48(1), 152–181.
[30] Lee, Y., A. L. Buchanan, E. L. Ogburn, S. R. Friedman, M. E. Halloran, N. V. Katenka, J. Wu, and G. K. Nikolopoulos (2023). Finding influential subjects in a network using a causal framework. Biometrics 79(4), 3715–3727.
[31] Ogburn, E. L. (2018). Challenges to estimating contagion effects from observational data. In Complex spreading phenomena in social systems: Influence and contagion in real-world social networks, pp. 47–64.
[32] Lee, Y. and E. L. Ogburn (2020). Testing for network and spatial autocorrelation. In Proceedings of NetSci-X 2020: Sixth International Winter School and Conference on Network Science, pp. 91–104. Springer.
[33] Ogburn, E. L., O. Sofrygin, I. Diaz, and M. J. Van der Laan (2024). Causal inference for social network data. Journal of the American Statistical Association 119(545), 597–611.
[34] Keeling, M. J. and K. T. Eames (2005). Networks and epidemic models. Journal of the Royal Society Interface 2(4), 295–307.
[35] Wang, L. and J. T. Wu (2018). Characterizing the dynamics underlying global spread of epidemics. Nature Communications 9(1), 218.
[36] Banerjee, A., A. G. Chandrasekhar, E. Duflo, and M. O. Jackson (2013). The diffusion of microfinance. Science 341(6144), 1236498.
[37] Bailey, M., D. Johnston, T. Kuchler, J. Stroebel, and A. Wong (2022). Peer effects in product adoption. American Economic Journal: Applied Economics 14(3), 488–526.
[38] Christakis, N. A. and J. H. Fowler (2007). The spread of obesity in a large social network over 32 years. New England Journal of Medicine 357(4), 370–379.
[39] Aral, S. and C. Nicolaides (2017). Exercise contagion in a global social network. Nature Communications 8(1), 14753.
[40] Dishion, T. J. and J. M. Tipsord (2011). Peer contagion in child and adolescent social and emotional development. Annual Review of Psychology 62(1), 189–214.
[41] Cascio, E. U. and D. W. Schanzenbach (2016). First in the class? Age and the education production function. Education Finance and Policy 11(3), 225–250.
[42] Smith, A. T., K. D. Singh, and J. H. Balsters (2007). A comment on the severity of the effects of non-white noise in fMRI time-series. NeuroImage 36(2), 282–288.
[43] Bullmore, E., M. Brammer, S. C. Williams, S. Rabe-Hesketh, N. Janot, A. David, J. Mellers, R. Howard, and P. Sham (1996). Statistical methods of estimation and inference for functional MR image analysis. Magnetic Resonance in Medicine 35(2), 261–277.
[44] Yue, K., J. Webster, T. Grabowski, A. Shojaie, and H. Jahanian (2024). Iterative data-adaptive autoregressive (iDAR) whitening procedure for long and short TR fMRI. Frontiers in Neuroscience 18, 1381722.
[45] Lund, T. E., K. H. Madsen, K. Sidaros, W.-L. Luo, and T. E. Nichols (2006). Non-white noise in fMRI: does modelling have an impact? NeuroImage 29(1), 54–66.
[46] Luo, Q., M. Misaki, B. Mulyana, C.-K. Wong, and J. Bodurka (2020). Improved autoregressive model for correction of noise serial correlation in fast fMRI. Magnetic Resonance in Medicine 84(3), 1293–1305.
[47] Zhou, X. and M. Stephens (2012). Genome-wide efficient mixed-model analysis for association studies. Nature Genetics 44(7), 821–824.
[48] Fowler, J. H. and N. A. Christakis (2010). Cooperative behavior cascades in human social networks. Proceedings of the National Academy of Sciences 107(12), 5334–5338.
[49] Papachristos, A. V. and C. Wildeman (2014). Network exposure and homicide victimization in an African American community. American Journal of Public Health 104(1), 143–150.
[50] Soininen, J., R. McDonald, and H. Hillebrand (2007). The distance decay of similarity in ecological communities. Ecography 30(1), 3–12.
[51] Parkinson, C., A. M. Kleinbaum, and T. Wheatley (2018). Similar neural responses predict friendship. Nature Communications 9(1), 332.
[52] Milgram, S. (1967). The small world problem. Psychology Today 2(1), 60–67.
[53] Christakis, N. A. and J. H. Fowler (2009). Connected: The surprising power of our social networks and how they shape our lives. Little, Brown and Company.
[54] Backstrom, L., P. Boldi, M. Rosa, J. Ugander, and S. Vigna (2012). Four degrees of separation. In Proceedings of the 4th Annual ACM Web Science Conference, pp. 33–42.
[55] Travers, J. and S. Milgram (1977). An experimental study of the small world problem. In Social Networks, pp. 179–197. Elsevier.
[56] Daraghmi, E. Y. and S.-M. Yuan (2014). We are so close, less than 4 degrees separating you and me! Computers in Human Behavior 30, 273–285.
[57] Kang, H. M., J. H. Sul, S. K. Service, N. A. Zaitlen, S.-y. Kong, N. B. Freimer, C. Sabatti, and E. Eskin (2010). Variance component model to account for sample structure in genome-wide association studies. Nature Genetics 42(4), 348–354.
[58] Sul, J. H., L. S. Martin, and E. Eskin (2018). Population structure in genetic studies: Confounding factors and mixed models. PLoS Genetics 14(12), e1007309.
[59] Lippert, C., J. Listgarten, Y. Liu, C. M. Kadie, R. I. Davidson, and D. Heckerman (2011). Fast linear mixed models for genome-wide association studies. Nature Methods 8(10), 833–835.
[60] Landon, B. E., N. L. Keating, J.-P. Onnela, A. M. Zaslavsky, N. A. Christakis, and A. J. O’Malley (2018). Patient-sharing networks of physicians and health care utilization and spending among Medicare beneficiaries. JAMA Internal Medicine 178(1), 66–73.
[61] Matthews, L. J., M. S. Schuler, R. Vardavas, J. Breslau, and I. Popescu (2024). Evaluation via simulation of statistical corrections for network nonindependence. Health Services and Outcomes Research Methodology 24(2), 211–226.
[62] Neuman, E. J. and M. S. Mizruchi (2010). Structure and bias in the network autocorrelation model. Social Networks 32(4), 290–300.
[63] Fujimoto, K., C.-P. Chou, and T. W. Valente (2011). The network autocorrelation model using two-mode data: Affiliation exposure and potential bias in the autocorrelation parameter. Social Networks 33(3), 231–243.
[64] Dow, M. M. (2007). Galton’s problem as multiple network autocorrelation effects: cultural trait transmission and ecological constraint. Cross-Cultural Research 41(4), 336–363.
[65] Doreian, P., K. Teuter, and C.-H. Wang (1984). Network autocorrelation models: some Monte Carlo results. Sociological Methods & Research 13(2), 155–200.
[66] Sewell, D. K. (2017). Network autocorrelation models with egocentric data. Social Networks 49, 113–123.
[67] Li, H. and D. K. Sewell (2021). A comparison of estimators for the network autocorrelation model based on observed social networks. Social Networks 66, 202–210.
[68] Dittrich, D., R. T. A. Leenders, and J. Mulder (2017). Bayesian estimation of the network autocorrelation model. Social Networks 48, 213–236.
[69] McPherson, M. A. and M. L. Nieswiadomy (2005). Environmental Kuznets curve: threatened species and spatial effects. Ecological Economics 55(3), 395–407.
[70] Gimpel, J. G. and J. E. Schuknecht (2003). Political participation and the accessibility of the ballot box. Political Geography 22(5), 471–488.
[71] Ver Hoef, J. M., E. M. Hanks, and M. B. Hooten (2018). On the relationship between conditional (CAR) and simultaneous (SAR) autoregressive models. Spatial Statistics 25, 68–85.
[72] Doreian, P. (1981). Estimating linear models with spatially distributed data. Sociological Methodology 12, 359–388.
[73] Mizruchi, M. S. and E. J. Neuman (2008). The effect of density on the level of bias in the network autocorrelation model. Social Networks 30(3), 190–200.
[74] Kelejian, H. H. and I. R. Prucha (1998). A generalized spatial two-stage least squares procedure for estimating a spatial autoregressive model with autoregressive disturbances. The Journal of Real Estate Finance and Economics 17, 99–121.
[75] Golub, G. H. and C. F. Van Loan (2013). Matrix computations. JHU Press.
[76] LeSage, J. and R. K. Pace (2009). Introduction to spatial econometrics. Chapman and Hall/CRC.
[77] Erdős, P. and A. Rényi (1959). On random graphs I. Publicationes Mathematicae Debrecen 6, 290–297.
[78] Székely, G. J., M. L. Rizzo, and N. K. Bakirov (2007). Measuring and testing dependence by correlation of distances. The Annals of Statistics 35(6), 2769–2794.
[79] Ogburn, E. L., I. Shpitser, and Y. Lee (2020). Causal inference, social networks and chain graphs. Journal of the Royal Statistical Society Series A: Statistics in Society 183(4), 1659–1676.
[80] Tchetgen Tchetgen, E. J., I. R. Fulcher, and I. Shpitser (2021). Auto-g-computation of causal effects on a network. Journal of the American Statistical Association 116(534), 833–844.
[81] Wu, Y. and R. Bhattacharya (2024). Network causal effect estimation in graphical models of contagion and latent confounding. arXiv preprint arXiv:2411.01371.
[82] Menzel, K. (2025). Fixed-population causal inference for models of equilibrium. arXiv preprint arXiv:2501.19394.

  1. Department of Biostatistics, Brown University↩︎

  2. Department of Biostatistics, Brown University↩︎

  3. Department of Biostatistics, Brown University. Email: youjin_lee@brown.edu↩︎

  4. The scales increase substantially from Figure 1 (d) to Figure 1 (e) because the value of \(\kappa\) is relatively large, driving exponential growth of \(\mathbf{Y}^t\) even with only \(t=2\). To ensure the process reaches a steady state after a long run (\(t = 10000\) in this case) and to prevent the values of \(\mathbf{Y}^t\) from growing exponentially over time, we adjusted the parameters of the process, yielding smaller scales in Figure 1 (f) than in Figure 1 (e).↩︎