Distribution-Agnostic Database De-Anonymization Under Obfuscation And Synchronization Errors

Serhat Bakirtas, Elza Erkip
NYU Tandon School of Engineering
Emails: {serhat.bakirtas, elza}@nyu.edu


Abstract

Database de-anonymization typically involves matching an anonymized database with correlated publicly available data. Existing research focuses either on practical aspects, without requiring knowledge of the data distribution yet providing limited guarantees, or on theoretical aspects, assuming known distributions. This paper aims to bridge these two approaches, offering theoretical guarantees for database de-anonymization under synchronization errors and obfuscation without prior knowledge of the data distribution. Using a modified replica detection algorithm and a new seeded deletion detection algorithm, we establish sufficient conditions on the database growth rate for successful matching, demonstrating that a seed size double-logarithmic in the row size is sufficient for detecting deletions in the database. Importantly, our findings indicate that these sufficient de-anonymization conditions are tight and match those of the distribution-aware setting, avoiding any asymptotic performance loss due to unknown distributions. Finally, we evaluate the performance of our proposed algorithms through simulations, confirming their effectiveness in more practical, non-asymptotic scenarios.

Keywords: dataset, database, matching, de-anonymization, alignment, distribution-agnostic, privacy, synchronization, obfuscation

1 Introduction

The rapid growth of smart devices and applications has accelerated the collection of user-level micro-data by both private companies and public institutions. This data is often shared and/or sold after the removal of explicit user identifiers, a.k.a. anonymization, and the coarsening of the data through noise, a.k.a. obfuscation. Despite these efforts, there is a growing concern about the privacy implications [1]. These concerns were further justified by the success of a series of practical de-anonymization attacks on real data [2][6]. In light of these successful attacks, there has recently been an increasing effort to understand the information-theoretic and statistical foundations of database de-anonymization, a.k.a. database alignment/matching/recovery [7][16].

Our recent work focuses on the database de-anonymization problem under synchronization errors. In [13], we investigated the matching of Markov databases under synchronization errors only, with no subsequent obfuscation/noise, using a histogram-based detection method. In [14], [15], we extended this effort to databases with noisy synchronization errors. We introduced a noisy replica detection algorithm and a seeded deletion detection algorithm, and used joint-typicality-based matching to derive achievability results, which we subsequently showed to be tight, for a seed size double-logarithmic in the row size of the database. In [13][15], we assumed that the underlying distributions are known, and tailored the repetition detection and matching algorithms to these known distributions.

Figure 1: An illustrative example of database matching under column repetitions. The column colored in red is deleted, whereas the column colored in blue is replicated. \(Y_{i,2}^{(1)}\) and \(Y_{i,2}^{(2)}\) denote noisy copies/replicas of \(X_{i,2}\). The goal of database de-anonymization studied in this paper is to estimate the correct row permutation \({\sigma_n=\left(\begin{smallmatrix} 1 & 2 & 3 & 4 & 5 & 6\\ 2 & 6 & 4 & 1 & 3 & 5 \end{smallmatrix}\right)}\), by matching the rows of \(\mathbf{X}\) and \(\mathbf{Y}\) without any prior information on the underlying database (\(p_{X}\)), obfuscation (\(p_{Y|X}\)), and repetition (\(p_S\)) distributions.

Motivated by practical settings where the underlying distributions are not readily available, in this paper we investigate the de-anonymization problem without any prior knowledge of the underlying distributions. Borrowing the noisy random column repetition model of [14], illustrated in Figure 1, we show that even though the distributions are not known a priori, matching can be performed with no asymptotic (in database size) loss of performance. To that end, we first modify the noisy replica detection algorithm proposed in [14] so that it still works in the distribution-agnostic setting. Then, we propose a novel outlier-detection-based deletion detection algorithm and show that when seeds, whose size grows double-logarithmically with the number of users (rows), are available, the underlying deletion pattern can be inferred. Next, through a joint-typicality-based de-anonymization algorithm that relies on the estimated distributions and the estimated repetition pattern, we derive a tight sufficient condition for database de-anonymization to succeed. Finally, we evaluate the performance of our proposed distribution-agnostic algorithms through simulations on finite databases and demonstrate their success in the non-asymptotic regime as well.

The structure of the rest of this paper is as follows: Section 2 introduces the formal statement of the problem. Section 3 presents our proposed algorithms, states our main result, and provides its proof. In Section 4, we evaluate the performance of our proposed algorithms in the non-asymptotic regime via simulations. Section 5 consists of the concluding remarks. Proofs are provided in the Appendix.

We denote a matrix \(\mathbf{X}\) with bold capital letters, and its \((i,j)\)th element with \(X_{i,j}\). A set is denoted by a calligraphic letter, e.g., \(\mathcal{X}\). \([n]\) denotes the set of integers \(\{1,\dots,n\}\). Asymptotic order relations are used as defined in [17]. Unless stated otherwise, all logarithms are base 2. \(H(.)\) and \(I(.;.)\) denote the Shannon entropy and the mutual information [18], respectively. \(D(p\|q)\) denotes the relative entropy [18] (in bits) between two Bernoulli distributions with respective parameters \(p\) and \(q\). \(\overset{p}{\to}\) denotes convergence in probability.
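Explicitly, for Bernoulli parameters \(p\) and \(q\), this relative entropy evaluates to the standard binary form \[\begin{align} D(p\|q)&=p\log\frac{p}{q}+(1-p)\log\frac{1-p}{1-q} \end{align}\] which is the expression behind the Chernoff-bound exponents used in the appendices.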

2 Problem Formulation

We use the following definitions, most of which are adopted from [15] to formalize our problem.

Definition 1. (Anonymized Database) An \({(m_n,n,p_{X})}\) anonymized database \(\mathbf{X}=\{X_{i,j}\in\mathcal{X}\}, (i,j)\in[m_n]\times [n]\) is a randomly generated \({m_n\times n}\) matrix with \(X_{i,j}\overset{\text{i.i.d.}}{\sim} p_X\), where \(p_X\) has a finite discrete support \(\mathcal{X}=\{1,\dots,|\mathcal{X}|\}\).

Definition 2. (Column Repetition Pattern) The column repetition pattern \(S^n=\{S_1,S_2,...,S_n\}\) is a random vector with \(\displaystyle S_i\overset{\text{i.i.d.}}{\sim} p_S\), where the repetition distribution \(p_S\) has a finite integer support \({\{0,\dots,s_{\max}\}}\). Here \(\delta\triangleq p_S(0)\) is called the deletion probability.

Definition 3. (Anonymization Function) The anonymization function \(\sigma_n\) is a uniformly-drawn permutation of \([m_n]\).

Definition 4. (Obfuscation Distribution) The obfuscation (also referred to as noise) distribution \(p_{Y|X}\) is a conditional probability distribution with both \(X\) and \(Y\) taking values from \(\mathcal{X}\).

Figure 2: Relation between the anonymized database \(\mathbf{X}\) and the labeled correlated database, \(\mathbf{Y}\).

Definition 5. (Labeled Correlated Database) Let \(\mathbf{X}\) and \(S^n\) be a mutually-independent \({(m_n,n,p_{X})}\) anonymized database and repetition pattern, \(\sigma_n\) be an anonymization function, and \(p_{Y|X}\) be an obfuscation distribution. Then, \(\mathbf{Y}\) is called the labeled correlated database if the \(i\)th row \(X^n_{i}\) of \(\mathbf{X}\) and the \(\sigma_n(i)\)th row \(Y^{K_n}_{\sigma_n(i)}=[Y_{\sigma_n(i),1},\dots,Y_{\sigma_n(i),K_{n}}]\) of \(\mathbf{Y}\) have the following relation \[\begin{align} \Pr(Y^{K_{n}}_{\sigma_n(i)} = y^{K_n}|X_i^n=x^n) &= \prod\limits_{j:S_j\neq 0} \Pr(Y_{\sigma_n(i),K_{j-1}+1},\dots,Y_{\sigma_n(i),K_{j}}=y_{K_{j-1}+1},\dots,y_{K_{j}}|X_{i,j}=x_j)\tag{1}\\ &= \prod\limits_{j:S_j\neq 0} \prod\limits_{s=1}^{S_{j}} p_{Y|X}(y_{K_{j-1}+s}|x_j),\tag{2} \end{align}\] where \[\begin{align} K_{j}&\triangleq \sum\limits_{t=1}^j S_{t}. \end{align}\]

Note that \(S_j\) indicates the number of times the \(j\)th column of \(\mathbf{X}\) is repeated in \(\mathbf{Y}\). When \(S_j=0\), the \(j\)th column of \(\mathbf{X}\) is said to be deleted, and when \(S_j>1\), the \(j\)th column of \(\mathbf{X}\) is said to be replicated.

The \(i\)th row \(X^n_i\) of \(\mathbf{X}\) and the \(\sigma_n(i)\)th row \(Y^{K_n}_{\sigma_n(i)}\) of \(\mathbf{Y}\) are called matching rows.

The relationship between \(\mathbf{X}\) and \(\mathbf{Y}\), as described in Definition 5, is illustrated in Figure 2.
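To make the repetition and obfuscation model concrete, the following Python sketch samples a database pair according to Definitions 1-5. It is an illustrative sketch rather than part of the original formulation: the function name, the use of NumPy, and the 0-indexed alphabet \(\{0,\dots,|\mathcal{X}|-1\}\) are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_pair(m, n, p_X, P_YgX, p_S):
    """Sample (X, Y, S, sigma) following Definitions 1-5.

    p_X: length-|X| probability vector; P_YgX: |X| x |X| row-stochastic
    obfuscation matrix p_{Y|X}; p_S: probabilities of S = 0,...,s_max.
    """
    X = rng.choice(len(p_X), size=(m, n), p=p_X)       # Definition 1
    S = rng.choice(len(p_S), size=n, p=p_S)            # Definition 2
    sigma = rng.permutation(m)                         # Definition 3
    cols = []
    for j in range(n):
        for _ in range(S[j]):                          # column j repeated S_j times
            # each replica is obfuscated independently given X, as in Eq. (2)
            cols.append(np.array([rng.choice(len(p_X), p=P_YgX[x])
                                  for x in X[:, j]]))
    Ytmp = np.column_stack(cols) if cols else np.empty((m, 0), dtype=int)
    Y = np.empty_like(Ytmp)
    Y[sigma] = Ytmp                                    # row i of X sits at row sigma[i] of Y
    return X, Y, S, sigma
```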

As often done in both the graph matching [19] and the database matching [14] literatures, we will assume the availability of a set of already-matched row pairs called seeds, to be used in the detection of the underlying repetition pattern.

Definition 6. (Seeds) Given a pair of anonymized and labeled correlated databases \((\mathbf{X},\mathbf{Y})\), a seed is a correctly matched row pair with the same underlying repetition pattern \(S^n\). A batch of \(\Lambda_n\) seeds is a pair of seed matrices of respective sizes \(\Lambda_n\times n\) and \(\Lambda_n\times \sum_{j=1}^n S_j\).

For the sake of notational brevity and without loss of generality, we assume that the seed matrices \(\mathbf{G}^{(1)}\) and \(\mathbf{G}^{(2)}\) are not submatrices of \(\mathbf{X}\) and \(\mathbf{Y}\). Throughout, we will assume a seed size \(\Lambda_n=\omega(\log n)=\omega(\log\log m_n)\), which is double-logarithmic in the number of users \(m_n\).

Note that as the number of rows \(m_n\) increases for a fixed number of columns \(n\), so does the probability of mismatch due to the increased number of candidates. In turn, in de-anonymization (a.k.a. matching) problems, the relationship between the row size \(m_n\) and the column size \(n\) directly impacts the de-anonymization performance. Hence, as done in [8], [12][14], [16], we utilize the database growth rate, defined below, as the main performance metric.

Definition 7. (Database Growth Rate) The database growth rate \(R\) of an \((m_n,n,p_X)\) anonymized database is defined as \[\begin{align} R&=\lim\limits_{n\to\infty} \frac{1}{n}\log m_n \end{align}\]

Definition 8. (Distribution-Agnostically Achievable Database Growth Rate) Consider a sequence of \({(m_n,n,p_X)}\) anonymized databases \(\mathbf{X}\) with database growth rate \(R\), an independent repetition distribution \(p_S\), an obfuscation distribution \(p_{Y|X}\), and the resulting sequence of labeled correlated databases \(\mathbf{Y}\). Given seed matrices \((\mathbf{G}^{(1)},\mathbf{G}^{(2)})\) with a seed size \(\Lambda_n\), the database growth rate \(R\) is said to be distribution-agnostically achievable if there exists a successful matching scheme \(\psi:(\mathbf{X},\mathbf{Y},\mathbf{G}^{(1)},\mathbf{G}^{(2)})\mapsto \hat{\sigma}_n\) that does not rely on any knowledge of \(p_X\), \(p_S\), and \(p_{Y|X}\), with \[\begin{align} \lim\limits_{n\to\infty}\Pr\left(\sigma_n(I)\neq\hat{\sigma}_n(I)\right)&= 0 \text{ where } I\sim\text{Unif}([m_n])\label{eq:errordefiniton} \end{align}\tag{3}\] where \(\sigma_n\) is the true anonymization function.

Remark 1. We note that the performance criterion described in Definition 8 is known as almost-perfect recovery. Another well-known performance criterion is the perfect recovery criterion, where all rows need to be matched. More formally, the perfect recovery criterion corresponds to \[\begin{align} \lim\limits_{n\to\infty} \Pr(\sigma_n\neq \hat{\sigma}_n) = 0 \end{align}\] Observe that the difference between the two performance criteria is that the almost-perfect criterion allows a sublinear fraction of the database rows to be mismatched. This mismatch budget allows us to use tools such as typicality from information and communication theories. We refer to [10] for an in-depth analytical comparison of the Gaussian database matching results under these two performance criteria.

Definition 9. (Distribution-Agnostic Matching Capacity) The distribution-agnostic matching capacity \(C\) is the supremum of the set of all distribution-agnostically achievable database growth rates corresponding to a database distribution \(p_X\), repetition distribution \(p_S\), obfuscation distribution \(p_{Y|X}\), and seed size \(\Lambda_n\).

In this work, our main objective is the characterization of the distribution-agnostic matching capacity \(C\). We remark that this is in contrast with prior work ([8], [12][14]) that assumed complete distributional knowledge.

Throughout, since we are interested in the distribution-agnostic matching capacity, we assume a positive database growth rate \(R>0\). In other words, we assume \(n\sim\log m_n\).

Our analysis hinges on the following assumptions:

Remark 2. (Assumptions)

  1. The anonymized database \(\mathbf{X}\) and the column repetition pattern \(S^n\) are known to be independent. It is also known that their components \(\smash{X_{i,j}}\) and \(\smash{S_i}\) are i.i.d., while the distributions \(p_X\) and \(p_S\) are not known.

  2. The conditional independence of the noisy replicas stated in Eq. (2) is known, leading to a memoryless obfuscation model, whereas the noise distribution \(p_{Y|X}\) is not.

  3. \(|\mathcal{X}|\) and \(s_{\max}\) are known.

As we argue in Sections 3.1 and 3.2, \(S^n\) can be detected without any assumptions on the correlation between \(\mathbf{X}\) and \(S^n\). In turn, one could test the independence of \(\mathbf{X}\) and the estimate \(\hat{S}^n\) of \(S^n\). Furthermore, the i.i.d. nature of the components of \(\mathbf{X}\) and \(\hat{S}^n\), and of the obfuscation, can be inferred via the Markov order estimation algorithm of [20] with a probability of error vanishing in \(n\), justifying Assumptions 1 and 2. Similarly, since \(|\mathcal{X}|\) and \(s_{\max}\) do not depend on \(n\), they can easily be estimated with a vanishing probability of error, justifying Assumption 3.

In our derivations, we rely on some well-known classical information-theoretic definitions and results which we present below for the sake of completeness.

Definition 10. (Joint Entropy [18]) The joint (Shannon) entropy associated with a pair of discrete random variables \((X,Y)\) with a joint distribution \(p_{X,Y}\) is defined as: \[\begin{align} H(X,Y) &\triangleq -\mathbb{E}[\log p_{X,Y}(X,Y)] \end{align}\]

Definition 11. (Joint Typicality [18]) The \(\epsilon\)-typical set \(A_\epsilon^{(n)}(X,Y)\) associated with discrete random variables \((X,Y)\) with joint distribution \(p_{X,Y}\) is the set of all sequence pairs \((x^n,y^n)\) satisfying \[\begin{align} \Big|-\frac{1}{n}&\log p_{X^n,Y^n}(x^n,y^n)-H(X,Y)\Big|\le \epsilon \end{align}\] where \(H(X,Y)\) is the joint entropy given in Definition 10.

Proposition 1. (Joint AEP [18]) Let \(\tilde{X}^n\) and \(\tilde{Y}^n\) be generated according to the i.i.d. marginal distributions \(p_{X^n}\) and \(p_{Y^n}\), independently. Then, the following holds: \[\begin{align} \Pr((\tilde{X}^n,\tilde{Y}^n)\in A_\epsilon^{(n)}(X,Y))&\le 2^{-n(I(X;Y)-3\epsilon)} \end{align}\] where \(I(X;Y)\triangleq H(X)+H(Y)-H(X,Y)\) is the mutual information.

3 Main Result

Our main result in Theorem 1 is on the distribution-agnostically achievable database growth rates when no prior information is provided on distributions \(p_{X}\), \(p_{Y|X}\), and \(p_S\).

Theorem 1. (Distribution-Agnostic Matching Capacity) Consider an anonymized and labeled correlated database pair. We assume that the underlying database distribution \(p_X\), the obfuscation distribution \(p_{Y|X}\), and the column repetition distribution \(p_S\) are unknown. Given a seed size \(\Lambda_n=\omega(\log n)\), the distribution-agnostic matching capacity is given by \[\begin{align} C&=I(X;Y^S|S) \label{eq:mainresult} \end{align}\qquad{(1)}\] where \(S\sim p_S\), \(X\sim p_X\) and \({Y^S=Y_1,\dots,Y_S}\) such that \(Y_i|X\overset{\text{i.i.d.}}{\sim} p_{Y|X}\).

Theorem 1 implies that given a seed size \({\Lambda_n=\omega(\log n)=\omega(\log\log m_n)}\), we can perform matching as if we knew the underlying distributions \(p_X\), \(p_{Y|X}\), and \(p_S\), and the actual column repetition pattern \(S^n\) a priori. Hence, comparing Theorem 1 with [15], we conclude that in the asymptotic regime where \(n\to\infty\), not knowing the distributions and the realizations of the repetition pattern causes no loss in the matching capacity.

We stress that the proof of the converse part of Theorem 1 directly follows from [15], since the distribution-agnostic matching capacity cannot exceed the matching capacity of the setting where the distributions are known. The rest of this section is devoted to the proof of the achievability part of Theorem 1. In Section 3.1, we present an algorithm for detecting noisy replicas and prove its asymptotic success. Then, in Section 3.2, we propose a seeded deletion detection algorithm and derive a sufficient seed size that guarantees its asymptotic success. Subsequently, in Section 3.3, we present our de-anonymization algorithm that incorporates the algorithms of Sections 3.1 and 3.2, and in Section 3.4 we use it to prove achievability. Finally, in Section 3.5, we focus on the special case of no obfuscation, where seeds are obsolete.

3.1 Distribution-Agnostic Noisy Replica Detection

Similar to [14], we use the running Hamming distances between the consecutive columns \({\displaystyle C_j^{(2)}}\) and \(\displaystyle C_{j+1}^{(2)}\) of \(\mathbf{Y}\), denoted by \(W_j\), \(j\in[K_n-1]\), where \(K_n\triangleq\sum_{j=1}^n S_j\), as a permutation-invariant feature of the labeled correlated database. More formally, \[\begin{align} W_j & \triangleq \sum\limits_{t=1}^{m_n} \mathbb{1}_{[Y_{t,j+1}\neq Y_{t,j}]}, \forall j\in[K_n-1]\label{eq:RHD} \end{align}\tag{4}\] We first note that \[\begin{align} W_j&\sim \begin{cases} \text{Binom}(m_n,p_0),& \text{if }C^{(2)}_j \perp\!\!\!\perp C^{(2)}_{j+1} \\ \text{Binom}(m_n,p_1), & \text{otherwise} \end{cases} \label{eq:binomialmixture} \end{align}\tag{5}\] where \[\begin{align} p_0 &\triangleq 1-\sum\limits_{y\in\mathcal{X}} p_Y(y)^2 \tag{6}\\ p_1 &\triangleq 1-\sum\limits_{x\in\mathcal{X}} p_X(x) \sum\limits_{y\in\mathcal{X}} p_{Y|X}(y|x)^2\tag{7} \end{align}\] From [16], we know that as long as the databases are dependent, i.e., \(p_{X,Y}\neq p_X p_Y\), we have \(p_0>p_1\), suggesting that replicas can potentially be detected based on \(W_j\), similar to [14], [16]. However, the algorithm in [14] relies on a threshold test whose threshold depends on \(p_{X,Y}\) through \(p_0\) and \(p_1\). In Algorithm 3, we propose the following modification for the distribution-agnostic setting: We first construct the estimates \(\hat{p}_0\) and \(\hat{p}_1\) of the respective parameters \(p_0\) and \(p_1\) through the moment estimator proposed by Blischke [21], and then use the algorithm proposed in [14] for replica detection. Note that we can use Blischke’s estimator because the Binomial mixture is guaranteed to have two distinct components. More formally, the distribution of \(W_j\) conditioned on \(S^n\) is given by \[\begin{align} \Pr(W_j=w|S^n) &= \binom{m_n}{w} [\alpha p_0^{w} (1-p_0)^{m_n-w}+(1-\alpha) p_1^w (1-p_1)^{m_n-w}] \end{align}\] for \(w=0,\dots,m_n\), where the mixing parameter \(\alpha\) is given by \[\begin{align} \alpha &= \frac{1}{K_n-1}\left(n-\sum\limits_{j=1}^n \mathbb{1}_{[S_j=0]}\right) \end{align}\] Since \(p_S\), and in turn \(\delta\), are constant in \(n\), it can easily be verified that as \(n\to\infty\), \(\alpha\overset{p}{\to} \frac{1-\delta}{\mathbb{E}[S]}\). Hence \(\alpha\) is bounded away from both 0 and 1, suggesting that the moment estimator of [21], and in turn Algorithm 3, can be used to detect the replicas.
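The following minimal Python sketch condenses this procedure; the function name is ours, and the estimator assumes the mixture is non-degenerate (two distinct components), as argued above.

```python
import numpy as np

def detect_replicas(Y):
    """Distribution-agnostic noisy replica detection (sketch of Algorithm 3).

    Returns a boolean vector whose j-th entry is True when columns j and
    j+1 of Y are inferred to be noisy replicas.
    """
    m = Y.shape[0]
    # Running Hamming distances W_j between consecutive columns (Eq. (4)).
    W = (Y[:, 1:] != Y[:, :-1]).sum(axis=0).astype(float)

    # Blischke's factorial-moment estimator for the Binomial mixture
    # (Eqs. (21)-(23) in Appendix 5.1).
    F1 = np.mean(W / m)
    F2 = np.mean(W * (W - 1) / (m * (m - 1)))
    F3 = np.mean(W * (W - 1) * (W - 2) / (m * (m - 1) * (m - 2)))
    U = (F3 - F1 * F2) / (F2 - F1**2)
    disc = np.sqrt(max(U**2 - 4 * U * F1 + 4 * F2, 0.0))  # guard for finite samples
    p0_hat, p1_hat = (U + disc) / 2, (U - disc) / 2

    # Threshold test: replica pairs have the smaller parameter p_1 < p_0.
    tau = (p0_hat + p1_hat) / 2
    return W < m * tau
```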

Figure 3: Distribution-Agnostic Noisy Replica Detection Algorithm

Lemma 1. (Noisy Replica Detection) Let \(E_j\) denote the event that Algorithm 3 fails to infer the correct relationship between \(C^{(2)}_{j}\) and \(C^{(2)}_{j+1}\), \(j=1,\dots,K_n-1\). Then, given \(m_n=\omega(\log n)\) \[\begin{align} \kappa^{(1)}_n&\triangleq \Pr(\bigcup\limits_{j=1}^{K_n-1} E_j)\to 0\text{ as }n\to\infty. \end{align}\]

Proof. See Appendix 5.1. ◻

Note that the condition in Lemma 1 is automatically satisfied since \(m_n\) is exponential in \(n\) (Definition 7). Furthermore, Algorithm 3 has a runtime of \(\mathcal{O}(m_n n)\), the computational bottleneck being the computation of the running Hamming distances \(W_j\). Finally, we stress that as opposed to deletion detection, discussed in Section 3.2, no seeds are necessary for replica detection.

3.2 Distribution-Agnostic Deletion Detection Using Seeds

In this section, we propose a deletion detection algorithm that utilizes the seeds. Since the replica detection algorithm of Section 3.1 (Algorithm 3) has a vanishing probability of error, for notational simplicity we will focus on a deletion-only setting throughout this subsection. Let \(\mathbf{G}^{(1)}\) and \(\mathbf{G}^{(2)}\) be the seed matrices with respective sizes \(\Lambda_n\times n\) and \(\Lambda_n\times \tilde{K}_n\), and denote the \(j\)th column of \(\mathbf{G}^{(r)}\) with \(\smash{G_j^{(r)}}\), \(r=1,2\), where \(\tilde{K}_n\triangleq\sum_{j=1}^n \mathbb{1}_{[S_j\neq 0]}\). Furthermore, for the sake of brevity, let \(L_{i,j}\) denote the Hamming distance between \(\smash{G^{(1)}_i}\) and \(\smash{G^{(2)}_j}\) for \((i,j)\in[n]\times [\tilde{K}_n]\). More formally, let \[\begin{align} L_{i,j}&\triangleq \sum\limits_{t=1}^{\Lambda_n} \mathbb{1}_{[G^{(1)}_{t,i}\neq G^{(2)}_{t,j}]}\label{eq:crosshammingdist} \end{align}\tag{8}\] Observe that \[\begin{align} L_{i,j}&\sim \begin{cases} \text{Binom}(\Lambda_n,q_0),& G^{(1)}_i \perp\!\!\!\perp G^{(2)}_{j} \\ \text{Binom}(\Lambda_n,q_1), & \text{otherwise} \end{cases} \end{align}\]where \[\begin{align} q_0 &= 1-\sum\limits_{x\in\mathcal{X}} p_X(x) p_Y(x)\\ q_1 &= 1-\sum\limits_{x\in\mathcal{X}} p_{X,Y}(x,x) \end{align}\]

Thus, we have a problem seemingly similar to the one in Section 3.1. However, we cannot utilize similar tools here because of the following:

  1. Recall that the two components \(p_0\) and \(p_1\) of the Binomial mixture discussed in Section 3.1 were distinct for any underlying joint distribution \(p_{X,Y}\) as long as the databases are dependent, i.e., \(p_{X,Y}\neq p_X p_Y\). Unfortunately, the same idea does not automatically work here, as demonstrated by the following example: Suppose \(X_{i,j}\sim\text{Unif}(\mathcal{X})\), and the transition matrix \(\mathbf{P}\) associated with \(p_{Y|X}\) has unit trace. Then, \[\begin{align} q_0-q_1&=\sum\limits_{x\in\mathcal{X}} p_{X,Y}(x,x) - p_X(x) p_Y(x)\\ &= \frac{1}{|\mathcal{X}|} \sum\limits_{x\in\mathcal{X}} p_{Y|X}(x|x) - p_Y(x)\\ &= \frac{1}{|\mathcal{X}|}(\text{tr}(\mathbf{P})-1)\\ &= 0 \end{align}\] In [14], this problem was solved using the following modification: Based on \(p_{X,Y}\), a bijective remapping \(\Phi\in \mathcal{S}(\mathcal{X})\) is picked and applied to all the entries of \(\mathbf{G}^{(2)}\) to obtain \(\mathbf{G}^{(2)}(\Phi)\) before computing the Hamming distances \(L_{i,j}\), where \(\mathcal{S}(\mathcal{X})\) denotes the symmetry group of \(\mathcal{X}\). Denoting the resulting version of the Hamming distance by \(L_{i,j}(\Phi)\), where \[\begin{align} L_{i,j}(\Phi)&\triangleq \sum\limits_{t=1}^{\Lambda_n} \mathbb{1}_{[G^{(1)}_{t,i}\neq G^{(2)}_{t,j}(\Phi)]}\label{eq:crosshammingdistremapped} \end{align}\tag{9}\] it was proved in [14] that, as long as \(p_{X,Y}\neq p_X p_Y\), there exists \(\Phi\in \mathcal{S}(\mathcal{X})\) such that \(q_0(\Phi)\neq q_1(\Phi)\). We will call such \(\Phi\) a useful remapping.

  2. In the known distribution setting, we chose a useful remapping \(\Phi\) and threshold \(\tau_n\) for Hamming distances based on \(p_{X,Y}\). In Section 3.1, we solved the distribution-agnostic case via parameter estimation in Binomial mixtures. However, the same approach does not work here. Suppose the \(j\)th retained column \(\smash{G_{j}^{(2)}}\) of \(\mathbf{G}^{(2)}\) is correlated with \(\smash{G_{r_j}^{(1)}}\). Then the \(j\)th column of \(\mathbf{L}(\Phi)\) will have a Binom(\(\Lambda_n,q_1(\Phi)\)) component in the \(r_j\)th row, whereas the remaining \(n-1\) rows will contain Binom(\(\Lambda_n,q_0(\Phi)\)) components, as illustrated in Figure 4. Hence, it can be seen that the mixture parameter \(\beta\) of this Binomial mixture distribution approaches 1 since \[\begin{align} \beta &= \frac{(n-1)\tilde{K}_n}{n \tilde{K}_n} = 1-\frac{1}{n} \end{align}\] This imbalance prevents us from performing a parameter estimation as done in Algorithm 3.

Figure 4: Hamming distances between the columns of \(\mathbf{G}^{(1)}\) and \(\mathbf{G}^{(2)}\) with \(n=10\), \(\tilde{K}_n=7\) and \(\Lambda_n=10^4\) for \(q_0\approx 0.76\) and \(q_1\approx 0.92\). The \((i,j)\)th element corresponds to \(L_{i,j}\), with the color bar indicating the approximate values. It can be seen that there are no outliers in the \(4\)th, \(6\)th, and \(10\)th rows. Hence, it can be inferred that \(I_{\text{del}}=(4,6,10)\).

In Algorithm 5, we exploit the aforementioned observation that for a useful remapping \(\Phi\), in each column of \(\mathbf{L}(\Phi)\), there is exactly one element with a different underlying distribution, while the remaining \(n-1\) entries are i.i.d., rendering this entry an outlier. Note that \(L_{i,j}(\Phi)\) being an outlier corresponds to \(G_i^{(1)}\) and \(G_j^{(2)}(\Phi)\) being dependent, and in turn \(S_i\neq 0\). On the other hand, the lack of outliers in any given column of \(\mathbf{L}(\Phi)\) implies that \(\Phi\) is not useful. Thus, Algorithm 5 is capable of deciding whether a given remapping is useful or not. In fact, the algorithm sweeps over all elements of \(\mathcal{S}(\mathcal{X})\) until it encounters a useful one.

To detect the outliers in \(\mathbf{L}(\Phi)\), we propose to use the absolute deviations \(\mathbf{M}(\Phi)\), i.e., the distances of \(L_{i,j}(\Phi)\) to the sample mean \(\mu(\Phi)\) of \(\mathbf{L}(\Phi)\). More formally, we have \[\begin{align} \mu(\Phi) &\triangleq \frac{1}{n \tilde{K}_n}\sum\limits_{i=1}^{n} \sum\limits_{j=1}^{\tilde{K}_n} L_{i,j}(\Phi)\tag{10}\\ M_{i,j}(\Phi) &\triangleq |L_{i,j}(\Phi)-\mu(\Phi)|, \forall (i,j)\in [n]\times[\tilde{K}_n]\tag{11} \end{align}\] In Algorithm 5, we test these absolute deviations \(M_{i,j}(\Phi)\) against a threshold \(\hat{\tau}_n\) independent of the underlying distribution \(p_{X,Y}\). If \(M_{i,j}(\Phi)\) exceeds \(\hat{\tau}_n\), we detect retention, i.e., non-deletion, of the \(i\)th column.

Note that this step is equivalent to utilizing Z-scores (also known as standard scores), a well-studied concept in statistical outlier detection [22], where the absolute deviations are also divided by the sample standard deviation. In Algorithm 5, for the sake of notational brevity, we will avoid such division.

We observe that the runtime of Algorithm 5 is \(\mathcal{O}(\Lambda_n n^2)\) due to the computation of \(\mathbf{L}(\Phi)\).
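A minimal Python sketch of this procedure is given below; the function name and the brute-force sweep over the symmetry group are our own illustrative choices, and the threshold \(\hat{\tau}_n\) is passed in as a parameter.

```python
import numpy as np
from itertools import permutations

def detect_retained(G1, G2, tau_hat):
    """Seeded deletion detection via outliers (sketch of Algorithm 5).

    G1: (Lambda_n, n) seed matrix; G2: (Lambda_n, K_tilde) seed matrix.
    Returns the estimated retained-column indices (r_1, ..., r_K_tilde),
    or None if no remapping in S(X) is declared useful.
    """
    alphabet = np.unique(np.concatenate([G1.ravel(), G2.ravel()]))
    for phi in permutations(alphabet):                  # sweep over S(X)
        phi = np.asarray(phi)
        G2r = phi[np.searchsorted(alphabet, G2)]        # remapped G2(phi)
        # Hamming distances L_{i,j}(phi) for all column pairs (Eq. (9)).
        L = (G1[:, :, None] != G2r[:, None, :]).sum(axis=0)
        M = np.abs(L - L.mean())                        # deviations, Eqs. (10)-(11)
        if (M.max(axis=0) >= tau_hat).all():            # an outlier in every column
            return np.argmax(M, axis=0)                 # outlier rows give I_R-hat
    return None
```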

Figure 5: Distribution-Agnostic Seeded Deletion Detection Algorithm

Lemma 2 below states that for a sufficient seed size, \({\Lambda_n=\omega(\log n)= \omega(\log\log m_n)}\), Algorithm 5 works correctly with high probability.

Lemma 2. (Deletion Detection) Let \({I_R=\{j\in[n]: S_j\neq 0\}}\) be the true set of indices of retained columns and \(\hat{I}_R\) be its estimate output by Algorithm 5. Then for any seed size \({\Lambda_n=\omega(\log n)}\), we have \[\begin{align} \kappa^{(2)}_n\triangleq \Pr(\hat{I}_R \neq I_R) \to 0 \text{ as } n\to\infty. \end{align}\]

Proof. See Appendix 5.2. ◻

3.3 Distribution-Agnostic De-Anonymization Scheme

In this section, we propose a de-anonymization scheme by combining Algorithm 3 and Algorithm 5, and performing a modified version of the typicality-based de-anonymization proposed in [14]. This then leads to the achievability proof of Theorem 1 in Section 3.4.

Given the database pair \((\mathbf{X},\mathbf{Y})\) and the corresponding seed matrices \((\mathbf{G}^{(1)},\mathbf{G}^{(2)})\), the proposed de-anonymization scheme given in Algorithm 6 is as follows:

  1. Detect the replicas through Algorithm 3.

  2. Remove all the extra replica columns from the seed matrix \(\mathbf{G}^{(2)}\) to obtain \(\tilde{\mathbf{G}}^{(2)}\) and perform seeded deletion detection via Algorithm 5 using \(\mathbf{G}^{(1)},\tilde{\mathbf{G}}^{(2)}\). At this step, we have an estimate \(\hat{S}^n\) of the column repetition pattern \(S^n\).

  3. Based on \(\hat{S}^n\) and the matching entries in \(\mathbf{G}^{(1)},\tilde{\mathbf{G}}^{(2)}\), obtain the maximum likelihood estimate [23] \(\hat{p}_{X,Y^S|S}\) of \(p_{X,Y^S|S}\) where \[\begin{align} \hat{p}_X(x)&\triangleq \frac{1}{\Lambda_n n} \sum\limits_{i=1}^{\Lambda_n}\sum\limits_{j=1}^{n}\mathbb{1}_{[G_{i,j}^{(1)}=x]},\forall x\in\mathcal{X}\tag{12}\\ \hat{p}_{Y|X}(y|x) &\triangleq\frac{\sum\limits_{i=1}^{\Lambda_n}\sum\limits_{j=1}^{\tilde{K}_n}\mathbb{1}_{[G^{(1)}_{i,r_j}=x,\tilde{G}^{(2)}_{i,j}=y ]}}{\sum\limits_{i=1}^{\Lambda_n}\sum\limits_{j=1}^{\tilde{K}_n}\mathbb{1}_{[G^{(1)}_{i,r_j}=x ]}} ,\forall (x,y)\in\mathcal{X}^2\\ \hat{p}_S(s) &\triangleq\frac{1}{n}\sum\limits_{j=1}^n \mathbb{1}_{[\hat{S}_j=s]},\forall s\ge0\tag{13} \end{align}\] and construct \[\begin{align} \hat{p}_{X,{Y}^S|S}(x,y^s|s)&=\begin{cases} \hat{p}_X(x) \mathbb{1}_{[y^s = \ast]} &\text{if } s=0\\ \hat{p}_X(x) \prod\limits_{j=1}^s \hat{p}_{Y|X}(y_j|x) &\text{if } s\ge 1 \end{cases} \end{align}\] with \(y^s=y_1\dots y_s\) and \(\ast\) denoting erasure.

  4. Using \(\hat{S}^n\), place markers between the noisy replica runs of different columns to obtain \(\tilde{\mathbf{Y}}\). If a run has length 0, i.e., the corresponding column is deleted, introduce a column consisting of the erasure symbol \(\ast\notin\mathcal{X}\).

  5. Fix \(\epsilon>0\). Match the \(l\)th row \(Y^{K_n}_l\) of \(\tilde{\mathbf{Y}}\) with the \(i\)th row \(X^n_i\) of \(\mathbf{X}\), if \(X^n_i\) is the only row of \(\mathbf{X}\) jointly \(\epsilon\)-typical (see Definition 11) with \(Y^{K_n}_l\) according to \(\hat{p}_{X,Y^S|S}\), assigning \(\hat{\sigma}_n(i)=l\). Otherwise, declare an error.

Finally, note that the runtime of Algorithm 6 is \(\mathcal{O}(m_n^2 n)\) due to the typicality check (each \(\mathcal{O}(n)\)) for all row pairs \((X_i^n,Y_j^{K_n})\), \((i,j)\in[m_n]^2\).
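To illustrate Steps 3-5, the sketch below implements the typicality check under the estimated distributions; the helper names, the row-major data layout, and the assumption of full-support estimates (to keep the logarithms finite) are ours.

```python
import numpy as np

def cond_joint_entropy(pX, PYgX, pS):
    """Empirical H(X, Y^S | S) under the Step-3 estimates: the replicas
    are conditionally i.i.d. given X, so H(X, Y^s) = H(X) + s * H(Y|X);
    for s = 0 only H(X) remains, the erasure being deterministic.
    Assumes full-support estimates so all logarithms are finite."""
    HX = -(pX * np.log2(pX)).sum()
    HYgX = -(pX[:, None] * PYgX * np.log2(PYgX)).sum()
    return sum(ps * (HX + s * HYgX) for s, ps in enumerate(pS))

def match_rows(X, runs, pX, PYgX, pS, eps):
    """Joint-typicality matching (sketch of Step 5 of Algorithm 6).

    runs[l][j]: tuple of the noisy replicas of column j in row l of
    Y-tilde (empty when S_hat_j = 0). Returns sigma_hat; -1 marks rows
    of X left unmatched (the declared errors).
    """
    m, n = X.shape
    H = cond_joint_entropy(pX, PYgX, pS)
    sigma_hat = np.full(m, -1, dtype=int)
    for l in range(m):                                 # row of Y-tilde
        typical = []
        for i in range(m):                             # candidate row of X
            logp = sum(np.log2(pX[X[i, j]])
                       + sum(np.log2(PYgX[X[i, j], y]) for y in runs[l][j])
                       for j in range(n))
            if abs(-logp / n - H) <= eps:              # Definition 11
                typical.append(i)
        if len(typical) == 1:                          # unique typical row
            sigma_hat[typical[0]] = l
    return sigma_hat
```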

3.4 Proof of Theorem 1

We are now ready to prove Theorem 1 where we use Algorithm 6 to obtain a matching scheme \(\hat{\sigma}_n\).

Figure 6: Distribution-Agnostic Database De-Anonymization Scheme

Let \(\kappa_n^{(1)}\) and \(\kappa_n^{(2)}\) be the error probabilities of the noisy replica detection (Algorithm 3) and the seeded deletion detection (Algorithm 5) algorithms, respectively. Using Eqs. (12)-(13) and the Law of Large Numbers, we have \[\begin{align} \hat{p}_{X,Y^S|S}\overset{p}{\to} p_{X,Y^S|S} \end{align}\] and by the Continuous Mapping Theorem [24] we have \[\begin{align} \hat{H}(X,Y^S|S)&\overset{p}{\to} H(X,Y^S|S)\\ \hat{I}(X;Y^S|S)& \overset{p}{\to} I(X;Y^S|S) \end{align}\] where \(\hat{H}(X,Y^S|S)\) and \(\hat{I}(X;Y^S|S)\) denote the conditional joint entropy and conditional mutual information associated with \(\hat{p}_{X,Y^S|S}\), respectively. Thus, for any \(\epsilon>0\) we have \[\begin{align} \kappa_n^{(3)}&\triangleq\Pr(|\hat{H}(X,Y^S|S)-H(X,Y^S|S)|>\epsilon)\overset{n\to\infty}{\longrightarrow}0\\ \kappa_n^{(4)}&\triangleq\Pr(|\hat{I}(X;Y^S|S)-I(X;Y^S|S)|>\epsilon)\overset{n\to\infty}{\longrightarrow}0 \end{align}\]

Using Proposition 1 and a series of union bounds and triangle inequalities involving \(\hat{H}(X,Y^S|S)\), \(H(X,Y^S|S)\), \(\hat{I}(X;Y^S|S)\), and \(I(X;Y^S|S)\), the probability of error of the de-anonymization scheme (see Eq. (3)) can be bounded as \[\begin{align} \Pr(\text{Error})&\le 2^{-n(I(X;Y^S|S)-4 \epsilon-R)}+\epsilon + \sum\limits_{i=1}^4 \kappa_n^{(i)}\label{eq:error}\\ &\le 2\epsilon \end{align}\tag{14}\] as \(n\to\infty\), as long as \(R<I(X;Y^S|S)-4\epsilon\), concluding the proof of the main result.

3.5 No-Obfuscation Setting

In Sections 3.1 and 3.2, we focused on the noisy/obfuscated setting and proposed algorithms for detecting noisy replicas (Algorithm 3) and column deletions (Algorithm 5). As discussed in Sections 3.1 and 3.2, the key idea behind detection is either extracting permutation-invariant features of the columns (Algorithm 3) or assuming the correct matching is given for the seeds (Algorithm 5). We showed that in the general noisy setting, for the latter approach to succeed, a double-logarithmic seed size \(\Lambda_n\) is sufficient.

Now, we will focus on the no-obfuscation (repetition-only) setting, where \[\begin{align} p_{Y|X}(y|x) &= \mathbb{1}_{[y=x]},\forall (x,y)\in \mathcal{X}^2 \end{align}\] as a special case. Following a similar approach to Algorithm 3, through the extraction of a new set of permutation-invariant features of the database columns, we argue that seeds are not required for repetition (both replica and deletion) detection. Specifically, we will replace Algorithm 3 and Algorithm 5 with the single histogram-based detection algorithm of [16], which works as follows:

  1. First, we construct the respective column histogram matrices \(\mathbf{H}^{(1)}\) and \(\mathbf{H}^{(2)}\) of \(\mathbf{X}\) and \(\mathbf{Y}\) as follows: \[\begin{align} H^{(1)}_{i,j} &= \sum\limits_{t=1}^{m_n} \mathbb{1}_{[X_{t,j}=i]}, \forall i\in\mathcal{X}, \forall j\in[n]\tag{15}\\ H^{(2)}_{i,j} &= \sum\limits_{t=1}^{m_n} \mathbb{1}_{[Y_{t,j}=i]}, \forall i\in\mathcal{X}, \forall j\in[K_n]\tag{16} \end{align}\]

  2. Next, we count the number of times the \(j\)th column \(\smash{{H}^{(1)}_j}\) of \(\mathbf{H}^{(1)}\) is present in \(\mathbf{H}^{(2)}\). If \(\smash{{H}^{(1)}_j}\) is present \(s\) times in \(\mathbf{H}^{(2)}\), we assign \(\hat{S}_j = s\), where \(\hat{S}^n\) is the estimate of the repetition pattern \(S^n\). If \(\smash{{H}^{(1)}_j}\) is absent in \(\mathbf{H}^{(2)}\), we assign \(\hat{S}_j = 0\).

Note that a repetition error could occur only when there are identical columns in \(\mathbf{H}^{(1)}\), an event whose probability, we argue in the following lemma, vanishes with the column size \(n\):

Lemma 3. (Asymptotic Uniqueness of the Column Histograms [16]) Let \({H}^{(1)}_j\) denote the histogram of the \(j\)th column of \(\mathbf{X}\). Then, for any \(m_n=\omega(n^\frac{4}{|\mathcal{X}|-1})\) \[\begin{align} \lim\limits_{n\to \infty}\Pr\left(\exists i,j\in [n],\: i\neq j,H^{(1)}_i=H^{(1)}_j\right)\to 0. \label{eq:HBD} \end{align}\qquad{(2)}\]

Note that since \(m_n\) is exponential in \(n\) (see Definition 7), \(m_n=\omega(n^s)\) for any fixed \(s\in\mathbb{Q}\). Hence, the order relation given in Lemma 3 is automatically satisfied, and repetition detection can be performed without any seeds in the no-obfuscation setting, with a runtime of \(\mathcal{O}(m_n n)\) due to the computation of the histogram matrices \((\mathbf{H}^{(1)},\mathbf{H}^{(2)})\).
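A compact Python sketch of this histogram-based detector is given below, assuming a 0-indexed alphabet; the function name is our own.

```python
import numpy as np

def detect_repetitions_histogram(X, Y):
    """Histogram-based repetition detection (sketch of the detection
    step of Algorithm 7, no-obfuscation setting). Returns S_hat, where
    S_hat[j] counts the occurrences of the histogram of column j of X
    among the column histograms of Y (Eqs. (15)-(16)); 0 marks deletion.
    """
    K = int(max(X.max(), Y.max())) + 1                 # alphabet size |X|
    H1 = np.stack([np.bincount(X[:, j], minlength=K)
                   for j in range(X.shape[1])], axis=1)
    H2 = np.stack([np.bincount(Y[:, j], minlength=K)
                   for j in range(Y.shape[1])], axis=1)
    return np.array([(H2 == H1[:, [j]]).all(axis=0).sum()
                     for j in range(X.shape[1])])
```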

For the no-obfuscation setting, we also consider a modified de-anonymization scheme, given in Algorithm 7, that does not rely on any estimates of the underlying distributions. Specifically, we perform de-anonymization via exact sequence matching as follows:

  1. First, we perform repetition detection via the histogram-based detection algorithm described above.

  2. Next, we discard the deleted columns from \(\mathbf{X}\) to obtain \(\smash{\bar{\mathbf{X}}}\).

  3. Similarly, we discard all additional copies in a repetition run in \(\mathbf{Y}\) to obtain \(\bar{\mathbf{Y}}\).

  4. Finally, we match the \(i\)th row \(X_i^n\) with the \(j\)th row \(Y_j^{K_n}\), assigning \(\hat{\sigma}_n(i)=j\), if \(\smash{\bar{Y}_j^{\hat{K}_n}}\) is the only row of \(\bar{\mathbf{Y}}\) equal to \(\smash{\bar{X}_i^{\hat{K}_n}}\).

The histogram-based detection and the modified de-anonymization scheme are given in Algorithm 7. We note that the overall runtime of Algorithm 7 is \(\mathcal{O}(m_n^2 n)\) due to the exact row matching (each \(\mathcal{O}(n)\)) of \(m_n^2\) row pairs.

The matching scheme described above and given in Algorithm 7 leads to the achievability result given in Theorem 2. The converse immediately follows from the distribution-aware matching capacity result of [15].

Theorem 2. (Distribution-Agnostic Matching Capacity in the No-Obfuscation Setting) Consider an anonymized and labeled correlated database pair. We assume that the underlying database distribution \(p_X\) and the column repetition distribution \(p_S\) are unknown. Suppose there is no obfuscation, i.e., \[\begin{align} p_{Y|X}(y|x) &= \mathbb{1}_{[y=x]},\forall (x,y)\in \mathcal{X}^2. \end{align}\] Then, for any seed size \(\Lambda_n\), the distribution-agnostic matching capacity \(C\) is given by \[\begin{align} C&= (1-\delta) H(X).\label{eq:noiselessachievability} \end{align}\qquad{(3)}\]

Proof. See Appendix 5.3. ◻

Observe that Theorem 2 states that the repetition distribution \(p_S\) appears in the matching capacity only through the deletion probability \(\delta\) in the no-obfuscation setting.

Figure 7: Distribution-Agnostic De-Anonymization In No-Obfuscation Setting

4 Non-Asymptotic Regime

In Section 3, we devised various detection and de-anonymization algorithms and established their performance guarantees (in terms of error probabilities and matching capacity) in the asymptotic regime where the column size \(n\) grows to infinity. In this section, through extensive experiments, we study the performance of these algorithms when \(n\) is finite.

Throughout this section, unless stated otherwise, we will focus on the following simulation configuration:

  • Database size \((m_n,n)\): We choose column size \(n=100\) with the exception of the evaluation of the de-anonymization scheme in Section 4.3 where we take \(n=25\). We evaluate the performance of our algorithms for a range of values of row sizes \(m_n\).

  • Database distribution \(p_X\): Without loss of generality, we focus on the uniform distribution \(p_X(x)=\frac{1}{|\mathcal{X}|}, \forall x\in\mathcal{X}\) where we arbitrarily take the alphabet size \(|\mathcal{X}|=5\). In Section 4.4, we will sweep over \(|\mathcal{X}|\).

  • Obfuscation \(p_{Y|X}\): We will consider the \(|\mathcal{X}|\)-ary symmetric channel with a crossover probability \(\epsilon\). More formally, for any \((x,y)\in\mathcal{X}^2\), we have \[\begin{align} p_{Y|X}(y|x) &=\begin{cases} 1-\epsilon,&\text{if }y=x\\ \frac{\epsilon}{|\mathcal{X}|-1},& \text{if }y\neq x \end{cases} \end{align}\] where we simulate with a range of values of \(\epsilon\).

  • Repetition distribution \(p_S\): We consider a model for the repetition distribution with \[\begin{align} p_S(s) &= \begin{cases} \delta,&\text{if }s=0\\ 1-\delta-\gamma,& \text{if }s=1\\ \gamma,& \text{if }s=2 \end{cases} \end{align}\] where \(\delta = 0.3\) and \(\gamma =0.2\).
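For reference, this configuration can be written down directly in code; this is our own sketch, with \(\epsilon=0.1\) standing in for the swept crossover values.

```python
import numpy as np

# Simulation configuration of Section 4 (sketch): uniform p_X over
# |X| = 5 symbols, |X|-ary symmetric channel, three-point repetition law.
A, eps, delta, gamma = 5, 0.1, 0.3, 0.2
p_X = np.full(A, 1 / A)
P_YgX = np.full((A, A), eps / (A - 1))
np.fill_diagonal(P_YgX, 1 - eps)
p_S = np.array([delta, 1 - delta - gamma, gamma])   # Pr(S = 0, 1, 2)
```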

4.1 Noisy Replica Detection

Figure 8 demonstrates the relationship between the replica detection error \(\kappa^{(1)}_n\) of Algorithm 3 and the row size \(m_n\). Note that the linear relation in the semi-logarithmic plot validates the exponential decay of \(\kappa^{(1)}_n\) with \(m_n\) stated in Eq. (28).

Figure 8: Probability of error of the noisy replica detection algorithm (Algorithm 3) \(\kappa^{(1)}_n\) vs. the row size \(m_n\) with \(10^5\) trials. The y-axis is given in logarithmic scale to validate the exponential relation between the error probability and \(m_n\) given in Eq. (28). Different curves correspond to different crossover probabilities \(\epsilon\).

Figure 8 shows that as the crossover probability \(\epsilon\) increases, i.e., the labeled correlated database \(\mathbf{Y}\) gets noisier, the error probability decay becomes slower, leading to a worse detection performance, as expected. This is due to the following: Our simulation configuration leads to the following Binomial parameters \(p_0\) (Eq. (6)) and \(p_1\) (Eq. (7)): \[\begin{align} p_0 &= 1-\frac{1}{|\mathcal{X}|}\\ p_1 &= 1-(1-\epsilon)^2-\frac{\epsilon^2}{|\mathcal{X}|-1} \end{align}\] Consequently, \[\begin{align} p_0-p_1 &= (1-\epsilon)^2+\frac{\epsilon^2}{|\mathcal{X}|-1}-\frac{1}{|\mathcal{X}|}, \end{align}\] and \(D(\tau\|p_0)\) and \(D(1-\tau\|1-p_1)\) are decreasing functions of \(\epsilon\) for any \(\epsilon<1-\frac{1}{|\mathcal{X}|}\). Thus, we have a worse replica detection performance for a higher crossover probability \(\epsilon\).
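A quick numeric check of these expressions (our own snippet, using the simulation parameters of this section) confirms that the gap \(p_0-p_1\) shrinks as \(\epsilon\) grows:

```python
# Evaluate Eqs. (6)-(7) under the |X|-ary symmetric channel, |X| = 5.
def p0_p1(A, eps):
    p0 = 1 - 1 / A                                   # p_Y is uniform here
    p1 = 1 - (1 - eps) ** 2 - eps ** 2 / (A - 1)
    return p0, p1

for eps in (0.0, 0.1, 0.2, 0.3):
    p0, p1 = p0_p1(5, eps)
    print(f"eps={eps:.1f}: p0={p0:.3f}, p1={p1:.3f}, gap={p0 - p1:.3f}")
# The gap p0 - p1 shrinks as eps grows, matching the slower error decay.
```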

4.2 Seeded Deletion Detection

While our replica detection algorithm (Algorithm 3) works for a small column size as shown in Figure 8, our seeded deletion detection algorithm (Algorithm 5) requires large column sizes. This is because the threshold \(\hat{\tau}_n\) of Algorithm 5 is chosen based on its asymptotic properties and does not take estimated parameters into account, unlike the threshold \(m_n\tau\) of Algorithm 3. Hence, while the outliers still exist in \(\mathbf{L}(\Phi)\) (Eq. (9)) for any useful remapping \(\Phi\), the threshold becomes too large to distinguish the outlier from the rest of the entries.

Figure 9: Probability of error of the modified seeded deletion detection algorithm (Algorithm 10) vs. the seed size \(\Lambda_n\) with \(\tilde{\tau}=1.5\) and \(10^4\) trials. The y-axis is given in the logarithmic domain to validate the exponential relation between the deletion detection error probability and \(\Lambda_n\) given by Lemma 2. Different curves correspond to different crossover probabilities \(\epsilon\).

To overcome this problem, we propose a modification to Algorithm 5 based on the following observation: As shown for a family of distributions in [25], the largest two order statistics of a sequence of \(n\) i.i.d. random variables typically have close values for large \(n\). More formally, let \(T_{(1),j}\le \dots\le T_{(n),j}\) be the order statistics of the \(j\)th column of \(\mathbf{M}(\Phi)\) (Eq. (11)) and define \[\begin{align} R_{(i),j} &\triangleq \frac{T_{(i),j}}{T_{(i-1),j}}, 2\le i\le n. \label{eq:ratioorderstats} \end{align}\tag{17}\] Then, for large \(n\), we expect to have \(T_{(n),j} = M_{r_j,j}(\Phi)\), where \(r_j\) is the index of the \(j\)th retained column (Eq. (30)), and \[\begin{align} \lim\limits_{n\to\infty} R_{(n),j} &= \infty\\ \lim\limits_{n\to\infty} R_{(n-1),j} &= 1 \end{align}\] Based on this, we propose the following modification: First, we compute the sample means \(\bar{R}_{(n)}\) and \(\bar{R}_{(n-1)}\) of \(\smash{\{R_{(n),j}\}_{j=1}^{\hat{K}_n}}\) and \(\smash{\{R_{(n-1),j}\}_{j=1}^{\hat{K}_n}}\), respectively. Then, the algorithm declares \(\Phi\) to be useful if \[\begin{align} \frac{\bar{R}_{(n)}}{\bar{R}_{(n-1)}}&\ge\tilde{\tau} \end{align}\] for a threshold \(\tilde{\tau}\) above 1. We note that while any finite value of \(\tilde{\tau}\) that is larger than and bounded away from \(1\) works in the asymptotic regime where \(n\to\infty\), in the small column size regime, for each value of \(n\), \(\tilde{\tau}\) is a heuristically chosen threshold slightly larger than 1, e.g., \(\tilde{\tau} = 1.5\) for \(n=100\).

After deciding that \(\Phi\) is useful, the \(j\)th retained column is inferred as: \[\begin{align} \hat{I}_R(j) &= \mathop{\mathrm{\arg\!\max}}\limits_{i\in[n]} M_{i,j}(\Phi) \end{align}\] The resulting modified deletion detection algorithm is given in Algorithm 10. From Figure 9, it can be seen that Algorithm 10 performs well for a small column size.
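The order-statistic test can be sketched as follows; the function names are ours, and the snippet assumes the three largest deviations in each column are nonzero so that the ratios in Eq. (17) are well defined.

```python
import numpy as np

def remapping_is_useful(M, tau_tilde=1.5):
    """Order-statistic usefulness test of the modified detector (Algorithm 10).

    M: (n, K) matrix of absolute deviations M_{i,j}(Phi). Declares Phi
    useful when the largest deviation per column dominates the second
    largest on average (Eq. (17)); tau_tilde = 1.5 suits n = 100.
    """
    top3 = np.sort(M, axis=0)[-3:, :]          # T_(n-2), T_(n-1), T_(n) per column
    R_n = top3[2] / top3[1]                    # R_(n),j
    R_nm1 = top3[1] / top3[0]                  # R_(n-1),j
    return R_n.mean() / R_nm1.mean() >= tau_tilde

def retained_indices(M):
    """Infer the j-th retained column as the outlier row of column j."""
    return np.argmax(M, axis=0)
```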

Figure 10: Modified Distribution-Agnostic Seeded Deletion Detection Algorithm

Comparing the different curves in Figure 9, we can conclude that as the crossover probability \(\epsilon\) increases, the decay of the error probability becomes slower, indicating a worse detection performance. In turn, to achieve the same deletion detection error rate, we need a higher number of seeds in scenarios with a higher level of obfuscation.

4.3 De-Anonymization Scheme

In this subsection, we evaluate the performance of the de-anonymization scheme (Algorithm 6) proposed in Section 3.3. While doing so, we decrease the column size from \(n=100\) to \(n=25\) due to the following considerations:

  1. As discussed in Section 3.3, Algorithm 6 has a runtime of \(\mathcal{O}(m_n^2 n)\).

  2. As in the proof of Theorem 1 (Eq. (14)), the matching error probability decreases exponentially in the column size \(n\). Hence, to observe a non-trivial (non-zero) matching error, we must consider a row size \(m_n\) exponential in \(n\) (Definition 7). This suggests that simulating Algorithm 6 in the non-trivial regime becomes computationally prohibitive, even for \(n=100\).

In addition to using the modified deletion detection algorithm (Algorithm 10), we make a slight modification to Algorithm 6, described below, to accommodate the small column size. After performing replica and deletion detection, and estimating \(\hat{p}_{X,Y^S|S}\), instead of fixing an \(\epsilon>0\) (that depends on \(n\)) and checking the \(\epsilon\)-joint-typicality of the row pairs from \(\mathbf{X}\) and \(\tilde{\mathbf{Y}}\), in Algorithm 11 we do the following:

  1. For each \((i,j)\in[m_n]^2\), compute \[\begin{align} \Delta_{i,j}&\triangleq \left|\hat{H}(X,Y^S|S)-\hat{H}_{i,j}(X,Y^S|S)\right|\label{eq:Deltaij} \end{align}\tag{18}\] where \[\begin{align} \hat{H}_{i,j}(X,Y^S|S)&\triangleq -\frac{1}{n} \log\hat{p}_{X^n,Y^{K_n}|S^n}(X^{n}_i,Y^{K_n}_j|\hat{S}^n) \end{align}\]

  2. For each \(j\in[m_n]\), match the \(j\)th row \(Y^{K_n}_j\) of \(\tilde{\mathbf{Y}}\) with the \(l\)th row \(X^n_l\) of \(\mathbf{X}\), if \[\begin{align} l &= \mathop{\mathrm{\arg\!\min}}\limits_{i\in[m_n]} \Delta_{i,j}\label{eq:modifieddeltas} \end{align}\tag{19}\] assigning \(\hat{\sigma}_n(l)=j\).

  3. If there exist distinct \(j_1\), \(j_2\) such that \(l\) satisfies Eq. (19), declare an error and assign \(\hat{\sigma}_n(l)=0\).

We note that testing \(\Delta_{i,j}\) against a threshold \(\epsilon\) would correspond to the joint-typicality check as done in Algorithm 6.
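A sketch of this minimum-deviation rule is given below; the precomputation of the normalized log-likelihood matrix is assumed, and the error marker \(-1\) stands in for the paper's \(\hat{\sigma}_n(l)=0\).

```python
import numpy as np

def match_min_deviation(neglogp, H_hat):
    """Minimum entropy-deviation matching (sketch of Algorithm 11).

    neglogp[i, j] = -(1/n) log2 p_hat(X_i^n, Y_j^{K_n} | S_hat^n) for all
    row pairs; H_hat is the empirical conditional joint entropy. Row j of
    Y-tilde is matched to the row of X minimizing Delta_{i,j}
    (Eqs. (18)-(19)); a row of X claimed by several distinct j's is
    declared an error.
    """
    Delta = np.abs(H_hat - neglogp)            # Eq. (18)
    winners = Delta.argmin(axis=0)             # argmin_i Delta_{i,j} for each j
    sigma_hat = np.full(neglogp.shape[0], -1, dtype=int)
    for j, l in enumerate(winners):
        sigma_hat[l] = j if (winners == l).sum() == 1 else -1
    return sigma_hat
```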

The experimental evaluation of the performance of Algorithm 11 is given in Figure 12. As expected, we observe that a higher row size \(m_n\) leads to an increased matching error.

Figure 11: Modified Distribution-Agnostic De-Anonymization Scheme

Figure 12: Probability of de-anonymization error of the modified de-anonymization scheme (Algorithm 11) vs. the row size \(m_n\) with \(n=25\), \(\Lambda_n=25\) and \(10^4\) trials. Different curves correspond to different crossover probabilities \(\epsilon\).

Similar to Figures 8 and 9, Figure 12 demonstrates the impact of the obfuscation on the de-anonymization performance. As the databases become more obfuscated, the de-anonymization performance degrades, potentially by an order of magnitude. Hence, we can conclude that the amount of obfuscation plays a crucial role in preserving privacy in databases.

4.4 Histogram-Based Repetition Detection

In this subsection, we evaluate the performance of the histogram-based repetition detection algorithm (Algorithm 7) of Section 3.5 in the no-obfuscation setting.

For a uniform database distribution \(X\sim\text{Unif}(\mathcal{X})\), we can obtain the following closed-form asymptotic expression for the repetition detection error probability of Algorithm 7:

Proposition 2. [15] Let \(\xi_n\) denote the probability of the column histograms of \(\mathbf{X}\) not being unique. If \(X\sim\text{Unif}(\mathcal{X})\), then \[\begin{align} \xi_n=n^2 m_n^{\frac{1-|\mathcal{X}|}{2}} \left(4\pi \right)^{\frac{1-|\mathcal{X}|}{2}} |\mathcal{X}|^{\frac{|\mathcal{X}|}{2}} (1+o_{m_n}(1))(1-o_n(1)) \end{align}\]

Proposition 2 states that for fixed \(n\) and \(|\mathcal{X}|\), \(\xi_n\propto m_n^{\frac{1-|\mathcal{X}|}{2}}\), i.e., \(\log \xi_n\) is linear in \(\log m_n\) for large \(n\) and \(m_n\). More formally, \[\begin{align} \log \xi_n &= \frac{1-|\mathcal{X}|}{2} \log m_n + C_{|\mathcal{X}|} + 2 \log n + \zeta_{m_n,n}\label{eq:HBDlinear} \end{align}\tag{20}\] where \[\begin{align} C_{|\mathcal{X}|} &\triangleq \frac{1-|\mathcal{X}|}{2}\log 4\pi + \frac{|\mathcal{X}|}{2}\log |\mathcal{X}| \end{align}\] and \(\zeta_{m_n,n}\to 0\) as \(m_n,n\to\infty\). We note that Eq. (20) implies that the order relation given in Lemma 3 is tight.

Figure 13: Probability of error of the histogram-based repetition detection (Section 3.5) vs. the row size \(m_n\) with \(10^6\) trials. Both axes are given in the logarithmic domain to validate the linear relation given in Eq. (20).

From Figure 13, one can see that after some \(m_n\), \(\log \xi_n\) decays linearly with \(\log m_n\) (excluding \(|\mathcal{X}|=3\)) with respective approximate slopes of \(-1.40,-1.97,-2.51,-2.97\), demonstrating that Eq. (20), which predicts respective asymptotic slopes of \(-1.50,-2,-2.50,-3\), holds even when \(n=100\). Note that the value of \(m_n\) where \(\xi_n\) starts decaying becomes smaller with increasing alphabet size \(|\mathcal{X}|\).

In Figure 13, as the alphabet \(\mathcal{X}\) gets larger, the decay of the error probability is steeper. This is inherent to the nature of the histograms as in Proposition 2 and not an artifact of the simulation configuration chosen.

5 Conclusion

In this paper, we have investigated the distribution-agnostic database de-anonymization problem under synchronization errors, in the form of column repetitions, and obfuscation, in the form of noise on the retained entries of the database. We showed that through either modifications of the existing algorithms for the distribution-agnostic setting or novel ones, we can correctly detect the repetition pattern and perform de-anonymization, in both noisy/obfuscated and no-obfuscation settings. Interestingly, our results show that in terms of matching capacity, there is no penalty for not knowing the underlying distributions asymptotically. Our experimental results illustrate that our proposed algorithms or their slightly modified versions work in the non-asymptotic regime with small database sizes as well. Overall, our work provides insights into the practicality of distribution-agnostic database de-anonymization algorithms with theoretical guarantees.

5.1 Proof of Lemma 1

The estimator proposed in [21] works as follows: Define the \(k\)th sample factorial moment \(F_k\) as \[\begin{align} F_k&\triangleq \frac{1}{K_n-1} \sum\limits_{j=1}^{K_n-1} \prod\limits_{i=0}^{k-1} \frac{W_j-i}{m_n-i}, \forall k\in[m_n]\label{eq:factorialmoments} \end{align}\tag{21}\] and let \[\begin{align} U &\triangleq \frac{F_3-F_1 F_2}{F_2-F_1^2} \end{align}\] Then the respective estimators \(\hat{p}_0\) and \(\hat{p}_1\) for \(p_0\) and \(p_1\) can be constructed as: \[\begin{align} \hat{p}_0 &= \frac{U+\sqrt{U^2-4 U F_1 + 4 F_2}}{2}\tag{22}\\ \hat{p}_1 &= \frac{U-\sqrt{U^2-4 U F_1 + 4 F_2}}{2}\tag{23} \end{align}\] From [21], we get \(\hat{p}_i\overset{p}{\to}p_i\), \(i=0,1\), and in turn the threshold \(\tau=\frac{\hat{p}_0+\hat{p}_1}{2}=\frac{U}{2}\) converges in probability to \(\frac{p_0+p_1}{2}\). Thus, for large \(n\), \(\tau\) is bounded away from both \(p_0\) and \(p_1\). We complete the proof following the same steps taken in the proof of [15], which we provide below for the sake of completeness.

Let \(A_j\) denote the event that \(\smash{C^{(2)}_{j}}\) and \(\smash{C^{(2)}_{j+1}}\) are noisy replicas and \(B_j\) denote the event that the algorithm infers \(\smash{C^{(2)}_{j}}\) and \(\smash{C^{(2)}_{j+1}}\) as replicas. Via the union bound, we can upper bound the total probability of replica detection error \(\kappa^{(1)}_n\) as \[\begin{align} \kappa^{(1)}_n&= \Pr(\bigcup\limits_{j=1}^{{K_n}-1} E_j)\tag{24}\\ &\le \sum\limits_{j=1}^{{K_n}-1} \Pr(A_j ^c) \Pr(B_j|A_j ^c)+ \Pr(A_j) \Pr(B_j^c|A_j)\tag{25} \end{align}\]

Observe that conditioned on \(A_j^c\), \(W_j\sim\text{Binom}\smash{(m_n,p_0)}\) and conditioned on \(A_j\), \(W_j\sim\text{Binom}(m_n,p_1)\). Then, from the Chernoff bound [26], we get \[\begin{align} \Pr(B_j|A_j ^c)&\le 2^{-m_n D\left(\tau\|\smash{p_0}\right)}\tag{26}\\ \Pr(B_j^c|A_j)&\le 2^{-m_n D\left(1-\tau \|1-p_1\right)}\tag{27} \end{align}\]

Thus, through the union bound, we obtain \[\begin{align} \kappa^{(1)}_n&\le ({K_n}-1)\Big[ 2^{-m_n D\left(\tau\|\smash{p_0}\right)}+ 2^{-m_n D\left(1-\tau\|1-p_1\right)}\Big]\label{eq:replicadetectionlast} \end{align}\tag{28}\]

Since the RHS of Eq. (28) has \(2{K_n}-2=\mathcal{O}(n)\) terms decaying exponentially in \(m_n\), for any \(m_n=\omega(\log n)\) we have \[\begin{align} \kappa^{(1)}_n \to 0 \:\text{ as } n\to\infty. \end{align}\]

Observing that \(n\sim \log m_n\) concludes the proof. ◻

5.2 Proof of Lemma 2

For now, suppose that \(\Phi\) is a useful remapping. Using Chebyshev’s inequality [27] it is straightforward to prove that for any \(\epsilon_n>0\) \[\begin{align} \gamma&\triangleq \Pr(|\mu(\Phi)-\Lambda_n q_0(\Phi)|>\Lambda_n {\epsilon}_n) \\ &= \mathcal{O}\left(\frac{1}{\tilde{K}_n n \Lambda_n {\epsilon}_n}\right)\label{eq:alpha} \end{align}\tag{29}\] First, let \[\begin{align} I_R&=\{r_1,\dots,r_{\tilde{K}_n}\} \label{eq:rjdefn} \end{align}\tag{30}\] and note \({L_{r_j,j}(\Phi)\sim\text{Binom}(\Lambda_n,q_1(\Phi))}\). Thus, from the Chernoff bound [26] we get \[\begin{align} \beta_{r_j,j}&\triangleq\Pr(|L_{r_j,j}(\Phi)-\Lambda_n q_1(\Phi)|\ge {\epsilon}_n \Lambda_n)\\ &\le 2^{-\Lambda_n D(q_1(\Phi)-{\epsilon}_n\|q_1(\Phi))} + 2^{-\Lambda_n D(1-q_1(\Phi)-{\epsilon}_n\|1-q_1(\Phi))}.\label{eq:beta} \end{align}\tag{31}\]

Now, for notational brevity, let \[\begin{align} f(\epsilon) &\triangleq D(q-\epsilon\|q) \end{align}\] Then, one can verify the following \[\begin{align} f^\prime(\epsilon) &= \log\frac{q}{1-q}-\log\frac{q-\epsilon}{1-q+\epsilon}\\ f^{\prime\prime}(\epsilon)&=\log e \left[\frac{1}{q-\epsilon} + \frac{1}{1-q+\epsilon}\right] \end{align}\] Similarly, letting \[\begin{align} g(\epsilon) &\triangleq D(1-q-\epsilon\|1-q) \end{align}\] we get \[\begin{align} g^\prime(\epsilon) &= \log\frac{1-q}{q}-\log\frac{1-q-\epsilon}{q+\epsilon}\\ g^{\prime\prime}(\epsilon)&=\log e \left[ \frac{1}{1-q-\epsilon} + \frac{1}{q+\epsilon}\right] \end{align}\] Observing that \[\begin{align} f(0)&=f^\prime(0)=0\\ g(0)&=g^\prime(0)=0 \end{align}\] and performing second-order MacLaurin series expansions on \(f\) and \(g\), we get for any \(\epsilon<1\) \[\begin{align} f(\epsilon) &= c(q) \epsilon^2 + \mathcal{O}(\epsilon^3)\\ g(\epsilon) &= c(q) \epsilon^2 + \mathcal{O}(\epsilon^3) \end{align}\] where \[\begin{align} c(q)&\triangleq\frac{\log e}{2} \left[\frac{1}{q} + \frac{1}{1-q}\right] \end{align}\]

Now, let \(\Lambda_n = \Gamma_n \log n\) and \(\epsilon_n =\Gamma_n^{-\frac{1}{3}}\) and pick the threshold as \(\hat{\tau}_n = 2\Lambda_n \epsilon_n\). Observe that since \(\Gamma_n=\omega_n(1)\), we get \[\begin{align} \hat{\tau}_n &= 2 \Lambda_n \epsilon_n = o_n(\Lambda_n)\\ \Lambda_n \epsilon_n^2 &= \Gamma_n^{\frac{1}{3}} \log n = \omega_n(\log n) \end{align}\] Then, we have \[\begin{align} \beta_{r_j,j} &\le 2^{1-\Lambda_n (c(q_1(\Phi)) \epsilon_{n}^2+\mathcal{O}(\epsilon_n^3))}\\ &= 2^{1- c(q_1(\Phi)) \Gamma_n^{\frac{1}{3}} \log n+\mathcal{O}(\Lambda_n\epsilon_n^3)} \end{align}\] Note that with probability at least \(1-\gamma-\beta_{r_j,j}\) we have \[\begin{align} |\mu(\Phi)-\Lambda_n q_0(\Phi)|&\le \Lambda_n\epsilon_n\\ |L_{r_j,j}(\Phi)-\Lambda_n q_1(\Phi)|&\le \Lambda_n\epsilon_n \end{align}\] From the triangle inequality, we have \[\begin{align} M_{r_j,j}(\Phi) &=|L_{r_j,j}(\Phi)-\mu(\Phi)|\\ &\ge \Lambda_n (|q_1(\Phi)-q_0(\Phi)|-2\epsilon_n)\\ &\ge \hat{\tau}_n \end{align}\] for large \(n\). Therefore, from the union bound we have \[\begin{align} \Pr(\exists j\in[\tilde{K}_n]:M_{r_j,j}(\Phi)\le \hat{\tau}_n) &\le \gamma + \sum\limits_{j=1}^{\tilde{K}_n} \beta_{r_j,j}\\ &\le \gamma + 2^{\log \tilde{K}_n + 1- c(q_1(\Phi)) \Gamma_n^{\frac{1}{3}} \log n +\mathcal{O}(\Lambda_n\epsilon_n^3)} \end{align}\] Since \(\tilde{K}_n\le n\) and \(\Lambda_n=\omega_n(\log n)\), we have \[\begin{align} \lim\limits_{n\to\infty} \log \tilde{K}_n -c(q_1(\Phi)) \Gamma_n^{\frac{1}{3}} \log n &= -\infty \end{align}\] Thus we have \[\begin{align} \lim\limits_{n\to\infty} \Pr(\exists j\in[\tilde{K}_n]:M_{r_j,j}\le \hat{\tau}_n) &= 0 \end{align}\]

Next, we look at \(i\neq r_j\). Repeating the same steps above, we get \[\begin{align} \beta_{i,j}&\triangleq\Pr(|L_{i,j}(\Phi)-\Lambda_n q_0(\Phi)|\ge \epsilon_n \Lambda_n)\\ &\le 2^{-\Lambda_n D(q_0(\Phi)-\epsilon_n\|q_0(\Phi))} + 2^{-\Lambda_n D(1-q_0(\Phi)-\epsilon_n\|1-q_0(\Phi))}\\ &= 2^{1- c(q_0(\Phi)) \Gamma_n^{\frac{1}{3}} \log n+\mathcal{O}(\Lambda_n\epsilon_n^3)} \end{align}\] Again, from the triangle inequality, we get \[\begin{align} M_{i,j}(\Phi) &= |L_{i,j}(\Phi)-\mu(\Phi)|\\ &\le 2\Lambda_n\epsilon_n\\&= \hat{\tau}_n \end{align}\] From the union bound, we obtain \[\begin{align} \Pr(\exists j\in[\tilde{K}_n]\:\exists i\in[n]\setminus\{r_j\}: M_{i,j}(\Phi)\ge \hat{\tau}_n) &\le \gamma + \sum\limits_{j=1}^{\tilde{K}_n}\sum\limits_{i\neq r_j} \beta_{i,j}\\ &\le \gamma + n^2 2^{1- c(q_0(\Phi)) \Gamma_n^{\frac{1}{3}} \log n+\mathcal{O}(\Lambda_n\epsilon_n^3)} \end{align}\] Since \(\Lambda_n=\omega(\log n)\), as \(n\to\infty\) we have \[\begin{align} \Pr(\exists j&\in[\tilde{K}_n]\:\exists i\in[n]\setminus\{r_j\}:M_{i,j}(\Phi)\ge \hat{\tau}_n)\longrightarrow0. \end{align}\] Thus, for any useful remapping \(\Phi\), the misdetection probability decays to zero as \(n\to \infty\).

For any remapping \(\Phi\) that is not useful, following the same steps, one can prove that \[\begin{align} \Pr(\text{remapping }\Phi \text{ is declared useful} &\text{ although it is not})\notag\\ &\le \gamma + \sum\limits_{i=1}^{n}\sum\limits_{j=1}^{\tilde{K}_n} \Pr(M_{i,j}(\Phi)\ge \epsilon_n \Lambda_n)\\ &\le \gamma + n^2\, 2^{1- c(q_0(\Phi)) \Gamma_n^{\frac{1}{3}} \log n+\mathcal{O}(\Lambda_n\epsilon_n^3)}\\ &= o_n(1) \end{align}\] Since \(|\mathcal{S}(\mathcal{X})|=|\mathcal{X}|!=\mathcal{O}_n(1)\), we have \[\begin{align} \sum\limits_{\Phi:\text{not useful}} \Pr(\Phi \text{ is declared useful})=o_n(1) \end{align}\] concluding the proof. ◻
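Since the alphabet \(\mathcal{X}\) is finite, the search over remappings is a constant-size loop. The following sketch shows the overall shape of the usefulness test; the alphabet, the statistics, and the rule tying each \(\Phi\) to a pair \((q_0,q_1)\) are toy stand-ins for the quantities defined above:

```python
from itertools import permutations
import numpy as np

rng = np.random.default_rng(1)

alphabet = (0, 1, 2)              # toy alphabet X; |S(X)| = |X|! = 6 maps
Lam, eps = 50_000, 0.05           # stand-ins for Lambda_n, epsilon_n
tau = 2 * Lam * eps               # threshold hat{tau}_n

for Phi in permutations(alphabet):
    q0 = 0.3
    q1 = 0.7 if Phi[0] != 0 else q0   # toy rule: Phi useful iff q1 != q0
    mu = Lam * q0                     # proxy for mu(Phi)
    L = rng.binomial(Lam, q1)         # proxy for a statistic L_{r_j,j}(Phi)
    declared = abs(L - mu) >= tau     # the test M(Phi) >= hat{tau}_n
    print(Phi, "declared useful" if declared else "declared not useful")
```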

5.3 Proof of Theorem 2↩︎

Let \(\epsilon>0\) and denote by \(\xi_n\) the probability of error of the histogram-based repetition detection algorithm. Denote the \(\epsilon\)-typical set of sequences (with respect to \(p_X\)) of length \(k=(1-\delta-\epsilon)n\) by \(\smash{A_{\epsilon}^{(k)}(X)}\), and the pairwise collision probability between \(X_1^n\) and \(X_i^n\), given \(\hat{K}_n=k\) and \(\smash{X_1^n\in A_\epsilon^{(k)}(X)}\), by \(P_{col,i}(k)\). Since additional columns in \(\bar{\mathbf{Y}}\) can only decrease the pairwise collision probability between independent rows, we have \[\begin{align} P_{col,i}(\hat{K}_n)&\le P_{col,i}(k),\quad \forall\hat{K}_n\ge k \end{align}\] Since we perform exact row matching and the rows of \(\bar{\mathbf{X}}\) are independent, we have \[\begin{align} P_{col,i}(k) &= \Pr(\bar{X}_i^k=\bar{X}_1^k|\bar{X}_1^k\in A_{\epsilon}^{(k)}(X))\\ &\le 2^{-k(H(X)-\epsilon)} \end{align}\] Thus, we can bound the probability of error \(P_e\) as \[\begin{align} P_e &\le \sum\limits_{i=2}^{m_n} P_{col,i}(k)+\epsilon+\kappa_n+\xi_n\\ &\le 2^{n R}\, 2^{-k(H(X)-\epsilon)}+\epsilon+\kappa_n+\xi_n \end{align}\] where \(\kappa_n=\Pr(\hat{K}_n<k)\). Since \(m_n\) is exponential in \(n\), by Lemma 3, \({\xi_n\to0}\) as \({n\to\infty}\). Furthermore, \(\hat{K}_n\) is a Binom\((n,1-\delta)\) random variable, so by the law of large numbers \({\kappa_n\to0}\) as \({n\to\infty}\). Thus, as \(n\to\infty\), \(P_e\le \epsilon\) whenever \[\begin{align} R<(1-\delta-\epsilon)(H(X)-\epsilon). \end{align}\] Since \(\epsilon>0\) is arbitrary, any database growth rate \(R\) satisfying \[\begin{align} R<(1-\delta)H(X) \end{align}\] is achievable. ◻
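As a numerical illustration of the collision bound above (with a hypothetical alphabet distribution standing in for \(p_X\)), one can check that a sampled row is typical with high probability and that its exact collision probability obeys \(2^{-k(H(X)-\epsilon)}\):

```python
import numpy as np

rng = np.random.default_rng(2)

pX = np.array([0.5, 0.3, 0.2])   # hypothetical p_X for illustration
H = -(pX * np.log2(pX)).sum()    # entropy H(X) in bits
k, eps = 200, 0.1                # k plays the role of (1 - delta - eps) n

x1 = rng.choice(len(pX), size=k, p=pX)   # sampled row; typical w.h.p.
log2_p_col = np.log2(pX[x1]).sum()       # log2 Pr(independent row = x1)

print("-log2(P_col)/k:", -log2_p_col / k)  # concentrates near H(X)
print("H(X) - eps    :", H - eps)          # so P_col <= 2^{-k(H(X)-eps)}
```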

References↩︎

[1]
P. Ohm, “Broken promises of privacy: Responding to the surprising failure of anonymization,” UCLA L. Rev., vol. 57, p. 1701, 2009.
[2]
F. M. Naini, J. Unnikrishnan, P. Thiran, and M. Vetterli, “Where you are is who you are: User identification by matching statistics,” IEEE Trans. Inf. Forensics Security, vol. 11, no. 2, pp. 358–372, 2016.
[3]
A. Datta, D. Sharma, and A. Sinha, “Provable de-anonymization of large datasets with sparse dimensions,” in International Conference on Principles of Security and Trust. Springer, 2012, pp. 229–248.
[4]
A. Narayanan and V. Shmatikov, “Robust de-anonymization of large sparse datasets,” in Proc. of IEEE Symposium on Security and Privacy, 2008, pp. 111–125.
[5]
L. Sweeney, “Weaving technology and policy together to maintain confidentiality,” The Journal of Law, Medicine & Ethics, vol. 25, no. 2-3, pp. 98–110, 1997.
[6]
N. Takbiri, A. Houmansadr, D. L. Goeckel, and H. Pishro-Nik, “Matching anonymized and obfuscated time series to users’ profiles,” IEEE Transactions on Information Theory, vol. 65, no. 2, pp. 724–741, 2019.
[7]
D. Cullina, P. Mittal, and N. Kiyavash, “Fundamental limits of database alignment,” in Proc. of IEEE International Symposium on Information Theory (ISIT), 2018, pp. 651–655.
[8]
F. Shirani, S. Garg, and E. Erkip, “A concentration of measure approach to database de-anonymization,” in Proc. of IEEE International Symposium on Information Theory (ISIT), 2019, pp. 2748–2752.
[9]
O. E. Dai, D. Cullina, and N. Kiyavash, “Database alignment with Gaussian features,” in The 22nd International Conference on Artificial Intelligence and Statistics. PMLR, 2019, pp. 3225–3233.
[10]
D. Kunisky and J. Niles-Weed, “Strong recovery of geometric planted matchings,” in Proc. of the 2022 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA). SIAM, 2022, pp. 834–876.
[11]
R. Tamir, “On correlation detection of Gaussian databases via local decision making,” in 2023 IEEE International Symposium on Information Theory (ISIT). IEEE, 2023, pp. 1231–1236.
[12]
S. Bakirtas and E. Erkip, “Database matching under column deletions,” in Proc. of IEEE International Symposium on Information Theory (ISIT), 2021, pp. 2720–2725.
[13]
——, “Matching of Markov databases under random column repetitions,” in 2022 56th Asilomar Conference on Signals, Systems, and Computers, 2022.
[14]
——, “Seeded database matching under noisy column repetitions,” in 2022 IEEE Information Theory Workshop (ITW). IEEE, 2022, pp. 386–391.
[15]
——, “Database matching under noisy synchronization errors,” arXiv preprint arXiv:2301.06796, 2023.
[16]
——, “Database matching under adversarial column deletions,” in 2023 IEEE Information Theory Workshop (ITW). IEEE, 2023, pp. 181–185.
[17]
T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction to Algorithms. MIT Press, 2022.
[18]
T. M. Cover, Elements of Information Theory. John Wiley & Sons, 2006.
[19]
F. Shirani, S. Garg, and E. Erkip, “A concentration of measure approach to correlated graph matching,” IEEE Journal on Selected Areas in Information Theory, vol. 2, no. 1, pp. 338–351, 2021.
[20]
G. Morvai and B. Weiss, “Order estimation of Markov chains,” IEEE Transactions on Information Theory, vol. 51, no. 4, pp. 1496–1497, 2005.
[21]
W. Blischke, “Moment estimators for the parameters of a mixture of two Binomial distributions,” The Annals of Mathematical Statistics, pp. 444–454, 1962.
[22]
D. S. Moore and S. Kirkland, The Basic Practice of Statistics. WH Freeman New York, 2007, vol. 2.
[23]
T. W. Anderson and L. A. Goodman, “Statistical inference about Markov chains,” The Annals of Mathematical Statistics, pp. 89–110, 1957.
[24]
A. W. Van der Vaart, Asymptotic Statistics. Cambridge University Press, 2000, vol. 3.
[25]
N. Balakrishnan and A. Stepanov, “Asymptotic properties of the ratio of order statistics,” Statistics & Probability Letters, vol. 78, no. 3, pp. 301–310, 2008.
[26]
R. B. Ash, Information Theory. Courier Corporation, 2012.
[27]
L. Wasserman, All of Statistics: A Concise Course in Statistical Inference. Springer, 2004, vol. 26.

  1. This work was presented in part at the 2023 IEEE International Workshop on Information Forensics and Security (WIFS), Nuremberg, Germany. This work is supported in part by National Science Foundation grants 2148293, 2003182, and 1815821, and NYU WIRELESS Industrial Affiliates.↩︎