March 11, 2025
Combining the technique of employing coset codes for communicating over a quantum broadcast channel with Sen's recent discovery of tilting, smoothing and augmentation for performing simultaneous decoding over network quantum channels, we derive new inner bounds to the capacity region of a \(3-\)user classical-quantum broadcast channel that subsume all previously known inner bounds.
Designing optimal qubit transmission strategies for ubiquitous quantum network channels such as the broadcast and interference (IC) channels is central to realizing our collective vision of architecting a quantum internet. While the primary utility of quantum channels is to communicate qubits, the study of bit transmission has resulted in novel ideas [1]–[3] and the resolution of important stumbling blocks [4], [5], all of which have eventually served as building blocks in designing insightful [6] qubit communication strategies. This motivates our investigations.
We consider the scenario of bit communication over a general \(3-\)user classical-quantum broadcast channel (\(3-\)CQBC) (Fig. 1). We undertake a Shannon-theoretic study and focus on characterizing inner bounds to its CQ capacity region. Our findings yield a new inner bound to its capacity region that strictly enlarges upon the previously known best. In conjunction with our concurrent work on the quantum interference channel, these two works are aimed at addressing canonical quantum network channels (QNCs).
Our findings are based on combining two ideas for QNCs. The first is the use of coset codes endowed with algebraic closure properties. Most works in quantum Shannon theory adopt the conventional approach of designing coding strategies based on unstructured IID codes. The multiple codes employed for communication over a QNC are picked independently of each other. Furthermore, the codewords of any one code are also picked independently. The high-probability codes that dominate the average performance possess only empirical properties and are bereft of any further structural properties. When communicating over a QNC, the codewords from the different constituent codes jointly modulate the received quantum state(s). Building on our classical work [7], [8] proves that by employing partitioned coset codes (PCC) and designing a new decoding POVM, one can strictly improve upon all known approaches based on unstructured IID codes. In Secs. 2.1, 3, we explain the phenomenon underlying these findings. Our first idea is thus the adoption of PCCs.
An efficient coding strategy for interference and broadcast channels entails receivers (Rxs) decoding into multiple codebooks. Moreover, starting from [9], several works [10] have pointed to the need for simultaneous (or joint) decoding and the sub-optimality of successive decoding in general. While the former is straightforward for classical channels, the skewed orientations/misalignment of joint and conditional typical subspaces renders the analysis of simultaneous decoding for quantum channels considerably more challenging, requiring new approaches. Indeed, following the recognition of this difficulty by Winter [3], different approaches [11], [12] and several attempts [12] culminated in Sen’s new ideas of tilting, smoothing and augmentation (TSA). Through his works, Sen formulates a robust notion of unions and intersections of subspaces through a process of TSA. Thereby, [4], [5] provide a design for a decoding POVM whose decoding subspaces simultaneously perform the \(2^{k}-1\) checks that joint decoding into \(k\) codebooks entails.
Figure 1: Three independent messages to be communicated over a \(3-\)CQBC.
While [4] addresses the challenging one-shot regime and is also important in the broader area of quantum hypothesis testing, it considers only the unstructured IID codebooks. As Ex. 1 illustrates, an efficient coding strategy for a \(3-\)CQBC requires simultaneous decoding into both unstructured IID and PCCs. Our main contribution is the design and analysis of a coding strategy for a general \(3-\)CQBC that adopts ideas of TSA and performs simultaneous decoding into both unstructured IID and PCCs to yield new inner bounds (Thm. 1, 2). Given the involved nature of the presented inner bounds, we refer to [13] for elaboration of most elements in this article and detailed proofs.
The adoption of TSA for codes possessing algebraic structure poses two difficulties. Firstly, in contrast to IID codebooks, the different constituent codebooks in a random coset code based strategy are statistically dependent. In fact, the coset structure forces the codewords of a random coset code to be only pairwise independent. This renders some of the arguments in the analysis of IID codebooks inapplicable. Secondly, the decoding POVMs for a coset code based strategy involve projections onto subspaces that are not associated with any single codebook. We overcome these difficulties by resorting to appropriately defined conditional typical projectors and tilting them as suggested in [4], [5].
A few remarks are in order. While this work is a next step beyond [8], it combines the new tools of TSA with coset codes to yield new inner bounds. The use of coset codes in quantum Shannon theory being less prevalent, we have chosen to keep this article both self-contained and pedagogical, as opposed to submerging it in technical details. Sec. 3 provides a lucid description of the need for coset codes. We also mention that we have carefully structured our two concurrent submissions to complement and supplement each other, thereby utilizing the available space better. In this article, we do not provide a characterization of the decoding POVMs; however, we give a complete characterization of Step \(2\) (Thm. 2) in our development of inner bounds. Indeed, since the decoding strategies for both the broadcast and the interference channel are the same, this is a deliberate choice. In a similar vein, see the Note after Fact 1 and the note at the end of Sec. 4.1. Our main contributions are the new inner bounds characterized in Theorems 1 and 2.
An information theoretic study of CQBCs was initiated by Yard et al. [14] in the context of \(2\mhyphen\)CQBCs, wherein the superposition coding scheme was studied. Furthering this study, Savov and Wilde [15] proved achievability of Marton’s binning [16] for a general \(2\mhyphen\)CQBC. While these works studied the asymptotic regime, Radhakrishnan et al. [17] proved achievability of Marton’s inner bound [16] in the one-shot regime, which also yields clean proofs for the asymptotic regime considered in [14], [15].
For \(n\in \mathbb{N}\), \([n] \mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}\left\{1,\cdots,n \right\}\). For a Hilbert space \(\mathcal{H}\), \(\mathcal{L}(\mathcal{H}),\mathcal{P}(\mathcal{H})\) and \(\mathcal{D}(\mathcal{H})\) denote the collection of linear, positive and density operators acting on \(\mathcal{H}\) respectively. We let an underline denote an appropriate aggregation of objects. For example, \(\underline{\mathcal{V}}\mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}\mathcal{V}_{1}\times \mathcal{V}_{2} \times \mathcal{V}_{3}\) for sets, \(\mathcal{H}_{\underline{Y}} \mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}\otimes_{i=1}^{3}\mathcal{H}_{Y_{i}}\) for Hilbert spaces \(\mathcal{H}_{Y_{i}}: i \in [3]\) etc. We abbreviate probability mass function, conditional and unconditional typical projector as PMF, C-Typ-Proj and U-Typ-Proj respectively.
Consider a (generic) \(3\mhyphen\)CQBC \((\rho_{x} \in \mathcal{D}(\mathcal{H}_{\underline{Y}}): x \in \mathcal{X},\kappa)\) specified through (i) a finite set \(\mathcal{X}\), (ii) Hilbert spaces \(\mathcal{H}_{Y_{j}}: j \in [3]\), (iii) a collection \(( \rho_{x} \in \mathcal{D}(\mathcal{H}_{\underline{Y}} ): x \in \mathcal{X})\) and (iv) a cost function \(\kappa :\mathcal{X} \rightarrow [0,\infty)\). The cost function is assumed to be additive, i.e., the cost of preparing the state \(\otimes_{t=1}^{n}\rho_{x_{t}}\) is \(\overline{\kappa}^{n}(x^{n}) \mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}\frac{1}{n}\sum_{t=1}^{n}\kappa(x_{t})\). Reliable communication on a \(3\mhyphen\)CQBC entails identifying a code.
Definition 1. A \(3\mhyphen\)CQBC code \(c=(n,\underline{\mathcal{M}},e,\underline{\lambda})\) consists of (i) three message index sets \([\mathcal{M}_{j}]: j \in [3]\), (ii) an encoder map \(e: [\mathcal{M}_{1}] \times [\mathcal{M}_{2}] \times [\mathcal{M}_{3}] \rightarrow \mathcal{X}^{n}\) and (iii) POVMs \(\lambda_{j} \mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}\{ \lambda_{j,m_{j}}\in\mathcal{P}(\mathcal{H}_{Y_{j}}^{\otimes n}) : m_{j} \in [\mathcal{M}_{j}] \} : j \in [3]\). The average probability of error of the \(3\mhyphen\)CQBC code \((n,\underline{\mathcal{M}},e,\underline{\lambda})\) is \[\begin{align} \label{Eqn:AvgErrorProb} \overline{\xi}(e,\underline{\lambda}) \mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}1-\frac{1}{\mathcal{M}_{1}\mathcal{M}_{2}\mathcal{M}_{3}}\sum_{\underline{m}\in \underline{\mathcal{M}}}\tr(\lambda_{\underline{m}}\rho_{c,\underline{m}}^{\otimes n}), \nonumber \end{align}\qquad{(1)}\] where \(\lambda_{\underline{m}} \mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}\otimes_{j=1}^{3}\lambda_{j,m_{j}}\), \(\rho_{c,\underline{m}}^{\otimes n} \mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}\otimes_{t=1}^{n}\rho_{x_{t}}\) and \((x_{t}:1 \leq t \leq n) = x^{n}(\underline{m}) \mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}e(\underline{m})\). The average cost per symbol of transmitting message \(\underline{m}\in \underline{\mathcal{M}}\) is \(\tau(e|\underline{m}) \mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}\overline{\kappa}^{n}(e(\underline{m}))\) and the average cost per symbol of the \(3\mhyphen\)CQBC code is \(\tau(e) \mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}\frac{1}{|\underline{\mathcal{M}}|}\sum_{\underline{m}\in \underline{\mathcal{M}}}\tau(e|\underline{m})\).
Definition 2. A rate-cost quadruple \((R_{1},R_{2},R_{3},\tau) \in [0,\infty)^{4}\) is said to be achievable if there exists a sequence of \(3\mhyphen\)CQBC codes \((n,\underline{\mathcal{M}}^{(n)},e^{(n)},\underline{\lambda}^{(n)})\) for which \(\displaystyle\lim_{n \rightarrow \infty}\overline{\xi}(e^{(n)},\underline{\lambda}^{(n)}) = 0\), \[\begin{align} \label{Eqn:3CQICAchievability} \lim_{n \rightarrow \infty} n^{-1}\log \mathcal{M}_{j}^{(n)} = R_{j} : j \in [3], \text{ and } \lim_{n \rightarrow \infty} \tau(e^{(n)}) \leq \tau . \nonumber \end{align}\tag{1}\] The capacity region \(\mathscr{C}\) of a \(3-\)CQBC is the set of all achievable rate-cost vectors and \(\mathscr{C}(\tau) \mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}\{ \underline{R}:(\underline{R},\tau) \in \mathscr{C}\}\).
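To make Definition 1 concrete, the following minimal sketch computes the average error probability \(\overline{\xi}(e,\underline{\lambda})\) for a toy blocklength-one code. All ingredients here (qubit output spaces, a bit-flip output state, one-bit messages, computational-basis decoders) are illustrative assumptions of ours and not from the paper.

```python
import numpy as np
from itertools import product

# Computational-basis kets and a "noisy bit" density operator (illustrative only).
ket = [np.array([[1.0], [0.0]]), np.array([[0.0], [1.0]])]
def noisy_bit(b, p):
    """(1-p)|b><b| + p|1-b><1-b| on a single qubit."""
    return (1 - p) * ket[b] @ ket[b].T + p * ket[1 - b] @ ket[1 - b].T

# Hypothetical 3-CQBC with X = {0,1}^3 and rho_x a product of three noisy qubits.
def rho(x, p=0.1):
    return np.kron(np.kron(noisy_bit(x[0], p), noisy_bit(x[1], p)), noisy_bit(x[2], p))

# Toy blocklength-one code: one-bit messages, encoder e(m) = m, and each Rx j
# measuring its qubit in the computational basis (a two-outcome POVM).
povm = [ket[0] @ ket[0].T, ket[1] @ ket[1].T]

total = 0.0
messages = list(product([0, 1], repeat=3))
for m in messages:
    lam_m = np.kron(np.kron(povm[m[0]], povm[m[1]]), povm[m[2]])  # lambda_m = tensor_j lambda_{j,m_j}
    total += np.trace(lam_m @ rho(m)).real                        # tr(lambda_m rho_{e(m)})
err = 1 - total / len(messages)                                   # average error probability
print(f"average error probability = {err:.3f}")                   # about 1 - (1-p)^3
```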
How and why do coset codes satisfying algebraic closure yield higher rates over a \(3-\)CQBC? Communication on a BC entails fusing codewords chosen for the different Rxs through a single input. From the perspective of any Rx, a specific aggregation of the codewords chosen for the other Rxs acts as interference. See [13]. The Tx can precode for this interference via Marton’s binning. In general, precoding entails a rate loss. In other words, suppose \(W\) is the interference seen by Rx \(1\); then the rate that Rx \(1\) can achieve by decoding \(W\) and peeling it off can be strictly larger than what it can achieve if the Tx precodes for \(W\). This motivates every Rx to decode as large a fraction of the interference as it can and precode only for the minimal residual uncertainty.
In contrast to a BC with \(2\) Rxs, the interference \(W\) on a BC with \(3\) Rxs is a bivariate function of \(V_{2},V_{3}\) - the signals of the other Rxs ([13]). A joint design of the \(V_{2}\mhyphen,V_{3}\mhyphen\)codes endowing them with structure can enable Rx \(1\) to decode \(W\) efficiently even while being unable to decode \(V_{2}\) and \(V_{3}\). This phenomenon is exemplified through Ex. 1.
The novelty of our coding scheme is the use of jointly designed coset codes whose structure is explained via the following definition.
Definition 3. An \((n,k,l,g_{I},g_{o/I}, b^n)\) nested coset code (NCC) over \(\mathcal{F}_{\upsilon}\) consists of (i) generator matrices \(g_{I} \in \mathcal{F}_{\upsilon}^{k\times n}\) and \(g_{o/I} \in \mathcal{F}_{\upsilon}^{l \times n}\), (ii) a bias vector \(b^n \in \mathcal{F}_{\upsilon}^n\), and (iii) an encoder map \(e : \mathcal{F}_{\upsilon}^l \rightarrow \mathcal{F}_{\upsilon}^k\). We let \(u^n(a,m)= a g_{I}\oplus m g_{o/I} \oplus b^n\), for \((a,m) \in \mathcal{F}_{\upsilon}^k \times \mathcal{F}_{\upsilon}^l\), denote the elements of its range space.
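For concreteness, the following sketch enumerates the coset \(\{u^{n}(a,m): a \in \mathcal{F}_{\upsilon}^{k}\}\) of an NCC for a fixed message \(m\). The field size, dimensions, and randomly drawn generator matrices and bias are illustrative choices of ours, not parameters from the paper.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
v, n, k, l = 2, 8, 3, 2                      # field size, blocklength, k, l (illustrative)
g_I  = rng.integers(0, v, size=(k, n))       # generator matrix g_I in F_v^{k x n}
g_oI = rng.integers(0, v, size=(l, n))       # generator matrix g_{o/I} in F_v^{l x n}
b    = rng.integers(0, v, size=n)            # bias vector b^n

def u(a, m):
    """Codeword u^n(a, m) = a g_I + m g_{o/I} + b^n over F_v."""
    return (np.array(a) @ g_I + np.array(m) @ g_oI + b) % v

# The coset indexed by message m is { u(a, m) : a in F_v^k }.
m = (1, 0)
coset_m = [u(a, m) for a in product(range(v), repeat=k)]
print(len(coset_m), "codewords in the coset of message", m)
print(coset_m[0])
```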
We explain the role of coset codes and the need for simultaneous decoding through an example (Ex. 1).
Notation for Ex. 1 : For \(b \in \{0,1\},\eta \in (0,\frac{1}{2})\), \(\sigma_{\eta}(b)\mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}(1-\eta)\ketbra{1-b}+\eta\ketbra{b}\), and for \(k \in \{2,3\}\), \(\gamma_{k}(b) = \mathbb{1}_{\{b=0\}}\ketbra{0}+\mathbb{1}_{\{b=1\}}\ketbra{v_{\theta_{k}}}\), where \(\ket{v_{\theta_{k}}} = \left[\cos\theta_{k}~\sin\theta_{k} \right]^{t}\) and \(0 < \theta_{2}< \theta_{3} < \frac{\pi}{2}\).
Example 1. Let \(\underline{\mathcal{X}}= \mathcal{X}_{1}\times \mathcal{X}_{2} \times \mathcal{X}_{3}\) be the input set with \(\mathcal{X}_{j} = \{0,1\}\) for \(j \in [3]\). For \(\underline{x}= (x_{1},x_{2},x_{3}) \in \underline{\mathcal{X}}\), let \(\rho_{\underline{x}} = \sigma_{\epsilon}(x_{1}\oplus x_{2} \oplus x_{3}) \otimes \beta_{2}(x_{2}) \otimes \beta_{3}(x_{3})\). The symbol \(X_{1}\) is costed via a Hamming cost function i.e., the cost function \(\kappa:\underline{\mathcal{X}}\rightarrow \{0,1\}\) is given by \(\kappa(\underline{x}) = x_{1}\). Investigate \(\mathscr{C}(\tau)\).
Commutative Case: \(\beta_{k}(x_{k}) = \sigma_{\delta}(x_{k})\) for \(k =2,3\) with \(\tau * \epsilon \boldsymbol{\stackrel{(i)}{<}} \delta\) and \(h_{b}(\delta) \boldsymbol{\stackrel{(ii)}{<}} \frac{1+h_{b}(\tau * \epsilon)}{2}\), where \(a*b \mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}a(1-b)+(1-a)b\).
Since \(\rho_{\underline{x}} : \underline{x}\in \underline{\mathcal{X}}\) are commuting, the \(3-\)CQBC for the commuting case can be identified via a \(3\mhyphen\)user classical BC equipped with input set \(\underline{\mathcal{X}}\) and output sets \(\mathcal{Y}_{1}=\mathcal{Y}_{2}=\mathcal{Y}_{3}=\{0,1\}\). Its input \(X=(X_{1},X_{2},X_{3}) \in \underline{\mathcal{X}}\) and outputs \(Y_{j}: j \in [3]\) are related via \(Y_{1}=X_{1}\oplus X_{2} \oplus X_{3}\oplus N_{1}, Y_{j}=X_{j} \oplus N_{j}\) where \(N_{1}, N_{2},N_{3}\) are mutually independent Ber\((\epsilon)\), Ber\((\delta)\), Ber\((\delta)\) RVs respectively. We study the question : What is the maximum achievable rate for Rx \(1\) while Rxs \(2\) and \(3\) are fed at their respective capacities \(C_{2}\mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}C_{3}\mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}1-h_{b}(\delta)\)?
Choose any \(k \in \{2,3\}\), any \(j \in [3]\setminus\{k\}\) and any input PMF \(p_{X}=p_{X_{1}X_{2}X_{3}}\). Observe \(I(X_{j};Y_{k})=0\). Rxs \(2\) and \(3\) can be fed information only through \(X_{2}\) and \(X_{3}\) respectively. Thus, achieving the capacities of Rxs \(2\), \(3\) forces \((X_{2},X_{3})\sim p_{X_{2}X_{3}}\!=Ber(\frac{1}{2})\otimes Ber(\frac{1}{2})\), i.e., independent and uniform. This implies Rx \(1\) suffers interference \(X_{2}\oplus X_{3}\!\!\sim\)Ber\((\frac{1}{2})\). With \(X_{1}\) being Hamming-weight constrained to \(\tau<\frac{1}{2}\), the Tx cannot cancel this interference. Can Rx \(1\) achieve its interference-free cost-constrained capacity \(C_{1}\mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}h_{b}(\tau * \epsilon)-h_{b}(\epsilon)\)?
The relationship \(Y_{1}=X_{1}\oplus \left[ X_{2}\oplus X_{3}\right]\oplus N_{1}\) between input \(X_{1}\) and output \(Y_{1}\) with \(X_{2}\oplus X_{3} \sim\)Ber\((\frac{1}{2})\) known only to the Tx is a typical well studied case of rate loss [7] in a binary point-to-point channel with Tx side information [10]. These imply that Rx \(1\) can achieve a rate of \(C_{1}\) only if Rx \(1\) recovers \(X_{2}\oplus X_{3}\) perfectly. Can it recover \(X_{2}\oplus X_{3}\) perfectly?
With unstructured IID codes, Rx \(1\) can recover \(X_{2}\oplus X_{3}\) perfectly only by recovering \(X_{2}\) and \(X_{3}\) perfectly [7]. However, from inequality \(\boldsymbol{(ii)}\), we have \(1-h_{b}(\epsilon) < C_{1}+C_{2}+C_{3}\) implying the \(X_{1}-Y_{1}\) or the \(X-Y_{1}\) channel cannot support Rx \(1\) decoding \(X_{1},X_{2},X_{3}\). Let’s consider linear/coset codes.
A simple linear coding scheme can achieve the rate triple \((C_{1},C_{2},C_{3})\). In contrast to building independent codebooks for Rxs \(2\) and \(3\), choose cosets \(\lambda_{2},\lambda_{3}\) of a common linear code \(\lambda\) that achieves the capacity \(1-h_{b}(\delta)\) of both the \(X_{2}-Y_{2}\) and \(X_{3}-Y_{3}\) binary symmetric channels. Since the sum of two cosets is another coset of the same linear code, the interference patterns seen by Rx \(1\) are constrained to another coset \(\lambda_{2}\oplus \lambda_{3}\) of the same linear code of rate \(1-h_{b}(\delta)\). Instead of attempting to decode the pair of Rx \(2,3\)’s codewords, suppose Rx \(1\) decodes only the sum of the Rx \(2,3\) codewords within \(\lambda_{2}\oplus \lambda_{3}\). Specifically, if Rx \({1}\) attempts to jointly decode its codeword and the sum of Rx \(2\) and \(3\)’s codewords, the latter being present in \(\lambda_{2}\oplus \lambda_{3}\), then it would be able to achieve a rate of \(C_{1}\) for itself if \(\mathscr{C}_{1}=1-h_{b}(\epsilon) > C_{1}+ \max\{C_{2},C_{3}\} = C_{1}+1-h_{b}(\delta)\). The latter inequality is guaranteed since \(\tau * \epsilon \boldsymbol{\stackrel{(i)}{<}} \delta\) holds. We conclude this case with the following fact proven in [7].
Fact 1. For Ex. 1 with inequalities \(\boldsymbol{(i),(ii)}\), \((C_{1},C_{2},C_{3})\) is not achievable via any known unstructured IID coding scheme. \((C_{1},C_{2},C_{3})\) is achievable via the above linear code strategy.
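As a sanity check, the following sketch evaluates the two capacity comparisons underlying this case for one illustrative parameter choice (\(\tau=0.1\), \(\epsilon=0.1\), \(\delta=0.25\); these values are ours, picked only so that \(\boldsymbol{(i)}\) and \(\boldsymbol{(ii)}\) hold, and do not appear in the paper).

```python
from math import log2

def hb(p):                       # binary entropy
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def conv(a, b):                  # a * b = a(1-b) + (1-a)b
    return a * (1 - b) + (1 - a) * b

tau, eps, delta = 0.1, 0.1, 0.25           # illustrative parameters (not from the paper)
C1 = hb(conv(tau, eps)) - hb(eps)          # interference-free cost-constrained capacity of Rx 1
C2 = C3 = 1 - hb(delta)                    # capacities of Rxs 2 and 3

print("(i)  tau*eps < delta             :", conv(tau, eps) < delta)
print("(ii) h(delta) < (1+h(tau*eps))/2 :", hb(delta) < (1 + hb(conv(tau, eps))) / 2)
# Unstructured IID codes need Rx 1 to decode all three codewords:
print("1 - h(eps) >= C1+C2+C3 ?", 1 - hb(eps) >= C1 + C2 + C3)          # False: IID codes fall short
# The linear-code strategy only needs Rx 1 to decode its codeword plus the sum:
print("1 - h(eps) >  C1+max(C2,C3) ?", 1 - hb(eps) > C1 + max(C2, C3))  # True: coset codes suffice
```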
Non-Commuting Case: \(\beta_{k}(x_{k}) = \gamma_{k}(x_{k})\) for \(k=2,3\) with \(h_{b}(\tau*\epsilon)+\tilde{h}_{b}(\cos\theta_{2})+\tilde{h}_{b}(\cos\theta_{3}) \boldsymbol{\stackrel{(a)}{>}} 1 \boldsymbol{\stackrel{(b)}{>}} h_{b}(\tau*\epsilon)+\tilde{h}_{b}(\cos\theta_{3})\) where \(\tilde{h}_{b}(x)\mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}h_{b}(\frac{1+x}{2})\) for \(x \in [0,\frac{1}{2}]\).
We follow the same line of argument as above. For \(k=2,3\), the capacity of user \(k\) is \(C_{k}\mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}\tilde{h}_{b}(\cos\theta_{k})\) and is achieved only by choosing \(p_{X_{2}X_{3}}=p_{X_{2}}p_{X_{3}}\) with each marginal Ber\((\frac{1}{2})\) [18]. This forces Rx \(1\) to suffer Ber\((\frac{1}{2})\) interference \(X_{2}\oplus X_{3}\). Identical arguments as above revolving around rate loss imply that Rx \(1\) can achieve a rate of \(C_{1}\mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}h_{b}(\tau * \epsilon)-h_{b}(\epsilon)\) only if Rx \(1\) recovers \(X_{2}\oplus X_{3}\) perfectly. The only way unstructured IID codes can enable Rx \(1\) to recover \(X_{2}\oplus X_{3}\) perfectly is by recovering the pair \(X_{2},X_{3}\) perfectly. This can be done only if the unconstrained capacity \(\mathcal{C}_{1}=1-h_{b}(\epsilon) \geq C_{1}+C_{2}+C_{3}\). However, from \(\boldsymbol{(a)}\) we have \(\mathcal{C}_{1}< C_{1}+C_{2}+C_{3}\). Unstructured IID codes therefore cannot achieve the rate triple \((C_{1},C_{2},C_{3})\).
We design capacity-achieving linear codes \(\lambda_{2},\lambda_{3}\) for Rxs \(2,3\) respectively in such a way that \(\lambda_{2}\) is a sub-coset of \(\lambda_{3}\). Then, the interference patterns are contained within \(\lambda_{2}\oplus \lambda_{3}\), which is now a coset of \(\lambda_{3}\). Rx \(1\) can therefore decode into this collection, which is of rate at most \(C_{3}=\tilde{h}_{b}(\cos\theta_{3})\). Inequality \(\boldsymbol{(b)}\) implies that the unconstrained capacity \(\mathcal{C}_{1}=1-h_{b}(\epsilon) \geq C_{1}+\max\{C_{2},C_{3}\} = C_{1}+C_{3}\), implying coset codes can achieve the rate triple \((C_{1},C_{2},C_{3})\). A proof of this follows from [13].
Fact 2. For Ex. 1 with inequalities \(\boldsymbol{(a),(b)}\), \((C_{1},C_{2},C_{3})\) is not achievable via any known unstructured IID coding scheme. \((C_{1},C_{2},C_{3})\) is achievable via the above linear code strategy.
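The algebraic fact driving both cases - sums of codewords from cosets of nested linear codes stay inside a single coset of the larger code - can be checked directly. A small GF(2) sketch with illustrative generator matrices and bias vectors (our own toy choices):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
n = 10
G3 = rng.integers(0, 2, size=(4, n))      # generator of the larger linear code (for lambda_3)
G2 = G3[:2, :]                            # its first rows generate the smaller code (for lambda_2)
b2 = rng.integers(0, 2, size=n)           # coset shifts (bias vectors)
b3 = rng.integers(0, 2, size=n)

span = lambda G, b: {tuple((np.array(m) @ G + b) % 2) for m in product(range(2), repeat=G.shape[0])}
lam2, lam3 = span(G2, b2), span(G3, b3)

# Every pairwise sum of codewords lands in a single coset of the larger code:
sums = {tuple((np.array(u) + np.array(v)) % 2) for u in lam2 for v in lam3}
coset_of_larger = span(G3, (b2 + b3) % 2)
print(sums <= coset_of_larger, len(sums), "<=", len(coset_of_larger))
```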
Since, owing to rate loss, (i) decoding interference is in general more efficient than Tx precoding and (ii) the interference on a \(3-\)CQBC is in general a bivariate function of the interfering signals, Ex. 1 illustrates that interference is managed more efficiently by employing jointly designed coset codes and letting Rxs decode the bivariate components directly. A general coding scheme must facilitate each Rx to efficiently decode both univariate and bivariate components of its two interfering signals. Unstructured IID codes and jointly designed coset codes are efficient at decoding the former and the latter components respectively.
Ex. 1 vividly portrays the need to enable Rxs to efficiently decode bivariate components of the interference they suffer. A general coding strategy must therefore incorporate codebooks that enable each Rx to decode bivariate functions of the other Rxs’ signals.
We are led to the following general coding strategy. See Fig. 2. Each Rx \(j\)’s message is split into six parts \(m_{j}=(m_{j}^{W},m_{ji}^{Q},m_{jk}^{Q},m_{ji}^{U},m_{jk}^{U},m_{j}^{V})\), where \(i,j,k \in [3]\) are distinct indices, i.e., \(\{ i,j,k\}=[3]\). \(m_{j}^{W},m_{ji}^{Q},m_{jk}^{Q}\) and \(m_{j}^{V}\) are encoded using unstructured IID codes built on \(\mathcal{W},\mathcal{Q}_{ji},\mathcal{Q}_{jk},\mathcal{V}_{j}\) respectively. \(m_{ji}^{U}\) and \(m_{jk}^{U}\) are encoded via coset codes built on \(\mathcal{U}_{ji}=\mathcal{F}_{\upsilon_{i}},\mathcal{U}_{jk}=\mathcal{F}_{\upsilon_{k}}\) respectively. In addition to decoding all these six parts, Rx \(j\) also decodes \(m_{ij}^{T},m_{kj}^{T}\) and the sum \(u_{ij}^{n}(m_{ij}^{U}) \oplus u_{kj}^{n}(m_{kj}^{U})\) of the codewords corresponding to \(m_{ij}^{U}\) and \(m_{kj}^{U}\) respectively. We let (i) \(\underline{T}_{j*}=(T_{ji},T_{jk})\), (ii) \(\underline{T}_{*j}=(T_{ij},T_{kj})\) and (iii) \(U_{j}^{\oplus} = U_{ij}\oplus U_{kj}\) denote (i) the RVs encoding the semi-private parts \(m_{ji}^{T},m_{jk}^{T}\) of Rx \(j\), (ii) the semi-private parts of Rxs \(i\),\(k\) decoded by Rx \(j\) and (iii) the bivariate component decoded by Rx \(j\), respectively. We present the analysis of the general strategy in two steps.
Figure 2: Depiction of all RVs in the full-blown coding strategy. In Sec. 4.1 (Step I), only the RVs in the dashed box are non-trivial; the rest are trivial.
Notation: For a prime \(\upsilon\in \mathbb{Z}\), \(\mathcal{F}_{\upsilon}\) will denote the finite field of size \(\upsilon\), with \(\oplus\) denoting field addition in \(\mathcal{F}_{\upsilon}\) (i.e., mod\(-\upsilon\)). In any context, when used in conjunction, \(i,j,k\) will denote distinct indices in \([3]\), hence \(\{i,j,k\}=[3]\). \(\underline{\underline{U}}\mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}(\underline{U}_{1*},\underline{U}_{2*},\underline{U}_{3*})=(\underline{U}_{*1},\underline{U}_{*2},\underline{U}_{*3})\) and \(\underline{\underline{T}}\mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}(\underline{T}_{1*},\underline{T}_{2*},\underline{T}_{3*})\).
Figure 3: The \(9\) codebooks on the left are used by Tx. The \(3\) rightmost cosets are obtained by adding corresp. cosets. Rx \(k\) decodes into \(4\) codes in row \(k\).
In this first step, only the ‘bivariate’ parts \(\underline{\underline{U}}\mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}(\underline{U}_{1*},\underline{U}_{2*},\underline{U}_{3*})\) and the private parts \(\underline{V}=(V_{1},V_{2},V_{3})\) (Fig. 2) are non-trivial; the rest, \(\underline{W}=\phi\) and \(\underline{Q}= (\underline{Q}_{k*}: k \in [3])=\phi\), are trivial (Fig. 2). Since \(\underline{V}\) is coded via unstructured IID codes and \(\underline{\underline{U}}\) via coset codes, Thm. 1 already possesses all the non-trivial elements and meets all our objectives.
Theorem 1. Let \(\hat{\alpha}_{S} \subseteq [0,\infty)^{4}\) be the set of all rate-cost quadruples \((R_{1},R_{2},R_{3},\tau) \in [0,\infty)^{4}\) for which there exist (i) finite sets \(\mathcal{V}_{j}: j \in [3]\), (ii) finite fields \(\mathcal{U}_{ij}=\mathcal{F}_{\upsilon_{j}}\) for each \(ij \in \Xi\mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}\{12,13,21,23,31,32\}\), (iii) a PMF \(p_{\underline{\underline{U}}~\!\!\underline{V}X}=p_{\underline{U}_{1*}\underline{U}_{2*}\underline{U}_{3*}V_{1}V_{2}V_{3} X}\) on \(\underline{\underline{\mathcal{U}}}\times \underline{\mathcal{V}}\times\mathcal{X}\), and (iv) nonnegative numbers \(S_{ij},T_{ij}:ij \in \left\{12,13,21,23,31,32 \right\}, K_{j},L_{j}:j\in \left\{ 1,2,3\right\}\), such that \(R_{1}=T_{12}\log \upsilon_{2}+T_{13}\log \upsilon_{3}+L_{1}, R_{2}=T_{21}\log \upsilon_{1}+T_{23}\log \upsilon_{3}+L_{2}, R_{3}=T_{31}\log \upsilon_{1}+T_{32}\log \upsilon_{2}+L_{3}\), \(\mathbb{E}\left\{ \kappa(X) \right\} \leq \tau\), \[\begin{align} \label{Eqn:ManyToManySourceCodingBounds} && S_{A}+M_{B}+K_{C} \stackrel{\boldsymbol{\alpha}}{>}\Theta(A,B,C), \text{ where}\end{align}\qquad{(2)}\] \[\begin{align} \Theta(A,B,C)\mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}\!\! \max_{\substack{(\theta_{j}:j \in B) \\\in \underset{j \in B}{\prod} \mathcal{F}_{\upsilon_{j}}}} \left\{ \!\!\! \begin{array}{c}\sum_{a \in A}\log |\mathcal{U}_{a}| + \sum_{j \in B}\log \upsilon_{j} +\sum_{c \in C} H(V_{c}) - H(U_{A},U_{ji}\oplus \theta_{j}U_{jk}:j \in B,V_{C})\end{array}\!\!\! \right\} \nonumber \end{align}\] for all \(A \subseteq \left\{12,13,21,23,31,32\right\}, B \subseteq \left\{ 1,2,3 \right\}, C \subseteq \left\{ 1,2,3 \right\}\) that satisfy \(A \cap A(B) = \phi\), where \(A({B}) = \cup_{j \in B}\{ ji,jk\}\), \(U_{A} = (U_{jk}:jk \in A)\), \(V_{C}=(V_{c}:c \in C)\), \(S_{A} = \sum _{jk \in A}S_{jk}, M_{B}\mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}\sum_{j \in B} \max\{ S_{ij}+T_{ij},S_{kj}+T_{kj}\}, K_{C} = \sum_{c \in C}K_{c}\), and \[\begin{align} \label{Eqn:CQBCChannelCodingStep1Bounds2} S_{\mathcal{A}_{j}}+T_{\mathcal{A}_{j}} \leq \sum_{a \in \mathcal{A}_{j}}\!\!\log |\mathcal{U}_{a}| - H(U_{\mathcal{A}_{j}}|U_{\mathcal{A}_{j}^{c}},U_{ij}\oplus U_{kj},V_{j},Y_{j}) \\\label{Eqn:CQBCChannelCodingStep1Bounds3} S_{\mathcal{A}_{j}}+T_{\mathcal{A}_{j}}+S_{ij}+T_{ij} \leq \sum_{a \in \mathcal{A}_{j}}\log |\mathcal{U}_{a}| + \log \upsilon_{j} - H(U_{\mathcal{A}_{j}},U_{ij}\oplus U_{kj}|U_{\mathcal{A}_{j}^{c}},V_{j},Y_{j}) \\ \label{Eqn:CQBCChannelCodingStep1Bounds4} S_{\mathcal{A}_{j}}+T_{\mathcal{A}_{j}}+S_{kj}+T_{kj} \leq \sum_{a \in \mathcal{A}_{j}}\log |\mathcal{U}_{a}| + \log \upsilon_{j} - H(U_{\mathcal{A}_{j}},U_{ij}\oplus U_{kj}|U_{\mathcal{A}_{j}^{c}},V_{j},Y_{j}) \\ \label{Eqn:CQBCChannelCodingStep1Bounds5} S_{\mathcal{A}_{j}}+T_{\mathcal{A}_{j}}+K_{j}+L_{j} \leq \sum_{a \in \mathcal{A}_{j}}\log |\mathcal{U}_{a}|+H(V_{j})-H(U_{\mathcal{A}_{j}},V_{j}|U_{\mathcal{A}_{j}^{c}},U_{ij}\oplus U_{kj},Y_{j}) \\ \label{Eqn:CQBCChannelCodingStep1Bounds6} S_{\mathcal{A}_{j}}+T_{\mathcal{A}_{j}}+K_{j}+L_{j}+S_{ij}+T_{ij} \leq \sum_{a \in \mathcal{A}_{j}}\log |\mathcal{U}_{a}| + \log \upsilon_{j} +H(V_{j}) - H(U_{\mathcal{A}_{j}},V_{j},U_{ij}\oplus U_{kj}|U_{\mathcal{A}_{j}^{c}},Y_{j}) \\ \label{Eqn:CQBCChannelCodingStep1Bounds7} S_{\mathcal{A}_{j}}+T_{\mathcal{A}_{j}}+K_{j}+L_{j}+S_{kj}+T_{kj} \leq \sum_{a \in \mathcal{A}_{j}}\!\!\log |\mathcal{U}_{a}| + \log \upsilon_{j} +H(V_{j}) - H(U_{\mathcal{A}_{j}},V_{j},U_{ij}\oplus U_{kj}|U_{\mathcal{A}_{j}^{c}},Y_{j}), \end{align}\] for every \(\mathcal{A}_{j} \subseteq \left\{ ji,jk\right\}\) with distinct indices \(i,j,k\) in \(\left\{ 1,2,3 \right\}\), where \(S_{\mathcal{A}_{j}} \mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}\sum_{a \in \mathcal{A}_{j}}S_{a}, T_{\mathcal{A}_{j}} \mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}\sum_{a \in \mathcal{A}_{j}}T_{a}, U_{\mathcal{A}_{j}} = (U_{a}:a \in \mathcal{A}_{j})\) and all the information quantities are evaluated wrt the state \[\begin{align} \label{Eqn:StageITestChnl} \Psi^{\underline{\underline{U}}\!\!~\underline{U}^{\oplus}\!\!~\underline{V}X\!\!~\underline{Y}}\mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}\!\!\!\!\!\! \sum_{\substack{\underline{\underline{u}}, \underline{\underline{v}},x\\u_{1}^{\oplus},u_{2}^{\oplus},u_{3}^{\oplus} }}\!\!\!\!\!p_{\underline{\underline{U}}\underline{V}X}(\underline{u}_{1*},\underline{u}_{2*},\underline{u}_{3*},\underline{v},x)\mathbb{1}_{\left\{\substack{u_{ji}\oplus u_{jk}\\=u_{j}^{\oplus}:j\in[3]}\right\}}\!\!\ketbra{\underline{u}_{1*}~\! \underline{u}_{2*}~\! \underline{u}_{3*}~\! u_{1}^{\oplus}~\! u_{2}^{\oplus}~\! u_{3}^{\oplus}~\! \underline{v}~\! x} \!\otimes\! \rho_{x}. \nonumber \end{align}\qquad{(3)}\] Let \(\alpha_{S}\) denote the convex closure of \(\hat{\alpha}_{S}\). Then \(\alpha_{S} \subseteq \mathscr{C}(\tau)\) is an achievable rate region.
Remark 1. Inner bound \(\alpha_{S}\) (i) subsumes [8], [13], (ii) is the CQ analogue of [19], and (iii) does not include a time-sharing RV and includes the ‘don’t care’ inequalities [20], [21] (the bounds above with \(\mathcal{A}_{j}=\phi\)). Thus, \(\alpha_{S}\) can be enlarged.
Discussion of the bounds: We explain (i) how each bound arises and (ii) their role in driving the error probability down. All the source coding bounds are captured through the lower bound \(\boldsymbol{\alpha}\) by choosing different \(A,B,C\). To understand these bounds, let us begin with a simple example. Suppose we have \(K\) codebooks \(C_{k}=(G_{k}^{n}(m_{k}):m_{k} \in 2^{nR_{k}}):k \in [K]\), all of them mutually independent and picked IID in the standard fashion with distribution \(G_{k}^{n}(m_{k})\sim q_{G_{k}}^{n}\) for \(k \in [K]\). What must the rates satisfy so that we can find one among the \(2^{n(R_{1}+\cdots +R_{K})}\) \(K-\)tuples of codewords that is jointly typical wrt a joint PMF \(r_{\underline{G}}=r_{G_{1}G_{2}\cdots G_{K}}\)? Denoting \(q_{\underline{G}}= \prod_{k=1}^{K}q_{G_{k}}\), we know from standard information theory that if \(\sum_{l \in S}R_{l} > D(r_{G_{S}}||\otimes_{l\in S}r_{G_{l}})+D(\otimes_{l\in S}r_{G_{l}}||\otimes_{l\in S}q_{G_{l}})\) for all \(S\subseteq [K]\), then via the standard second moment method [22] we can find the desired \(K-\)tuple. The bounds \(\boldsymbol{\alpha}\) are just these. We summarize here the detailed explanation provided in [13]. We have \(9\) codebooks (Fig. 3); however, we should also consider the sums of the \(U_{ji}\) and \(U_{jk}\) codebooks, and therefore \(12\) codebooks, hence \(K=12\). \(r_{G_{1}\cdots G_{6}}=p_{\underline{\underline{U}}}\) and \(q_{U_{ji}}=\frac{1}{|\mathcal{U}_{ji}|}\) is uniform, owing to the pairwise independence and uniform distribution of codewords in the random coset code [23]. This explains the first term within the \(\max\) defining \(\Theta(\cdots)\). \(r_{G_{7}G_{8}G_{9}}=p_{\underline{V}}\) and the \(V-\)codewords have been picked independently \(\sim \prod p_{V_{j}}\). This explains the third term in the definition of \(\Theta(\cdots)\). Owing to the coset structure, it turns out [7] that the \(U_{ij}\)-codeword plus \(\theta_{j}\) times the \(U_{kj}\)-codeword must be found in the sum of the \(U_{ij}, U_{kj}\) cosets (Fig. 3, rightmost codes). These vectors being uniformly distributed \(\sim \frac{1}{\upsilon_{j}}\), we have the second term and the presence of \(U_{ji}\oplus \theta_{j}U_{jk}\) in the entropy term in the definition of \(\Theta(\cdots)\). Due to page limit constraints, we are unable to explain the channel coding bounds. However, these are identical to the ones obtained on a \(3-\)user CQ interference channel (\(3\)-CQIC). Our companion paper [24] explains these bounds vividly and further sketches the elements of tilting. The explanation of the channel coding bounds and in fact a full proof of Thm. 1 is provided in [13].
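The uniformity claim invoked above - codewords of a random coset code are uniformly distributed over \(\mathcal{F}_{\upsilon}^{n}\) when the generator matrices and bias vector are drawn uniformly at random - is easy to check empirically. A small GF(2) sketch with sizes of our own choosing (pairwise independence of distinct codewords can be checked the same way by tabulating joint frequencies):

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(4)
v, n, trials = 2, 3, 20000
m = np.array([1, 0])                       # a fixed message index (outer part), a = 0

counts = Counter()
for _ in range(trials):
    g_oI = rng.integers(0, v, size=(2, n))  # random generator matrix g_{o/I}
    b = rng.integers(0, v, size=n)          # random bias vector
    u = tuple((m @ g_oI + b) % v)           # codeword u^n(a=0, m) = m g_{o/I} + b^n
    counts[u] += 1

# All 2^n = 8 patterns should appear with frequency ~ 1/8 over the code randomness.
for pattern, c in sorted(counts.items()):
    print(pattern, round(c / trials, 3))
```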
Note: We have chosen to discuss the source coding bounds in this article since these bounds differ for a broadcast channel, as opposed to an interference channel wherein the source coding bounds are trivial. The channel coding bounds are identical for both and are hence provided in the IC paper.
Proof. Throughout this detailed sketch of the proof, \(i,j,k\) will represent distinct indices in \([3]\), i.e., \(\{i,j,k\} = [3]\), and we let \(\llbracket 3 \rrbracket\mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}\{12,13,21,23,31,32\}\). Consider (i) \(\mathcal{U}_{ji} = \mathcal{F}_{\upsilon_{i}}\) for \(ji \in \llbracket 3 \rrbracket\), and (ii) a PMF \(p_{\underline{\underline{U}}\underline{V}X}=p_{\underline{U}_{1*}\underline{U}_{2*}\underline{U}_{3*} V_1 V_2 V_3 X}\) on \(\underline{\underline{\mathcal{U}}}\times\underline{\mathcal{V}}\times\mathcal{X}\) as described in the hypothesis. Let \((R_1,R_2,R_3)\) be a rate triple for which there exist nonnegative
numbers \(S_{ji},T_{ji},K_{j},L_{j}\) for \(ji \in \llbracket 3 \rrbracket\), \(j \in [3]\) such that \(\mathbb{E}\{\kappa(X)\}\leq
\tau\), \(R_{j}=T_{ji}\log \upsilon_{i}+T_{jk}\log \upsilon_{k}+L_{j}\) and the bounds in the theorem statement hold. We begin our proof by describing the code structure, encoding and decoding.
Code structure: The message \(m_j\) intended for Rx \(j\) is split as \(\underline{m}_j=(m_{ji},m_{jk},m_{jj})\) into three parts - two semi-private parts
\(m_{ji},m_{jk}\) and one private part \(m_{jj}\), see Fig. 4. The semi-private parts \(m_{ji},m_{jk}\) are
encoded using coset codes built over \(\mathcal{U}_{ji},\mathcal{U}_{jk}\) respectively, and \(m_{jj}\) is encoded using a conventional IID random code \(\mathcal{C}_j\) built over \(\mathcal{V}_{j}\). The RVs \(\underline{U}_{j*}\mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}(U_{ji},U_{jk})\)
represent the information of Tx \(j\) encoded in the semi-private parts, and the RV \(V_{j}\) represents the information of Tx \(j\) encoded in the private part \(m_{jj}\). Note \((\underline{U}_{j*},V_{j}) \in \mathcal{F}_{\upsilon_{i}} \times \mathcal{F}_{\upsilon_{k}} \times \mathcal{V}_j\), where \(\mathcal{V}_{j}\) is
an arbitrary finite set and \(\mathcal{F}_{\upsilon_{i}}\) is the finite field of size \(\upsilon_{i}\). Specifically, let \(\mathcal{C}_{j}
=\{v_{j}^{n}(b_{j},m_{jj}) : 1 \leq b_{j} \leq 2^{nK_{j}}, 1 \leq m_{jj}\leq 2^{nL_{j}} \}\) be built on \(\mathcal{V}_{j}\) with the marginal \(p_{V_{j}}^{n}\). For \(i \in [3] \setminus \{j\}\), the message \(m_{ji}\) is encoded via an NCC denoted \((n,r_{ji},t_{ji},g^{ji}_{I},g^{ji}_{o/I}, b_{ji}^n)\) over \(\mathcal{F}_{\upsilon_i}\).
Figure 4: \(9\) codes employed in the proof of Thm. 1. The \(U_{ji} : ji \in \llbracket 3 \rrbracket\) are coset codes built over finite fields. Codes with the same color are built over the same finite field, and the smaller of the two is a sub-coset of the larger. The black codes are built over auxiliary finite sets \(\mathcal{V}_{j}\) using the conventional IID random code structure. Row \(j\) depicts the codes of Tx, Rx \(j\). Rx \(j\), in addition to decoding into codes depicted in row \(j\) also decodes \(U_{ij}\oplus U_{kj}\).
Let \(c(m_{ji}^{t_{ji}})\) be the coset corresponding to message \(m_{ji}^{t_{ji}}\) defined as \[c(m_{ji}^{t_{ji}})= \{ u_{ji}^n(a^{r_{ji}},m_{ji}^{t_{ji}}) :
a^{r_{ji}} \in \mathcal{F}_{\upsilon_i}^{r_{ji}} \}\] where \(u_{ji}^n(a^{r_{ji}},m_{ji}^{t_{ji}})= a^{r_{ji}} g^{ji}_{I}+ m_{ji}^{t_{ji}} g^{ji}_{o/I} + b_{ji}^n\) denotes a generic codeword in the coset \(c(m_{ji}^{t_{ji}})\). Henceforth, denoting \(\underaccent{\tilde{}}{m} \mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}(\underline{m}_{1},\underline{m}_{2},\underline{m}_{3})\)
, let \[\begin{align} \mathcal{D}(\underaccent{\tilde{}}{m})\!\!=\!\left\{ \left( \!\!\!\begin{array}{c}( u_{ji}^{n} (a^{r_{ji}},m_{ji}^{t_{ji}})\! :\! ji \in \llbracket 3 \rrbracket),\\(v_{j}^{n}(b_{j},m_{jj})\! :\! j \in
[3])\end{array}\!\!\!\right) \!:\! \left( \!\!\!\begin{array}{c}( u_{ji}^{n} (a^{r_{ji}},m_{ji}^{t_{ji}})\! :\! ji \in \llbracket 3 \rrbracket),\\(v_{j}^{n}(b_{j},m_{jj})\! :\! j \in [3])\end{array}\!\!\!\right) \in
T_{\delta}^n(p_{\underline{\underline{U}}\underline{V}}) \right\} \nonumber
\end{align}\] be the list of codeword tuples that are jointly typical according to \(p_{\underline{\underline{U}}\underline{V}}\).
Encoding Rule: Note that every element of \(\mathcal{D}(\underaccent{\tilde{}}{m})\) is a collection of \(9\) codewords chosen from the respective \(9\) codebooks. For every message \(\underaccent{\tilde{}}{m}=(\underline{m}_1,\underline{m}_2,\underline{m}_3)\), the encoder selects one of the possible \(9-\)codeword collections from \(\mathcal{D}(\underaccent{\tilde{}}{m})\). This selected collection of codewords, denoted \(\left(( u_{ji}^{n}(a_{ji}^{*},m_{ji}^{t_{ji}}): ji \in \llbracket 3 \rrbracket),(v_{j}^{n}(b_{j}^{*},m_{jj}) : j \in [3])\right)\), is referred to as the chosen collection of codewords. If \(\mathcal{D}(\underaccent{\tilde{}}{m})\) is empty, a default collection \(\left(( u_{ji}^{n}(a_{ji}^{*},m_{ji}^{t_{ji}}): ji \in \llbracket 3 \rrbracket),(v_{j}^{n}(b_{j}^{*},m_{jj}) : j \in [3])\right)\) from the codebooks is used instead and referred to as the chosen collection. The channel input sequence is then obtained by evaluating the function \(f\) letterwise on the chosen collection \[x^n(\underaccent{\tilde{}}{m}) := f^n\left(\begin{array}{c}( u_{ji}^{n}(a_{ji}^{*},m_{ji}^{t_{ji}}): ji \in \llbracket 3 \rrbracket),(v_{j}^{n}(b_{j}^{*},m_{jj}) : j \in [3])\end{array} \right).\] The
analysis of this encoder error event is identical to that of the encoder error event analyzed in [7]. Specifically, in [7] we employ a standard second moment method [23], [25], [10], [26] to upper bound the error event therein. By an identical series of steps, i.e., employing the same second moment method, it is straightforward to derive an exponentially decaying bound on the probability of \(\mathcal{D}(\underaccent{\tilde{}}{m})\) being empty if \[\begin{align}
\label{founsvid}
&& S_{A}+M_{B}+K_{C} \stackrel{\boldsymbol{\alpha}}{>}\Theta(A,B,C), \text{ where} \nonumber
\\
&& \Theta(A,B,C)\mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}\!\!\!\max_{\!\!\!\substack{(\theta_{j}:j \in B) \in \underset{j \in B}{\prod} \mathcal{F}_{\upsilon_{j}}}} \left\{ \begin{array}{c} \!\!\!\!\sum_{a \in A}\!\log |\mathcal{U}_{a}|\!\! +\!\! \sum_{j \in B}\!\log \upsilon_{j}\!\! +\!\!\sum_{c \in C} H(V_{c})\!\! -\!\! H(U_{A},U_{ji}\!\oplus \!\theta_{j}U_{jk}:j \!\in \! B,V_{C})\!\!\!\end{array} \! \right\}
\nonumber
\end{align}\tag{2}\] We now proceed to the decoding rule.
Decoding POVM: Rx \(j\) decodes a bivariate component \(U_{ij}(m_{ij}) \oplus U_{kj}(m_{kj})\) in addition to the codewords \(U_{ji}(m_{ji}),
U_{jk}(m_{jk}), V_j(m_{jj})\) corresponding to its message. Specifically, Rx \(j\) decodes the quartet \[(U_{ji}(m_{ji}), U_{jk}(m_{jk}), V_j(m_{jj}), U_{ij}(m_{ij}) \oplus
U_{kj}(m_{kj}))\] using a simultaneous decoding POVM. In particular, the construction of our simultaneous decoding POVM adopts the tilting, smoothing, and augmentation approach of Sen [4]. Towards describing this, we begin by characterizing the effective CQ MAC state,
\[\begin{align} \tag{3} \xi^{\underline{U}_{j*}V_{j}U_{j}^{\oplus}Y_{j}} &=& \sum_{\underline{u}_{j*},v_{j},u_{j}^{\oplus}} p_{\underline{U}_{j*}V_{j}}(\underline{u}_{j*},v_{j}) p_{U_{j}^{\oplus}}(u_{j}^{\oplus}) \ketbra{\underline{u}_{j*}~v_{j}~u_{j}^{\oplus}} \otimes \xi_{\underline{u}_{j*}v_{j}u_{j}^{\oplus}} \tag{4}. \end{align}\] Rx \(j\)’s error analysis is essentially the error analysis of the above MAC, where the Rx attempts to decode \(U_{ji}, U_{jk}, V_{j}\), and \(U_{ij} \oplus U_{kj}\). We may, therefore, restrict our attention to the effective five-transmitter MAC specified via 3 . To reduce clutter and simplify notation, we rename \(Z_{j1}\mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}U_{ji}\) ,\(Z_{j2} \mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}U_{jk}\), \(Z_{j3} \mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}V_{j}\),\(Z_{j4} \mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}U_{ij}\oplus U_{kj}\) ,\(\underline{Z}_{j} \mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}Z_{j1}, Z_{j2}, Z_{j3} , Z_{j4}\), and a PMF \(p_{\underline{Z}_{j}}(\underline{z}_{j})= \sum_{u_{ij}} \sum_{u_{kj}} p_{U_{ji} U_{jk} V_j U_{ij} U_{kj}} (z_{j1}, z_{j2},z_{j3}, u_{ij},u_{kj}) \mathbb{1} \{z_{j4}= u_{ij} \oplus u_{kj}\}\). Furthermore, to simplify notation, we drop the subscript \(j\) too. The construction of the decoder POVM and the analysis below must be carried out identically for each of the three decoders \(j=1,2,3\). Through the rest of our proof, we restrict attention to analyzing the error probability of the above five transmitter MAC on which the decoder attempts to decode \(z_1, z_2, z_3\) and \(z_4\). Towards that end, we relabel the state in (4 ) as \[\begin{align} \xi^{\underline{Z}Y} \mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}\sum_{z_{1},z_{2},z_{3},z_{4}} p_{\underline{Z}}(\underline{z})\ketbra{z_{1}~z_{2}~z_{3}~z_{4}}\otimes \xi_{z_{1}z_{2}z_{3}z_{4}}. \nonumber \end{align}\]
Figure 5: Communication over the effective CQMAC : The five RVs or codebooks on the left form only a sub-collection of RVs or codes handled by the encoder.
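The effective CQ MAC state defined above has the generic classical-quantum form \(\sum_{\underline{z}} p(\underline{z})\ketbra{\underline{z}}\otimes \xi_{\underline{z}}\). As a toy numerical illustration (random PMF and random single-qubit conditional states of our own making, standing in for the \(\xi_{\underline{z}}\)), the following sketch assembles such a state and checks that it is a valid density operator:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)

def random_qubit_state():
    """A random single-qubit density operator (illustrative conditional state xi_z)."""
    a = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    rho = a @ a.conj().T
    return rho / np.trace(rho)

# Toy PMF p_Z over four binary components (z1, z2, z3, z4) and conditional states xi_z.
zs = list(product([0, 1], repeat=4))
p = rng.dirichlet(np.ones(len(zs)))
xi = {z: random_qubit_state() for z in zs}

# Classical-quantum state  sum_z p(z) |z><z| (x) xi_z .
dimZ = len(zs)
state = np.zeros((2 * dimZ, 2 * dimZ), dtype=complex)
for idx, z in enumerate(zs):
    k = np.zeros((dimZ, 1)); k[idx] = 1.0
    state += p[idx] * np.kron(k @ k.T, xi[z])

print("trace =", np.trace(state).real)                      # 1.0
print("min eigenvalue =", np.linalg.eigvalsh(state).min())  # >= 0 up to numerical error
```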
Consider four auxiliary finite sets \(\mathcal{A}_{i}\) for \(i \in [4]\) along with corresponding four auxiliary Hilbert spaces \({\mathcal{A}_{i}}\) of
dimension \(|\mathcal{A}_{i}|\) for each \(i \in [4]\). Define the enlarged space \(\mathcal{H}_{Y}^{e}\) as follows \[\mathcal{H}_{Y}^{e} \mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}\mathcal{H}_{Y_{G}}
\oplus \bigoplus_{S \subseteq [4] \setminus [4]} \left(\mathcal{H}_{Y_{G}} \otimes \mathcal{A}_{S} \right), \text{ where } \mathcal{H}_{Y_{G}} =\mathcal{H}_{Y} \otimes \mathbb{C}^{2}, \quad \mathcal{A}_{S} = \bigotimes_{s \in S} \mathcal{A}_{s},\] and the symbol
\(\oplus\) denotes the orthogonal direct sum of spaces. Note that \(\dim(\mathcal{A}_{S}) = \prod_{s \in S}\dim(\mathcal{A}_{s})\). For \(S \subseteq [4]\), let
\(\ket{a_{S}}=\otimes_{s \in S} \ket{a_s}\) be a computational basis vector of \({\mathcal{A}_{S}}\), \(Z_{S}=(Z_{s} : s \in S ),\) and \(0 \leq \eta^n \leq \frac{1}{10}\). In the sequel, we let \(\boldsymbol{\mathcal{H}_{Y}}= \mathcal{H}_{Y}^{\otimes n}\), \(\boldsymbol{\mathcal{H}_{Y_G}}=\mathcal{H}_{Y_G}^{\otimes n}\), \(\boldsymbol{\mathcal{H}_{Y}^{e}}=(\mathcal{H}_{Y}^{\otimes n})^{e}\) and \(\boldsymbol{\mathcal{A}_{S}}= \mathcal{A}_{S}^{\otimes n}\). We define the tilting map \(\mathcal{T}^{S}_{ \underline{a}_{S}^{n}, \eta^{n}} : \boldsymbol{\mathcal{H}_{Y_G}}\rightarrow
\boldsymbol{\mathcal{H}_{Y_G}} \otimes \boldsymbol{\mathcal{A}_{S}}\) as \[\begin{align}
\mathcal{T}^{S}_{ \underline{a}_{S}^{n}, \eta^{n}}(\ket{h}) = \frac{1}{\sqrt{\Omega(S,\eta)} } (\ket{h} +\eta^{n |S| } \ket{h} \otimes \ket{a_{S}^{n}})
\nonumber
\end{align}\] where \(\Omega(S,\eta) = 1+ \eta^{2n |S|}\). Similarly, we define \(\mathcal{T}_{ \underline{a}^{n}, \eta^{n}} : \boldsymbol{\mathcal{H}_{Y_G}} \rightarrow
\boldsymbol{\mathcal{H}_{Y}^{e}}\) as \[\begin{align}
\mathcal{T}_{ \underline{a}^{n}, \eta^{n}}(\ket{h}) = \frac{1}{\sqrt{\Omega(\eta)} } (\ket{h} + \sum_{S \subseteq [4] \setminus [4]} \eta^{n |S| } \ket{h} \otimes \ket{a_{S}^{n}}), \nonumber
\end{align}\] where, \(\Omega(\eta) = 1+16 \eta^{2n}+ 36 \eta^{4n}+ 16 \eta^{6n}\). Let \[\begin{align} \theta^{ \otimes
n}_{\underline{z}^n\underline{a}^n} = \mathcal{T}_{ \underline{a}^{n}, \eta^n} \left\{\xi^{\otimes n}_{\underline{z}^n}\otimes \ketbra{0} \right\} \nonumber \label{Eqn:Thetiltedstate}
\end{align}\tag{5}\] be the state tilted with components in all appended auxiliary spaces. Here, the map \(\mathcal{T}_{ \underline{a}^{n}, \eta^{n}}\) acts on a mixed state by acting on each pure state in
the mixture individually. In Appendix 5, we show that \[\begin{align}
\label{Eqn:ClosenessOfTiltedState} \norm{\theta_{\underline{z}^{n}, \underline{a}^{n}}^{\otimes n} - \xi_{\underline{z}^{n}}^{\otimes n} \otimes \ket{0}\bra{0} }_{1} \leq 90 \eta^{n}.
\end{align}\tag{6}\] Next, we construct the decoding POVM. For partition \(S,S^{c}\) of [4] when \(S \neq \phi\), \[\begin{align}
G^{S}_{\underline{z}^{n}}\mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}\Pi_{z_{S^{c}}^{n}} \Pi_{\underline{z}^{n}}\Pi_{z_{S^{c}}^{n}}, \nonumber
\end{align}\] where \(\Pi_{\underline{z}^{n}}\) is the conditional typical projector (C-Typ-Proj) of \(\otimes_{t=1}^{n}\xi_{z_{1t}z_{2t}z_{3t}z_{4t}}\) and \(\Pi_{z_{S}^{n} }\) is the C-Typ-Proj of \(\otimes_{t=1}^{n}\xi_{z_{st}:s \in S}\). By Gelfand-Naimark’s theorem [18], there exists a projector \(\overline{G}^{S}_{\underline{z}^{n}}\in \mathcal{P}(\boldsymbol{\mathcal{H}_{Y_G}} )\) that yields identical measurement statistics as \(G^{S}_{\underline{z}^{n}}\) on states in \(\boldsymbol{\mathcal{H}_{Y}}\). Let \(\overline{B}_{\underline{z}^{n}}^{S}=I_{\boldsymbol{\mathcal{H}_{Y_G}}}-\overline{G}^{S}_{\underline{z}^{n}}\) be the complement projector. Now, consider \[\begin{align} \beta^{S}_{\underline{z}^{n},\underline{a}^{n}} = \mathcal{T}_{\underline{a}^{n}_{S},\eta^n}^{S} ( \overline{B}_{\underline{z}^{n}}^{S}) \text{ for } S \neq [4], \text{ and } \beta^{[4]}_{\underline{z}^{n},\underline{a}^{n}} =
\overline{B}_{\underline{z}^{n}}^{[4]}, \label{Eqn:Tiltingofprojectors}
\end{align}\tag{7}\] where \(\beta^{S}_{\underline{z}^{n},a_{S}^{n}}\) represents the tilted projector along direction \(\underline{a}_{S}^{n}\). Next, we define \(\beta^{*}_{\underline{z}^{n},\underline{a}^{n}}\) as the projector in \(\boldsymbol{\mathcal{H}_{Y}^{e}}\) whose support is the union of the supports of \(\beta^{S}_{\underline{z}^{n},a_{S}^{n}}\) for all \(S \subseteq [4]\), \(S \neq \phi\). Let \(\Pi_{\boldsymbol{\mathcal{H}_{Y_G}}}\) be the orthogonal projector in \(\boldsymbol{\mathcal{H}_{Y}^{e}}\) onto \(\boldsymbol{\mathcal{H}_{Y_G}}\). We define
the POVM element \(\mu_{\underline{m}}\) as \[\begin{align}
\mu_{\underline{m}} \mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}\left(\sum_{\uline{\widehat{m}}} \gamma^{*}_{(\underline{z}^{n},\underline{a}^{n})(\widehat{\underline{m}})}\right)^{-\frac{1}{2}}\!\!\!\!\!\!
\gamma^{*}_{(\underline{z}^{n},\underline{a}^{n})(\underline{m})} \left(\sum_{\uline{\widehat{m}}} \gamma^{*}_{(\underline{z}^{n},\underline{a}^{n})(\widehat{\underline{m}})}\right)^{-\frac{1}{2}}\!\!\!\!\!\!\!, \text{ where} \label{POVM}
\gamma^{*}_{\underline{z}^{n},\underline{a}^{n}} = (I_{\boldsymbol{\mathcal{H}_{Y}^{e}}}- \beta^{*}_{\underline{z}^{n},\underline{a}^{n}}) \Pi_{\boldsymbol{\mathcal{H}_{Y_G}} }(I_{\boldsymbol{\mathcal{H}_{Y}^{e}}}-
\beta^{*}_{\underline{z}^{n},\underline{a}^{n}}).
\end{align}\tag{8}\] Having stated the decoding POVM, we now return to the error analysis of the CQBC. Our notation is summarized in Table 1
for ease of reference.
Symbol | Description | Symbol | Description |
---|---|---|---|
\(\underaccent{\tilde{}}{m}}\) | \((\underline{m}_{1},\underline{m}_{2},\underline{m}_{3})\) | \(\underline{m}_{j}\) | \((m_{ji},m_{jk},m_{jj})\) |
\(\underline{\underline{\mathcal{U}}}\) | \(\underline{\mathcal{U}}_{1*} \times \underline{\mathcal{U}}_{2*} \times \underline{\mathcal{U}}_{3*}\) | \(\underline{\mathcal{U}}_{j*}\) | \(\mathcal{U}_{ji} \times \mathcal{U}_{jk}\) |
\(\underline{\mathcal{V}}\) | \(\mathcal{V}_1 \times \mathcal{V}_2 \times \mathcal{V}_3\) | \(U_{j}^{\oplus}\) | \(U_{ij}\oplus U_{kj}\) |
\(g^{ji}_{I} \in \mathcal{F}_{\upsilon_{i}}^{r_{ji}\times n}\), \(g^{ji}_{o/I} \in \mathcal{F}_{\upsilon_i}^{t_{ji} \times n}\) | Generator matrices of user \((i,k) : (i,k) \neq j\) | \(c(m_{ji}^{t_{ji}})\) | \(\{ u_{ji}^n(a^{r_{ji}},m_{ji}^{t_{ji}}) : a^{r_{ji}} \in \mathcal{F}_{\upsilon_i}^{r_{ji}} \}\) |
\(u_{ji}^n(a^{r_{ji}},m_{ji}^{t_{ji}})= a^{r_{ji}} g^{ji}_{I}+ m_{ji}^{t_{ji}} g^{ji}_{o/I} + b_{ji}^n\) | A generic codeword in bin/coset indexed by message \(m_{ji}^{t_{ji}}\) | \(\mathcal{D}(\underaccent{\tilde{}}{m})}\) | Codeword list jointly typical wrt \(p_{\underline{\underline{U}}\underline{V}}\) |
\(\left( \!\!\!\begin{array}{c}( u_{ji}^{n} (a^{r_{ji}},m_{ji}^{t_{ji}})\! :\! ji \in \llbracket 3 \rrbracket),\\(v_{j}^{n}(b_{j},m_{jj})\! :\! j \in[3])\end{array}\!\!\!\right)\) | Codewords chosen from \(\mathcal{D}(\underaccent{\tilde{}}{m})}\) for message \(\underaccent{\tilde{}}{m}}\) | \(f : \underline{\underline{\mathcal{U}}}\times \uline{\mathcal{V}} \rightarrow \mathcal{X}\) | Mapping function |
\(p_{\underline{\underline{U}}\uline{V} X} \triangleq p_{\uline{U}_{1*} \uline{U}_{2*} \uline{U}_{3*} V_1 V_2 V_3 X}\) | Chosen test channel | \(x^n(\underaccent{\tilde{}}{m})}\) | \(f^n\left(\begin{array}{c}( u_{ji}^{n}(a_{ji}^{*},m_{ji}^{t_{ji}}): ji \in \llbracket 3 \rrbracket),(v_{j}^{n}(b_{j}^{*},m_{jj}) : j \in[3])\end{array} \right)\) |
\(\xi_{\uline{u}_{j*} v_{j} u_{j}^{\oplus}}\) | \(\sum_{(\Bar{u}_{j*},\Bar{v}_{j}, x)} p_{\Bar{U}_{j*} \Bar{V}_{j} X | \uline{U}_{j*} V_{j} U_{j}^{\oplus}} (\Bar{u}_{j*},\Bar{v}_{j}, x | \uline{u}_{j*}, v_{j},u_{j}^{\oplus}) \operatorname{Tr}_{\mathcal{Y}_i \mathcal{Y}_k }(\rho_{x})\) | \(( \mu_{\underaccent{\tilde{}}{m}} : \underaccent{\tilde{}}{m} \in \mathcal{M})}}\) | Decoding POVM as defined in (8 ) |
\(\mathcal{H}_{Y_{G}}\) | \(\mathcal{H}_{Y} \otimes \mathbb{C}^{2}\) | \(\mathcal{H}_{Y}^{e}\) | \(\mathcal{H}_{Y_{G}} \oplus \bigoplus_{S \subseteq[4] \setminus[4]} \left( \mathcal{H}_{Y_{G}} \otimes \mathcal{A}_{S}\right)\) |
\(\boldsymbol{\mathcal{H}_{Y}}\) | \(\mathcal{H}_{Y}^{\otimes n}\) | \(\boldsymbol{\mathcal{H}_{Y_G}}\) | \(\mathcal{H}_{Y_G}^{\otimes n}\) |
\(\boldsymbol{\mathcal{H}_{Y}^{e}}\) | \((\mathcal{H}_{Y}^{\otimes n})^{e}\) | \(\boldsymbol{\mathcal{A}_{S}}\) | \(\mathcal{A}_{S}^{\otimes n}\) |
\(\mathcal{T}^{S}_{ \underline{a}_{S}^{n}, \eta^{n}}\) | Map that tilts along direction \(\underline{a}_{S}^n\) | \(\mathcal{T}_{ \underline{a}^{n}, \eta^{n}}\) | Map that tilts into all subspaces |
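Before proceeding to the error analysis, the effect of tilting can be visualized numerically in a stripped-down setting: a single auxiliary space (one set \(S\)), a pure state, and the enlarged space realized as a direct sum. The dimensions and \(\eta\) below are made-up toy values; this is only an illustration of the kind of closeness captured by (6), not the construction used in the proof.

```python
import numpy as np

rng = np.random.default_rng(5)
dY, dA, eta = 4, 3, 0.05                   # toy dimensions and tilting parameter

h = rng.normal(size=dY) + 1j * rng.normal(size=dY)
h /= np.linalg.norm(h)                     # a pure state |h> in H_{Y_G}
a = np.zeros(dA); a[0] = 1.0               # tilting direction |a>

# Enlarged space H^e = H_{Y_G} (+) (H_{Y_G} (x) A); embed |h> as (|h>, 0).
untilted = np.concatenate([h, np.zeros(dY * dA)])
tilted = np.concatenate([h, eta * np.kron(h, a)]) / np.sqrt(1 + eta**2)

# Trace distance between the corresponding pure-state density operators.
overlap = abs(np.vdot(untilted, tilted))
trace_dist = np.sqrt(1 - overlap**2)
print(f"trace distance = {trace_dist:.4f}  (of order eta = {eta})")
```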
Error Analysis: We now analyze the error probability for the 3-user CQBC. First, we begin with a simple union bound to separate the error analysis of the three users. From the Appendix 7, we have \[\begin{align} \label{seperation32of32the32error32analysis32of32the32three32users} I \otimes I \otimes I - \mu_{\underline{m}_{1}} \otimes \mu_{\underline{m}_{2}} \otimes \mu_{\underline{m}_3} \leq (I-\mu_{\uline{m}_1}) \otimes I \otimes I + I \otimes (I - \mu_{\uline{m}_2} ) \otimes I + I \otimes I \otimes (I - \mu_{\uline{m}_3} ). \nonumber \end{align}\tag{9}\] From this, we have \[\begin{align} &&P(e^n,\underaccent{\tilde{}}{\lambda}^{n}) = \frac{1}{\mathcal{M}} \sum_{\underaccent{\tilde{}}{m} \in \mathcal{M}}\tr\left((I-\mu_{\underaccent{\tilde{}}m} )\rho_{e(\underaccent{\tilde{}}m)}^{\otimes n} \right) \nonumber \\ & = & \frac{1}{\mathcal{M}} \sum_{\underaccent{\tilde{}}{m} \in \mathcal{M}} \tr\left((I-\mu_{\underline{m}_1} \otimes \mu_{\underline{m}_2} \otimes \mu_{\underline{m}_3} )\rho_{e(\underaccent{\tilde{}}m)}^{\otimes n} \right) \nonumber \\ &\leq& \frac{1}{\mathcal{M}} \sum_{\underaccent{\tilde{}}{m} \in \mathcal{M}} \tr\left(((I -\mu_{\underline{m}_1}) \otimes I \otimes I + I \otimes (I - \mu_{\underline{m}_2} )\otimes I + I \otimes I \otimes (I - \mu_{\underline{m}_3} ))\rho_{e(\underaccent{\tilde{}}m)}^{\otimes n} \right) \nonumber \\ &=& \! \! \! \! \! \! \frac{1}{\mathcal{M}} \sum_{\underaccent{\tilde{}}{m} \in \mathcal{M}} \! \left[ \tr\left( ((I -\mu_{\underline{m}_1} ) \otimes \! I \otimes \! I) \rho_{e(\underaccent{\tilde{}}m)}^{\otimes n} \right) \! + \! \tr\left((I \otimes \! (I-\mu_{\underline{m}_2}) \otimes \! I ) \rho_{e(\underaccent{\tilde{}}m)}^{\otimes n} \right) \! +\! \tr\left(( I \otimes \! I \otimes \! (I - \mu_{\underline{m}_3})) \rho_{e(\underaccent{\tilde{}}m)}^{\otimes n} \right) \right] \nonumber \\ &=& \frac{1}{\mathcal{M}} \sum_{\underaccent{\tilde{}}{m} \in \mathcal{M}} \left[\tr\left( (I - \mu_{\underline{m}_1} ) \tr_{Y_2,Y_3}\{\rho_{e(\underaccent{\tilde{}}m)}^{\otimes n}\}\right) \!\!+\! \tr\left( (I -\mu_{\underline{m}_2}) \tr_{Y_1,Y_3}\{\rho_{e(\underaccent{\tilde{}}m)}^{\otimes n} \}\right) \!\!+\! \tr\left((I - \mu_{\underline{m}_3}) \tr_{Y_1,Y_2}\{\rho_{e(\underaccent{\tilde{}}m)}^{\otimes n}\} \right) \right] . \nonumber \end{align}}}}}}}}}}}}}}}}}\] The above three terms being identical, henceforth, we analyze just one of them, i.e., the first decoder’s error event. To make the notation shorter, we relabel \(\uline{U}_{\bar{1}*} = (\underline{U}_{2*},\underline{U}_{3*}), V_{\bar{1}} =(V_2,V_3), \uline{u}_{\bar{1}*}^n = (\underline{u}_{2*}^n,\underline{u}_{3*}^n), v_{\bar{1}}^n =(v_2^n,v_3^n), \uline{m}_{\bar{1}}=(\underline{m}_2,\underline{m}_3)\). 
Observe \[\begin{align} &&\frac{1}{\mathcal{M}} \sum_{\underaccent{\tilde{}}{m} \in \mathcal{M}} \tr\left( (I - \mu_{\underline{m}_1} ) \tr_{Y_2,Y_3}\{\rho_{e(\underaccent{\tilde{}}m)}^{\otimes n}\}\right) = \frac{1}{\mathcal{M}} \sum_{\underaccent{\tilde{}}{m} \in \mathcal{M}} \sum_{(\uline{u}_{\bar{1}*}^n, v_{\bar{1}}^n, x^n)} \tr\left( (I - \mu_{\underline{m}_1} ) \tr_{Y_2,Y_3}\{\rho_{x^n}^{\otimes n}\}\right) \nonumber \\ && \mathbb{1}_{\Big\{ X^n(\underaccent{\tilde{}}{m}) = x^n, \uline{U}_{\bar{1}*}^n( \uline{m}_{\bar{1}})=\uline{u}_{\bar{1}*}^n,V_{\bar{1}}^n( \uline{m}_{\bar{1}}) =v_{\bar{1}}^n \Big\}}.\nonumber \end{align}}}}}\] We evaluate the conditional expectation given \(\{ \underline{U}_{1*}^n=\underline{u}_{1*}^n , V_{1}^n=v_{1}^n , U_{21}^n\oplus U_{31}^n= u_{21}^n \oplus u_{31}^n \}\) \[\begin{align} &&\frac{1}{\mathcal{M}} \sum_{\underaccent{\tilde{}}{m} \in \mathcal{M}} \sum_{(\uline{u}_{\bar{1}*}^n, v_{\bar{1}}^n, x^n)} p^{n}_{X, \uline{U}_{\bar{1}*}^n, V_{\bar{1}}^n | \underline{U}_{1*}V_{1} U_{21}\oplus U_{31} } \left( x^n,\uline{u}_{\bar{1}*}^n, v_{\bar{1}}^n,|\underline{u}_{1*}^n, v_{1}^n,u_{21}^n \oplus u_{31}^n \right) \tr\left( (I - \mu_{\underline{m}_1} ) \tr_{Y_2,Y_3}\{\rho_{x}^{\otimes n}\}\right) \nonumber \\ &=& \! \! \! \frac{1}{\mathcal{M}} \sum_{\underaccent{\tilde{}}{m} \in \mathcal{M}} \! \tr\left(\! \!(I - \mu_{\underline{m}_1}) \! \left[ \sum_{(\uline{u}_{\bar{1}*}^n, v_{\bar{1}}^n, x^n)} \! \! \! p^{n}_{X, \uline{U}_{\bar{1}*}^n, V_{\bar{1}}^n | \underline{U}_{1*}V_{1} U_{21}\oplus U_{31} } \left( x^n,\uline{u}_{\bar{1}*}^n, v_{\bar{1}}^n,|\underline{u}_{1*}^n, v_{1}^n,u_{21}^n \oplus u_{31}^n \right) \tr_{Y_2,Y_3}\{\rho_{x}^{\otimes n}\} \right] \right) \nonumber \\ &=& \! \! \! \frac{1}{\mathcal{M}_{1}} \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \tr((I -\mu_{\underline{m}_{1}} ) \xi_{\underline{z}_{1}^n(\underline{m}_{1})}^{\otimes n}), \nonumber \end{align}}}\] this term corresponds to the probability of error of the CQMAC, which we shall analyze now. Using the Hayashi-Nagaoka inequality [27], we have \[\begin{align} && \frac{1}{\mathcal{M}_{1}} \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \tr((I -\mu_{\underline{m}_{1}} ) \xi_{\underline{z}_{1}^n(\underline{m}_{1})}^{\otimes n}) = \frac{1}{\mathcal{M}_{1}} \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \tr((I -\mu_{\underline{m}_{1}} ) \xi_{\underline{z}_{1}^n(\underline{m}_{1})}^{\otimes n} \otimes \ketbra{0}) \nonumber \\ &\leq & \! \frac{1}{\mathcal{M}_{1}} \! \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \! \tr((I -\mu_{\underline{m}_{1}} ) \theta_{(\underline{z}_{1}^{n}, \underline{a}_{1}^{n})(\underline{m}_1)}^{\otimes n}) \! +\frac{1}{\mathcal{M}_{1}} \!\sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \! \norm{\theta_{(\underline{z}_{1}^{n}, \underline{a}_{1}^{n})(\underline{m}_1)}^{\otimes n} - \xi_{\underline{z}_{1}^{n}(\underline{m}_1)}^{\otimes n} \otimes \ket{0}\bra{0} }_{1} \!. 
\nonumber \\ &\leq & \frac{2}{\mathcal{M}_{1}} \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \tr\left( (I -\gamma^{*}_{(\underline{z}_{1}^{n},\underline{a}_{1}^{n})(\underline{m}_1)} ) \theta_{(\underline{z}_{1}^{n}, \underline{a}_{1}^{n})(\underline{m}_1)}^{\otimes n} \right) + \frac{4}{\mathcal{M}_{1}} \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \sum_{\uline{\tilde{m}}_1 \neq \underline{m}_1} \tr\left( \gamma^{*}_{(\underline{z}_{1}^{n},\underline{a}_{1}^{n})(\uline{\tilde{m}}_1)} \theta_{(\underline{z}_{1}^{n}, \underline{a}_{1}^{n})(\underline{m}_1)}^{\otimes n} \right) \nonumber \\ &+& \frac{1}{\mathcal{M}_{1}} \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \norm{\theta_{(\underline{z}_{1}^{n}, \underline{a}_{1}^{n})(\underline{m}_1)}^{\otimes n} - \xi_{\underline{z}_{1}^{n}(\underline{m}_1)}^{\otimes n} \otimes \ket{0}\bra{0} }_{1} \nonumber \end{align}\] \[\begin{align} &\leq & \! \!\!\! \frac{1}{\mathcal{M}_{1}} \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \norm{\theta_{(\underline{z}_{1}^{n}, \underline{a}_{1}^{n})(\underline{m}_1)}^{\otimes n} - \xi_{\underline{z}_{1}^{n}(\underline{m}_1)}^{\otimes n} \otimes \ket{0}\bra{0} }_{1} +\frac{2}{\mathcal{M}_{1}} \!\!\! \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \!\!\!\tr\left( (I -\gamma^{*}_{(\underline{z}_{1}^{n},\underline{a}_{1}^{n})(\underline{m}_1)} ) \theta_{(\underline{z}_{1}^{n}, \underline{a}_{1}^{n})(\underline{m}_1)}^{\otimes n} \right) \nonumber \\ \label{Eqn:TheFinalThreeTermsForEff5TxMAC} &+& \!\!\! \!\!\! \frac{4}{\mathcal{M}_{1}} \!\! \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \sum_{S \subseteq [4]} \sum_{(\tilde{m}_{1s} : s \in S)} \!\!\!\!\!\! \tr\left( \gamma^{*}_{(z_{1S}^n,a_{1S}^n)(\tilde{m}_{1S}),(z_{1S^{c}}^n,a_{1S^{c}})(m_{1S^{c}})} \theta_{(\underline{z}_{1}^{n}, \underline{a}_{1}^{n})(\underline{m}_1)}^{\otimes n} \right). \end{align}\tag{10}\] As is standard, we derive an upper bound on the error probability by averaging over an ensemble of codes. Let us now specify the distribution of the random codebook with respect to which we average the error probability. Specifically, each of the generator matrices is picked independently and uniformly from their respective range space. In other words, all the entries of every generator matrices are picked independently and uniformly from their respective finite fields. Furthermore, the bias vector is also picked independently and uniformly from their respective range space. The codewords of the conventional IID random \(\mathcal{V}_j\) codebook are mutually independent and identically distributed wrt to PMF \(\prod p_{V_j}\). We emphasize, that the coset codes built over \(\mathcal{U}_{ji}\), \(\mathcal{U}_{ki}\) intersect. In other words the smaller of these two codes is a sub-coset of the larger, this is accomplished by ensuring that the rows of the generator matrix of the larger code contains all the rows of the generator matrix of the smaller code. For specific details, the reader is referred to the distribution of the random code used to prove [7] or its simpler version [7]. Having described the distribution of the random code, we now continue with the analysis of the terms in 10 . 
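Before analyzing the remaining terms, note that the POVM elements in (8) have the square-root ("pretty good") measurement form \(\mu_{\underline{m}} = (\sum \gamma^{*})^{-\frac{1}{2}} \gamma^{*}_{\underline{m}} (\sum \gamma^{*})^{-\frac{1}{2}}\). Purely as a toy numerical illustration (random positive operators of our own making standing in for the \(\gamma^{*}\) terms, illustrative dimensions), such elements sum to the projector onto the support of the normalizing operator:

```python
import numpy as np

rng = np.random.default_rng(7)
d, M = 6, 4                                 # toy dimension and number of messages

def random_psd(d):
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return a @ a.conj().T                   # a positive semidefinite "detection" operator

gammas = [random_psd(d) for _ in range(M)]  # stand-ins for the gamma*_m operators in Eq. (8)
S = sum(gammas)

# S^{-1/2} on the support of S (pseudo-inverse square root via eigendecomposition).
w, V = np.linalg.eigh(S)
inv_sqrt = (V * np.where(w > 1e-12, 1 / np.sqrt(np.maximum(w, 1e-12)), 0.0)) @ V.conj().T

mus = [inv_sqrt @ g @ inv_sqrt for g in gammas]     # mu_m = S^{-1/2} gamma_m S^{-1/2}
total = sum(mus)
print("sum of POVM elements ~ projector onto supp(S):",
      np.allclose(total, total @ total, atol=1e-8), np.allclose(total, total.conj().T))
```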
In regard to the first two terms on the RHS of 10, we have \[\begin{align} \frac{1}{\mathcal{M}_{1}} \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \norm{\theta_{(\underline{z}_{1}^{n}, \underline{a}_{1}^{n})(\underline{m}_1)}^{\otimes n} - \xi_{\underline{z}_{1}^{n}(\underline{m}_1)}^{\otimes n} \otimes \ket{0}\bra{0} }_{1} &\leq& 90 \sqrt{\epsilon} + \epsilon, \nonumber \\ \frac{2}{\mathcal{M}_{1}} \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \tr\left( (I -\gamma^{*}_{(\underline{z}_{1}^{n},\underline{a}_{1}^{n})(\underline{m}_1)} ) \theta_{(\underline{z}_{1}^{n}, \underline{a}_{1}^{n})(\underline{m}_1)}^{\otimes n} \right) &\leq& 10808 \sqrt{\epsilon},\nonumber \end{align}\] which are proven in Appendix 6. We are left only with the third term on the RHS of 10, whose analysis we now elaborate; in doing so we leverage the smoothing and augmentation [4] property of the tilting maps we have defined. As is standard, this term is broken into fifteen terms: four singleton errors, six pair errors, four triple errors and one quadruple error. The analysis of the quadruple error is slightly different from that of the remaining fourteen terms. To analyze the quadruple error, we first prove that the difference between the average of the original state and the average of the tilted state4 5 is small. This is done analogously to the steps used in [4] to derive an upper bound on the \(\mathbb{L}_{\infty}\)-norm of \(N_{\delta}\) therein. Having done this, the rest of the analysis of the quadruple error is straightforward, since the corresponding projector has not been tilted and we are left with it operating on the average of the original state.
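The fifteen-term decomposition is simply an enumeration of the nonempty subsets of the four codebook indices Rx 1 decodes into; the following Python check of this bookkeeping is purely illustrative.

```python
from itertools import combinations

# The four codebook indices Rx 1 decodes into; the nonempty subsets of this
# index set are exactly the fifteen error events referred to above.
indices = [1, 2, 3, 4]
by_size = {r: len(list(combinations(indices, r))) for r in range(1, 5)}
print(by_size)                  # {1: 4, 2: 6, 3: 4, 4: 1}: singleton/pair/triple/quadruple
print(sum(by_size.values()))    # 15
```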
Having indicated the analysis of the quadruple error term, we are left with fourteen terms. The analysis of each of these fourteen terms is essentially identical. For each of these terms, our first step is similar to the above. Specifically, we first prove that the difference between the partially averaged original state and the partially averaged tilted state is small in the \(\mathbb{L}_{\infty}\)-norm. This is done analogously to the steps used in [4] to derive an upper bound on the \(\mathbb{L}_{\infty}\)-norm of \(N_{X;l_{x}\delta},N_{Y;l_{y}\delta}\) therein. Following this, we are left with both the state and the projector being tilted. Extracting out the corresponding isometry, as done in [4], we can reduce the analysis of this term to the standard analysis involving the corresponding typicality projectors acting on the original state. Having mentioned the similarities in our analysis, we now highlight the new elements owing to coset codes.
Remark 2. The use of coset codes results in codewords being uniformly distributed. In particular, the codewords are not automatically typical with respect to the desired distribution. This forces us to adopt certain new analysis steps beyond those mentioned above, which may be viewed as a standard extension of [4]. Specifically, the codewords being uniformly distributed results in an enlargement of the bins, which in turn requires a more careful analysis of the above terms. These additional steps will be fleshed out in detail in an enlarged version of this manuscript. Essentially, we incorporate the same sequence of steps as employed in [26] to derive upper bounds on \(\zeta_{4}(\underline{m})\), \(\zeta_{5}(\underline{m})\) therein. In particular, the use of a ‘list error event’ with \(L_{j}>1\) assists us in our pursuit. See [26] for more details regarding this sequence of steps.
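To illustrate the point of Remark 2 numerically, the following hedged sketch contrasts how often a uniformly distributed sequence (the letter distribution induced by a random coset codeword) is letter-typical with respect to a skewed target PMF, versus a sequence drawn IID from that PMF; the alphabet, target PMF, block length, tolerance and trial count are all placeholder choices made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder choices: ternary alphabet, skewed target PMF, block length,
# typicality tolerance and number of Monte Carlo trials.
q, n, delta, trials = 3, 1000, 0.05, 5000
p_target = np.array([0.7, 0.2, 0.1])

def is_typical(seq, p, delta):
    """Letter-typicality: empirical PMF within delta of p in every coordinate."""
    emp = np.bincount(seq, minlength=len(p)) / len(seq)
    return bool(np.all(np.abs(emp - p) <= delta))

# Uniformly distributed sequences, as produced by coset codewords, are essentially
# never typical w.r.t. a skewed PMF; IID sequences from the target PMF almost always are.
uniform_hits = sum(is_typical(rng.integers(0, q, n), p_target, delta) for _ in range(trials))
iid_hits = sum(is_typical(rng.choice(q, n, p=p_target), p_target, delta) for _ in range(trials))
print(uniform_hits / trials, iid_hits / trials)   # roughly 0.0 versus roughly 1.0
```

This gap is the reason the bins over each coset code must be enlarged, and why the list error event with \(L_{j}>1\) enters the analysis.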
Collating all of these steps, an exponentially decaying upper bound on the third term on the RHS of 10 can be proven if
\[\begin{align} S_{A_{j}} &< \sum_{a \in A_{j}} \log |\mathcal{U}_{a}| - H(U_{A_{j}}|U_{A_{j}^{c}},U_{j}^{\oplus},V_{j},Y_{j}), \tag{11} \\ S_{A_{j}}+S_{ij} &< \sum_{a \in A_{j}} \log |\mathcal{U}_{a}| + \log \upsilon_{j} - H(U_{A_{j}},U_{j}^{\oplus}|U_{A_{j}^{c}},V_{j},Y_{j}), \tag{12} \\ S_{A_{j}}+S_{kj} &< \sum_{a \in A_{j}}\log |\mathcal{U}_{a}| + \log \upsilon_{j} - H(U_{A_{j}},U_{j}^{\oplus}|U_{A_{j}^{c}},V_{j},Y_{j}), \tag{13} \\ S_{A_{j}} +K_{j}+L_{j} &< \sum_{a \in A_{j}} \log |\mathcal{U}_{a}|+H(V_{j}) - H(U_{A_{j}},V_{j}|U_{A_{j}^{c}},U_{j}^{\oplus},Y_{j}), \tag{14} \\ S_{A_{j}}+K_{j}+L_{j}+S_{ij} &< \sum_{a \in A_{j}} \log |\mathcal{U}_{a}| + \log \upsilon_{j}+H(V_{j}) - H(U_{A_{j}},V_{j},U_{j}^{\oplus} |U_{A_{j}^{c}},Y_{j}), \text{ and} \tag{15} \\ S_{A_{j}}+K_{j}+L_{j}+S_{kj} &< \sum_{a \in A_{j}}\log |\mathcal{U}_{a}| +\log \upsilon_{j}+H(V_{j}) - H(U_{A_{j}},V_{j},U_{j}^{\oplus}|U_{A_{j}^{c}},Y_{j}). \tag{16} \end{align}\] This completes our detailed proof sketch. Additional steps, filling in some of the standard arguments above, will be provided in an enlarged version of this manuscript. ◻
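As a purely illustrative aid, the six bounds 11–16 for a fixed subset \(A_{j}\) can be mechanized as below; every numerical value is a hypothetical placeholder standing in for rates and conditional entropies evaluated on a chosen test state, and none of them is derived from this paper.

```python
import math

def check_bounds_for_Aj(rates, H, log_card_U_Aj, log_upsilon_j):
    """Check the six inequalities 11-16 for one fixed subset A_j.

    rates: dict with keys 'S_Aj', 'S_ij', 'S_kj', 'K_j', 'L_j' (hypothetical values).
    H: dict of (conditional) entropies keyed by the strings below (hypothetical values).
    log_card_U_Aj: stands for the sum of log|U_a| over a in A_j.
    """
    r = rates
    return all([
        r['S_Aj'] < log_card_U_Aj - H['U_Aj | U_Ajc, U_xor, V_j, Y_j'],                              # 11
        r['S_Aj'] + r['S_ij'] < log_card_U_Aj + log_upsilon_j
            - H['U_Aj, U_xor | U_Ajc, V_j, Y_j'],                                                    # 12
        r['S_Aj'] + r['S_kj'] < log_card_U_Aj + log_upsilon_j
            - H['U_Aj, U_xor | U_Ajc, V_j, Y_j'],                                                    # 13
        r['S_Aj'] + r['K_j'] + r['L_j'] < log_card_U_Aj + H['V_j']
            - H['U_Aj, V_j | U_Ajc, U_xor, Y_j'],                                                    # 14
        r['S_Aj'] + r['K_j'] + r['L_j'] + r['S_ij'] < log_card_U_Aj + log_upsilon_j + H['V_j']
            - H['U_Aj, V_j, U_xor | U_Ajc, Y_j'],                                                    # 15
        r['S_Aj'] + r['K_j'] + r['L_j'] + r['S_kj'] < log_card_U_Aj + log_upsilon_j + H['V_j']
            - H['U_Aj, V_j, U_xor | U_Ajc, Y_j'],                                                    # 16
    ])

# Toy invocation with made-up numbers, only to show the intended bookkeeping.
rates = {'S_Aj': 0.4, 'S_ij': 0.3, 'S_kj': 0.3, 'K_j': 0.2, 'L_j': 0.1}
H = {'U_Aj | U_Ajc, U_xor, V_j, Y_j': 0.5, 'U_Aj, U_xor | U_Ajc, V_j, Y_j': 0.9, 'V_j': 1.0,
     'U_Aj, V_j | U_Ajc, U_xor, Y_j': 1.2, 'U_Aj, V_j, U_xor | U_Ajc, Y_j': 1.6}
print(check_bounds_for_Aj(rates, H, log_card_U_Aj=math.log2(3), log_upsilon_j=math.log2(3)))  # True
```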
The inner bound proven in Thm. 1 only enables the Rxs to decode a bivariate component of the interference. As Marton’s common codebook [16] strategy proves, there is value in decoding a univariate part of the other users’ codewords as well. By combining the unstructured codebook based strategy with the coset code based strategy, we can therefore derive an inner bound that subsumes all currently known inner bounds for the three-user CQBC. The following inner bound is obtained by combining the conventional unstructured IID code based strategy due to Marton [16] and the above coset code based strategy (Thm. 1). A proof of this inner bound is involved, but is essentially a generalization of the proof provided for Thm. 1. We indicate the steps of the proof in an enlarged version of this article. In regard to the code structure, refer to Fig. 6.
Figure 6: \(16\) codes employed in the proof of Thm. 2. Row \(j\) depicts the codes of Rx \(j\). The \(U_{ji} : ji \in \llbracket 3 \rrbracket\) are coset codes built over finite fields. Codes with the same color are built over the same finite field, and the smaller of the two is a sub-coset of the larger. The black, yellow and orange codes are built over the auxiliary finite sets \(\mathcal{W}\), \(\mathcal{Q}_{ji} : ji \in \llbracket 3 \rrbracket\) and \(\mathcal{V}_{j} : j \in [3]\) respectively, using the conventional IID random code structure.
Notation: As before, \(i,j,k \in [3]\) denote distinct indices, i.e., \(\{i,j,k\} = [3]\). We let \(\underline{\underline{Q}}\mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}(\underline{Q}_{1*},\underline{Q}_{2*},\underline{Q}_{3*})=(\underline{Q}_{*1},\underline{Q}_{*2},\underline{Q}_{*3})\), where \(\underline{Q}_{j*}=(Q_{ji},Q_{jk})\) and similarly \(\underline{\underline{\mathcal{Q}}}=\times_{j=1}^{3}\underline{\mathcal{Q}}_{j*}\), \(\underline{\underline{q}}= (\underline{q}_{1*},\underline{q}_{2*},\underline{q}_{3*})\), where \(\underline{q}_{j*}=(q_{ji},q_{jk})\), and analogously for the rest of the objects.
Theorem 2. Let \(\hat{\alpha}_{US} \subseteq [0,\infty)^{4}\) be the set of all rate-cost quadruples \((R_{1},R_{2},R_{3},\tau) \in [0,\infty)^{4}\) for which there exist (i) finite sets \(\mathcal{W},\mathcal{Q}_{\xi},\mathcal{V}_{j}\) for \(\xi \in \Xi\mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}\{12,13,21,23,31,32\}\) and \(j \in [3]\), (ii) finite fields \(\mathcal{U}_{ij}=\mathcal{F}_{\upsilon_{j}}\) for each \(ij \in \Xi\), (iii) a PMF \(p_{W\underline{\underline{Q}}\underline{\underline{U}}~\!\!\underline{V}X}=p_{W\underline{Q}_{1*}\underline{Q}_{2*}\underline{Q}_{3*}\underline{U}_{1*}\underline{U}_{2*}\underline{U}_{3*}V_{1}V_{2}V_{3} X}\) on \(\mathcal{W}\times\underline{\underline{\mathcal{U}}}\times \underline{\underline{\mathcal{Q}}}\times \underline{\mathcal{V}}\times\mathcal{X}\), (iv) nonnegative numbers \(\alpha,\alpha_{j},\beta_{ij},\nu_{ij},S_{ij},T_{ij}:ij \in \Xi, K_{j},L_{j}:j\in \left\{ 1,2,3\right\}\), such that \(\alpha > \alpha_{1},\alpha_{2},\alpha_{3}\), \(R_{1}=\alpha_{1}+\nu_{12}+\nu_{13}+T_{12}\log \upsilon_{2}+T_{13}\log \upsilon_{3}+L_{1}, R_{2}=\alpha_{2}+\nu_{21}+\nu_{23}+T_{21}\log \upsilon_{1}+T_{23}\log \upsilon_{3}+L_{2}, R_{3}=\alpha_{3}+\nu_{31}+\nu_{32}+T_{31}\log \upsilon_{1}+T_{32}\log \upsilon_{2}+L_{3}\), \(\mathbb{E}\left\{ \kappa(X) \right\} \leq \tau\), \(S_{A}+M_{B}+K_{C}+\beta_{D} >\Theta(A,B,C,D)\) where \(\Theta(A,B,C,D)\mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}\) \[\begin{align} \!\!\!\max_{\substack{(\theta_{j}:j \in B) \in \underset{j \in B}{\prod} \mathcal{F}_{\upsilon_{j}}}}\!\! \left\{\!\!\!\! \begin{array}{c}\sum_{a \in A}\log |\mathcal{U}_{a}| + \sum_{j \in B}\log \upsilon_{j}+\sum_{c \in C} H(V_{c}|W)+\sum_{d \in D}H(Q_{d}|W) - \nonumber \\H(U_{A},U_{ji}\oplus \theta_{j}U_{jk}\!:j \in B,V_{C}|W)\end{array} \!\!\!\right\}\!\!\!\!\!
\nonumber \end{align}\] for all \(A,D \subseteq \left\{12,13,21,23,31,32\right\}, B \subseteq \left\{ 1,2,3 \right\}, C \subseteq \left\{ 1,2,3 \right\}\) that satisfy \(A \cap A(B) = \emptyset\), where \(A({B}) = \cup_{j \in B}\{ ji,jk\}\), \(U_{A} = (U_{jk}:jk \in A)\), \(V_{C}=(V_{c}:c \in C)\), \(Q_{D}=(Q_{d}:d \in D)\), \(S_{A} = \sum _{jk \in A}S_{jk}, M_{B}\mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}\sum_{j \in B} \max\{ S_{ij}+T_{ij},S_{kj}+T_{kj}\}, K_{C} = \sum_{c \in C}K_{c}\), \(\beta_{D} = \sum _{jk \in D}\beta_{jk}\) and \[\begin{align} \label{Eqn:Chnlbnd1} S_{\mathcal{A}_{j}} + T_{\mathcal{A}_{j}} + \beta_{\mathcal{C}_{j}} + \nu_{\mathcal{C}_{j}} &\leq& \sum_{a \in \mathcal{A}_{j}} \log |\mathcal{U}_{a}| + H(Q_{\mathcal{C}_{j}}|Q_{\mathcal{C}_{j}^{C}},W) \nonumber \\ && -H(U_{\mathcal{A}_{j}},Q_{\mathcal{C}_{j}}|U_{\mathcal{A}_{j}^{c}},Q_{\mathcal{C}_{j}^{C}},U_{ij} \oplus U_{kj}, V_{j}, Y_{j}, W), \nonumber \end{align}\qquad{(4)}\] \[\begin{align} \label{Eqn:Chnlbnd2} S_{\mathcal{A}_{j}} + T_{\mathcal{A}_{j}} + \beta_{\mathcal{C}_{j}} + \nu_{\mathcal{C}_{j}} + S_{ij} + T_{ij} &\leq& \sum_{a \in \mathcal{A}_{j}} \log |\mathcal{U}_{a}| + \log \upsilon_{j} + H(Q_{\mathcal{C}_{j}}|Q_{\mathcal{C}_{j}^{C}},W) \nonumber \\ &&- H(U_{\mathcal{A}_{j}}, Q_{\mathcal{C}_{j}}, U_{ij} \oplus U_{kj}|U_{\mathcal{A}_{j}^{c}},Q_{\mathcal{C}_{j}^{C}},V_{j},Y_{j},W), \nonumber \\ \label{Eqn:Chnlbnd4} S_{\mathcal{A}_{j}}+T_{\mathcal{A}_{j}}+\beta_{\mathcal{C}_{j}}+\nu_{\mathcal{C}_{j}}+S_{kj}+T_{kj} &\!\leq\!& \sum_{a \in \mathcal{A}_{j}}\!\!\log |\mathcal{U}_{a}| + \log \upsilon_{j} +H(Q_{\mathcal{C}_{j}}|Q_{\mathcal{C}_{j}^{C}},W) \nonumber \\ &&- H(U_{\mathcal{A}_{j}}\!,\!Q_{\mathcal{C}_{j}},\!U_{ij}\!\oplus\! U_{kj}|U_{\mathcal{A}_{j}^{c}},Q_{\mathcal{C}_{j}^{C}},V_{j},Y_{j},W), \nonumber \\ \label{Eqn:Chnlbnd6} S_{\mathcal{A}_{j}} +T_{\mathcal{A}_{j}} +\beta_{\mathcal{C}_{j}} + \nu_{\mathcal{C}_{j}} + K_{j} + L_{j} &\leq& \sum_{a \in \mathcal{A}_{j}} \log |\mathcal{U}_{a}|+ H(V_{j}|W) + H(Q_{\mathcal{C}_{j}}|Q_{\mathcal{C}_{j}^{C}},W) \nonumber \\ && -H(U_{\mathcal{A}_{j}}, Q_{\mathcal{C}_{j}},\!V_{j}|U_{\mathcal{A}_{j}^{c}},Q_{\mathcal{C}_{j}^{C}},U_{ij} \oplus U_{kj},Y_{j},W), \nonumber \\ \label{Eqn:Chnlbnd8} S_{\mathcal{A}_{j}} + T_{\mathcal{A}_{j}} +\beta_{\mathcal{C}_{j}} +\nu_{\mathcal{C}_{j}} +K_{j}+L_{j}+S_{ij}+T_{ij} &\leq& \sum_{a \in \mathcal{A}_{j}} \log |\mathcal{U}_{a}| + \log \upsilon_{j} +H(V_{j}|W) +H(Q_{\mathcal{C}_{j}}|Q_{\mathcal{C}_{j}^{C}},W) \nonumber \\ \label{Eqn:Chnlbnd9} && -H(U_{\mathcal{A}_{j}}, Q_{\mathcal{C}_{j}}, V_{j},U_{ij}\oplus U_{kj}|U_{\mathcal{A}_{j}^{c}}, Q_{\mathcal{C}_{j}^{C}}, Y_{j},W) \nonumber \\ \label{Eqn:Chnlbnd10} S_{\mathcal{A}_{j}} + T_{\mathcal{A}_{j}} + \beta_{\mathcal{C}_{j}} + \nu_{\mathcal{C}_{j}} + K_{j}+L_{j}+S_{kj}+T_{kj} &\leq& \sum_{a \in \mathcal{A}_{j}}\log |\mathcal{U}_{a}| + \log \upsilon_{j}+H(V_{j}|W) +H(Q_{\mathcal{C}_{j}}|Q_{\mathcal{C}_{j}^{C}},W) \nonumber\\ \label{Eqn:Chnlbnd11} && - H(U_{\mathcal{A}_{j}}, Q_{\mathcal{C}_{j}},V_{j},U_{ij}\oplus U_{kj}|U_{\mathcal{A}_{j}^{c}},Q_{\mathcal{C}_{j}^{C}},Y_{j},W), \nonumber \\ \label{Eqn:Chnlbnd12} \alpha+\overline{S}+\overline{T}+K_{j}+L_{j}+\overline{\beta}+\overline{\nu} &\leq& \log(\upsilon_{i}\upsilon_{k})+H(Q_{ji},Q_{jk}|W)+H(V_{j}|W)\nonumber \\ && -H(\underline{U}_{j*},\underline{Q}_{j*},W,V_{j}|Y_{j},U_{ij}\oplus U_{kj}), \nonumber \\ \label{jtquewdl} \alpha+ \overline{S}+ \overline{T}+ K_{j} + L_{j} + \overline{\beta}+ \overline{\nu}+ S_{ij} + 
K_{ij} & \leq & \log(\upsilon_{i}\upsilon_{k})+H(Q_{ji},Q_{jk}|W)+H(V_{j}|W) \nonumber \\ && -H(\underline{U}_{j*}, \underline{Q}_{j*}, W, V_{j}, U_{ij}\oplus U_{kj}|Y_{j}), \nonumber \\ \label{pmathrks} \alpha+ \overline{S}+ \overline{T}+ K_{j} + L_{j} + \overline{\beta} + \overline{\nu} + S_{ij} + K_{ij} & \leq & \log(\upsilon_{i}\upsilon_{k})+H(Q_{ji}, Q_{jk}|W)+H(V_{j}|W) \nonumber \\ && -H(\underline{U}_{j*}, \underline{Q}_{j*}, W, V_{j}, U_{ij}\oplus U_{kj}|Y_{j}), \nonumber \end{align}\] for every \(\mathcal{A}_{j} \subseteq \left\{ ji,jk\right\},\mathcal{C}_{j} \subseteq \left\{ ji,jk\right\}\) with distinct indices \(i,j,k\) in \(\left\{ 1,2,3 \right\}\), where \(S_{\mathcal{A}_{j}} \mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}\sum_{a \in \mathcal{A}_{j}}S_{a}\), \(T_{\mathcal{A}_{j}} \mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}\sum_{a \in \mathcal{A}_{j}}T_{a}\), \(\beta_{\mathcal{C}_{j}} = \sum _{jk \in \mathcal{C}_{j}}\beta_{jk}\), \(\nu_{\mathcal{C}_{j}} = \sum _{jk \in \mathcal{C}_{j}}\nu_{jk}\), \(U_{\mathcal{A}_{j}} = (U_{a}:a \in \mathcal{A}_{j})\), \(Q_{\mathcal{C}_{j}} = (Q_{c}:c \in \mathcal{C}_{j})\) with the notation \(\overline{S}= S_{ji}+S_{jk}, \overline{T}= T_{ji}+T_{jk}, \overline{\beta} = \beta_{ji}+\beta_{jk},\overline{\nu}= \nu_{ji}+\nu_{jk}\), and all the information quantities are evaluated with respect to the state \[\begin{align} \label{Eqn:StageIITestChnl} \Phi^{W\underline{\underline{Q}}\!\!~\underline{\underline{U}}\!\!~\underline{U}^{\oplus}\!\!~\underline{V}X\!\!~\underline{Y}}\mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}\!\!\!\!\!\!\! \sum_{\substack{w,\underline{\underline{q}},\underline{\underline{u}},\underline{v},x\\u_{1}^{\oplus},u_{2}^{\oplus},u_{3}^{\oplus} }}\!\!\!\!\!\!p_{W\underline{\underline{Q}}~\!\!\underline{\underline{U}}~\!\!\underline{V}X}(w,\underline{\underline{q}},\underline{\underline{u}},\underline{v},x)\mathbb{1}_{\left\{\substack{u_{ji}\oplus u_{jk}\\=u_{j}^{\oplus}:j\in[3]}\right\}}\!\ketbra{w~\underline{\underline{q}}~\underline{\underline{u}}~ u_{1}^{\oplus}~ u_{2}^{\oplus}~ u_{3}^{\oplus}~ \underline{v}~ x} \!\otimes\! \rho_{x}\nonumber \end{align}\qquad{(5)}\] Let \(\alpha_{US}\) denote the convex closure of \(\hat{\alpha}_{US}\). Then \(\alpha_{US} \subseteq \mathscr{C}(\tau)\) is an achievable rate region.
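To convey the combinatorial structure of Thm. 2, the sketch below enumerates the tuples \((A,B,C,D)\) indexing the constraints \(S_{A}+M_{B}+K_{C}+\beta_{D} > \Theta(A,B,C,D)\) and evaluates the left-hand side for hypothetical placeholder rate parameters; it illustrates the bookkeeping only and is not an implementation of the theorem.

```python
from itertools import chain, combinations

Xi = ['12', '13', '21', '23', '31', '32']      # pair indices of the coset codes U_ij
users = [1, 2, 3]

def powerset(s):
    return list(chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)))

def A_of_B(B):
    # A(B) = union over j in B of {ji, jk}: the pairs whose first index lies in B.
    return {p for p in Xi if int(p[0]) in B}

# Enumerate the (A, B, C, D) tuples indexing the constraints, keeping only those
# with A disjoint from A(B), as required in the theorem.
tuples = [(A, B, C, D)
          for B in powerset(users)
          for A in powerset([p for p in Xi if p not in A_of_B(B)])
          for C in powerset(users)
          for D in powerset(Xi)]
print(len(tuples))                             # number of constraints of this form

# Hypothetical placeholder rate parameters (none of these come from the paper).
S = {p: 0.2 for p in Xi}; T = {p: 0.1 for p in Xi}
K = {j: 0.3 for j in users}; beta = {p: 0.05 for p in Xi}

def lhs(A, B, C, D):
    # S_A + M_B + K_C + beta_D, where M_B sums, for each j in B, the larger of
    # S_ij + T_ij and S_kj + T_kj over the two pairs ending in j.
    M_B = sum(max(S[p] + T[p] for p in Xi if int(p[1]) == j) for j in B)
    return sum(S[p] for p in A) + M_B + sum(K[c] for c in C) + sum(beta[p] for p in D)

print(lhs(*tuples[-1]))                        # left-hand side of one such constraint
```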
We shall provide a description of the code structure, encoding and decoding, explain the details of this coding scheme, and sketch elements of the proof in an enlarged version of this manuscript. Fig. 6 depicts the code structure with the rate parameters as specified in the theorem statement.
To verify \[\begin{align} \norm{\theta_{\underline{z}^{n}, \underline{a}^{n}}^{\otimes n} - \xi_{\underline{z}^{n}}^{\otimes n} \otimes \ket{0}\bra{0} }_{1} \leq 90 \eta^{n}, \nonumber \end{align}\] it is sufficient to show that \[\begin{align} \norm{ \mathcal{T}_{ \underline{a}^{n}, \eta^{n}} ( \ket{h}\bra{h}) - \ket{h}\bra{h} }_{1} \leq 90 \eta^{n}. \nonumber \end{align}\] Towards the same, note that \[\begin{align} &&\mathcal{T}_{ \underline{a}^{n}, \eta^{n}} ( \ket{h}\bra{h}) = \frac{1}{\Omega(\eta)} \left(\ket{h} + \sum_{S \subseteq [4]\setminus [4]} \eta^{n |S| } \ket{h} \otimes \ket{a_{s}^{n}} \right) \times \left(\bra{h} + \sum_{\tilde{S} \subseteq [4]\setminus [4]} \eta^{n |\tilde{S}| } \bra{h} \otimes \bra{a_{\tilde{S}}^{n}} \right) \nonumber \\&=& \! \! \frac{1}{\Omega(\eta)} \ket{h} \bra{h} \! \!+\! \! \frac{1}{\Omega(\eta)} \ket{h} \left( \sum_{\tilde{S} \subseteq [4] \setminus [4]} \! \! \! \eta^{n |\tilde{S}| } \bra{h} \otimes \bra{a_{\tilde{S}}^{n}} \right) \! \! + \! \! \frac{1}{\Omega(\eta)} \left( \sum_{S \subseteq [4]\setminus [4]} \! \! \! \eta^{n |S| } \ket{h} \otimes \ket{a_{s}^{n}} \right) \bra{h} \nonumber \\ &&+ \frac{1}{\Omega(\eta)} \! \! \left( \sum_{S \subseteq [4] \setminus [4]} \! \! \! \eta^{n |S| } \ket{h} \otimes \ket{a_{s}^{n}} \right)\! \! \! \left( \sum_{\tilde{S} \subseteq [4] \setminus [4]} \! \! \! \eta^{n |\tilde{S}| } \bra{h} \otimes \bra{a_{\tilde{S}}^{n}} \right).\nonumber \end{align}\] Next, note that \[\begin{align} \left\| \mathcal{T}_{ \underline{a}^{n}, \eta^{n}} ( \ket{h}\bra{h}) - \ket{h}\bra{h} \right\|_{1} \leq \frac{1-\Omega(\eta)}{\Omega(\eta)} \left\|\ket{h} \bra{h} \right\|_{1} + \frac{1}{\Omega(\eta)} \left\| \ket{h} \left( \sum_{\tilde{S} \subseteq [4]\setminus [4]} \eta^{n |\tilde{S}| } \bra{h} \otimes \bra{a_{\tilde{S}}^{n}} \right) \right\|_{1} +~~~~~~~~~~~~~~~\nonumber\\ \!\!\!\!\!\!\!\!\!\!\!\!\frac{1}{\Omega(\eta)} \left\| \left( \sum_{S \subseteq [4]\setminus [4]} \eta^{n |S| } \ket{h} \otimes \ket{a_{s}^{n}} \right) \!\!\bra{h} \right\|_{1} \! \! \!+\! \frac{1}{\Omega(\eta)} \bigg\| \left( \sum_{S \subseteq [4]\setminus [4]} \!\!\!\! \eta^{n |S| } \ket{h} \otimes \ket{a_{s}^{n}} \right) \!\!\!\left( \sum_{\tilde{S} \subseteq [4]\setminus [4]} \!\!\!\! \eta^{n |\tilde{S}| } \bra{h} \otimes \bra{a_{\tilde{S}}^{n}} \right) \bigg\|_{1} \label{eq:line4}. \end{align}\tag{17}\] In the following, we analyze each term on the RHS of 17 separately. Towards the same, note that \[\begin{align} \frac{1-\Omega(\eta)}{\Omega(\eta)} \left\|\ket{h} \bra{h} \right\|_{1} = \frac{16 \eta^{2n} + 36 \eta^{4n} + 16 \eta^{6n}}{\Omega(\eta)}, \nonumber \end{align}\] \[\begin{align} \left\| \ket{h} \left( \sum_{\tilde{S} \subseteq [4] \setminus [4]} \eta^{n |\tilde{S}|} \bra{h} \otimes \bra{a_{j\tilde{S}}^{n}} \right) \right\|_{1} \! \! \!\!\!\! \!&=& \! \!\! \! \! \tr \left( \sqrt{ \left( \sum_{S \subseteq [4] \setminus [4]} \eta^{n |S|} \ket{h} \otimes \ket{a_{jS}^{n}} \right) \bra{h} \ket{h} \left( \sum_{\tilde{S} \subseteq [4] \setminus [4]} \eta^{n |\tilde{S}|} \bra{h} \otimes \bra{a_{j\tilde{S}}^{n}} \right) } \right) \nonumber \\ \! \! \!\!\!\! \!&=& \! \!\! \! \! \tr \left( \sqrt{ \left( \sum_{S \subseteq [4] \setminus [4]} \eta^{n |S|} \ket{h} \otimes \ket{a_{jS}^{n}} \right) \left( \sum_{\tilde{S} \subseteq [4] \setminus [4]} \eta^{n |\tilde{S}|} \bra{h} \otimes \bra{a_{j\tilde{S}}^{n}} \right) } \right) \nonumber \\ \! \! \!\!\!\! \!&=& \! \!\! \! 
\!\sqrt{ \left( \sum_{S \subseteq [4] \setminus [4]} \eta^{n |S|} \bra{h} \otimes \bra{a_{jS}^{n}} \right) \left( \sum_{\tilde{S} \subseteq [4] \setminus [4]} \eta^{n |\tilde{S}|} \ket{h} \otimes \ket{a_{j\tilde{S}}^{n}} \right) } \nonumber \\ \! \! \!\!\!\! \!&=& \! \!\! \! \! \sqrt{ \sum_{S \subseteq [4] \setminus [4]} \sum_{\tilde{S} \subseteq [4] \setminus [4]} \eta^{n |S|} \eta^{n |\tilde{S}|} \bra{h} \ket{h} \otimes \bra{a_{jS}^{n}} \ket{a_{j\tilde{S}}^{n}} } \nonumber \\ \! \! \!\!\!\! \!&=& \! \!\! \! \!\sqrt{ \sum_{S \subseteq [4] \setminus [4]} \eta^{2n |S|}} \nonumber = \sqrt{ 4 \eta^{2n} + 6 \eta^{4n} + 4 \eta^{6n}}, \nonumber \end{align}\] \[\begin{align} \left\| \left( \sum_{S \subseteq [4]\setminus [4]} \eta^{n |S| } \ket{h} \otimes \ket{a_{S}^{n}} \right) \bra{h} \right\|_{1} \! \! \!\!\!\! \!&=& \! \!\! \! \! \tr \left( \sqrt{ \ket{h} \left( \sum_{\tilde{S} \subseteq [4]\setminus [4]} \eta^{n |\tilde{S}| } \bra{h} \otimes \bra{a_{S}^{n}} \right) \left( \sum_{S \subseteq [4]\setminus [4]} \eta^{n |S| } \ket{h} \otimes \ket{a_{S}^{n}} \right) \bra{h}} \right) \nonumber \\ \! \! \!\!\!\! \!&=& \! \!\! \! \!\sqrt{4 \eta^{2n} + 6 \eta^{4n} + 4 \eta^{6n}}, and \nonumber \end{align}\] \[\begin{align} \bigg\| \left( \sum_{S \subseteq [4] \setminus [4]} \! \! \eta^{n |S|} \ket{h} \otimes \ket{a_{S}^{n}} \right) \! \! \left( \sum_{\tilde{S} \subseteq [4] \setminus [4]} \! \! \eta^{n |\tilde{S}|} \bra{h} \otimes \bra{a_{\tilde{S}}^{n}} \right) \bigg\|_{1} \!\! \! \! \!\! \! \!&=& \! \!\! \! \! \left( \sum_{S \subseteq [4] \setminus [4]} \! \! \!\eta^{n |S|} \bra{h} \otimes \bra{a_{S}^{n}} \! \! \right) \!\! \left( \sum_{\tilde{S} \subseteq [4] \setminus [4]} \! \! \! \eta^{n |\tilde{S}|} \ket{h} \otimes \ket{a_{\tilde{S}}^{n}} \! \! \right) \nonumber \\ \! \! \!\!\!\! \!&=& \! \!\! \! \!4 \eta^{2n} + 6 \eta^{4n} + 4 \eta^{6n}. \nonumber \end{align}\] Hence, \[\begin{align} \left\| \mathcal{T}_{ \underline{a}^{n}, \eta^{n}} ( \ket{h}\bra{h}) - \ket{h}\bra{h} \right\|_{1} &\leq \frac{16 \eta^{2n} + 36 \eta^{4n} + 16 \eta^{6n}}{\Omega(\eta)} + 2 \left(\frac{\sqrt{4 \eta^{2n} + 6 \eta^{4n} + 4 \eta^{6n}}}{\Omega(\eta)}\right) + \frac{4 \eta^{2n} + 6 \eta^{4n} + 4 \eta^{6n}}{\Omega(\eta)} \\ &= \frac{20 \eta^{2n} + 42 \eta^{4n} + 20 \eta^{6n}}{\Omega(\eta)} + \frac{2\sqrt{4 \eta^{2n} + 6 \eta^{4n} + 4 \eta^{6n}}}{\Omega(\eta)} \\ & \leq \frac{82 \eta^{2n}+ 2 \sqrt{14} \eta^{n}}{\Omega(\eta)} \\ &\leq 90 \eta^{n} .\nonumber \end{align}\]
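The final chain of estimates above is elementary arithmetic; the following check evaluates it on a numerical grid under two assumptions made here only for illustration, namely \(t = \eta^{n} \in (0,1]\) and a normalization \(\Omega(\eta) \geq 1\).

```python
import numpy as np

# t plays the role of eta^n; Omega >= 1 is assumed here purely for illustration,
# being the normalization constant of a tilted unit vector.
t = np.linspace(1e-4, 1.0, 10_000)
Omega = 1.0

first = (20 * t**2 + 42 * t**4 + 20 * t**6) / Omega
cross = 2 * np.sqrt(4 * t**2 + 6 * t**4 + 4 * t**6) / Omega

# Intermediate estimate: first + cross <= (82 t^2 + 2 sqrt(14) t) / Omega for t <= 1.
assert np.all(first + cross <= (82 * t**2 + 2 * np.sqrt(14) * t) / Omega + 1e-12)
# Final estimate: (82 t^2 + 2 sqrt(14) t) / Omega <= 90 t whenever t <= 1 and Omega >= 1.
assert np.all((82 * t**2 + 2 * np.sqrt(14) * t) / Omega <= 90 * t)
print("both estimates hold on the grid")
```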
We begin by proving \[\begin{align} \frac{1}{\mathcal{M}_{1}} \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \norm{\theta_{(\underline{z}_{1}^{n}, \underline{a}_{1}^{n})(\underline{m}_1)}^{\otimes n} - \xi_{\underline{z}_{1}^{n}(\underline{m}_1)}^{\otimes n} \otimes \ket{0}\bra{0} }_{1} \leq 90 \sqrt{\epsilon} + \epsilon. \nonumber \end{align}\] We have, \[\begin{align} \frac{1}{\mathcal{M}_{1}} \! \! \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \! \! \! \norm{\theta_{(\underline{z}_{1}^{n}, \underline{a}_{1}^{n})(\underline{m}_1)}^{\otimes n} \! \! - \xi_{\underline{z}_{1}^{n}(\underline{m}_1)}^{\otimes n} \otimes \ket{0}\bra{0} }_{1} \! \! = \! \! \frac{1}{\mathcal{M}_{1}} \! \! \! \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \sum_{\underline{z}_{1}^n} \sum_{\underline{a}_{1}^n} \norm{\theta_{(\underline{z}_{1}^{n}, \underline{a}_{1}^{n})(\underline{m}_1)}^{\otimes n} \! \! - \xi_{\underline{z}_{1}^{n}(\underline{m}_1)}^{\otimes n} \otimes \ket{0}\bra{0} }_{1} \nonumber \\ \mathbb{1} \{\underline{z}_{1}^{n}(\underline{m}_1)=\underline{z}_{1}^{n}, \underline{a}_{1}^{n}(\underline{m}_{1}) = \underline{a}_{1}^{n}\}, \nonumber \end{align}\] by evaluating the expectation
\[\begin{align} && \frac{1}{\mathcal{M}_{1}} \frac{1}{|\uline{\mathcal{A}_{1}|}^n} \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \sum_{\underline{z}_{1}^n} \sum_{\underline{a}_{1}^n} p_{\underline{Z}_{1}}^n (\underline{z}_{1}^{n}) \norm{\theta_{(\underline{z}_{1}^{n}, \underline{a}_{1}^{n})}^{\otimes n} - \xi_{\underline{z}_{1}^{n}}^{\otimes n} \otimes \ket{0}\bra{0} }_{1} \nonumber \\ &=& \frac{1}{\mathcal{M}_{1}} \frac{1}{|\uline{\mathcal{A}_{1}|}^n} \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \sum_{\underline{z}_{1}^n \in T^{n}_{\delta}(p_{\underline{Z}_{1}} )} \sum_{\underline{a}_{1}^n} p_{\underline{Z}_{1}}^n (\underline{z}_{1}^{n}) \norm{\theta_{(\underline{z}_{1}^{n}, \underline{a}_{1}^{n})}^{\otimes n} - \xi_{\underline{z}_{1}^{n}}^{\otimes n} \otimes \ket{0}\bra{0} }_{1} \nonumber \\ &+& \frac{1}{\mathcal{M}_{1}} \frac{1}{|\uline{\mathcal{A}_{1}|}^n} \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \sum_{\underline{z}_{1}^n \notin T^{n}_{\delta}(p_{\underline{Z}_{1}} )} \sum_{\underline{a}_{1}^n} p_{\underline{Z}_{1}}^n (\underline{z}_{1}^{n}) \norm{\theta_{(\underline{z}_{1}^{n}, \underline{a}_{1}^{n})}^{\otimes n} - \xi_{\underline{z}_{1}^{n}}^{\otimes n} \otimes \ket{0}\bra{0} }_{1} \nonumber \\ & \leq & 90 \eta^n + \epsilon. \nonumber \end{align}\] For \(\eta ^n = \sqrt{\epsilon}\), \(\frac{1}{\mathcal{M}_{1}} \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \norm{\theta_{(\underline{z}_{1}^{n}, \underline{a}_{1}^{n})(\underline{m}_1)}^{\otimes n} - \xi_{\underline{z}_{1}^{n}(\underline{m}_1)}^{\otimes n} \otimes \ket{0}\bra{0} }_{1} \leq 90 \sqrt{\epsilon} + \epsilon\). Now, we proceed to prove \[\begin{align} \frac{2}{\mathcal{M}_{1}} \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \tr\left( (I -\gamma^{*}_{(\underline{z}_{1}^{n},\underline{a}_{1}^{n})(\underline{m}_1)} ) \theta_{(\underline{z}_{1}^{n}, \underline{a}_{1}^{n})(\underline{m}_1)}^{\otimes n} \right) \leq 10808 \sqrt{\epsilon}. \nonumber \end{align}\] We have, \[\begin{align} &&E \left\{ \frac{2}{\mathcal{M}_{1}} \!\! \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \!\!\! 
\tr\left( (I -\gamma^{*}_{(\underline{z}_{1}^{n},\underline{a}_{1}^{n})(\underline{m}_1)} ) \theta_{(\underline{z}_{1}^{n}, \underline{a}_{1}^{n})(\underline{m}_1)}^{\otimes n} \right)\;\right\} \nonumber \\ &=& E \left\{\frac{2}{\mathcal{M}_{1}} \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \!\sum_{\underline{z}_{1}^n} \sum_{\underline{a}_{1}^n} \tr\left( (I \gamma^{*}_{(\underline{z}_{1}^{n},\underline{a}_{1}^{n})} ) \theta_{(\underline{z}_{1}^{n}, \underline{a}_{1}^{n})}^{\otimes n} \right) \mathbb{1}\{\underline{z}_{1}^{n}(\underline{m}_1)=\underline{z}_{1}^{n}, \underline{a}_{1}^{n}(\underline{m}_{1}) = \underline{a}_{1}^{n}\} \right\} \nonumber \\ &=& \frac{2}{\mathcal{M}_{1}} \frac{1}{|\uline{\mathcal{A}_{1}|}^n} \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \sum_{\underline{z}_{1}^n} \sum_{\underline{a}_{1}^n} p_{\underline{Z}_{1}}^n (\underline{z}_{1}^{n}) \tr\left( (I -\gamma^{*}_{(\underline{z}_{1}^{n},\underline{a}_{1}^{n})} ) \theta_{(\underline{z}_{1}^{n}, \underline{a}_{1}^{n})}^{\otimes n} \right) \nonumber \\ &=& \frac{2}{\mathcal{M}_{1}} \frac{1}{|\uline{\mathcal{A}_{1}|}^n} \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \sum_{\underline{z}_{1}^n} \sum_{\underline{a}_{1}^n} p_{\underline{Z}_{1}}^n (\underline{z}_{1}^{n}) \left[\tr\left(\theta_{(\underline{z}_{1}^{n}, \underline{a}_{1}^{n})}^{\otimes n} \right) - \tr\left( \gamma^{*}_{(\underline{z}_{1}^{n},\underline{a}_{1}^{n})} \theta_{(\underline{z}_{1}^{n}, \underline{a}_{1}^{n})}^{\otimes n} \right)\right] \nonumber \\ &=& \frac{2}{\mathcal{M}_{1}} \frac{1}{|\uline{\mathcal{A}_{1}|}^n} \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \sum_{\underline{z}_{1}^n} \sum_{\underline{a}_{1}^n} p_{\underline{Z}_{1}}^n (\underline{z}_{1}^{n})\left[\tr\left(\theta_{(\underline{z}_{1}^{n}, \underline{a}_{1}^{n})}^{\otimes n} \right) -\tr\left( (I_{\boldsymbol{\mathcal{H}_{Y_{1}}^{e}}}- \beta^{*}_{\underline{z}_1^{n},\underline{a}_1^{n}}) \Pi_{\boldsymbol{\mathcal{H}_{Y_{1G}}} }(I_{\boldsymbol{\mathcal{H}_{Y_1}^{e}}}- \beta^{*}_{\underline{z}_1^{n},\underline{a}_1^{n}}) \theta_{(\underline{z}_{1}^{n}, \underline{a}_{1}^{n})}^{\otimes n} \right)\right] \nonumber \\ \!\!\!\!\!&=&\!\!\!\!\! \frac{2}{\mathcal{M}_{1}} \frac{1}{|\uline{\mathcal{A}_{1}|}^n} \!\!\sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \sum_{\underline{z}_{1}^n} \sum_{\underline{a}_{1}^n} p_{\underline{Z}_{1}}^n (\underline{z}_{1}^{n}) \left[\tr\left(\theta_{(\underline{z}_{1}^{n}, \underline{a}_{1}^{n})}^{\otimes n} \right) \!\! -\!\!\tr\left(\Pi_{\boldsymbol{\mathcal{H}_{Y_{1G}}} }(I_{\boldsymbol{\mathcal{H}_{Y_1}^{e}}} \! \! \! - \beta^{*}_{\underline{z}_1^{n},\underline{a}_1^{n}}) \theta_{(\underline{z}_{1}^{n}, \underline{a}_{1}^{n})}^{\otimes n}(I_{\boldsymbol{\mathcal{H}_{Y_1}^{e}}} \! \! 
\!- \beta^{*}_{\underline{z}_1^{n},\underline{a}_1^{n}}) \Pi_{\boldsymbol{\mathcal{H}_{Y_{1G}}} }\right)\right] \nonumber\\ &\leq& \frac{8}{\mathcal{M}_{1}} \frac{1}{|\uline{\mathcal{A}_{1}|}^n}\sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \sum_{\underline{z}_{1}^n} \sum_{\underline{a}_{1}^n} p_{\underline{Z}_{1}}^n (\underline{z}_{1}^{n}) \bigg[\tr\left( (I -\Pi_{\boldsymbol{\mathcal{H}_{Y_{1G}}} }) \theta_{(\underline{z}_{1}^{n}, \underline{a}_{1}^{n})} \right) + \tr\left( \beta^{*}_{\underline{z}_{1}^{n},\underline{a}_{1}^{n}} \theta_{(\underline{z}_{1}^{n}, \underline{a}_{1}^{n})} \right) \bigg] \label{eq:noncommutativebound} \\ &=& \frac{8}{\mathcal{M}_{1}} \frac{1}{|\uline{\mathcal{A}_{1}|}^n} \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \sum_{\underline{z}_{1}^n} \sum_{\underline{a}_{1}^n} p_{\underline{Z}_{1}}^n (\underline{z}_{1}^{n}) \bigg[\tr\left( (I - \Pi_{\boldsymbol{\mathcal{H}_{Y_{1G}}} }) \theta_{(\underline{z}_{1}^{n}, \underline{a}_{1}^{n})} \right) + \tr\left( \beta^{*}_{\underline{z}_{1}^{n},\underline{a}_{1}^{n}} \theta_{(\underline{z}_{1}^{n}, \underline{a}_{1}^{n})} \right)\bigg] \nonumber \end{align}\tag{18}\] \[\begin{align} &=& \frac{8}{\mathcal{M}_{1}} \frac{1}{|\uline{\mathcal{A}_{1}|}^n} \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \sum_{\underline{z}_{1}^n} \sum_{\underline{a}_{1}^n} p_{\underline{Z}_{1}}^n (\underline{z}_{1}^{n}) \tr\left( (I - \Pi_{\boldsymbol{\mathcal{H}_{Y_{1G}}} } + \beta^{*}_{\underline{z}_{1}^{n},\underline{a}_{1}^{n}}) \theta_{(\underline{z}_{1}^{n}, \underline{a}_{1}^{n})} \right) \nonumber \\ & \leq & \frac{8}{\mathcal{M}_{1}} \frac{1}{|\uline{\mathcal{A}_{1}|}^n} \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \sum_{\underline{z}_{1}^n} \sum_{\underline{a}_{1}^n} p_{\underline{Z}_{1}}^n (\underline{z}_{1}^{n}) \tr\left( (I -\Pi_{\boldsymbol{\mathcal{H}_{Y_{1G}}} } + \beta^{*}_{\underline{z}_{1}^{n},\underline{a}_{1}^{n}}) \xi_{\underline{z}_{j}^{n}}^{\otimes n} \otimes \ket{0}\bra{0} \right) \nonumber \\ &+& \frac{8}{\mathcal{M}_{1}} \frac{1}{|\uline{\mathcal{A}_{1}|}^n} \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \sum_{\underline{z}_{1}^n} \sum_{\underline{a}_{1}^n} p_{\underline{Z}_{1}}^n (\underline{z}_{1}^{n}) \norm{\theta_{\underline{z}_{j}^{n}, \underline{a}_{j}^{n}}^{\otimes n} - \xi_{\underline{z}_{j}^{n}}^{\otimes n} \otimes \ket{0}\bra{0} }_{1}. \label{eq:line5} \end{align}\tag{19}\] Where inequality 18 follows from the non-commutative bound [4]. In the following, we analyze each term on the RHS of 19 separately. The following steps essentially mimic those employed by Sen in establishing [4] - an upper bound on the first term therein. Specifically, by [Corollary 1][4] and Gelfand-Naimark thm. 
[18], we have \[\begin{align} &&\frac{8}{\mathcal{M}_{1}} \frac{1}{|\uline{\mathcal{A}_{1}|}^n} \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \sum_{\underline{z}_{1}^n} \sum_{\underline{a}_{1}^n} p_{\underline{Z}_{1}}^n (\underline{z}_{1}^{n}) \tr\left( (I -\Pi_{\boldsymbol{\mathcal{H}_{Y_{1G}}} } + \beta^{*}_{\underline{z}_{1}^{n},\underline{a}_{1}^{n}}) \xi_{\underline{z}_{j}^{n}}^{\otimes n} \otimes \ket{0}\bra{0} \right) \nonumber \\ &=& \frac{8}{\mathcal{M}_{1}} \frac{1}{|\uline{\mathcal{A}_{1}|}^n} \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \sum_{\underline{z}_{1}^n} \sum_{\underline{a}_{1}^n} p_{\underline{Z}_{1}}^n (\underline{z}_{1}^{n}) \left[ \tr\left( (I - \Pi_{\boldsymbol{\mathcal{H}_{Y_{1G}}} }) \xi_{\underline{z}_{1}^{n}}^{\otimes n} \otimes \ket{0}\bra{0} \right) + \tr\left( \beta^{*}_{\underline{z}_{1}^{n},\underline{a}_{1}^{n}} \xi_{\underline{z}_{1}^{n}}^{\otimes n} \otimes \ket{0}\bra{0} \right) \right]\nonumber \\ &=&\frac{8}{\mathcal{M}_{1}} \frac{1}{|\uline{\mathcal{A}_{1}|}^n} \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \sum_{\underline{z}_{1}^n} \sum_{\underline{a}_{1}^n} p_{\underline{Z}_{1}}^n (\underline{z}_{1}^{n}) \tr\left( \beta^{*}_{\underline{z}_{1}^{n},\underline{a}_{1}^{n}} \xi_{\underline{z}_{1}^{n}}^{\otimes n} \otimes \ket{0}\bra{0} \right) \nonumber \\&\leq& \frac{336}{\mathcal{M}_{1}} \frac{1}{\eta^n} \frac{1}{|\uline{\mathcal{A}_{1}|}^n} \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \sum_{\underline{z}_{1}^n} \sum_{\underline{a}_{1}^n} p_{\underline{Z}_{1}}^n (\underline{z}_{1}^{n}) \sum_{S \subseteq [4]} \tr\left( \overline{B}_{\underline{z}_{1}^{n}}^{S} \xi_{\underline{z}_{1}^{n}}^{\otimes n} \otimes \ket{0}\bra{0} \right) \nonumber \\ &=& \frac{336}{\mathcal{M}_{1}} \frac{1}{\eta^n} \frac{1}{|\uline{\mathcal{A}_{1}|}^n} \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \sum_{\underline{z}_{1}^n} \sum_{\underline{a}_{1}^n} \sum_{S \subseteq [4]} p_{\underline{Z}_{1}}^n (\underline{z}_{1}^{n}) \left[1-\tr\left( \overline{G}_{\underline{z}_{1}^{n}}^{S} \xi_{\underline{z}_{1}^{n}}^{\otimes n} \otimes \ket{0}\bra{0} \right)\right]\nonumber \\ &=&\frac{336}{\mathcal{M}_{1}} \frac{1}{\eta^n} \frac{1}{|\uline{\mathcal{A}_{1}|}^n} \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \sum_{\underline{z}_{1}^{n} } \sum_{\underline{a}_{1}^n} \sum_{S \subseteq [4]} p_{\underline{Z}_{1}}^n (\underline{z}_{1}^{n}) \left[1-\tr\left( G_{\underline{z}_{1}^{n}}^{S} \xi_{\underline{z}_{1}^{n}}^{\otimes n} \right)\right]\nonumber \\ &=&\frac{336}{\mathcal{M}_{1}} \frac{1}{\eta^n} \frac{1}{|\uline{\mathcal{A}_{1}|}^n} \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \sum_{\underline{z}_{1}^{n} \in T^{n}_{\delta}(p_{\underline{Z}_{1}} )} \sum_{\underline{a}_{1}^n} \sum_{S \subseteq [4]} p_{\underline{Z}_{1}}^n (\underline{z}_{1}^{n}) \left[1-\tr\left( G_{\underline{z}_{1}^{n}}^{S} \xi_{\underline{z}_{1}^{n}}^{\otimes n} \right)\right]\nonumber \\ &+& \frac{336}{\mathcal{M}_{1}} \frac{1}{\eta^n} \frac{1}{|\uline{\mathcal{A}_{1}|}^n} \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \sum_{\underline{z}_{1}^{n} \notin T^{n}_{\delta}(p_{\underline{Z}_{1}} )} \sum_{\underline{a}_{1}^n} \sum_{S \subseteq [4]} p_{\underline{Z}_{1}}^n (\underline{z}_{1}^{n}) \left[1-\tr\left( G_{\underline{z}_{1}^{n}}^{S} \xi_{\underline{z}_{1}^{n}}^{\otimes n} \right)\right]\nonumber \\ &=& \frac{336}{\mathcal{M}_{1}} \frac{1}{\eta^n} \frac{1}{|\uline{\mathcal{A}_{1}|}^n} \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \sum_{\underline{z}_{1}^{n} \in T^{n}_{\delta}(p_{\underline{Z}_{1}} )} \sum_{\underline{a}_{1}^n} 
\sum_{S \subseteq [4]} p_{\underline{Z}_{1}}^n (\underline{z}_{1}^{n}) \left[1-\tr\left( G_{\underline{z}_{1}^{n}}^{S} \xi_{\underline{z}_{1}^{n}}^{\otimes n} \right)\right]\nonumber \\ &+& \frac{336}{\mathcal{M}_{1}} \frac{1}{\eta^n} \frac{1}{|\uline{\mathcal{A}_{1}|}^n} \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \sum_{\underline{z}_{1}^{n} \notin T^{n}_{\delta}(p_{\underline{Z}_{1}} )} \sum_{\underline{a}_{1}^n} \sum_{S \subseteq [4]} p_{\underline{Z}_{1}}^n (\underline{z}_{1}^{n}) \nonumber \\ & \leq & \frac{5040}{\eta^n}(1- (1-\epsilon)) + \frac{5040}{\eta^n}\epsilon \nonumber \\ &=& \frac{10080}{\eta^n} \epsilon, \nonumber \end{align}\] \[\begin{align} &&\frac{8}{\mathcal{M}_{1}} \frac{1}{|\uline{\mathcal{A}_{1}|}^n} \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \sum_{\underline{z}_{1}^n} \sum_{\underline{a}_{1}^n} p_{\underline{Z}_{1}}^n (\underline{z}_{1}^{n}) \norm{\theta_{\underline{z}_{1}^{n}, \underline{a}_{1}^{n}}^{\otimes n} - \xi_{\underline{z}_{1}^{n}}^{\otimes n} \otimes \ket{0}\bra{0} }_{1} \nonumber \\ &=& \frac{8}{\mathcal{M}_{1}} \frac{1}{|\uline{\mathcal{A}_{1}|}^n} \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \sum_{\underline{z}_{1}^n \in T^{n}_{\delta}(p_{\underline{Z}_{1}} )} \sum_{\underline{a}_{1}^n} p_{\underline{Z}_{1}}^n (\underline{z}_{1}^{n}) \norm{\theta_{\underline{z}_{1}^{n}, \underline{a}_{1}^{n}}^{\otimes n} - \xi_{\underline{z}_{1}^{n}}^{\otimes n} \otimes \ket{0}\bra{0} }_{1} \nonumber \\ &+& \frac{8}{\mathcal{M}_{1}} \frac{1}{|\uline{\mathcal{A}_{1}|}^n} \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \sum_{\underline{z}_{1}^n \notin T^{n}_{\delta}(p_{\underline{Z}_{1}} )} \sum_{\underline{a}_{1}^n} p_{\underline{Z}_{1}}^n (\underline{z}_{1}^{n}) \norm{\theta_{\underline{z}_{1}^{n}, \underline{a}_{1}^{n}}^{\otimes n} - \xi_{\underline{z}_{1}^{n}}^{\otimes n} \otimes \ket{0}\bra{0} }_{1} \nonumber \\ &\leq& 720 \eta^n + 8 \epsilon. \nonumber \end{align}\] Hence, \[\begin{align} \frac{2}{\mathcal{M}_{1}} \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \tr\left( (I -\gamma^{*}_{(\underline{z}_{1}^{n},\underline{a}_{1}^{n})(\underline{m}_1)} ) \theta_{(\underline{z}_{1}^{n}, \underline{a}_{1}^{n})(\underline{m}_1)}^{\otimes n} \right) \leq \frac{10080 \epsilon}{ \eta^n} + 720 \eta^n + 8 \epsilon. \nonumber \end{align}\] For \(\eta^n = \sqrt{\epsilon}\),\(\frac{2}{\mathcal{M}_{1}} \sum_{\underline{m}_{1} \in \mathcal{M}_{1}} \tr\left( (I -\gamma^{*}_{(\underline{z}_{1}^{n},\underline{a}_{1}^{n})(\underline{m}_1)} ) \theta_{(\underline{z}_{1}^{n}, \underline{a}_{1}^{n})(\underline{m}_1)}^{\otimes n} \right) \leq 10808 \sqrt{\epsilon}.\)
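The constants in the last step are simple bookkeeping: with the choice \(\eta^{n}=\sqrt{\epsilon}\) and \(\epsilon \in (0,1]\), we have \(\frac{10080\,\epsilon}{\eta^{n}} + 720\,\eta^{n} + 8\epsilon = 10800\sqrt{\epsilon}+8\epsilon \leq 10808\sqrt{\epsilon}\), since \(\epsilon \leq \sqrt{\epsilon}\) on \((0,1]\). A quick numerical confirmation of this arithmetic (illustrative only):

```python
import numpy as np

eps = np.linspace(1e-6, 1.0, 10_000)
eta_n = np.sqrt(eps)                     # the choice eta^n = sqrt(eps) made above

total = 10080 * eps / eta_n + 720 * eta_n + 8 * eps
# 10080*sqrt(eps) + 720*sqrt(eps) + 8*eps <= 10808*sqrt(eps) since eps <= sqrt(eps) on (0, 1].
assert np.all(total <= 10808 * np.sqrt(eps) + 1e-9)
print("10808*sqrt(eps) dominates on the grid")
```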
\[\begin{align} && I \otimes I \otimes I - \mu_{\underline{m}_1} \otimes \mu_{\underline{m}_2} \otimes \mu_{\underline{m}_3} = (I-\mu_{\underline{m}_1}) \otimes I \otimes I + \mu_{\underline{m}_1} \otimes \left( I \otimes I - \mu_{\underline{m}_2} \otimes \mu_{\underline{m}_3} \right) \nonumber \\ &=& (I-\mu_{\underline{m}_1}) \otimes I \otimes I + \mu_{\underline{m}_1} \otimes (I- \mu_{\underline{m}_2}) \otimes I + \mu_{\underline{m}_1} \otimes \mu_{\underline{m}_2} \otimes (I- \mu_{\underline{m}_3}) \nonumber \\ &\leq & (I-\mu_{\underline{m}_1}) \otimes I \otimes I + I \otimes (I- \mu_{\underline{m}_2}) \otimes I + \mu_{\underline{m}_1} \otimes \mu_{\underline{m}_2} \otimes (I- \mu_{\underline{m}_3}) \tag{20} \\ & \leq & (I-\mu_{\underline{m}_1}) \otimes I \otimes I + I \otimes (I - \mu_{\underline{m}_2} ) \otimes I + I \otimes I \otimes (I - \mu_{\underline{m}_3} ), \tag{21} \end{align}\] here, inequality (20) follows since \(\mu_{\underline{m}_1} \leq I\) and \((I- \mu_{\underline{m}_2}) \otimes I \geq 0\), and inequality (21) follows since \(\mu_{\underline{m}_1} \otimes \mu_{\underline{m}_2} \leq I \otimes I\) and \(I - \mu_{\underline{m}_3} \geq 0\).
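The operator inequality above can be sanity-checked numerically on randomly generated POVM-type elements; the dimensions and the sampling below are arbitrary toy choices made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def random_effect(d):
    """A random operator mu with 0 <= mu <= I on C^d (a POVM-type element)."""
    A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    H = A @ A.conj().T
    return H / (np.linalg.eigvalsh(H)[-1] + 1e-9)    # rescale so the top eigenvalue is <= 1

d1, d2, d3 = 2, 3, 2                                  # toy dimensions
mu1, mu2, mu3 = random_effect(d1), random_effect(d2), random_effect(d3)
I1, I2, I3 = np.eye(d1), np.eye(d2), np.eye(d3)
kron3 = lambda a, b, c: np.kron(np.kron(a, b), c)

lhs = kron3(I1, I2, I3) - kron3(mu1, mu2, mu3)
rhs = kron3(I1 - mu1, I2, I3) + kron3(I1, I2 - mu2, I3) + kron3(I1, I2, I3 - mu3)
# rhs - lhs should be positive semidefinite (up to numerical round-off).
print(np.linalg.eigvalsh(rhs - lhs).min() >= -1e-9)   # True
```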
As is standard, here \(r_{G_{S}}\) is the marginal of \(r_{\underline{G}}\) over the RVs indexed in \(S\).↩︎
The curious reader may look at the two bounds in [7] with the \(\max_{\theta \neq 0}\) which in fact yields [7].↩︎
\(\mathcal{A}_{i}\) denotes both the finite set and the corresponding auxiliary Hilbert space. The specific reference will be clear from context.↩︎
When we say ‘average of \(\cdots\)’ we mean the expectation of the tilted state over the random codebook.↩︎