1-Bit Unlimited Sampling Beyond Fourier Domain:
Low-Resolution Sampling of Quantization Noise
August 26, 2025
Analog-to-digital converters (ADCs) play a critical role in digital signal acquisition across various applications, but their performance is inherently constrained by sampling rates and bit budgets. This bit budget imposes a trade-off between dynamic range (DR) and digital resolution, with ADC energy consumption scaling linearly with sampling rate and exponentially with bit depth. To bypass this trade-off, numerous approaches, including oversampling with low-resolution ADCs, have been explored. A prominent example is the 1-Bit ADC with Sigma-Delta Quantization (SDQ), a widely used consumer-grade solution. However, SDQs suffer from overloading or saturation issues, limiting their ability to handle inputs with arbitrary DR. The Unlimited Sensing Framework (USF) addresses this challenge by injecting modulo non-linearity in hardware, resulting in a new digital sensing technology. In this paper, we introduce a novel 1-Bit sampling architecture that extends both conventional 1-Bit SDQ and USF. Our contributions are twofold: (1) We generalize the concept of noise shaping beyond the Fourier domain, allowing the inclusion of signals that are non-bandlimited in the Fourier domain but bandlimited in alternative transform domains. (2) Building on this generalization, we develop a new transform-domain recovery method for 1-Bit USF. When applied to the Fourier domain, our method demonstrates superior performance compared to existing time-domain techniques, offering reduced oversampling requirements and improved robustness. Extensive numerical experiments validate our findings, laying the groundwork for a broader generalization of 1-Bit sampling systems.
Accepted to IEEE Journal of Selected Topics in Signal Processing.
Special Issue on Low-Bit-Resolution Signal Processing: Algorithms, Implementations, and Applications
Linear Canonical Transform, Noise Shaping, Quantization Noise, Sigma-Delta, Unlimited Sensing.
Digital acquisition of signals is based on quantization in both time and amplitude dimensions. The well-known Shannon-Nyquist sampling theorem states that, for band-limited signals, time quantization is lossless if the sampling rate exceeds a specific threshold, the Nyquist rate. This theorem serves as a bridge between the continuous and discrete realms. However, to fully digitize a signal, amplitude quantization is also necessary. Unlike time quantization, amplitude quantization introduces a permanent loss of information.
In practice, digitization is performed by analog-to-digital converters (ADCs), which carry out both time and amplitude quantization. In this process, “bits” act as the digital currency, defining the precision of signals and controlling the extent of information loss during the amplitude quantization. Since ADCs are characterized by their sampling rates and bit budgets, their operation naturally involves fundamental trade-offs.
From a practical perspective, ADCs consume energy that scales linearly with the sampling rate but exponentially [1] with the number of bits. Therefore, it is far more energy efficient to oversample than to increase the bit depth. This insight has driven the development of techniques that leverage oversampling to improve digital resolution within a fixed bit budget. Notable examples include oversampled ADCs [2], [3] and dithering [4], which redistribute quantization noise to enhance signal fidelity. A notable technological achievement in this context is one-bit or 1-Bit sampling, which comes in 3 main flavors:
Sigma-Delta (\(\Sigma\Delta\)) Quantization or \(\Sigma\Delta\mathrm{Q}\)[5]–[7]
Time-Encoded Sampling (or Asynchronous \(\Sigma\Delta\mathrm{Q}\)) [8]
Sign-Based 1-Bit Sampling [9], [10]
In all these variants, the core idea is to utilize simple, low-complexity hardware combined with high oversampling. This approach achieves effective resolution, even when the ADC resolution is reduced to a single bit.
Given a fixed bit budget, it is impossible to simultaneously cover a signal’s full dynamic range (DR) and achieve arbitrary digital resolution (DRes). The DR sets a fundamental limit on the voltage quantum, restricting the granularity of amplitude quantization, see Fig. 1 (a). Conversely, increasing the DRes is possible, but only at the expense of clipping the signal’s range, see Fig. 1 (b). Both scenarios are suboptimal. In the first case, signals weaker than the ADC’s DRes are lost in the quantizer’s null space. In the second case, signal saturation occurs due to range clipping, resulting in loss of information, particularly in pulse-like features. Even when bits are allocated to span the full DR, it has been widely reported [11]–[14] that ADCs remain vulnerable to saturation when high dynamic range (HDR) inputs exceed the ADC’s designated DR.
In recent years, the Unlimited Sensing Framework (USF) [15]–[18] has emerged as an alternative digital acquisition pipeline capable of overcoming the bottlenecks of conventional ADCs. The USF is built upon a simple yet powerful mathematical insight: For smooth signals, their fractional part encodes their integer part. This insight leads to a fundamentally new perspective on sampling: the quantized values of a signal (integer part) can be decoded from its quantization noise (fractional part) [19].
Note that the quantization noise (QN)—equivalently, the fractional part or modulo representation—of a bandlimited signal manifests as a non-bandlimited signal (see Fig. 1 (a)). However, surprisingly, the sampling theorem at the core of the USF proves that constant-factor oversampling is sufficient to recover the bandlimited input from the QN or modulo samples. This ensures that time quantization in the USF is lossless, akin to the Shannon-Nyquist theorem. Furthermore, with side information, recovery at the Nyquist rate is also possible [20].
The subtlety of the USF lies in the fact that the QN must be acquired in the analog domain, rather than in the digital domain, where it is traditionally interpreted. To highlight this distinction, we refer to the analog-domain QN as the modulo representation, emphasizing the critical role of non-linearity applied prior to sampling and quantization. Modulo folding is achieved through an innovative hardware implementation, resulting in the \({{\mathscr M}}_{\lambda}\)-ADC (modulo ADC), where \(\lambda>0\) is the ADC’s DR. A variety of hardware validations are provided in [17], [18], [21]–[26]. The presence of modulo non-linearity prior to the ADC ensures that HDR signals are folded or aliased back into the ADC’s DR, preventing saturation. As illustrated in Fig. 1 (a), for a given bit budget, the QN or modulo signal provides significantly higher DRes [17], [27].
Similar to conventional ADCs [3], 1-Bit variants [5]–[10] impose a DR limit on the input signal; exceeding this limit leads to encoding errors and permanent information loss. This defect is overcome by leveraging the USF, as it folds the signal prior to entering the 1-Bit ADC. While still in its early stages, the USF has been adapted to 1-Bit sampling schemes, which come in three flavors (see Fig. 2).
\(\Sigma\Delta\mathrm{Q}\): Graf et al. [28] propose a \(\Sigma\Delta\mathrm{Q}\) scheme to encode 1-Bit modulo samples of a bandlimited signal. A time-domain thresholding algorithm is then devised for residue estimation, which leads to recovery. The authors considered only the noise-free modulo signal, and a study of robustness in the presence of noise is missing.
Time-Encoding: Leveraging modulo hysteresis architecture [21], Florescu & Bhandari [22] develop recovery methods for 1-Bit time-encoded [8] modulo measurements, validated through simulations and hardware implementation. Extensions include system identification [29].
Sign-Based 1-Bit Sampling: Recently, Emaz et al. proposed UNO (Unlimited One-Bit Sampling) [30], [31], where the modulo-folded signal is encoded with a sign-based quantizer, resulting in a 1-Bit scheme. Emaz et al. consider a dithered threshold scheme combined with a randomized Kaczmarz algorithm for signal recovery. The authors also consider sparse signals in later work [32].
The emergence of both 1-Bit sampling [3] and the USF, along with their combination [22], [28], [30], has led to significant advancements. However, two key challenges still hinder practical applications.
Firstly, while conventional 1-Bit \(\Sigma\Delta\mathrm{Q}\) [3] leverages noise shaping in the Fourier domain [7] and has enabled successful implementations [3], there remain many signal classes that are non-bandlimited in the Fourier domain. Importantly, such signals can be bandlimited in other transform domains, such as the Fresnel transform (used in holography [33]) and the fractional Fourier transform (advantageous for MIMO systems [34], [35]). Generalizing the concept of bandwidth and consequently developing a noise-shaping approach beyond the Fourier domain would expand the applicability of \(\Sigma\Delta\mathrm{Q}\) to these signal classes, thereby motivating the need for a generalized \(\Sigma\Delta\mathrm{Q}\) framework. An initial attempt in this direction was made in [36]. Secondly, while the combination of 1-Bit sampling [22], [28], [30] and the USF has been instrumental in addressing the dynamic range (DR) problem, current approaches operate exclusively in the time domain. As a result, they cannot handle signals specified by their Fourier features, such as band-pass and multi-band signals. To overcome this limitation, a transform-domain understanding of noise shaping is essential. However, this theoretical investigation is challenging due to the presence of modulo non-linearity. These aspects constitute the central motivation of this paper.
This paper introduces a generalized noise-shaping framework that advances in two key directions: extending bandwidth and overcoming dynamic range limitations present in conventional approaches. Our main contributions are as follows:
We formalize the problem of 1-Bit sampling in the Linear Canonical Transform (LCT) domain, which parametrically generalizes the concept of bandwidth to transforms, including Fresnel, Laplace, Gauss-Weierstrass, and Bargmann transforms (see Table 1). Our theory is fully compatible with the Fourier domain.
We propose 1st and 2nd-order \(\Sigma\Delta\mathrm{Q}\) architectures for the LCT domain, denoted as \(\Lambda\Sigma\Delta\mathrm{Q}\). We analyze the boundedness of quantization noise and demonstrate that the recovery error from 1-Bit sampling remains bounded.
Addressing the saturation problem, we introduce \(\mathscr{M}\) \(\Lambda\Sigma\Delta\mathrm{Q}\), or USF-based \(\Lambda\Sigma\Delta\mathrm{Q}\). We leverage the insights from noise shaping presented in Fig. 6 (d) which shows that the bandlimited input, modulo folds and quantization noise are segregated in transform domain. This leads to a novel recovery method and introduces the first class of algorithms designed for transform-domain recovery.
We show that the reduction of \(\mathscr{M}\) \(\Lambda\Sigma\Delta\mathrm{Q}\) to the Fourier domain case achieves superior performance compared to existing time-domain techniques [28], offering reduced oversampling requirements and enhanced robustness.
The set of integers, positive integers, real and complex numbers is denoted as \({\mathbb{Z}}\), \({{\mathbb{Z}}^+}\), \({\mathbb{R}}\) and \({\mathbb{C}}\). The real part of a complex number \(x\) is denoted as \(\Re\left (x \right )\) and the imaginary part of \(x\) is denoted as \(\Im\left (x \right )\). Difference filter is defined as \(v\left [ n \right ]= \delta\left [ n \right ]- \delta\left [ n - 1 \right ]\), and applying this filter to a sequence \(a\left [ n \right ]\) is denoted as \(\underline{a}\left [ n \right ]= \left (a \ast v \right )\left [ n \right ]\), \(\Delta\) denotes difference operator \((\Delta a)\left [ n \right ]= \underline{a}\left [ n \right ]\), \(\mathsf{csgn}\left (a\left [ n \right ] \right ) = \text{sgn}\left (\Re\left (a\left [ n \right ] \right ) \right ) + \jmath \text{sgn}\left (\Im\left (a\left [ n \right ] \right ) \right )\), \(\left\lVert g \left [ n \right ]\right\rVert_\infty\) denotes absolute max-norm of \(g\left [ n \right ]\), \(\lfloor \cdot \rfloor\) denotes the floor operator, and \(\left\langle {f,g} \right\rangle = \int {f{g^*}}\) is the \(L_2\) inner-product. The mean-square error (MSE) between \({{\mathbf{x}}},{{\mathbf{y}}}\in{\mathbb{C}}^{N}\) is denoted as \(\mathbf{\epsilon}\left ({{\mathbf{x}}}, {{\mathbf{y}}} \right ) \stackrel{\rm{def}}{=}\frac{1}{N}\sum\nolimits_{n=0}^{N-1} \left|x\left [ n \right ] -y\left [ n \right ] \right|^{2}\). \(\mathsf{Unif}(a, b)\) is a uniform distribution from \(a\) to \(b\). Convolution is defined as \(\ast\). \(\mathbb{1}_{\mathcal{D}}(\cdot)\) is the indicator function on domain \({\mathcal{D}}\), \(\left (\cdot \right )^H\) denotes the conjugate transpose.
Linear Canonical Transform (LCT) was introduced in [37] in the context of harmonic oscillators in quantum mechanics. The LCT generalizes a wide class of transformations, which are summarized in Table 1. Subsequent research on the LCT includes optimal filtering [38], nonuniform sampling [39], multi-channel consistent sampling [40], and sampling of signals bandlimited to a disc [41]. The LCT has also been used for modelling optical cavities [42], implementing an optical crypto-system [43], and the optical implementation of 2D LCT was considered in [44]. The LCT, also referred to as the Affine Fourier Transform (AFT), has shown strong potential in handling doubly-dispersive channels in high-mobility communication environments [35], [45], [46]. In what follows, we will revisit some of the definitions associated with the LCT.
Definition 1 (Linear Canonical Transform (LCT)). Let \(\mathbf{\Lambda} = \bigl[ \begin{smallmatrix} a & b\\ c & d\\ \end{smallmatrix} \bigr]\) with \(ad - bc = 1\). The LCT of a function \(g\left (t \right ), t \in {\mathbb{R}}\), is a unitary, integral mapping \({\mathcal{L}_{\mathbf{\Lambda}}}: g \to \widehat{g}_{\mathbf{\Lambda}}\) defined by \[{\mathcal{L}_{\mathbf{\Lambda}}}\left [ g \right ]\left (\omega \right )= \widehat{g}_{\mathbf{\Lambda}}\left (\omega \right )= \begin{cases} \left \langle g, {\kappa_\mathbf{\Lambda}}\left (\cdot, \omega \right ) \right \rangle & b \neq 0\\ \sqrt{d}\, e^{\jmath\frac{cd}{2}\omega^2} g\left (d\omega \right ) & b = 0 \end{cases} .\] The LCT kernel \({\kappa_\mathbf{\Lambda}}\left (\mathbf{r} \right )\), which depends on the time-frequency coordinate \(\mathbf{r} = \left [ t \;\;\omega \right ]^\top\), is defined as \[{\kappa_\mathbf{\Lambda}}\left (\mathbf{r} \right ) = \frac{\exp\left (-\jmath \mathbf{r}^\top \mathbf{U} \mathbf{r} \right )}{\sqrt{-\jmath 2 \pi b}} , \;\;\mathbf{U} = \frac{1}{2b} \begin{bmatrix} a & -1\\ -1 & d \end{bmatrix}.\] The inverse-LCT is another LCT defined via \({\mathcal{L}_{{\mathbf{\Lambda}^{-1}}}}: \widehat{g}_{\mathbf{\Lambda}} \to g\), \[g\left (t \right )= {\mathcal{L}_{{\mathbf{\Lambda}^{-1}}}}\left [ \widehat{g}_{\mathbf{\Lambda}} \right ]\left (t \right )= \begin{cases} \left \langle \widehat{g}_{\mathbf{\Lambda}}, {\kappa_{\mathbf{\Lambda}^{-1}}}\left (\cdot, t \right ) \right \rangle & b \neq 0\\ \sqrt{a}\, e^{-\jmath \frac{ca}{2} t ^ 2} \widehat{g}_{\mathbf{\Lambda}}\left (at \right ) & b = 0 \end{cases}.\]
When working with sampled sequences, the discrete-time version of the LCT is a relevant mathematical tool.
Definition 2 (Discrete-Time LCT(DT-LCT)). Let \(g\left (t \right )\) be a continuous-time function with samples \(g\left [ n \right ]= g\left (nT \right ), T > 0\). The DT-LCT of \(g[n]\), denoted by \(\widehat{G}_{\mathbf{\Lambda}}\left (\omega \right )\), is defined as \[\begin{align} \widehat{G}_{\mathbf{\Lambda}}\left (\omega \right )&= T \left \langle g, {\kappa_\mathbf{\Lambda}}\left (\cdot, \omega \right ) \right \rangle\\ &= \frac{T}{\sqrt{\jmath 2\pi b}} \sum_{n = -\infty}^{+\infty} g\left [ n \right ]e^{\frac{\jmath}{2b} \left (a \left (n T \right )^2 - 2 n T \omega + d \omega^2 \right )}. \end{align}\] The Inverse Discrete-Time LCT(IDT-LCT) is then defined as \[\begin{align} g\left [ n \right ]&= \left \langle \widehat{G}_{\mathbf{\Lambda}}, {\kappa_{\mathbf{\Lambda}^{-1}}}\left (\cdot, nT \right ) \right \rangle_{\left [ -\pi b/T,\pi b/T \right ]}\\ &= \frac{1}{\sqrt{-\jmath 2\pi b}} \int_{-\pi b / T}^{\pi b / T} \widehat{G}_{\mathbf{\Lambda}}\left (\omega \right )e^{-\frac{\jmath}{2b} \left (a\left (n T\right )^2 - 2 n T \omega + d \omega^2 \right )} d\omega. \end{align}\]
When working with compactly supported functions, Linear Canonical Series (LCS) parallels the Fourier Series.
Definition 3 (Linear Canonical Series (LCS)). Let \(g\left (t \right )\) be a continuous-time function on the interval \(t \in [0, \tau)\). Its LCS is \[g\left (t \right )= \kappa_{b,\tau} \sum\nolimits_{m\in{\mathbb{Z}}} \widehat{g}_{\mathbf{\Lambda}}\left [ m \right ] {\kappa_\mathbf{\Lambda}}\left (t, m \omega_0 b \right ), \omega_0 = \frac{2 \pi}{\tau}\] where \(\kappa_{b,\tau} = \sqrt{\omega_0 b}\). The LCS coefficients are given by \(\widehat{g}_{\mathbf{\Lambda}}\left [ m \right ] = \kappa_{b,\tau}{\mathcal{L}_{\mathbf{\Lambda}}}\left [ g \right ]\left (m \omega_0 b \right ) \equiv \kappa_{b,\tau}\left \langle g, {\kappa_\mathbf{\Lambda}}\left (\cdot, m \omega_0 b \right ) \right \rangle\).
Similar to the Discrete Fourier Transform (DFT), we define the Discrete LCT(DLCT) as follows.
Definition 4 (Discrete LCT(DLCT)). Let \(g\left (t \right )\) be defined on the interval \([0, \tau)\) with samples \(g\left [ n \right ]= g\left (nT \right ), T > 0, n \in [0, N - 1]\) and \(N = \left \lfloor \frac{\tau}{T} \right \rfloor\), its DLCT is defined as \[\begin{align} \widehat{g}_{\mathbf{\Lambda}}\left [ m \right ]= T\kappa_{b,\tau}\sum\nolimits_{n=0}^{N-1} g\left [ n \right ]{\kappa_{\mathbf{\Lambda}^{-1}}}\left (m \omega_0 b, nT \right ) \label{eq:dlct-def} \end{align}\qquad{(1)}\] where \(\omega_0 = \frac{2\pi}{\tau}\). The Inverse DLCT(IDLCT) is defined as \[g\left [ n \right ]= \kappa_{b,\tau}\sum\nolimits_{\left | m \right | \leqslant M} \widehat{g}_{\mathbf{\Lambda}}\left [ m \right ]{\kappa_\mathbf{\Lambda}}\left (nT, m \omega_0 b \right ).\]
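For concreteness, the DLCT pair in Definition 4 can be evaluated directly; the following is a minimal NumPy sketch (the function names and the convention of passing \(\mathbf{\Lambda}\) as a tuple \((a,b,c,d)\) with \(b\neq 0\) are our own illustrative choices).

```python
import numpy as np

def dlct(g, T, tau, Lam, M):
    """Minimal sketch of the DLCT in (1): g[n] = g(nT), n = 0..N-1, on [0, tau).
    Lam = (a, b, c, d) with ad - bc = 1 and b != 0; returns coefficients m = -M..M."""
    a, b, _, d = Lam                                   # c is not needed when b != 0
    w0 = 2.0 * np.pi / tau
    kappa = np.sqrt(w0 * b)                            # kappa_{b, tau}
    t = np.arange(len(g)) * T                          # sample grid
    w = np.arange(-M, M + 1) * w0 * b                  # LCT-domain grid m*w0*b
    # kernel kappa_{Lambda^{-1}}(m*w0*b, nT), written out from Definition 1
    phase = (a * t[None, :]**2 - 2.0 * t[None, :] * w[:, None] + d * w[:, None]**2) / (2.0 * b)
    kern = np.exp(1j * phase) / np.sqrt(1j * 2.0 * np.pi * b)
    return T * kappa * (kern @ g)

def idlct(G, T, tau, Lam, N):
    """Inverse DLCT: resynthesise g[n], n = 0..N-1, from the output of dlct()."""
    a, b, _, d = Lam
    w0 = 2.0 * np.pi / tau
    kappa = np.sqrt(w0 * b)
    M = (len(G) - 1) // 2
    t = np.arange(N) * T
    w = np.arange(-M, M + 1) * w0 * b
    phase = (a * t[:, None]**2 - 2.0 * t[:, None] * w[None, :] + d * w[None, :]**2) / (2.0 * b)
    kern = np.exp(-1j * phase) / np.sqrt(-1j * 2.0 * np.pi * b)
    return kappa * (kern @ G)
```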
Since basis functions for LCT are chirps and not complex exponentials, it is helpful to define chirp (de)modulation.
Definition 5 (Chirp (De)modulation). Let \(\mathbf{\Lambda} = \bigl[ \begin{smallmatrix} a & b \\ c & d \end{smallmatrix} \bigr]\) be a \(2 \times 2\) matrix. The chirp modulation function is defined as \[m_{\mathbf{\Lambda}}\left (t \right )\stackrel{\rm{def}}{=}\exp \left (\jmath \frac{a}{2b} t ^ 2 \right ). \label{eq:chirp-def}\qquad{(2)}\] The \({\mathbf{\Lambda}}\)-parametrized up and down chirp modulations are denoted by \[{\overset{\lower 0.5em\smash{\scriptstyle \hpup}} {{g}}}\left (t \right )\stackrel{\rm{def}}{=}m_{\mathbf{\Lambda}}\left (t \right )g \left (t \right )\quad\text{and}\quad {\overset{\lower 0.5em\smash{\scriptstyle \hpdn}} {{g}}}\left (t \right )\stackrel{\rm{def}}{=}m_{\mathbf{\Lambda}}^*\left (t \right )g\left (t \right ).\]
Standard convolution \(\ast\) does not yield multiplication in the LCT domain. Hence, we define the LCT convolution \(\ast_{\mathbf{\Lambda}}\).
Definition 6 (LCT Convolution and Product Theorem [47]). Let \(\{g,f\}\) be continuous-time functions with samples \(\{g\left [ n \right ],f\left [ n \right ]\}\), respectively. The LCT convolution is defined as \[h\left [ n \right ]= \left (f \ast_{\mathbf{\Lambda}}g \right )\left [ n \right ]\stackrel{\rm{def}}{=}K_{\mathbf{\Lambda}}m_{\mathbf{\Lambda}}^*\left [ n \right ]\left ( {\overset{\lower 0.5em\smash{\scriptstyle \hpup}} {{f}}} \ast {\overset{\lower 0.5em\smash{\scriptstyle \hpup}} {{g}}} \right )\left [ n \right ] \label{eq:lct-conv-time}\qquad{(3)}\] where \(K_{\mathbf{\Lambda}}= \frac{1}{\sqrt{\jmath 2 \pi b}}\). The DT-LCT of \(h\left [ n \right ]\) is \[h\left [ n \right ]\xrightarrow{\mathsf{LCT}} {\widehat{H}_{\mathbf{\Lambda}}\left (\omega \right )= \frac{\Phi_\mathbf{\Lambda}\left (\omega \right )}{T} \widehat{F}_{\mathbf{\Lambda}}\left (\omega \right )\widehat{G}_{\mathbf{\Lambda}} \left (\omega \right )}\] where \(\{\widehat{F}_{\mathbf{\Lambda}},\widehat{G}_{\mathbf{\Lambda}},\widehat{H}_{\mathbf{\Lambda}} \}\) represent the DT-LCT of sequences, \(\{f\left [ n \right ],g\left [ n \right ],h\left [ n \right ]\}\), respectively and \(\Phi_\mathbf{\Lambda}\left (\omega \right )= e^{-\jmath\frac{d \omega^2}{2b}}\).
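The chirp-modulate, convolve, and chirp-demodulate structure of 3 translates directly into code. Below is a minimal NumPy sketch of Definitions 5 and 6 for the \(b\neq 0\) case; function names are ours.

```python
import numpy as np

def chirp(n, T, a, b):
    """Chirp modulation m_Lambda(t) of (2), evaluated on the grid t = nT."""
    t = n * T
    return np.exp(1j * a * t**2 / (2.0 * b))

def lct_conv(f, g, T, a, b):
    """LCT convolution (3): chirp-up both sequences, convolve in the usual
    sense, then chirp-down and scale by K_Lambda = 1/sqrt(j*2*pi*b)."""
    f_up = chirp(np.arange(len(f)), T, a, b) * f
    g_up = chirp(np.arange(len(g)), T, a, b) * g
    h_up = np.convolve(f_up, g_up)                     # ordinary convolution
    K = 1.0 / np.sqrt(1j * 2.0 * np.pi * b)
    return K * np.conj(chirp(np.arange(len(h_up)), T, a, b)) * h_up
```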
For signals that are bandlimited in the LCT domain, the following extension of Shannon’s sampling theorem guarantees recovery from samples [47].
Theorem 1. Let \(g\in\mathcal{B}_{\Omega_{\mathbf{\Lambda}}}\), then, provided that \(T\leqslant{\pi b}/{\Omega_m}\), \(g\left (t \right )\) can be recovered from samples via \[g\left (t \right )= e^{-\jmath\tfrac{a t^2}{2b}}\sum\limits_{n\in{\mathbb{Z}}} {\overset{\lower 0.5em\smash{\scriptstyle \hpup}} {{g}}}\left (nT \right ) \mathrm{sinc}\left (\frac{t - nT}{T} \right ).\]
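A truncated form of the interpolation formula in Theorem 1 can be prototyped as follows (a sketch with finitely many samples; NumPy's `np.sinc` is the normalized sinc \(\sin(\pi x)/(\pi x)\)).

```python
import numpy as np

def lct_sinc_interp(g_samples, T, a, b, t):
    """Sketch of Theorem 1: reconstruct g on the grid `t` from samples g(nT),
    n = 0..N-1, of an LCT-bandlimited signal (finite truncation of the sum)."""
    tn = np.arange(len(g_samples)) * T
    g_up = np.exp(1j * a * tn**2 / (2.0 * b)) * g_samples     # chirp-up samples
    sinc_mat = np.sinc((t[:, None] - tn[None, :]) / T)
    return np.exp(-1j * a * t**2 / (2.0 * b)) * (sinc_mat @ g_up)
```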
Matrix \(\left(\boldsymbol{\Lambda}_{\sf S}\right)\) | Corresponding Transformations
---|---
\(\bigl[ \begin{smallmatrix} 0 & 1 \\ -1 & 0 \end{smallmatrix} \bigr] = \pmb\Lambda_\textsf{FT}\) | Fourier Transform (FT)
\(\bigl[ \begin{smallmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{smallmatrix} \bigr] = \pmb\Lambda_\theta\) | Fractional Fourier Transform (FrFT)
\(\bigl[ \begin{smallmatrix} 1 & b \\ 0 & 1 \end{smallmatrix} \bigr] = \pmb\Lambda_\textsf{FrT}\) | Fresnel Transform (FrT)
\(\bigl[ \begin{smallmatrix} 0 & \jmath \\ \jmath & 0 \end{smallmatrix} \bigr] = \pmb\Lambda_\textsf{LT}\) | Laplace Transform (LT)
\(\bigl[ \begin{smallmatrix} \jmath \cos\theta & \jmath \sin\theta \\ \jmath \sin\theta & -\jmath\cos\theta \end{smallmatrix} \bigr]\) | Fractional Laplace Transform
\(\bigl[ \begin{smallmatrix} 1 & \jmath b \\ \jmath & 1 \end{smallmatrix} \bigr]\) | Bilateral Laplace Transform
\(\bigl[ \begin{smallmatrix} 1 & -\jmath b \\ 0 & 1 \end{smallmatrix} \bigr]\), \(b \geqslant 0\) | Gauss–Weierstrass Transform
\(\tfrac{1}{\sqrt{2}} \bigl[ \begin{smallmatrix} 0 & e^{-\jmath\pi/2} \\ -e^{-\jmath\pi/2} & 1 \end{smallmatrix} \bigr]\) | Bargmann Transform
Here, we review the fundamental principles of \(\Sigma\Delta\mathrm{Q}\) for Fourier-bandlimited signals. This will be the stepping stone towards the extension of \(\Sigma\Delta\mathrm{Q}\) to a broader class of transformations (as outlined in Table 1) in the next section. We denote an \(\Omega\)-bandlimited function in the Fourier domain by, \[\label{eq:FBL} g \in {\mathcal{B}}_{\Omega_{\textsf{FT}}}\leftrightarrow \widehat{g}_{{\mathbf{\Lambda}_{\textsf{FT}}}}\left (\omega \right )= \widehat{g}_{{\mathbf{\Lambda}_{\textsf{FT}}}}\left (\omega \right )\mathbb{1}_{\left [ -\Omega_{\textsf{FT}}, \Omega_{\textsf{FT}} \right ]}\left (\omega \right ).\tag{1}\] Such signals can be reconstructed from \(\{\pm 1\}\) samples obtained via \(\Sigma\Delta\mathrm{Q}\), with a bounded reconstruction error [7]. \(\Sigma\Delta\mathrm{Q}\) is a low-complexity acquisition scheme that employs a feedback loop with a signum function. For bounded signals, say \({|g|\leqslant 1}\), the \(\Sigma\Delta\mathrm{Q}\) scheme is summarized as follows [7]: \[\tag{2} \begin{empheq}[box=\shadowbox*]{align} u\left [ n \right ]&= u\left [ n - 1 \right ] + g \left (nT h \right ) - q\left [ n \right ], \quad u \in \left (-1, 1\right ) \tag{3} \\ q\left [ n \right ]&= \text{sgn}\left (u\left [ n - 1 \right ] + g \left (nT h \right ) \right ), \qquad T = \pi/\Omega \end{empheq}\] where \(u\) is an intermediate variable and \(h\in \left (0, 1\right )\) denotes the oversampling ratio. We rearrange 3 to establish the relation between 1-Bit samples \(q\) and the bounded input signal \(g\) \[q\left [ n \right ]= \underbrace{g \left (nT h \right )}_{g \in {\mathcal{B}}_{\Omega_{\textsf{FT}}}} - \underbrace{\left (u \ast v\right )\left [ n \right ]}_\text{High-Pass} \equiv {g \left (nT h \right )} - \left (\Delta u \right )\left [ n \right ]. \label{eq:qn-noise-shaping}\tag{4}\] Since \(v\left [ n \right ]\) is a high-pass filter, it impels the quantization noise into high frequencies, moving it away from \(\widehat{g}_{{\mathbf{\Lambda}_{\textsf{FT}}}}\left (\omega \right )\), aptly justifying why \(\Sigma\Delta\mathrm{Q}\) is associated with noise shaping[7]. The net effect is that one can recover \(g\left (t \right )\) by filtering \(q\left [ n \right ]\), \[\widetilde{g}\left (t \right )= h\sum\limits_{n \in {\mathbb{Z}}} q\left [ n \right ]\varphi_\Omega\left (\frac{t}{T} - nh \right )\] where \(\varphi_\Omega\) is a low-pass interpolation function [7]. Not surprisingly, in line with conventional methods, redundancy improves the reconstruction quality as noise shaping is more effective—\(u\left [ n \right ]\) is displaced further away from the low-pass interval of \(\widehat{g}_{{\mathbf{\Lambda}_{\textsf{FT}}}}\left (\omega \right )\). The reconstruction error can be expressed as \[\begin{align} e\left (t \right )\stackrel{\rm{def}}{=}\left (g-\widetilde{g} \right )\left (t \right ) = h\underbrace{\sum\nolimits_{n \in {\mathbb{Z}}} \left (u \ast v \right )\left [ n \right ]\varphi\left (\tfrac{t}{T} - nh \right )}_\text{Low-pass filtered quantization noise}. \end{align}\] The following result quantifies the relationship between oversampling ratio \(\left (h \right )\) and reconstruction quality: \[\left | e\left (t \right ) \right | \leqslant h\left\lVert\partial_t \varphi_\Omega\right\rVert_{L^1} \label{eq:ft-sdq-err-bound}\tag{5}\] which can be further improved to \(\mathcal{O}\left (h^3 \right )\) (see [7] Section 2.3).
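To make the loop concrete, a minimal NumPy sketch of the 1st-order recursion 2 is given below; the toy input and the moving-average stand-in for the interpolation kernel \(\varphi_\Omega\) are our own illustrative choices.

```python
import numpy as np

def sdq1(g_samples):
    """1st-order Sigma-Delta loop (2)-(3): g_samples = g(n*T*h), |g| <= 1.
    Returns the +/-1 bitstream q[n]."""
    u, q = 0.0, np.zeros(len(g_samples))
    for n, gn in enumerate(g_samples):
        s = u + gn
        q[n] = 1.0 if s >= 0 else -1.0            # sgn(u[n-1] + g(nTh))
        u = s - q[n]                              # u[n] = u[n-1] + g - q, eq. (3)
    return q

# toy usage: slowly varying (in-band) tone, oversampled by 1/h
h = 0.01
g = 0.8 * np.sin(2 * np.pi * 0.002 * np.arange(5000))
q = sdq1(g)
# crude low-pass reconstruction: a moving average over ~1/h samples stands in
# for h * sum_n q[n] * phi_Omega(t/T - n*h)
g_rec = np.convolve(q, np.full(int(1 / h), h), mode="same")
```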
Here, we develop the \(\Lambda\Sigma\Delta\mathrm{Q}\) scheme—the \(\Sigma\Delta\mathrm{Q}\) scheme for the LCT domain. Clearly, the \(\Lambda\Sigma\Delta\mathrm{Q}\) scheme should maintain backwards compatibility with the conventional Fourier domain \(\Sigma\Delta\mathrm{Q}\). The first step towards this goal is to generalize the notion of bandlimitedness in 1 . We do so by defining the class of \(\Omega\)-bandlimited signals in the LCT domain as, \[g \in \mathcal{B}_{\Omega_{\mathbf{\Lambda}}}\leftrightarrow \widehat{g}_{\mathbf{\Lambda}}\left (\omega \right )= \widehat{g}_{\mathbf{\Lambda}}\left (\omega \right )\mathbb{1}_{[-\Omega_{\mathbf{\Lambda}}, \Omega_{\mathbf{\Lambda}}]} \left (\omega \right ).\] This generalizes the notion of bandlimitedness to a much wider class of transformations (see Table 1). Such signals, better described by their LCT, have been studied in optics [48], [49], harmonic analysis [50], signal processing [51], [52], and communications [35], [46]. Despite these research efforts, \(\Sigma\Delta\mathrm{Q}\) for LCT-bandlimited signals has not been studied previously.
Clearly, whenever \(\mathbf{\Lambda}\neq {\mathbf{\Lambda}_{\textsf{FT}}}\), the difficulty is that noise shaping can no longer be achieved via 4 because of the breakdown of the well-known convolution-multiplication property of the Fourier transform in the LCT domain. This necessitates the development of a new strategy. Here, we will develop 1st and 2nd-order \(\Lambda\Sigma\Delta\mathrm{Q}\) schemes. Higher-order generalizations may be considered in future work.
Since our goal is to achieve noise shaping for any \(g \in \mathcal{B}_{\Omega_{\mathbf{\Lambda}}}\), it is natural to consider the LCT of the form, \[\widehat{Q}_{\mathbf{\Lambda}}\left (\omega \right )= \underbrace{\widehat{G}_{\mathbf{\Lambda}}\left (\omega \right )}_\text{g \in \mathcal{B}_{\Omega_{\mathbf{\Lambda}}}} - \underbrace{\widehat{U}_{\mathbf{\Lambda}}\left (\omega \right )\left (1 - e^{-\jmath \frac{\omega T h}{b}}\right )}_\text{High-Pass} \label{eq:q-lct-def}\tag{6}\] where \(\widehat{Q}_{\mathbf{\Lambda}}\left (\omega \right )\) denotes the DT-LCT(see Definition 6) of 1-Bit samples, \(q_{\mathbf{\Lambda}}\left [ n \right ]\). In the time domain, one would expect \[q_{\mathbf{\Lambda}}\left [ n \right ]= {g\left [ n \right ]} - {\left (u \ast_{\mathbf{\Lambda}}{v_{\mathbf{\Lambda}}}\right )\left [ n \right ]} \label{eq:qLn-hp}\tag{7}\] where \({v_{\mathbf{\Lambda}}}\) (noise shaping filter) and \(u\) remain to be characterized. Note that we have utilized the LCT convolution operator, i.e., \(\ast_{\mathbf{\Lambda}}\), which respects the convolution and product theorem [52] in the LCT domain, and allows us to represent 7 as, \[\widehat{Q}_{\mathbf{\Lambda}}\left (\omega \right )= \widehat{G}_{\mathbf{\Lambda}}\left (\omega \right )- \frac{e^{-\jmath \frac{d \omega ^ 2}{2b}}}{T h} \widehat{U}_{\mathbf{\Lambda}}\left (\omega \right )\widehat{V}_{\mathbf{\Lambda}} \left (\omega \right ). \label{eq:qw-lct-conv}\tag{8}\] We can now relate 8 with 6 , which leads to the identification of the noise shaping filter in terms of LCT parameters, \[\begin{align} \label{eq:Vw} \widehat{V}_{\mathbf{\Lambda}} \left (\omega \right )&= \left (T h \right ) e^{\jmath\frac{d\omega^2}{2b}} \left (1 - e^{-\jmath \frac{\omega T h}{b}}\right ), \quad T = {\pi b}/{\Omega_m} \end{align}\tag{9}\] where the magnitude response implements a high-pass filter in the LCT domain, or \(|{\widehat{V}_{\mathbf{\Lambda}}\left (\omega \right )}| = 2 T h\left | \sin \left ({\omega T h/ \left (2 b \right )} \right ) \right |\).
To express the state equations of \(\Lambda\Sigma\Delta\mathrm{Q}\), we need to deduce \(u\left [ n \right ]\) from 7 . To this end, we first identify \({v_{\mathbf{\Lambda}}}\left [ n \right ]\) as follows, \[\begin{align} {v_{\mathbf{\Lambda}}}\left [ n \right ]& = \left \langle \widehat{V}_{\mathbf{\Lambda}}, {\kappa_{\mathbf{\Lambda}^{-1}}}\left (\cdot, n T h \right ) \right \rangle \\ &= \frac{e^{-\jmath\frac{a\left (nT h \right )^2}{2b}}}{\sqrt{-\jmath 2 \pi b}} \left (T h \right ) \int_{-\pi b / \left (T h \right )}^{\pi b / \left (T h \right ) } {\left( {1 - {e^{ - \jmath \frac{{\omega hT}}{b}}}} \right)} {e^{\jmath \frac{{\omega hT}}{b}n}}d\omega\\ &= \sqrt{\jmath 2 \pi b} {\overset{\lower 0.5em\smash{\scriptstyle \hpdn}} {{v}}} \left [ n \right ]. \end{align}\] Setting \({v_{\mathbf{\Lambda}}}\left [ n \right ]= \sqrt{\jmath 2 \pi b} {\overset{\lower 0.5em\smash{\scriptstyle \hpdn}} {{v}}}\), next we re-write 7 as follows, \[\begin{align} q_{\mathbf{\Lambda}}\left [ n \right ]&= g\left [ n \right ]- K_{\mathbf{\Lambda}}m_{\mathbf{\Lambda}}^*\left [ n \right ]\left ( {\overset{\lower 0.5em\smash{\scriptstyle \hpup}} {{u}}} \ast {\overset{\lower 0.5em\smash{\scriptstyle \hpup}} {{v}}}_\mathbf{\Lambda}\right )\left [ n \right ]\notag\\ &= \left (g - u \right )\left [ n \right ]+ m_{\mathbf{\Lambda}}^*\left [ n \right ]m_{\mathbf{\Lambda}}\left [ n - 1 \right ] u\left [ n - 1 \right ]. \label{eq:qL-exp} \end{align}\tag{10}\] Now, since \(m_{\mathbf{\Lambda}}^*\left [ n \right ]m_{\mathbf{\Lambda}}\left [ n - 1 \right ] = e^{-\jmath\frac{a\left (2n - 1 \right ) \left (T h\right )^2}{2b} }\) in the above, we can express the intermediate state variable \(u\left [ n \right ]\) as \[u\left [ n \right ]= g\left [ n \right ]- q_{\mathbf{\Lambda}}\left [ n \right ]+ e^{-\jmath\frac{a\left (2n - 1 \right ) \left (T h\right )^2}{2b} } u\left [ n - 1 \right ]. \label{eq:un-sdq-lct}\tag{11}\] In analogy to 2 , the 1st-order \(\Lambda\Sigma\Delta\mathrm{Q}\) is written as follows, \[\begin{empheq}[box=\shadowbox*]{align} u\left [ n \right ]&= g\left [ n \right ]- q_{\mathbf{\Lambda}}\left [ n \right ]+ e^{-\jmath\frac{a\left (2n - 1 \right )\left (T h\right )^2}{2b}} u\left [ n - 1 \right ]\tag{12}\\ q_{\mathbf{\Lambda}}\left [ n \right ]&= \mathsf{csgn}\left (e^{-\jmath\frac{a\left (2n - 1 \right )\left (T h\right )^2}{2b}} u \left [ n - 1 \right ] + g\left [ n \right ] \right )\tag{13}. \end{empheq} \tag{14}\] The novel \(\Lambda\Sigma\Delta\mathrm{Q}\) architecture is depicted in Fig. 3. We reconstruct \(g\left (t \right )\) by low-pass filtering 1-Bit samples \(q_{\mathbf{\Lambda}}\left [ n \right ]\) with interpolation kernel \(\varphi\) with bandwidth \(\Omega\) as follows, \[\begin{align} \widetilde{g}\left (t \right )& = he^{-\jmath \frac{a t^2}{2b}} \sum\limits_{n \in {\mathbb{Z}}} {\overset{\lower 0.5em\smash{\scriptstyle \hpup}} {{q}}}_{\mathbf{\Lambda}}\left [ n \right ]\varphi_\Omega \left (\frac{t}{T} - n h \right )\notag\\ &\stackrel{\eqref{eq:qLn-hp}}{=} g\left (t \right )- \underbrace{he^{-\jmath \frac{a t^2}{2b}} \sum\limits_{n \in {\mathbb{Z}}} \left ( {\overset{\lower 0.5em\smash{\scriptstyle \hpup}} {{u}}} \ast v\right )\left [ n \right ]\varphi\left (\frac{t}{T} - n h \right )}_{e\left (t \right )}\label{eq:lsdq1-rec} \end{align}\tag{15}\] where the approximation error \(e\left (t \right )\) is attributed to \(u \ast_{\mathbf{\Lambda}}{v_{\mathbf{\Lambda}}}\)—the contamination of the baseband by the quantization noise. 
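In code, the only change relative to the Fourier-domain loop is the per-sample phase factor \(e^{-\jmath a(2n-1)(Th)^2/(2b)}\) multiplying the delayed state; a minimal sketch of 12 and 13 follows (function and variable names are ours).

```python
import numpy as np

def csgn(z):
    """Complex signum: sgn(Re z) + 1j * sgn(Im z)."""
    return np.sign(np.real(z)) + 1j * np.sign(np.imag(z))

def lsdq1(g, T, h, a, b):
    """1st-order Lambda-Sigma-Delta loop (12)-(13).
    g[n] = g(n*T*h) are complex samples with |g| <= 1; returns q_Lambda[n]."""
    u = 0.0 + 0.0j
    q = np.zeros(len(g), dtype=complex)
    Th2 = (T * h) ** 2
    for n in range(len(g)):
        p = np.exp(-1j * a * (2 * n - 1) * Th2 / (2.0 * b))   # phase on delayed state
        q[n] = csgn(p * u + g[n])                             # eq. (13)
        u = g[n] - q[n] + p * u                               # eq. (12)
    return q
```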
Since the approximation error is defined as \(e\left (t \right )= g\left (t \right )- \widetilde{g}\left (t \right )\), we are interested in bounding its value. Before bounding the approximation error, we first bound the state variable \(u\left [ n \right ]\).
Lemma 1 (Boundedness Property). Assume \(\left | g\left (t \right ) \right | \leqslant 1\), \(\left | \Re\left (u\left [ 0 \right ] \right ) \right | < 1\) and \(\left | \Im\left (u\left [ 0 \right ] \right ) \right | < 1\), then for \(\{u\left [ n \right ]\}_{n \in {{\mathbb{Z}}^+}}\) defined in 12 , it holds that \(\left | \Re\left (u\left [ n \right ] \right ) \right | < 1\) and \(\left | \Im\left (u\left [ n \right ] \right ) \right | < 1\).
Proof. Let \(p_\mathbf{\Lambda}\left [ n \right ]= e^{-\jmath \frac{a\left (2n - 1 \right )\left (T h \right )^2}{2b}}\), clearly \(\left | p_\mathbf{\Lambda} \right | = 1\). By induction, \(\left | \Re\left (u\left [ n - 1 \right ] \right ) \right | < 1\) and \(\left | \Im\left (u\left [ n - 1 \right ] \right ) \right | < 1\), then, \(\left | \Re\left (p_\mathbf{\Lambda}\left [ n \right ]u\left [ n - 1 \right ] \right ) \right | < 1\) and \(\left | \Im\left (p_\mathbf{\Lambda}\left [ n \right ]u\left [ n - 1 \right ] \right ) \right | < 1\). Since \(\left\lVert g\right\rVert_\infty \leqslant 1\), then \(\left | \Re\left (p_\mathbf{\Lambda}\left [ n \right ]u \left [ n - 1 \right ] + g\left [ n \right ] \right ) \right | < 2\) and \(\left | \Im\left (p_\mathbf{\Lambda}\left [ n \right ]u \left [ n - 1 \right ] + g\left [ n \right ] \right ) \right | < 2\). Assuming \(q_{\mathbf{\Lambda}}\left [ n \right ]\) defined in 13 , \(\left | \Re{\left (q_{\mathbf{\Lambda}}\left [ n \right ] \right )} \right | = 1\) and \(\left | \Im{\left (q_{\mathbf{\Lambda}}\left [ n \right ] \right )} \right | = 1\) which implies that \(\left | \Re\left (u\left [ n \right ] \right ) \right | = \left | \Re\left (p_\mathbf{\Lambda}\left [ n \right ]u \left [ n -1 \right ] + g\left [ n \right ]- q_{\mathbf{\Lambda}}\left [ n \right ] \right ) \right | < 1\) and \(\left | \Im\left (u\left [ n \right ] \right ) \right | = \left | \Im\left (p_\mathbf{\Lambda}\left [ n \right ]u \left [ n -1 \right ] + g\left [ n \right ]- q_{\mathbf{\Lambda}}\left [ n \right ] \right ) \right | < 1\). ◻
With \(u\left [ n \right ]\) bounded, next we show that the approximation error, \(e\left (t \right ) =g\left (t \right ) - \widetilde{g}\left (t \right )\) (see 15 ), is also bounded.
Proposition 1 (Recovery Error Bound). Let \(g \in \mathcal{B}_{\Omega_{\mathbf{\Lambda}}}, \left\lVert g\right\rVert_\infty \leqslant 1\) with 1-Bit samples \(q_{\mathbf{\Lambda}}\left [ n \right ]\) given by 13 . Then the following holds, \[\begin{align} \label{eq:etbound} e\left (t \right ) & = he^{-\jmath \frac{a t^2}{2b}} \sum\limits_{n \in {\mathbb{Z}}} \left ( {\overset{\lower 0.5em\smash{\scriptstyle \hpup}} {{u}}} \ast v\right )\left [ n \right ]\varphi\left (\frac{t}{T} - n h \right ), \;\;\; h\in \left (0, 1 \right )\notag \\ & \Longrightarrow \left | e\left (t \right ) \right | \leqslant h\sqrt{2} \left\lVert\partial_t \varphi_\Omega\right\rVert_{L^1}. \end{align}\qquad{(4)}\]
Proof. With \(\left | e\left (t \right ) \right | = h\left | \sum\nolimits_{n\in{\mathbb{Z}}} \left ( {\overset{\lower 0.5em\smash{\scriptstyle \hpup}} {{u}}} \ast v \right )\left [ n \right ]\varphi_{\Omega} \left (\frac{t}{T} - nh\right )\right |\) and since \(|{ {\overset{\lower 0.5em\smash{\scriptstyle \hpup}} {{u}}}\left [ n \right ]}| \leqslant \sqrt{2}\), we can simplify \(\left | e\left (t \right ) \right |\) as follows, \[\begin{align} &\left | e\left (t \right ) \right | = h\left | \sum\nolimits_{m\in{\mathbb{Z}}} {\overset{\lower 0.5em\smash{\scriptstyle \hpup}} {{u}}}\left [ m \right ]\sum\nolimits_{n \in {\mathbb{Z}}} v\left [ n - m \right ] \varphi_{\Omega} \left (\tfrac{t}{T} - nh\right )\right | \\ &\leqslant h\sqrt{2} \sum\nolimits_{n \in {\mathbb{Z}}} \left | \left (\varphi_{\Omega} \left (\tfrac{t}{T} - nh\right )- \varphi_{\Omega} \left (\tfrac{t}{T} - \left (n + 1 \right )h\right )\right )\right |\\ & \leqslant h\sqrt{2} \sum\nolimits_{n \in {\mathbb{Z}}} \int_{\frac{t}{T} - \left (n + 1 \right )h}^{\frac{t}{T} - nh}\left | \partial_t \varphi\left (y \right ) dy \right | = h\sqrt{2} \left\lVert\partial_t \varphi_\Omega\right\rVert_{L^1}. \end{align}\] This concludes the proof. The additional \(\sqrt{2}\) factor is due to the use of complex numbers. For the FT case and real input, replacing \(\mathsf{csgn}\) in 13 with \(\text{sgn}\) leads to the conventional \(\Sigma\Delta\) architecture in 2 with error bound shown in 5 . ◻
Using a 2nd-order difference filter offers improved noise shaping in the Fourier domain [7]. Here, we leverage this property for the LCT and derive the corresponding 2nd-order scheme. In view of 6 , replacing \(\left (1 - e^{-\jmath \left (\omega T h/b \right )} \right )\) with \(\left (1 - e^{-\jmath \left (\omega T h/b \right )}\right )^2\) improves noise rejection. The DT-LCT of the 2nd-order \(\Lambda\Sigma\Delta\mathrm{Q}\), akin to 6 , satisfies \[\begin{align} &\widehat{Q}_{\mathbf{\Lambda}}^{[2]}\left (\omega \right ) = {\widehat{G}_{\mathbf{\Lambda}}\left (\omega \right )} - {\widehat{U}_{\mathbf{\Lambda}}\left (\omega \right )\left (1 - e^{-\jmath \frac{\omega T h}{b}} \right )^ 2} \label{eq:sd2-lct-req} \\ &\stackrel{\eqref{eq:Vw}}{=} \widehat{G}_{\mathbf{\Lambda}}\left (\omega \right )- \left (\tfrac{e^{-\jmath \frac{d\omega^2}{2b}}}{T h} \right ) \widehat{U}_{\mathbf{\Lambda}}\left (\omega \right )\left (1 - e^{-\jmath \frac{\omega T h}{b}} \right )\widehat{V}_{\mathbf{\Lambda}}\left (\omega \right )\notag \end{align}\tag{16}\] where \(\widehat{Q}_{\mathbf{\Lambda}}^{[2]}\left (\omega \right )\) denotes the DT-LCT of \(q_{\mathbf{\Lambda}}^{[2]}\left [ n \right ]\). Since \(q_{\mathbf{\Lambda}}^{[2]}\left [ n \right ]\) utilizes a 2nd-order filter (see 16 ), we expect an auto-convolution structure in the time domain, \[q_{\mathbf{\Lambda}}^{[2]}\left [ n \right ]= g\left [ n \right ]- \left (u \ast_{\mathbf{\Lambda}}{v_{\mathbf{\Lambda}}}\ast_{\mathbf{\Lambda}}{v_{\mathbf{\Lambda}}} \right )\left [ n \right ]. \label{eq:qL2-n}\tag{17}\]
By introducing \(x = u \ast_{\mathbf{\Lambda}}{v_{\mathbf{\Lambda}}}\), we can interpret 17 as a 1st-order \(\Lambda\Sigma\Delta\mathrm{Q}\), enabling a simplified representation, \[q_{\mathbf{\Lambda}}^{[2]}\left [ n \right ]= g\left [ n \right ]- \left (x \ast_{\mathbf{\Lambda}}{v_{\mathbf{\Lambda}}}\right )\left [ n \right ] \label{eq:qlsec-n}\tag{18}\] with \(\widehat{Q}_{\mathbf{\Lambda}}^{[2]}\left (\omega \right )= \widehat{G}_{\mathbf{\Lambda}}\left (\omega \right )- \widehat{X}_{\mathbf{\Lambda}}\left (\omega \right )\left (1 - \exp\left (-\jmath \frac{\omega T h}{b} \right ) \right )\) being the DT-LCT representation. We now express \(x\left [ n \right ]\) as \[x\left [ n \right ]= g\left [ n \right ]- q_{\mathbf{\Lambda}}^{[2]}\left [ n \right ]+ \underbrace{e^{-\jmath\frac{a\left (2n - 1 \right ) \left (hT \right )^2}{2b} } x\left [ n - 1 \right ]}_{\widetilde{x}\left [ n \right ]}.\] In the DT-LCT domain, \(x\left [ n \right ]\) becomes 16 , whenever \[\begin{align} \widehat{X}_{\mathbf{\Lambda}}\left (\omega \right )= \frac{e^{-\jmath \frac{d\omega^2}{2b}}}{T h} \widehat{U}_{\mathbf{\Lambda}}\left (\omega \right )\widehat{V}_{\mathbf{\Lambda}} \left (\omega \right )= \widehat{U}_{\mathbf{\Lambda}}\left (\omega \right )\left (1 - e^{-\jmath \frac{\omega T h}{b}} \right ). \end{align}\]
We can now relate \(u\left [ n \right ]\) with \(x\left [ n \right ]\) in the time domain, \[\begin{align} x\left [ n \right ]&= \left (u \ast_{\mathbf{\Lambda}}{v_{\mathbf{\Lambda}}}\right )\left [ n \right ]= m_{\mathbf{\Lambda}}^*\left [ n \right ]\left ( {\overset{\lower 0.5em\smash{\scriptstyle \hpup}} {{u}}} \ast v \right )\left [ n \right ]\notag\\ &= u\left [ n \right ]- e^{-\jmath\frac{a\left (2n - 1 \right ) \left (T h \right )^2}{2b}} u\left [ n - 1 \right ]. \label{eq:xn-lctconv-vL} \end{align}\tag{19}\] From 19 we express \(u\left [ n \right ]\) as \[u\left [ n \right ]= x\left [ n \right ]+ {e^{-\jmath\frac{a\left (2n - 1 \right ) \left (T h \right )^2}{2b}} u\left [ n - 1 \right ]} \equiv x\left [ n \right ]+ {\widetilde{u}\left [ n \right ]}.\] Lastly, in analogy to the FT case [7], we set \(q_{\mathbf{\Lambda}}^{[2]}\left [ n \right ]\) as follows \[q_{\mathbf{\Lambda}}^{[2]}\left [ n \right ]= \mathsf{csgn}\left (G\left (\widetilde{x}\left [ n \right ], \widetilde{u}\left [ n \right ], g\left [ n \right ] \right ) \right )\] where \(G\left (x, u, g \right ) = c_0 x + u + g\) with \(c_0 = \frac{1}{2}\). Our construction of the 2nd-order \(\Lambda\Sigma\Delta\mathrm{Q}\) is shown in Fig. 4, and it is described by the following state equations: \[\tag{20} \begin{empheq}[box=\shadowbox*]{align} x\left [ n \right ]&= g\left [ n \right ]- q_{\mathbf{\Lambda}}^{[2]}\left [ n \right ]+ e^{-\jmath\frac{a\left (2n - 1 \right ) \left (T h \right )^2}{2b}} x\left [ n - 1 \right ]\tag{21}\\ u\left [ n \right ]&= x\left [ n \right ]+ e^{-\jmath\frac{a \left (2n - 1 \right ) \left (T h \right )^2}{2b}} u \left [ n - 1 \right ]\tag{22}\\ q_{\mathbf{\Lambda}}^{[2]}\left [ n \right ]&= \mathsf{csgn}\left (c_0 \widetilde{x}\left [ n \right ]+ \widetilde{u}\left [ n \right ]+ g\left [ n \right ] \right ),\quad c_0 = \frac{1}{2}.\tag{23} \end{empheq}\] The reconstruction of \(g\left (t \right )\) is obtained by low-pass filtering \(q_{\mathbf{\Lambda}}^{[2]}\left [ n \right ]\) with any low-pass kernel \(\varphi_\Omega \in \mathcal{B}_{\Omega_{\mathbf{\Lambda}}}\) \[\widetilde{g}\left (t \right )= he^{-\jmath \frac{a t^2}{2b}} \sum\limits_{n \in {\mathbb{Z}}} q_{\mathbf{\Lambda}}^{[2]}\left [ n \right ]\varphi_\Omega \left (\frac{t}{T} - n h \right ).\] The approximation error \(e = g- \widetilde{g}\) measures contamination of \(\widehat{g}_{\mathbf{\Lambda}}\) by filtered quantization noise.
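A corresponding sketch of the 2nd-order state equations 20 reads as follows (again with our own function names).

```python
import numpy as np

def lsdq2(g, T, h, a, b, c0=0.5):
    """2nd-order Lambda-Sigma-Delta loop (21)-(23); g[n] = g(n*T*h), |g| <= 1."""
    csgn = lambda z: np.sign(np.real(z)) + 1j * np.sign(np.imag(z))
    x = u = 0.0 + 0.0j
    q = np.zeros(len(g), dtype=complex)
    Th2 = (T * h) ** 2
    for n in range(len(g)):
        p = np.exp(-1j * a * (2 * n - 1) * Th2 / (2.0 * b))
        x_tld, u_tld = p * x, p * u              # re-phased delayed states
        q[n] = csgn(c0 * x_tld + u_tld + g[n])   # eq. (23)
        x = g[n] - q[n] + x_tld                  # eq. (21)
        u = x + u_tld                            # eq. (22)
    return q
```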
For conventional \(\Sigma\Delta\mathrm{Q}\) to function properly, it is essential that \(\left\lVert g\left (t \right )\right\rVert_\infty < 1\) (see Lemma 1), and this limitation also applies to \(\Lambda\Sigma\Delta\mathrm{Q}\) schemes. Whenever \(\left\lVert g\left (t \right )\right\rVert_\infty > 1\), the reconstruction fails (see Fig. 3 in [28]). To prevent the quantizer from saturating, we leverage the USF strategy and introduce modulo non-linearity prior to \(\Lambda\Sigma\Delta\mathrm{Q}\), defined by, \[{\mathscr M}_{\lambda} \left (g \right ) = 2\lambda \left (\left\llbracket \frac{g}{2\lambda} + \frac{1}{2} \right\rrbracket - \frac{1}{2} \right ), \;\;\; \begin{array} {l}{\lambda\in{\mathbb{R}}^+}\\{\llbracket g \rrbracket = g - \lfloor g \rfloor} \end{array}.\] For complex-valued functions, \(g\in{\mathbb{C}}\) we define, \[{\mathscr M}_{\lambda} \left (g \right ) = {\mathscr M}_{\lambda} \left (\Re\left (g \right ) \right )+ \jmath {\mathscr M}_{\lambda} \left (\Im \left (g \right ) \right ),\quad\lambda\in{\mathbb{R}}^+.\] We can now design a novel 1-Bit USF acquisition scheme for LCT-bandlimited signals as illustrated in Fig. 5, where the folding (with \(\lambda<1\)) is implemented in continuous-time, prior to digitization, ensuring that the quantizer never overloads. However, this deviates from the signal model in Section 4 and necessitates new recovery approaches that can decode \(g\) from modulo encoded \(q_{\mathbf{\Lambda},\lambda}\left [ n \right ]\).
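A direct NumPy transcription of the centered modulo and its complex extension is given below (the function name is ours).

```python
import numpy as np

def mod_lambda(g, lam):
    """Centered modulo M_lambda(g) = 2*lam*(frac(g/(2*lam) + 1/2) - 1/2),
    applied to the real and imaginary parts separately for complex inputs."""
    fold = lambda x: 2.0 * lam * (np.mod(x / (2.0 * lam) + 0.5, 1.0) - 0.5)
    g = np.asarray(g)
    if np.iscomplexobj(g):
        return fold(g.real) + 1j * fold(g.imag)
    return fold(g)

# e.g. mod_lambda(3.7, 1.0) -> -0.3, i.e. 3.7 folded into [-1, 1)
```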
Our goal is to recover \(g\in \mathcal{B}_{\Omega_{\mathbf{\Lambda}}}\) from 1-Bit modulo samples \(q_{\mathbf{\Lambda},\lambda}\left [ n \right ]\) acquired via \(\mathscr{M}\) \(\Lambda\Sigma\Delta\mathrm{Q}\).
In this section, we develop a new recovery algorithm for \(\mathscr{M}\) \(\Lambda\Sigma\Delta\mathrm{Q}\), which capitalizes jointly on the transform-domain separation of the signal and the modulo folds, together with the noise shaping of the quantization noise. The proposed algorithm is agnostic to the folding threshold (\(\lambda\)), unlike the time-domain methods proposed in [22], [28], [30]. Although we use \(q_{\mathbf{\Lambda},\lambda}\left [ n \right ]\) in our derivations, the same approach works with \(q_{\mathbf{\Lambda},\lambda}^{[l]}\left [ n \right ]\) captured by \(l\)th-order schemes. The starting point for the recovery of \(g\left (t \right )\in \mathcal{B}_{\Omega_{\mathbf{\Lambda}}}\) is the modular decomposition property [15], which applies to arbitrary signals and enables us to write, \[\begin{align} g\left (t \right )&= {\overset{\lower 0.5em\smash{\scriptstyle \hpdn}} {{y}}}\left (t \right )+ {\overset{\lower 0.5em\smash{\scriptstyle \hpdn}} {{\varepsilon}}}\left (t \right )\label{eq:lct-mod-dec}\\ &= m_{\mathbf{\Lambda}}^*\left (t \right )\left ({\mathscr M}_{\lambda}\left ( {\overset{\lower 0.5em\smash{\scriptstyle \hpup}} {{g}}}\left (t \right ) \right ) + \sum\limits_{k=0}^{K-1}c_k\mathbb{1}\left (t - n_kT h \right ) \right ).\notag \end{align}\tag{24}\] We illustrate the modular decomposition property in Fig. 6 using the Fourier transform (\(\mathbf{\Lambda}= {\mathbf{\Lambda}_{\textsf{FT}}}\)) as an example. The input signal \(g\) to \(\mathscr{M}\) \(\Lambda\Sigma\Delta\mathrm{Q}\) and the modulo signal \(y\) are shown in Fig. 6 (a). The residue \(\varepsilon= {\overset{\lower 0.5em\smash{\scriptstyle \hpup}} {{g}}} - y\), which is to be estimated, is depicted in Fig. 6 (b). One-bit modulo samples \(q_{\mathbf{\Lambda},\lambda}\left [ n \right ]\) captured by \(\mathscr{M}\) \(\Lambda\Sigma\Delta\mathrm{Q}\) are shown in Fig. 6 (c). Due to the superposition of \({\overset{\lower 0.5em\smash{\scriptstyle \hpup}} {{g}}}\) and \(\varepsilon\) in the LCT domain, we expect to observe a separation as shown in Fig. 6 (d), without the effect of quantization noise. This separation property has previously been leveraged in the Fourier [17] and LCT [53] domains, respectively. In our case, this separation does not apply directly because of contamination by the quantization noise arising from the 1-Bit samples. However, by controlled oversampling, the noise-shaping property of \(\Lambda\Sigma\Delta\mathrm{Q}\) can be utilized to our advantage to estimate the residue in the interval between \(\Omega_1 = \frac{2 \pi M_1 b}{\tau}\) and \(\Omega_2= \frac{2 \pi M_2 b}{\tau}\), shown in Fig. 6 (d). Hence, despite the presence of heavy distortion arising from 1-Bit samples, we can still repurpose the transform domain separation.
Rewriting 12 with \({\overset{\lower 0.5em\smash{\scriptstyle \hpdn}} {{y}}}\left (t \right )\) as an input produces, \[q_{\mathbf{\Lambda},\lambda}\left [ n \right ]= {\overset{\lower 0.5em\smash{\scriptstyle \hpdn}} {{y}}}\left (nT h \right ) + e^{-\jmath\frac{a \left (2n - 1 \right ) \left (T h \right )^2}{2b}} u \left [ n - 1 \right ] - u\left [ n \right ]. \label{eq:qLn-dny}\tag{25}\] Next, we substitute \({\overset{\lower 0.5em\smash{\scriptstyle \hpdn}} {{y}}}\left (t \right )\) from 24 in 25 and obtain \[\begin{align} q_{\mathbf{\Lambda},\lambda}\left [ n \right ]&= g\left [ n \right ]- {\overset{\lower 0.5em\smash{\scriptstyle \hpdn}} {{\varepsilon}}}\left [ n \right ]+ e^{-\jmath\frac{a \left (2n - 1 \right ) \left (T h \right )^2}{2b}} u \left [ n - 1 \right ] - u\left [ n \right ]\notag\\ &= g\left [ n \right ]- {\overset{\lower 0.5em\smash{\scriptstyle \hpdn}} {{\varepsilon}}}\left [ n \right ]+ r_\mathbf{\Lambda}\left [ n \right ]. \label{eq:qln-mdp-expanded} \end{align}\tag{26}\] By definition, \(\varepsilon\left [ n \right ]\) is a simple function and hence, its first-order difference \(\underline{\varepsilon}\left [ n \right ]\) results in a sparse representation with support \(\{n_k\}_{k=0}^{K-1}\) and weights \(\{c_k\}_{k=0}^{K-1}\). In what follows, we will elucidate that \(\underline{\varepsilon}\left [ n \right ]\) maps to a sum of complex exponentials in the LCT domain, enabling their parameter estimation using known spectral estimation methods. To capitalize on this insight, we proceed by applying \({v_{\mathbf{\Lambda}}}\left [ n \right ]\) to 26 , resulting in \[\begin{align} z\left [ n \right ]&= \left (q_{\mathbf{\Lambda},\lambda}\ast_{\mathbf{\Lambda}}{v_{\mathbf{\Lambda}}}\right )\left [ n \right ]= m_{\mathbf{\Lambda}}^*\left [ n \right ]\left ( {\overset{\lower 0.5em\smash{\scriptstyle \hpup}} {{q}}}_{\mathbf{\Lambda},\lambda} \ast v \right )\left [ n \right ]\notag\\ &= m_{\mathbf{\Lambda}}^*\left [ n \right ]\left ({g}^{\Diamond}_\mathbf{\Lambda}\left [ n \right ]- \underline{\varepsilon}_\mathbf{\Lambda}\left [ n \right ]+ {r}^{\Diamond}_\mathbf{\Lambda}\left [ n \right ]\right ) \label{eq:wn-def} \end{align}\tag{27}\] where \({o}^{\Diamond}_\mathbf{\Lambda}\stackrel{\rm{def}}{=}\underline{( {\overset{\lower 0.5em\smash{\scriptstyle \hpup}} {{o}}})}\). We decompose the sum in 27 as follows, \[\begin{cases} & m_{\mathbf{\Lambda}}^*\left [ n \right ]{g}^{\Diamond}_\mathbf{\Lambda}\left [ n \right ]\\ -&m_{\mathbf{\Lambda}}^*\left [ n \right ]\underline{\varepsilon}_\mathbf{\Lambda}\left [ n \right ]= -m_{\mathbf{\Lambda}}^*\left [ n \right ]\sum\nolimits_{k=0}^{K-1} c_k\delta\left [ n - n_k \right ]\\ & m_{\mathbf{\Lambda}}^*\left [ n \right ]{r}^{\Diamond}_\mathbf{\Lambda}\left [ n \right ] \end{cases}.\] Note that \({g}^{\Diamond}_\mathbf{\Lambda}\in \mathcal{B}_{\Omega_{\mathbf{\Lambda}}}\Rightarrow m_{\mathbf{\Lambda}}^*{g}^{\Diamond}_\mathbf{\Lambda}\in \mathcal{B}_{\Omega_{\mathbf{\Lambda}}}\). The second term \(m_{\mathbf{\Lambda}}^*\underline{\varepsilon}_\mathbf{\Lambda}\) is completely parametrized by \(\{c_k, n_k\}_{k=0}^{K-1}\). The last term \(m_{\mathbf{\Lambda}}^*{r}^{\Diamond}_\mathbf{\Lambda}\) is the effect of noise shaping. We can now analyze the constituent components in the transform domain.
Let \(\widehat{z}_{\mathbf{\Lambda}}\left [ m \right ]\) denote the DLCT of \(z\left [ n \right ]\) as defined in Definition 4. We can partition \(\widehat{z}_{\mathbf{\Lambda}}\left [ m \right ]\) as follows \[\widehat{z}_{\mathbf{\Lambda}}\left [ m \right ]= \begin{cases} {\widehat{g}_{\mathbf{\Lambda}}}^{\Diamond}\left [ m \right ]- \widehat{\underline{\varepsilon}}_{\mathbf{\Lambda}}\left [ m \right ]+ {\widehat{r}_{\mathbf{\Lambda}}}^{\Diamond}\left [ m \right ]&\left | m \right | < M_1\\ -\widehat{\underline{\varepsilon}}_{\mathbf{\Lambda}}\left [ m \right ]+ {\widehat{r}_{\mathbf{\Lambda}}}^{\Diamond}\left [ m \right ]&M_1 \leqslant \left | m \right | < M_2\\ - \widehat{\underline{\varepsilon}}_{\mathbf{\Lambda}}\left [ m \right ]+ {\widehat{r}_{\mathbf{\Lambda}}}^{\Diamond}\left [ m \right ]&M_2 \leqslant \left | m \right | \end{cases}. \label{eq:wm-dlct}\tag{28}\] The spectral separation of \(\widehat{z}_{\mathbf{\Lambda}}\left [ m \right ]\) is shown in Fig. 6 (d).
The effect of noise shaping in 28 is that the contribution of \({r}^{\Diamond}_\mathbf{\Lambda}\) in the interval between \(0\) and \(M_2\) is negligible. This enables the isolation of \(\widehat{\underline{\varepsilon}}_{\mathbf{\Lambda}}\left [ m \right ]\) from 28 in the interval between \(M_1\) and \(M_2\). We use this insight to estimate the residue. Note that by definition, \[\widehat{\underline{\varepsilon}}_{\mathbf{\Lambda}}\left [ m \right ]= T h\sqrt{\frac{-\jmath}{\tau}} \sum\limits_{n=0}^{N-1} {\overset{\lower 0.5em\smash{\scriptstyle \hpdn}} {{\underline{\varepsilon}}}}\left [ n \right ]{\kappa_{\mathbf{\Lambda}^{-1}}}\left (m b \omega_0, nT h \right ). \label{eq:z-lct-m}\tag{29}\] Then, by modulating 29 with \(\frac{\sqrt{\tau}}{T h\sqrt{-\jmath}} e^{-\jmath \frac{d\left (m b \omega_0 \right )^2}{2b}}\), we get, \[\begin{align} \widehat{f}_{\mathbf{\Lambda}}\left [ m \right ]&= \frac{\widehat{\underline{\varepsilon}}_{\mathbf{\Lambda}}\left [ m \right ]\sqrt{\tau}}{T h\sqrt{-\jmath}} e^{-\jmath \frac{d\left (m b \omega_0 \right )^2}{2b}} = \sum\limits_{k=0}^{K-1} c_ke^{-\jmath n_kT m \omega_0} \label{eq:fm-def} \end{align}\tag{30}\] which is a sum of exponentials. Hence, the frequencies corresponding to folding instants \(\{ n_k\}_{k=0}^{K-1}\) can be estimated using Prony’s method. To this end, let us denote by \({{\mathbf{h}}}\) a \((K + 1)\)-tap filter with z-transform \(H\left (z \right ) = \sum\nolimits_{n=0}^{K}h\left [ n \right ]z^{-n} = \prod\nolimits_{k = 0}^{K - 1}\left (1 - r_k z^{-1}\right )\) where \(r_k = e^{-\jmath n_kT \omega_0}\). It is well known that \(\mathbf{h}\) annihilates \(\widehat{f}_{\mathbf{\Lambda}}\left [ l \right ],l \in [M_1 + K, M_2 - 1]\) [54], since \[\left (h \ast\widehat{f}_{\mathbf{\Lambda}} \right )\left [ l \right ] = \sum\nolimits_{k=0}^{K - 1} c\left [ k \right ] r_{k}^{l - M_1} \sum\nolimits_{p=0}^{K} h\left [ p \right ]r_{k}^{-p} = 0. \label{eq:ann-eq}\tag{31}\] This can be algebraically rewritten as \(\mathbf{T}({{\mathbf{\widehat{f}_{\mathbf{\Lambda}})}}}{{\mathbf{h}}} = {{\mathbf{0}}}\), where \(\mathbf{T}({{{\mathbf{\widehat{f}_{\mathbf{\Lambda}}}}}})\) is a \(\left (M_2 - M_1 - K \right ) \times \left (K + 1 \right )\) Toeplitz matrix constructed from \(\left \{ \widehat{f}_{\mathbf{\Lambda}}\left [ m \right ]\right \}_{m=M_1}^{M_2 - 1}\) with length \(|\mathsf{M}_{\varepsilon}^{{{\mathbf{n}}}}| = M_2 - M_1\) where \(\mathsf{M}_{\varepsilon}^{{{\mathbf{n}}}}= [M_1,M_2].\) This system of equations can be solved when \(|\mathsf{M}_{\varepsilon}^{{{\mathbf{n}}}}| \geqslant 2K\), which leads to the estimation of the filter \(\mathbf{h}\). Folding locations \(\left \{ n_k\right \}_{k=0}^{K-1}\) are obtained from filter roots \(\{r_k\}_{k=0}^{K-1}\). Prony’s method is known to be sensitive to perturbations, and robust solutions to 31 can be achieved using high-resolution spectral estimation methods, such as the Matrix Pencil Method (MPM) [55]. With \(\{n_k\}_{k=0}^{K-1}\) known, the amplitudes \(\left \{ c_k\right \}_{k}^{K-1}\) are estimated using least-squares (LS) inversion of the system of equations in 30 .
The estimates of \(\{ c_k\}_{k=0}^{K-1}\) rely on the estimates \(\{ n_k\}_{k=0}^{K-1}\) but require only \(K\) equations. Let \(\mathsf{M}_{\varepsilon}^{{{\mathbf{c}}}}= \left[ M_1,M_2 \right]\); then, as shown in Fig. 8, we have observed that estimation over a smaller interval \(\mathsf{I} \subseteq \mathsf{M}_{\varepsilon}^{{{\mathbf{c}}}}\) with upper limit \(M_3\) and \(|\mathsf{I}|\geqslant K\) is empirically superior.
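For illustration, the estimation step can be prototyped as follows. The sketch assumes that `f_hat` already holds the demodulated coefficients \(\widehat{f}_{\mathbf{\Lambda}}\left [ m \right ]\) of 30 for \(m = M_1,\ldots,M_2-1\), and uses a plain SVD-based annihilating filter followed by least squares rather than the more robust Matrix Pencil Method; indexing conventions and names are ours.

```python
import numpy as np

def estimate_residue_params(f_hat, K, M1, T, w0, N):
    """Sketch: recover folding locations n_k (in [0, N)) and weights c_k from
    f_hat[m] = sum_k c_k * exp(-1j*n_k*T*m*w0), m = M1..M2-1, cf. (30)-(31)."""
    L = len(f_hat)
    # Toeplitz annihilation system T(f_hat) h = 0 with a (K+1)-tap filter h
    A = np.array([[f_hat[K + i - p] for p in range(K + 1)] for i in range(L - K)])
    _, _, Vh = np.linalg.svd(A)
    h_ann = Vh[-1].conj()                        # null-space vector = filter taps
    r = np.roots(h_ann)                          # roots r_k = exp(-1j*n_k*T*w0)
    n_k = np.mod(np.round(-np.angle(r) / (T * w0)), N).astype(int)
    # least-squares amplitudes on the same (or a smaller) index set, cf. (30)
    m = np.arange(M1, M1 + L)
    V = np.exp(-1j * np.outer(m, n_k) * T * w0)
    c_k, *_ = np.linalg.lstsq(V, f_hat, rcond=None)
    return n_k, c_k
```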
To recover \(g\left (t \right )\), we first obtain an estimate \(\widetilde{\varepsilon}\left [ n \right ]\) via the anti-difference operator such that \(\widetilde{\varepsilon}\left [ n \right ]= \sum\nolimits_{k=0}^{n - 1} \underline{\varepsilon}\left [ k \right ]\) for \(n\geqslant 1\), where \(\underline{\varepsilon}\left [ n \right ]= \sum\nolimits_{k = 0}^{K-1}c_k\delta\left [ n - n_k \right ]\) with \(\left \{ c_k, n_k\right \}_{k=0}^{K-1}\) obtained in the previous step. Adding the demodulated residue estimate \(\widetilde{\varepsilon}\left [ n \right ]\) to \(q_{\mathbf{\Lambda},\lambda}\left [ n \right ]\) gives multi-bit samples \(q_{\sf{MB}}\left [ n \right ]= q_{\mathbf{\Lambda},\lambda}\left [ n \right ]+ m_{\mathbf{\Lambda}}^*\left [ n \right ]\widetilde{\varepsilon}\left [ n \right ]\). To obtain the full reconstruction \(\widetilde{g}\left (t \right )\), we filter \(q_{\sf{MB}}\left [ n \right ]\) with an interpolation kernel \(\varphi\) of bandwidth \(\Omega\), \[\widetilde{g}\left (t \right )= he^{-\jmath\frac{a t ^ 2}{2b}} \sum\limits_{n \in {\mathbb{Z}}} {\overset{\lower 0.5em\smash{\scriptstyle \hpup}} {{q}}}_{\sf{MB}} \left [ n \right ]\varphi_{\Omega_{\mathbf{\Lambda}}} \left (\frac{t}{T} - n h \right ). \label{eq:mod-1b-low-pass}\tag{32}\] The resulting algorithm is summarized in Algorithm 7.
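Putting the pieces together, a sketch of the final step (residue integration, multi-bit samples, and the low-pass interpolation 32 ) is shown below; a plain sinc stands in for \(\varphi_{\Omega_{\mathbf{\Lambda}}}\), and the function name is ours.

```python
import numpy as np

def rebuild_signal(q_fold, n_k, c_k, T, h, a, b, t):
    """Sketch: q_fold = 1-bit modulo stream q_{Lambda,lambda}[n]; (n_k, c_k)
    are the estimated folding locations and weights.  Returns g_tilde(t)."""
    N = len(q_fold)
    eps_diff = np.zeros(N, dtype=complex)
    eps_diff[np.asarray(n_k, dtype=int)] = c_k
    eps = np.concatenate(([0.0], np.cumsum(eps_diff)[:-1]))        # anti-difference
    n = np.arange(N)
    m_conj = np.exp(-1j * a * (n * T * h) ** 2 / (2.0 * b))        # m_Lambda^*[n]
    q_mb = q_fold + m_conj * eps                                   # multi-bit samples
    q_up = np.conj(m_conj) * q_mb                                  # chirp-up, as in (32)
    sinc_mat = np.sinc(t[:, None] / T - n[None, :] * h)
    return h * np.exp(-1j * a * t**2 / (2.0 * b)) * (sinc_mat @ q_up)
```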
Exp. | \(\mathbf{\Lambda}\) | \(M\) | \(h\) \((\times 10^{-3})\) | \(\mathbf{\epsilon}_1\left ({{\mathbf{g}}}, {{\mathbf{\widetilde{g}}}} \right )\) \((\times 10^{-6})\) | \(\mathbf{\epsilon}_2\left ({{\mathbf{g}}}, {{\mathbf{\widetilde{g}}}} \right )\) \((\times 10^{-7})\)
---|---|---|---|---|---
\(1\) | \(\Lambda_{\mathsf{FT}}\) | \(10\) | \(6.67\) | \(0.82\) | \(6.79\)
\(2\) | \(\Lambda_{\theta}, \theta=\frac{\pi}{3}\) | \(10\) | \(6.67\) | \(2.49\) | \(6.95\)
\(3\) | \(\Lambda_{\mathsf{FrT}}, b=1\) | \(10\) | \(6.67\) | \(2.14\) | \(4.87\)
\(4\) | \(\Lambda_{\mathsf{FT}}\) | \(30\) | \(5.00\) | \(0.71\) | \(4.23\)
\(5\) | \(\Lambda_{\theta}, \theta=\frac{\pi}{16}\) | \(30\) | \(5.00\) | \(1.28\) | \(5.65\)
\(6\) | \(\Lambda_{\mathsf{FrT}}, b=2\) | \(30\) | \(5.00\) | \(1.56\) | \(4.33\)
Table 3: Reconstruction performance of the \(\mathscr{M}\) \(\Lambda\Sigma\Delta\mathrm{Q}\) architecture for inputs with \(\left\lVert g\right\rVert_\infty > 1\).

Exp. | \(\mathbf{\Lambda}\) | \(M_2 - M_1\) | \(M_3 - M_1\) | \(h\) \(\left(\times 10^{-4}\right)\) | \(\left\lVert g\right\rVert_\infty\) | \(\lambda\) | \(K\) | \(\mathbf{\epsilon}_{1}\left (c_k, \widetilde{c}_k \right )\) \(\left(\times 10^{-7}\right)\) | \(\mathbf{\epsilon}_{1}\left (\varepsilon, \widetilde{\varepsilon} \right )\) \(\left(\times 10^{-4}\right)\) | \(\mathbf{\epsilon}_{1}\left (g, \widetilde{g} \right )\) \(\left(\times 10^{-6}\right)\) | \(\mathbf{\epsilon}_{2}\left (c_k, \widetilde{c}_{k} \right )\) \(\left(\times 10^{-9}\right)\) | \(\mathbf{\epsilon}_{2}\left (\varepsilon, \widetilde{\varepsilon} \right )\) \(\left(\times 10^{-4}\right)\) | \(\mathbf{\epsilon}_{2}\left (g, \widetilde{g} \right )\) \(\left(\times 10^{-7}\right)\) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
\(1\) | \(\Lambda_{\mathsf{FT}}\) | \(400\) | \(80\) | \(2.86\) | \(1.90\) | \(0.75\) | \(10\) | \(0.54\) | \(3.06\) | \(0.43\) | \(8.79\) | \(3.06\) | \(1.19\) |
\(2\) | \(\Lambda_{\theta}, \theta = \frac{\pi}{3}\) | \(500\) | \(80\) | \(2.50\) | \(2.73\) | \(0.85\) | \(18\) | \(2.30\) | \(6.21\) | \(2.39\) | \(5.12\) | \(6.19\) | \(2.33\) |
\(3\) | \(\Lambda_{\theta}, \theta = \frac{\pi}{4}\) | \(500\) | \(120\) | \(2.00\) | \(3.74\) | \(0.85\) | \(22\) | \(1.61\) | \(6.12\) | \(6.35\) | \(8.46\) | \(6.06\) | \(2.26\) |
\(4\) | \(\Lambda_{\theta}, \theta = \frac{\pi}{16}\) | \(800\) | \(200\) | \(1.25\) | \(6.69\) | \(0.80\) | \(42\) | \(1.77\) | \(6.44\) | \(4.04\) | \(6.43\) | \(6.40\) | \(2.99\) |
\(5\) | \(\Lambda_{\mathsf{FrT}}, b = 1\) | \(600\) | \(150\) | \(1.72\) | \(2.83\) | \(0.80\) | \(16\) | \(2.25\) | \(3.38\) | \(2.03\) | \(4.55\) | \(3.36\) | \(1.03\) |
\(6\) | \(\Lambda_{\mathsf{FrT}}, b = 2\) | \(750\) | \(180\) | \(1.25\) | \(3.83\) | \(0.80\) | \(38\) | \(2.12\) | \(5.80\) | \(1.32\) | \(6.83\) | \(5.79\) | \(2.61\) |
\(7\) | \(\Lambda_{\mathsf{FrT}}, b = 3\) | \(800\) | \(200\) | \(1.00\) | \(5.49\) | \(0.80\) | \(42\) | \(1.11\) | \(5.13\) | \(1.10\) | \(1.94\) | \(5.12\) | \(2.08\) |
We validate the proposed architectures and algorithm through numerical experiments using LCT-bandlimited signals. We begin by applying the 1st- and 2nd-order \(\Lambda\Sigma\Delta\mathrm{Q}\) and compare the reconstruction performance for bounded inputs. Next, we conduct experiments with the \(\mathscr{M}\) \(\Lambda\Sigma\Delta\mathrm{Q}\) architecture using signals that significantly exceed the quantizer’s threshold. We also examine how the outband intervals \(\mathsf{M}_{\varepsilon}^{{{\mathbf{n}}}}\) and \(\mathsf{M}_{\varepsilon}^{{{\mathbf{c}}}}\) and the oversampling ratio \(h\) affect reconstruction performance. For the Fourier Transform case, we compare our recovery algorithm with the time-domain algorithm proposed in [28] and demonstrate the empirical robustness of our algorithm.
We generate \(g \in \mathcal{B}_{\Omega_{\mathbf{\Lambda}}}\) with \(\left\lVert g\right\rVert_\infty \leqslant 1\) and pass it into the 1st- and 2nd-order \(\Lambda\Sigma\Delta\mathrm{Q}\) schemes. We perform numerical experiments using three different transforms and present the results in Table 2. The metric \(\mathbf{\epsilon}_1({{\mathbf{g}}}, {{\mathbf{\widetilde{g}}}})\) denotes the reconstruction MSE of the 1st-order scheme, while \(\mathbf{\epsilon}_2({{\mathbf{g}}}, {{\mathbf{\widetilde{g}}}})\) denotes that of the 2nd-order scheme. As expected, the 2nd-order scheme reconstructs the signal with lower MSE due to its superior noise-shaping capability. The real part of the reconstruction from Experiment 6 is illustrated in Fig. 9.
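As a point of reference for these bounded-input experiments, the snippet below implements a classical first-order 1-Bit Sigma-Delta loop with a crude low-pass reconstruction. It corresponds to the conventional Fourier-domain setting only, omits the chirp (de)modulation of the \(\Lambda\Sigma\Delta\mathrm{Q}\) architecture, and all parameter values (signal, oversampling factor, filter) are illustrative.

```python
import numpy as np

def sigma_delta_1bit(x, lam=1.0):
    """First-order noise shaping: q[n] = sign(x[n] + u[n-1]), u[n] = u[n-1] + x[n] - lam*q[n]."""
    q = np.zeros_like(x)
    u = 0.0
    for n, xn in enumerate(x):
        q[n] = 1.0 if xn + u >= 0 else -1.0
        u = u + xn - lam * q[n]            # quantization error accumulates in the state u
    return q

# Oversampled bounded input with |g| <= 1 (illustrative test signal)
osf = 64                                    # illustrative oversampling factor
t = np.arange(0, 4, 1 / (64 * osf))
g = 0.6 * np.cos(2 * np.pi * t) + 0.3 * np.sin(2 * np.pi * 3 * t)
q = sigma_delta_1bit(g)

# Crude reconstruction: a moving-average low-pass filter removes most of the
# high-frequency shaped quantization noise; the MSE shrinks as osf grows.
w = np.ones(osf) / osf
g_rec = np.convolve(q, w, mode="same")
print("MSE:", np.mean((g - g_rec) ** 2))
```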
With \(\left\lVert g\right\rVert_\infty > 1\), the non-USF \(\Lambda\Sigma\Delta\mathrm{Q}\) (see Section 4) would saturate, and hence conventional recovery fails. In such scenarios, we utilize our novel \(\mathscr{M}\) \(\Lambda\Sigma\Delta\mathrm{Q}\) architecture and demonstrate its capability through simulations for three transforms: the Fourier Transform (FT), the Fractional Fourier Transform (FrFT), and the Fresnel Transform (FrT). We generate an input signal \(g[n]\) such that \(g \in \mathcal{B}_{\Omega_{\mathbf{\Lambda}}}\) and \(\left\lVert g\right\rVert_\infty > 1\). The results are presented in Table 3. Since the 2nd-order \(\Lambda\Sigma\Delta\mathrm{Q}\) offers better noise rejection, it outperforms the setup using the 1st-order scheme. Experiment 4 is illustrated in Fig. 10: the acquisition is shown in Fig. 10 (a)-(d), and the reconstruction in Fig. 10 (e)-(g).
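The ingredient that distinguishes the \(\mathscr{M}\) \(\Lambda\Sigma\Delta\mathrm{Q}\) setting is the modulo non-linearity, which folds inputs exceeding the threshold back into range before 1-Bit quantization. The snippet below illustrates a centered modulo fold with placeholder amplitude and threshold values; the downstream quantization and recovery are omitted.

```python
import numpy as np

# Illustrative centered modulo fold used in USF-type acquisition:
# inputs with ||g||_inf > lambda are mapped back into [-lambda, lambda).
def modulo_fold(x, lam):
    return np.mod(x + lam, 2 * lam) - lam

t = np.linspace(0, 1, 2048)
g = 3.5 * np.cos(2 * np.pi * 5 * t)          # amplitude well above the threshold
lam = 0.8
y = modulo_fold(g, lam)                       # folded signal fed to the 1-bit quantizer
print(np.max(np.abs(g)), np.max(np.abs(y)))   # ~3.5 versus <= 0.8
```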
We compare our method with the time-domain approach developed in [28]. Let us denote the noisy input as \[\overline{g}\left [ n \right ]= g\left [ n \right ]+ w\left [ n \right ], \quad w \sim \mathcal{N}\left (0,\sigma^2 \right )\] where the \(w\left [ n \right ]\) are i.i.d. samples drawn from a zero-mean Gaussian distribution with variance \(\sigma^2\). We omit chirp modulation since \(\mathbf{\Lambda}={\mathbf{\Lambda}_{\textsf{FT}}}\). We choose \(h\) to satisfy the recovery criterion given in [28] and set \(\sigma^2 = P_s 10^{-\mathsf{SNR}/ 10}\) where \(P_s = {N^{-1}}\sum\nolimits_{n=0}^{N-1}\left | g\left [ n \right ] \right |^2\). We compare both algorithms based on the average MSE, \(\mathbf{\epsilon}_{\textsf{avg}} = \frac{1}{PN}\sum\nolimits_{p = 0}^{P - 1}\sum\nolimits_{n=0}^{N-1} \left | g_p\left [ n \right ]- \widetilde{g}_p\left [ n \right ] \right |^2\), where \(P\) is the number of trials and \(g_p\left [ n \right ]\) denotes a randomly generated ground truth signal, \[g_p\left [ n \right ]= \frac{\left\lVert g\right\rVert_\infty \Re \left ({\mathcal{L}_{{\mathbf{\Lambda}^{-1}}}}\left [ \widehat{g}_{\mathbf{\Lambda}}\left [ m \right ] \right ] \right )}{\left\lVert\Re \left ({\mathcal{L}_{{\mathbf{\Lambda}^{-1}}}}\left [ \widehat{g}_{\mathbf{\Lambda}}\left [ m \right ] \right ] \right )\right\rVert_\infty},\quad\widehat{g}_{\mathbf{\Lambda}}\left [ m \right ]\sim \mathsf{Unif}\left (0, 1\right )\] where \(\left | m \right | \leqslant M\), \(\mathbf{\Lambda}= {\mathbf{\Lambda}_{\textsf{FT}}}\), and \(\left\lVert g\right\rVert_\infty\) is the desired amplitude. Furthermore, \(\widetilde{g}_p\left [ n \right ]\) is the corresponding signal reconstruction. Fig. 11 shows the comparison of the two methods, where we used a second-order B-spline as the filtering kernel for the time-domain method [28]. We implement our method in Alg. 7 with cyclic differences, and for spectral estimation (Step 4), we use the MPM technique [55]. As seen in Fig. 11, our transform-domain method achieves better reconstruction. In contrast, the time-domain approach relies on thresholding based on local information, which proves less stable in noisy scenarios and struggles with dense signal folds. We also demonstrate that the proposed method reduces the required oversampling, quantified by the ratio \(\frac{h}{h_{\textsf{TD}}}\). Results from three different experiments are presented in Table 4. The final experiment, illustrated in Fig. 12, shows that the proposed approach successfully reconstructs the input signal with \(19\times\) lower oversampling compared to the time-domain method.
Table 4: Comparison of the proposed transform-domain (FD) recovery with the time-domain (TD) method of [28].

Exp. | \(K\) | \(|\mathsf{M}_{\varepsilon}^{{{\mathbf{n}}}}|\) | \(|\mathsf{M}_{\varepsilon}^{{{\mathbf{c}}}}|\) | \(\frac{h}{h_\textsf{TD}}\) | \(\mathbf{\epsilon}_{\textsf{TD}}\left ({{\mathbf{g}}}, {{\mathbf{\widetilde{g}}}} \right )\) | \(\mathbf{\epsilon}_{\textsf{FD}}\left ({{\mathbf{g}}}, {{\mathbf{\widetilde{g}}}} \right )\) |
---|---|---|---|---|---|---|
\(1\) | \(2\) | \(35\) | \(7\) | \(26.78\) | \(0.53\) | \(2.27\times 10^{-3}\) |
\(2\) | \(4\) | \(110\) | \(38\) | \(20.89\) | \(0.96\) | \(7.57\times 10^{-3}\) |
\(3\) | \(8\) | \(200\) | \(32\) | \(19.67\) | \(8.23\) | \(1.11\times 10^{-2}\) |
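The comparison protocol above can be summarized in a few lines of Python. The trial-signal generator and the identity "recovery" below are simple stand-ins (the actual experiments run Alg. 7 and the time-domain method of [28]), but the SNR-to-noise-variance mapping and the trial-averaged MSE follow the definitions given above.

```python
import numpy as np

# Sketch of the Monte-Carlo comparison protocol; the random bandlimited trial
# signal and the placeholder recovery are assumptions for illustration only.
rng = np.random.default_rng(3)
P, N, M, snr_db = 50, 1024, 8, 30
n = np.arange(N)
eps_avg = 0.0
for p in range(P):
    amp, phase = rng.random(M), 2 * np.pi * rng.random(M)
    g = sum(amp[k] * np.cos(2 * np.pi * (k + 1) * n / N + phase[k]) for k in range(M))
    g /= np.max(np.abs(g))                           # normalize to the desired amplitude
    P_s = np.mean(np.abs(g) ** 2)
    sigma2 = P_s * 10 ** (-snr_db / 10)              # sigma^2 = P_s 10^{-SNR/10}
    g_noisy = g + np.sqrt(sigma2) * rng.standard_normal(N)
    g_rec = g_noisy                                   # placeholder for Alg. 7 / TD recovery
    eps_avg += np.mean(np.abs(g - g_rec) ** 2) / P    # accumulates eps_avg over P trials
print("average MSE:", eps_avg)
```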
To investigate the impact of \(\{\mathsf{M}_{\varepsilon}^{{{\mathbf{n}}}},\mathsf{M}_{\varepsilon}^{{{\mathbf{c}}}}\}\) on the reconstruction, we vary the lengths of \(\{\mathsf{M}_{\varepsilon}^{{{\mathbf{n}}}},\mathsf{M}_{\varepsilon}^{{{\mathbf{c}}}}\}\) for \(P\) randomly generated inputs. For an \(l\)th-order quantizer and a given pair \(\{\mathsf{M}_{\varepsilon}^{{{\mathbf{n}}}},\mathsf{M}_{\varepsilon}^{{{\mathbf{c}}}}\}\), we calculate the average MSE \[\overline{\mathbf{\epsilon}}_{l}\left ({{\mathbf{n}}}, {{\mathbf{c}}} \right ) = \frac{1}{P}\sum\nolimits_{p=0}^{P-1}\mathbf{\epsilon}_l\left ({{\mathbf{g}}}_{p}, \widetilde{{{\mathbf{g}}}}_{p}\left (\mathsf{M}_{\varepsilon}^{{{\mathbf{n}}}}, \mathsf{M}_{\varepsilon}^{{{\mathbf{c}}}} \right ) \right )\] where \({{\mathbf{g}}}_p\in \mathcal{B}_{\Omega_{\mathbf{\Lambda}}}\) and \(\widetilde{{{\mathbf{g}}}}_{p}\left (\mathsf{M}_{\varepsilon}^{{{\mathbf{n}}}}, \mathsf{M}_{\varepsilon}^{{{\mathbf{c}}}} \right )\) is the reconstruction for the given choice of \(\{\mathsf{M}_{\varepsilon}^{{{\mathbf{n}}}},\mathsf{M}_{\varepsilon}^{{{\mathbf{c}}}},p\}\). The first two columns of Fig. 13 show heatmaps of the MSE for different \(\{\mathsf{M}_{\varepsilon}^{{{\mathbf{n}}}},\mathsf{M}_{\varepsilon}^{{{\mathbf{c}}}}\}\) and transforms: FT, FrFT, and FrT. As expected, the heatmaps verify that the 2nd-order \(\mathscr{M}\) \(\Lambda\Sigma\Delta\mathrm{Q}\) achieves a smaller reconstruction MSE. The results show that, for the majority of \(\mathsf{M}_{\varepsilon}^{{{\mathbf{n}}}}\), the optimal \(\left | \mathsf{M}_{\varepsilon}^{{{\mathbf{c}}}} \right |\) lies in \([0.1 \left | \mathsf{M}_{\varepsilon}^{{{\mathbf{n}}}} \right |, 0.4 \left | \mathsf{M}_{\varepsilon}^{{{\mathbf{n}}}} \right |]\) for the 1st-order scheme and in \([0.15 \left | \mathsf{M}_{\varepsilon}^{{{\mathbf{n}}}} \right |, 0.6 \left | \mathsf{M}_{\varepsilon}^{{{\mathbf{n}}}} \right |]\) for the 2nd-order scheme. Hence, it is beneficial to use a larger window \(\mathsf{M}_{\varepsilon}^{{{\mathbf{n}}}}\) to estimate the folding locations and a smaller window \(\mathsf{M}_{\varepsilon}^{{{\mathbf{c}}}}\), closer to the baseband interval, to estimate the folding amplitudes. This is because frequency estimation is less sensitive to noise, while a smaller amplitude-estimation interval entails smaller quantization noise.
Here, we investigate the impact of \(h\) and \(\mathsf{M}_{\varepsilon}^{{{\mathbf{n}}}}\) on the reconstruction. For a given \(h\), \(\mathsf{M}_{\varepsilon}^{{{\mathbf{n}}}}\), and \(l\)th-order quantizer, we define the average MSE as \(\overline{\mathbf{\epsilon}}_{l}\left (h, {{\mathbf{n}}} \right ) = \frac{1}{P}\sum\nolimits_{p=0}^{P-1}\mathbf{\epsilon}_{l}\left ({{\mathbf{g}}}_{p}, \widetilde{{{\mathbf{g}}}}_{p}\left (h, \mathsf{M}_{\varepsilon}^{{{\mathbf{n}}}} \right ) \right )\) where \({{\mathbf{g}}}_p \in \mathcal{B}_{\Omega_{\mathbf{\Lambda}}}\) for each trial \(p\) and \(\widetilde{{{\mathbf{g}}}}_{p}\left (h, \mathsf{M}_{\varepsilon}^{{{\mathbf{n}}}} \right )\) is the recovery for a given \(h\), \(\mathsf{M}_{\varepsilon}^{{{\mathbf{n}}}}\), and \(p\). We compare both schemes using the MSE difference \(\mathbf{\epsilon}_\Delta\left (h, {{\mathbf{n}}} \right )= \overline{\mathbf{\epsilon}}_{1}\left (h, {{\mathbf{n}}} \right ) - \overline{\mathbf{\epsilon}}_{2}\left (h, {{\mathbf{n}}} \right ).\) The third column of Fig. 13 depicts \(\mathbf{\epsilon}_\Delta\left (h, {{\mathbf{n}}} \right )\) for an input signal with \(M = 4\), \(\left | \mathsf{M}_{\varepsilon}^{{{\mathbf{c}}}} \right | = 0.4\left | \mathsf{M}_{\varepsilon}^{{{\mathbf{n}}}} \right |\) and \(K = 10\) folds for three different transforms: FT in Fig. 13 (c), FrFT in Fig. 13 (f), and FrT in Fig. 13 (i). In all cases, the setup with the 2nd-order \(\Lambda\Sigma\Delta\mathrm{Q}\) achieves a smaller reconstruction MSE for the majority of trials. The most significant performance improvement occurs in intervals with low oversampling (see Fig. 13 (c), (f), (i)), where the 2nd-order scheme benefits from enhanced noise rejection.
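The heatmaps of Fig. 13 are produced by sweeps of this kind. The skeleton below shows only the grid-and-average structure, with `run_trial` as a hypothetical placeholder for the full acquisition and recovery chain; the grid ranges and the placeholder formula are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

def run_trial(order, h, len_Mn):
    # hypothetical placeholder per-trial MSE with the qualitative trend
    # "higher-order noise shaping is better"; a real experiment would run
    # the 1st- or 2nd-order pipeline and the recovery algorithm here
    return 10.0 ** (-order) / (h * len_Mn) * (1.0 + 0.1 * rng.standard_normal())

P = 20
h_grid = np.linspace(2.0, 20.0, 10)               # sweep over the oversampling parameter
Mn_grid = np.arange(50, 550, 50)                  # sweep over |M_eps^n|
eps_delta = np.zeros((len(h_grid), len(Mn_grid)))
for i, h in enumerate(h_grid):
    for j, len_Mn in enumerate(Mn_grid):
        e1 = np.mean([run_trial(1, h, len_Mn) for _ in range(P)])   # eps_bar_1(h, n)
        e2 = np.mean([run_trial(2, h, len_Mn) for _ in range(P)])   # eps_bar_2(h, n)
        eps_delta[i, j] = e1 - e2                  # positive entries: 2nd-order scheme wins
print(eps_delta.shape)
```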
In this paper, we present a generalized noise-shaping framework with two key areas of generalization: (1) bandwidth and (2) dynamic range. By leveraging the flexibility of the Linear Canonical Transform (LCT) domain, we have expanded the applicability of 1-Bit sampling to a wide range of signal classes beyond traditional Fourier domain assumptions. Our work extends the notion of bandwidth, enabling 1-Bit sampling schemes to handle signals that may not be bandlimited in the Fourier domain but are bandlimited in other transform domains, such as the Fresnel, fractional Fourier, or Bargmann transforms. To address the overloading or saturation problem common in current 1-Bit sampling methods, we incorporate the Unlimited Sensing Framework (USF). This novel approach separates key signal components—bandlimited input, modulo folds, and quantization noise—resulting in a new class of recovery algorithms optimized for transform-domain processing. Our approach outperforms existing time-domain methods, offering reduced oversampling requirements and enhanced robustness.
The interplay of modulo and signum non-linearities at the core of our work opens several avenues for further research: (i) A deeper theoretical understanding of noise robustness in the transform domain is crucial for identifying the fundamental limits of dynamic range improvement. (ii) The joint utilization of time-domain and frequency-domain methods has the potential to significantly enhance system performance. (iii) Since oversampling is central to 1-Bit sampling, the development of algorithms capable of efficiently handling large-scale data remains an important area of investigation.
This work is supported by the UKRI’s HASC Program under grant EP/X040569/1, the European Research Council’s Starting Grant for “CoSI-Fold” under grant 101166158, and the UKRI Future Leader’s Fellowship “Sensing Beyond Barriers via Non-Linearities” under grant MR/Y003926/1. Further details on Unlimited Sensing and materials on reproducible research are available via https://bit.ly/USF-Link. We acknowledge computational resources and support provided by the Imperial College Research Computing Service (http://doi.org/10.14469/hpc/2232).
The authors are with the Dept. of Electrical and Electronic Engineering, Imperial College London, South Kensington, London SW7 2AZ, UK (Email: {vaclav.pavlicek20,a.bhandari}@imperial.ac.uk).
Manuscript received Mon XX, 20XX; revised Mon XX, 20XX.
That is, when working with LCTs, convolution in one domain does not imply multiplication in another domain.