February 17, 2025
Diffusion models have emerged as a promising alternative to autoregressive models for modeling discrete categorical data. However, diffusion models that work directly on the discrete data space fail to fully exploit the power of iterative refinement, as the
signals are lost during transitions between discrete states. Existing continuous diffusion models for discrete data underperform compared to discrete methods, and the lack of a clear connection between the two approaches hinders the development of
effective diffusion models for discrete data. In this work, we propose a continuous diffusion model for language modeling that incorporates the geometry of the underlying categorical distribution. We establish a connection between the discrete diffusion
and continuous flow on the statistical manifold, and building on this analogy, introduce a simple diffusion process that generalizes existing discrete diffusion models. We further propose a simulation-free training framework based on radial symmetry, along
with a simple technique to address the high dimensionality of the manifold. Comprehensive experiments on language modeling benchmarks and other modalities show that our method outperforms existing discrete diffusion models and approaches the performance of
autoregressive models. The code is available at https://github.com/harryjo97/RDLM.
Discrete diffusion models [1], [2] emerged as a promising competitor to autoregressive models for the generative modeling of discrete data. These models have demonstrated competitive performance on tasks such as language modeling [3], [4] and code generation [5]. Unlike autoregressive models that generate data sequentially, diffusion models generate the sequence in parallel, allowing for bidirectional controllable generation and faster sampling.
However, discrete diffusion models do not fully harness the power of iterative refinement, which is the key to generative modeling of continuous data such as image synthesis [6], [7] and video generation [8], [9]. In discrete diffusion models, the forward process progressively corrupts data through stochastic jumps between discrete states, modeled as a Markov chain. Denoising is achieved through transitions between these discrete states, which results in the loss of informative signals during refinement. Hence, discrete diffusion models often exhibit limited generative performance and reduced controllability.
Several efforts have been made to adapt continuous diffusion models for discrete data, motivated by their advantages in controllability [10], efficient sampling [11], [12], optimized design choices [13], [14], and the potential to unify different modalities [15], [16]. However, their performance often significantly lags behind that of discrete diffusion models. Early methods [17], [18] extended image diffusion models to discrete domains by applying unconstrained continuous relaxation. Other approaches [19], [20] project discrete data onto the probability simplex using the Dirichlet distribution as its prior over categorical distributions, but often fail to capture complex patterns. Recent works [21], [22] apply flow matching on the statistical manifold to learn categorical distributions, but these methods are limited to short sequences and small vocabularies. In particular, the connection between discrete and continuous diffusion remains poorly understood, hindering the development of a unified diffusion framework.
In this work, we present Riemannian Diffusion Language Model (RDLM), a continuous diffusion framework for language modeling that incorporates the geometry of the statistical manifold into the diffusion processes. We establish a connection between continuous flow on the statistical manifold and the discrete diffusion process, showing that the transition distribution can be modeled as a conditional flow on the manifold. Based on the analogy, we introduce a simple design of the diffusion processes on the manifold that generalizes previous discrete diffusion models. We further present a simulation-free training scheme that leverages radial symmetry, consisting of a simple parameterization and maximum likelihood-based training objectives. Through experiments on language modeling, image modeling, and biological sequence design, we validate that our framework outperforms existing discrete and continuous diffusion models.
Discrete diffusion models [1]–[4] define the diffusion process directly on discrete states using Markov chains. The forward process describes the transition from the current state to other states, which is formalized by multiplying the transition matrix \(Q_t\): \[\begin{align} q(x_t|x_{t-1}) = \text{Cat}(x_t; {Q}_t x_{t-1}), \phantom{0^{0}_0} \end{align}\] where \(x_t\) is the random variable for the discrete states and \(\text{Cat}(\cdot)\) denotes the categorical distribution. The marginal distribution corresponds to repeatedly multiplying transition matrices over time steps: \[\begin{align} q(x_t|x) = \text{Cat}(x_t; \bar{Q}_t x) = \text{Cat}(x_t;Q_t\cdots Q_1 x). \phantom{0^{0}_0} \label{eq:discrete95transition} \end{align}\tag{1}\] [1] introduced several designs of the transition matrices, including masked (absorbing state) and uniform diffusion; this framework has since been extended to continuous-time Markov chains (CTMC) [1], [23].
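For illustration only (not part of the paper's implementation), the sketch below builds single-step absorbing-state transition matrices with hypothetical per-step masking probabilities and multiplies them to obtain the marginal of Eq. 1:

```python
import numpy as np

def absorbing_Q(beta, d, mask_idx):
    """Single-step absorbing (masked) transition matrix: a non-mask state stays
    put with probability 1 - beta and jumps to the mask with probability beta.
    Columns index the current state, matching the Q_t x convention of Eq. (1)."""
    Q = (1.0 - beta) * np.eye(d)
    Q[mask_idx, :] = beta
    Q[mask_idx, mask_idx] = 1.0   # the mask state is absorbing
    return Q

# Marginal q(x_t | x) = Cat(Q_t ... Q_1 x) for a toy 3-step chain, d = 4 tokens + 1 mask.
d, mask_idx = 5, 4
x = np.eye(d)[1]                          # one-hot encoding of the starting token
Q_bar = np.eye(d)
for beta in [0.1, 0.2, 0.3]:              # hypothetical per-step masking probabilities
    Q_bar = absorbing_Q(beta, d, mask_idx) @ Q_bar
print(Q_bar @ x)                          # -> [0, 0.504, 0, 0, 0.496]
```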
Let \(\mathcal{X}=\{1,\cdots,d\}\) denote the discrete data space, and let \(\Delta^{d-1}=\{(p_1,\cdots, p_d)\in\mathbb{R}^d|\sum_i p_i=1, p_i\geq0\}\) denote the \((d-1)\)-dimensional probability simplex. A categorical distribution over \(\mathcal{X}\) can be parameterized by the parameters \(p_1,\cdots,p_d\) satisfying \(\sum_i p_i=1\) and \(p_i \geq 0\). The statistical manifold \(\mathcal{P}(\mathcal{X})\) of the categorical distributions thus corresponds to the simplex \(\Delta^{d-1}\) equipped with the Fisher-Rao metric [24], [25] (see Appendix 9.1). There exists a diffeomorphism from the statistical manifold \(\mathcal{P}(\mathcal{X})\) to the positive orthant of the \((d-1)\)-dimensional sphere \(\mathbb{S}^{d-1}_{+}\): \[\begin{align} \begin{aligned} \pi: \mathcal{P}(\mathcal{X}) \rightarrow \mathbb{S}^{d-1}_{+} ;\; p_i\mapsto u_i=\sqrt{p_i}, \end{aligned} \label{eq:diffeomorphism} \end{align}\tag{2}\] which induces the geodesic distance \(d_g(\boldsymbol{u},\boldsymbol{v}) \!=\! \cos^{\scalebox{0.75}[1.0]{-}1}\langle\boldsymbol{u}, \boldsymbol{v}\rangle\) for \(\boldsymbol{u},\boldsymbol{v}\in \mathbb{S}^{d-1}_{+}\), where \(\langle\cdot,\cdot\rangle\) denotes the Euclidean inner product. We provide a more detailed explanation in Appendix 9.1.
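As a minimal illustration (a hypothetical NumPy sketch, not the authors' code), the diffeomorphism of Eq. 2 and the induced geodesic distance can be computed directly:

```python
import numpy as np

def simplex_to_sphere(p):
    """Diffeomorphism pi of Eq. (2): categorical parameters p on the simplex
    are mapped to u = sqrt(p) on the positive orthant of the unit sphere."""
    return np.sqrt(p)

def sphere_to_simplex(u):
    """Inverse map pi^{-1}: u_i -> p_i = u_i^2."""
    return u ** 2

def geodesic_distance(u, v):
    """Geodesic distance d_g(u, v) = arccos(<u, v>) on the hypersphere."""
    return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))

# Distance between two categorical distributions over d = 3 states (illustrative values).
p = np.array([0.7, 0.2, 0.1])
q = np.array([0.1, 0.1, 0.8])
print(geodesic_distance(simplex_to_sphere(p), simplex_to_sphere(q)))
```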
We introduce a novel continuous diffusion model for language modeling. In this section, we present a single token generation framework, which we generalize to modeling sequences in Section 4.
To incorporate the geometry of the underlying categorical distribution, we leverage the statistical manifold to parameterize discrete data [21], [22]. Each point on the statistical manifold \(\mathcal{P}(\mathcal{X})\) corresponds to the parameters of a categorical distribution over the discrete sample space \(\mathcal{X}=\{1,\cdots,d\}\). In this way, discrete data can be represented as continuous parameters of categorical distributions on the manifold.
Yet the Fisher-Rao metric is ill-defined on the boundary of the manifold where the initial distribution of the parameterized data lies, leading to numerical instabilities near the boundary. To address this, we leverage the diffeomorphism \(\pi\) (Eq. 2 ) which maps \(\mathcal{P}(\mathcal{X})\) to the positive orthant of a hypersphere \(\mathbb{S}^{d-1}_{+}\) [21], [22], where each point \(\boldsymbol{u}\in\mathbb{S}^{d-1}_{+}\) corresponds to \(\text{Cat}(\cdot;\pi^{\scalebox{0.75}[1.0]{-}1}(\boldsymbol{u}))\). This mapping enables discrete data to be reparameterized as continuous states on \(\mathbb{S}^{d-1}\) while preserving the geometry of the categorical distribution, which we illustrate in Figure 1 (a). The reparameterized data distribution \(p_{data}\) on the hypersphere can be written as \(p_{data}(x) = \sum^{d}_{k=1} p_k \delta(x \!-\! {\boldsymbol{e}_k})\) where \(p_k\) denotes the probability of the \(k\)-th state, and \(e_k\) are \(d\)-dimensional one-hot vectors. In the case of masked diffusion, the discrete sample space is augmented with an additional mask state \(m\).
Our key observation is that the transition distribution \(q_t(x_t|x)\) of a discrete diffusion process (Eq. 1 ) is a categorical distribution on \(\mathcal{X}\). Therefore, modeling \(q_t\) is equivalent to modeling continuous flow on the statistical manifold \(\mathcal{P}(\mathcal{X})\). We show in the following proposition that discrete diffusion models over \(\mathcal{X}\) can be modeled by a continuous flow on \(\mathcal{P}(\mathcal{X})\) and further on \(\mathbb{S}^{d-1}_{+}\) (we provide the full proof in Appendix 9.2).
Proposition 1. The transition distribution of discrete diffusion processes can be modeled by the continuous flow on the statistical manifold, and further on the hypersphere.
A flow on \(\mathbb{S}^{d-1}_{+}\) that interpolates \(\boldsymbol{y}_0\) and \(\boldsymbol{y}_1\) along the geodesic is described by the ODE: \[\begin{align} \frac{\mathrm{d}\boldsymbol{Y}_t}{\mathrm{d}t} = -\frac{\mathrm{d}\log \kappa_t}{\mathrm{d}t} \exp^{\scalebox{0.75}[1.0]{-}1}_{\boldsymbol{Y}_t}(\boldsymbol{y}_1), \;\; \boldsymbol{Y}_0 = \boldsymbol{y}_0, \end{align}\] where \(\exp^{\scalebox{0.75}[1.0]{-}1}\) denotes the logarithm map on the hypersphere. Then, for a well-designed schedule \(\kappa_t\) and endpoint \(\boldsymbol{y}_1\), the process \(\boldsymbol{Z}_t\mathrel{\vcenter{:}}= \pi^{\scalebox{0.75}[1.0]{-}1}(\boldsymbol{Y}_t)\) on \(\mathcal{P}(\mathcal{X})\) corresponds to the transition distribution of the discrete diffusion process. In particular, we obtain the masked diffusion process for \(\boldsymbol{y}_1=\boldsymbol{e}_m\), i.e., the mask token, and the uniform diffusion process for \(\boldsymbol{y}_1=\sum^{d}_{i=1} \boldsymbol{e}_i/\sqrt{d}\).
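The flow above admits a closed-form solution as a spherical interpolation (Lemma 1 in Appendix 9.2). The sketch below is a hedged NumPy illustration of that closed form; the schedule value \(\alpha_t = 0.6\) is a placeholder, and recovering the masked-diffusion marginal via \(\pi^{-1}\) follows Proposition 2:

```python
import numpy as np

def flow_state(y0, y1, kappa_t):
    """Closed-form solution of the geodesic flow (Lemma 1):
    Y_t = sin(theta_0 - theta_t)/sin(theta_0) * y1 + sin(theta_t)/sin(theta_0) * y0,
    with theta_t = kappa_t * arccos(<y0, y1>)."""
    theta0 = np.arccos(np.clip(np.dot(y0, y1), -1.0, 1.0))
    theta_t = kappa_t * theta0
    return (np.sin(theta0 - theta_t) * y1 + np.sin(theta_t) * y0) / np.sin(theta0)

# Masked-diffusion example: d = 3 tokens plus a mask, y0 = e_k, y1 = e_m.
d = 4
e_k, e_m = np.eye(d)[0], np.eye(d)[-1]
alpha_t = 0.6                                     # placeholder schedule value
kappa_t = (2.0 / np.pi) * np.arcsin(np.sqrt(alpha_t))
Y_t = flow_state(e_k, e_m, kappa_t)
print(Y_t ** 2)                                   # pi^{-1}(Y_t) = [0.6, 0, 0, 0.4]
```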
Although discrete diffusion processes can be represented as a flow on the statistical manifold, this flow cannot be learned by a neural network. The network fails to generalize to points outside the geodesic that interpolates the prior and the data distribution, producing an incorrect vector field. Therefore, we present a simple design for the continuous diffusion model that generalizes existing discrete diffusion models.
The task of modeling the distribution of discrete data can be reformulated as modeling a distribution \(p_{data}\) on the hypersphere. Building upon the Riemannian diffusion mixture framework [26], we construct a diffusion process on \(\mathbb{S}^{d-1}\) such that its terminal distribution matches \(p_{data}\). The construction entails deriving a diffusion mixture representation based on bridge processes defined on \(\mathbb{S}^{d-1}\).
We first derive a bridge process \(\{\bar{\boldsymbol{X}}_t\}^T_{t=0}\) on \(\mathbb{S}^{d-1}\) from an arbitrary point \(\boldsymbol{x}_0\in\mathbb{S}^{d-1}\) to \(\boldsymbol{e}_k\) as follows (we provide detailed derivation in Appendix 9.3): \[\begin{align} \mathrm{d}\bar{\boldsymbol{X}}_t = \gamma_t \frac{\cos^{\scalebox{0.75}[1.0]{-}1}\langle\bar{\boldsymbol{X}}_t, \boldsymbol{e}_k \rangle (\boldsymbol{e}_k - \langle\bar{\boldsymbol{X}}_t, \boldsymbol{e}_k \rangle \bar{\boldsymbol{X}}_t)}{\sqrt{1 - \langle\bar{\boldsymbol{X}}_t, \boldsymbol{e}_k \rangle^2}} \mathrm{d}t + \sigma_t\mathrm{d}\mathbf{B}^{d}_t \;,\;\; \bar{\boldsymbol{X}}_0=\boldsymbol{x}_0, \label{eq:logarithm95bridge} \end{align}\tag{3}\] where \(\gamma_t \!\mathrel{\vcenter{:}}=\! \sigma^2_t / \int^T_t\sigma^2_s\mathrm{d}s\) and \(\mathbf{B}^{d}_t\) denotes the Brownian motion defined on \(\mathbb{S}^{d-1}\). Intuitively, the current state \(\boldsymbol{X}_t\) moves in the direction that minimizes the geodesic distance to the endpoint, resulting in a process that bridges the starting and end points. While different forms of the bridge process exist, for example, scaling the drift or the diffusion coefficients, Eq. 3 yields a specific transition distribution that enables simulation-free training, which we explain in Section 3.3.
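For intuition, Eq. 3 can be simulated with a geodesic random walk (Euler-Maruyama with an exponential-map retraction). This is an illustrative sketch, not the training procedure: the noise schedule and step counts are hypothetical, and \(\gamma_t\) is approximated with a Riemann sum.

```python
import numpy as np

def exp_map(u, x):
    """Exponential map on the unit sphere; x lies in the tangent space at u."""
    n = np.linalg.norm(x)
    if n < 1e-12:
        return u
    return np.cos(n) * u + np.sin(n) * x / n

def log_map(u, v):
    """Logarithm map exp_u^{-1}(v) on the unit sphere."""
    c = np.clip(np.dot(u, v), -1.0, 1.0)
    return np.arccos(c) * (v - c * u) / np.sqrt(max(1.0 - c ** 2, 1e-12))

def simulate_bridge(x0, e_k, sigma, T=1.0, n_steps=500, seed=0):
    """Geodesic random walk for the logarithm bridge of Eq. (3); gamma_t is
    approximated by a Riemann sum of sigma_s^2 over the remaining interval."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    sig2 = sigma(np.linspace(0.0, T, n_steps + 1)) ** 2
    x = x0.copy()
    for i in range(n_steps - 1):            # stop one step early: the drift diverges at T
        gamma = sig2[i] / (sig2[i:].sum() * dt)
        drift = gamma * log_map(x, e_k)     # pulls the state toward the endpoint e_k
        noise = rng.standard_normal(x.shape) * np.sqrt(sig2[i] * dt)
        noise -= np.dot(noise, x) * x       # project the noise onto the tangent space at x
        x = exp_map(x, drift * dt + noise)
    return x

# Illustrative run: bridge from the mask token toward e_0 on S^{d-1}.
d = 8
e_k, e_m = np.eye(d)[0], np.eye(d)[-1]
sigma = lambda t: 0.3 ** (1.0 - t) * 1.0 ** t      # geometric schedule, sigma_0 < sigma_T
x_end = simulate_bridge(e_m, e_k, sigma)
print(np.dot(x_end, e_k))                          # close to 1: the bridge (nearly) reaches e_k
```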
From the bridge processes, we construct a generative process \(\{\boldsymbol{X}_t\}^T_{t=0}\) on \(\mathbb{S}^{d-1}\) using the diffusion mixture representation (see Appendix 9.4 for the formal definition of the diffusion mixture representation and the derivation of the generative process in Corollary 3): \[\begin{align} \mathrm{d}\boldsymbol{X}_t = \left[ \, \sum^d_{k=1} p_{T|t}(\boldsymbol{e}_k|\boldsymbol{X}_t)\, \eta^{k}(\boldsymbol{X}_t,t) \right] \mathrm{d}t + \sigma_t\mathrm{d}\mathbf{B}^{d}_t, \; \boldsymbol{X}_0 = \boldsymbol{x}_0 , \label{eq:bridge95mixture} \end{align}\tag{4}\] where \(\eta^{k}(\cdot,t)\) denote the drift of the bridge process in Eq. 3 . Here, \(p_{T|t}(\boldsymbol{e}_k|\boldsymbol{X}_t)\) represents the probability that \(\boldsymbol{e}_k\) will be the final outcome of the process at time \(T\), given the current state \(\boldsymbol{X}_t\) at time \(t\). Note that the construction guarantees the terminal distribution of the process to be \(p_{data}\).
An ideal generative process is one that gradually refines the uninformative states to recover the original tokens. We analyze the convergence of the bridge process through its radial process \(r^k_t\mathrel{\vcenter{:}}= d_g(\bar{\boldsymbol{X}}_t,\boldsymbol{e}_k)\) described by the following SDE (see Appendix [app:derivation:radial] for the derivation using Itô’s formula): \[\begin{align} \mathrm{d}r^k_t = \left[ -\gamma_t r^k_t + \frac{(d-1)\sigma^2_t}{2}\cot r^k_t \right]\mathrm{d}t + \sigma_t\mathrm{d}W_t, \;\; r^k_0 = \cos^{\scalebox{0.75}[1.0]{-}1}\langle\boldsymbol{x}_0,\boldsymbol{e}_k\rangle, \end{align}\] where \(W_t\) is a 1-dimensional Wiener process. For \(\sigma_0>\sigma_T\), the radial process converges rapidly in early time steps, making it difficult for a neural network to approximate accurately. We empirically find that the geometric schedule \(\sigma_t = \sigma_0^{T-t}\sigma_T^{t}\) with \(\sigma_0<\sigma_T\) leads to gradual convergence.
Based on Proposition [prop:discrete95generalize], initializing the generative process in Eq. 4 with the mask token, i.e., \(\boldsymbol{X}_0 \!=\! \boldsymbol{e}_m\), yields a mixture process that generalizes the discrete masked diffusion framework. The diffusion process starts at the mask token and progressively evolves toward one of the target tokens \(\boldsymbol{e}_k\), as visualized in Figure 1 (b). From the perspective of the discrete diffusion model, our mixture process smoothly interpolates the discrete jump from \(\boldsymbol{e}_m\) to \(\boldsymbol{e}_k\) through intermediate continuous states \(\boldsymbol{X}_t\), where the final token is determined by the probability \(p_{T|t}(\boldsymbol{e}_k|\boldsymbol{X}_t)\).
The fundamental difference is that discrete masked diffusion operates through direct jumps between a token and the mask token, where any incorrect transition is irreversible. In contrast, our continuous approach allows for gradual transitions, providing numerous opportunities to correct wrong predictions during the process. This leads to more accurate modeling of the underlying data distribution.
Based on Proposition [prop:discrete95generalize], the generalization of uniform diffusion can be achieved by initializing the generative process of Eq. 4 with the barycenter of the simplex \(\Delta^{d-1}\) projected onto \(\mathbb{S}^{d-1}\), i.e., \(\boldsymbol{X}_0 \!=\! \pi( \sum^d_{i=1} \boldsymbol{e}_i/d) \!=\! \sum^d_{i=1} \boldsymbol{e}_i / \sqrt{d}\). We visualize the diffusion process in Figure 1 (b). Intuitively, the barycenter of \(\Delta^{d-1}\) corresponds to the uniform categorical distribution over \(d\) categories, which serves as the stationary distribution of the discrete uniform diffusion process.
We derive a new family of generative processes by constructing a mixture over the time marginals of generative processes \(\{\mathbb{Q}^i_t\!: 1\leq i\leq n\}\) (see Appendix 9.5 for derivation): \[\begin{align} \mathbb{Q}^{mix}_t \mathrel{\vcenter{:}}= \sum^{n}_{i=1} \lambda^{i}_t \mathbb{Q}^i_t \;\;,\;\; \sum^{n}_{i=1} \lambda^{i}_t = 1 \,,\; 0\leq \lambda^i_t \leq 1 \,, \label{eq:mixture95path} \end{align}\tag{5}\] where \(\lambda^i_t\) is the time-dependent mixing schedule assigned to the \(i\)-th generative path. This construction allows the resulting process to transition between different generative behaviors over time.
In particular, we propose a simple yet effective mixture path built from mixing the time marginals of the masked diffusion and uniform diffusion, for a time-dependent schedule \(\lambda_t\) as follows: \[\begin{align} \lambda_t\mathbb{Q}^{mask}_t + (1-\lambda_t)\mathbb{Q}^{unif}_t, \label{eq:mixture95path95mask95unif} \end{align}\tag{6}\] with initial distribution \(\lambda_0 \delta(\boldsymbol{e}_m) + (1-\lambda_0) \delta(\sum^d_{i=1} \boldsymbol{e}_i / \sqrt{d})\). This formulation generalizes the mixture paths used in discrete flow matching [27] and the state-dependent schedule [3].
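As a small sketch (not from the paper; the mask token is assumed to sit at the last coordinate), the initial state of the mask-uniform mixture path can be sampled from the two-point initial distribution above:

```python
import numpy as np

def sample_mixture_init(batch_size, d, lambda_0, seed=0):
    """Sample X_0 from lambda_0 * delta(e_m) + (1 - lambda_0) * delta(barycenter),
    the initial distribution of the mask/uniform mixture path of Eq. (6).
    The mask token is assumed to occupy the last coordinate."""
    rng = np.random.default_rng(seed)
    e_m = np.eye(d)[-1]
    barycenter = np.ones(d) / np.sqrt(d)     # uniform categorical mapped onto S^{d-1}
    use_mask = rng.random(batch_size) < lambda_0
    return np.where(use_mask[:, None], e_m, barycenter)

print(sample_mixture_init(4, d=6, lambda_0=0.5))
```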
Notably, our framework generalizes previous flow matching methods on the statistical manifold [21], [22]. By designing the noise schedule in Eq. 3 to be \(\sigma_t \equiv \sigma_0\rightarrow 0\), we obtain the conditional vector field of the flow matching models.
Next, we introduce our training scheme. We present a simple parameterization of our generative model and derive the likelihood bound and training objectives. Further, we present a simulation-free training method based on the radial symmetry of the hypersphere.
To use the diffusion process in Eq. 4 as a generative model, its unknown drift should be learned through a neural network, similarly to flow matching [28], [29] or bridge matching [26]. Yet the drift of the mixture process diverges near the terminal time \(T\), which makes it challenging to learn. Therefore, instead of approximating the drift function directly, we propose to model the probability \(p_{T|t}(\boldsymbol{X}_T|\boldsymbol{X}_t)\) with a neural network \(\boldsymbol{s}_{\theta}\) as follows: \[\begin{align} \begin{aligned} p_{\theta}(\boldsymbol{X}_t,t) \mathrel{\vcenter{:}}=\texttt{softmax}\left( \boldsymbol{s}_{\theta}(\boldsymbol{X}_t,t) \right) = \Big[ p_{T|t}(\boldsymbol{e}_1|\boldsymbol{X}_t), \cdots, p_{T|t}(\boldsymbol{e}_d|\boldsymbol{X}_t) \Big]^{\text{T}}, \end{aligned} \label{eq:prob95parameterization} \end{align}\tag{7}\] which converges to \(\boldsymbol{e}_k\) for some \(k\) as \(t\rightarrow T\). In the case of masked diffusion, we set the probability \(p_{T|t}(\boldsymbol{e}_m|\boldsymbol{X}_t)\) to be zero for all \(t\), indicating that the final state cannot be a mask token. From Eq. 7 , the drift of the mixture process in Eq. 4 is parameterized as follows: \[\begin{align} \eta_{\theta}(\boldsymbol{X}_t,t) = \sum^{d}_{k=1} \big\langle p_{\theta}(\boldsymbol{X}_t,t), \boldsymbol{e}_k\big\rangle \eta^{k}(\boldsymbol{X}_t,t). \label{eq:drift95parameterization} \end{align}\tag{8}\] Our parameterization shares similar properties with the discrete masked diffusion [4]: (1) Zero Mask Probabilities. The final state cannot be a mask token. (2) Carry-Over Unmasking. If \(\boldsymbol{X}_t\) converges to a token \(\boldsymbol{e}_k\) before the terminal time, \(\eta_{\theta}\) converges to zero, and the state \(\boldsymbol{X}_t\) is carried over without changing to a different token.
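A hedged NumPy sketch of Eqs. 7 and 8 for a single state is given below; `s_theta` is a hypothetical stand-in for the network, and the mask probability is zeroed by setting its logit to \(-\infty\):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def bridge_drifts(x, gamma_t):
    """eta^k(x, t) = gamma_t * exp_x^{-1}(e_k) for every one-hot endpoint e_k.
    Returns a (d, d) array whose k-th row is eta^k(x, t)."""
    c = np.clip(x, -1.0, 1.0)                   # c[k] = <x, e_k>
    scale = np.arccos(c) / np.sqrt(np.maximum(1.0 - c ** 2, 1e-12))
    return gamma_t * scale[:, None] * (np.eye(len(x)) - c[:, None] * x[None, :])

def parameterized_drift(s_theta, x, t, gamma_t, mask_idx=None):
    """Drift of Eq. (8): bridge drifts weighted by p_theta(x, t) = softmax(s_theta(x, t))."""
    logits = np.array(s_theta(x, t), dtype=float)
    if mask_idx is not None:
        logits[mask_idx] = -np.inf              # zero mask probability (Eq. (7))
    p = softmax(logits)
    return p @ bridge_drifts(x, gamma_t)        # sum_k p_k * eta^k(x, t)

# Toy check with a stand-in "network" that ignores its input (hypothetical values).
d = 5
x = np.ones(d) / np.sqrt(d)
s_theta = lambda x, t: [2.0, 0.0, 0.0, 0.0, -1.0]
print(parameterized_drift(s_theta, x, t=0.5, gamma_t=1.0, mask_idx=d - 1))
```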
We derive a tractable upper bound on the negative likelihood of our generative model by applying the Girsanov theorem on compact manifolds ([30], Corollary H.3). Specifically, we first establish a point-wise upper bound on the negative log-likelihood under the parameterized mixture process \(\mathbb{Q}^{\theta}\), using the KL divergence between \(\mathbb{Q}^{\theta}\) and a bridge process \(\mathbb{Q}^{k}\), which is conditioned on endpoints \(\boldsymbol{x}_0\) and \(\boldsymbol{e}_k\). Applying the Girsanov theorem, we obtain the following variational upper bound (we provide a detailed derivation in Appendix 9.6): \[\begin{align} -\log \hat{p}_{\theta}(\boldsymbol{e}_k) = D_{KL}(\mathbb{Q}^k_T \| \mathbb{Q}^{\theta}_T ) \leq \mathbb{E}_{\boldsymbol{X}\sim\mathbb{Q}^{k}} \left[ \frac{1}{2}\int^T_0 \bigg\| \sigma_t^{-1} \Big( \eta_{\theta}(\boldsymbol{X}_t,t) - \eta^{k}(\boldsymbol{X}_t,t) \Big) \bigg\|^2_2 \mathrm{d}t \right] \end{align}\] where \(\eta^k\) is the drift defined in Eq. 3 . The point-wise likelihood bound yields an upper bound on the negative log-likelihood of our generative model parameterized by \(p_{\theta}\): \[\begin{align} \mathbb{E}_{\boldsymbol{z}\sim p_{data}}\big[-\log \hat{p}_{\theta}(\boldsymbol{z})\big] \leq \mathbb{E}_{\substack{\boldsymbol{e}_k\sim p_{data} \\ \boldsymbol{X}\sim\mathbb{Q}^{k}}} \left[\frac{1}{2}\int^T_0 \bigg\| \sigma_t^{-1} \Big( \eta_{\theta}(\boldsymbol{X}_t,t) - \eta^{k}(\boldsymbol{X}_t,t) \Big) \bigg\|^2_2 \mathrm{d}t \right]. \label{eq:elbo} \end{align}\tag{9}\]
Based on the likelihood bound in Eq. 9 , we introduce a maximum likelihood training objective for the model parameterization \(p_{\theta}\) in Eq. 7 : \[\begin{align} \mathcal{L}(\theta) &= \mathbb{E}_{\substack{\boldsymbol{e}_k\sim p_{data} \\ \boldsymbol{X}\sim\mathbb{Q}^{k}}} \left[ \frac{1}{2} \int^T_0 \sigma_t^{-2} \Bigg\| \sum^d_{l=1} \big\langle p_{\theta}(\boldsymbol{X}_t,t), \boldsymbol{e}_l \big\rangle \eta^l(\boldsymbol{X}_t,t) - \eta^k(\boldsymbol{X}_t,t) \Bigg\|^2_2 \mathrm{d}t \right] . \label{eq:mixture95objective} \end{align}\tag{10}\] This objective corresponds to minimizing the mean squared error in approximating the drift term.
In particular, \(\mathcal{L}(\theta)\) can be minimized by reducing the cross-entropy between the predicted probability \(p_{\theta}(\boldsymbol{X}_t,t)\) and the target one-hot vector \(\boldsymbol{e}_k\). Therefore we present a cross-entropy-based training objective, analogous to those used in discrete diffusion models [3], [4]: \[\begin{align} \mathcal{L}^{CE}(\theta) = \mathbb{E}_{\substack{\boldsymbol{e}_k\sim p_{data} \\ \boldsymbol{X}\sim\mathbb{Q}^{k}}} \left[ \int^T_0 -\log \big\langle p_{\theta}(\boldsymbol{X}_t,t), \boldsymbol{e}_k \big\rangle \mathrm{d}t \right]. \label{eq:ce95objective} \end{align}\tag{11}\] We show in Appendix 9.7 that minimizing the cross-entropy-based objective in Eq. 11 leads to minimizing \(\mathcal{L}(\theta)\), thereby ensuring maximum likelihood training. We experimentally find that the cross-entropy loss \(\mathcal{L}^{CE}(\theta)\) yields faster convergence in training and leads to better performance than the mean squared error loss \(\mathcal{L}(\theta)\).
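A minimal sketch of a Monte Carlo estimate of Eq. 11 (assuming the time integral is estimated by the sampled \(t\) attached to each batch element; the logits and batch are hypothetical):

```python
import numpy as np

def log_softmax(z):
    m = z.max(axis=-1, keepdims=True)
    return z - m - np.log(np.exp(z - m).sum(axis=-1, keepdims=True))

def cross_entropy_loss(logits, targets, mask_idx=None):
    """Monte Carlo estimate of L^CE (Eq. 11): -log <p_theta(X_t, t), e_k>, averaged
    over a batch of sampled (X_t, t, e_k); each batch element carries its own t."""
    logits = np.array(logits, dtype=float)
    if mask_idx is not None:
        logits[:, mask_idx] = -np.inf            # the final state cannot be the mask token
    log_p = log_softmax(logits)
    return -log_p[np.arange(len(targets)), targets].mean()

# Toy usage with random "network outputs" for a batch of 4 tokens over d = 10 states.
rng = np.random.default_rng(0)
logits = rng.standard_normal((4, 10))
targets = np.array([1, 3, 3, 7])
print(cross_entropy_loss(logits, targets, mask_idx=9))
```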
The difficulty of approximating the probability \(p_{T|t}(\boldsymbol{X}_T|\boldsymbol{X}_t)\) varies significantly across different time points \(t\). While predicting \(\boldsymbol{X}_T\) is fairly easy in the later stage of the process, it is challenging to do so during the middle of the process. Training can be improved by focusing more on these challenging time points. We derive an equivalent objective by applying importance sampling over \(t\), which reweights the time distribution to focus on a specific interval: \[\begin{align} \mathcal{L}^{CE}_{q}(\theta) = \mathbb{E}_{\substack{\boldsymbol{e}_k\sim p_{data} \\ \boldsymbol{X}\sim\mathbb{Q}^{k}}} \mathbb{E}_{t\sim q} \Big[ -q(t)^{\scalebox{0.75}[1.0]{-}1} \log \big\langle p_{\theta}(\boldsymbol{X}_t,t), \boldsymbol{e}_k \big\rangle \Big] \label{eq:importance95mixture95objective} \end{align}\tag{12}\] where \(q\) is a normalized proposal distribution over \(t\). We find that a simple choice \(q(t) = \epsilon + (1-2\epsilon) \mathbf{1}_{[a,b]}(t)\) with small \(\epsilon\) effectively concentrates sampling within the desired time interval.
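One possible reading of this proposal (an assumption on how the normalization is carried out, not the authors' implementation) is the piecewise-constant density below, together with the \(1/q(t)\) reweighting:

```python
import numpy as np

def make_proposal(a, b, eps, T=1.0):
    """Proposal q(t) proportional to eps + (1 - 2*eps) * 1_{[a,b]}(t), normalized over
    [0, T]. Returns a sampler and the density used for the 1/q(t) reweighting."""
    Z = eps * T + (1.0 - 2.0 * eps) * (b - a)          # normalizing constant

    def density(t):
        return (eps + (1.0 - 2.0 * eps) * ((t >= a) & (t <= b))) / Z

    def sample(n, rng):
        p_in = (1.0 - eps) * (b - a) / Z               # mass of the focused interval [a, b]
        inside = rng.random(n) < p_in
        u = rng.uniform(0.0, T - (b - a), n)           # uniform over [0, a] U [b, T]
        t_out = np.where(u < a, u, u + (b - a))
        t_in = rng.uniform(a, b, n)
        return np.where(inside, t_in, t_out)

    return sample, density

sample, q = make_proposal(a=0.3, b=0.8, eps=0.05)      # illustrative interval and epsilon
rng = np.random.default_rng(0)
t = sample(8, rng)
print(t, 1.0 / q(t))                                   # sampled times and their weights
```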
Our training objective involves sampling \(\boldsymbol{X}_t\) from the bridge processes at each iteration. Yet this introduces a significant bottleneck during training, as it requires simulating the process due to its intractable transition distribution on the \(d\)-dimensional sphere. Therefore, we present an approximate sampling method that bypasses the need for simulation, thereby enabling scalable training across large vocabularies.
We propose to approximate the distribution \(p(\boldsymbol{X}_t|\boldsymbol{X}_0,\!\boldsymbol{X}_T)\) as the push-forward of a Gaussian distribution on the tangent space via the exponential map, i.e., the Riemannian normal. This approximation is justified by the fact that Eq. 3 results from applying a time change [31] to a simple bridge process (Eq. 26 ), which yields a transition distribution similar to the Riemannian normal.
We parameterize the mean of the Riemannian normal distribution as \(\boldsymbol{\mu}_t\mathrel{\vcenter{:}}= \mathbb{E}\boldsymbol{X}_{t}/\|\mathbb{E}\boldsymbol{X}_{t}\|\) and its covariance \(\boldsymbol{\Sigma}_t\mathrel{\vcenter{:}}= \text{Cov}\left[ \exp^{\scalebox{0.75}[1.0]{-}1}_{\boldsymbol{\mu}_t}(\boldsymbol{X}_t) \right]\), using the parameters \(\alpha_t\) and \(\rho_t\) as follows: \[\begin{align} \boldsymbol{\mu}_{t} = \frac{\alpha_t}{\sin\phi_0}\boldsymbol{X}_T + \left( \sqrt{1-\alpha_t^2} - \frac{\alpha_t\cos\phi_0}{\sin\phi_0} \right)\boldsymbol{X}_0 \;,\;\; \boldsymbol{\Sigma}_t = \rho_{t}^2 \mathbf{I} , \label{eq:riemannian95normal} \end{align}\tag{13}\] where \(\phi_0\mathrel{\vcenter{:}}= \cos^{\scalebox{0.75}[1.0]{-}1}\langle\boldsymbol{X}_0,\boldsymbol{X}_T\rangle\). Intuitively, \(\boldsymbol{\mu}_{t}\) represents the normalized centroid of the samples \(\boldsymbol{X}_t\), and \(\boldsymbol{\Sigma}_t\) captures the covariance of the lifted samples in the tangent space \(\mathcal{T}_{\boldsymbol{\mu}_{t}}\).
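A sketch of the approximate sampling step is given below, assuming \(\alpha_t\) and \(\rho_t\) are available (here placeholder values stand in for the precomputed table described in the following paragraphs):

```python
import numpy as np

def exp_map(u, x):
    """Exponential map on the unit sphere; x lies in the tangent space at u."""
    n = np.linalg.norm(x)
    if n < 1e-12:
        return u
    return np.cos(n) * u + np.sin(n) * x / n

def riemannian_normal_sample(x0, xT, alpha_t, rho_t, rng):
    """Approximate sample of X_t given (X_0, X_T): the mean mu_t of Eq. (13) lies on
    the geodesic between X_0 and X_T, and an isotropic tangent Gaussian with scale
    rho_t is pushed forward through the exponential map."""
    phi0 = np.arccos(np.clip(np.dot(x0, xT), -1.0, 1.0))
    mu = (alpha_t / np.sin(phi0)) * xT + (
        np.sqrt(1.0 - alpha_t ** 2) - alpha_t * np.cos(phi0) / np.sin(phi0)) * x0
    z = rho_t * rng.standard_normal(x0.shape)
    z -= np.dot(z, mu) * mu             # project the Gaussian onto the tangent space at mu
    return exp_map(mu, z)

# Masked diffusion on S^{d-1}: mask token -> target token, with placeholder alpha_t, rho_t.
d = 16
x0, xT = np.eye(d)[-1], np.eye(d)[0]
rng = np.random.default_rng(0)
x_t = riemannian_normal_sample(x0, xT, alpha_t=0.5, rho_t=0.2, rng=rng)
print(np.linalg.norm(x_t))              # 1.0: the sample stays on the sphere
```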
While the parameters \(\alpha_t\) and \(\rho_t\) are generally intractable, we derive them from the 1-dimensional projections of the mixture process. Our main idea is to express the parameters in terms of the projected processes \(z^T_t \!\mathrel{\vcenter{:}}=\! \langle\boldsymbol{X}_{t|0,T}, \boldsymbol{X}_T\rangle\) and \(z^0_t \!\mathrel{\vcenter{:}}=\! \langle\boldsymbol{X}_{t|0,T}, \boldsymbol{X}_0\rangle\), where \(\boldsymbol{X}_{t|0,T}\) denotes the diffusion process \(\{\boldsymbol{X}_t\}^T_{t=0}\) conditioned on fixed endpoints \(\boldsymbol{X}_0\) and \(\boldsymbol{X}_T\). These projected processes are modeled by the following 1-dimensional SDEs (see Appendix 9.8 for the derivation using the Itô’s formula and the radial symmetry of \(\mathbb{S}^{d-1}\)): \[\begin{align} \mathrm{d}z^T_t &= \left[ \gamma_t \cos^{\scalebox{0.75}[1.0]{-}1}\!z^T_t \, \sqrt{1 - (z^T_t)^2} -\frac{(d-1)\sigma^2_t}{2} z^T_t \right] \mathrm{d}t + \sigma_t\sqrt{1 - (z^T_t)^2}\, \mathrm{d}W^T_t, \tag{14} \\[3pt] \mathrm{d}z^0_t &= \left[ \gamma_t \frac{\cos^{\scalebox{0.75}[1.0]{-}1}\!z^T_t}{\sqrt{1 - (z^T_t)^2}} \Big( z^T_0 - z^0_t z^T_t \Big) -\frac{(d-1)\sigma^2_t}{2}z^0_t \right] \mathrm{d}t + \sigma_t\sqrt{1 - (z^0_t)^2}\, \mathrm{d}W^0_t, \tag{15} \end{align}\] with \(z^T_0 = \langle\boldsymbol{X}_0, \boldsymbol{X}_T\rangle\) and \(z^0_0=1\), where \(W^T_t\) and \(W^0_t\) denote 1-dimensional Wiener processes. In the case of masked and uniform diffusion, \(\boldsymbol{X}_0\) is fixed to a single point such that \(\langle \boldsymbol{X}_0,\boldsymbol{e}_k\rangle\) is identical for all non-mask tokens \(\boldsymbol{e}_k\). As a result, the mean projections \(\mathbb{E}z^T_t\) and \(\mathbb{E}z^0_t\) remain invariant with respect to the choice of \(\boldsymbol{X}_T\).
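The 1-dimensional projections can be simulated cheaply; the sketch below uses Euler-Maruyama with an illustrative geometric schedule and a Riemann-sum approximation of \(\gamma_t\) (not the authors' exact procedure):

```python
import numpy as np

def simulate_projections(z_T0, sigma, d, T=1.0, n_steps=1000, n_paths=4096, seed=0):
    """Euler-Maruyama simulation of the projected processes z^T_t and z^0_t
    (Eqs. 14-15), returning their means over time; gamma_t is approximated by a
    Riemann sum of sigma_s^2 over the remaining interval."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    sig2 = sigma(np.linspace(0.0, T, n_steps + 1)) ** 2
    zT = np.full(n_paths, float(z_T0))
    z0 = np.ones(n_paths)
    mean_zT, mean_z0 = [zT.mean()], [z0.mean()]
    for i in range(n_steps - 1):                 # stop one step early: gamma_t diverges at T
        gamma = sig2[i] / (sig2[i:].sum() * dt)
        sT = np.sqrt(np.clip(1.0 - zT ** 2, 0.0, None))
        s0 = np.sqrt(np.clip(1.0 - z0 ** 2, 0.0, None))
        theta = np.arccos(np.clip(zT, -1.0, 1.0))
        drift_T = gamma * theta * sT - 0.5 * (d - 1) * sig2[i] * zT
        drift_0 = (gamma * theta / np.maximum(sT, 1e-8) * (z_T0 - z0 * zT)
                   - 0.5 * (d - 1) * sig2[i] * z0)
        zT = np.clip(zT + drift_T * dt + np.sqrt(sig2[i] * dt) * sT * rng.standard_normal(n_paths), -1.0, 1.0)
        z0 = np.clip(z0 + drift_0 * dt + np.sqrt(sig2[i] * dt) * s0 * rng.standard_normal(n_paths), -1.0, 1.0)
        mean_zT.append(zT.mean())
        mean_z0.append(z0.mean())
    return np.array(mean_zT), np.array(mean_z0)

# Masked diffusion: the mask start gives <X_0, X_T> = 0; schedule values are illustrative.
d = 32
sigma = lambda t: 0.3 ** (1.0 - t) * 1.0 ** t
mzT, mz0 = simulate_projections(z_T0=0.0, sigma=sigma, d=d)
print(mzT[-1], mz0[-1])    # E z^T_t approaches 1 and E z^0_t approaches <X_0, X_T> near T
```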
Based on the radial symmetry of \(\mathbb{S}^{d-1}\), we derive the parameters \(\alpha_t\) and \(\rho_t\) from the mean projections \(\mathbb{E}z^0_t\) and \(\mathbb{E}z^T_t\) as follows (we provide detailed derivation in Appendix 9.9): \[\begin{align} \alpha_t = \sqrt{\frac{(\mathbb{E}z^T_t / \mathbb{E}z^0_t - \cos\phi_0)^2}{\sin^2\phi_0 + (\mathbb{E}z^T_t / \mathbb{E}z^0_t - \cos\phi_0)^2}} \;,\;\; \rho_t &= F_d^{\scalebox{0.85}[1.0]{-}1}\left( \mathbb{E}z^0_t / \sqrt{1 - \alpha_t^2} \right), \label{eq:from95proj95process} \end{align}\tag{16}\] where \(\phi_0\mathrel{\vcenter{:}}= \cos^{\scalebox{0.75}[1.0]{-}1}\langle\boldsymbol{X}_0, \boldsymbol{X}_T\rangle\) and \(F_d^{\scalebox{0.85}[1.0]{-}1}\) denotes the inverse of a damped Kummer function (Eq. 34 ). For small values of \(d\), we calibrate \(\rho_t\) by applying a constant scaling factor.
The mean projections \(\mathbb{E}z^0_t\) and \(\mathbb{E}z^T_t\) can be easily obtained by simulating the 1-dimensional processes Eq. 14 and Eq. 15 . Therefore, prior to training our model \(p_{\theta}\), we precompute the parameters \(\{\alpha_{i/K},\rho_{i/K}\}^K_{i=0}\) once, using a sufficiently large value of \(K\). The procedure for this precomputation is outlined in Algorithm 4 in the Appendix.
During training, we can sample \(\boldsymbol{X}_t\) from the Riemannian normal distribution without expensive simulation of the bridge processes. Compared to simulation-based training, our approach yields a 50\(\times\) speedup. In Section 6.4, we experimentally demonstrate that the Riemannian normal provides an accurate approximation of the distribution of \(\boldsymbol{X}_t\).
We now extend the single-token modeling framework to the generation of token sequences. Since each token in the sequence is reparameterized onto the hypersphere \(\mathbb{S}^{d-1}\), a sequence of length \(n\) is modeled on the product manifold \((\mathbb{S}^{d-1})^{n}\). This formulation allows the sequence-level diffusion to be treated as a joint process over the spherical components.
We model the generative process as a system of \(n\) SDEs \(\{(\boldsymbol{X}^1_t,\!\cdots\!,\boldsymbol{X}^n_t)\}^T_{t=0}\), where each \(\boldsymbol{X}^i_t\) evolves according to a diffusion process on \(\mathbb{S}^{d-1}\), analogous to the single-token formulation in Eq. 4 : \[\begin{align} \mathrm{d}\boldsymbol{X}^{i}_t = \left[\, \sum^{d}_{k=1} p(\boldsymbol{X}^{i}_T \!=\! \boldsymbol{e}_k | \boldsymbol{X}^{1:n}_t)\; \eta^{k}(\boldsymbol{X}^{i}_t, t) \right]\mathrm{d}t + \sigma_t\mathrm{d}\mathbf{B}^{d}_t ,\; 1\leq i\leq n . \end{align}\] Here \(p(\boldsymbol{X}^{i}_T \!=\! \boldsymbol{e}_k|\boldsymbol{X}^{1:n}_t)\) denotes the probability that the \(i\)-th token corresponds to the \(k\)-th state, conditioned on the current intermediate sequence \(\boldsymbol{X}^{1:n}_t\). Using the parameterization defined in Eq. 7 , we train a neural network to predict \(p(\boldsymbol{X}^{1:n}_T|\boldsymbol{X}^{1:n}_t)\). The training and sampling procedures for modeling token sequences are outlined in Algorithms 2 and 3, respectively.
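A single Euler-Maruyama step of this sequence-level process might look as follows (a hedged sketch: the per-position probabilities come from a hypothetical network output, and the coefficients are placeholders):

```python
import numpy as np

def exp_map_rows(u, x):
    """Row-wise exponential map on the unit sphere for (n, d) arrays."""
    n = np.maximum(np.linalg.norm(x, axis=-1, keepdims=True), 1e-12)
    return np.cos(n) * u + np.sin(n) * x / n

def weighted_bridge_drift(x, probs, gamma_t):
    """Per-token drift sum_k p(X^i_T = e_k | X^{1:n}_t) * eta^k(X^i_t, t).

    x:     (n, d) current states on S^{d-1}
    probs: (n, d) predicted probabilities over the final token at each position
    """
    c = np.clip(x, -1.0, 1.0)                                 # c[i, k] = <x_i, e_k>
    scale = np.arccos(c) / np.sqrt(np.maximum(1.0 - c ** 2, 1e-12))
    w = probs * scale
    # sum_k w[i, k] * (e_k - c[i, k] * x_i) = w_i - (sum_k w[i, k] c[i, k]) x_i
    return gamma_t * (w - (w * c).sum(-1, keepdims=True) * x)

def em_step(x, probs, gamma_t, sigma_t, dt, rng):
    """One Euler-Maruyama step of the sequence-level generative SDE on (S^{d-1})^n."""
    drift = weighted_bridge_drift(x, probs, gamma_t)
    noise = sigma_t * np.sqrt(dt) * rng.standard_normal(x.shape)
    noise -= (noise * x).sum(-1, keepdims=True) * x           # project onto the tangent spaces
    return exp_map_rows(x, drift * dt + noise)

# Toy usage: n = 3 positions, d = 6 states; a stand-in "network" predicts token 0 everywhere.
n, d = 3, 6
rng = np.random.default_rng(0)
x = np.tile(np.eye(d)[-1], (n, 1))                            # every position starts at the mask
probs = np.tile(np.eye(d)[0], (n, 1))
x = em_step(x, probs, gamma_t=2.0, sigma_t=0.5, dt=0.01, rng=rng)
print(np.linalg.norm(x, axis=-1))                             # each state remains on the sphere
```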
For a large vocabulary set, the corresponding statistical manifold becomes high-dimensional, which introduces two challenges: (1) Sharp transition. Bridge processes on high-dimensional spheres tend to exhibit sharp transitions near the terminal time. This high-dimensional convergence behavior makes the mixture process difficult for neural networks to learn. (2) High input dimensionality. The input to the network resides in a high-dimensional space, requiring sufficiently large hidden dimensions to encode the data adequately. Models with limited capacity fail to learn the conditional probability \(p(\boldsymbol{X}^{1:n}_T|\boldsymbol{X}^{1:n}_t)\).
To address these challenges, we introduce dimension splitting, a simple technique to reduce the dimensionality of the parameterized manifold. Instead of mapping the \(k\)-th token directly to \(\mathbb{S}^{d-1}\), we first represent the index \(k\) in base \(b\), and then map the representation to the product manifold \((\mathbb{S}^b)^m\) for \(m\!\mathrel{\vcenter{:}}=\!\lceil\log_{b}d\rceil\). Dimension splitting reparameterizes a sequence of length \(L\) to a product manifold \((\mathbb{S}^b)^{mL}\). The resulting bridge processes on \(\mathbb{S}^b\) with small \(b\) exhibit gradual convergence over time, making them significantly easier for neural networks to learn. Dimension splitting significantly enhances the likelihood when used together with the mixture path (Eq. 6 ).
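A sketch of the index arithmetic behind dimension splitting is shown below; the base \(b=16\) is illustrative, and the extra coordinate per component (giving points on \(\mathbb{S}^{b}\) rather than \(\mathbb{S}^{b-1}\)) is assumed to accommodate a per-component mask state:

```python
import numpy as np

def split_token(k, d, b):
    """Write token index k (0 <= k < d) as m = ceil(log_b d) base-b digits."""
    m = int(np.ceil(np.log(d) / np.log(b)))
    digits = []
    for _ in range(m):
        digits.append(k % b)
        k //= b
    return digits[::-1]                      # most-significant digit first

def merge_token(digits, b):
    """Inverse of split_token: recover the token index from its base-b digits."""
    k = 0
    for digit in digits:
        k = k * b + digit
    return k

def embed_digits(digits, b):
    """Map each digit to a one-hot point on a small sphere; the extra coordinate
    (points on S^b, i.e. b + 1 dimensions) is assumed to hold a per-component mask state."""
    return [np.eye(b + 1)[digit] for digit in digits]

# Illustration with the LM1B vocabulary size and a hypothetical base b = 16 (m = 4 components).
d, b = 30522, 16
k = 12345
digits = split_token(k, d, b)
print(digits, merge_token(digits, b) == k)   # [3, 0, 3, 9] True
print([v.shape for v in embed_digits(digits, b)])
```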
Discrete diffusion directly models the Markov chain on the discrete data space. One-hot data distributions are gradually corrupted to a stationary distribution using specific transition matrices, and the noising process corresponds to the stochastic jumps between states in the Markov chain. D3PM [1] introduces discrete-time Markov forward processes with both uniform and absorbing state transition matrices, and has been generalized to the continuous-time Markov chain framework [23]. SEDD [2] proposes learning the score entropy of discrete states instead of predicting the mean. Recent works [3], [4] introduce continuous-time masked diffusion models, which offer simpler likelihood bounds compared to previous works. We provide further discussions on comparison with discrete diffusion models in Appendix 9.10.
Early approaches to discrete data modeling either fully relaxed discrete data into continuous space [17] or embedded tokens into a latent space [18], [32], without imposing any constraint. However, continuous relaxation without constraint fails to capture the discreteness of the categorical distribution. Recent works operate directly in logit space [33], [34] or on the probability simplex [19], [20], but rely on imperfect assumptions that fail to accurately represent the underlying categorical distribution. Flow matching has been applied to the statistical manifold to model the categorical distribution [21], [22], but these methods are limited to short sequences and small vocabularies. We provide a detailed comparison in Appendix 9.10.
We evaluate our Riemannian Diffusion Language Model (RDLM) for text generation tasks on two language benchmarks: Text8 [35] and One Billion Words Dataset [36].
We compare against state-of-the-art diffusion and autoregressive models. Multinomial Diffusion [33], D3PM [1], SEDD [2], MDLM [4], MD4 [3] are discrete diffusion models. Plaid [37] and Bayesian Flow Network (BFN) [34] are continuous diffusion models. IAF/SCF [38], AR Argmax Flow [33], and Discrete Flow [39] are flow-based models, and ARDM [40] and MAC [41] are any-order autoregressive models. We also compare with the transformer AR model [42]. We provide further details on the baselines in Appendix 10.2.
For all experiments, we use the same data split and context size following [2] and [4]. For Text8, we randomly sample contiguous chunks of length 256 as done in previous works [1], [2]. For One Billion Words, we use the same tokenizer as in [43] with context size 128. We use a diffusion transformer architecture [44] with rotary positional embeddings [45] for all the experiments and match the number of parameters as used in the previous works [2], [4]. For our model, we use the mixture path of masked and uniform diffusion (Eq. 6 ) and apply dimension splitting for a large vocabulary. We provide more details in Appendix 10.2.
| Method | BPC (\(\downarrow\)) |
|---|---|
| Autoregressive | |
| AR Argmax Flow [33] | 1.39 |
| Transformer AR [42] | 1.23 |
| Discrete Flow [39] | 1.23 |
| Any-order Autoregressive | |
| ARDM [40] | \(\leq\) 1.43 |
| MAC [41] | \(\leq\) 1.40 |
| Discrete Diffusion | |
| Multinomial Diffusion [33] | \(\leq\) 1.72 |
| D3PM Uniform [1] | \(\leq\) 1.61 |
| D3PM Absorb [1] | \(\leq\) 1.45 |
| SEDD Absorb [2] | \(\leq\) 1.39 |
| MDLM [4] | \(\leq\) 1.40 |
| MD4 [3] | \(\leq\) 1.37 |
| Continuous Diffusion | |
| Plaid [37] | \(\leq\) 1.48 |
| BFN [34] | \(\leq\) 1.41 |
| RDLM (Ours) | \(\leq\) 1.32 |
We first evaluate on a small-scale character-level language modeling task. The Text8 [35] dataset is a character-level text modeling benchmark extracted from English Wikipedia. We train models on short text chunks of length 256 and evaluate the performance using Bits Per Character (BPC). As shown in Table 1, our framework outperforms all previous diffusion models, including both discrete and continuous methods. We also outperform any-order autoregressive models that generate texts in flexible decoding order, similar to discrete diffusion models. We achieve similar generative perplexity and entropy compared to existing discrete diffusion models. We provide generated texts from RDLM in Appendix 11.1.
We further evaluate RDLM on One Billion Words Dataset (LM1B) [36], a medium-scale real-world language benchmark with a vocabulary size of 30522. We evaluate the performance using perplexity (PPL), and the results are summarized in Table [tab:lm1b]. RDLM outperforms most existing diffusion models and is competitive with the state-of-the-art discrete diffusion model [4]. Notably, ours significantly outperforms the prior continuous diffusion model [18], demonstrating the effectiveness of incorporating the geometry of the underlying categorical distribution. We provide a discussion of the results with MDLM [4] in Appendix 10.2.0.4. The generated texts are presented in Appendix 11.2.
We further explore applications of RDLM beyond the text domain by applying it to order-agnostic image data. Each image is represented as a set of discrete tokens with a vocabulary of size 256, removing information about pixel proximity. Note that this is different from the experimental settings with image diffusion models [14], [46] that use spatial information. We compare RDLM against autoregressive models and discrete diffusion models that operate directly on raw pixel space, which we describe in Appendix 10.3. As shown in Table [tab:cifar10], our method achieves the lowest Bits Per Dimension (BPD), outperforming the discrete diffusion models [1], [3] and autoregressive baselines [47], [48]. We attribute this strong performance on inherently continuous data to the continuous nature of our framework, which fully exploits iterative refinement, suggesting its potential for unifying modeling across different modalities.
| Method | MSE (\(\downarrow\)) |
|---|---|
| Bit-Diffusion (bit) [13] | 0.041 |
| Bit-Diffusion (one-hot) [13] | 0.040 |
| D3PM Uniform [1] | 0.038 |
| DDSM [19] | 0.033 |
| DirichletFM [20] | 0.034 |
| Language Model | 0.034 |
| Fisher-Flow [22] | 0.029 |
| RDLM (Ours) | 0.027 |
We demonstrate that our framework can be applied to biological sequence generation. We evaluate our method on the promoter DNA sequence design task, which aims to generate valid promoter DNA sequences conditioned on transcription profiles. A detailed description of the task is provided in Appendix 10.4. Model performance is measured by the mean squared error (MSE) between the predicted regulatory activity of the generated sequence and that of the original sequence corresponding to the transcription profile. Table 2 shows that our framework achieves the lowest MSE, outperforming the flow matching methods [20], [22] and the discrete diffusion model [1].
We validate the effectiveness of our cross-entropy-based loss of Eq. 11 in Table 5. Compared to the mean squared error loss of Eq. 10 , the cross-entropy loss provides faster convergence in training and better NLL. Furthermore, Table 5 shows that applying importance sampling to the training objective as defined in Eq. 12 yields improved likelihood.
We validate that our approximate sampling method closely matches the true transition distribution of the mixture process. In Figure 6, we report the maximum mean discrepancy (MMD) [49] distance between the simulated transition distribution and the approximated distribution obtained using the Riemannian normal. The approximated distributions exhibit MMD values nearly identical to those of the simulated distributions, indicating that the approximation is accurate and reliable. Notably, the discrepancy approaches zero in high-dimensional manifolds, where simulation becomes increasingly expensive, making simulation-based training impractical.
For datasets with a large vocabulary, such as the LM1B dataset, our dimension splitting technique (Section 4.0.0.2) results in a significant improvement. Table [tab:analysis95dimension] shows that directly training a model on discrete data with a large vocabulary fails to capture the underlying distribution, due to the high input dimensionality. In particular, the sharp transition near the terminal time for a high-dimensional mixture process makes it challenging for neural networks to learn. In large vocabulary settings, we achieve the best result via dimension splitting, combined with modeling the generative process using a mixture path of masked and uniform diffusion.
In this work, we introduced the Riemannian Diffusion Language Model (RDLM), a continuous diffusion model for language and discrete data. We presented a simple framework that generalizes discrete diffusion models, building on the connection between the transition distribution and continuous flow on the statistical manifold. We provided general designs for generative processes and introduced a simulation-free training scheme leveraging radial symmetry. Through experiments on language modeling benchmarks, RDLM demonstrates strong performance over prior discrete and continuous diffusion models. We further extended our approach to other modalities, including image and biological sequence generation, where RDLM achieves consistently strong results.
This work was supported by National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. RS-2023-00256259), Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No. RS-2019-II190075 Artificial Intelligence Graduate School Program (KAIST)), Information & Communications Technology Planning & Evaluation (IITP) with a grant funded by the Ministry of Science and ICT (MSIT) of the Republic of Korea in connection with the Global AI Frontier Lab International Collaborative Research (No. RS-2024-00469482 & RS-2024-00509279), and the artificial intelligence industrial convergence cluster development project funded by the Ministry of Science and ICT (MSIT, Korea) & Gwangju Metropolitan City.
Appendix
For a discrete sample space \(\mathcal{X}=\{1,2,\cdots,d\}\), a \(d\)-class categorical distribution over \(\mathcal{X}\) is parameterized by \(d\) parameters \(p_1,\cdots,p_d \geq 0\) such that \(\sum^d_{i=1} p_i = 1\). The parameter space corresponds to the \((d-1)\)-dimensional probability simplex: \[\begin{align} \Delta^{d-1} = \left\{ (p_1,\cdots,p_d)\in\mathbb{R}^d : \sum^{d}_{i=1} p_i = 1, p_i\geq 0 \right\}. \end{align}\] A natural choice of a Riemannian metric on the simplex is the Fisher-Rao metric [24], [25]. For an interior point \(\boldsymbol{p}\in\Delta^{d-1}\), the Fisher-Rao metric is defined as follows: \[\begin{align} g_{FR}(\boldsymbol{p})[\boldsymbol{x},\boldsymbol{y}] \mathrel{\vcenter{:}}= \langle \boldsymbol{x},\boldsymbol{y} \rangle_{\boldsymbol{p}} \mathrel{\vcenter{:}}= \left\langle \frac{\boldsymbol{x}}{\sqrt{\boldsymbol{p}}}, \frac{\boldsymbol{y}}{\sqrt{\boldsymbol{p}}} \right\rangle = \sum^{d}_{i=1} \frac{\boldsymbol{x}_i \boldsymbol{y}_i}{\boldsymbol{p}_i}, \;\; \boldsymbol{x}, \boldsymbol{y} \in \mathcal{T}_{\boldsymbol{p}} \Delta^{d-1}, \end{align}\] where the normalization by \(\sqrt{\boldsymbol{p}}\) in the inner product is performed component-wise. This induces a geodesic distance on the simplex defined as follows: \[\begin{align} d(\boldsymbol{p}, \boldsymbol{q}) = 2 \cos^{-1}\left(\sum^d_{i=1} \sqrt{p_i q_i}\right), \;\; \boldsymbol{p}, \boldsymbol{q} \in \Delta^{d-1}, \end{align}\] where \(\boldsymbol{p}\) and \(\boldsymbol{q}\) correspond to the parameters of categorical distributions. The probability simplex \(\Delta^{d-1}\) equipped with the Fisher-Rao metric is a Riemannian manifold called the statistical manifold of categorical distributions, denoted as \(\mathcal{P}(\mathcal{X})\) throughout the paper. The tangent space at an interior point \(\boldsymbol{p}\) is identified as \(\mathcal{T}_{\boldsymbol{p}}(\mathcal{P}(\mathcal{X})) = \left\{\boldsymbol{x}\in\mathbb{R}^d: \sum^d_{i=1}\boldsymbol{x}_i = 0 \right\}\). For further details on the geometry of the statistical manifold, we refer the reader to [50].
\(\mathbb{S}^{d\!-\!1}\) denotes the \((d\!-\!1)\)-dimensional sphere \(\left\{ \boldsymbol{u}\!=\!(\boldsymbol{u}_1,\cdots,\boldsymbol{u}_d): \sum_i \boldsymbol{u}_i^2=1 \right\}\) and \(\mathbb{S}^{d-1}_{+} \!=\! \left\{\boldsymbol{u}\!=\!(\boldsymbol{u}_1,\cdots,\boldsymbol{u}_d): \sum_i \boldsymbol{u}_i^2=1, \boldsymbol{u}_i\geq 0 \right\}\) denotes the positive orthant of \(\mathbb{S}^{d-1}\). The hypersphere \(\mathbb{S}^{d-1}\) can be embedded into the ambient Euclidean space \(\mathbb{R}^d\), which induces a canonical inner product \(\big\langle \boldsymbol{x}, \boldsymbol{y} \big\rangle \mathrel{\vcenter{:}}= \sum^d_{i=1} \boldsymbol{x}_i\boldsymbol{y}_i\). For a discrete sample space \(\mathcal{X}=\{1,2,\cdots,d\}\), there exists a diffeomorphism from \(\mathcal{P}(\mathcal{X})\) to \(\mathbb{S}^{d-1}_{+}\) defined as follows: \[\begin{align} \begin{aligned} &\pi: \mathcal{P}(\mathcal{X}) \rightarrow \mathbb{S}^{d-1}_{+} \;\; ; \;\; \boldsymbol{p}_i\mapsto \boldsymbol{u}_i=\sqrt{\boldsymbol{p}_i}, \\[6pt] &\pi^{-1}: \mathbb{S}^{d-1}_{+} \rightarrow \mathcal{P}(\mathcal{X}) \;\; ; \;\; \boldsymbol{u}_i\mapsto \boldsymbol{p}_i= \boldsymbol{u}_i^2. \end{aligned} \label{eq:diffeomorphism95app} \end{align}\tag{17}\] The diffeomorphism induces the geodesic distance on \(\mathbb{S}^{d-1}_{+}\): \[\begin{align} d_g(\boldsymbol{u},\boldsymbol{v}) = \cos^{-1}\langle\boldsymbol{u}, \boldsymbol{v}\rangle, \;\; \boldsymbol{u},\boldsymbol{v}\in\mathbb{S}^{d-1}_{+}, \end{align}\] for which the geodesic corresponds to the great circle connecting two points \(\boldsymbol{u}\) and \(\boldsymbol{v}\). The corresponding exponential and logarithm maps on \(\mathbb{S}^{d-1}\) can be computed as follows: \[\begin{align} &\exp_{\boldsymbol{u}}{\boldsymbol{x}} = \cos(\|\boldsymbol{x}\|)\boldsymbol{u} + \sin(\|\boldsymbol{x}\|)\frac{\boldsymbol{x}}{\|\boldsymbol{x}\|} \;, \;\; \boldsymbol{u}\in\mathbb{S}^{d-1} , \boldsymbol{x}\in \mathcal{T}_{\boldsymbol{u}}(\mathbb{S}^{d-1}), \\ &\exp^{\scalebox{0.75}[1.0]{-}1}_{\boldsymbol{u}}(\boldsymbol{v}) = \frac{\cos^{\scalebox{0.75}[1.0]{-}1}\langle \boldsymbol{u},\boldsymbol{v} \rangle}{\sqrt{1 - \langle \boldsymbol{u},\boldsymbol{v} \rangle^2}}\Big( \boldsymbol{v} - \langle \boldsymbol{u},\boldsymbol{v} \rangle\boldsymbol{u} \Big) \;,\;\; \boldsymbol{u}, \boldsymbol{v} \in \mathbb{S}^{d-1}. \label{eq:sphere95exp95log} \end{align}\tag{18}\]
Additionally, define the radial distance \(r^{\boldsymbol{v}}(\boldsymbol{x})\mathrel{\vcenter{:}}= d_g(\boldsymbol{x}, \boldsymbol{v}) \in \mathbb{R}\) where \(d_g\) denotes the geodesic distance on \(\mathbb{S}^{d-1}\). Then we have the following identities: \[\begin{align} &\nabla r^{\boldsymbol{v}}(\boldsymbol{x}) = -\frac{\boldsymbol{v} - \langle \boldsymbol{v}, \boldsymbol{x}\rangle \boldsymbol{x}}{\sqrt{1 - \langle \boldsymbol{v}, \boldsymbol{x}\rangle^2}}, \\ &\Delta r^{\boldsymbol{v}}(\boldsymbol{x}) = (d-1)\cot(r^{\boldsymbol{v}}(\boldsymbol{x})), \\[6pt] &\Big\langle \nabla r^{\boldsymbol{v}}(\boldsymbol{x}), \nabla r^{\boldsymbol{w}}(\boldsymbol{x}) \Big\rangle = \frac{\langle\boldsymbol{v}, \boldsymbol{w}\rangle - \langle\boldsymbol{v}, \boldsymbol{x}\rangle \langle\boldsymbol{w}, \boldsymbol{x}\rangle}{\sqrt{ \left(1 - \langle\boldsymbol{v}, \boldsymbol{x}\rangle^2\right) \left(1 - \langle\boldsymbol{w}, \boldsymbol{x}\rangle^2\right) }} = \frac{\langle\boldsymbol{v},\boldsymbol{w}\rangle - \cos r^{\boldsymbol{v}}(\boldsymbol{x})\cos r^{\boldsymbol{w}}(\boldsymbol{x})}{\sin r^{\boldsymbol{v}}(\boldsymbol{x}) \sin r^{\boldsymbol{w}}(\boldsymbol{x})}. \end{align}\] In particular, the logarithm map in Eq. 18 can be represented in radial distance: \[\begin{align} \exp^{\scalebox{0.75}[1.0]{-}1}_{\boldsymbol{x}}(\boldsymbol{v}) = -r^{\boldsymbol{v}}(\boldsymbol{x}) \nabla r^{\boldsymbol{v}}(\boldsymbol{x}), \end{align}\]
In this section, we derive the connection between the discrete diffusion models and the continuous flow on a hypersphere.
We first derive a useful lemma for continuous flows on hyperspheres. The following lemma describes a continuous flow on the hypersphere as a spherical linear interpolation.
Lemma 1. Define a flow \(\{\boldsymbol{Y}_t\}^T_{t=0}\) on \(\mathbb{S}^{d-1}\) from \(\boldsymbol{y}_0\in\mathbb{S}^{d-1}\) to \(\boldsymbol{y}_1\in\mathbb{S}^{d-1}\!\setminus\!\{\boldsymbol{y}_0,\! -\boldsymbol{y}_0\}\): \[\begin{align} \frac{\mathrm{d}\boldsymbol{Y}_t}{\mathrm{d}t} = -\frac{\mathrm{d}\log \kappa_t}{\mathrm{d}t} \exp^{-1}_{\boldsymbol{Y}_t}(\boldsymbol{y}_1), \;\; \boldsymbol{Y}_0=\boldsymbol{y}_0, \label{eq:flow95def} \end{align}\tag{19}\] where \(\kappa_t:[0,T]\rightarrow[0,1]\) is a scalar function satisfying \(\kappa_0=1\) and \(\kappa_T=0\). Then the flow \(\boldsymbol{Y}_t\) has a closed form solution: \[\begin{align} \boldsymbol{Y}_t = \frac{\sin(\theta_0-\theta_t)}{\sin\theta_0}\boldsymbol{y}_1 + \frac{\sin\theta_t}{\sin\theta_0}\boldsymbol{y}_0, \;\; \theta_t\mathrel{\vcenter{:}}= \kappa_t\cos^{\scalebox{0.75}[1.0]{-}1}\langle \boldsymbol{y}_0,\boldsymbol{y}_1 \rangle, \label{eq:flow95solution} \end{align}\tag{20}\] which corresponds to the spherical linear interpolation, i.e., slerp: \[\begin{align} \boldsymbol{Y}_t = \exp_{\boldsymbol{y}_1}\Big( \kappa_{t}\exp^{-1}_{\boldsymbol{y}_1}(\boldsymbol{y}_0) \Big) \label{eq:geodesic} \end{align}\tag{21}\]
Proof. Let \(\theta_t\mathrel{\vcenter{:}}= \cos^{\scalebox{0.75}[1.0]{-}1}\langle \boldsymbol{Y}_t,\boldsymbol{y}_1 \rangle\). Then \(\boldsymbol{Y}_t\) can be written as follows: \[\begin{align} \boldsymbol{Y}_t = \cos\theta_t\boldsymbol{y}_1 + \sin\theta_t\boldsymbol{w}_t, \end{align}\] where \(\boldsymbol{w}_t\in\mathbb{R}^d\) is a unit vector. From the definition of \(\theta_t\), we have the following identity: \[\begin{align} \frac{\mathrm{d}\theta_t}{\mathrm{d}t} &= -\frac{1}{\sin\theta_t} \left\langle \frac{\mathrm{d}\boldsymbol{Y}_t}{\mathrm{d}t}, \boldsymbol{y}_1 \right\rangle = -\frac{1}{\sin\theta_t} \left\langle -\frac{\mathrm{d}\log\kappa_t}{\mathrm{d}t} \frac{\theta_t(\boldsymbol{y}_1 - \boldsymbol{Y}_t\cos\theta_t)}{\sin\theta_t}, \boldsymbol{y}_1 \right\rangle \\ &= \frac{1}{\sin\theta_t}\frac{\mathrm{d}\log\kappa_t}{\mathrm{d}t} \theta_t \frac{1 - \cos^2\theta_t}{\sin\theta_t} = \frac{\mathrm{d}\log\kappa_t}{\mathrm{d}t} \theta_t , \label{eq:theta95derivative} \end{align}\tag{22}\] which yields the representation of the flow \(\boldsymbol{Y}_t\) in Eq. 19 with respect to \(\theta\): \[\begin{align} \frac{\mathrm{d}\boldsymbol{Y}_t}{\mathrm{d}t} &= \frac{\mathrm{d}\theta_t}{\mathrm{d}t}\frac{\boldsymbol{y}_1 - \boldsymbol{Y}_t\cos\theta_t}{\sin\theta_t}. \label{eq:flow95in95theta} \end{align}\tag{23}\] Using the result of Eq. 23 , we can see that \(\boldsymbol{w}_t\) is a constant vector independent of \(t\): \[\begin{align} \frac{\mathrm{d}\boldsymbol{w}_t}{\mathrm{d}t} &= \frac{1}{\sin^2\theta_t} \left[\left(\frac{\mathrm{d}\boldsymbol{Y}_t}{\mathrm{d}t} - \frac{\mathrm{d}\cos\theta_t}{\mathrm{d}t}\boldsymbol{y}_1\right)\sin\theta_t - \left(\boldsymbol{Y}_t - \cos\theta_t\boldsymbol{y}_1\right)\frac{\mathrm{d}\sin\theta_t}{\mathrm{d}t}\right] \\ &= \frac{1}{\sin^2\theta_t}\frac{\mathrm{d}\theta_t}{\mathrm{d}t} \Big[ -(\boldsymbol{y}_1 - \boldsymbol{Y}_t\cos\theta_t) + \sin^2{\theta_t}\boldsymbol{y}_1 - \cos\theta_t\boldsymbol{Y}_t + \cos^2\theta_t\boldsymbol{y}_1 \Big] =0. \end{align}\] Therefore we get the closed form solution for \(\boldsymbol{Y}_t\): \[\begin{align} \boldsymbol{Y}_t = \cos\theta_t\boldsymbol{y}_1 + \sin\theta_t\frac{\boldsymbol{y}_0 - \cos\theta_0\boldsymbol{y}_1}{\sin\theta_0} = \frac{\sin(\theta_0-\theta_t)}{\sin\theta_0}\boldsymbol{y}_1 + \frac{\sin\theta_t}{\sin\theta_0}\boldsymbol{y}_0 , \end{align}\] where \(\theta_t=\kappa_t\theta_0\) from Eq. 22 . Note that the solution Eq. 20 is well-defined in the sense that \(\sin\theta_0>0\) always holds. This is because \(|\langle \boldsymbol{y}_0, \boldsymbol{y}_1 \rangle| < 1\), as \(\boldsymbol{y}_1\notin\{\boldsymbol{y}_0, -\boldsymbol{y}_0\}\). Finally, using the definition of \(\theta_t\), we can show the following: \[\begin{align} \exp^{-1}_{\boldsymbol{Y}_T}(\boldsymbol{Y}_t) = \theta_t\frac{\boldsymbol{Y}_t - \boldsymbol{Y}_T\cos\theta_t}{\sin\theta_t} = \kappa_t\theta_0\boldsymbol{w}_t = \kappa_t\theta_0\boldsymbol{w}_0 = \kappa_t\exp^{-1}_{\boldsymbol{Y}_T}(\boldsymbol{Y}_0) , \end{align}\] which gives the spherical linear interpolation defined in Eq. 21 . ◻
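As a numerical sanity check (not part of the paper), the closed form of Eq. 20 can be compared against the slerp form of Eq. 21 for random endpoints:

```python
import numpy as np

def exp_map(u, x):
    n = np.linalg.norm(x)
    return np.cos(n) * u + np.sin(n) * x / n if n > 1e-12 else u

def log_map(u, v):
    c = np.clip(np.dot(u, v), -1.0, 1.0)
    return np.arccos(c) * (v - c * u) / np.sqrt(1.0 - c ** 2)

def closed_form(y0, y1, kappa_t):
    """Eq. (20): Y_t = sin(theta_0 - theta_t)/sin(theta_0) y1 + sin(theta_t)/sin(theta_0) y0."""
    theta0 = np.arccos(np.clip(np.dot(y0, y1), -1.0, 1.0))
    theta_t = kappa_t * theta0
    return (np.sin(theta0 - theta_t) * y1 + np.sin(theta_t) * y0) / np.sin(theta0)

# Random endpoints on S^4: the closed form (Eq. 20) matches the slerp form (Eq. 21).
rng = np.random.default_rng(0)
y0 = rng.standard_normal(5); y0 /= np.linalg.norm(y0)
y1 = rng.standard_normal(5); y1 /= np.linalg.norm(y1)
for kappa_t in [1.0, 0.7, 0.3, 0.0]:
    slerp = exp_map(y1, kappa_t * log_map(y1, y0))
    print(np.allclose(closed_form(y0, y1, kappa_t), slerp))
```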
Our key observation is that the transition distribution \(q_t(x_t|x)\) of a discrete diffusion process (Eq. 1 ) is a categorical distribution. Therefore, modeling \(q_t\) is equivalent to modeling the continuous flow on the statistical manifold \(\mathcal{P}(\mathcal{X})\). Here, we show that discrete diffusion models over \(\mathcal{X}\) can be modeled by a continuous flow on \(\mathbb{S}^{d-1}_{+}\). Specifically, we derive that the transition distribution of discrete diffusion processes can be modeled by the continuous flow on the hypersphere.
We first show that discrete masked diffusion models correspond to a continuous flow on the statistical manifold starting from an absorbing state.
Proposition 2. Define a flow \(\{\boldsymbol{Y}_t\}^T_{t=0}\) on \(\mathbb{S}^{d-1}\) from \(\boldsymbol{e}_k\) to \(\boldsymbol{e}_m\): \[\begin{align} &\frac{\mathrm{d}\boldsymbol{Y}_t}{\mathrm{d}t} = -\frac{\mathrm{d}\log \kappa_t}{\mathrm{d}t} \exp^{-1}_{\boldsymbol{Y}_t}(\boldsymbol{e}_m), \;\; \boldsymbol{Y}_0=\boldsymbol{e}_k, \;\; \kappa_t = \frac{2}{\pi}\sin^{-1}\!\left( \sqrt{\alpha_t} \right) \label{eq:mask95flow} \end{align}\qquad{(1)}\] where \(\boldsymbol{e}_m\) denotes the absorbing state (i.e., mask state) and \(\alpha_t\in[0,1]\) is some differentiable noise schedule satisfying \(\alpha_0\approx1\) and \(\alpha_1\approx0\). Then the random variable \(\boldsymbol{Z}_t\mathrel{\vcenter{:}}= \pi^{-1}\left(\boldsymbol{Y}_t \right) \in\mathbb{R}^d\) satisfies the following: \[\begin{align} \boldsymbol{Z}_t = \alpha_t\boldsymbol{e}_k + (1-\alpha_t)\boldsymbol{e}_m, \label{eq:mask95simplex} \end{align}\qquad{(2)}\] which is a flow that interpolates \(\boldsymbol{e}_k\) and \(\boldsymbol{e}_m\) on the probability simplex \(\Delta^{d-1}\).
Proof. Using Lemma 1 with \(\theta_0 = \cos^{\scalebox{0.75}[1.0]{-}1}\langle \boldsymbol{e}_m,\boldsymbol{e}_k \rangle=\pi/2\), we have the representation of \(\boldsymbol{Y}_t\): \[\begin{align} \boldsymbol{Y}_t = \sin(\theta_0 - \theta_t)\boldsymbol{e}_m + \sin\theta_t\boldsymbol{e}_k = \sqrt{1-\alpha_t}\boldsymbol{e}_m + \sqrt{\alpha_t}\boldsymbol{e}_k , \end{align}\] since \(\theta_t = \sin^{-1}\!(\sqrt{\alpha_t})\). Therefore, \(\boldsymbol{Z}_t\) has the following closed form: \[\begin{align} \boldsymbol{Z}_t = ({1-\alpha_t})\boldsymbol{e}_m + {\alpha_t}\boldsymbol{e}_k, \end{align}\] which defines a flow that interpolates \(\boldsymbol{e}_k\) and \(\boldsymbol{e}_m\) on the probability simplex \(\Delta^{d-1}\). ◻
Note that \(\boldsymbol{Z}_t\) is a random variable on \(\Delta^{d-1}\) representing the categorical distribution \(\text{Cat}(\alpha_t\boldsymbol{e}_{x_0} + (1-\alpha_t)\boldsymbol{e}_m)\). This corresponds to the transition distribution \(q(x_t|x_0)\) of a discrete masked diffusion model, where the transition matrix for the diffusion process is given as follows: \[\begin{align} Q^{absorb}_t = \begin{bmatrix} \alpha_t & 0 & \cdots & 0 & 0 \\ 0 & \alpha_t & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & \alpha_t & 0 \\ 1-\alpha_t & 1-\alpha_t & \cdots & 1-\alpha_t & 0 \end{bmatrix} \end{align}\]
Corollary 1. The discrete masked diffusion process can be modeled by a continuous flow on \(\mathbb{S}^{d-1}\) that starts from the absorbing state \(\boldsymbol{e}_m\).
We also show that discrete uniform diffusion models correspond to a continuous flow on the statistical manifold that starts from the barycenter of the simplex.
Proposition 3. Define a flow \(\{\boldsymbol{Y}_t\}^T_{t=0}\) on \(\mathbb{S}^{d-1}\) from \(\boldsymbol{e}_k\) to \(\sum^d_{i=1} \boldsymbol{e}_i/\sqrt{d}\): \[\begin{align} \frac{\mathrm{d}\boldsymbol{Y}_t}{\mathrm{d}t} &= -\frac{\mathrm{d}\log \kappa_t}{\mathrm{d}t} \exp^{-1}_{\boldsymbol{Y}_t} \left( \sum^{d}_{i=1} \frac{1}{\sqrt{d}}\boldsymbol{e}_i \right), \;\; \boldsymbol{Y}_0=\boldsymbol{e}_k, \\[6pt] \kappa_t &= 1 - \frac{\sin^{\scalebox{0.75}[1.0]{-}1}\big( \sqrt{1-\alpha_t}\sin\theta_0 \big)}{\theta_0}, \; \theta_0 \mathrel{\vcenter{:}}= \cos^{\scalebox{0.75}[1.0]{-}1}\left(\frac{1}{\sqrt{d}}\right) \label{eq:uniform95flow} \end{align}\qquad{(3)}\] where \(\alpha_t\in[0,1]\) is a differentiable noise schedule satisfying \(\alpha_0\approx1\) and \(\alpha_1\approx0\). Then the random variable \(\boldsymbol{Z}_t\mathrel{\vcenter{:}}= \pi^{-1}\left(\boldsymbol{Y}_t \right)\in\mathbb{R}^{d}\) satisfies the following: \[\begin{align} \boldsymbol{Z}_t = \sum_{i\neq k}\frac{1-\alpha_t}{d}\boldsymbol{e}_i + \frac{1 + (d-1)\alpha_t}{d}\boldsymbol{e}_k , \end{align}\] which is a flow that interpolates \(\boldsymbol{e}_k\) and \(\sum^d_{i=1} \boldsymbol{e}_i/d\) on the probability simplex \(\Delta^{d-1}\).
Proof. Using Lemma 1 with \(\theta_0 = \cos^{\scalebox{0.75}[1.0]{-}1}(1/\sqrt{d})\), we have the following representation of \(\boldsymbol{Y}_t\): \[\begin{align} \boldsymbol{Y}_t &= \frac{\sin(\theta_0 - \theta_t)}{\sin\theta_0} \sum^{d}_{i=1} \frac{1}{\sqrt{d}}\boldsymbol{e}_i + \frac{\sin\theta_t}{\sin\theta_0} \boldsymbol{e}_k \\ &= \sum_{i\neq k} \frac{\sin(\theta_0 - \theta_t)}{\sqrt{d-1}} \boldsymbol{e}_i + \left( \frac{\sqrt{d}\sin\theta_t}{\sqrt{d-1}} + \frac{\sin(\theta_0 - \theta_t)}{\sqrt{d-1}} \right) \boldsymbol{e}_k. \end{align}\] Due to the definition of \(\kappa_t\), \(\boldsymbol{Z}_t\) has the following closed form: \[\begin{align} \boldsymbol{Z}_t = \sum_{i\neq k}\frac{1-\alpha_t}{d}\boldsymbol{e}_i + \frac{1 + (d-1)\alpha_t}{d}\boldsymbol{e}_k , \end{align}\] which defines a flow that interpolates \(\boldsymbol{e}_k\) and \(\sum^d_{i=1} \boldsymbol{e}_i/\sqrt{d}\), i.e., the barycenter of the probability simplex \(\Delta^{d-1}\). ◻
Note that \(\boldsymbol{Z}_t\) is a random variable on \(\Delta^{d-1}\) representing the categorical distribution: \[\begin{align} \text{Cat}\left(\sum_{i\neq x_0}\frac{1-\alpha_t}{d}\boldsymbol{e}_i + \frac{1 + (d-1)\alpha_t}{d}\boldsymbol{e}_{x_0}\right), \end{align}\] which corresponds to the transition distribution \(q(x_t|x_0)\) of a discrete uniform diffusion model. The rate matrix of the uniform diffusion process is given as follows: \[\begin{align} Q^{unif} = \begin{bmatrix} 1-d & 1 & \cdots & 1 \\ 1 & 1-d & \cdots & 1 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & \cdots & 1-d \end{bmatrix} \end{align}\]
Corollary 2. The discrete uniform diffusion process can be modeled by a continuous flow on \(\mathbb{S}^{d-1}\) that starts from the barycenter of the probability simplex.
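The same check can be carried out for Proposition 3, under the additional assumption (as in the proof above) that the flow of Eq. 3 satisfies \(\theta_t = \kappa_t\theta_0\), so that \(\sin(\theta_0-\theta_t) = \sqrt{1-\alpha_t}\sin\theta_0\).

```python
# Numerical check of Proposition 3 under the assumptions stated above.
import numpy as np

d, k = 8, 2
e = np.eye(d)
u = np.ones(d) / np.sqrt(d)            # barycenter of the simplex lifted to the sphere
theta0 = np.arccos(1.0 / np.sqrt(d))
alpha = lambda t: 1.0 - t

for t in [0.0, 0.25, 0.5, 0.75, 1.0]:
    a = alpha(t)
    theta_t = theta0 - np.arcsin(np.sqrt(1.0 - a) * np.sin(theta0))   # from kappa_t in Eq. (3)
    Y_t = (np.sin(theta0 - theta_t) * u + np.sin(theta_t) * e[k]) / np.sin(theta0)
    Z_t = Y_t ** 2                                                    # pi(Y_t)
    target = (1.0 - a) / d * np.ones(d)
    target[k] = (1.0 + (d - 1) * a) / d
    assert np.allclose(Z_t, target)
print("pi(Y_t) matches the uniform-diffusion marginal on the simplex.")
```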
On a general manifold \(\mathcal{M}\) that is complete, orientable, connected, and boundaryless, the logarithm bridge process [26] from \(\boldsymbol{x}_0\in\mathcal{M}\) to \(\boldsymbol{x}_1\in\mathcal{M}\) is defined as follows: \[\begin{align} \mathrm{d}\bar{\boldsymbol{X}}_t &= \gamma_t \exp^{\scalebox{0.75}[1.0]{-}1}_{\bar{\boldsymbol{X}_t}}(\boldsymbol{x}_1) \mathrm{d}t + \sigma_t \mathrm{d}\mathbf{B}^{\mathcal{M}}_t, \;\; \bar{\boldsymbol{X}}_0 = \boldsymbol{x}_0 \;;\;\; \gamma_t\mathrel{\vcenter{:}}= \frac{\sigma^2_t}{\int^T_t \sigma^2_s\mathrm{d}s} \label{eq:bridge95app} \end{align}\tag{24}\] where \(\exp^{\scalebox{0.75}[1.0]{-}1}_{x}(\cdot)\) denotes the logarithm map on \(\mathcal{M}\) at point \(x\) and \(\mathbf{B}^{\mathcal{M}}_t\) is the Brownian motion defined on \(\mathcal{M}\). In the case of \(\mathcal{M}=\mathbb{S}^{d-1}\), we can derive the logarithm bridge process from \(\boldsymbol{x}_0\) to \(\boldsymbol{e}_k\): \[\begin{align} \mathrm{d}\bar{\boldsymbol{X}}_t = \gamma_t \frac{\cos^{\scalebox{0.75}[1.0]{-}1}\langle\bar{\boldsymbol{X}}_t, \boldsymbol{e}_k \rangle (\boldsymbol{e}_k - \langle\bar{\boldsymbol{X}}_t, \boldsymbol{e}_k \rangle \bar{\boldsymbol{X}}_t)}{\sqrt{1 - \langle\bar{\boldsymbol{X}}_t, \boldsymbol{e}_k \rangle^2}} \mathrm{d}t + \sigma_t\mathrm{d}\mathbf{B}^{d}_t, \; \bar{\boldsymbol{X}}_0=\boldsymbol{x}_0, \label{eq:bridge95sphere95app} \end{align}\tag{25}\] where we used the logarithm map of Eq. 18 and \(\mathbf{B}^d_t\) is a Brownian motion defined on \(\mathbb{S}^{d-1}\). It is worth noting that Eq. 25 is derived from applying the time change [31] to a simple bridge process: \[\begin{align} \mathrm{d}\bar{\boldsymbol{X}}_t = \frac{1}{T-t} \frac{\cos^{\scalebox{0.75}[1.0]{-}1}\langle \bar{\boldsymbol{X}}_t, \boldsymbol{e}_k\rangle (\boldsymbol{e}_k - \langle \bar{\boldsymbol{X}}_t, \boldsymbol{e}_k\rangle \bar{\boldsymbol{X}}_t)}{\sqrt{1 - \langle \bar{\boldsymbol{X}}_t, \boldsymbol{e}_k\rangle^2}} \mathrm{d}t + \mathrm{d}\mathbf{B}^d_t \;,\; \bar{\boldsymbol{X}}_0 = \boldsymbol{x}_0. \label{eq:bridge95simple95app} \end{align}\tag{26}\]
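For intuition, the logarithm bridge of Eq. 25 can be simulated with a crude tangent-space Euler-Maruyama scheme. The sketch below assumes a constant \(\sigma_t=\sigma\), so that \(\gamma_t = 1/(T-t)\), and projects back onto the sphere after every step; it is an illustration, not the discretization used in our framework.

```python
# Tangent-space Euler-Maruyama sketch of the logarithm bridge on S^{d-1} (Eq. 25),
# assuming constant sigma so that gamma_t = 1 / (T - t).
import numpy as np

def simulate_log_bridge(x0, k, T=1.0, sigma=0.5, n_steps=500, eps=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    d = x0.shape[0]
    e_k = np.eye(d)[k]
    dt = T / n_steps
    X = x0.copy()
    for i in range(n_steps - 1):                 # stop one step early: gamma_t blows up at T
        gamma = 1.0 / (T - i * dt)
        c = np.clip(X @ e_k, -1.0 + eps, 1.0 - eps)
        drift = gamma * np.arccos(c) * (e_k - c * X) / np.sqrt(1.0 - c ** 2)
        noise = rng.standard_normal(d)
        noise -= (noise @ X) * X                 # project the noise onto the tangent space at X
        X = X + drift * dt + sigma * np.sqrt(dt) * noise
        X /= np.linalg.norm(X)                   # retract back onto the sphere
    return X

d, k = 16, 3
X_T = simulate_log_bridge(np.ones(d) / np.sqrt(d), k)
print("final inner product with e_k:", X_T @ np.eye(d)[k])   # expected to be close to 1
```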
Note that the drift of the logarithm bridge process can be rewritten using the geodesic distance \(d_g(\cdot, \cdot)\) as follows: \[\begin{align} \mathrm{d}\bar{\boldsymbol{X}}_t = \Big[ \gamma_t \cos^{\scalebox{0.75}[1.0]{-}1}\langle\bar{\boldsymbol{X}}_t, \boldsymbol{e}_k \rangle \nabla_{\bar{\boldsymbol{X}}_t} d_g(\bar{\boldsymbol{X}}_t, \boldsymbol{e}_k) \Big] \mathrm{d}t + \sigma_t\mathrm{d}\mathbf{B}^{d}_t, \; \bar{\boldsymbol{X}}_0=\boldsymbol{x}_0. \end{align}\] The direction of the drift corresponds to the direction that minimizes the distance between the current state \(\bar{\boldsymbol{X}}_t\) and the endpoint \(\boldsymbol{e}_k\). Since \(\gamma_t\rightarrow\infty\) as \(t\rightarrow T\), the bridge process converges to the endpoint \(\boldsymbol{e}_k\). The convergence behavior can be analyzed by examining the radial process \(r^k_t \mathrel{\vcenter{:}}= d_g(\boldsymbol{e}_k, \boldsymbol{X}_t)\), which we describe below.
Let \(r^{\boldsymbol{w}}_t \mathrel{\vcenter{:}}= d_g(\boldsymbol{w}, \boldsymbol{X}_t)\) for an arbitrary point \(\boldsymbol{w}\in\mathbb{S}^{d-1}\). Then the bridge process of Eq. 25 can be rewritten as follows: \[\begin{align} \mathrm{d}\bar{\boldsymbol{X}}_t &= \gamma_t \frac{r^k_t(\boldsymbol{e}_k - \cos r^k_t \bar{\boldsymbol{X}}_t)}{\sin r^k_t} \mathrm{d}t + \sigma_t \mathrm{d}\mathbf{B}^d_t, \;\; \boldsymbol{X}_0 = \boldsymbol{x}_0, \end{align}\] where \(r^k_t\mathrel{\vcenter{:}}= r^{\boldsymbol{e}_k}_t\). Then the SDE of \(r^{\boldsymbol{w}}_t\) can be derived using Itô’s formula as follows: \[\begin{align} \mathrm{d}r^{\boldsymbol{w}}_t &= \left[ \left\langle \nabla r^{\boldsymbol{w}}_t, \gamma_t \frac{r^k_t(\boldsymbol{e}_k - \cos r^k_t \bar{\boldsymbol{X}}_t)}{\sin r^k_t} \right\rangle + \frac{\sigma^2_t}{2} \Delta r^{\boldsymbol{w}}_t \right] \mathrm{d}t + \Big\langle \nabla r^{\boldsymbol{w}}_t, \sigma_t \mathrm{d}\mathbf{B}^d_t \Big\rangle, \end{align}\] where \(\nabla\) and \(\Delta\) denote the Riemannian gradient and the Laplace-Beltrami operator on \(\mathbb{S}^{d-1}\), respectively. From the identities in Appendix 9.1 and the fact that \(\langle \nabla r^{\boldsymbol{w}}_t, \mathrm{d}\mathbf{B}^d_t \rangle\) is a 1-dimensional Brownian motion ([51] Example 3.3.3), we get the following result: \[\begin{align} \begin{aligned} \mathrm{d}r^{\boldsymbol{w}}_t &= \left[ -\gamma_t \; r^k_t \frac{\langle \boldsymbol{e}_k,\boldsymbol{w}\rangle - \cos r^k_t \cos r^{\boldsymbol{w}}_t}{\sin r^k_t \sin r^{\boldsymbol{w}}_t} + \frac{(d-1)\sigma^2_t}{2}\cot(r^{\boldsymbol{w}}_t) \right] \mathrm{d}t + \sigma_t\mathrm{d}W_t , \\[5pt] r^{\boldsymbol{w}}_0 &\mathrel{\vcenter{:}}= \cos^{\scalebox{0.75}[1.0]{-}1}\langle \boldsymbol{x}_0, \boldsymbol{w} \rangle, \end{aligned} \label{eq:radial95process} \end{align}\tag{27}\] where \(W_t\) denotes a 1-dimensional Brownian motion. For \(\boldsymbol{w}=\boldsymbol{e}_l\), we obtain a simplified formulation: \[\begin{align} &\mathrm{d}r^l_t = \left[ -\gamma_t C(r^k_t, r^l_t) r^k_t + \frac{(d-1)\sigma^2_t}{2}\cot(r^l_t) \right] \mathrm{d}t + \sigma_t\mathrm{d}W_t , \;\; r^l_0 = \cos^{\scalebox{0.75}[1.0]{-}1}\langle \boldsymbol{x}_0, \boldsymbol{e}_l \rangle \\[6pt] &C(r^k_t, r^l_t) = \begin{cases} \phantom{0}1 &\text{ if } k=l \\ -\cot(r^k_t)\cot(r^l_t) &\text{ otherwise } \end{cases}. \end{align}\]
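The radial process itself is a one-dimensional SDE and is cheap to simulate. The sketch below integrates Eq. 27 for \(\boldsymbol{w}=\boldsymbol{e}_k\) (so that \(C=1\)), again assuming a constant \(\sigma\) and \(\gamma_t = 1/(T-t)\); the clipping merely keeps the radius inside \((0,\pi)\) for numerical stability.

```python
# Euler-Maruyama sketch of the radial process r^k_t of Eq. (27) with w = e_k.
import numpy as np

def simulate_radial(r0, d=32, T=1.0, sigma=0.2, n_steps=500, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    r = r0
    for i in range(n_steps - 1):
        gamma = 1.0 / (T - i * dt)
        drift = -gamma * r + 0.5 * (d - 1) * sigma ** 2 / np.tan(r)   # attraction plus cot repulsion
        r = r + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        r = float(np.clip(r, 1e-2, np.pi - 1e-2))                     # keep r inside (0, pi)
    return r

print(simulate_radial(r0=np.pi / 2))   # starting orthogonal to e_k; the radius shrinks towards 0
```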
We provide the statement of the diffusion mixture representation from [26], which extends [52] to Riemannian manifolds. We refer the readers to [26] for a detailed derivation of the diffusion mixture representation for general Riemannian manifolds. We consider Riemannian manifolds that are complete, orientable, connected, and boundaryless.
Proposition 4. Consider a collection of SDEs on a manifold \(\mathcal{M}\) indexed by \(\lambda\in\Lambda\): \[\begin{align} \mathrm{d}\boldsymbol{X}^{\lambda}_t = \eta^{\lambda}(\boldsymbol{X}^{\lambda}_t,t) \mathrm{d}t + \sigma^{\lambda}(\boldsymbol{X}^{\lambda}_t,t) \; \mathrm{d}\mathbf{B}^{\mathcal{M}}_t, \;\; \boldsymbol{X}^{\lambda}_0\sim p_0 \end{align}\] with marginal distribution of \(\boldsymbol{X}^{\lambda}_t\) denoted by \(p^{\lambda}_t\). Let \(\mathcal{L}\) be a mixing distribution over \(\Lambda\). Then a diffusion process on \(\mathcal{M}\) described by the SDE: \[\begin{align} &\mathrm{d}\boldsymbol{X}_t = \eta(\boldsymbol{X}_t,t) \mathrm{d}t + \sigma(\boldsymbol{X}_t,t) \; \mathrm{d}\mathbf{B}^{\mathcal{M}}_t, \;\; \boldsymbol{X}_0\sim p_0 \\ & \eta(x,t) = \int \eta^{\lambda}(x,t)\frac{p^{\lambda}_t(x)}{p_t(x)} \mathcal{L}(\mathrm{d}\lambda) \;,\;\; \sigma(x,t) = \left( \int a^{\lambda}(x,t) \frac{p^{\lambda}_t(x)}{p_t(x)} \mathcal{L}(\mathrm{d}\lambda) \right)^{1/2} \end{align}\] where \(a^{\lambda} \mathrel{\vcenter{:}}= \sigma^{\lambda}(\sigma^{\lambda})^{\top}\), admits the marginal distribution \(p_t\): \[\begin{align} p_t(x) = \int p^{\lambda}_t(x) \mathcal{L}(\mathrm{d}\lambda), \;\; p_0(x) = \int p^{\lambda}_0(x) \mathcal{L}(\mathrm{d}\lambda). \end{align}\]
From the diffusion mixture representation, [26] construct the generative process as a mixture of the bridge processes on \(\mathcal{M}\) as shown in the following proposition.
Proposition 5. Let \(p_0\) and \(p_1\) be probability distributions on a Riemannian manifold \(\mathcal{M}\). Consider a collection of SDEs that describes bridge processes on \(\mathcal{M}\) from \(x\sim p_0\) to \(y\sim p_1\): \[\begin{align} \mathrm{d}\boldsymbol{X}^{x,y}_t = \eta^{x,y}(\boldsymbol{X}^{x,y}_t,t)\mathrm{d}t + \sigma_t\mathrm{d}\mathbf{B}^{\mathcal{M}}_t, \; \boldsymbol{X}_0 = x, \end{align}\] with marginal distribution of \(\boldsymbol{X}^{x,y}\) denoted by \(p^{x,y}_t\). Then the following SDE defines a diffusion process that transports an initial distribution \(p_0\) to a target distribution \(p_1\): \[\begin{align} &\mathrm{d}\boldsymbol{X}_t = \eta(\boldsymbol{X}_t,t) \mathrm{d}t + \sigma_t\mathrm{d}\mathbf{B}^{\mathcal{M}}_t, \; \boldsymbol{X}_0\sim p_0, \\ &\eta(z,t) \mathrel{\vcenter{:}}= \iint \eta^{x,y}(z,t) \frac{p^{x,y}_t(z)}{p_t(z)} p_0(\mathrm{d}\text{vol}_x) p_1(\mathrm{d}\text{vol}_y), \\ &p_t(z) \mathrel{\vcenter{:}}= \iint p^{x,y}_t(z)p_0(\mathrm{d}\text{vol}_x) p_1(\mathrm{d}\text{vol}_y). \end{align}\]
In the case of \(\mathcal{M}=\mathbb{S}^{d-1}\), we derive the generative process for the reparameterized data distribution \(p_{data}(x) = \sum^{d}_{k=1} p_k \delta(x \!-\! {\boldsymbol{e}_k})\), by mixing the logarithm bridge processes on \(\mathbb{S}^{d-1}\) (Eq. 3 ).
Corollary 3. Let \(p_{data}(x) = \sum^{d}_{k=1} p_k \delta(x \!-\! {\boldsymbol{e}_k})\) be a data distribution on \(\mathbb{S}^{d-1}\). Then the following SDE defines a diffusion process that transports the initial point \(\boldsymbol{x}_0\in\mathbb{S}^{d-1}\) to the distribution \(p_{data}\): \[\begin{align} &\mathrm{d}\boldsymbol{X}_t = \left[ \, \sum^d_{k=1} p_{T|t}(\boldsymbol{e}_k|\boldsymbol{X}_t)\, \eta^k(\boldsymbol{X}_t,t) \right] \mathrm{d}t + \sigma_t\mathrm{d}\mathbf{B}^{d}_t, \; \boldsymbol{X}_0 = \boldsymbol{x}_0, \\ &\eta^k(z,t) \mathrel{\vcenter{:}}= \gamma_t \frac{\cos^{\scalebox{0.75}[1.0]{-}1}\langle z, \boldsymbol{e}_k \rangle(\boldsymbol{e}_k - \langle z, \boldsymbol{e}_k \rangle z)}{\sqrt{1 - \langle z, \boldsymbol{e}_k \rangle^2}} , \end{align}\] where \(p_{T|t}(\boldsymbol{e}_k|\boldsymbol{X}_t)\) represents the conditional probability that the process will reach the endpoint \(\boldsymbol{e}_k\) at time \(T\), given the current state \(\boldsymbol{X}_t\) at time \(t\).
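A single step of the generative SDE in Corollary 3 can be sketched as follows. Here `predict_probs` stands in for the learned conditional probability \(p_{T|t}(\boldsymbol{e}_k|\boldsymbol{X}_t)\); the uniform placeholder, the schedule values, and the step size are all hypothetical.

```python
# One Euler-Maruyama step of the generative mixture process of Corollary 3.
import numpy as np

def eta_k(z, e_k, gamma_t, eps=1e-6):
    """Logarithm-bridge drift towards the vertex e_k (Eq. 25)."""
    c = np.clip(z @ e_k, -1.0 + eps, 1.0 - eps)
    return gamma_t * np.arccos(c) * (e_k - c * z) / np.sqrt(1.0 - c ** 2)

def generative_step(X, probs, gamma_t, sigma_t, dt, rng):
    d = X.shape[0]
    E = np.eye(d)
    drift = sum(probs[k] * eta_k(X, E[k], gamma_t) for k in range(d))  # probability-weighted drifts
    noise = rng.standard_normal(d)
    noise -= (noise @ X) * X                       # tangent-space noise
    X_new = X + drift * dt + sigma_t * np.sqrt(dt) * noise
    return X_new / np.linalg.norm(X_new)           # project back onto the sphere

rng = np.random.default_rng(0)
d = 8
X = np.ones(d) / np.sqrt(d)                        # e.g. start at the barycenter
predict_probs = lambda X, t: np.full(d, 1.0 / d)   # placeholder for p_{T|t}(e_k | X_t)
X = generative_step(X, predict_probs(X, 0.0), gamma_t=2.0, sigma_t=0.5, dt=0.01, rng=rng)
```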
We derive a new family of generative processes by constructing a mixture over the time marginals of generative processes. We first present a proposition for mixing diffusion processes with a general time-dependent mixing schedule.
Proposition 6. Consider a collection of \(n\) SDEs on a closed Riemannian manifold \(\mathcal{M}\): \[\begin{align} \mathrm{d}\boldsymbol{X}^i_t = \eta^i(\boldsymbol{X}^i_t,t) \mathrm{d}t + \sigma^i(\boldsymbol{X}^i_t,t) \; \mathrm{d}\mathbf{B}^{\mathcal{M}}_t, \;\; \boldsymbol{X}^i_0\sim p_0 \end{align}\] with marginal distribution of \(\boldsymbol{X}^i_t\) denoted by \(p^i_t\). Let \(\lambda^i\in C^1([0,T])\) satisfy \(\lambda^i_t \geq 0\) and \(\sum_{i=1}^n \lambda^i_t = 1\) for all \(t\). Then there exists a diffusion process with the marginal distribution \(p_t\): \[\begin{align} p_t(x) = \sum^n_{i=1} \lambda^i_t p^i_t(x) . \label{eq:mixture95path95marginal} \end{align}\qquad{(4)}\]
Proof. We show that there exists a scalar potential \(\Phi:\mathcal{M}\times [0,T]\rightarrow\mathbb{R}\) such that the following SDE defines a diffusion process that yields the desired marginal distribution: \[\begin{align} &\mathrm{d}\boldsymbol{X}_t = \eta(\boldsymbol{X}_t,t)\mathrm{d}t + \sigma(\boldsymbol{X}_t,t) \mathrm{d}\mathbf{B}^{\mathcal{M}}_t, \label{eq:mixture95path95sde} \\ & \eta(x,t) \mathrel{\vcenter{:}}= \sum^n_{i=1} \lambda^i_t \eta^i(x,t)\frac{p^i_t(x)}{p_t(x)} - \frac{\nabla\Phi(x,t)}{p_t(x)} -\frac{1}{2}\sum^n_{i=1}\lambda^i_t a^i(x,t) \nabla\!\left( \frac{p^i_t(x)}{p_t(x)} \right)\\ & \sigma(x,t) \mathrel{\vcenter{:}}= \left( \sum^n_{i=1} \lambda^i_t a^i(x,t) \frac{p^i_t(x)}{p_t(x)}\right)^{1/2} , \end{align}\tag{28}\] where \(a^i \!\mathrel{\vcenter{:}}=\! \sigma^i(\sigma^i)^{\top}\). Here, we assume that \(\eta^i\) and \(\sigma^i\) are bounded and \(a^i\) are uniformly elliptic.
First, define a function \(f:\mathcal{M}\times[0,T]\rightarrow\mathbb{R}\) that satisfies the zero-mean condition: \[\begin{align} f(x,t) &\mathrel{\vcenter{:}}= \sum^n_{i=1} \frac{\mathrm{d}\lambda^i_t}{\mathrm{d}t}p^{i}_t(x) \;;\; \int_{\mathcal{M}} f(x,t)\mathrm{d}\text{vol}_x = \sum^n_{i=1} \frac{\mathrm{d}\lambda^i_t}{\mathrm{d}t} \int_{\mathcal{M}} p^{i}_t(x) \mathrm{d}\text{vol}_x = \sum^n_{i=1} \frac{\mathrm{d}\lambda^i_t}{\mathrm{d}t} = 0, \end{align}\] where we used the fact that \(\sum^n_{i=1}\lambda^i_t=1\) for all \(t\). As \(\mathcal{M}\) is closed, its Laplace–Beltrami operator is invertible on the subspace of zero-mean functions. Therefore, the Poisson equation \(\Delta \Phi(\cdot,t) = f(\cdot, t)\) admits a weak solution \(\Phi\).
From the definition of \(p_t\), we can derive the following equality: \[\begin{align} \frac{\partial p_t(x)}{\partial t} &= \sum^n_{i=1} \frac{\partial (\lambda^i_t p^i_t(x))}{\partial t} = \sum^n_{i=1} \lambda^i_t \frac{\partial p^i_t(x)}{\partial t} + \sum^n_{i=1} \frac{\mathrm{d} \lambda^i_t}{\mathrm{d} t} p^i_t(x) \\ &= \sum^n_{i=1} \lambda^i_t \left[ -\text{div}\Big( p^i_t(x) \eta^i(x,t) \Big) + \frac{1}{2} \text{div}\Big( a^i(x,t) \nabla p^i_t(x) \Big) \right] + \Delta \Phi(x,t) \\ &= -\text{div}\left( \sum^n_{i=1} \lambda^i_t p^i_t(x) \eta^i(x,t) \right) + \frac{1}{2}\sum^n_{i=1} \lambda^i_t \text{div}\Big( a^i(x,t) \nabla p^i_t(x) \Big) + \text{div}(\nabla \Phi(x,t)) \\ \begin{aligned} &= -\text{div}\left( \sum^n_{i=1} \lambda^i_t p^i_t(x) \eta^i(x,t) - \nabla \Phi(x,t) \right) \\ &\phantom{aaaaaaa} + \frac{1}{2}\sum^n_{i=1} \text{div}\left( a^i(x,t) \left[ \nabla p_t(x)\frac{\lambda^i_t p^i_t(x)}{p_t(x)} + p_t(x)\lambda^i_t\nabla\left(\frac{p^i_t(x)}{p_t(x)}\right) \right] \right) \end{aligned} \label{eq:mixture95path95before95fokker} \end{align}\tag{29}\] where we used the product rule for divergence in \(\lambda^i_t p^i_t(x) = p_t(x) \frac{\lambda^i_t p^i_t(x)}{p_t(x)}\).
Reordering the terms in Eq. 29 , we obtain the following result: \[\begin{align} \frac{\partial p_t(x)}{\partial t} &=-\text{div} \left( p_t(x) \left[ \sum^n_{i=1} \lambda^i_t \eta^i(x,t)\frac{p^i_t(x)}{p_t(x)} - \frac{\nabla\Phi(x,t)}{p_t(x)} -\frac{1}{2} \sum^n_{i=1}\lambda^i_t a^i(x,t) \nabla\!\left( \frac{p^i_t(x)}{p_t(x)} \right) \right] \right) \notag \\ &\phantom{aaaaaaa} + \frac{1}{2} \text{div}\left( \left[ \sum^n_{i=1} \lambda^i_t a^i(x,t) \frac{p^i_t(x)}{p_t(x)} \right]\nabla p_t(x) \right) , \label{eq:mixture95path95fokker} \end{align}\tag{30}\] which corresponds to the Fokker-Planck equation for the SDE of Eq. 28 . Therefore, the diffusion process described by the SDE in Eq. 28 has the marginal distribution \(p_t\) stated in Proposition 6. ◻
From Proposition 6, we can derive a new family of generative processes by constructing a mixture over the time marginals of generative processes \(\{\mathbb{Q}^i\!: 1\leq i\leq n\}\): \[\begin{align} \mathbb{Q}^{mix}_t \mathrel{\vcenter{:}}= \sum^{n}_{i=1} \lambda^{i}_t \mathbb{Q}^i_t \;\;,\;\; \sum^{n}_{i=1} \lambda^{i}_t = 1 \,,\; 0\leq \lambda^i_t \leq 1 \,, \end{align}\] where \(\lambda^i_t\) is the time-dependent mixing schedule assigned to the \(i\)-th generative path.
One example is creating a mixture path by mixing the masked diffusion and the uniform diffusion on \(\mathbb{S}^{d-1}\), as defined in Section 3.2.
Corollary 4. Let \(p^{mask}_t\) and \(p^{unif}_t\) denote the marginal distributions of the masked diffusion and the uniform diffusion on \(\mathbb{S}^{d-1}\), as defined in Section 3.2, respectively. Then there exists a diffusion process on \(\mathbb{S}^{d-1}\) whose marginal distribution at time \(t\) satisfies: \[\begin{align} p_t(x) = \lambda_t p^{mask}_t(x) + (1 - \lambda_t) p^{unif}_t(x) , \end{align}\] where \(\lambda_t\in[0,1]\) for all \(t\in[0,T]\).
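Sampling \(\boldsymbol{X}_t\) from the mixture-path marginal of Corollary 4 reduces to branching: with probability \(\lambda_t\) draw from the masked-path marginal and otherwise from the uniform-path marginal. In the sketch below, the two samplers are placeholders (e.g., they could be the Riemannian-normal approximations used for training).

```python
# Branch sampling from the mixture marginal lambda_t * p^mask_t + (1 - lambda_t) * p^unif_t.
import numpy as np

def sample_mixture_marginal(t, lam, sample_mask_marginal, sample_unif_marginal, rng):
    if rng.random() < lam(t):                    # choose the masked path with probability lambda_t
        return sample_mask_marginal(t)
    return sample_unif_marginal(t)               # otherwise the uniform path

# usage with a linear schedule and dummy samplers (placeholders)
rng = np.random.default_rng(0)
d = 8
dummy_sampler = lambda t: np.ones(d) / np.sqrt(d)
x_t = sample_mixture_marginal(0.3, lambda t: 1.0 - t, dummy_sampler, dummy_sampler, rng)
```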
We derive the point-wise likelihood bound and the upper bound on the negative log-likelihood of our generative model, defined as the parameterized mixture process \(\mathbb{Q}^{\theta}\) with the drift \(\eta_{\theta}\) in Eq. 8 .
Let \(\mathbb{Q}^k\) be a bridge process with starting point \(\boldsymbol{x}_0\) and endpoint \(\boldsymbol{e}_k\). From the KL divergence between \(\mathbb{Q}^{\theta}\) and \(\mathbb{Q}^k\), we can derive a point-wise upper bound on the negative log-likelihood using the Girsanov theorem on compact manifolds ([30], Corollary H.3): \[\begin{align} -\log \hat{p}_{\theta}(\boldsymbol{e}_k) &= D_{KL}(\delta(\boldsymbol{e}_k) \| \hat{p}_{\theta}(\boldsymbol{e}_k)) = D_{KL}(\mathbb{Q}^k_T \| \mathbb{Q}^{\theta}_T) \\[5pt] &\leq D_{KL}(\mathbb{Q}^k \| \mathbb{Q}^{\theta}) = \mathbb{E}_{\boldsymbol{X}\sim\mathbb{Q}^k} \left[ \frac{1}{2}\int^T_0 \bigg\| \sigma_t^{-1} \big( \eta_{\theta}(\boldsymbol{X}_t,t) - \eta^k(\boldsymbol{X}_t,t) \big) \bigg\|^2_2 \mathrm{d}t \right], \end{align}\] where the inequality comes from the data-processing inequality. The point-wise likelihood bound leads to the upper bound on the negative log-likelihood of our model: \[\begin{align} \mathbb{E}_{\boldsymbol{z}\sim p_{data}}\big[-\log \hat{p}_{\theta}(\boldsymbol{z})\big] \leq \mathbb{E}_{\substack{\boldsymbol{e}_k\sim p_{data} \\ \boldsymbol{X}\sim\mathbb{Q}^k}} \left[\frac{1}{2}\int^T_0 \bigg\| \sigma_t^{-1} \left( \eta_{\theta}(\boldsymbol{X}_t,t) - \eta^k(\boldsymbol{X}_t,t) \right) \bigg\|^2_2 \mathrm{d}t \right]. \end{align}\]
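In practice, this bound can be estimated by discretizing the time integral along simulated bridge trajectories. The sketch below is a generic Monte Carlo estimator in which the model drift `eta_theta`, the bridge drift `eta_k`, and the diffusion coefficient `sigma` are passed in as callables; it is illustrative rather than the exact implementation.

```python
# Monte Carlo estimate of (1/2) * integral_0^T || sigma_t^{-1} (eta_theta - eta^k) ||^2 dt
# along one simulated bridge trajectory.
import numpy as np

def nll_upper_bound(traj, ts, eta_theta, eta_k, sigma):
    """traj: (n_steps, d) bridge states X_t ~ Q^k; ts: matching time grid."""
    total = 0.0
    for i in range(len(ts) - 1):
        dt = ts[i + 1] - ts[i]
        diff = eta_theta(traj[i], ts[i]) - eta_k(traj[i], ts[i])
        total += 0.5 * float(np.sum((diff / sigma(ts[i])) ** 2)) * dt
    return total

# Averaging this quantity over e_k ~ p_data and trajectories X ~ Q^k gives an
# estimate of the upper bound on the expected negative log-likelihood.
```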
We show that minimizing the cross-entropy-based loss defined in Eq. 11 guarantees maximizing the likelihood of our generative model defined as the parameterized mixture process in Eq. 8 .
We start by deriving a uniform bound on the drift of the bridge process defined in Eq. 3 : \[\begin{align} \Big\| \eta^l(z,t) \Big\|_2 &= \left\| \gamma_t \frac{\cos^{\scalebox{0.75}[1.0]{-}1}\langle z,\boldsymbol{e}_l\rangle (\boldsymbol{e}_l - \langle z,\boldsymbol{e}_l\rangle z)}{\sqrt{1 - \langle z,\boldsymbol{e}_l\rangle^2}} \right\|_2 = \gamma_t \cos^{\scalebox{0.75}[1.0]{-}1}\langle z,\boldsymbol{e}_l\rangle \leq \pi\gamma_t. \end{align}\] Then the triangle inequality gives the following: \[\begin{align} &\left\| \sum^d_{l=1} \big\langle p_{\theta}(x,t), \boldsymbol{e}_l \big\rangle \eta^l(x,t) - \eta^k(x,t) \right\|^2_2 \leq \left( \sum^d_{l=1} \Big\lvert \big\langle p_{\theta}(x,t), \boldsymbol{e}_l \big\rangle - \delta_{k,l} \Big\rvert \big\| \eta^l(x,t) \big\|_2 \right)^2 \\ &\leq \pi^2\gamma^2_t \left( \sum^d_{l=1} \Big\lvert \big\langle p_{\theta}(x,t), \boldsymbol{e}_l \big\rangle - \delta_{k,l} \Big\rvert \right)^2 \leq -2 \pi^2\gamma^2_t \log \Big\langle p_{\theta}(x,t), \boldsymbol{e}_k \Big\rangle . \label{eq:triangle95ineq95app} \end{align}\tag{31}\] The last inequality follows from Pinsker’s inequality applied to the total variation distance between \(p_{\theta}(x,t)\) and the one-hot distribution \(\boldsymbol{e}_k\).
From Eq. 31 , we derive the upper bound for the maximum likelihood training objective \(\mathcal{L}(\theta)\) in Eq. 10 as follows: \[\begin{align} \mathcal{L}(\theta) &= \mathbb{E}_{\substack{\boldsymbol{e}_k\sim p_{data} \\ \boldsymbol{X}\sim\mathbb{Q}^k}} \left[ \frac{1}{2} \int^T_0 \sigma_t^{-2} \Bigg\| \sum^d_{l=1} \big\langle p_{\theta}(\boldsymbol{X}_t,t), \boldsymbol{e}_l \big\rangle \eta^l(\boldsymbol{X}_t,t) - \eta^k(\boldsymbol{X}_t,t) \Bigg\|^2_2 \mathrm{d}t \right] \\[4pt] &\leq \mathbb{E}_{\substack{\boldsymbol{e}_k\sim p_{data} \\ \boldsymbol{X}\sim\mathbb{Q}^k}} \left[ \int^T_0 -\frac{2\pi^2\gamma^2_t}{\sigma^2_t} \log \big\langle p_{\theta}(\boldsymbol{X}_t,t), \boldsymbol{e}_k \big\rangle \mathrm{d}t \right] \\ \begin{aligned} &\leq \mathbb{E}_{\substack{\boldsymbol{e}_k\sim p_{data} \\ \boldsymbol{X}\sim\mathbb{Q}^k}} \left[ \left( \sup_{t\in[0,T-\epsilon]} \frac{2\pi^2\gamma^2_t}{\sigma^2_t} \right) \int^{T-\epsilon}_0 -\log \big\langle p_{\theta}(\boldsymbol{X}_t,t), \boldsymbol{e}_k \big\rangle \mathrm{d}t \right] \\ & \quad\quad + \mathbb{E}_{\substack{\boldsymbol{e}_k\sim p_{data} \\ \boldsymbol{X}\sim\mathbb{Q}^k}} \left[ \int^T_{T-\epsilon} -\frac{2\pi^2\gamma^2_t}{\sigma^2_t} \log \big\langle p_{\theta}(\boldsymbol{X}_t,t), \boldsymbol{e}_k \big\rangle \mathrm{d}t \right] \end{aligned} \label{eq:objective95bound95term} \\[3pt] &\leq M_{\epsilon} \mathcal{L}^{CE}(\theta) + F(\epsilon), \end{align}\tag{32}\] where \(M_{\epsilon} \mathrel{\vcenter{:}}= \sup_{t\in[0,T-\epsilon]} 2\pi^2\gamma^2_t/\sigma^2_t\) and \(F(\epsilon)\) denotes the last term of Eq. 32 . Since \(\boldsymbol{X}\!\sim\!\mathbb{Q}^k\) is the bridge process with endpoint \(\boldsymbol{e}_k\), \(\boldsymbol{X}_t\) converges to \(\boldsymbol{e}_k\) as \(t\rightarrow T\) and \(\langle p_{\theta}(\boldsymbol{X}_{T-\epsilon}, T-\epsilon), \boldsymbol{e}_k \rangle\approx 1\) for sufficiently small \(\epsilon>0\). As a result, \(F(\epsilon)\approx 0\) for sufficiently small \(\epsilon\), which leads to the following result: \[\begin{align} \mathcal{L}(\theta) \leq M \mathcal{L}^{CE}(\theta), \end{align}\] for some constant \(M>0\). Therefore, minimizing the cross-entropy-based loss \(\mathcal{L}^{CE}(\theta)\) approximately guarantees maximizing the likelihood.
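The resulting cross-entropy surrogate only requires the model's categorical prediction along the trajectory. A minimal PyTorch sketch is given below; the integration weights (e.g., \(\mathrm{d}t\) or importance weights) and the exact discretization of Eq. 11 are left abstract.

```python
# Minimal sketch of the time-integrated cross-entropy surrogate L^CE.
import torch
import torch.nn.functional as F

def cross_entropy_loss(logits, k, weights):
    """logits: (n_times, d) model outputs p_theta(X_t, t) along one trajectory,
    k: index of the true endpoint e_k, weights: (n_times,) integration weights."""
    log_probs = F.log_softmax(logits, dim=-1)
    return -(weights * log_probs[:, k]).sum()

# illustrative usage
logits = torch.randn(10, 28, requires_grad=True)   # e.g. 10 time points, vocabulary of 28
loss = cross_entropy_loss(logits, k=3, weights=torch.full((10,), 0.1))
loss.backward()
```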
Let \(\boldsymbol{X}_{t|0,T}\) denote the mixture process \(\{\boldsymbol{X}_t\}^T_{t=0}\) on \(\mathbb{S}^{d-1}\) conditioned to the endpoints \(\boldsymbol{X}_0=\boldsymbol{x}_0\) and \(\boldsymbol{X}_T=\boldsymbol{x}_1\). Then \(\boldsymbol{X}_{t|0,T}\) corresponds to a bridge process described by the following SDE: \[\begin{align} \mathrm{d}\bar{\boldsymbol{X}}_t = \gamma_t \frac{\cos^{\scalebox{0.75}[1.0]{-}1}\langle\bar{\boldsymbol{X}}_t, \boldsymbol{x}_1 \rangle (\boldsymbol{x}_1 - \langle\bar{\boldsymbol{X}}_t, \boldsymbol{x}_1 \rangle \bar{\boldsymbol{X}}_t)}{\sqrt{1 - \langle\bar{\boldsymbol{X}}_t, \boldsymbol{x}_1 \rangle^2}} \mathrm{d}t + \sigma_t\mathrm{d}\mathbf{B}^{d}_t, \; \bar{\boldsymbol{X}}_0=\boldsymbol{x}_0 . \end{align}\]
We can derive the projection \(z^T_t = \langle\boldsymbol{X}_{t|0,T}, \boldsymbol{x}_1\rangle\) using Itô’s formula for \(f_T(\cdot)\mathrel{\vcenter{:}}= \langle \cdot, \boldsymbol{x}_1 \rangle\): \[\begin{align} \begin{aligned} \mathrm{d}z^T_t &= \left[ \left\langle \nabla f_T(\bar{\boldsymbol{X}}_t), \gamma_t \frac{\cos^{\scalebox{0.75}[1.0]{-}1}\langle\bar{\boldsymbol{X}}_t, \boldsymbol{x}_1 \rangle (\boldsymbol{x}_1 - \langle\bar{\boldsymbol{X}}_t, \boldsymbol{x}_1 \rangle \bar{\boldsymbol{X}}_t)}{\sqrt{1 - \langle\bar{\boldsymbol{X}}_t, \boldsymbol{x}_1 \rangle^2}} \right\rangle \!+\! \frac{1}{2}\sigma^2_t \Delta f_T(\bar{\boldsymbol{X}}_t) \right]\! \mathrm{d}t \\ &\phantom{==} + \sigma_t \Big\langle \nabla f_T(\bar{\boldsymbol{X}}_t), \mathrm{d}\mathbf{B}^d_t \Big \rangle \end{aligned} \\[6pt] \begin{aligned} &= \left[ \left\langle \boldsymbol{x}_1 - \left\langle \bar{\boldsymbol{X}}_t, \boldsymbol{x}_1\right\rangle \bar{\boldsymbol{X}}_t, \gamma_t \frac{\cos^{\scalebox{0.75}[1.0]{-}1}\! z^T_t}{\sqrt{1 - (z^T_t)^2}} \Big(\boldsymbol{x}_1 - \left\langle \bar{\boldsymbol{X}}_t, \boldsymbol{x}_1\right\rangle \bar{\boldsymbol{X}}_t \Big) \right\rangle - \frac{(d-1)\sigma^2_t}{2}z^T_t \right]\mathrm{d}t \\ &\phantom{==} + \sigma_t\sqrt{1 - (z^T_t)^2}\mathrm{d}W_t \end{aligned} \\[6pt] &= \left[ \gamma_t \cos^{\scalebox{0.75}[1.0]{-}1}\! z^T_t \sqrt{1 - (z^T_t)^2} -\frac{(d-1)\sigma^2_t}{2}z^T_t \right]\mathrm{d}t + \sigma_t\sqrt{1 - (z^T_t)^2}\mathrm{d}W_t, \end{align}\] where we have used the identities \(\nabla f_T(\boldsymbol{z}) = \boldsymbol{x}_1 - \langle \boldsymbol{z}, \boldsymbol{x}_1 \rangle \boldsymbol{z}\) and \(\Delta f_T(\boldsymbol{z}) = -(d-1) \langle \boldsymbol{z}, \boldsymbol{x}_1\rangle\). Note that the Laplace-Beltrami operator defined on \(\mathbb{S}^{d-1}\) has a simple and tractable form due to the radial symmetry of the hypersphere.
Similarly, \(z^0_t = \langle\bar{\boldsymbol{X}}_t, \boldsymbol{x}_0\rangle\) can be derived using Itô’s formula for \(f_0(\boldsymbol{z})\mathrel{\vcenter{:}}= \langle \boldsymbol{z}, \boldsymbol{x}_0 \rangle\): \[\begin{align} \mathrm{d}z^0_t &= \left[ \gamma_t \frac{\cos^{\scalebox{0.75}[1.0]{-}1}\! z^T_t}{\sqrt{1 - (z^T_t)^2}} \Big(\langle\boldsymbol{x}_0,\boldsymbol{x}_1\rangle - z^0_t z^T_t \Big) -\frac{(d-1)\sigma^2_t}{2}z^0_t \right]\mathrm{d}t + \sigma_t\sqrt{1 - (z^0_t)^2}\mathrm{d}W_t. \end{align}\]
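The projection reduces the sphere-valued bridge to scalar SDEs that are cheap to simulate. The sketch below integrates the \(z^T_t\) equation above with constant \(\sigma\) and \(\gamma_t = 1/(T-t)\); these schedule choices are assumptions made for illustration.

```python
# 1-D Euler-Maruyama sketch of the projected process z^T_t = <X_t, x_1>.
import numpy as np

def simulate_zT(z0, d=32, T=1.0, sigma=0.2, n_steps=500, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = z0
    for i in range(n_steps - 1):
        gamma = 1.0 / (T - i * dt)
        drift = gamma * np.arccos(z) * np.sqrt(1.0 - z ** 2) - 0.5 * (d - 1) * sigma ** 2 * z
        z = z + drift * dt + sigma * np.sqrt(max(1.0 - z ** 2, 0.0)) * np.sqrt(dt) * rng.standard_normal()
        z = float(np.clip(z, -1.0 + 1e-6, 1.0 - 1e-6))
    return z

print(simulate_zT(z0=0.0))   # starting orthogonal to x_1 (masked case); z^T_t approaches 1
```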
Since the masked bridge process has \(\boldsymbol{x}_0=\boldsymbol{e}_m\) and \(\boldsymbol{x}_1=\boldsymbol{e}_k\) with \(\langle \boldsymbol{e}_m, \boldsymbol{e}_k\rangle=0\) for every non-mask token \(\boldsymbol{e}_k\), the projected processes are described as follows: \[\begin{align} \mathrm{d}z^l_t = \left[ \gamma_t \frac{\cos^{\scalebox{0.75}[1.0]{-}1}z^k_t}{\sqrt{1 - (z^k_t)^2}} \bigg( \delta_{l,k} - z^l_t z^k_t \bigg) -\frac{(d-1)\sigma^2_t}{2}z^l_t \right]\mathrm{d}t + \sigma_t\sqrt{1 - (z^l_t)^2}\mathrm{d}W^l_t, \end{align}\] with initial condition \(z^l_0 = 0\) for all \(l\), where the \(W^l_t\) are 1-dimensional standard Wiener processes.
The uniform bridge process has \(\boldsymbol{x}_0=\sum^{d}_{i=1}\boldsymbol{e}_i/\sqrt{d}\) and \(\boldsymbol{x}_1=\boldsymbol{e}_k\), and the projected processes have a simple form: \[\begin{align} \begin{aligned} &\mathrm{d}z^l_t = \left[ \gamma_t \frac{\cos^{\scalebox{0.75}[1.0]{-}1}z^k_t}{\sqrt{1 - (z^k_t)^2}} \bigg( A_{l,k} - z^l_t z^k_t \bigg) -\frac{(d-1)\sigma^2_t}{2}z^l_t \right]\mathrm{d}t + \sigma_t\sqrt{1 - (z^l_t)^2}\mathrm{d}W^l_t, \\[5pt] &A_{l,k} = \begin{cases} 1 / \sqrt{d} & \text{ if } l\neq k \\ 1 & \text{ otherwise} \end{cases} \end{aligned} \end{align}\] with initial condition \(z^l_0 = 1/\sqrt{d}\) for all \(l\).
Here we derive the parameters of the Riemannian normal distribution from the projected processes: \[\begin{align} \mathrm{d}z^T_t &= \left[ \gamma_t \cos^{\scalebox{0.75}[1.0]{-}1}\!z^T_t \, \sqrt{1 - (z^T_t)^2} -\frac{(d-1)\sigma^2_t}{2} z^T_t \right] \mathrm{d}t + \sigma_t\sqrt{1 - (z^T_t)^2}\, \mathrm{d}W^T_t, \\[4pt] \mathrm{d}z^0_t &= \left[ \gamma_t \frac{\cos^{\scalebox{0.75}[1.0]{-}1}\!z^T_t}{\sqrt{1 - (z^T_t)^2}} \Big( z^T_0 - z^0_t z^T_t \Big) -\frac{(d-1)\sigma^2_t}{2}z^0_t \right] \mathrm{d}t + \sigma_t\sqrt{1 - (z^0_t)^2}\, \mathrm{d}W^0_t, \end{align}\] with initial conditions \(z^T_0 = \left\langle\boldsymbol{X}_0, \boldsymbol{X}_T\right\rangle\) and \(z^0_0=1\). From the definition \(z^T_t\mathrel{\vcenter{:}}= \langle \boldsymbol{X}_{t|0,T}, \boldsymbol{x}_1\rangle\), we establish the connection between the mean projection \(\mathbb{E}z^T_t\) and the parameters \(\alpha_t\) and \(\rho_t\): \[\begin{align} \mathbb{E}z^T_t &\approx \mathbb{E}_{\boldsymbol{z}} \big\langle\exp_{\boldsymbol{\mu}_t}(\rho_t \boldsymbol{z}), \boldsymbol{x}_1 \big\rangle, \;\; \boldsymbol{z}\sim \mathcal{N}_{T_{\boldsymbol{\mu}_t}\mathbb{S}^{d-1}}(\mathbf{0}, \mathbf{I}) \\ &\stackrel{\text{Eq.}~\eqref{eq:sphere95exp95log}}{\phantom{..}=\phantom{..}} \mathbb{E}_{\boldsymbol{z}}\left\langle \cos(\rho_t\|\boldsymbol{z}\|)\boldsymbol{\mu}_t + \sin(\rho_t\|\boldsymbol{z}\|)\frac{\boldsymbol{z}}{\|\boldsymbol{z}\|}, \boldsymbol{x}_1 \right\rangle \\ &= \mathbb{E}_{\boldsymbol{z}}\bigg(\cos(\rho_t\|\boldsymbol{z}\|) \left\langle \boldsymbol{\mu}_t, \boldsymbol{x}_1\right\rangle \bigg) + \underbrace{\mathbb{E}_{\boldsymbol{z}}\bigg( \sin(\rho_t\|\boldsymbol{z}\|) \left\langle \frac{\boldsymbol{z}}{\|\boldsymbol{z}\|}, \boldsymbol{x}_1 \right\rangle\bigg)}_{=0} \label{eq:zero95term} \\ &\stackrel{\text{Eq.}~\eqref{eq:riemannian95normal}}{\phantom{..}=\phantom{..}} \mathbb{E}_{\boldsymbol{z}}\cos(\rho_t\|\boldsymbol{z}\|) \left\langle \frac{\alpha_t}{\sin\phi_0}\boldsymbol{x}_1 + \left(\sqrt{1-\alpha_t^2} - \frac{\alpha_t\cos\phi_0}{\sin\phi_0}\right)\boldsymbol{x}_0 , \boldsymbol{x}_1\right\rangle \\ &= \mathbb{E}_{\boldsymbol{z}}\cos(\rho_t\|\boldsymbol{z}\|) \left( \sin\phi_0\alpha_t + \cos\phi_0 \sqrt{1 - \alpha_t^2} \right), \end{align}\tag{33}\] for \(\phi_0 \!\mathrel{\vcenter{:}}=\! \cos^{\scalebox{0.75}[1.0]{-}1}\langle\boldsymbol{X}_0,\!\boldsymbol{X}_T\rangle\), where the last term in Eq. 33 is zero due to radial symmetry. Similarly, \[\begin{align} \mathbb{E}z^0_t \approx \mathbb{E}_{\boldsymbol{z}} \langle\exp_{\boldsymbol{\mu}_t}(\rho_t \boldsymbol{z}), \boldsymbol{x}_0 \rangle &= \mathbb{E}_{\boldsymbol{z}}\cos(\rho_t\|\boldsymbol{z}\|) \sqrt{1 - \alpha_t^2}. \end{align}\] Notably, we have the following identity for \(\boldsymbol{z}\sim\mathcal{N}_{T_{\boldsymbol{\mu}_t}\mathbb{S}^{d-1}}(\mathbf{0}, \mathbf{I})\): \[\begin{align} \mathbb{E}_{\boldsymbol{z}}\cos(\rho_t\|\boldsymbol{z}\|) = e^{-\rho_t^2/2} {}_{1}f_1\left(\frac{d}{2},\frac{1}{2},-\frac{\rho_t^2}{2} \right) \mathrel{\vcenter{:}}= F_d(\rho_t), \label{eq:damped95Kummer95function} \end{align}\tag{34}\] where \({}_1f_1\) denotes the Kummer function, also known as the confluent hypergeometric function.
Therefore, the parameters \(\alpha_t\) and \(\rho_t\) can be derived from the mean projections \(\mathbb{E}z^T_t\) and \(\mathbb{E}z^0_t\): \[\begin{align} \alpha_t = \sqrt{\frac{(\mathbb{E}z^T_t / \mathbb{E}z^0_t - \cos\phi_0)^2}{\sin^2\phi_0 + (\mathbb{E}z^T_t / \mathbb{E}z^0_t - \cos\phi_0)^2}} \;,\;\; \rho_t &= F_d^{\scalebox{0.85}[1.0]{-}1}\left( \mathbb{E}z^0_t / \sqrt{1 - \alpha_t^2} \right) . \end{align}\]
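The recovery of \((\alpha_t, \rho_t)\) from the mean projections can be implemented directly, with \(F_d\) evaluated via the Kummer function and inverted numerically. The sketch below uses `scipy.special.hyp1f1` for \({}_1f_1\) and a bracketing root finder; it assumes the target value lies between \(F_d(\rho_{\max})\) and \(F_d(0)=1\), and the example inputs are made up rather than taken from our experiments.

```python
# Recovering (alpha_t, rho_t) from the mean projections E[z^T_t] and E[z^0_t].
import numpy as np
from scipy.special import hyp1f1
from scipy.optimize import brentq

def F_d(rho, d):
    """Damped Kummer function of Eq. (34)."""
    return np.exp(-rho ** 2 / 2.0) * hyp1f1(d / 2.0, 0.5, -rho ** 2 / 2.0)

def recover_params(mean_zT, mean_z0, phi0, d, rho_max=10.0):
    ratio = mean_zT / mean_z0 - np.cos(phi0)
    alpha = np.sqrt(ratio ** 2 / (np.sin(phi0) ** 2 + ratio ** 2))
    target = mean_z0 / np.sqrt(1.0 - alpha ** 2)
    rho = brentq(lambda r: F_d(r, d) - target, 0.0, rho_max)   # numerical inverse of F_d
    return alpha, rho

# illustrative usage with made-up mean projections
alpha_t, rho_t = recover_params(mean_zT=0.6, mean_z0=0.5, phi0=np.pi / 2, d=32)
```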
Discrete diffusion models [1]–[4] do not fully leverage the power of iterative refinement, which is the key to generative modeling of continuous data, for example, image synthesis [6], [7] and video generation [8], [9]. In discrete diffusion models, the progressive corruption during the forward process is modeled by stochastic jumps between states in Markov chains. Since denoising is achieved by jumping between states, discrete diffusion loses valuable signals during refinement, which limits the generative performance and controllability. In contrast, our RDLM takes a continuous approach using the geometry of the statistical manifold and the hypersphere, and therefore avoids the signal loss that occurs during state transitions in discrete diffusion models, fully leveraging iterative refinement.
By fully leveraging iterative refinement, RDLM generates higher-quality samples, outperforming discrete diffusion models across diverse domains. Furthermore, our continuous approach offers additional advantages: (1) Controllable generation: Using a continuous diffusion model enables the direct application of guidance, e.g., classifier guidance [53] and classifier-free guidance [10]. (2) Optimized design choices: RDLM benefits from advances in continuous diffusion, e.g., optimized noise schedules [14], [54], [55] and self-conditioning [13]. (3) Efficient sampling: Our framework supports efficient and scalable sampling strategies such as DPM-Solver [11], [12]. In contrast, discrete diffusion models are restricted to a simple ancestral sampling strategy.
Our method outperforms previous works using flow matching [21], [22] due to three key contributions: (1) generalization of discrete diffusion, (2) parameterization and training objectives, and (3) scalability to higher dimensions.
First, our method generalizes discrete diffusion models, the current state-of-the-art in language modeling, and introduces a novel mixture path process that enhances performance. In contrast, prior works using flow matching [21], [22] lack a direct connection to discrete diffusion models, resulting in a suboptimal design that leads to inferior performance. Notably, flow matching-based approaches are a special case of our method, as shown in Section 3.
Second, we introduce a novel parameterization (Eq. 7 ) and a cross-entropy-based training loss (Eq. 11 ), similar to the loss used in discrete diffusion models. This loss optimizes the likelihood during training and, when combined with our importance sampling loss (Eq. 12 ), achieves superior performance. In comparison, [21] uses a simple flow matching loss that does not guarantee maximum likelihood optimization.
Lastly, prior works are restricted to small vocabularies due to the difficulty of learning a generative process on high-dimensional manifolds (i.e., large vocabulary). This issue arises from the rapid convergence problems and insufficient model capacity, as discussed in Section 4. We address these challenges with dimension splitting, which significantly improves performance and enables effective scaling to large vocabularies.
We provide the pseudocode for our training and sampling schemes in Algorithm 2 and Algorithm 3, respectively. We additionally provide pseudocode for pre-computing the parameters \(\alpha_t\) and \(\rho_t\) of the Riemannian normal in Algorithm 4. Note that this pre-computation is performed only once before training, and its cost is negligible compared to the training time.
For computing the upper bound on the NLL, we use a Monte Carlo estimate of the negative ELBO derived in Eq. 9 . Note that we use the simulated \(\boldsymbol{X}_t\), instead of the approximation from the Riemannian normal distribution, for accurate computation.
For all experiments, we use NVIDIA RTX A5000 and H100 GPUs.
We compare against state-of-the-art diffusion models. Multinomial Diffusion [33], D3PM [1], SEDD [2], MDLM [4], MD4 [3] are discrete diffusion models. Plaid [37] and Bayesian Flow Network (BFN) [34] are continuous diffusion models. We do not use existing works for flow matching on the statistical manifold [21], [22] as they do not provide likelihood computation applicable for language modeling.
We also use the transformer AR model [42] and the following baselines: IAF/SCF [38], AR Argmax Flow [33], and Discrete Flow [39] are flow-based models, while ARDM [40] and MAC [41] are any-order autoregressive models.
Text8 [35] is a small character-level text modeling benchmark extracted from English Wikipedia. Following the previous works [1], [2], [4], we split the dataset into 90M/5M/5M with a fixed sequence length of 256. We use a vocabulary size of 28, comprising 26 lowercase letters, a white space token, and a mask token. We use a 12-layer diffusion transformer [44] following [2] with 92.4M trainable parameters. We train our model for 1M iterations with batch size 512 as done in previous works, using the same learning rate, optimizer AdamW [56], and exponential moving average (EMA) with decay rate 0.9999.
One Billion Word Benchmark is a dataset extracted from the WMT 2011 News Crawl dataset, comprised of single sentences from news articles. Following [4], we use the bert-base-uncased tokenizer and pad and truncate the sequences to length 128. We use a 12-layer diffusion transformer [44] with a hidden dimension of 768 and 12 attention heads, following [4], with 110M trainable parameters. We train our model for 1M iterations with batch size 512 as done in previous works, using the same constant learning rate, the AdamW optimizer [56], and an exponential moving average (EMA) with decay rate 0.9999.
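For reference, the training settings above can be summarized in a small configuration object. The sketch below is hypothetical: the field names do not correspond to the actual configuration schema of the released code.

```python
# Hypothetical summary of the language-modeling training configuration described above.
from dataclasses import dataclass

@dataclass
class TrainConfig:
    n_layers: int = 12            # diffusion transformer depth
    hidden_dim: int = 768         # LM1B setting; Text8 follows the backbone of [2]
    n_heads: int = 12
    seq_len: int = 128            # 128 for LM1B, 256 for Text8
    batch_size: int = 512
    n_iters: int = 1_000_000
    optimizer: str = "AdamW"
    ema_decay: float = 0.9999

config = TrainConfig()
```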
Here we provide a detailed comparison with MDLM [4] on the language modeling task using the One Billion Words dataset.
First, we did not search for optimal training hyperparameters (e.g., learning rate). Instead, we directly adopted the hyperparameters used by MDLM to ensure a fair comparison. However, because RDLM employs a continuous approach, it might benefit from different hyperparameter choices than discrete diffusion models. Due to resource limitations, we could not explore these optimized settings.
Furthermore, MDLM was trained using the low-discrepancy sampler, which is crucial for reducing the variance of the ELBO during training, leading to better perplexity results. We did not use the low-discrepancy sampler during training, yet RDLM still achieved competitive results on the LM1B dataset.
Additionally, the reported RDLM and MDLM results are based on training for up to 1 million iterations, at which point RDLM had not yet fully converged. Extrapolating RDLM’s validation perplexity through curve fitting suggests that RDLM would surpass MDLM after 10 million iterations. Due to resource limitations, we were unable to train beyond 1 million iterations.
We compare against autoregressive models and diffusion models that directly model raw pixel space. PixelRNN [57], Gated PixelCNN [58], PixelCNN++ [59], PixelSNAIL [47], Image Transformer [60], and Sparse Transformer [48] are autoregressive models. D3PM [1], \(\tau\)LDR [23], and MD4 [3] are discrete diffusion models.
We represent each image as a sequence of discrete tokens with a vocabulary size of 256. We use a 10-layer diffusion transformer [44] with 35M trainable parameters. We train for 100k iterations with batch size 128 and the AdamW optimizer [56], following [3].
The dataset contains 100k promoter DNA sequences, each paired with a transcription signal profile. Each sequence consists of 1024 base pairs centered at the annotated transcription start site [61], and each base pair takes one of four categories (A, T, G, C); generation is conditioned on the signal profile.
We compare our model against diffusion models and language models. Bit Diffusion [13] is a continuous diffusion model, D3PM [1] is a discrete diffusion model, and DDSM [19] and Dirichlet Flow Matching [20] are a diffusion model and a flow matching model on the probability simplex, respectively. Fisher-Flow [22] is a flow matching model on the statistical manifold.
Following previous work [20], [22], we use the same data split of 88,470/3,933/7,497 and an identical model architecture consisting of a 20-layer 1-D CNN with 13.3M trainable parameters. We train our model for 100k iterations with batch size 256 and the AdamW optimizer [56]. We evaluate the MSE of the generated samples conditioned on the transcription signal profiles from the test set, using 300 generation steps following previous work [22].
We provide uncurated text samples generated by our RDLM trained on the Text8 dataset.
o zero one british single payrock neurologically related condition is a member of the original playboys oriental pbkr cat ii a boob one card featured in the late f one zero dippie dons as it became pigus in the cir the monoseur engine shair which became th
h delivered from the new meeting the construction of modern shooting begins kinington resurrects the hark or corped a hopper nightlife subjecting to turn his attention at a joyable moment he is able to explain that he is in recovery with a new orleans baby
wilder unrefreshed bup of lightmarks was pertified only at the head of sinar joseph avaret in the cetleben key in one nine nine seven this report has been portrayed as a shrinking feathor of the civil directs against urban rumour as that he was ana eichy
s seven two chromosomes regainally regular and contain number of mignain gnaning pros zopods or cells whose podic configuration divided agong the faces of dna generally replaced by b as therus group are non mit and elanisten special cayits regularly are ca
nine four although portrayals of frel appearance the novel include leaked to bratally targeted audiences largely by steve roper dart mer upick and j pernan s durk born one nine four zero s but stillly not they are created the western master and mag both m
idment indicates two different types drop tales have different charges which train structures having rare and light weight variations have lower weight impedients such as chawings starges and groove gloves shorter holes can be jumpliten don badld a horse i
d deliberately rejected this a different post however saw al sh ibn misha rody was revealed to be the lord curses of jesus one nine one nine he handled his journey to its historical map of the egyptians and was still nodged as he committed to reproete he a
ovincial governors regelrant a cursami governor granted to a spanish cominic in one seven eight three mateo s teltacheutes lebmo alexius jeano and pan dosien dostre of a ruguen de cosst originating specifically the treaty of st louis the extinctions remain
We provide uncurated text samples generated by our RDLM trained on the LM1B dataset.
[CLS] social recklessly the obvious support 2013. [CLS] they were elected off by the english authorities, whose party subsequently named as principal when lawrence tang had to hold the property until they were turned to down their heads in the back - sky of which sank from matthews’s doorstep. [CLS] it has been pouring gladly with work and along the motorway, where certified sales will follow a new bone in the next several days to avoid commercial production problems, according to recommendations from both workplace and tropical mod. [CLS] he said he plans watchsty will b greens the old draft plunging sara, but have medics announced she would make you the taxpayer? [CLS] duchess [CLS]
[CLS] of lieberman. [CLS] analysts say since 5, 000 people have held a established council in 120 forums and levels, some have returned to the villages of the british capital, mideast and sprint. [CLS] his friends ring between ironing his body they forbid forrest. [CLS] seven babies missing and 27 french subcontinent and two development employees suffered injuries in a securing of greece, a spokeswoman said immediately, while tneye wedang. [CLS] both questions has already been considered. [CLS] jackie has an hopeful major interest for dirty potter, pilots bullock’s show, whether they have what hugh and mariusa other, no - shame roots [CLS]
[CLS] is the problem that worth most of a marriage to have a single car he doesn’t need. [CLS] mr obama will carry out more casualties however than president obama’s followers, and it mild to form the first cumulative current division ofers holding the guantanamo men that arches to injustice. [CLS] phillips said : " designer kaia kangaroo, 27, and herself rubbed jim reyes, the general patron of france light, have organized a building aimed at gunning film houses. [CLS] at riding, london graduate college in edinburgh and a temporary exhibit mall in fasside, marked since the work are a new sport, smaller schools racing has more [CLS]
[CLS]aceous that in spain had submitted one time the main website on mass wireless, in carpcsllo. [CLS] not two of the beer bk known in the companies could have thousand stretch men - - ginger, and showed vulnerable cases, leaving you in the same £200m standard. [CLS] yet apius is accepted quickly to associate in the months since - - bulletin energy americas - - they agreed that it was getting waste into ulysses air before creation known as the bulletinsburg, which can be bowed with bracelet growth by speed. [CLS] rely will get another less energetic first - turn victory. [CLS] more than 2, 000 people arrived, out [CLS]
[CLS] more steadily increasing transit facilities with murray’s tax breaks. [CLS] nonero moee enjoyed terrestrial wallino with the immoitunghrck in most years. [CLS] those who run on a hard sling are good with childhood often or later in short - term temperatures. [CLS] top - seeded henin is shark seventh and isatin out in stanford. [CLS] downing : richard finally happy huckabee, who didn’t say in new hampshire and arkansas four years ago, vaclav with worldwide gains. [CLS] even if the huckabee god had " the black annesies " chosen to go on his way to combat [CLS]
[CLS] high school, was potya’s poker high - george she - former congressional class - flicked was a prosecutor. [CLS] coln has won the services of the sub - area tustiw university, near fort dodge, pa. [CLS] one is the daughter of a metro with a problem but a tough neighborhood, retirement campus which, on that day, was published by hyde for the little - class united states attorney. [CLS] let’s sell a floral parachute in civil court on a lutheran case. [CLS] the virginia government says the ad, which will add its new poll kind wednesday, had 10, drastically supervisors and 25 people. [CLS] [CLS]
[CLS] a memorandum posted to the university : model google, which makes the copies to sell patients seem off a significant stake in every final - ep you programmes similar. [CLS] almost no day cbees will homemadei. [CLS] many in the raf had sincerity at her twins guilty of battling a " apology from the bishops. " [CLS] the courts have replayled their option for’welcome when the fed tends its view of the aec investors’chance. [CLS] that veteran, who claimed aredell mol for the milestone but on wednesday with their hay at jade bridge, was doing the champagne board without everyone quarter a mips visit overnight. [CLS]
[CLS] the bbc’s george washington is the first of 15, 000 people to put the calraircer range. [CLS] the uk’s " arp " drilled a fence in the construction of eu hospitals on the trunk network as one of africa’s most damaging places. [CLS] all looked after world over just um occasionallytau, which takes place victorious for schizophrenia consumed near the doc centre. [CLS] it is complicated by profits, not the greek pilot anchors, some of whom the very top cruise lay in the deep west of britain, which threatens developing dozens, and joined a conference in america to provide a full grand theft pad to [CLS]
While our approach has shown promising results on language modeling tasks and other modalities, a performance gap remains in some tasks compared to autoregressive models. We hypothesize that this is because autoregressive models utilize model capacity more efficiently, as they learn from a single, fixed ordering of tokens. One interesting direction for future work is to design a position-dependent noise scheduler that converges sequentially from left to right, mimicking the autoregressive generation process. In addition, although the current framework can generate sequences up to a predefined maximum length, it is not capable of producing sequences beyond this limit. This limitation could potentially be addressed by incorporating a semi-autoregressive approach that generates text in a block-wise fashion.
Our work may provide future directions for multimodal generative models that are capable of generating data from multiple domains, for example, text, images, and videos, simultaneously. Furthermore, our continuous approach may allow better controllability and improved quality with advanced sampling strategies. However, there is a risk that someone could misuse our framework to produce harmful content.