Asymptotics of Language Model Alignment

Joy Qiping Yang
University of Sydney
Sydney, Australia
Salman Salamatian
Massachusetts Institute of Technology
Cambridge, MA, USA
Ziteng Sun, Ananda Theertha Suresh, Ahmad Beirami
Google Research
New York, NY, USA
{zitengsun, theertha, beirami}


Let \(\boldsymbol{p}\) denote a reference generative language model. Let \(\boldsymbol{r}\) denote a reward model that returns a scalar to capture the degree to which a draw from \(\boldsymbol{p}\) is preferred. The goal of language model alignment is to alter \(\boldsymbol{p}\) to a new distribution \(\boldsymbol{\phi}\) that results in a higher expected reward while keeping \(\boldsymbol{\phi}\) close to \(\boldsymbol{p}.\) A popular alignment method is KL-constrained reinforcement learning (RL), which chooses a distribution \(\boldsymbol{\phi}_\Delta\) that maximizes \(E_{\boldsymbol{\phi}_{\Delta}} \boldsymbol{r}(\boldsymbol{y})\) subject to a relative entropy constraint \(D_{\operatorname{KL}}(\boldsymbol{\phi}_\Delta \| \boldsymbol{p}) \leq \Delta.\) Another simple alignment method is best-of-\(N\), where \(N\) samples are drawn from \(\boldsymbol{p}\) and the one with the highest reward is selected. In this paper, we offer a closed-form characterization of the optimal KL-constrained RL solution. We then demonstrate that any alignment method that achieves a comparable trade-off between KL divergence and expected reward must approximate the optimal KL-constrained RL solution in terms of relative entropy. To further analyze the properties of alignment methods, we introduce two simplifying assumptions: we let the language model be memoryless, and the reward model be linear. Although these assumptions may not reflect complex real-world scenarios, they enable a precise characterization of the asymptotic (in the sequence length) behavior of both the best-of-\(N\) and the KL-constrained RL methods, in terms of information-theoretic quantities. We prove that the reward of the optimal KL-constrained RL solution satisfies a large deviation principle, and we fully characterize its rate function. We also show that the rate of growth of the scaled cumulants of the reward is characterized by a proper Rényi cross entropy. 
Finally, we show that best-of-\(N\) is asymptotically equivalent to the KL-constrained RL solution by proving that their expected rewards are asymptotically equal, and concluding that the two distributions must be close in KL divergence.

1 Introduction↩︎

Generative models, such as large language models [1][3] and diffusion models [4][6], are increasingly popular general-purpose problem solvers. These models are generally pretrained on large corpora of unlabeled data and then further trained on domain-specific data through supervised finetuning (SFT). We refer to the resulting model as the reference model. The reference model, while capable of generating realistic outputs, may still generate undesirable outcomes, e.g., unsafe language or images. Hence, it is desirable to further align the output of the reference model with a reward that captures human preferences [7][10].

The literature has seen a proliferation of algorithmic techniques for alignment [7][14]. Two of the popular methods are: (a) KL-constrained reinforcement learning (RL), \(\boldsymbol{\phi}_\Delta\), [7], [9], which maximizes expected reward subject to a KL divergence constraint, \(\Delta,\) between the aligned model and the reference model; and (b) Best-of-\(N\), \(\boldsymbol{\pi}_N\), [3], [15], where \(N\) i.i.d. samples are drawn from the reference model and one with the highest reward is returned.

Despite the popularity of both of these alignment methods, no relationship between them is known. In particular, best-of-\(N\) has been shown to be surprisingly competitive with, or even to outperform, more involved KL-constrained RL algorithms that aim at achieving the best reward-KL tradeoffs [13], [16], [17]. In this work, we provide theoretical reasoning for this empirical observation by showing that the two methods are in fact asymptotically equivalent in some specific settings. We hope our work initiates further study into the relationship between different alignment methods. The summary of our contributions may be found below:

  • We characterize the unique optimal solution of the KL-constrained reinforcement learning problem, denoted by \(\boldsymbol{\phi}_\Delta,\) for a given KL constraint of \(\Delta\). We show that the solution is a mismatched tilted distribution that appeared and was characterized by [18] in the context of mismatched guesswork.

  • We show that if an alignment method \(\boldsymbol{\omega}\) satisfies a KL constraint \(\Delta\), and also achieves almost the same expected reward as \(\boldsymbol{\phi}_\Delta\) (the optimal KL-constrained RL solution), then \(D_{\operatorname{KL}}(\boldsymbol{\omega}\| \boldsymbol{\phi}_\Delta)\) must be vanishingly small, where \(D_{\operatorname{KL}}(\cdot \| \cdot)\) denotes the KL divergence and is formally defined in (1).

  • We show that the cumulants of the reward are related to the Rényi cross entropy of the alignment distribution to the source distribution. Under assumptions on the data generating process, we also show that the optimal KL-constrained RL solution, \(\boldsymbol{\phi}_\Delta,\) satisfies a large deviation principle, where we characterize its rate function using information-theoretic quantities.

  • Under assumptions on the data-generating process, we show that the outcomes of the best-of-\(N\) alignment method, \(\boldsymbol{\pi}_N,\) are similar to those of the optimal KL-constrained RL, \(\boldsymbol{\phi}_\Delta,\) for \(N = \exp(\Delta)\). In particular, their expected rewards are close, which implies that the best-of-\(N\) method is close to the KL-constrained RL method in KL divergence, i.e., \(D_{\operatorname{KL}}(\boldsymbol{\pi}_N \| \boldsymbol{\phi}_\Delta)\) is vanishingly small.

2 Problem Setup↩︎

Let \(\boldsymbol{x}\in \boldsymbol{\mathcal{X}}\) denote an input prompt, e.g., Who is Barack Obama? Let \(\boldsymbol{\mathcal{Y}}\) be the set of possible outputs of a generative language model. A language model \(\boldsymbol{p}_{\boldsymbol{y}|\boldsymbol{x}}\) assigns a probability distribution over \(\boldsymbol{\mathcal{Y}}\) to each context (input prompt) \(\boldsymbol{x}\in \boldsymbol{\mathcal{X}},\) i.e., \(\boldsymbol{p}_{\boldsymbol{y}|\boldsymbol{x}} : \boldsymbol{\mathcal{X}} \times \boldsymbol{\mathcal{Y}} \to \mathbb{R}^{\geq0}\) such that \(\forall \boldsymbol{x}\) \[\sum_{\boldsymbol{y} \in \boldsymbol{\mathcal{Y}}} \boldsymbol{p}_{\boldsymbol{y}|\boldsymbol{x}}(\boldsymbol{y}| \boldsymbol{x}) = 1.\] Throughout this paper, we use \(\boldsymbol{p}\) to denote the reference language model which we wish to further align.

A reward model \(\boldsymbol{r}\) assigns scalar scores for every prompt and output i.e., \(\boldsymbol{r}: \boldsymbol{\mathcal{X}} \times \boldsymbol{\mathcal{Y}} \to \mathbb{R}\). Given such a reward model, one can also define an alignment distribution as follows.

Definition 1 (alignment distribution). For a given reward function \(\boldsymbol{r}\), we denote \(\boldsymbol{q}_{\boldsymbol{y}|\boldsymbol{x}}\) as the corresponding alignment distribution defined as \[\boldsymbol{q}_{\boldsymbol{y}|\boldsymbol{x}} (\boldsymbol{y}| \boldsymbol{x}) = \exp \left( \boldsymbol{r}(\boldsymbol{x}, \boldsymbol{y})\right)/ R_{\boldsymbol{x}},\] where \(R_{\boldsymbol{x}}\) is the partition function, i.e., \(R_{\boldsymbol{x}} := \sum_{\boldsymbol{y}\in \boldsymbol{\mathcal{Y}}}\exp \left( \boldsymbol{r}(\boldsymbol{x}, \boldsymbol{y})\right)\). Without loss of generality, and for convenience in the rest of this paper, we assume that \(R_{\boldsymbol{x}} = 1,\) i.e., the reward is the log-likelihood of a generative distribution.

For two distributions \(\boldsymbol{p}\) and \(\boldsymbol{q}\), let \(H(\boldsymbol{p}\| \boldsymbol{q})\) denote the cross-entropy of \(\boldsymbol{p}\) to \(\boldsymbol{q}\); and \(D_{\operatorname{KL}}(\boldsymbol{p}\| \boldsymbol{q})\) denote the Kullback–Leibler divergence. For discrete distributions \(\boldsymbol{p}\) and \(\boldsymbol{q}\) over the same alphabet \(\boldsymbol{\mathcal{Y}}\), we have that \[H(\boldsymbol{p}\| \boldsymbol{q}) = \sum_{\boldsymbol{y}\in \boldsymbol{\mathcal{Y}}} p(\boldsymbol{y}) \log \frac{1}{q(\boldsymbol{y})}; \quad\quad D_{\operatorname{KL}}(\boldsymbol{p}\| \boldsymbol{q}) = \sum_{{\boldsymbol{y}\in \boldsymbol{\mathcal{Y}}}} p(\boldsymbol{y}) \log \frac{p(\boldsymbol{y})}{q(\boldsymbol{y})}. \label{eq:define-KL-H}\tag{1}\]
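For concreteness, both quantities in (1) are straightforward to compute for discrete distributions. A minimal Python sketch (the helper names are ours, not from the paper):

```python
import numpy as np

def cross_entropy(p, q):
    """H(p || q) = sum_y p(y) log(1/q(y)), in nats."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0  # terms with p(y) = 0 contribute nothing
    return float(-np.sum(p[mask] * np.log(q[mask])))

def kl_divergence(p, q):
    """D_KL(p || q) = H(p || q) - H(p), in nats."""
    p = np.asarray(p, float)
    mask = p > 0
    entropy = float(-np.sum(p[mask] * np.log(p[mask])))
    return cross_entropy(p, q) - entropy

p = [0.2, 0.3, 0.5]
q = [2/3, 1/9, 2/9]
print(cross_entropy(p, q), kl_divergence(p, q))
```

The decomposition \(H(p\|q) = H(p) + D_{\operatorname{KL}}(p\|q)\) is used implicitly throughout the paper.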

With these definitions, the KL-constrained RL problem is defined as follows:

Definition 2 (KL-constrained RL). We say \(\boldsymbol{\phi}_\Delta\) is an optimal aligned model of \(\boldsymbol{p}\) with KL divergence \(\Delta,\) if \[\boldsymbol{\phi}_\Delta (\cdot | \boldsymbol{x}) \in \arg\min_{\boldsymbol{\phi}\in \mathcal{D}_{\Delta}(\boldsymbol{p})} H(\boldsymbol{\phi}(\cdot | \boldsymbol{x}) \| \boldsymbol{q}(\cdot | \boldsymbol{x})),\] where \(\mathcal{D}_\Delta(\boldsymbol{p}) := \{ \boldsymbol{\phi}: D_{\operatorname{KL}}(\boldsymbol{\phi}(\cdot | \boldsymbol{x}) \| \boldsymbol{p}(\cdot | \boldsymbol{x})) \leq \Delta \}\).

Under the KL-constrained RL method, outputs are then generated using \(\boldsymbol{\phi}_\Delta\). We remark that \[\begin{align} H(\boldsymbol{\phi}(\cdot | \boldsymbol{x}) \| \boldsymbol{q}(\cdot | \boldsymbol{x})) & = - \mathbb{E}_{\boldsymbol{y}\sim \boldsymbol{\phi}(\cdot | \boldsymbol{x})} \log \boldsymbol{q}_{\boldsymbol{y}|\boldsymbol{x}}(\boldsymbol{y}|\boldsymbol{x}) = - \mathbb{E}_{\boldsymbol{y}\sim \boldsymbol{\phi} (\cdot | \boldsymbol{x})} \boldsymbol{r}(\boldsymbol{x}, \boldsymbol{y}), \end{align}\] and hence minimizing the cross entropy \(H(\boldsymbol{\phi}(\cdot | \boldsymbol{x}) \| \boldsymbol{q}(\cdot | \boldsymbol{x}))\) is equivalent to maximizing expected reward \(\mathbb{E}_{\boldsymbol{y}\sim \boldsymbol{\phi} (\cdot | \boldsymbol{x})} \boldsymbol{r}(\boldsymbol{x}, \boldsymbol{y})\), which is the common terminology in the language model alignment literature [13], [16], [17].

Next, we describe the best-of-\(N\) alignment method.

Definition 3 (best-of-\(N\)). Let \(\boldsymbol{y}_1, \ldots, \boldsymbol{y}_N\) be \(N\) i.i.d. draws from \(\boldsymbol{p}_{\boldsymbol{y}| \boldsymbol{x}}(\cdot | \boldsymbol{x})\). The best-of-\(N\) alignment policy returns \[\boldsymbol{y}= \boldsymbol{y}_{k^*} \quad\quad \text{where}\quad \quad k^* := \arg\max_{k \in [N]} \boldsymbol{q}_{\boldsymbol{y}| \boldsymbol{x}}(\boldsymbol{y}_k |\boldsymbol{x}).\] Further, let \(\boldsymbol{\pi}_N\) denote the probability distribution of the best-of-\(N\) strategy. It is clear that \(\boldsymbol{\pi}_1 = \boldsymbol{p}\).

The best-of-\(N\) alignment method generates \(N\) candidate outputs directly from the reference model and selects the one with the highest reward. Further, its PMF is characterized in [19].
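As an illustration, the procedure in Definition 3 can be sketched for a toy single-token model; the reference \(p\) and the reward \(r(y) = \log q(y)\) below are arbitrary choices of ours:

```python
import numpy as np

def best_of_n(sample_fn, reward_fn, n, rng):
    """Draw n i.i.d. candidates from the reference model and return
    the one with the highest reward (Definition 3)."""
    candidates = [sample_fn(rng) for _ in range(n)]
    rewards = [reward_fn(y) for y in candidates]
    return candidates[int(np.argmax(rewards))]  # rewards are distinct here, so no ties

# Toy instantiation: ternary reference p and reward r(y) = log q(y).
p = np.array([0.2, 0.3, 0.5])
q = np.array([2/3, 1/9, 2/9])

rng = np.random.default_rng(0)
sample = lambda rng: int(rng.choice(3, p=p))
reward = lambda y: float(np.log(q[y]))

draws = [best_of_n(sample, reward, n=4, rng=rng) for _ in range(10_000)]
freq = np.bincount(draws, minlength=3) / len(draws)
print(freq)  # mass shifts toward symbol 0, the highest-reward outcome
```

With distinct rewards, symbol 0 is selected whenever it appears among the \(N\) draws, so \(\pi_4(0) = 1 - (1 - 0.2)^4 \approx 0.59\), well above \(p(0) = 0.2\).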

In empirical studies [12], [13], [16], [17], in order to compare a family of decoding policies \(\{\boldsymbol{\psi}_\beta\}\) parameterized by \(\beta\), it is common to report tradeoff curves of expected reward \(E_{\boldsymbol{x}} E_{\boldsymbol{\psi}_\beta} \{\boldsymbol{r}(\boldsymbol{x},\boldsymbol{y})\}\) vs KL divergence \(E_{\boldsymbol{x}} D_{\operatorname{KL}}(\boldsymbol{\psi}_\beta(\cdot | \boldsymbol{x}) \| \boldsymbol{p}(\cdot | \boldsymbol{x})),\) which we refer to as reward-KL tradeoff curves (e.g., [13]). A decoding policy is desirable when its reward-KL tradeoff curve dominates that of any other decoding policy. The optimal aligned model (Definition 2) must dominate any other alignment technique by definition. However, in practice, best-of-\(N\) achieves remarkably good reward-KL tradeoffs and often dominates those of sophisticated methods that solve the RL problem in practical scenarios [13], [16]. One of our major objectives in this paper is to develop a theoretical understanding of this phenomenon.

3 Optimal KL-Constrained RL Solution↩︎

Our first goal is to understand the precise theoretical behavior of the reward under the optimal KL-constrained RL. Our second goal is to study the relationship between the optimal KL-constrained RL and best-of-\(N\) methods. We want to answer the following questions: does the best-of-\(N\) method produce good or even optimal solutions to the KL-constrained RL problem? Is there any theoretical support for the good empirical performance of best-of-\(N\)?

First, we characterize the solution to the KL-constrained RL problem.

Lemma 1 (optimal aligned model). The solution of the KL-constrained RL problem in Definition 2 is unique and is given by \[\boldsymbol{\phi}_\Delta(\cdot | \boldsymbol{x}) = T(\boldsymbol{q}(\cdot | \boldsymbol{x}), \boldsymbol{p}(\cdot | \boldsymbol{x}), \alpha(\Delta)), \label{eq:mismatched-tilt}\qquad{(1)}\] where \(T(\boldsymbol{q}(\cdot | \boldsymbol{x}), \boldsymbol{p}(\cdot | \boldsymbol{x}), \alpha)\) is the mismatched tilt [18]: \[T(\boldsymbol{q}(\cdot | \boldsymbol{x}), \boldsymbol{p}(\cdot | \boldsymbol{x}), \alpha)(\boldsymbol{y}) := \frac{\boldsymbol{p}(\boldsymbol{y}| \boldsymbol{x})\boldsymbol{q}^{\alpha}(\boldsymbol{y}| \boldsymbol{x}) }{Z_{\boldsymbol{x}, \alpha}},\] where \[Z_{\boldsymbol{x}, \alpha} :=\sum_{\boldsymbol{z}\in \boldsymbol{\mathcal{Y}}} \boldsymbol{p}(\boldsymbol{z}| \boldsymbol{x})\boldsymbol{q}^{\alpha}(\boldsymbol{z}| \boldsymbol{x}),\] and \[\alpha(\Delta) := \arg_{\alpha \in \mathbb{R}^{\geq 0}} \left\{D_{\operatorname{KL}}(T(\boldsymbol{q}(\cdot | \boldsymbol{x}), \boldsymbol{p}(\cdot | \boldsymbol{x}), \alpha) \| \boldsymbol{p}) = \Delta \right\}. \label{eq:alpha-Delta}\qquad{(2)}\]

Proof sketch. This follows from applying the Karush–Kuhn–Tucker (KKT) conditions on the convex problem in Definition 2. ◻

A variant of this result has already appeared in [20], [21] and we provide a proof in the appendix for completeness. The result above suggests that the aligned models (parameterized by \(\Delta\)) form an exponential family of distributions, and we define this family of solutions to the KL-constrained RL for different values of \(\Delta\) as the aligned family.
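Lemma 1 is easy to instantiate numerically: the tilted distribution is a reweighting of \(p\), and \(\alpha(\Delta)\) can be found by bisection, since \(D_{\operatorname{KL}}(T(q,p,\alpha) \| p)\) is increasing in \(\alpha \geq 0\) along the family. A sketch with arbitrary categorical \(p\) and \(q\) (helper names ours):

```python
import numpy as np

def tilt(q, p, alpha):
    """Mismatched tilt T(q, p, alpha): proportional to p(y) q(y)^alpha."""
    w = p * q**alpha
    return w / w.sum()

def kl(a, b):
    return float(np.sum(a * np.log(a / b)))

def alpha_of_delta(q, p, delta, hi=100.0, iters=200):
    """Solve D_KL(T(q, p, alpha) || p) = delta by bisection
    (the KL divergence is increasing in alpha >= 0)."""
    lo = 0.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if kl(tilt(q, p, mid), p) < delta:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

p = np.array([0.2, 0.3, 0.5])
q = np.array([2/3, 1/9, 2/9])
alpha = alpha_of_delta(q, p, delta=0.11)
phi = tilt(q, p, alpha)          # the optimal aligned model phi_Delta
print(alpha, phi, kl(phi, p))
```

As \(\Delta\) sweeps from \(0\) to its maximum, \(\phi\) traces the aligned family from \(p\) toward the maximizer of \(q\).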

Definition 4 (aligned family). Let the aligned family of \(\boldsymbol{p}\) towards \(\boldsymbol{q}\) be denoted by \(\mathcal{T}(\boldsymbol{p}, \boldsymbol{q})\), and be given by \[\mathcal{T}(\boldsymbol{p}, \boldsymbol{q}) = \{\boldsymbol{\phi}_\Delta\}_{\Delta \geq 0}.\]

Equipped with the optimal aligned model for a given KL divergence constraint, we ask how close a model would be to optimal, if it offered almost the same expected reward as the optimal model.

Lemma 2 (closeness of models given closeness of expected rewards). Let \(\boldsymbol{\psi}_\Delta\) be such that \[D_{\operatorname{KL}}(\boldsymbol{\psi}_\Delta \| \boldsymbol{p}) \leq \Delta; \quad\quad H(\boldsymbol{\psi}_\Delta \|\boldsymbol{q}) \leq H(\boldsymbol{\phi}_\Delta \|\boldsymbol{q}) + \epsilon, \label{eq:lemma-closeness-assumptions}\qquad{(3)}\] where \(\boldsymbol{\phi}_\Delta\) is the optimal aligned model at KL divergence \(\Delta\). Then, \[D_{\operatorname{KL}}(\boldsymbol{\psi}_\Delta \| \boldsymbol{\phi}_\Delta) \leq \alpha(\Delta) \epsilon,\] where \(\alpha(\Delta)\) is defined in Lemma 1.

Proof. Note that \[\begin{align} D_{\operatorname{KL}}(\boldsymbol{\psi}_\Delta(\cdot | \boldsymbol{x}) \| \boldsymbol{\phi}_\Delta(\cdot| \boldsymbol{x})) & = D_{\operatorname{KL}}(\boldsymbol{\psi}_\Delta(\cdot | \boldsymbol{x}) \| \boldsymbol{\phi}_\Delta(\cdot| \boldsymbol{x})) - D_{\operatorname{KL}}(\boldsymbol{\phi}_\Delta(\cdot| \boldsymbol{x}) \| \boldsymbol{\phi}_\Delta(\cdot| \boldsymbol{x})) \nonumber\\ &= \mathbb{E}_{\boldsymbol{y}\sim \boldsymbol{\psi}_\Delta(\cdot | \boldsymbol{x})} \log \frac{\boldsymbol{\psi}_\Delta(\boldsymbol{y}| \boldsymbol{x})}{\boldsymbol{p}(\boldsymbol{y}| \boldsymbol{x})} + \alpha(\Delta) \mathbb{E}_{\boldsymbol{y}\sim \boldsymbol{\psi}_\Delta(\cdot| \boldsymbol{x}) } \log \frac{1}{\boldsymbol{q}(\boldsymbol{y}| \boldsymbol{x})} + \log Z_{\boldsymbol{x}, \alpha(\Delta)}\nonumber\\ & - \mathbb{E}_{\boldsymbol{y}\sim \boldsymbol{\phi}_\Delta(\cdot | \boldsymbol{x})} \log \frac{\boldsymbol{\phi}_\Delta(\boldsymbol{y}| \boldsymbol{x})}{\boldsymbol{p}(\boldsymbol{y}| \boldsymbol{x})} - \alpha(\Delta) \mathbb{E}_{\boldsymbol{y}\sim \boldsymbol{\phi}_\Delta(\cdot| \boldsymbol{x}) } \log \frac{1}{\boldsymbol{q}(\boldsymbol{y}| \boldsymbol{x})} - \log Z_{\boldsymbol{x}, \alpha(\Delta)}\tag{2} \\ & = D_{\operatorname{KL}}(\boldsymbol{\psi}_\Delta(\cdot | \boldsymbol{x}) \| \boldsymbol{p}(\cdot | \boldsymbol{x}) ) - D_{\operatorname{KL}}(\boldsymbol{\phi}_\Delta(\cdot| \boldsymbol{x})\| \boldsymbol{p}(\cdot | \boldsymbol{x}) )\nonumber\\ & + \alpha(\Delta) \left( H(\boldsymbol{\psi}_\Delta \|\boldsymbol{q}) - H(\boldsymbol{\phi}_\Delta \|\boldsymbol{q}) \right) \nonumber\\ & \leq \alpha(\Delta) \epsilon, \tag{3} \end{align}\] where 2 follows from expressing \(\boldsymbol{\phi}_\Delta\) via the mismatched tilt in Lemma 1, and 3 follows from the assumptions of the lemma, which completes the proof. ◻

Lemma 2 states that any model that satisfies KL constraint \(\Delta,\) and whose expected reward is within \(\epsilon\) of that of the optimal aligned model must be close to the optimal aligned model in KL divergence. We will use this result to establish such closeness between best-of-\(N\) and the optimal KL-constrained model in the sequel.

Before we proceed, we would like to also observe a connection between the cumulants of the reward and Rényi cross entropy.

Definition 5 (Rényi cross entropy). For all \(t> 0\) (with \(t = 1\) understood as a limit), let the Rényi cross entropy of order \(t\) be defined as [22] \[H_t (\boldsymbol{p}\| \boldsymbol{q}) := \frac{1}{1-t} \log \sum_{\boldsymbol{y}\in \boldsymbol{\mathcal{Y}}} \boldsymbol{p}(\boldsymbol{y}) \boldsymbol{q}(\boldsymbol{y})^{t -1}.\]

Rényi cross entropy of order \(1\) recovers the cross entropy, i.e., \(H_1 (\boldsymbol{p}\| \boldsymbol{q}) = H (\boldsymbol{p}\| \boldsymbol{q}).\)
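Both quantities are simple to evaluate numerically; the sketch below (helper names ours) also checks that the order-\(t\) cross entropy approaches the ordinary cross entropy as \(t \to 1\):

```python
import numpy as np

def renyi_cross_entropy(p, q, t):
    """H_t(p || q) = (1/(1-t)) log sum_y p(y) q(y)^(t-1), for t > 0, t != 1."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.log(np.sum(p * q**(t - 1))) / (1 - t))

def cross_entropy(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(-np.sum(p * np.log(q)))

p = [0.2, 0.3, 0.5]
q = [2/3, 1/9, 2/9]
# Order t -> 1 recovers the ordinary cross entropy H(p || q).
print(renyi_cross_entropy(p, q, 1 + 1e-6), cross_entropy(p, q))
```

At \(t = 2\), the definition reduces to \(-\log \sum_y p(y) q(y)\), which is a convenient special case for spot checks.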

Lemma 3 (cumulant generating function of reward). Let \(E_{\Delta, \rho}(\boldsymbol{p}, \boldsymbol{q})\) be the \(\rho\)-th cumulant of the reward under the optimal \(\Delta\)-aligned model, defined for all \(\rho > 0\) as \[E_{\Delta, \rho}(\boldsymbol{p}, \boldsymbol{q}) := \frac{1}{\rho} \log \mathbb{E}_{\boldsymbol{y}\sim \boldsymbol{\phi}_{\Delta} (\cdot|\boldsymbol{x})} \boldsymbol{q}_{\boldsymbol{y}|\boldsymbol{x}} (\boldsymbol{y}|\boldsymbol{x})^\rho = \frac{1}{\rho} \log \mathbb{E}_{\boldsymbol{y}\sim \boldsymbol{\phi}_{\Delta} (\cdot|\boldsymbol{x})} \exp ( \rho \, \boldsymbol{r}(\boldsymbol{x}, \boldsymbol{y})),\] where the latter equality is due to the fact that we assumed \(R_{\boldsymbol{x}} = 1.\) Then, \[E_{\Delta, \rho}(\boldsymbol{p}, \boldsymbol{q}) = - H_{1+\rho}(\boldsymbol{\phi}_\Delta \| \boldsymbol{q}).\]

Proof. By definition, \[H_{1+\rho}(\boldsymbol{\phi}_\Delta \| \boldsymbol{q}) = - \frac{1}{\rho} \log \mathbb{E}_{\boldsymbol{y}\sim \boldsymbol{\phi}_{\Delta} (\cdot|\boldsymbol{x})} \boldsymbol{q}_{\boldsymbol{y}|\boldsymbol{x}} (\boldsymbol{y}|\boldsymbol{x})^\rho,\] which completes the proof. ◻

In particular, taking \(\rho \to 0\), \[E_{\Delta, 0}(\boldsymbol{p}, \boldsymbol{q}) = \mathbb{E}_{\boldsymbol{y}\sim \boldsymbol{\phi}_{\Delta} (\cdot|\boldsymbol{x})} \log \boldsymbol{q}_{\boldsymbol{y}|\boldsymbol{x}} (\boldsymbol{y}| \boldsymbol{x}) = \mathbb{E}_{\boldsymbol{y}\sim \boldsymbol{\phi}_{\Delta} (\cdot|\boldsymbol{x})} \boldsymbol{r}(\boldsymbol{x}, \boldsymbol{y}) = - H(\boldsymbol{\phi}_\Delta \| \boldsymbol{q}).\]

4 Main Results↩︎

Equipped with the preliminaries, in this section we derive our main results. To this end, we make two simplifying assumptions.

Assumption 1 (memoryless reference model). We assume that \(\boldsymbol{p}_{\boldsymbol{y}|\boldsymbol{x}}\) is a memoryless source such that the outcome is a sequence of length \(m\) from the \(m\)-product of the categorical distribution defined by stochastic vector \(p \in \mathcal{S}^K_\zeta\), where \(\mathcal{S}^K_\zeta\) denotes the set of probability vectors over an alphabet of size \(K\) with distinct entries bounded away from zero: \[\mathcal{S}^K_\zeta:= \left\{p : \sum_{k \in [K]} p_k = 1, \quad p_k > \zeta, \quad \forall k \neq k'~~ p_k \neq p_{k'}\right\}.\]

Assumption 2 (linear reward). We assume that the reward is the log-likelihood of an alignment distribution \(\boldsymbol{q}_{\boldsymbol{y}|\boldsymbol{x}}\) that is memoryless, i.e., the \(m\)-product of a categorical distribution \(q \in \mathcal{S}^K_\zeta\), so that \(\boldsymbol{q}_{\boldsymbol{y}|\boldsymbol{x}}\) satisfies Assumption 1. This immediately implies that \(\boldsymbol{r}\) is bounded from above and is linear in the outcome.

We adopt Assumptions 1-2 for brevity in this paper, and comment on extensions to more general classes of language models and reward functions. Note also that the regularity condition on \(\boldsymbol{q}_{\boldsymbol{y}|\boldsymbol{x}}\) is equivalently a regularity condition on the reward function (i.e., that it is bounded, additive, and assigns distinct rewards to all possible types).

Definition 6 (type). Let \(y^m\) be a sequence of length \(m\) supported on alphabet of size \(K.\) Then the type of sequence \(y^m\) is denoted by \(t(y^m)\) defined as \[t(y^m) = \left(\frac{1}{m}\sum_{i \in [m]} \mathbf{1}\{y_i = 1\}, \ldots, \frac{1}{m}\sum_{i \in [m]} \mathbf{1}\{y_i = K\}\right),\] where \(\mathbf{1}\{\cdot\}\) is the indicator function.

The type of a sequence denotes its empirical distribution and is a sufficient statistic of the sequence under the memoryless assumption for the reference model.
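A minimal sketch of Definition 6 in Python (function name ours):

```python
from collections import Counter

def type_of(seq, K):
    """Empirical distribution (type) of a sequence over alphabet [K] (Definition 6)."""
    m = len(seq)
    counts = Counter(seq)
    return tuple(counts.get(k, 0) / m for k in range(K))

# Two sequences with the same type have the same probability under any
# memoryless source, and the same reward under a linear (additive) reward.
print(type_of([0, 2, 1, 2, 2, 0], K=3))  # (1/3, 1/6, 1/2)
```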

In this setup, our goal is to show that both best-of-\(N\) and optimal KL-constrained RL method return sequences whose type (empirical distribution) is close to the intersection of the reward contour and the KL divergence contour. To this end, we first start with the KL-constrained RL solution.

4.1 KL-Constrained RL Problem↩︎

Under the memoryless setup of Assumption 1-2, the aligned family and optimal KL-constrained RL solution can be further simplified as follows.

Lemma 4 (optimal aligned model is in the same family as the reference model and reward model). If Assumptions 1-2 hold, then for each \(\boldsymbol{x}\), \(\boldsymbol{\phi}_\Delta( y^m | \boldsymbol{x})\) is also a memoryless distribution that satisfies Assumption 1, i.e., there exists a distribution \(\phi_\delta\) such that \(\Delta = m\delta\) and \[\boldsymbol{\phi}_\Delta( y^m | \boldsymbol{x}) = \prod^m_{i=1} \phi_\delta(y_i).\] Furthermore, \(\phi_\delta\) is given by \[\phi_\delta = T(q, p, \alpha(\delta)),\] where \(T(q, p, \alpha)\) is the mismatched tilt [18]: \[T(q, p, \alpha)(k) = \frac{p_k q_k^{\alpha}}{\sum_{j \in [K]} p_j q_j^{\alpha}}, \quad k \in [K],\] and where \[\alpha(\delta) := \arg_{\alpha \in \mathbb{R}^{\geq 0}} \left\{D_{\operatorname{KL}}(T(q, p, \alpha) \| p) = \delta \right\}.\]

Proof. The proof follows from Lemma 1 and by identifying that: \[\begin{align} H(\boldsymbol{\phi}_\Delta \| \boldsymbol{q}) & = H(\phi^m_\delta \| q^m) = m H (\phi_\delta \| q), \\ D_{\operatorname{KL}}(\boldsymbol{\phi}_\Delta \| \boldsymbol{p}) & = D_{\operatorname{KL}}(\phi^m_\delta \| p^m) = m D_{\operatorname{KL}}(\phi_\delta \| p) . \end{align}\] ◻

Figure 1: We consider a ternary memoryless source, where the green point is \(p = (\frac{1}{5}, \frac{3}{10}, \frac{1}{2})\) and the red point is \(q = (\frac{2}{3}, \frac{1}{9}, \frac{2}{9})\). The green curve characterizes all \(v\) s.t. \(D_{\operatorname{KL}}(v \| p) = \Delta = 0.11\), which is the KL contour. The black line characterizes all \(w\) s.t. \(H(w\|q) = H(T(q, p, \alpha(0.11)) \| q),\) which is a linear family. The blue curve is \(\mathcal{T}(p,q),\) i.e., the aligned family. The unique intersection of the green KL ball and the black constant-reward front is a point on the blue curve, which is \(\phi_\Delta.\) The brown point is the expected value of the type of the best-of-\(N\) output for \(N = \exp(m \delta) \approx 3,\) where \(m = 10,\) which is remarkably close to \(\phi_\Delta.\)

Example 1 (ternary memoryless source). Let us introduce a recurring example of such a source on a ternary alphabet \(\mathcal{Y} = \{1, 2, 3\},\) with \(K =3.\) Let \(p = (\frac{1}{5}, \frac{3}{10}, \frac{1}{2})\) and \(q = (\frac{2}{3}, \frac{1}{9}, \frac{2}{9})\). In Figure 1, the two distributions are depicted on the ternary probability simplex. A KL divergence contour \(\{\phi: D_{\operatorname{KL}}(\phi \| p) = \delta\}\) is depicted via the green curve. A reward contour \(\{\phi: H(\phi \|q) = c\}\) is depicted via the black line. The aligned family \(\{\phi_\delta\}_{\delta \geq 0}\) is depicted via the blue curve.

Equipped with this result, we now show that the sequence of optimal aligned models, as a function of the sequence length \(m,\) satisfies a large deviation principle (LDP).

Definition 7 (large deviation principle (LDP)). A sequence of random variables \(\{Y_n:n\in\mathbb{N}\}\) taking values in \(\mathbb{R}\) satisfies the Large Deviation Principle with rate function \(J:\mathbb{R}\to[0,\infty]\) if \(J\) is lower semi-continuous and has compact level-sets, and for all Borel sets \(B\) \[\begin{align} -\inf_{t\in B^\circ} J(t) & \leq \liminf_{n\to\infty} \frac{1}{n} \log \mathbb{P}\{Y_n\in B\} \leq \limsup_{n\to\infty} \frac{1}{n} \log \mathbb{P}\{Y_n\in B\} \leq -\inf_{t\in \bar{B}} J(t), \end{align}\] where \(B^\circ\) is the interior of \(B\) and \(\bar{B}\) is its closure.

The rate function of the Large Deviation Principle (LDP) thoroughly characterizes the probability of rare events in which \(Y_n\) significantly deviates from its mean. Specifically, an LDP offers a complete description of the cumulant generating function of \(Y_n\) by employing Varadhan’s Lemma [23]. We are now ready to state our next result which is an LDP for the reward function for sequences generated from \(\boldsymbol{\phi}_\Delta\), and under Assumptions 1-2.

Theorem 1 (LDP for aligned reward). Let \(Y^m \sim \phi_\delta^m.\) The sequence \(\{ -\frac{1}{m} \log q^m(Y^m)\}_{m \in \mathbb{N}}\) satisfies an LDP with rate function \[J_\Delta(t) = D_{\operatorname{KL}}(T(q, p, \beta(t)) \| \phi_\delta ), \label{eq:J-delta}\qquad{(4)}\] where \(\beta(t) = \arg_{\beta \in \mathbb{R}} \{ H(T(q, p, \beta) \| q) = t \}.\)

Proof sketch of Theorem 1. The proof relies on a connection with the LDP of mismatched information along with a change of measure along the aligned family to \(\phi_\delta^m\). The LDP for the mismatched information is itself proven using a proof technique developed in  [18]. More precisely, one first constructs sets in the space of distributions and argues that the empirical type of a sequence falling within these sets constrains the reward. Finally, a careful application of Sanov’s Theorem [23] concludes the argument. ◻

As we mentioned above, one practical use case for the LDP of the reward is the derivation of the scaled cumulants of the reward via Varadhan’s lemma.

Corollary 1 (cumulants of the reward under aligned model). For all \(\rho > 0,\) the scaled cumulants of the reward are given by \[\lim_{m \to \infty} \frac{1}{m} \frac{1}{\rho} \log E_{Y^m \sim \phi^m_\delta} e^{\rho \, r(Y^m)} = - H_{1+\rho} (\phi_\delta \| q).\]

Note that the same result could have been directly derived from the characterization of the cumulants in Lemma 3 as well. This immediately implies that the average reward of a draw from \(\phi_\delta\) concentrates on \(- H (\phi_\delta \| q)\): \[\lim_{m \to \infty} \frac{1}{m} E_{Y^m \sim \phi^m_\delta} r(Y^m) = - H (\phi_\delta \| q).\] This characterizes the expected reward that is achieved by the solution to the KL-constrained RL problem.
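For a memoryless \(\phi_\delta\) and a linear reward, the limit in Corollary 1 is attained at every finite \(m\), since the expectation factorizes across positions. A brute-force numerical check (the specific stand-in for \(\phi_\delta\) and the choice of \(q\), \(\rho\), \(m\) are ours):

```python
import numpy as np
from itertools import product

phi = np.array([0.5, 0.3, 0.2])   # stand-in for phi_delta (any categorical works)
q = np.array([2/3, 1/9, 2/9])     # alignment distribution
rho, m = 0.7, 5

# Brute force (1/(m*rho)) log E exp(rho * r(Y^m)) over all K^m sequences,
# with the linear reward r(y^m) = sum_i log q(y_i).
E = 0.0
for ym in product(range(3), repeat=m):
    idx = list(ym)
    E += np.prod(phi[idx]) * np.exp(rho * np.log(q[idx]).sum())
scaled_cumulant = np.log(E) / (m * rho)

# Renyi cross entropy of order 1 + rho of phi to q:
H_1_rho = -np.log(np.sum(phi * q**rho)) / rho
print(scaled_cumulant, -H_1_rho)  # equal for product distributions
```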

4.2 Best-of-\(N\) Alignment↩︎

Next, we provide theoretical guarantees on the best-of-\(N\) alignment method and relate it to the optimal KL-constrained alignment problem. We first start with proving some properties of the best-of-\(N\) policy.

Under Assumptions 1-2, let \(\pi^m_N\) denote the best-of-\(N\) distribution on sequences of length \(m.\) Intuitively, one might expect that \(\pi^m_N\) retains the nice memoryless property of \(p^m\); we show below that, perhaps surprisingly, this property is lost.

Example 2 (The set of memoryless sources is not closed under best-of-\(N\)). Let \(m=2\) and \(N=2\). Let \(p\) be the distribution over \(\mathcal{Y}= \{ 0, 1, 2 \}\) given by \(p (y = 0) = 0.2\), \(p (y = 1) = 0.3\), \(p(y = 2) = 0.5\). Let the reward function be \(r (y = 0) = \log_e 6\), \(r (y = 1) = 0\) and \(r (y = 2) = \log_e 2\). Additionally, suppose best-of-\(N\) breaks ties uniformly at random. We can compute \(\pi^m_N\) directly and show that it is no longer i.i.d.: \(\pi^m_N (y_1 = 0) = \pi^m_N(y_2 = 0) = 209/625\), while \[\pi^m_N (y_1 = 0, y_2 = 0) = 49/625 \neq \pi^m_N (y_1 = 0) \, \pi^m_N(y_2 = 0).\] For completeness, we list \(\pi^m_N\) for all outcomes in Table 1.

Table 1: Full description of \(\pi_N^m\) for Example 2
\(\pi_N^m (y_1, y_2)\) \(y_1 = 0\) \(y_1 = 1\) \(y_1 = 2\)
\(y_2 = 0\) 49/625 21/250 43/250
\(y_2 = 1\) 21/250 81/10000 9/125
\(y_2 = 2\) 43/250 9/125 103/400
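The entries of Table 1 can be reproduced by exhaustively enumerating the two i.i.d. draws and applying uniform tie-breaking; a sketch in exact rational arithmetic:

```python
from fractions import Fraction
from itertools import product

# Reference p on {0, 1, 2} and per-symbol q-values 6, 1, 2 from Example 2
# (only the ranking of the total reward matters for best-of-N).
p = {0: Fraction(1, 5), 1: Fraction(3, 10), 2: Fraction(1, 2)}
qv = {0: 6, 1: 1, 2: 2}

def seq_prob(s):
    return p[s[0]] * p[s[1]]

def seq_reward(s):
    return qv[s[0]] * qv[s[1]]  # exp of the additive log-reward

seqs = list(product([0, 1, 2], repeat=2))
pi = {s: Fraction(0) for s in seqs}
for s1, s2 in product(seqs, repeat=2):  # two i.i.d. sequence draws, best-of-2
    mass = seq_prob(s1) * seq_prob(s2)
    if seq_reward(s1) > seq_reward(s2):
        pi[s1] += mass
    elif seq_reward(s2) > seq_reward(s1):
        pi[s2] += mass
    else:  # uniform tie-breaking
        pi[s1] += mass / 2
        pi[s2] += mass / 2

marg0 = sum(pi[(0, y2)] for y2 in range(3))
print(pi[(0, 0)], marg0)            # 49/625 and 209/625
print(pi[(0, 0)] == marg0 * marg0)  # False: not a product distribution
```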

Even though \(\pi^m_N\) is not a memoryless distribution, we show that it is still symmetric, i.e., an exchangeable distribution.

Lemma 5 (best-of-\(N\) alignment model is exchangeable). If Assumptions 1-2 hold and the ties in best-of-\(N\) algorithm are broken uniformly at random, then \(\pi^m_N\) is an exchangeable distribution, i.e., \[\pi^m_N(X_i=x_i, \ldots, X_j=x_j) = \pi^m_N(X_i=x_j, \ldots, X_j=x_i),\] for any \(i \neq j\).

Proof of Lemma 5. Without loss of generality, we prove the claim for \(i = 1\) and \(j = 2\). We need to show \[P [\pi_N^m = (x_1, x_2, \ldots, x_m)] = P [\pi_N^m = (x_2, x_1, \ldots, x_m)] .\] Let \(\boldsymbol{p}(\cdot | \boldsymbol{x}) = p^m\). Let \(p_c^m\) be the distribution obtained from \(p^m\) by swapping the first and second coordinates. Since \(p^m\) is a product of identical distributions, \(p_c^m\) and \(p^m\) have the same probability mass function. Running the best-of-\(N\) algorithm on \(p_c^m\) and \(p^m\), the first and second coordinates of the outputs are swapped, while the corresponding best-of-\(N\) distributions have the same probability mass function. ◻

In view of the previous example, it may seem hopeless to relate best-of-\(N\) aligned model and the optimal aligned model, given that the latter remains a memoryless source whereas best-of-\(N\) does not even satisfy that criterion. However, we show that for certain values of \(m\) and \(N\), both the optimal KL-constrained RL alignment and best-of-\(N\) alignment would roughly be doing the same thing, which we establish using Lemma 2.

To this end, we first state a KL divergence upper bound for the best-of-\(N\) strategy.

Lemma 6 (KL divergence upper bound for best-of-\(N\)). For any \(N,\) \(\boldsymbol{x},\) \(m,\) and \(p,\) \[D_{\operatorname{KL}}(\pi^m_N(\cdot | \boldsymbol{x}) \| p^m(\cdot | \boldsymbol{x})) \leq \log N.\]

Proof. This is a direct corollary of [19]. ◻
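For the single-token case (\(m = 1\)) with distinct per-symbol rewards, \(\pi_N\) has a closed form via the reward-ordered CDF, which makes the \(\log N\) bound of Lemma 6 easy to check exactly; a sketch with an arbitrary ternary \(p\) and reward \(\log q\):

```python
import numpy as np

p = np.array([0.2, 0.3, 0.5])
r = np.log(np.array([2/3, 1/9, 2/9]))   # per-symbol rewards, all distinct

order = np.argsort(r)                    # symbols in ascending reward order
F = np.cumsum(p[order])                  # CDF in reward order
F_prev = np.concatenate(([0.0], F[:-1]))

kls = {}
for N in [1, 2, 4, 16]:
    # pi_N(y) = F(y)^N - F(y^-)^N: y is output iff it is the best of N draws.
    pi = np.empty(3)
    pi[order] = F**N - F_prev**N
    kls[N] = float(np.sum(pi * np.log(pi / p)))
    print(N, kls[N], float(np.log(N)))   # KL never exceeds log N (Lemma 6)
```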

Given this result, we set \(N = \exp(m \delta)\) such that the optimal KL-constrained aligned model and best-of-\(N\) both satisfy the same upper bound on their KL divergence, i.e., \(\Delta = m\delta\). To show that they are close to each other in KL divergence using Lemma 2, we need to establish that they roughly achieve the same expected reward, which then implies that the KL divergence between the two distributions \(\pi^m_N\) and \(\phi_\delta^m\) tends to zero. We show that not only do the two methods achieve the same reward asymptotically, but their output types are also similar.

Lemma (type convergence of best-of-\(N\)). Let Assumptions 1-2 hold. Let \(Y^m \sim \pi_N^m\) be a sequence generated from the best-of-\(N\) method. Let \(\delta \in [0, \log \frac{1}{\zeta}]\), and \(N = \exp(m\delta)\). Denote by \(t(Y^m)\) the type of \(Y^m\) (Definition 6). Then, for any \(\epsilon > 0\), we have: \[\lim_{m \to \infty}\mathbb{P}(|t(Y^m) - \phi_\delta| \leq \epsilon) = 1.\]
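The convergence above can be observed in simulation; a Monte Carlo sketch under Assumptions 1-2 with arbitrary choices of \(p\), \(q\), and \(\delta\) (all ours):

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.2, 0.3, 0.5])
q = np.array([2/3, 1/9, 2/9])
delta, m = 0.05, 100
N = int(np.exp(m * delta))               # N = exp(m * delta), about 148

# Optimal tilted distribution phi_delta via bisection on alpha.
def tilt(alpha):
    w = p * q**alpha
    return w / w.sum()

def kl(a, b):
    return float(np.sum(a * np.log(a / b)))

lo, hi = 0.0, 100.0
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if kl(tilt(mid), p) < delta else (lo, mid)
phi = tilt(lo)

# Empirical mean type of the best-of-N output over repeated trials.
types = []
for _ in range(200):
    Y = rng.choice(3, size=(N, m), p=p)      # N candidate sequences of length m
    rewards = np.log(q)[Y].sum(axis=1)       # linear reward of each candidate
    best = Y[np.argmax(rewards)]
    types.append(np.bincount(best, minlength=3) / m)
mean_type = np.mean(types, axis=0)

# The best-of-N type lands much closer to phi_delta than to p.
print(mean_type, phi, np.abs(mean_type - phi).sum(), np.abs(p - phi).sum())
```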

The above result implies the following on the reward of the best-of-\(N\) model.

Lemma 7 (best-of-\(N\) achieves the same reward as optimal aligned model). Let Assumptions 1-2 hold. Let \(\delta \in [0, \log \frac{1}{\zeta}]\) and \(N = \exp(m \delta)\). Then \[\lim_{m \rightarrow \infty} \frac{1}{m} H(\pi^m_N(\cdot | \boldsymbol{x}) \| q^m(\cdot | \boldsymbol{x})) =\lim_{m \to \infty} \frac{1}{m} H(\phi^m_\delta (\cdot | \boldsymbol{x}) \| q^m(\cdot | \boldsymbol{x}))\]

Proof. Let \(Y^m \sim \pi_N^m\). We first relate a sample’s type to its reward: \[\mathbb{E}_{x \sim t(Y^m)} [\boldsymbol{r}(x)] = \sum_{x \in \boldsymbol{\mathcal{Y}}} \frac{N_x}{m} \boldsymbol{r}(x) = \frac{1}{m} \, \boldsymbol{r}(Y^m),\] where \(N_x\) is the count of element \(x\) over the stream of \(m\) tokens of \(Y^m\). By the type convergence lemma above, the type of best-of-\(N\) converges to \(\phi_\delta\), and hence, \[\begin{align} \lim_{m\rightarrow \infty}\frac{1}{m} \left( H(\pi^m_N(\cdot | \boldsymbol{x}) \| q^m(\cdot | \boldsymbol{x})) - H(\phi^m_\delta (\cdot | \boldsymbol{x}) \| q^m(\cdot | \boldsymbol{x})) \right) & = - \lim_{m\rightarrow \infty}\left(\frac{1}{m} \, \boldsymbol{r}(Y^m) - \mathbb{E}_{x \sim \phi_\delta} [\boldsymbol{r}(x)]\right) = 0, \end{align}\] which completes the proof. ◻
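The first step of this proof, relating a sequence's type to its average per-token reward, can be checked mechanically. The alphabet, reward values, and sequence below are illustrative assumptions:

```python
from collections import Counter

def type_of(seq, alphabet):
    """Empirical distribution (type) of a sequence over a fixed alphabet."""
    counts = Counter(seq)
    m = len(seq)
    return [counts[a] / m for a in alphabet]

alphabet = ['a', 'b', 'c']
reward = {'a': 0.0, 'b': 1.0, 'c': 2.5}   # illustrative per-token rewards
ym = list("abcabccba")                     # a sequence of m = 9 tokens

# E_{x ~ t(Y^m)}[r(x)] equals (1/m) times the total reward of Y^m.
lhs = sum(t * reward[a] for t, a in zip(type_of(ym, alphabet), alphabet))
rhs = sum(reward[y] for y in ym) / len(ym)
assert abs(lhs - rhs) < 1e-12
```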

We now state our main theorem, which says that the best-of-\(N\) distribution with \(N=\exp(m \delta)\) is asymptotically equivalent to the optimal solution of the KL-constrained RL alignment.

Theorem 2. Under Assumptions 1-2, let \(\phi_\delta^m\) be the optimal solution to the KL-constrained RL problem, and \(\pi^m_{N}\) be the distribution of the best-of-\(N\) method. For any \(\delta \in [0, \log \frac{1}{\zeta}]\), if \(N=\exp(m \delta)\), then we have that for all \(\boldsymbol{x}\), \[\lim_{m \to \infty} \frac{1}{m} D_{\operatorname{KL}}(\pi^m_N(\cdot | \boldsymbol{x}) \| \phi^m_\delta(\cdot | \boldsymbol{x})) = 0.\]

Proof. Combining Lemma 7 with Lemma 2 yields Theorem 2. ◻

Theorem 2 shows that the best-of-\(N\) distribution is asymptotically close to the KL-constrained RL solution. We remark that even though best-of-\(N\) is not necessarily a product distribution, as shown in the example above, it is still close to the optimal solution of the KL-constrained RL, which is a product distribution in our setting. We believe that Theorem 2 sheds some light on the remarkable performance of the best-of-\(N\) method when evaluated on the tradeoffs between expected reward and the KL divergence between the aligned model and the reference model (reward-KL tradeoffs).

We also empirically validate that the type of best-of-\(N\) is close to the solution of the KL-constrained RL problem for \(N = \exp(m\delta)\). To this end, we revisit the ternary example in Figure 1. As can be seen, even for modest values of \(m\) and \(N\), we observe that \(\mathbb{E}[t(Y^m)]\), where \(Y^m \sim \pi^m_N\), is remarkably close to \(\phi_\delta\).
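A small simulation along these lines is easy to set up. The ternary reference model, the per-token rewards (with \(q \propto e^{r}\)), and the trial counts below are illustrative assumptions, and the tilt \(\beta\) solving \(D_{\operatorname{KL}}(T(q,p,\beta) \| p) = \delta\) is found by bisection; this is a sketch of such an experiment, not the exact configuration behind Figure 1:

```python
import math, random

def normalize(w):
    z = sum(w)
    return [x / z for x in w]

def tilt(p, q, beta):
    # Tilted distribution T(q, p, beta) ∝ p(y) q(y)^beta
    return normalize([pi * qi ** beta for pi, qi in zip(p, q)])

def kl(a, b):
    return sum(x * math.log(x / y) for x, y in zip(a, b) if x > 0)

def phi_delta(p, q, delta, hi=50.0):
    # Solve D_KL(T(q,p,beta) || p) = delta over beta >= 0 by bisection
    # (the divergence to p increases with beta on this range).
    lo = 0.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if kl(tilt(p, q, mid), p) < delta:
            lo = mid
        else:
            hi = mid
    return tilt(p, q, lo)

random.seed(0)
p = [0.5, 0.3, 0.2]
r = [0.0, 1.0, 2.0]                      # linear per-token rewards (illustrative)
q = normalize([math.exp(x) for x in r])  # reward model as q ∝ exp(r)
m, delta = 25, 0.15
N = round(math.exp(m * delta))           # N = exp(m * delta)
phi = phi_delta(p, q, delta)             # per-token optimal aligned model

# Average the type of the best-of-N sequence over many trials.
mean_type = [0.0, 0.0, 0.0]
trials = 300
for _ in range(trials):
    seqs = [random.choices(range(3), weights=p, k=m) for _ in range(N)]
    best = max(seqs, key=lambda s: sum(r[y] for y in s))
    for y in best:
        mean_type[y] += 1.0 / (m * trials)

# E[t(Y^m)] lands much closer to phi_delta than the reference p does.
dist = max(abs(a - b) for a, b in zip(mean_type, phi))
assert dist < max(abs(a - b) for a, b in zip(p, phi))
```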

We have thus shown that the typical behavior of the best-of-\(N\) method is mirrored by the solution to the KL-constrained RL. We conclude this section with a conjecture: we propose that the atypical behavior of best-of-\(N\) could also be described by a large deviation principle (LDP) with the same rate function as that of the KL-constrained RL, as detailed in Theorem 1.

5 Concluding Remarks↩︎

In this paper, we considered the language model alignment problem and studied two popular alignment techniques: KL-constrained reinforcement learning and best-of-\(N.\) We established several properties of the solutions of both techniques and proved an asymptotic equivalence between the two problems, providing theoretical justification for the remarkable performance of best-of-\(N\) in practice. While our derivations were obtained under strong assumptions about the string-generating sources, we believe that these results may extend beyond these classes of sources. In particular, we performed some extra experiments to compare the best-of-\(N\) solution, \(\boldsymbol{\pi}_N,\) to the optimal KL-regularized RL solution, \(\boldsymbol{\phi}_\Delta,\) where the KL divergence budget for both models is \(\Delta\). We found that for random \(\boldsymbol{p}\) and \(\boldsymbol{q}\) over an alphabet of size \(K>1000,\) the KL divergence \(D_{\operatorname{KL}}(\boldsymbol{\pi}_N \| \boldsymbol{\phi}_\Delta)\) was always bounded by \(0.01\) for all \(N \in [1, 1000],\) which is striking. Even for \(K<10,\) we empirically observed that \(D_{\operatorname{KL}}(\boldsymbol{\pi}_N \| \boldsymbol{\phi}_\Delta) < 0.5.\) We hope future research can extend the theoretical investigation to better understand this phenomenon.
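The single-token version of this comparison is straightforward to reproduce. The sketch below assumes the reward ranks outcomes by \(q(y)\) (so that best-of-\(N\) prefers high \(q\), consistent with the tilted family \(\boldsymbol{\phi}_\Delta \propto \boldsymbol{p}\,\boldsymbol{q}^\beta\)), computes \(\boldsymbol{\pi}_N\) in closed form, matches \(\Delta = D_{\operatorname{KL}}(\boldsymbol{\pi}_N \| \boldsymbol{p})\), and solves for the tilt by bisection; the alphabet size, seed, and sampled \(N\) values are illustrative assumptions:

```python
import math, random

def normalize(w):
    z = sum(w)
    return [x / z for x in w]

def tilt(p, q, beta):
    # T(q, p, beta) ∝ p(y) q(y)^beta
    return normalize([pi * qi ** beta for pi, qi in zip(p, q)])

def kl(a, b):
    return sum(x * math.log(x / y) for x, y in zip(a, b) if x > 0)

def best_of_n_dist(p, q, N):
    # Single-token best-of-N, ranking outcomes by q(y): with distinct q values,
    # P[best draw = y] = F(y)^N - F(y^-)^N, where F is the CDF in increasing-q order.
    order = sorted(range(len(p)), key=lambda i: q[i])
    pi, cdf = [0.0] * len(p), 0.0
    for i in order:
        prev, cdf = cdf, cdf + p[i]
        pi[i] = cdf ** N - prev ** N
    return pi

random.seed(1)
K = 8
p = normalize([random.random() for _ in range(K)])
q = normalize([random.random() for _ in range(K)])

for N in (2, 8, 32):
    pi_N = best_of_n_dist(p, q, N)
    Delta = kl(pi_N, p)                  # match the KL budget of best-of-N
    lo, hi = 0.0, 40.0                   # bisect for the tilt with D(phi||p) = Delta
    for _ in range(200):
        mid = (lo + hi) / 2
        if kl(tilt(p, q, mid), p) < Delta:
            lo = mid
        else:
            hi = mid
    phi = tilt(p, q, lo)
    assert kl(pi_N, phi) < 0.5           # small, echoing the empirical observation
```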


The authors are grateful to Alekh Agarwal, Ananth Balashankar, Jonathan Berant, Michael Collins, Jacob Eisenstein, Adam Fisch, Jong Lee, and Mahyar Jafarinodeh for helpful discussions on the fundamentals of language model alignment.

6 Proofs of Properties of KL-constrained RL↩︎

Proof of the closed-form solution to the KL-constrained RL problem. The Lagrangian is written as \[\mathcal{L}(\boldsymbol{\phi}) = H (\boldsymbol{\phi}\|\boldsymbol{q}) + \lambda \left( D_{\operatorname{KL}}(\boldsymbol{\phi}\|\boldsymbol{p}) - \Delta \right) .\] Fixing \(\lambda > 0\), we can solve for its minimum with respect to \(\boldsymbol{\phi}\) as follows: \[\begin{align} \min_{\boldsymbol{\phi}} \mathcal{L}(\boldsymbol{\phi}) & = \min_{\boldsymbol{\phi}} H (\boldsymbol{\phi}\|\boldsymbol{q}) + \lambda \left( D_{\operatorname{KL}}(\boldsymbol{\phi}\|\boldsymbol{p}) - \Delta \right) \\ & = \min_{\boldsymbol{\phi}} \sum_{\boldsymbol{y}\in {\mathcal{Y}}} \boldsymbol{\phi}(\boldsymbol{y}) \left( \log \frac{1}{\boldsymbol{q}(\boldsymbol{y})} + \lambda \log \frac{\boldsymbol{\phi}(\boldsymbol{y})}{\boldsymbol{p}(\boldsymbol{y})} - \lambda \Delta \right)\\ & = \min_{\boldsymbol{\phi}} \lambda \sum_{\boldsymbol{y}} \boldsymbol{\phi}(\boldsymbol{y}) \left( \log \frac{\boldsymbol{\phi}(\boldsymbol{y})}{\boldsymbol{p}(\boldsymbol{y})\boldsymbol{q}(\boldsymbol{y})^{1 / \lambda}} - \Delta \right)\\ & = \min_{\boldsymbol{\phi}} \lambda \sum_{\boldsymbol{y}} \boldsymbol{\phi}(\boldsymbol{y}) \left( \log \frac{\boldsymbol{\phi}(\boldsymbol{y})}{T (\boldsymbol{q}, \boldsymbol{p}, 1 / \lambda) (\boldsymbol{y})} \right) + C(\lambda, \Delta),\\ & = \min_{\boldsymbol{\phi}} \lambda D_{\operatorname{KL}}( \boldsymbol{\phi}(\boldsymbol{y}) \| T (\boldsymbol{q}, \boldsymbol{p}, 1 / \lambda) (\boldsymbol{y})) + C(\lambda, \Delta), \end{align}\] which is uniquely minimized by \(\boldsymbol{\phi}(\boldsymbol{y}) = T (\boldsymbol{q}, \boldsymbol{p}, 1 / \lambda) (\boldsymbol{y})\), where \(C(\lambda, \Delta)\) collects the terms that do not depend on \(\boldsymbol{\phi}\) (including the normalization constant of the tilted distribution). ◻
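As a numerical sanity check of the fixed-\(\lambda\) step, the sketch below compares the Lagrangian value at the tilted distribution \(T(\boldsymbol{q}, \boldsymbol{p}, 1/\lambda) \propto \boldsymbol{p}\,\boldsymbol{q}^{1/\lambda}\) against random candidate distributions; the choices of \(\boldsymbol{p}\), \(\boldsymbol{q}\), and \(\lambda\) are illustrative assumptions:

```python
import math, random

def normalize(w):
    z = sum(w)
    return [x / z for x in w]

def tilt(p, q, beta):
    # T(q, p, beta) ∝ p(y) q(y)^beta
    return normalize([pi * qi ** beta for pi, qi in zip(p, q)])

def lagrangian(phi, p, q, lam):
    # H(phi || q) + lam * D_KL(phi || p), for phi with full support
    ce = -sum(f * math.log(qi) for f, qi in zip(phi, q))
    d = sum(f * math.log(f / pi) for f, pi in zip(phi, p) if f > 0)
    return ce + lam * d

p = [0.5, 0.3, 0.2]
q = [0.2, 0.3, 0.5]
lam = 2.0
star = tilt(p, q, 1.0 / lam)    # claimed unique minimizer
val = lagrangian(star, p, q, lam)

random.seed(0)
for _ in range(2000):
    cand = normalize([random.random() for _ in p])
    assert val <= lagrangian(cand, p, q, lam) + 1e-12
```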

7 Proofs of Large Deviation Principle (LDP)↩︎

To prove the LDP, we first show some results on the KL-constrained RL solution.

Theorem 3 (LDP for mismatched information). Let \(Y^m \sim p^m.\) The sequence \(\{ -\frac{1}{m} \log q^m(Y^m)\}_{m \in \mathbb{N}}\) satisfies an LDP with rate function \[J_0(t) = D_{\operatorname{KL}}(T(q, p, \beta(t)) \| p ), \label{eq:J-delta-info}\] where \(\beta(t)\) is the unique \(\beta \in \mathbb{R}\) such that \(H(T(q, p, \beta) \| q) = t.\)

The proof of Theorem 3 relies on a correspondence between mismatched information and certain sets of distributions, which we define shortly. This correspondence is implicitly used in [24] and made explicit in [18]. For \(\epsilon \geq 0\) and \(\alpha \in \mathbb{R}\), let \[\begin{align} \mathcal{D}(q,\alpha,\epsilon) & \triangleq \left\{ \varphi \in \Delta_{\mathcal{X}}: H(\varphi \| q) - H(T(q,\alpha)\| q) \leq \epsilon \right\} \\ \mathcal{E}(q,\alpha,\epsilon) & \triangleq \left\{ \varphi \in \Delta_{\mathcal{X}}: H(\varphi \| q) - H(T(q,\alpha)\| q) \geq - \epsilon \right\} \\ \mathcal{B}(q,\alpha,\epsilon) &\triangleq \left\{\varphi \in \Delta_{\mathcal{X}}:H(T(q,\alpha)\|q) - H(\varphi \| q) \in [0,\epsilon]\right\}. \end{align}\] The sets above are extensions of the tilted weakly typical sets of order \(\alpha\) [24], and capture the sets of types that are, respectively, more likely, less likely, and (roughly) as likely according to \(q\) as \(T(q,\alpha)\), where we define \[T(q, \alpha) := T(q, u, \alpha),\] with \(u = (1/|\mathcal{Y}|, \ldots, 1/|\mathcal{Y}|)\) the uniform distribution. For these sets, we then have the following lemma.

Lemma 8. For any \(\alpha > 0\), the following implications hold for sufficiently large \(m\): \[\begin{align} & \left| - \frac{1}{m} \log q(y^m) - H(T(q,\alpha)\| q) \right| \leq \epsilon \Rightarrow t(y^m) \in \mathcal{D}(q,\alpha,2\epsilon), \\ & \left| - \frac{1}{m} \log q(y^m) - H(T(q,\alpha)\| q) \right| \leq \epsilon \Rightarrow t(y^m) \in \mathcal{E}(q,\alpha,2\epsilon), \\ &\left| - \frac{1}{m} \log q(y^m) - H(T(q,\alpha)\| q) \right| \leq \epsilon \Leftarrow t(y^m) \in \mathcal{B}(q,\alpha,\epsilon). \end{align}\]

This was proved implicitly in the proofs of Theorems 3 and 5 in [24]. We are now equipped to provide the proof of the main theorem.

Proof of Theorem 3. Note that as \(-\frac{1}{m}\log q(Y^m)\) takes values in a compact subset \([0,\log \frac{1}{\zeta}]\) of \(\mathbb{R}\) (see Assumptions 1-2), it is sufficient to prove that the limit below exists and equals the negative of the rate function (see [24] for a formal discussion), i.e., \[\begin{align} \lim_{\epsilon \downarrow 0} \lim_{m \to \infty} \frac{1}{m}\log \mathbb{P}_p^m\left( \left|- \frac{1}{m} \log q(Y^m) - t \right| < \epsilon \right)= -J(t). \end{align}\] We proceed with the proof in three separate cases.

Case (a): We let \(t \in (H(q),\log \frac{1}{\zeta})\), which implies \(\alpha(t) \in (0,1)\) by monotonicity of \(H(T(q,\alpha)\| q)\) in \(\alpha \in \mathbb{R}\). Note that the first and third implications in Lemma 8 respectively imply \[\begin{align} & \lim_{\epsilon \downarrow 0} \limsup_{m \to \infty} \frac{1}{m} \log \mathbb{P}_{p^m}\left( \left| - \frac{1}{m} \log q(Y^m) - H(T(q,\alpha(t))\|q) \right| \leq \epsilon \right) \notag \\ &\leq \lim_{\epsilon \downarrow 0} \limsup_{m \to \infty} \frac{1}{m} \log \mathbb{P}_{p^m}(t(Y^m) \in \mathcal{D}(q,\alpha(t),2\epsilon)), \tag{4}\\ & \lim_{\epsilon \downarrow 0} \liminf_{m \to \infty} \frac{1}{m} \log \mathbb{P}_{p^m}\left( \left| - \frac{1}{m} \log q(Y^m) - H(T(q,\alpha(t))\|q) \right| \leq \epsilon \right) \notag \\ &\geq \lim_{\epsilon \downarrow 0} \liminf_{m \to \infty} \frac{1}{m} \log \mathbb{P}_{p^m}(t(Y^m) \in \mathcal{B}(q,\alpha(t),\epsilon)). \tag{5} \end{align}\] Thus, it suffices to show that the RHS of 4 and 5 both evaluate to \(-D(T(q, p, \alpha(t)) \| p)\). This is done via Sanov’s Theorem. Recall that Sanov’s Theorem [23] states that, for a set of distributions \(\mathcal{C}\), \[\begin{align} - \inf_{\gamma \in \mathrm{int} \mathcal{C}} D(\gamma \| p) &\leq \liminf_{m \to \infty} \frac{1}{m} \log \mathbb{P}(t(y^m) \in \mathcal{C}) \notag \\ &\leq \limsup_{m \to \infty} \frac{1}{m} \log \mathbb{P}(t(y^m) \in \mathcal{C}) \notag \\ & \leq - \inf_{\gamma \in \mathrm{cl} \mathcal{C}} D(\gamma \| p). \end{align}\] To obtain the upper bound, we apply this result to the set \(\mathcal{D}(q,\alpha(t),2\epsilon)\). Observing that this holds for any \(\epsilon\), and then letting \(\epsilon \downarrow 0\), we get that the RHS of 4 is upper bounded by \[\begin{align} - \lim_{\epsilon \downarrow 0} \inf_{\gamma \in \mathrm{cl} \mathcal{D}(q,\alpha(t),2\epsilon)} D(\gamma \| p). \label{eq:lim95eps95to950} \end{align}\tag{6}\] We now make use of a basic topological fact.
Observe that \(D(\gamma \| p)\) is strictly convex in \(\gamma\) for a fixed \(p\), and thus there is a unique minimizer \(\gamma(t,\epsilon)\) over \(\mathrm{cl}\mathcal{D}(q,\alpha(t),2\epsilon)\), which belongs to the set by continuity of \(D(\gamma \| p)\) and compactness of the set. Thus, the minimizers form a collection of points \(\gamma(t,\epsilon) \in \mathrm{cl}\mathcal{D}(q,\alpha(t),2\epsilon)\). It follows from compactness that the limit point satisfies \(\lim_{\epsilon \downarrow 0} \gamma(t,\epsilon) \in \bigcap_{\epsilon > 0} \mathrm{cl}\mathcal{D}(q,\alpha(t),2\epsilon) = \mathrm{cl} \mathcal{D}(q,\alpha(t),0)\), where we have used that \(\alpha(t) > 0\). Therefore, 6 equals the negative of the optimal value of the convex program \[\begin{align} & \inf_{\gamma \in \Delta_{\mathcal{X}}}D(\gamma \| p) \notag \\ & \text{subject to }H(\gamma\|q) \leq H(T(q,\alpha(t))\| q). \end{align}\] This optimization problem is convex, and thus can be solved analytically by writing the KKT conditions [25], which give a solution \(\gamma_{q,p}(t) \in \mathcal{T}_{q,p}\) and optimal value \(D(\gamma_{q,p}(t)\| p)\).

Analogously, the RHS of 5 can be shown to be lower bounded by \(- \inf D(\gamma \| p)\), where the infimum is over \(\gamma \in \mathcal{B}(q,\alpha(t),0)\), by noting that \(\mathcal{B}(q,\alpha,0) \subset \mathrm{int} \mathcal{B}(q,\alpha,\epsilon)\) for any \(\epsilon > 0\). Again, this optimization can be solved analytically and gives the desired value. Putting these results together, we get that \[\begin{align} \lim_{\epsilon \downarrow 0} \lim_{m \to \infty} \frac{1}{m}\log &\mathbb{P}_p^m\left( \left|-\frac{1}{m} \log q(Y^m) - H(T(q,\alpha(t))\| q)\right| < \epsilon \right) \notag \\ &= -D(\gamma_{q,p}(t) \| p). \end{align}\]

Case (b): We now let \(t \in (0,H(q))\), which implies \(\alpha(t) \in (1,\infty)\). The proof in this case follows from the same step as in Case (a), by replacing the set \(\mathcal{D}(q,\alpha(t),\epsilon)\) with the set \(\mathcal{E}(q,\alpha(t),\epsilon)\).

Case (c): Finally, let \(t = H(q)\), or equivalently, \(\alpha(t) = 1\). In this case, note that \(p \in \mathcal{B}(q,1,\epsilon)\), and thus, by the law of large numbers and 5 , we have that \[\begin{align} \lim_{\epsilon \downarrow 0} \lim_{m \to \infty} \frac{1}{m}\log \mathbb{P}_p^m\left( \left|-\frac{1}{m} \log q(Y^m) - t \right| < \epsilon \right) \geq 0, \end{align}\] which implies that \(J(t) = 0\) in this case. ◻
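The rate function of Theorem 3 can also be evaluated numerically: \(\beta(t)\) is found by bisection, using the strict monotonicity of \(H(T(q,p,\beta) \| q)\) in \(\beta\), and \(J_0(t)\) follows. The sketch below, with illustrative \(p\) and \(q\), also checks the law-of-large-numbers point: at \(t = H(p \| q)\) we have \(\beta(t) = 0\) and \(J_0(t) = 0\).

```python
import math

def normalize(w):
    z = sum(w)
    return [x / z for x in w]

def tilt(p, q, beta):
    # T(q, p, beta) ∝ p(y) q(y)^beta
    return normalize([pi * qi ** beta for pi, qi in zip(p, q)])

def cross_entropy(phi, q):
    return -sum(f * math.log(qi) for f, qi in zip(phi, q))

def kl(a, b):
    return sum(x * math.log(x / y) for x, y in zip(a, b) if x > 0)

def J0(p, q, t, lo=-30.0, hi=30.0):
    # H(T(q,p,beta) || q) is strictly decreasing in beta, so bisect for beta(t),
    # then evaluate the rate D_KL(T(q,p,beta(t)) || p).
    for _ in range(200):
        mid = (lo + hi) / 2
        if cross_entropy(tilt(p, q, mid), q) > t:
            lo = mid
        else:
            hi = mid
    return kl(tilt(p, q, (lo + hi) / 2), p)

p = [0.5, 0.3, 0.2]
q = [0.2, 0.3, 0.5]
# At t = H(p || q), the tilt is beta = 0 and the rate vanishes.
assert J0(p, q, cross_entropy(p, q)) < 1e-9
# Away from the typical value, the rate is strictly positive.
assert J0(p, q, cross_entropy(p, q) - 0.2) > 0.0
```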

Proof of Theorem 1. The proof is completed by invoking Theorem 3 with \(q\) replaced with \(\phi_\Delta\), noticing that \(\phi_\Delta = T(q, p, \Delta),\) and invoking Lemma 9 to express \(T(q, \phi_\Delta, \beta)\) in terms of \(p\). ◻

Proof of Corollary 1. We use Varadhan’s Lemma [23], which states that if a sequence of random variables \(A_m\) satisfies an LDP with rate function \(J(t)\), then we have: \[\lim_{m \to \infty} \frac{1}{m} \log \mathbb{E}[\exp(m F(A_m))] = \sup_t \{F(t) - J(t)\},\] for any continuous and bounded function \(F\). Applying this result with \(F(t) = \gamma \cdot t\), we obtain: \[\begin{align} \lim_{m \to \infty} &\frac{1}{m} \log \mathbb{E}_{Y^m \sim \phi^m_\Delta} e^{\gamma r(Y^m)} \tag{7}\\ = &\sup_t \; \{\gamma H(T(q, p, \beta(t)) \| q) - D(T(q, p, \beta(t)) \| \phi_\Delta) \}\nonumber \\ = &\sup_\beta \; \{\gamma H(T(q, p, \beta) \| q) - D(T(q, p, \beta) \| \phi_\Delta) \}\tag{8}, \end{align}\] where we used the definition of \(\beta(t)\) in Theorem 1 in 7 , and changed the optimization variable to \(\beta\) in 8 . Expanding the RHS, and collecting terms, we get: \[\begin{align} & \; - \sum_y T(q, p,\beta)(y) \log \frac{T(q, p, \beta)(y)}{\phi_\Delta(y) \cdot q(y)^\gamma} \nonumber \\ = & - \sum_y T(q, p, \beta)(y) \log \frac{T(q, p, \beta)(y)}{T(q, \phi_\Delta, \gamma)(y)} \nonumber \\ & - \log \sum_y \phi_\Delta(y)q(y)^\gamma \\ = & - D_{\operatorname{KL}}(T(q,p, \beta) \| T(q, \phi_\Delta, \gamma)) - \log \sum_y \phi_\Delta(y)q(y)^\gamma. \nonumber \end{align}\] This is maximized when \(T(q, p, \beta) = T(q, \phi_\Delta, \gamma)\), which is possible since \(\phi_\Delta\) is on the aligned family between \(p\) and \(q\). The proof follows by identifying the remaining term as \((\gamma+1) H_{\gamma+1}(\phi_\Delta \| q)\). ◻

8 Proofs of Properties of Best-of-\(N\)↩︎

Let \(K = |\mathcal{Y}|\). For a sequence \(y^m,\) let \(t({y^m})\) denote its type. The proof relies on the notion of \(m\)-grained types, which is defined below.

Definition 8. A discrete distribution \(p\) on \(\mathcal{Y}\) is called \(m\)-grained if all of its probabilities are multiples of \(\frac{1}{m}\), i.e., \(\forall i \in \boldsymbol{\mathcal{Y}}, \exists c \in \mathbb{N}, p(i) = c \cdot \frac{1}{m}\).

Proof of the type convergence lemma. Let \(\mathcal{E} = \{ \lambda : |\lambda - \phi_\delta| > \epsilon \}\), and let \(\mathcal{T}\) be the set of \(m\)-grained distributions. We note that: \[\begin{align} \mathbb{P}(|t(Y^m) - \phi_\delta| > \epsilon) & = \mathbb{P}(t(Y^m) \in \mathcal{E}) \\ & \leq \sum_{\lambda \in \mathcal{E} \cap \mathcal{T}} \mathbb{P}(t(Y^m) = \lambda). \end{align}\] Therefore, if we prove that for any valid \(\lambda \in \mathcal{E}\cap\mathcal{T}\), the probability of \(Y^m\) having type \(\lambda\) is exponentially small in \(m\), the proof is complete, since there are only polynomially many valid types.

Recall that the best-of-\(N\) sequence \(Y^m\) is selected among \(X_1^m,\ldots, X_N^m \overset{i.i.d.}{\sim} \boldsymbol{p}\) as the one with the highest reward, i.e., \(r(Y^m) = \max \{ r(X_1^m), \ldots, r(X_N^m)\}.\) For any valid type \(\lambda \in \mathcal{T}\), we thus have: \[\begin{align} \mathbb{P}(t(Y^m) = \lambda) & = N \mathbb{P}(t(X^m_N) = \lambda) \prod_{i=1}^{N-1}\mathbb{P}(r(X^m_i) \leq r(\lambda)). \label{eq:best95of95N95exact95prob} \end{align}\tag{9}\] Next, note that \(r(X^m) \leq r(\lambda)\) can be written equivalently in terms of the type of \(X^m\) as \(H(t(X^m) \| q) \leq H(\lambda \| q)\). Letting the set \(\mathcal{A}(\lambda) = \{ t: H(\lambda \| q) > H(t \| q) \}\), the last term in 9 is equivalently written as: \[\begin{align} \mathbb{P}(r(X^m) \leq r(\lambda)) & = \mathbb{P}(t(X^m) \in \mathcal{A}(\lambda)) \\ & \leq 1 - \exp \{ - m D(T(p, q, \alpha(\lambda)) \| p)\}, \end{align}\] where \(\alpha(\lambda)\) is the unique tilt so that \(H(T(p, q, \alpha(\lambda)) \| q ) = H(\lambda \| q)\) (by Sanov’s Theorem). Therefore, the product in 9 can be bounded as: \[\begin{align} \prod_{i=1}^{N-1}\mathbb{P}(r(X^m_i) \leq r(\lambda)) &\leq \left(1 - \exp\left\{-m D(T(p, q, \alpha(\lambda)) \| p) \right\}\right)^N \\ & \leq \exp\left\{-m \left[ D(T(p, q, \alpha(\lambda)) \| p) - \delta\right]_+\right\}. \end{align}\] On the other hand, if \(N\) is sufficiently large, precisely if \(\delta > D(T(p, q, \alpha(\lambda)) \| p)\), another bound is tighter, namely: \[\begin{align} \prod_{i=1}^{N-1}\mathbb{P}(r(X^m_i) \leq r(\lambda)) &\leq \left(1 - \exp\left\{-m D(T(p, q, \alpha(\lambda)) \| p) \right\}\right)^N \\ & = \left( \left(1 - \frac{1}{x}\right)^x \right)^{N/x} \\ & \leq \exp\left\{-\exp\left\{m \left(\delta - D(T(p, q, \alpha(\lambda)) \| p)\right)\right\}\right\}, \end{align}\] where we used \((1-1/x)^x \leq 1/e\) and \(x = \exp\{m D(T(p, q, \alpha(\lambda)) \| p)\}\), so that \(N/x = \exp\{m(\delta - D(T(p, q, \alpha(\lambda)) \| p))\}\).
Therefore, we obtain the general bound: \[\begin{align} \prod_{i=1}^{N-1}\mathbb{P}(r(X^m_i) \leq r(\lambda)) \leq \exp\{- m F(m, \alpha(\lambda), \delta, p, q)\}, \end{align}\] where \[\begin{align} F(m, \alpha(\lambda), \delta, p, q) = \begin{cases} D(T(p, q, \alpha(\lambda)) \| p) - \delta & \text{if } D(T(p, q, \alpha(\lambda)) \| p) > \delta \\ \frac{1}{m}\exp \{m (\delta - D(T(p, q, \alpha(\lambda)) \| p))\} & \text{if } D(T(p, q, \alpha(\lambda)) \| p) < \delta \\ 0 & \text{if } D(T(p, q, \alpha(\lambda)) \| p) = \delta \end{cases}\label{eq:F95function} \end{align}\tag{10}\] Then, putting everything together in 9 , we get: \[\begin{align} \mathbb{P}(t(Y^m) = \lambda) & \leq \exp \left\{-m\left[D(\lambda \| p) + F(m, \alpha(\lambda), \delta, p, q) - \delta \right]\right\} \\ & = \exp \left\{-m\left[D(\lambda \| T(p, q, \alpha(\lambda))) + D(T(p, q, \alpha(\lambda)) \| p) + F(m, \alpha(\lambda), \delta, p, q) - \delta \right]\right\} \label{eq:I-pythagorean32theorem} \end{align}\tag{11}\] where 11 follows from the I-Pythagorean Theorem. For 11 to decay to zero as \(m \to \infty\), we must have the exponent be strictly positive. We check this in three steps:

  • If \(D(T(p, q, \alpha(\lambda)) \| p) > \delta\), then identifying 10 , the exponent in 11 is: \[\begin{align} D(\lambda \| T(p, q, \alpha(\lambda))) + 2(D(T(p, q, \alpha(\lambda)) \| p) - \delta) > 0, \end{align}\] and therefore \(\mathbb{P}(t(Y^m) = \lambda)\) goes to zero exponentially fast.

  • If \(D(T(p, q, \alpha(\lambda)) \| p) < \delta\), then the exponent in 11 is: \[\begin{align} D(\lambda \| T(p, q, \alpha(\lambda))) + D(T(p, q, \alpha(\lambda)) \| p)- \delta + \frac{1}{m}\exp \{m (\delta - D(T(p, q, \alpha(\lambda)) \| p))\} > 0, \end{align}\] for \(m\) large enough, since the exponential term dominates.

  • Finally, if \(D(T(p, q, \alpha(\lambda)) \| p) = \delta\), then the exponent becomes: \[\begin{align} D(\lambda \| T(p, q, \alpha(\lambda))), \end{align}\] which is strictly positive unless \(\lambda = T(p, q, \alpha(\lambda))\).

Putting these cases together, we obtain that \(\mathbb{P}(t(Y^m) = \lambda)\) goes to 0 exponentially fast unless \(\lambda = T(p,q, \alpha(\delta))\). ◻

Lemma 9. Let \(\alpha, \beta \in \mathbb{R}.\) Then: \[T(q, T(q, p, \alpha), \beta) = T(q, p, \alpha+\beta).\]

Proof. The proof is completed by noticing \[\begin{align} T(q, T(q, p, \alpha), \beta) & \propto T(q, p, \alpha)\, q^\beta \nonumber\\ & \propto p\, q^{\alpha} q^{\beta} = p\, q^{\alpha + \beta} \propto T(q, p, \alpha + \beta). \end{align}\] ◻
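Lemma 9's composition rule is also straightforward to verify numerically; the choices of \(p\), \(q\), \(\alpha\), and \(\beta\) below are illustrative:

```python
def normalize(w):
    z = sum(w)
    return [x / z for x in w]

def tilt(p, q, beta):
    # T(q, p, beta) ∝ p(y) q(y)^beta
    return normalize([pi * qi ** beta for pi, qi in zip(p, q)])

p = [0.5, 0.3, 0.2]
q = [0.2, 0.3, 0.5]
alpha, beta = 0.7, -0.3
lhs = tilt(tilt(p, q, alpha), q, beta)   # T(q, T(q, p, alpha), beta)
rhs = tilt(p, q, alpha + beta)           # T(q, p, alpha + beta)
assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
```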


Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33: 1877–1901, 2020.
Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805, 2023.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In International conference on machine learning, pp. 2256–2265. PMLR, 2015.
Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33: 6840–6851, 2020.
Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456, 2020.
Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. Advances in neural information processing systems, 30, 2017.
Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. Learning to summarize with human feedback. Advances in Neural Information Processing Systems, 33: 3008–3021, 2020.
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. arXiv preprint arXiv:2203.02155, 2022.
Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022.
Yao Zhao, Mikhail Khalman, Rishabh Joshi, Shashi Narayan, Mohammad Saleh, and Peter J Liu. Calibrating sequence likelihood improves conditional language generation. In The Eleventh International Conference on Learning Representations, 2022.
Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D Manning, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. Neural Information Processing Systems (NeurIPS), 2023.
Sidharth Mudgal, Jong Lee, Harish Ganapathy, YaGuang Li, Tao Wang, Yanping Huang, Zhifeng Chen, Heng-Tze Cheng, Michael Collins, Trevor Strohman, Jilin Chen, Alex Beutel, and Ahmad Beirami. Controlled decoding from language models. arXiv preprint arXiv:2310.17022, 2023.
Kevin Yang and Dan Klein. FUDGE: Controlled text generation with future discriminators. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 3511–3535, Online, June 2021. Association for Computational Linguistics.
Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. WebGPT: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332, 2021.
Leo Gao, John Schulman, and Jacob Hilton. Scaling laws for reward model overoptimization. In International Conference on Machine Learning, pp. 10835–10866. PMLR, 2023.
Jacob Eisenstein, Chirag Nagpal, Alekh Agarwal, Ahmad Beirami, Alex D’Amour, DJ Dvijotham, Adam Fisch, Katherine Heller, Stephen Pfohl, Deepak Ramachandran, Peter Shaw, and Jonathan Berant. Helping or herding? reward model ensembles mitigate but do not eliminate reward hacking. arXiv preprint arXiv:2312.09244, 2023.
Salman Salamatian, Litian Liu, Ahmad Beirami, and Muriel Médard. Mismatched guesswork. arXiv preprint arXiv:1907.00531, 2019.
Ahmad Beirami, Alekh Agarwal, Jonathan Berant, Alexander D’Amour, Jacob Eisenstein, Chirag Nagpal, and Ananda Theertha Suresh. Theoretical guarantees on the best-of-n alignment policy. arXiv preprint arXiv:2401.01879, 2024.
Tomasz Korbak, Hady Elsahar, Germán Kruszewski, and Marc Dymetman. On reinforcement learning and distribution matching for fine-tuning language models with no catastrophic forgetting. Advances in Neural Information Processing Systems, 35: 16203–16220, 2022.
Tomasz Korbak, Ethan Perez, and Christopher Buckley. RL with KL penalties is better viewed as Bayesian inference. In Findings of the Association for Computational Linguistics: EMNLP 2022, pp. 1083–1091, 2022.
Tian Li, Ahmad Beirami, Maziar Sanjabi, and Virginia Smith. On tilted losses in machine learning: Theory and applications. Journal of Machine Learning Research, 24 (142): 1–79, 2023.
A. Dembo and O. Zeitouni. Large Deviations Techniques and Applications. Springer, 1998.
Ahmad Beirami, Robert Calderbank, Mark M Christiansen, Ken R Duffy, and Muriel Médard. A characterization of guesswork on swiftly tilting curves. IEEE Transactions on Information Theory, 65 (5): 2850–2871, 2018.
Stephen Boyd and Lieven Vandenberghe. Convex optimization. Cambridge University Press, 2004.

  1. All logarithms in this paper are in base \(e.\)↩︎

  2. We define \([N] := \{1, \ldots, N\}\).↩︎

  3. This specific family of distribution also arises in characterization of other elements in information theory, e.g. in mismatched guesswork and mismatched one-to-one source coding [18], and is referred there as the mismatched tilted family of distributions.↩︎

  4. The definition is extended via continuous extension at \(t = 1.\)↩︎

  5. For \(\rho = 0,\) the definition is extended via continuous extension.↩︎

  6. The definition is continuously extended at \(\rho = 0\).↩︎