First Hallucination Tokens Are Different from Conditional Ones


Abstract

Large Language Models (LLMs) hallucinate, and detecting these cases is key to ensuring trust. While many approaches address hallucination detection at the response or span level, recent work explores token-level detection, enabling more fine-grained intervention. However, the distribution of hallucination signal across sequences of hallucinated tokens remains unexplored. We leverage token-level annotations from the RAGTruth corpus and find that the first hallucinated token is far more detectable than later ones. This structural property holds across models, suggesting that first hallucination tokens play a key role in token-level hallucination detection. Our code is available at github.com/jakobsnl/RAGTruth_Xtended.

1 Introduction↩︎

Foundation models are transforming scientific research and society [1], [2]. However, their increasing capabilities raise critical questions about their responsible application, especially in terms of reliability and the potential to generate untruthful content [2]–[5]. The hallucination phenomenon, where LLMs produce non-factual or contradictory content, poses a key challenge for building trustworthy AI systems [2], [6]. Such errors can mislead users and undermine trust in critical applications [5], [7]. Although there are initiatives to alleviate hallucinations, LLMs are still fundamentally trained to approximate patterns in their training data, which makes hallucinations an inherent risk [8]. As a consequence, the need to detect hallucinated outputs is evident.
Detection methods vary in the granularity at which hallucinations are identified. While prior work has advanced response-level and span-level detection [9]–[11], most methods are not designed to operate at the token level. Yet token-level detection is increasingly important for enabling real-time filtering, targeted correction, and improved interpretability [11]. This shift is reflected in recent contributions [4], [12]. However, a detailed understanding of how hallucination signals vary across tokens within a hallucinated span is lacking. The recently published large-scale corpus RAGTruth provides novel token-level hallucination annotations that enable this investigation [13].


Figure 1: First Hallucination Tokens Are Different: We visualise three tokenised model responses from RAGTruth, overlaid with normalised logit entropy magnitudes. Tokens that are annotated as hallucination are highlighted with red outlines. The first hallucinated token exhibits higher entropy characteristics compared to conditional hallucinated tokens. This pattern holds consistently across different models, hallucination positions, and contexts. [model: llama-2-13b-chat, id: 214, 64, 730].

Despite recent progress, current methods often overlook how the token-level hallucination signal evolves within a span. We argue that a token's hallucination signal depends on its position within the hallucinated sequence of tokens, the hallucinated span. To investigate this, we hypothesise that the first token carries a stronger hallucination signal and achieves higher detection accuracy than subsequent, conditionally generated ones. We validate this hypothesis through a position-aware analysis using RAGTruth's token-level annotations. While various established hallucination signals exist, such as intrinsic uncertainty [2], [14], internal representations [10], [15]–[18], and external models as judges [7], [8], [17], [19], we demonstrate that our hypothesis already holds for lightweight logit-based signals. To this end, we augment RAGTruth with reproduced output logits. Our findings consistently support our hypothesis across models and contexts. This reveals a structural property of hallucination that improves understanding of token-level signals and supports more interpretable, fine-grained, and potentially real-time detection methods.

2 Methodology↩︎

This section outlines our approach for investigating whether hallucination detection signals vary systematically with token position in a hallucinated span.

2.0.0.1 Terminology.

We define a hallucinated span as a contiguous sequence of tokens in a model-generated response that is annotated as hallucinated. A token’s in-span index refers to its position within such a span, while the span index refers to the order of the hallucinated span within the response. Lastly, we refer to all subsequent tokens within the same hallucinated span as conditional tokens, reflecting their generation conditioned on the preceding hallucinated content.

2.0.0.2 Hypothesis.

We hypothesise that the first token carries a stronger hallucination signal. To test this, we analyse the detectability and separability of different token-level logit-derived signals.

Our methodology comprises three components: (1) enriching the RAGTruth dataset with model-generated logits for each response token; (2) categorising tokens by their position within hallucinated spans and across hallucination contexts; and (3) computing detectability and separability metrics for a range of logit-derived signals. The following subsections provide a detailed description of each component.

2.1 RAGTruth Dataset↩︎

Our dataset is a modified version of the RAGTruth corpus [13]. RAGTruth provides large-scale token-level annotations of responses from a diverse range of state-of-the-art large language models (LLMs). For this work, we extract the token-level annotations and complement them with response token logits for all responses across the dataset.
As this reproduction step requires model access, we restrict our work to the publicly available Mistral-7b-Instruct [20], Llama-2-7B-chat, Llama-2-13B-chat, and Llama-2-70B-chat [21].
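
As a minimal sketch of this logit reproduction step (assuming access to the RAGTruth prompt and response strings; the exact prompt template and our released code may differ), one could re-score each response with the generating model via teacher forcing:

```python
# Minimal sketch (not the released pipeline): re-score a RAGTruth response with the
# generating model to recover one vocab-sized logit vector per response token.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-2-13b-chat-hf"  # one of the four publicly available models
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.float16, device_map="auto")
model.eval()

def response_logits(prompt: str, response: str):
    """Return (response token ids, logit vector at every response-token position)."""
    prompt_len = tok(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tok(prompt + response, return_tensors="pt").input_ids.to(model.device)
    with torch.no_grad():
        out = model(full_ids)
    # Logits at position i predict token i+1, hence the shift by one.
    logits = out.logits[0, prompt_len - 1 : -1, :]
    token_ids = full_ids[0, prompt_len:]
    return token_ids.cpu(), logits.float().cpu()
```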

2.2 Hallucination Token Positions↩︎

To evaluate how hallucination signals vary with token position, we categorise tokens to enable both detectability and separability analysis. This includes a basic split between non-hallucinated tokens (\(\mathcal{T}_{\text{non}}\)) and hallucinated tokens (\(\mathcal{T}\)), as well as more granular groupings that capture positional attributes within hallucinated spans.

2.2.1 In-Span Index↩︎

We group hallucinated tokens \(\mathcal{T}\) by their positional index within a hallucination span. Let \(N\) be the maximum hallucination span length in the dataset. Then for each index \(k\) in a hallucinated span:

\[\mathcal{T}_{k} = \left\{ t_i \mid t_i \text{ is the } k\text{th token of a hallucinated span} \right\}, \quad k = 0, 1, \dots, N-1\]

where each \(\mathcal{T}_{k}\) corresponds to tokens at a specific positional index within hallucinated spans. To uncover inter-group differences, we analyse each set independently.
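
A minimal sketch of this grouping, assuming each hallucinated span is given by its first and last response-token index (the names are illustrative, not part of the released code):

```python
# Sketch: collect hallucinated tokens by their in-span index k.
from collections import defaultdict

def in_span_groups(spans):
    """spans: list of (start, end) response-token indices of hallucinated spans, end inclusive.
    Returns {k: [token positions that are the k-th token of some hallucinated span]}."""
    groups = defaultdict(list)
    for start, end in spans:
        for k, pos in enumerate(range(start, end + 1)):
            groups[k].append(pos)
    return groups

# Two spans of lengths 3 and 2: index 0 collects both span-initial tokens.
print(in_span_groups([(5, 7), (12, 13)]))  # {0: [5, 12], 1: [6, 13], 2: [7]}
```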

2.2.2 Span Index↩︎

In addition to analysing token position within hallucination spans, we examine whether detectability and separability patterns hold across different hallucination spans within the same response. Specifically, we test whether hallucination signals are consistent across span positions in the response, for example, whether a span is the first hallucinated span in the response or a later one.
To test this, we differentiate hallucinated tokens by their span index, grouping them according to the response-wide index of the hallucination span they belong to. Let a response contain \(M\) hallucination spans, denoted \(S_1, S_2, \dots, S_M\), where each span \(S_j\) consists of a sequence of hallucinated tokens. We define the set of tokens at in-span index \(k\) within the \(j\)-th hallucination span as:

\[\mathcal{T}_{k}^{(j)} = \left\{ t_i \mid t_i \text{ is the } k\text{th token of span } S_j \right\}, \quad j = 1, \dots, M\]

However, the distribution of sample sizes across hallucination span indices is not balanced, with later spans containing fewer tokens. To mitigate this imbalance, we introduce a binned grouping strategy: tokens from the third and all later spans (\(j \geq 3\)) are aggregated into \(\mathcal{T}^{\text{third+}}\), and \(\mathcal{T}^{\text{all}}\) denotes the union of all hallucinated spans. This categorisation enables us to compare positional signal strength across early and later hallucination occurrences while maintaining sufficient sample sizes1, as shown in Table 1.


Table 1: Model-wise distribution of hallucination span counts across response-wide span indices. \(\mathcal{T}^{\text{all}}\) is the count of all dataset-wide hallucination spans, while \(\mathcal{T}_{\text{no}}\) is the count of responses free of hallucination.
                                    llama-2-7b-chat   llama-2-13b-chat   llama-2-70b-chat   mistral-7b-instruct
\(\mathcal{T}^{\text{all}}\)              1832              1677               1395               1953
\(\mathcal{T}^{\text{first}}\)            1012               697                744               1026
\(\mathcal{T}^{\text{second}}\)            460               414                346                533
\(\mathcal{T}^{\text{third+}}\)            360               566                305                394
\(\mathcal{T}_{\text{no}}\)               1133              1288               1570               1012
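
A minimal sketch of this span-index binning (span boundaries as above; the grouping keys are ours for illustration):

```python
# Sketch: bin a response's hallucinated spans by their response-wide span index,
# aggregating the third and all later spans to keep sample sizes sufficient (Table 1).
def span_bins(spans):
    """spans: hallucinated spans of one response, ordered by appearance."""
    bins = {"first": [], "second": [], "third+": [], "all": list(spans)}
    for j, span in enumerate(spans, start=1):  # j = 1 for the first span, as in Section 2.2.2
        if j == 1:
            bins["first"].append(span)
        elif j == 2:
            bins["second"].append(span)
        else:
            bins["third+"].append(span)
    return bins
```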

2.3 Detectability↩︎

Following prior work, we frame token-level hallucination detection as a binary classification task: predicting whether a given token is hallucinated or not. We hypothesise that the detectability of hallucinated tokens varies systematically with their position in a span. To test this, we compare each positional subgroup in our categorisation against non-hallucinated tokens.

We quantify detectability using the area under the receiver operating characteristic curve (AUROC) computed over a set of scalar signals derived from non-hallucinated (\(y = 0\)) and hallucinated (\(y = 1\)) outputs. These include commonly used uncertainty measures such as entropy, perplexity, sampled probability, and logit [17], [22], [23], as well as auxiliary signals like logit vector mean, variance, and L2 distance. Signals are computed per token and grouped according to the positional categories defined in Section 2.2.
To capture both global and local trends, we compute AUROC scores across the entire dataset as well as per response (more details are provided in Appendix 5.1.1). This response-level perspective reflects a real-time usage setting, enabling us to assess the consistency of inter-positional patterns across independent model responses.
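
For illustration, the sketch below shows how these scalar signals could be read off a single reproduced logit vector; the exact definitions (e.g. per-token perplexity as the exponentiated negative log-probability of the emitted token, and the L2 signal as the norm of the logit vector) are one plausible reading, and the function name is ours:

```python
# Sketch: per-token scalar signals derived from one vocab-sized logit vector.
import torch
import torch.nn.functional as F

def token_signals(logits: torch.Tensor, token_id: int) -> dict:
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    return {
        "entropy": float(-(probs * log_probs).sum()),          # predictive entropy
        "sampled_prob": float(probs[token_id]),                 # probability of the emitted token
        "perplexity": float(torch.exp(-log_probs[token_id])),   # per-token perplexity
        "logit": float(logits[token_id]),                       # raw logit of the emitted token
        "logit_mean": float(logits.mean()),                     # auxiliary logit-vector statistics
        "logit_var": float(logits.var()),
        "logit_l2": float(logits.norm(p=2)),                    # our reading of the L2 signal
    }
```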

2.4 Separability↩︎

We verify that the observed positional patterns are specific to hallucinated tokens rather than generic artifacts of token position. To do this, we examine whether similar signal patterns also appear in non-hallucinated spans. As part of this verification, the separability analysis includes non-hallucinated and hallucinated tokens for comparison.

We extract two subsets from the non-hallucinated token set \(\mathcal{T}_{\text{non}}\): \(\mathcal{T}^{\text{no}}\), containing tokens from hallucination-free responses, and \(\mathcal{T}^{\text{pre}}\), containing pre-hallucination tokens2. For both subsets, we exclude the initial \(<\)start\(>\) token and the first generated token, since their logits differ from those of conditional tokens regardless of whether the token is hallucinated. This follows from the Min-K probability and entropy distributions of \(\mathcal{T}^{\text{pre}}\) and \(\mathcal{T}^{\text{no}}\) in Appendix 5.4. We bin the remaining tokens by position, following the in-span index logic from Section 2.2. This yields six token groups for qualitative comparison with hallucinated tokens.
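
A minimal sketch of this subsetting, assuming we know the token index at which the first hallucinated span starts (None for hallucination-free responses); the helper name and arguments are hypothetical:

```python
# Sketch: carve out the non-hallucinated control subsets for one response.
def control_subsets(num_tokens: int, first_hallucination_start):
    skip = 2  # drop the <start> token and the first generated token (see Appendix 5.4)
    if first_hallucination_start is None:
        return {"no": list(range(skip, num_tokens)), "pre": []}
    return {"no": [], "pre": list(range(skip, first_hallucination_start))}
```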

To measure distributional separability across these subsets, we adopt metrics from Membership Inference Attacks (MIA), which similarly rely on confidence-based features to distinguish in- from out-of-distribution behaviour. We use Min-K probability as our primary separability metric due to its proven efficacy in MIA contexts [23], [24], and additionally compute Min-K entropy to support our findings. This yields a family of scores that reflect how response token logits vary across categories and positions. See Appendix 5.1.2 for implementation details.

3 Results↩︎

Our experimental setup tests whether the detectability and separability of hallucinated tokens vary with their position within hallucinated spans. Specifically, we investigate two questions: (1) How do the detectability and separability of hallucinated tokens vary depending on their position within a hallucinated span? (2) Which logit-derived signals most reliably detect hallucinated tokens and separate them from truthful tokens?
Although our analysis covers all in-span token positions, we concentrate on the first nine token indices, as the median hallucination span length ranges between six and eight tokens, depending on the model.

3.1 First Hallucination Tokens Are More Detectable than Conditional Ones — Globally...↩︎

Hallucination tokens at in-span index 0 appear to be more distinguishable than conditional tokens (see Figure 2). In our simplified detection setup, the AUROC for conditional hallucination tokens is only slightly above 0.5, regardless of the signal. In contrast, the first hallucination token appears to be strongly distinguishable, as indicated by entropy and perplexity, which yield AUROC scores close to 0.8 across all models (see Appendix 5.2).

Figure 2: First Hallucination Tokens Are More Detectable: We show AUROC scores per signal and in-span hallucination token index across all hallucination spans. We report both global and averaged response-level scores. For the latter, we add error bars to account for the score distribution across different responses. Per analysis level and model, we invert AUROC scores that are, averaged over all indices, below 0.5 on \(\mathcal{T}^{\text{all}}\). [llama-2-13b-chat; all]


Figure 3: First Hallucination Tokens Exhibit Greater Separability: Min-10 probability distribution across different token categories and indices. Grey magnitudes are normalised across the entire category, while the numerical scores are not. Separability patterns are consistent across all percentile thresholds from 10 to 100 for token rankings (see Appendix 5.4). As the contrast is greatest at the 10th percentile threshold, we choose it for visualisation. [llama-2-13b-chat; all].

3.2 ...and Locally↩︎

At the response level, AUROC trends mirror the global findings. However, the error bars in Figure 2 reveal high variability in first-token detectability. It becomes evident that, at least for the raw logit entropy and perplexity signals, first-token detectability is not stable but varies considerably across responses.

3.3 First Hallucination Tokens Exhibit Greater Separability Than Conditional Ones↩︎

Min-K probabilities further support the enhanced detection scores for first hallucination tokens, which consistently exhibit lower scores than conditional ones (see Figure 3). This pattern is consistent across models and percentiles (see Appendix 5.3).

3.4 Entropy Most Effectively Identifies First Hallucinated Tokens↩︎

Among all logit-derived hallucination signals tested, entropy yields the most pronounced separation between first and conditional hallucination tokens. This is reinforced by the larger score gap for Min-K entropy than for Min-K probability (see Appendix 5.3). As a consequence, we conclude that logit entropy is the signal from our set that best reflects whether the first token is hallucinated.

3.5 Individual Logit-Derived Signals Are Not Robust Across Token Indices↩︎

While our analysis reveals clear positional trends in hallucination detectability, it also highlights a key limitation: none of the evaluated logit-derived signals consistently detects hallucinated tokens across all in-span indices. Some, such as logit entropy, perform well for early tokens but degrade at later positions, while others show inconsistent or noisy behaviour. This suggests that no single logit-derived feature is sufficient for robust, position-invariant hallucination detection.

4 Conclusion↩︎

Our qualitative detectability and separability analysis reveals that first hallucination tokens are systematically more distinguishable than subsequent, conditionally generated ones. This pattern is evident across multiple logit-derived uncertainty signals, with logit entropy providing the clearest signal for first-token hallucination detection. However, no hallucination signal achieves robust performance across all in-span positions, highlighting the limitations of current logit-based methods. These findings motivate several directions for future work. First, we hypothesise that richer model internals, such as hidden states, may amplify the observed positional effects and improve detection reliability. Second, a complementary investigation into the last token in each span may reveal symmetric patterns and help characterise hallucination span boundaries with more precision. As our results also indicate that no single logit-derived metric consistently captures the hallucination signal across all token positions, they suggest the need for more robust or composite detection signals. We leave these extensions for future work.

4.1 Limitations↩︎

First, our approach assumes the accuracy of the hallucination span annotations provided in RAGTruth. Given the subjective nature of hallucination annotation, this could skew results either positively or negatively.

We provide a qualitative analysis based on a simplified detection setup, rather than a deployable classifier. Therefore, we leave for future work whether the observed patterns directly improve fine-grained hallucination token detection in practice.

Additionally, our analysis focuses specifically on intra-hallucination token patterns, particularly detectability and separability within hallucination spans. However, prior and subsequent tokens outside the hallucinated spans might also carry predictive signals for hallucination, as suggested in recent studies [13], [25], [26].

Lastly, we neglect the hallucination taxonomy introduced in RAGTruth. While the authors of [13] distinguish between Evident Conflict, Subtle Conflict, Evident Introduction of Baseless Information, and Subtle Introduction of Baseless Information across different task categories (QA, Data-to-Text, Summarisation), we treat all hallucinations uniformly. This decision promotes generality across hallucination types but leaves open whether the observed patterns are consistent across specific hallucination categories and tasks [3], [25].

Impact Statement↩︎

This work contributes to the responsible development of foundation models by advancing the diagnostic understanding of hallucinations at the token level. By uncovering structural patterns of hallucination signal in hallucinated spans, it provides insight into where and how hallucinations emerge, supporting the development of interpretable token-level detection methods. As this is a knowledge-oriented analysis rather than a deployed system, it poses minimal direct societal risk. Instead, it lays the groundwork for building more trustworthy and accountable language models by improving the understanding of their failure modes.

5 Appendix↩︎

5.1 Extended Methodology↩︎

5.1.1 Detectability↩︎

For each in-span index \(k \in \{0, 1, \dots, N\}\), feature \(f\), and span group \(g \in \{\text{first}, \text{second}, \text{third+}, \text{all}\}\), we define the global detectability as:

\[\text{AUROC}_f(k, g) = \text{AUROC}\left( \mathcal{F}^{\text{non}}, \mathcal{F}_{k}^{(g)} \right)\]

where \(\mathcal{F}^{\text{non}}\) and \(\mathcal{F}_{k}^{(g)}\) denote the feature sets for non-hallucinated and hallucinated tokens across each model's dataset.

For each response \(r\), we define the local detectability as:

\[\text{AUROC}_f^{(r)}(k, g) = \text{AUROC}\left( \mathcal{F}_{\text{non}}^{(r)}, \mathcal{F}_{k}^{(g, r)} \right)\]

where \(\mathcal{F}_{\text{non}}^{(r)}\) and \(\mathcal{F}_{k}^{(g, r)}\) denote the feature sets for non-hallucinated and hallucinated tokens within the same model response \(r\), respectively.
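
A minimal sketch of both AUROC variants, using scikit-learn (the array names are ours for illustration):

```python
# Sketch: global and response-level detectability for one signal f.
import numpy as np
from sklearn.metrics import roc_auc_score

def auroc(non_hallu: np.ndarray, hallu_k: np.ndarray) -> float:
    """AUROC of separating hallucinated (label 1) from non-hallucinated (label 0) tokens."""
    scores = np.concatenate([non_hallu, hallu_k])
    labels = np.concatenate([np.zeros(len(non_hallu)), np.ones(len(hallu_k))])
    return roc_auc_score(labels, scores)

# Global detectability: pool the feature sets over the whole dataset and call auroc() once.
# Local detectability: call auroc() once per response and aggregate; the spread across
# responses yields the error bars shown in Figure 2.
```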

5.1.2 Separability↩︎

We apply our qualitative separability analysis across six distinct token groups:

\[\mathcal{T}_k^{\text{all}}, \mathcal{T}_k^{\text{first}}, \mathcal{T}_k^{\text{second}}, \mathcal{T}_k^{\text{third+}}, \mathcal{T}_k^{\text{pre}}, \mathcal{T}_k^{\text{no}}\]

where \(k\) denotes the in-span token index for hallucinated tokens and the in-response index for non-hallucinated tokens.

Min-K is defined as the value at the \(K\)-th percentile of the metric scores of a given token group [24]. Let \(f(t)\) be a scalar metric computed per token \(t\), and let \(r \in \{10, 20, \dots, 100\}\) be the percentile threshold. For each of the six groups above, and for a fixed in-span position \(k\), we compute:

\[\text{MIN-K}_r\left(\mathcal{T}_{k}^{g}\right) = \text{K}_{r\%}\left( \{ f(t_i) \mid t_i \in \mathcal{T}_{k}^{g} \} \right)\]

where \(\text{K}_{r\%}\) denotes the \(r\)-th percentile of the sorted metric values.
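
As a minimal sketch (assuming percentile semantics for \(\text{K}_{r\%}\), consistent with the formula above; the function name is ours):

```python
# Sketch: Min-K separability score for one token group at in-span index k.
import numpy as np

def min_k(values, r):
    """values: metric scores f(t) for the tokens of one group at index k;
    r: percentile threshold in {10, 20, ..., 100}."""
    return float(np.percentile(values, r))

# Example: the Min-10 probability of a group is the 10th percentile of its
# per-token sampled probabilities (cf. Figure 3).
```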

5.2 Detectability↩︎

Figure 4: [all] AUROC per signal and in-span hallucination token indices from all hallucination spans at both global and response level. a — LLaMA-2-7B-chat, b — LLaMA-2-13B-chat, c — LLaMA-2-70B-chat, d — Mistral-7B-instruct

Figure 5: [first] AUROC per signal and in-span hallucination token indices from first hallucination spans at both global and response level. a — LLaMA-2-7B-chat, b — LLaMA-2-13B-chat, c — LLaMA-2-70B-chat, d — Mistral-7B-instruct

Figure 6: [second] AUROC per signal and in-span hallucination token indices from second hallucination spans at both global and response level. a — LLaMA-2-7B-chat, b — LLaMA-2-13B-chat, c — LLaMA-2-70B-chat, d — Mistral-7B-instruct

Figure 7: [third+] AUROC per signal and in-span hallucination token indices from third+ hallucination spans at both global and response level. a — LLaMA-2-7B-chat, b — LLaMA-2-13B-chat, c — LLaMA-2-70B-chat, d — Mistral-7B-instruct

5.3 Separability↩︎

5.3.1 Min-K Probability↩︎

Figure 8: [10th percentile] Min-K Probability scores per token category and index, over the first 9 tokens at global level. a — LLaMA-2-7B-chat, b — LLaMA-2-13B-chat, c — LLaMA-2-70B-chat, d — Mistral-7B-instruct

Figure 9: [20th percentile] Min-K Probability scores per token category and index, over the first 9 tokens at global level. a — LLaMA-2-7B-chat, b — LLaMA-2-13B-chat, c — LLaMA-2-70B-chat, d — Mistral-7B-instruct

Figure 10: [30th percentile] Min-K Probability scores per token category and index, over the first 9 tokens at global level. a — LLaMA-2-7B-chat, b — LLaMA-2-13B-chat, c — LLaMA-2-70B-chat, d — Mistral-7B-instruct

Figure 11: [40th percentile] Min-K Probability scores per token category and index, over the first 9 tokens at global level. a — LLaMA-2-7B-chat, b — LLaMA-2-13B-chat, c — LLaMA-2-70B-chat, d — Mistral-7B-instruct

Figure 12: [50th percentile] Min-K Probability scores per token category and index, over the first 9 tokens at global level. a — LLaMA-2-7B-chat, b — LLaMA-2-13B-chat, c — LLaMA-2-70B-chat, d — Mistral-7B-instruct

Figure 13: [60th percentile] Min-K Probability scores per token category and index, over the first 9 tokens at global level. a — LLaMA-2-7B-chat, b — LLaMA-2-13B-chat, c — LLaMA-2-70B-chat, d — Mistral-7B-instruct

Figure 14: [70th percentile] Min-K Probability scores per token category and index, over the first 9 tokens at global level. a — LLaMA-2-7B-chat, b — LLaMA-2-13B-chat, c — LLaMA-2-70B-chat, d — Mistral-7B-instruct

Figure 15: [80th percentile] Min-K Probability scores per token category and index, over the first 9 tokens at global level. a — LLaMA-2-7B-chat, b — LLaMA-2-13B-chat, c — LLaMA-2-70B-chat, d — Mistral-7B-instruct

Figure 16: [90th percentile] Min-K Probability scores per token category and index, over the first 9 tokens at global level. a — LLaMA-2-7B-chat, b — LLaMA-2-13B-chat, c — LLaMA-2-70B-chat, d — Mistral-7B-instruct

Figure 17: [100th percentile] Min-K Probability scores per token category and index, over the first 9 tokens at global level. a — LLaMA-2-7B-chat, b — LLaMA-2-13B-chat, c — LLaMA-2-70B-chat, d — Mistral-7B-instruct

5.3.2 Min-K Entropy↩︎

Figure 18: [10th percentile] Min-K Entropy scores per token category and index, over the first 9 tokens at global level. a — LLaMA-2-7B-chat, b — LLaMA-2-13B-chat, c — LLaMA-2-70B-chat, d — Mistral-7B-instruct

Figure 19: [20th percentile] Min-K Entropy scores per token category and index, over the first 9 tokens at global level. a — LLaMA-2-7B-chat, b — LLaMA-2-13B-chat, c — LLaMA-2-70B-chat, d — Mistral-7B-instruct

Figure 20: [30th percentile] Min-K Entropy scores per token category and index, over the first 9 tokens at global level. a — LLaMA-2-7B-chat, b — LLaMA-2-13B-chat, c — LLaMA-2-70B-chat, d — Mistral-7B-instruct

Figure 21: [40th percentile] Min-K Entropy scores per token category and index, over the first 9 tokens at global level. a — LLaMA-2-7B-chat, b — LLaMA-2-13B-chat, c — LLaMA-2-70B-chat, d — Mistral-7B-instruct

Figure 22: [50th percentile] Min-K Entropy scores per token category and index, over the first 9 tokens at global level. a — LLaMA-2-7B-chat, b — LLaMA-2-13B-chat, c — LLaMA-2-70B-chat, d — Mistral-7B-instruct

Figure 23: [60th percentile] Min-K Entropy scores per token category and index, over the first 9 tokens at global level. a — LLaMA-2-7B-chat, b — LLaMA-2-13B-chat, c — LLaMA-2-70B-chat, d — Mistral-7B-instruct

Figure 24: [70th percentile] Min-K Entropy scores per token category and index, over the first 9 tokens at global level. a — LLaMA-2-7B-chat, b — LLaMA-2-13B-chat, c — LLaMA-2-70B-chat, d — Mistral-7B-instruct

Figure 25: [80th percentile] Min-K Entropy scores per token category and index, over the first 9 tokens at global level. a — LLaMA-2-7B-chat, b — LLaMA-2-13B-chat, c — LLaMA-2-70B-chat, d — Mistral-7B-instruct

Figure 26: [90th percentile] Min-K Entropy scores per token category and index, over the first 9 tokens at global level. a — LLaMA-2-7B-chat, b — LLaMA-2-13B-chat, c — LLaMA-2-70B-chat, d — Mistral-7B-instruct

Figure 27: [100th percentile] Min-K Entropy scores per token category and index, over the first 9 tokens at global level. a — LLaMA-2-7B-chat, b — LLaMA-2-13B-chat, c — LLaMA-2-70B-chat, d — Mistral-7B-instruct

5.4 Min-K Across All Percentiles↩︎

5.4.1 Min-K Probability↩︎

Figure 28: [all] Min-K Probability scores across all percentiles over the first 9 tokens from all hallucination spans at global level. a — LLaMA-2-7B-chat, b — LLaMA-2-13B-chat, c — LLaMA-2-70B-chat, d — Mistral-7B-instruct

Figure 29: [first] Min-K Probability scores across all percentiles over the first 9 tokens from first hallucination spans at global level. a — LLaMA-2-7B-chat, b — LLaMA-2-13B-chat, c — LLaMA-2-70B-chat, d — Mistral-7B-instruct

Figure 30: [second] Min-K Probability scores across all percentiles over the first 9 tokens from second hallucination spans at global level. a — LLaMA-2-7B-chat, b — LLaMA-2-13B-chat, c — LLaMA-2-70B-chat, d — Mistral-7B-instruct

Figure 31: [third+] Min-K Probability scores across all percentiles over the first 9 tokens from third+ hallucination spans at global level. a — LLaMA-2-7B-chat, b — LLaMA-2-13B-chat, c — LLaMA-2-70B-chat, d — Mistral-7B-instruct

Figure 32: [pre] Min-K Probability scores across all percentiles over the first 9 pre-hallucination tokens at global level. a — LLaMA-2-7B-chat, b — LLaMA-2-13B-chat, c — LLaMA-2-70B-chat, d — Mistral-7B-instruct

Figure 33: [no] Min-K Probability scores across all percentiles over the first 9 tokens from responses without hallucination at global level. a — LLaMA-2-7B-chat, b — LLaMA-2-13B-chat, c — LLaMA-2-70B-chat, d — Mistral-7B-instruct

5.4.2 Min-K Entropy↩︎

Figure 34: [all] Min-K Entropy scores across all percentiles over the first 9 tokens from all hallucination spans at global level. a — LLaMA-2-7B-chat, b — LLaMA-2-13B-chat, c — LLaMA-2-70B-chat, d — Mistral-7B-instruct

Figure 35: [first] Min-K Entropy scores across all percentiles over the first 9 tokens from first hallucination spans at global level. a — LLaMA-2-7B-chat, b — LLaMA-2-13B-chat, c — LLaMA-2-70B-chat, d — Mistral-7B-instruct

Figure 36: [second] Min-K Entropy scores across all percentiles over the first 9 tokens from second hallucination spans at global level. a — LLaMA-2-7B-chat, b — LLaMA-2-13B-chat, c — LLaMA-2-70B-chat, d — Mistral-7B-instruct

Figure 37: [third+] Min-K Entropy scores across all percentiles over the first 9 tokens from third+ hallucination spans at global level. a — LLaMA-2-7B-chat, b — LLaMA-2-13B-chat, c — LLaMA-2-70B-chat, d — Mistral-7B-instruct

Figure 38: [pre] Min-K Entropy scores across all percentiles over the first 9 pre-hallucination tokens at global level. a — LLaMA-2-7B-chat, b — LLaMA-2-13B-chat, c — LLaMA-2-70B-chat, d — Mistral-7B-instruct

Figure 39: [no] Min-K Entropy scores across all percentiles over the first 9 tokens from responses without hallucination at global level. a — LLaMA-2-7B-chat, b — LLaMA-2-13B-chat, c — LLaMA-2-70B-chat, d — Mistral-7B-instruct

References↩︎

[1]
Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., and Amodei, D. Language models are few-shot learners. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.-F., and Lin, H.-T. (eds.), Advances in Neural Information Processing Systems 33 (NeurIPS 2020), pp. 1877–1901, 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf.
[2]
Hou, B., Zhang, Y., Andreas, J., and Chang, S. A probabilistic framework for LLM hallucination detection via belief tree propagation. In Chiruzzo, L., Ritter, A., and Wang, L. (eds.), Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pp. 3076–3099, Albuquerque, New Mexico, April 2025. Association for Computational Linguistics. ISBN 979-8-89176-189-6. URL https://aclanthology.org/2025.naacl-long.158/.
[3]
Ravichander, A., Ghela, S., Wadden, D., and Choi, Y. The HALogen benchmark: Fantastic LLM hallucinations and where to find them, 2024. URL https://openreview.net/forum?id=pQ9QDzckB7.
[4]
Liu, T., Zhang, Y., Brockett, C., Mao, Y., Sui, Z., Chen, W., and Dolan, B. A token-level reference-free hallucination detection benchmark for free-form text generation. In Muresan, S., Nakov, P., and Villavicencio, A. (eds.), Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 6723–6737, Dublin, Ireland, May 2022. Association for Computational Linguistics. URL https://aclanthology.org/2022.acl-long.464/.
[5]
Rawte, V., Sheth, A., and Das, A. A survey of hallucination in large foundation models, 2023. URL https://arxiv.org/abs/2309.05922.
[6]
Kaddour, J., Harris, J., Mozes, M., Bradley, H., Raileanu, R., and McHardy, R. Challenges and applications of large language models. ArXiv, abs/2307.10169, 2023. URL https://api.semanticscholar.org/CorpusID:259982665.
[7]
Chen, Y., Fu, Q., Yuan, Y., Wen, Z., Fan, G., Liu, D., Zhang, D., Li, Z., and Xiao, Y. Hallucination detection: Robustly discerning reliable answers in large language models, 2024. URL https://arxiv.org/abs/2407.04121.
[8]
Santilli, A., Xiong, M., Kirchhof, M., Rodriguez, P., Danieli, F., Suau, X., Zappella, L., Williamson, S., and Golinski, A. On the protocol for evaluating uncertainty in generative question-answering tasks. In Neurips Safe Generative AI Workshop 2024, 2024. URL https://openreview.net/forum?id=jGtL0JFdeD.
[9]
Farquhar, S., Kossen, J., Kuhn, L., and Gal, Y. Detecting hallucinations in large language models using semantic entropy, 06 2024. URL https://doi.org/10.1038/s41586-024-07421-0.
[10]
Vazhentsev, A., Rvanova, L., Lazichny, I., Panchenko, A., Panov, M., Baldwin, T., and Shelmanov, A. Token-level density-based uncertainty quantification methods for eliciting truthfulness of large language models. In Chiruzzo, L., Ritter, A., and Wang, L. (eds.), Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pp. 2246–2262, Albuquerque, New Mexico, April 2025. Association for Computational Linguistics. ISBN 979-8-89176-189-6. URL https://aclanthology.org/2025.naacl-long.113/.
[11]
Quevedo, E., Yero, J., Koerner, R., Rivas, P., and Cerny, T. Detecting hallucinations in large language model generation: A token probability approach, 2024. URL https://arxiv.org/abs/2405.19648.
[12]
Rebuffel, C., Roberti, M., Soulier, L., Scoutheeten, G., Cancelliere, R., and Gallinari, P. Controlling hallucinations at word level in data-to-text generation, 10 2022. URL https://doi.org/10.1007/s10618-021-00801-4.
[13]
Niu, C., Wu, Y., Zhu, J., Xu, S., Shum, K., Zhong, R., Song, J., and Zhang, T. RAGTruth: A hallucination corpus for developing trustworthy retrieval-augmented language models. In Ku, L.-W., Martins, A., and Srikumar, V. (eds.), Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 10862–10878, Bangkok, Thailand, August 2024. Association for Computational Linguistics. URL https://aclanthology.org/2024.acl-long.585/.
[14]
Agrawal, A., Suzgun, M., Mackey, L., and Kalai, A. Do language models know when they‘re hallucinating references? In Graham, Y. and Purver, M. (eds.), Findings of the Association for Computational Linguistics: EACL 2024, pp. 912–928, St. Julians, Malta, March 2024. Association for Computational Linguistics. URL https://aclanthology.org/2024.findings-eacl.62/.
[15]
Chen, C., Liu, K., Chen, Z., Gu, Y., Wu, Y., Tao, M., Fu, Z., and Ye, J. INSIDE: LLMs' internal states retain the power of hallucination detection. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=Zj12nzlQbz.
[16]
Jiang, C., Qi, B., Hong, X., Fu, D., Cheng, Y., Meng, F., Yu, M., Zhou, B., and Zhou, J. On large language models' hallucination with regard to known facts. In Duh, K., Gomez, H., and Bethard, S. (eds.), Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pp. 1041–1053, Mexico City, Mexico, June 2024. Association for Computational Linguistics. URL https://aclanthology.org/2024.naacl-long.60/.
[17]
Snyder, B., Moisescu, M., and Zafar, M. B. On early detection of hallucinations in factual question answering. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD '24, pp. 2721–2732, New York, NY, USA, 2024. Association for Computing Machinery. ISBN 9798400704901. URL https://doi.org/10.1145/3637528.3671796.
[18]
Ma, H., Chen, J., Wang, G., and Zhang, C. Estimating LLM uncertainty with logits, 01 2025. URL https://arxiv.org/html/2502.00290v1.
[19]
Chang, T. A., Tomanek, K., Hoffmann, J., Thain, N., MacMurray van Liemt, E., Meier-Hellstern, K., and Dixon, L. Detecting hallucination and coverage errors in retrieval augmented generation for controversial topics. In Calzolari, N., Kan, M.-Y., Hoste, V., Lenci, A., Sakti, S., and Xue, N. (eds.), Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pp. 4729–4743, Torino, Italia, May 2024. ELRA and ICCL. URL https://aclanthology.org/2024.lrec-main.423/.
[20]
Jiang, A. Q., Sablayrolles, A., Mensch, A., Bamford, C., Chaplot, D. S., de las Casas, D., Bressand, F., Lengyel, G., Lample, G., Saulnier, L., Lavaud, L. R., Lachaux, M.-A., Stock, P., Scao, T. L., Lavril, T., Wang, T., Lacroix, T., and Sayed, W. E. Mistral 7b, 2023. URL https://arxiv.org/abs/2310.06825.
[21]
Zhao, W. X., Zhou, K., Li, J., Tang, T., Wang, X., Hou, Y., Min, Y., Zhang, B., Zhang, J., Dong, Z., Du, Y., Yang, C., Chen, Y., Chen, Z., Jiang, J., Ren, R., Li, Y., Tang, X., Liu, Z., Liu, P., Nie, J.-Y., and Wen, J.-R. A survey of large language models, 2025. URL https://arxiv.org/abs/2303.18223.
[22]
Kadavath, S., Conerly, T., Askell, A., Henighan, T., Drain, D., Perez, E., Schiefer, N., Dodds, Z., DasSarma, N., Tran-Johnson, E., Johnston, S., El-Showk, S., Jones, A., Elhage, N., Hume, T., Chen, A., Bai, Y., Bowman, S., Fort, S., and Kaplan, J. Language models (mostly) know what they know, 07 2022. URL https://arxiv.org/pdf/2207.05221.
[23]
Puerto, H., Gubri, M., Yun, S., and Oh, S. J. Scaling up membership inference: When and how attacks succeed on large language models. In Chiruzzo, L., Ritter, A., and Wang, L. (eds.), Findings of the Association for Computational Linguistics: NAACL 2025, pp. 4165–4182, Albuquerque, New Mexico, April 2025. Association for Computational Linguistics. ISBN 979-8-89176-195-7. URL https://aclanthology.org/2025.findings-naacl.234/.
[24]
Zhang, J., Sun, J., Yeats, E., Ouyang, Y., Kuo, M., Zhang, J., Yang, H. F., and Li, H. Min-k%++: Improved baseline for pre-training data detection from large language models. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=ZGkfoufDaU.
[25]
Li, J., Chen, J., Ren, R., Cheng, X., Zhao, X., Nie, J.-Y., and Wen, J.-R. The dawn after the dark: An empirical study on factuality hallucination in large language models, August 2024. URL https://aclanthology.org/2024.acl-long.586/.
[26]
Mishra, A., Asai, A., Balachandran, V., Wang, Y., Neubig, G., Tsvetkov, Y., and Hajishirzi, H. Fine-grained hallucination detection and editing for language models. In First Conference on Language Modeling, 2024. URL https://openreview.net/forum?id=dJMTn3QOWO.

  1. As later hallucination spans are less frequent, we bin them.↩︎

  2. For simplicity, we assume that non-hallucinated tokens preceding the first hallucination span exhibit similar patterns to those following it.↩︎