SelfBudgeter: Adaptive Token Allocation
for Efficient LLM Reasoning
May 16, 2025
Recently, large reasoning models have demonstrated exceptional performance on various tasks. However, reasoning models inefficiently over-process both trivial and complex queries, leading to resource waste and prolonged user latency. To address this challenge, we propose SelfBudgeter, a self-adaptive, controllable reasoning strategy for efficient reasoning. Our approach adopts a dual-phase training paradigm: first, the model learns to pre-estimate the reasoning cost based on the difficulty of the query; then, we introduce budget-guided GRPO for reinforcement learning, which effectively maintains accuracy while reducing output length. SelfBudgeter allows users to anticipate generation time and make informed decisions about continuing or interrupting the process. Furthermore, our method enables direct manipulation of reasoning length by pre-filling the token budget. Experimental results demonstrate that SelfBudgeter can dynamically allocate budgets according to problem complexity, yielding an average response length compression of 61% for the 1.5B model on GSM8K, MATH500, and AIME2025, and 48% for the 7B model, while maintaining nearly undiminished accuracy.
Recent large reasoning models, such as O1 [1], have shown remarkable performance on various complex reasoning tasks [2], [3]. The primary success factor lies in the long chain-of-thought (CoT) process learned through reinforcement learning (RL), which allows the model to break down reasoning steps and scale test-time compute [4], [5].
However, reasoning models tend to use overly long thought processes even for simple questions. This “overthinking” phenomenon wastes computational resources and prolongs user waiting times [6], [7]. For example, when answering a simple question such as “What is the answer to 2+3?”, the QwQ-32B model provides 13 different solutions and generates 100 times more tokens than the Qwen2.5-72B-Instruct model [8].
Prior studies have explored various approaches to mitigate overthinking through response length control and computation routing. Existing methods mainly include: (1) Prompt-based approaches [9], [10] that implicitly guide length through instructions, (2) Integrated training strategies that teach models to adaptively determine reasoning steps via SFT [11], [12] or RL with length penalties [13], [14], and (3) Router-based [15], [16] architectures employing classifiers to allocate computation paths. While achieving partial progress, these methods either lack precise length control, require additional computational overhead, or fail to explicitly output optimal reasoning lengths [13], [17].
We propose SelfBudgeter, which enables reasoning models to (1) estimate the minimal token budget required for a correct response when users do not specify token constraints, and (2) generate responses of corresponding lengths while adhering to either self-estimated or user-defined token budgets. SelfBudgeter aims to mitigate the overthinking issue by predicting the minimal possible token budget, thereby significantly reducing user waiting time. As shown in Figure 1, SelfBudgeter provides a relatively accurate token budget estimate before generating the response, so users can anticipate the waiting time and decide whether to wait for the full output or terminate early based on their needs.
Additionally, when specific requirements arise, users can pre-fill the token budget field to constrain the model’s response within the given limit, thereby improving interaction efficiency.
Our training framework consists of two main stages. During the cold-start stage, we fine-tune the model to learn how to first output its estimated token budget within <budget> tags. Subsequently, in the RL training stage, we optimize SelfBudgeter using the GRPO algorithm. For this stage, we design a reward function that primarily focuses on three key aspects: (1) answer correctness, (2) the minimal achievable token budget, and (3) consistency between response length and the allocated token budget.
We conduct full-parameter training of DeepSeek-R1-Distill-Qwen-1.5B using SelfBudgeter and evaluate its performance on the GSM8K, MATH500, and AIME2025 datasets. Experimental results demonstrate that SelfBudgeter achieves an average response length compression of 61% with the 1.5B model, while maintaining nearly equivalent accuracy. Furthermore, on GSM8K and MATH500, SelfBudgeter simultaneously reduces response length while improving accuracy. SelfBudgeter also exhibits excellent capability in predicting output length and, when provided with pre-filled <budget> tags, consistently adheres to the specified token budget constraints. In addition, experiments on DeepSeek-R1-Distill-Qwen-7B show an average compression of 48%, further validating the scalability of SelfBudgeter to larger model sizes.
The emergence of reasoning models such as O1, DeepSeek-R1, and QwQ has advanced complex problem-solving through RL-enhanced CoT [1]–[3], [18]. However, researchers have observed a tendency for reasoning models to overthink simple problems, expending unnecessary computational effort on trivial queries [6], [7]. Excessively long CoT may even decrease accuracy [19]. Current solutions for overthinking mainly follow three strategies. Prompt-based methods try to control response length by adding instructions to prompts, but cannot control length accurately [9], [10], [20], [21]. Integrated training-based methods teach the model to decide the length according to problem difficulty: supervised fine-tuning (SFT)-based methods collect datasets with variable-length responses [11], [12], [22]–[26], while RL-based methods incorporate length penalties into the reward function [13], [14], [17], [27]–[30]; however, these methods fail to control length according to users’ requirements. Router-based methods train an additional model as a classifier that decides whether to route a query to a fast model or a reasoning model [15], [16], [31]–[33], but an extra classifier means additional computational resources. Overall, current methods either sacrifice precise control, require extra computation, or fail to bridge autonomous budget estimation with strict adherence.
In addressing the issue of overthinking, a highly intuitive approach is to directly constrain the output length. CCoT [21] attempts this by incorporating a word budget into the prompt, and various budgets, including character, token, and step budgets [9], have been incorporated directly into prompts, yet precise control over the model’s output behavior remains challenging. TALE [23] introduces, for the first time, the concept of a token budget. TOPS [26] attempts to enable the model to autonomously determine the effort required to solve a given task. However, both TALE and TOPS fail to explicitly guide the model to produce the optimal token budget, and they cannot effectively control the output length according to a given token budget. L1 [13] and Elastic Reasoning [17] can control the output length more precisely under a given token budget, yet they do not enable the model to autonomously estimate an appropriate response length. Our proposed method enables the model to autonomously estimate the optimal token budget and subsequently generate text in strict adherence to it.
To minimize the overthinking problem in LLMs, we propose SelfBudgeter for efficient reasoning. Our method aims to enable the model to autonomously determine an appropriate token budget and generate responses of corresponding length while adhering to this budget. Although reasoning models may occasionally overthink simple problems, their response lengths generally increase with problem difficulty. This phenomenon demonstrates that the model possesses the capability to allocate token quantities reasonably based on problem complexity. Previous works such as L1 [13] and Elastic Reasoning [17] have also demonstrated that models can generate responses of appropriate length according to a given token budget.
Therefore, we design SelfBudgeter, which employs a reward function to guide the model in: (1) learning an output format where it first predicts a token budget before generating the answer, (2) allocating appropriate token budgets based on its own capabilities and question difficulty, and (3) generating solutions with optimal length while ensuring answer accuracy.
SelfBudgeter is a concise and efficient method for automatic, precise length control. We design the Precise Budget Control Reward (PreB Reward) to achieve precise control over length; a detailed introduction to the PreB Reward can be found in Section 3.3. We employ the GRPO algorithm to train the model to predict appropriate token budgets based on problem difficulty and to generate responses whose lengths conform to the specified budget.
Our reward function is formally defined as Formula 1 :
\[\text{R}(C,F,\ell,b,b_\text{max}) = \begin{cases} r_f, & \text{if } F=0, \\ \text{P}_\text{B}(b,b_\text{max})+\text{PreB}(s_{\min}^W, s_{\max}^W, \ell, b,\alpha,b_{\text{best}}^W), & \text{if } F=1 \text{ and } C=0, \\ \text{P}_\text{B}(b,b_\text{max})+\text{PreB}(s_{\min}^C, s_{\max}^C, \ell, b,\alpha,b_{\text{best}}^C), & \text{if } F=1 \text{ and } C=1, \\ \end{cases} \label{formula:reward_function}\tag{1}\]
where \[b_{\text{best}}^C=(1-\alpha)\cdot b,\quad b_{\text{best}}^W=(1+\alpha)\cdot b. \label{formula:b_best}\tag{2}\]
The inputs and hyperparameters in the reward function are listed in Table 1. To ensure stable prediction of the token budget prior to response generation, any responses deviating from the prescribed format will be assigned the minimum reward score of \(r_f\).
Input | Description | HPs | Description |
---|---|---|---|
\(C\) | Correctness for answer | \(r_f\) | Penalty for format error |
\(F\) | Correctness for format | \(s_{\text{min}}^{W/C}\) | Minimum reward (wrong/correct) |
\(\ell\) | Response length | \(s_{\text{max}}^{W/C}\) | Maximum reward (wrong/correct) |
\(b\) | Model’s budget | \(\alpha\) | Tightness coefficient of budget |
\(b_\text{max}\) | Maximum acceptable budget | \(r_b\) | Penalty for excessive budget |
To enable the model to learn token budget allocation, we introduce a budget penalty module defined by Formula 3 . The model incurs a penalty \(r_b\) when its estimated token budget exceeds the maximum acceptable budget \(b_\text{max}\). No penalty is applied when the estimated token budget remains within \(b_\text{max}\). A detailed introduction of \(b_\text{max}\) is presented in Section 4.2. Briefly stated, for a given question, \(b_\text{max}\) equals the response length if the base model can answer it correctly; otherwise, \(b_\text{max}\) is set to \(\infty\). \[\text{P}_\text{B}(b, b_\text{max}) = \begin{cases} 0, & \text{if } b \leq b_\text{max}, \\ r_b, & \text{else.} \end{cases} \label{formula:budget_penalty}\tag{3}\]
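To make the budget penalty concrete, the following minimal Python sketch implements Formula 3; the function name and the default value of \(r_b\) (taken from Table 3) are illustrative rather than the authors' released code.

```python
def budget_penalty(b: float, b_max: float, r_b: float = -0.4) -> float:
    """Budget penalty P_B (Formula 3): penalize budgets exceeding b_max.

    b_max is the base model's response length when it answers the question
    correctly, and infinity otherwise (Formula 5), so no penalty can be
    triggered for questions the base model got wrong.
    """
    return 0.0 if b <= b_max else r_b
```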
Inspired by the cosine reward [29], we propose the Precise Budget Control Reward (PreB Reward). While the cosine reward helps mitigate overthinking tendencies, it lacks precise control over output length, as it only constrains the upper bound of the response. To address this limitation, we introduce a tightness coefficient \(\alpha\) to better align the response length with the specified token budget.
Given the inherent challenge for models to precisely comply with token budgets, we relax the length constraint to require only approximate adherence within \(\alpha \cdot b\) around the target budget \(b\). As shown in Formula 4 , when the model’s response length falls outside the specified range, the corresponding reward score plummets to its minimum value \(s_\text{min}\).
For incorrect responses, the function incentivizes longer reasoning chains (increasing length \(\ell\)) to encourage deeper analysis that might lead to correct conclusions. Conversely, for correct answers, the reward peaks at the minimally sufficient length \((1-\alpha)\cdot b\) to prevent unnecessary computational overhead while maintaining accuracy. This explains why in Formula 2 , the value of \(b_\text{best}\) differs between correct and incorrect responses. This dual mechanism promotes efficient reasoning by adaptively modulating response lengths based on answer correctness.
\[\small \text{PreB}(s_{\min}, s_{\max}, \ell, b,\alpha,b_{\text{best}}) = \begin{cases} s_{\min}, & \text{if } \dfrac{\left|\ell - b\right|}{b} > \alpha,\\[1ex] s_{\min} + (s_{\max}-s_{\min}) \cdot \dfrac{1}{2} \left(1 + \cos\left(\pi \cdot \dfrac{\left|\ell - b_{\mathrm{best}}\right|}{2\alpha b}\right)\right), & \text{else.} \end{cases} \label{formula:cosine_reward}\tag{4}\]
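The PreB reward of Formula 4 translates directly into code. The sketch below is a straightforward transcription; the function signature mirrors the formula's arguments, and the naming is our own.

```python
import math

def preb_reward(s_min: float, s_max: float, length: int, b: float,
                alpha: float, b_best: float) -> float:
    """Precise Budget Control Reward (Formula 4).

    Returns s_min when the response length deviates from the budget b by
    more than alpha * b; otherwise interpolates between s_min and s_max
    with a cosine that peaks at b_best (Formula 2: (1 - alpha) * b for
    correct answers, (1 + alpha) * b for incorrect ones).
    """
    if abs(length - b) / b > alpha:
        return s_min
    return s_min + (s_max - s_min) * 0.5 * (
        1.0 + math.cos(math.pi * abs(length - b_best) / (2.0 * alpha * b))
    )
```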
To ensure the model’s post-training accuracy does not degrade below its initial performance, we configure hyperparameters so that the minimum reward for correct responses always exceeds the maximum reward for incorrect responses. Specifically, our design ensures that a correct response whose token budget exceeds \(b_\text{max}\) and that receives the lowest PreB reward \(s^C_\text{min}\) still yields a higher total reward than an incorrect response whose token budget stays within \(b_\text{max}\) and that receives the highest PreB reward \(s^W_\text{max}\). This constraint is formally expressed as \(s^C_\text{min}+r_b \ge s^W_\text{max}\).
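Combining the two sketches above, a hedged illustration of the full reward in Formulas 1 and 2 might look as follows; the hyperparameter dictionary mirrors Table 3, and the final assertion encodes the ordering constraint \(s^C_\text{min}+r_b \ge s^W_\text{max}\).

```python
# Hyperparameters mirroring Table 3 (the paper's settings).
HP = {'r_f': -1.0, 'r_b': -0.4,
      's_min_W': -0.5, 's_max_W': 0.0,
      's_min_C': 0.5,  's_max_C': 1.0}

def reward(correct: bool, format_ok: bool, length: int, b: float,
           b_max: float, alpha: float, hp: dict = HP) -> float:
    """Total reward R (Formula 1), built from budget_penalty and
    preb_reward defined in the sketches above."""
    if not format_ok:                        # malformed output: flat penalty
        return hp['r_f']
    pb = budget_penalty(b, b_max, hp['r_b'])
    if correct:                              # reward peaks at (1 - alpha) * b
        return pb + preb_reward(hp['s_min_C'], hp['s_max_C'],
                                length, b, alpha, (1.0 - alpha) * b)
    # incorrect: reward peaks at (1 + alpha) * b, encouraging deeper reasoning
    return pb + preb_reward(hp['s_min_W'], hp['s_max_W'],
                            length, b, alpha, (1.0 + alpha) * b)

# Ordering constraint from the text: s_min^C + r_b >= s_max^W.
assert HP['s_min_C'] + HP['r_b'] >= HP['s_max_W']
```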
Overall, the core design of SelfBudgeter consists of three key modules: Budget Penalty, PreB Reward, and Accuracy Reward, which collectively balance length compression, correctness, and precise length control, ultimately delivering a better user experience.
Existing reasoning models utilize a pair of <think></think> tags to demarcate the thinking process from the final solution output. Building upon this format, we further incorporate a token budget component.
To enable the model to dynamically allocate token usage based on question difficulty, we design an output format as follows:
<budget>an integer</budget><solution>response</solution>
The format requires the model to first estimate the required token budget before providing the answer to the question. When no user constraint exists, the model autonomously predicts the token budget. When users specify a token limit, we pre-fill the <budget> field and let the model generate the <solution> within this constraint.
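As an illustration of how this format can be consumed, the sketch below parses a completed response and pre-fills the <budget> field for a user-specified limit; the regular expressions and helper names are our own and not part of the paper's released implementation.

```python
import re

BUDGET_RE = re.compile(r"<budget>\s*(\d+)\s*</budget>")
SOLUTION_RE = re.compile(r"<solution>(.*?)</solution>", re.DOTALL)

def parse_output(text: str):
    """Extract the predicted token budget and the solution body, returning
    (budget, solution, format_ok); format_ok drives the r_f penalty."""
    b, s = BUDGET_RE.search(text), SOLUTION_RE.search(text)
    if b is None or s is None:
        return None, None, False
    return int(b.group(1)), s.group(1).strip(), True

def prefill_budget(user_budget: int) -> str:
    """When the user specifies a limit, pre-fill the <budget> field so the
    model only needs to generate the <solution> part."""
    return f"<budget>{user_budget}</budget><solution>"
```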
At this stage, we collect the model’s responses to the questions used in both the cold-start and RL training phases, and then evaluate the correctness and length of each response.
For the cold-start data, we retain the model’s correct responses along with their lengths and discard incorrect answers to prevent reinforcing the model’s memory of wrong responses.
For the RL training data, we calculate \(\text{budget}_\text{max}\) (for convenience, we will refer to it as \(b_\text{max}\) in the following sections) using Formula 5, representing the maximum acceptable token budget for a given question. When the model answers correctly, the correctness of the response indicates that the minimum token budget required for a correct answer does not exceed the current length. Therefore, we encourage the model to further compress the response length and set \(b_\text{max}\) to the current response length. When the model answers incorrectly, the relationship between the minimum token budget needed for correctness and the current length remains unclear, so any token budget is acceptable.
\[b_\text{max} = \begin{cases} \text{response length}, & \text{if model answers correctly}, \\ {\infty}, & \text{else.} \end{cases} \label{formula:b_max}\tag{5}\]
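A minimal sketch of Formula 5, assuming correctness and response length have already been measured for the base model during preprocessing:

```python
def compute_b_max(base_model_correct: bool, response_length: int) -> float:
    """Maximum acceptable budget b_max (Formula 5): if the base model already
    answers the question correctly, its response length upper-bounds the
    budget we are willing to reward; otherwise any budget is acceptable."""
    return float(response_length) if base_model_correct else float('inf')
```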
In our actual RL training process, we observe that requiring the model to simultaneously master multiple objectives (learning the new output format, providing appropriate token budgets, generating solutions of corresponding lengths according to the budget, and maintaining or improving accuracy) proves excessively challenging. After extended training, the model often only succeeds in adopting the output format without achieving the other goals. Inspired by the DeepSeek-R1 training methodology, we introduce a cold-start phase to accelerate training and enable the model to first learn the new output format before proceeding to the more complex objectives. The overall training framework is illustrated in Figure 2.
To prevent the model from losing its original reasoning capability during the cold-start phase, fine-tuning must be performed using either the model’s own generated responses or datasets containing long CoT responses. In our approach, we pre-populate the <budget> section with the token counts obtained during the preprocessing stage, and fill the <solution> section with the model’s generated responses. The instruction prefix we prepend to each question can be found in Appendix 8.
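For illustration, a cold-start training target could be assembled as follows; the tokenizer is assumed to expose a Hugging Face-style encode method, and the instruction prefix from Appendix 8 is omitted here.

```python
def build_cold_start_target(response: str, tokenizer) -> str:
    """Fill the cold-start SFT template: the <budget> field holds the token
    count of the model's own (correct) response, and the <solution> field
    holds the response itself."""
    n_tokens = len(tokenizer.encode(response))
    return f"<budget>{n_tokens}</budget><solution>{response}</solution>"
```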
We conduct experiments on the DeepSeek-R1-Distill-Qwen-1.5B (R1-1.5B) model. We reproduce L1-Max using R1-1.5B, and select R1-1.5B and L1-Max as baseline methods for comparative evaluation against SelfBudgeter. In addition, we extend our experiments to the larger DeepSeek-R1-Distill-Qwen-7B (R1-7B) model. For more comprehensive comparison, we also include E1-Math-1.5B, R1-7B, Eurus-2-7B-PRIME [34], and Qwen-2.5-7B-Simple-RL [35] as additional baselines.
During the cold-start phase, we employ three datasets of varying difficulty—GSM8K [36], MATH [37], and s1k-1.1 [38]—to help the model learn the new output format while producing token budgets with diverse distributions. The s1k-1.1 dataset contains 1,000 challenging mathematical problems with long reasoning chains generated by DeepSeek-R1, which support both reasoning ability and format adaptation. For GSM8K and MATH, we select 1,500 training samples each that the model can answer correctly. For s1k-1.1, we directly use the native responses and compute the corresponding token counts with the model’s tokenizer to populate our designed template; in total, we retain 630 problems that DeepSeek-R1 answered correctly. This yields a training set of 3,630 samples. Following the preprocessing protocol in Sections 4.2 and 4.3, we fine-tune the model for one epoch. Throughout data collection and training, the model’s temperature is consistently set to 0.6.
During the reinforcement learning phase, we use the STILL-3-Preview-RL-Data [39] dataset, which also serves as the training dataset for reproducing L1-Max. This dataset collects 30K high-quality samples based on the MATH [37], NuminaMathCoT [40], and AIME 1983-2023 [41] datasets. It includes problems of varying difficulty levels, which also helps the model learn to allocate token counts adaptively based on difficulty. As described in Section 4.2, we compute the maximum acceptable budget (\(b_\text{max}\)) based on the model’s responses, then train the model for 3 epochs on this dataset. More detailed information can be found in Appendix 7.
Models | GSM8K Acc | GSM8K Len | MATH500 Acc | MATH500 Len | AIME2025 Acc | AIME2025 Len |
---|---|---|---|---|---|---|
DeepSeek-R1-Distill-Qwen-1.5B | 73.09 | 2865.08 | 74.93 | 5327.12 | 22.22 | 14444.03 |
E1-Math-1.5B(0.5K,1K) | 60.20 | 1205.21 | 35.53 | 1499.54 | 4.44 | 3008.44 |
E1-Math-1.5B(4K,1K) | 72.10 | 1299.62 | 72.47 | 2088.44 | 21.11 | 5578.13 |
L1-Max(3600) | 79.56 | 571.72 | 76.73 | 1753.42 | 17.88 | 5213.89 |
SelfBudgeter-1.5B | 84.10 | 1231.79 | 78.47 | 2326.85 | 21.11 | 4288.10 |
DeepSeek-R1-Distill-Qwen-7B | 87.09 | 1918.21 | 86.73 | 5387.19 | 28.89 | 22158.79 |
Eurus-2-7B-PRIME | 90.98 | 302.72 | 79.73 | 582.58 | 15.56 | 1254.52 |
Qwen-2.5-7B-Simple-RL | 75.94 | 519.07 | 61.13 | 823.89 | 6.67 | 1429.94 |
SelfBudgeter-7B | 90.30 | 991.13 | 86.87 | 2666.58 | 30.00 | 12241.84 |
Table 2 presents a comprehensive comparison of model performance on the GSM8K, MATH500, and AIME2025 test sets, evaluated in terms of accuracy (Acc) and average response length (Len). The table contrasts baseline models with different variants of the SelfBudgeter framework across varying model scales. For clarity, the best performance is highlighted in bold, while the second-best performance is indicated with underline. It is worth noting that token limits for L1 are explicitly specified through prompt templates, whereas those for E1 are enforced via hard truncation. In contrast, SelfBudgeter autonomously estimates its token constraints during inference. All reported results are averaged over three runs with different random seeds.
Although the DeepSeek-R1-Distill-Qwen-1.5B baseline demonstrates strong accuracy, it requires substantially longer responses. On GSM8K, our method improves accuracy by 11.01 percentage points while compressing response length to 43% of the original. On MATH500, it achieves a 3.54-point accuracy gain with response length reduced to 44%. On AIME2025, our approach compresses response length to 30% of the original while maintaining comparable accuracy. In contrast, although L1 and E1 attain stronger compression on certain datasets, they incur larger accuracy losses—L1 performs poorly on the challenging AIME2025 benchmark, while E1 suffers more pronounced accuracy degradation on the simpler GSM8K and MATH500 datasets.
In addition, Table 2 highlights that SelfBudgeter consistently strikes a better balance between accuracy and response length than existing baselines. Unlike L1, which enforces explicit length limits but collapses on AIME2025, or E1, which relies on hard truncation and severely harms accuracy, SelfBudgeter autonomously learns effective token budgeting. As a result, it achieves the best or second-best accuracy across all datasets while simultaneously reducing response length substantially.
Beyond its effectiveness at the 1.5B scale, our method also delivers efficient reasoning with larger models. SelfBudgeter-7B achieves the highest accuracy on MATH500 and AIME2025, and the second-best accuracy on GSM8K—only 0.68 points lower than the best-performing model. Meanwhile, SelfBudgeter-7B attains an average compression ratio of 48%, further demonstrating the generality of our approach and its effectiveness at larger model scales. Compared with Eurus-2-7B-PRIME, which excels only on GSM8K but falls behind on harder reasoning tasks, and Qwen-2.5-7B-Simple-RL, which underperforms across all benchmarks, SelfBudgeter exhibits robust gains across datasets of varying difficulty.
In SelfBudgeter, \(\alpha\) serves as a critical hyperparameter. As shown in Figure 3, we observe that using a fixed and relatively loose \(\alpha\) can lead to reward hacking: once the model learns to align the budget with the actual response length, it tends to inflate the predicted budget during later training stages, pushing the output length toward the lower bound of the acceptable range to obtain higher PreB scores. Conversely, when \(\alpha\) is fixed but relatively tight, the token budget quickly collapses to the response length, which hinders the model from learning an optimal budgeting strategy. To address these issues, we introduce a dynamic alpha schedule, where \(\alpha\) is linearly decreased over training steps. This gradually tightens the tolerance range for acceptable response lengths and encourages closer convergence between the predicted budget and the actual output length. Consequently, the optimal \(\alpha\) is not static but evolves throughout the training process.
Formally, the dynamic \(\alpha\) is defined by a linear schedule: \[\alpha_{\text{now}} = \alpha_{\text{start}} - \left(\alpha_{\text{start}} - \alpha_{\text{end}}\right) \cdot \frac{\text{step}_{\text{now}}}{\text{Total steps}}.\]
This schedule only requires specifying the starting and ending values of \(\alpha\) (i.e., \(\alpha_{\text{start}}\) and \(\alpha_{\text{end}}\)), which are set to \(6.0\) and \(0.1\), respectively.
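The schedule is straightforward to implement; the sketch below uses the stated endpoints of 6.0 and 0.1 as defaults, with the function name being our own.

```python
def alpha_at(step: int, total_steps: int,
             alpha_start: float = 6.0, alpha_end: float = 0.1) -> float:
    """Linearly anneal the tightness coefficient alpha from alpha_start to
    alpha_end over training, gradually shrinking the tolerance band around
    the predicted budget."""
    frac = min(max(step / total_steps, 0.0), 1.0)
    return alpha_start - (alpha_start - alpha_end) * frac
```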
In the Analysis section, we systematically examine SelfBudgeter’s adaptive computation allocation mechanism through two pivotal aspects: its ability to dynamically adjust budgets according to problem complexity, and compliance with token constraints while preserving response quality. This holistic evaluation reveals fundamental insights into how adaptive language models negotiate computational efficiency with task requirements, informing both theoretical understanding and practical deployment considerations.
To investigate SelfBudgeter’s capacity for difficulty-aware budget allocation, we conduct empirical evaluations across three mathematical reasoning benchmarks with inherent complexity gradients: GSM8K, MATH, and AIME 2024. Our experimental framework systematically evaluates four architectural variants combining cold-start initialization strategies (GSM8K vs. s1k) with \(\alpha\) hyperparameter values (0.2 vs. 0.5).
Figure 4 shows a consistent positive correlation between problem complexity and allocated token budgets across all model variants, demonstrating SelfBudgeter’s ability to scale computation with task difficulty. The near-linear allocation across difficulty tiers highlights its emergent capacity for intrinsic difficulty estimation, while the minimal variance across configurations indicates robust and generalized learning of task-complexity metrics rather than configuration-specific artifacts.
To systematically evaluate SelfBudgeter’s generation behavior under budget constraints, we use linear regression to quantify the mapping between predicted token budgets and actual response lengths on the MATH500 dataset and the GSM8K test set (as shown in Figure 5). On MATH500, least-squares fitting yields a slope of 1.025 (95% CI [0.9466, 1.1042]); on the GSM8K test set, it yields a slope of 0.793 (95% CI [0.7512, 0.8354]). A slope coefficient close to unity validates the efficacy of the budget control mechanism: each 1-token increase in the predicted budget corresponds to an increase of roughly 1 token in output length.
Quantitative results demonstrate that 96% of generated responses exhibit relative deviations \(\leq 50\%\) from the target token budget, with 65.40% achieving tighter deviations \(\leq 20\%\). Extended experiments on full benchmark datasets reveal that 97.65% (GSM8K) and 95.82% (MATH) of samples satisfy the \(\leq 50\%\) relative deviation constraint. Notably, the model’s budget adherence is influenced by the cold-start dataset and hyperparameter \(\alpha\). The optimized SelfBudgeter configuration (initialized with GSM8K and \(\alpha=0.2\)), which balances generation quality and budget compliance, is reported here as the best-performing variant.
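For reference, the adherence statistics reported above (the regression slope and the fraction of responses within a given relative deviation) can be computed along the following lines; the array names are placeholders for per-sample measurements.

```python
import numpy as np

def budget_adherence(budgets: np.ndarray, lengths: np.ndarray):
    """Quantify budget adherence: least-squares slope and intercept of actual
    length vs. predicted budget, plus the fraction of samples whose relative
    deviation |length - budget| / budget stays within 50% and 20%."""
    slope, intercept = np.polyfit(budgets, lengths, deg=1)
    rel_dev = np.abs(lengths - budgets) / budgets
    within_50 = float(np.mean(rel_dev <= 0.5))
    within_20 = float(np.mean(rel_dev <= 0.2))
    return slope, intercept, within_50, within_20
```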
We further validate SelfBudgeter’s adherence to user-defined token budgets through controlled experiments. The results indicate that the actual generated length follows a linear functional relationship with user-defined budgets, demonstrating robust alignment even under explicit external constraints. Details are provided in Appendix 9.
We propose the SelfBudgeter framework, which autonomously predicts required token budgets for reasoning while effectively adhering to self-imposed constraints, successfully optimizing the accuracy-response length trade-off. By leveraging SelfBudgeter’s token budget predictions, users can anticipate total inference duration in advance, significantly enhancing user experience. In resource-efficient reasoning, SelfBudgeter demonstrates performance comparable to several existing methods, highlighting its potential for deployment in resource-constrained environments. Additionally, output length can be dynamically regulated through transformation functions when required. SelfBudgeter paves a promising pathway toward more efficient, controllable, and user-friendly reasoning models.
This work uses only publicly available datasets under their original licenses, and does not involve human subjects, private data, or personally identifiable information. Our contributions are methodological, focusing on improving reasoning efficiency, and do not amplify risks of harmful or biased content. We declare no conflicts of interest or ethical concerns, and we have complied with the ICLR Code of Ethics throughout the research and submission process. Additional details regarding the use of large language models (LLMs) are provided in Appendix 10.
We have made extensive efforts to ensure the reproducibility of our work. The datasets used in our experiments are publicly available. Detailed descriptions of data preprocessing, training settings, and evaluation protocols are provided in Section 4, with additional implementation details and hyperparameters included in the appendix. We will release anonymous source code and scripts for training and evaluation as supplementary material.
Our server is equipped with two 80GB A100 GPUs and two 45GB A40 GPUs. We conducted fine-tuning experiments and inference tests on the two A40 GPUs, while the GRPO training was performed on the two A100 GPUs.
In the fine-tuning training during the cold-start phase, our parameter settings are configured as follows. The sequence length is capped at 16,384, with a per-device training and evaluation batch size of 1, while gradient accumulation (2 steps) is employed to alleviate GPU memory constraints. A cosine learning rate scheduler is adopted with a 10% warm-up ratio and a base learning rate of 5e-5. The model is trained for 1 epoch, with 10% of the training set allocated for validation. The model checkpoints are saved and evaluated every 500 steps, and the best-performing checkpoint is retained.
In the GRPO (Group Relative Policy Optimization) training, our parameter configuration is set as follows. The training and validation batch sizes are set to 128 and 1,250, respectively, with maximum prompt and response lengths of 1,024 and 32,000 tokens. The Actor model employs a learning rate of 1e-6, dynamic batching (up to 24K tokens per GPU), and a KL divergence loss (coefficient 0.001), with gradient checkpointing and FSDP (Fully Sharded Data Parallel) distributed training enabled (parameter offloading disabled). During the Rollout phase, the vLLM inference engine is utilized with tensor parallelism (TP=2) and 80% GPU memory utilization, generating 5 responses per round. Global settings include 3 training epochs, a checkpoint-saving interval of 50 steps, and a KL control coefficient of 0.001, executed on a single node with dual GPUs. The key hyperparameters involved in the reward function are specified in Table 3.
Parameters | \(C=0\) | \(C=1\) | Parameters | Value |
---|---|---|---|---|
\(s_{\text{min}}\) | -0.5 | 0.5 | \(r_f\) | -1 |
\(s_{\text{max}}\) | 0 | 1 | \(r_b\) | -0.4 |
For the GSM-initialized SelfBudgeter, we select the checkpoint after 699 training steps for \(\alpha=0.2\) and the checkpoint after 575 steps for \(\alpha=0.5\). For the s1k-initialized SelfBudgeter, we choose the checkpoint after 475 training steps for \(\alpha=0.2\) and the checkpoint after 500 steps for \(\alpha=0.5\). For L1-Max, we choose the checkpoint after 280 training steps.
Figure 6: The prompt template used in the cold-start stage.
Model | GSM8K Acc\(\uparrow\) | GSM8K Len\(\downarrow\) | GSM8K Mat\(\uparrow\) | MATH Acc\(\uparrow\) | MATH Len\(\downarrow\) | MATH Mat\(\uparrow\) |
---|---|---|---|---|---|---|
Cold Start (GSM) | 71.95 | 1003.79 | 85.82 | 64.74 | 3043.29 | 41.16 |
SelfBudgeter (GSM, \(\alpha=0.2\)) | 76.27 | 523.77 | 97.65 | 63.46 | 779.54 | 95.82 |
SelfBudgeter (GSM, \(\alpha=0.5\)) | 74.68 | 520.82 | 96.97 | 63.78 | 777.80 | 96.66 |
Cold Start (s1k) | 82.49 | 1983.29 | 21.76 | 76.64 | 4001.29 | 23.28 |
SelfBudgeter (s1k, \(\alpha=0.2\)) | 81.50 | 662.08 | 70.74 | 74.18 | 919.27 | 78.36 |
SelfBudgeter (s1k, \(\alpha=0.5\)) | 80.44 | 719.36 | 71.19 | 72.60 | 1022.99 | 79.76 |
The choice of initialization data substantially impacts model performance. SelfBudgeters initialized with the s1k dataset outperform their GSM-initialized counterparts by 8.82–10.72 percentage points on MATH (74.18% vs. 63.46% for \(\alpha=0.2\)) and 5.23–5.76 percentage points on GSM8K (80.44% vs. 74.68% for \(\alpha=0.5\)). While GSM-initialized SelfBudgeters exhibit lower accuracy, they generate significantly more concise responses than their s1k-initialized counterparts: they reduce response length by approximately 15–24% on MATH and by 21–28% on GSM8K. This performance gap highlights the importance of high-quality initialization for the budgeting mechanism.
As shown in Table 4, significant performance variations exist between models fine-tuned with different cold-start datasets. The s1k-fine-tuned model demonstrates superior accuracy over the GSM-fine-tuned counterpart, achieving 10.54 and 11.90 percentage points higher accuracy on GSM8K and MATH, respectively. However, this comes at the cost of substantially longer responses, with the s1k model generating 97.58% and 31.48% lengthier outputs on GSM8K and MATH. This discrepancy stems from the s1k dataset’s responses being generated by DeepSeek-R1, which produces higher-quality outputs than those self-generated by DeepSeek-R1-Distill-Qwen-1.5B. Additionally, the s1k dataset’s average length of 7,677.43 tokens (we only retained correct responses under 16,000 tokens) vastly exceeds GSM8K’s 837.14 tokens, explaining the dramatic difference in response lengths after fine-tuning. These factors substantially influence SelfBudgeter’s final performance, as evidenced by: (1) SelfBudgeter’s accuracy closely mirroring that of its fine-tuned base model, and (2) the response length and matching rate relationships between different SelfBudgeter variants remaining consistent with their respective cold-start models.
To systematically evaluate model performance under user-defined token budget constraints, we conduct a quantitative analysis using SelfBudgeter with GSM initialization and hyperparameter \(\alpha=0.2\) on both the MATH500 dataset and the GSM8K test set. In the experimental design, fixed token budgets are pre-filled in the <budget> field of the training template, and empirical results are obtained by measuring average generated response lengths. We evaluate SelfBudgeter’s performance with user-defined token budgets ranging from 50 to 2000 (specifically: 50, 100, 200, 400, 500, 600, 800, 1000, 1200, 1400, 1600, 1800, and 2000), as shown in Figure 7.
Regression intercepts effectively reflect problem complexity, where GSM8K’s simpler questions yield significantly smaller intercepts. Despite a moderate slope, SelfBudgeter demonstrates robust budget adaptability, maintaining a stable positive correlation between user-defined budgets and output lengths. This linear relationship enables deterministic length control through derived transformation functions.
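As a hedged illustration of such a transformation function, one can invert the fitted line to choose the budget to pre-fill for a desired output length; the slope and intercept would come from a dataset-specific regression such as those in Figures 5 and 7, and the minimum budget floor is an assumption.

```python
def budget_for_target_length(target_len: float, slope: float,
                             intercept: float, min_budget: int = 50) -> int:
    """Invert the fitted linear relation (length ~= slope * budget + intercept)
    to pick the token budget to pre-fill for a desired output length."""
    return max(min_budget, round((target_len - intercept) / slope))
```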
During the preparation of this paper, large language models (LLMs) were used solely as auxiliary tools for grammar checking, text polishing, and improving clarity of exposition. No experimental design, data analysis, or substantive research conclusions were generated by LLMs. All methodological and experimental contributions are original and conducted entirely by the authors.