New articles on Quantitative Finance


[1] 2602.11334

Interpolation and Prewar-Postwar Output Volatility and Shock-Persistence Debate: A Closer Look and New Results

It is well established that US prewar output was more volatile and less shock persistent than postwar output. This is often attributed to the data interpolation employed to construct the prewar series. Our analytical results, however, indicate that the commonly used linear interpolation has the opposite effect on the shock persistence and volatility of a series: it increases shock persistence and reduces volatility. The surprising implication of this finding is that the actual differences between the volatility and shock persistence of the prewar and postwar output series are likely greater than the existing literature recognizes, and interpolation has dampened rather than magnified this difference. Consequently, the view that postwar output was more stable than prewar output because of the effectiveness of postwar stabilization policies and institutional changes has considerable merit. Our results hold for the parsimonious stationary and nonstationary processes commonly used to model macroeconomic time series.
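The central claim, that linear interpolation raises shock persistence and lowers volatility, is easy to check by simulation. Below is a sketch, not the paper's analytical derivation: an AR(1) series is observed only every fourth period, the gaps are filled by linear interpolation, and the interpolated series shows lower variance and higher lag-1 autocorrelation. The AR(1) coefficient and sampling interval are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
T, k, phi = 4000, 4, 0.5

# Simulate a stationary AR(1): y_t = phi * y_{t-1} + e_t
e = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + e[t]

# Observe only every k-th point, then fill the gaps by linear interpolation
obs_idx = np.arange(0, T, k)
y_interp = np.interp(np.arange(T), obs_idx, y[obs_idx])

def ac1(x):
    """Sample lag-1 autocorrelation."""
    x = x - x.mean()
    return (x[1:] @ x[:-1]) / (x @ x)

print(f"variance:       original {y.var():.3f}, interpolated {y_interp.var():.3f}")
print(f"lag-1 autocorr: original {ac1(y):.3f}, interpolated {ac1(y_interp):.3f}")
```

The interpolated series is a convex combination of retained points, which mechanically smooths it: variance falls and persistence rises, matching the direction of the effect the abstract describes.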


[2] 2602.11442

Ecosystem service demand relationship and trade-off patterns in urban parks across China

Urban parks play a vital role in delivering essential ecosystem services that contribute significantly to the well-being of urban populations. However, understanding of how people value these ecosystem services differently remains limited. Here, we investigated the relationships among nine ecosystem service demands in urban parks across China using a large-scale survey with 20,075 responses and a point-allotment experiment. We found particularly high preferences among urban residents in China for air purification and recreation services at the expense of other services. These preferences were further reflected in three distinct demand bundles: air purification-dominated, recreation-dominated, and balanced demands. Each bundle delineated a typical group of people with distinct representative characteristics. Socio-economic and environmental factors, such as environmental interest and vegetation coverage, were found to significantly influence the trade-off intensity among service demands. These results underscore the need for tailored urban park designs that address diverse service demands, with the aim of sustainably enhancing the quality of urban life in China and beyond.


[3] 2602.11687

Exact Value Solution to the Equity Premium Puzzle

The aim of this article is to solve the equity premium puzzle without using calibrated values. Prior models relied on calibrated values of the subjective time discount factor because four variables had to be determined from only three equations, and neither their calculated values nor their risk-behavior determinations were compatible with the empirical literature. In the new model derived in this article, four unknown variables are calculated from four equations. From this system, the subjective time discount factor and the coefficient of relative risk aversion are found to be 0.9581 and 1.0319, respectively, values that are compatible with empirical studies. Micro and macro studies of the CRRA value affirm each other for the first time in the literature. Furthermore, equity and risk-free asset investors are pinned down as insufficiently risk-loving, which can be considered a type of risk-averse behavior. Hence the calculated values and risk-attitude determination align with the empirical literature. This shows that the derived model is valid and makes the CCAPM work under the same assumptions as prior models.


[4] 2602.11992

Labor Supply under Temporary Wage Increases: Evidence from a Randomized Field Experiment

We conduct a pre-registered randomized controlled trial to test for income targeting in labor supply decisions among sellers of a Swedish street paper. These workers face liquidity constraints, high income volatility, and discretion over hours. Treated individuals received a 25 percent bonus per copy sold for the duration of an issue, simulating an increase in earnings potential. Treated sellers sold more papers, worked longer hours, and took fewer days off. These findings contrast with studies on intertemporal labor supply that find small substitution effects. Notably, when we apply strategies similar to observational studies, we recover patterns consistent with income targeting.


[5] 2602.12030

Time-Inhomogeneous Volatility Aversion for Financial Applications of Reinforcement Learning

Sequential decision problems are common in finance, and reinforcement learning (RL) has emerged as a promising tool for optimising them without the need for analytical tractability. However, the objective of classical RL is the expected cumulative reward, while financial applications typically require a trade-off between return and risk. In this work, we focus on settings where one cares about the time split of the total return, ruling out most risk-aware generalisations of RL, which optimise a risk measure defined on the total return alone. We observe that a preference for homogeneous splits, which we found satisfactory for hedging, can be unfit for other problems, and we therefore propose a new risk metric that still penalises the uncertainty of the single rewards but allows for arbitrary planning of their target levels. We study the properties of the resulting objective and the generalisation of learning algorithms to optimise it. Finally, we show numerical results on toy examples.


[6] 2602.12066

Chaos and Misallocation under Price Controls

Price controls kill the incentive for arbitrage. We prove a Chaos Theorem: under a binding price ceiling, suppliers are indifferent across destinations, so arbitrarily small cost differences can determine the entire allocation. The economy tips to corner outcomes in which some markets are fully served while others are starved; small parameter changes flip the identity of the corners, generating discontinuous welfare jumps. These corner allocations create a distinct source of cross-market misallocation, separate from the aggregate quantity loss (the Harberger triangle) and from within-market misallocation emphasized in prior work. They also create an identification problem: welfare depends on demand far from the observed equilibrium. We derive sharp bounds on misallocation that require no parametric assumptions. In an efficient allocation, shadow prices are equalized across markets; combined with the adding-up constraint, this collapses the infinite-dimensional welfare problem to a one-dimensional search over a common shadow price, with extremal losses achieved by piecewise-linear demand schedules. Calibrating the bounds to station-level AAA survey data from the 1973-74 U.S. gasoline crisis, misallocation losses range from roughly 1 to 9 times the Harberger triangle.


[7] 2602.12104

Liquidation Dynamics in DeFi and the Role of Transaction Fees

Liquidations of collateral are the primary safeguard for the solvency of lending protocols in decentralized finance. However, the mechanics of liquidations expose these protocols to predatory price manipulations and other forms of Maximal Extractable Value (MEV). In this paper, we characterize the optimal liquidation strategy, via a dynamic program, from the perspective of a profit-maximizing liquidator when the spot oracle is given by a Constant Product Market Maker (CPMM). We explicitly model Oracle Extractable Value (OEV), where liquidators manipulate the CPMM with sandwich attacks to trigger profitable liquidation events. We derive closed-form liquidation bounds and prove that CPMM transaction fees act as a critical security parameter. Crucially, we demonstrate that fees do not merely reduce attacker profits but can make such manipulations unprofitable altogether. Our findings suggest that CPMM transaction fees serve a dual purpose: compensating liquidity providers and endogenously hardening CPMM oracles against manipulation, without the latency of time-weighted averages or medianization.
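The way fees deter manipulation can be illustrated with a toy constant product pool. This is a minimal sketch under standard Uniswap-v2-style conventions (proportional fee charged on the input amount), not the paper's model: a round-trip trade that pushes the spot price and then unwinds costs nothing when the fee is zero, but strictly loses money once a fee is charged, so any sandwich attack must clear that hurdle to be profitable.

```python
def swap_x_for_y(x_res, y_res, dx, fee):
    """Trade dx of asset X into the pool; return (new_x, new_y, dy_out)."""
    dx_eff = dx * (1 - fee)                     # fee taken from the input
    new_x = x_res + dx
    new_y = x_res * y_res / (x_res + dx_eff)    # constant product on net input
    return new_x, new_y, y_res - new_y

def swap_y_for_x(x_res, y_res, dy, fee):
    dy_eff = dy * (1 - fee)
    new_y = y_res + dy
    new_x = x_res * y_res / (y_res + dy_eff)
    return new_x, new_y, x_res - new_x

def round_trip_cost(fee, x0=1000.0, y0=1000.0, dx=100.0):
    # Leg 1: buy Y with dx of X, moving the spot price the oracle reads
    x1, y1, dy = swap_x_for_y(x0, y0, dx, fee)
    # Leg 2: sell the received Y back to unwind the position
    x2, y2, dx_back = swap_y_for_x(x1, y1, dy, fee)
    return dx - dx_back                          # net X spent on the sandwich

print(f"round-trip cost, zero fee: {round_trip_cost(0.0):.6f}")
print(f"round-trip cost, 0.3% fee: {round_trip_cost(0.003):.6f}")
```

With a zero fee the two legs exactly cancel; with a positive fee the attacker pays on both legs, which is the sense in which the fee acts as a security parameter.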


[8] 2602.11379

Regularized Ensemble Forecasting for Learning Weights from Historical and Current Forecasts

Combining forecasts from multiple experts often yields more accurate results than relying on a single expert. In this paper, we introduce a novel regularized ensemble method that extends the traditional linear opinion pool by leveraging both current forecasts and historical performances to set the weights. Unlike existing approaches that rely only on either the current forecasts or past accuracy, our method accounts for both sources simultaneously. It learns weights by minimizing the variance of the combined forecast (or its transformed version) while incorporating a regularization term informed by historical performances. We also show that this approach has a Bayesian interpretation. Different distributional assumptions within this Bayesian framework yield different functional forms for the variance component and the regularization term, adapting the method to various scenarios. In empirical studies on Walmart sales and macroeconomic forecasting, our ensemble outperforms leading benchmark models both when experts' full forecasting histories are available and when experts enter and exit over time, resulting in incomplete historical records. Throughout, we provide illustrative examples that show how the optimal weights are determined and, based on the empirical results, we discuss where the framework's strengths lie and when experts' past versus current forecasts are more informative.
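The weighting scheme described above, minimizing combined-forecast variance plus a regularization pull toward historically informed weights, admits a simple closed form under a sum-to-one constraint. The sketch below is an illustrative reconstruction, not the paper's exact estimator; the quadratic penalty, the prior weights `p`, and the regularization strength `lam` are all assumptions.

```python
import numpy as np

def ensemble_weights(forecasts, hist_weights, lam):
    """Minimize  w' S w + lam * ||w - hist_weights||^2  s.t.  sum(w) = 1,
    where S is the sample covariance of the experts' current forecasts."""
    S = np.cov(forecasts, rowvar=False)
    K = S.shape[0]
    A = np.linalg.inv(S + lam * np.eye(K))
    ones = np.ones(K)
    # Lagrange multiplier chosen so that the weights sum to one
    c = (1.0 - lam * ones @ A @ hist_weights) / (ones @ A @ ones)
    return A @ (lam * hist_weights + c * ones)

# Demo: three experts forecast the same target with different noise levels.
rng = np.random.default_rng(1)
target = rng.standard_normal(200)
noise = rng.standard_normal((200, 3)) * np.array([0.2, 0.5, 0.5])
F = target[:, None] + noise                      # (T, K) forecast matrix
p = np.array([0.5, 0.3, 0.2])                    # prior weights from history

w = ensemble_weights(F, p, lam=0.1)
print("weights:", np.round(w, 3), "sum to", round(w.sum(), 6))
```

As `lam` grows the weights collapse to the historical prior `p`; as it shrinks they approach the pure minimum-variance combination, which is the trade-off between past and current information the abstract describes.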


[9] 2405.00357

Optimal nonparametric estimation of the expected shortfall risk

We address the problem of estimating the expected shortfall risk of a financial loss using a finite number of i.i.d. data points. It is well known that the classical plug-in estimator suffers from poor statistical performance when faced with the heavy-tailed distributions commonly used in financial contexts. Further, it lacks robustness, as the modification of even a single data point can cause significant distortion. We propose a novel procedure for estimating the expected shortfall and prove that it recovers the best possible statistical properties (dictated by the central limit theorem) under minimal assumptions and for all finite sample sizes. Further, this estimator is adversarially robust: even if a small proportion of the data is maliciously modified, the procedure continues to optimally estimate the true expected shortfall risk. We demonstrate that our estimator outperforms the classical plug-in estimator through a variety of numerical experiments across a range of standard loss distributions.
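The classical plug-in estimator and its lack of robustness are easy to exhibit in a few lines; the paper's improved estimator is not reproduced here, and the t-distribution and corruption value below are illustrative choices.

```python
import numpy as np

def plugin_es(losses, alpha=0.95):
    """Classical plug-in expected shortfall at level alpha: the average of
    the sorted losses at or beyond the empirical alpha-quantile."""
    losses = np.sort(np.asarray(losses, dtype=float))
    n = losses.size
    k = int(np.ceil(alpha * n))        # index of the empirical VaR
    return losses[k - 1:].mean()       # mean of the upper (1 - alpha) tail

rng = np.random.default_rng(0)
x = rng.standard_t(df=3, size=10_000)  # heavy-tailed loss sample

es = plugin_es(x)
x_corrupted = x.copy()
x_corrupted[0] = 1e6                   # a single maliciously modified point
es_bad = plugin_es(x_corrupted)
print(f"plug-in ES: {es:.3f}; after corrupting one point: {es_bad:.3f}")
```

A single corrupted observation lands in the tail average and moves the estimate by an arbitrarily large amount, which is exactly the fragility the abstract attributes to the plug-in approach.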


[10] 2505.07078

Can LLM-based Financial Investing Strategies Outperform the Market in Long Run?

Large Language Models (LLMs) have recently been leveraged for asset pricing tasks and stock trading applications, enabling AI agents to generate investment decisions from unstructured financial data. However, most evaluations of LLM timing-based investing strategies are conducted on narrow timeframes and limited stock universes, overstating their effectiveness due to survivorship and data-snooping biases. We critically assess their generalizability and robustness with FINSABER, a backtesting framework that evaluates timing-based strategies across longer periods and a larger universe of symbols. Systematic backtests over two decades and more than 100 symbols reveal that previously reported LLM advantages deteriorate significantly under a broader cross-section and longer-term evaluation. Our market regime analysis further demonstrates that LLM strategies are overly conservative in bull markets, underperforming passive benchmarks, and overly aggressive in bear markets, incurring heavy losses. These findings highlight the need to develop LLM strategies that prioritise trend detection and regime-aware risk controls over mere scaling of framework complexity.


[11] 2510.15995

The Invisible Handshake: Tacit Collusion between Adaptive Market Agents

We study the emergence of tacit collusion in a repeated game between a market maker, who controls market liquidity, and a market taker, who chooses trade quantities. The market price evolves according to the endogenous price impact of trades and exogenous innovations to economic fundamentals. We define collusion as persistent overpricing over economic fundamentals and characterize the set of feasible and collusive strategy profiles. Our main result shows that a broad class of simple learning dynamics, including gradient ascent updates, converges in finite time to collusive strategies when the agents maximize individual wealth, defined as the value of their portfolio, without any explicit coordination. The key economic mechanism is that when aggregate supply in the market is positive, overpricing raises the market capitalization and thus the total wealth of market participants, inducing a cooperative component in otherwise non-cooperative learning objectives. These results identify an inherent structure through which decentralized learning by AI-driven agents can autonomously generate persistent overpricing in financial markets.