New articles on Quantitative Finance


[1] 2602.09237

Sign-Dependent Spillovers of Global Monetary Policy

This paper examines the sign-dependent international spillovers of Federal Reserve and European Central Bank monetary policy shocks. Using a consistent high-frequency identification of pure monetary policy shocks across 44 advanced and non-advanced economies and the methodology of Caravello and Martinez-Bruera (2024), we document strong asymmetries in international transmission. Linear specifications mask these effects: contractionary shocks generate large and significant deteriorations in financial conditions, economic activity, and international trade abroad, while expansionary shocks yield little to no measurable improvement. Our results are robust across samples and identification strategies, and also hold under the framework proposed by Ben Zeev et al. (2023).


[2] 2602.09362

Behavioral Economics of AI: LLM Biases and Corrections

Do generative AI models, particularly large language models (LLMs), exhibit systematic behavioral biases in economic and financial decisions? If so, how can these biases be mitigated? Drawing on the cognitive psychology and experimental economics literatures, we conduct the most comprehensive set of experiments to date, originally designed to document human biases, on prominent LLM families across model versions and scales. We document systematic patterns in LLM behavior. In preference-based tasks, responses become more human-like as models become more advanced or larger, while in belief-based tasks, advanced large-scale models frequently generate rational responses. Prompting LLMs to make rational decisions reduces biases.


[3] 2602.09504

Seeing the Goal, Missing the Truth: Human Accountability for AI Bias

This research explores how human-defined goals influence the behavior of Large Language Models (LLMs) through purpose-conditioned cognition. Using financial prediction tasks, we show that revealing the downstream use (e.g., predicting stock returns or earnings) of LLM outputs leads the LLM to generate biased sentiment and competition measures, even though these measures are intended to be downstream task-independent. Goal-aware prompting shifts intermediate measures toward the disclosed downstream objective. This purpose leakage improves performance before the LLM's knowledge cutoff, but confers no advantage post-cutoff. AI bias due to "seeing the goal" is thus not an algorithmic flaw; rather, it places accountability on the humans who design the research to ensure the statistical validity and reliability of AI-generated measurements.


[4] 2602.09887

Partially Active Automated Market Makers

We introduce a new class of automated market maker (AMM), the \emph{partially active automated market maker} (PA-AMM). A PA-AMM divides its reserves into two parts, an active part and a passive part, and uses only the active part for trading. At the top of every block, this division is redone so that the active reserves always constitute a \(\lambda\)-fraction of the total reserves, where \(\lambda \in (0, 1]\) is an activeness parameter. We show that this simple mechanism reduces adverse selection costs, measured by loss-versus-rebalancing (LVR), and thereby improves the wealth of liquidity providers (LPs) relative to plain constant-function market makers (CFMMs). As a trade-off, the asset weights within a PA-AMM pool may deviate from the target weights implied by its invariant curve. Motivated by the literature on optimal index tracking, we also propose and solve an optimization problem that balances this deviation against the reduction in LVR.
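To make the mechanism concrete, the Python sketch below implements a toy partially active pool on a constant-product curve: only the active reserves are quoted within a block, and the active/passive split is reset at the top of the next block. The constant-product invariant, the absence of fees, and all numerical values are illustrative assumptions; the paper's exact PA-AMM specification and its LVR analysis are more general.

```python
import numpy as np

class PartiallyActiveCPMM:
    """Toy partially active AMM on a constant-product (x * y = k) curve.

    Only a lambda-fraction of reserves is quoted to traders; at the top of
    each block the reserves are re-split so the active part is again a
    lambda-portion of the total.  Illustrative sketch based on the abstract,
    not the paper's exact specification.
    """

    def __init__(self, x_total, y_total, lam):
        assert 0 < lam <= 1
        self.lam = lam
        self.x_active, self.x_passive = lam * x_total, (1 - lam) * x_total
        self.y_active, self.y_passive = lam * y_total, (1 - lam) * y_total

    def swap_x_for_y(self, dx):
        """Trade against the active reserves only (no fees in this sketch)."""
        k = self.x_active * self.y_active
        new_x = self.x_active + dx
        dy = self.y_active - k / new_x          # amount of Y paid out
        self.x_active, self.y_active = new_x, k / new_x
        return dy

    def top_of_block_rebalance(self):
        """Re-divide total reserves so the active part is a lam-fraction again."""
        x_tot = self.x_active + self.x_passive
        y_tot = self.y_active + self.y_passive
        self.x_active, self.x_passive = self.lam * x_tot, (1 - self.lam) * x_tot
        self.y_active, self.y_passive = self.lam * y_tot, (1 - self.lam) * y_tot


# Example: a pool quoting only 20% of its reserves within each block.
pool = PartiallyActiveCPMM(x_total=1_000.0, y_total=1_000.0, lam=0.2)
received = pool.swap_x_for_y(10.0)   # arbitrageur trades within the block
pool.top_of_block_rebalance()        # active share reset at the next block
print(f"Y received: {received:.4f}")
```

Because only a \(\lambda\)-fraction of reserves faces arbitrageurs within each block, the per-block adverse-selection loss is scaled down, while the pool's overall asset weights can drift from the targets implied by the invariant curve; this is the trade-off the paper's optimization problem addresses.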


[5] 2602.09950

How can the dual martingale help solving the primal optimal stopping problem?

Motivated by recent results on the dual formulation of optimal stopping problems, we investigate in this short paper how knowledge of an approximating dual martingale can improve the efficiency of primal methods. In particular, we show on numerical examples that accurate approximations of a dual martingale effectively reduce the variance of primal Monte Carlo estimators for the optimal stopping problem.
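As a concrete illustration of the variance-reduction idea, the Python sketch below prices a Bermudan put with a simple (suboptimal) exercise rule and uses a martingale along the path, here the discounted Black-Scholes European put price, as a control variate: subtracting its increment leaves the mean unchanged but shrinks the Monte Carlo error. The model, the exercise rule, and this particular choice of martingale are illustrative assumptions, not the paper's construction of the approximating dual martingale; the same mechanism applies when the control is an approximate dual (Rogers-type) martingale.

```python
import numpy as np
from math import erf, exp, log, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_put(s, k, r, sigma, tau):
    """Black-Scholes European put price."""
    if tau <= 0:
        return max(k - s, 0.0)
    d1 = (log(s / k) + (r + 0.5 * sigma ** 2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    return k * exp(-r * tau) * norm_cdf(-d2) - s * norm_cdf(-d1)

rng = np.random.default_rng(0)
s0, k, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_ex, n_paths = 12, 20_000
dt = T / n_ex
times = dt * np.arange(1, n_ex + 1)

# GBM paths sampled on the exercise grid.
z = rng.standard_normal((n_paths, n_ex))
paths = s0 * np.exp(np.cumsum((r - 0.5 * sigma ** 2) * dt + sigma * sqrt(dt) * z, axis=1))

m0 = bs_put(s0, k, r, sigma, T)            # initial value of the control martingale
plain, controlled = [], []
for path in paths:
    for j, t in enumerate(times):
        m = exp(-r * t) * bs_put(path[j], k, r, sigma, T - t)   # martingale M_t
        payoff = exp(-r * t) * max(k - path[j], 0.0)            # discounted payoff
        # naive exercise rule: stop once intrinsic value reaches the European value,
        # or at maturity
        if payoff >= m or j == n_ex - 1:
            plain.append(payoff)                    # plain primal estimator
            controlled.append(payoff - (m - m0))    # same mean, smaller variance
            break

for name, est in (("plain", plain), ("with martingale control", controlled)):
    est = np.asarray(est)
    print(f"{name:24s} value = {est.mean():.4f}   std. error = {est.std() / sqrt(len(est)):.5f}")
```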


[6] 2602.10071

Deep Learning for Electricity Price Forecasting: A Review of Day-Ahead, Intraday, and Balancing Electricity Markets

Electricity price forecasting (EPF) plays a critical role in power system operation and market decision making. While existing review studies have provided valuable insights into forecasting horizons, market mechanisms, and evaluation practices, the rapid adoption of deep learning has introduced increasingly diverse model architectures, output structures, and training objectives that have yet to be analyzed in sufficient depth. This paper presents a structured review of deep learning methods for EPF in day-ahead, intraday, and balancing markets. Specifically, we introduce a unified taxonomy that decomposes deep learning models into backbone, head, and loss components, providing a consistent evaluation perspective across studies. Using this framework, we analyze recent trends in deep learning components across markets. Our study highlights the shift toward probabilistic, microstructure-centric, and market-aware designs. We further identify key gaps in the literature, including limited attention to intraday and balancing markets and the need for market-specific modeling strategies, thereby helping to consolidate and advance existing review studies.
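To make the taxonomy concrete, here is a minimal PyTorch-style sketch of the backbone / head / loss decomposition the review uses as its evaluation lens. The specific components chosen below (an LSTM backbone, a quantile head over a 24-hour horizon, a pinball loss) are illustrative assumptions, not choices endorsed by the paper.

```python
import torch
import torch.nn as nn

class LSTMBackbone(nn.Module):
    """Backbone: maps a window of past prices/features to a latent vector."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)

    def forward(self, x):                 # x: (batch, lookback, n_features)
        out, _ = self.lstm(x)
        return out[:, -1, :]              # last hidden state as representation

class QuantileHead(nn.Module):
    """Head: turns the representation into the output structure, here a set
    of quantile forecasts for the next 24 hourly prices."""
    def __init__(self, hidden=64, horizon=24, quantiles=(0.1, 0.5, 0.9)):
        super().__init__()
        self.quantiles = torch.tensor(quantiles)
        self.horizon = horizon
        self.linear = nn.Linear(hidden, horizon * len(quantiles))

    def forward(self, z):
        q = self.linear(z)
        return q.view(-1, self.horizon, len(self.quantiles))

def pinball_loss(pred, target, quantiles):
    """Loss: probabilistic training objective (quantile / pinball loss)."""
    diff = target.unsqueeze(-1) - pred              # (batch, horizon, n_quantiles)
    return torch.maximum(quantiles * diff, (quantiles - 1) * diff).mean()

# Wiring the three components together for one training step on dummy data.
backbone, head = LSTMBackbone(n_features=8), QuantileHead()
x = torch.randn(32, 168, 8)              # one week of hourly input features
y = torch.randn(32, 24)                  # next-day hourly prices (dummy targets)
pred = head(backbone(x))
loss = pinball_loss(pred, y, head.quantiles)
loss.backward()
```

Under this decomposition, two studies that differ only in the head (point vs. probabilistic output) or only in the loss can be compared on equal footing, which is the consistency the review's taxonomy aims to provide.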


[7] 2602.09608

Designing a Token Economy: Incentives, Governance, and Tokenomics

In recent years, tokenomic systems, i.e., decentralized systems that use cryptographic tokens to represent value and rights, have evolved considerably. Growing complexity in incentive structures has expanded the applicability of blockchain beyond purely transactional use. Existing research predominantly examines token economies within specific use cases, proposes conceptual frameworks, or studies isolated aspects such as governance, incentive design, and tokenomics. However, the literature offers limited empirically grounded, end-to-end guidance that integrates these dimensions into a coherent, step-by-step design approach informed by concrete token-economy development efforts. To address this gap, this paper presents the Token Economy Design Method (TEDM), a design-science artifact that synthesizes stepwise design propositions for token-economy design across incentives, governance, and tokenomics. TEDM is derived through an iterative qualitative synthesis of prior contributions and refined through a co-designed case. The artifact is formatively evaluated via the Currynomics case study and additional expert interviews. Currynomics is an ecosystem that maintains the Redcurry stablecoin, using real estate as the underlying asset. TEDM is positioned as reusable design guidance that facilitates the analysis of the foundational requirements of tokenized ecosystems. The distinguishing feature of the proposed approach is its focus on the socio-technical context of the system and on the early stages of its design.


[8] 2602.09967

Incentive Pareto Efficiency in Monopoly Insurance Markets with Adverse Selection

We study a monopolistic insurance market with hidden information, where the agent's type $\theta$ is private information, unobservable to the insurer and drawn from a continuum of types. The hidden type affects both the loss distribution and the risk attitude of the agent. Within this framework, we show that a menu of contracts is incentive efficient if and only if it maximizes social welfare, subject to incentive compatibility and individual rationality constraints. This equivalence holds for general concave utility functionals. In the special case of Yaari Dual Utility, we provide a semi-explicit characterization of optimal incentive-efficient menus of contracts. We do this under two different settings: (i) the first assumes that types are ordered such that larger values of $\theta$ correspond to more risk-averse types who face stochastically larger losses; whereas (ii) the second assumes that larger values of $\theta$ correspond to less risk-averse types who face stochastically larger losses. In both settings, the structure of optimal incentive-efficient menus of contracts depends on the level of the social welfare weight. Moreover, at the optimum, higher types receive greater coverage in exchange for higher premia. Additionally, optimal menus leave the lowest type indifferent, with the insurer absorbing all surplus from the lowest type; and they exhibit efficiency at the top, that is, the highest type receives full coverage.
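For readers less familiar with the terminology, the equivalence can be read against a schematic welfare program of the following form (illustrative notation only; the paper's exact utility functionals, welfare weights, and constraint formulation may differ):

\begin{align*}
\max_{\{\pi(\cdot),\, I(\cdot)\}} \quad & \int_{\Theta} \Big[ \lambda\, V(\theta,\theta) \;+\; (1-\lambda)\,\big(\pi(\theta) - \mathbb{E}\!\left[I(\theta, X_\theta)\right]\big) \Big]\, \mathrm{d}F(\theta) \\
\text{subject to} \quad & V(\theta,\theta) \;\ge\; V(\theta,\theta') \quad \text{for all } \theta, \theta' \in \Theta \qquad \text{(incentive compatibility)} \\
 & V(\theta,\theta) \;\ge\; V_0(\theta) \qquad\; \text{for all } \theta \in \Theta \qquad\quad\ \text{(individual rationality)}
\end{align*}

where $V(\theta,\theta')$ denotes type $\theta$'s utility from the premium-indemnity pair designed for type $\theta'$, $X_\theta$ is the type-dependent loss, $V_0(\theta)$ is the reservation utility of remaining uninsured, and $\lambda$ is the social welfare weight whose level, per the abstract, shapes the structure of the optimal menu.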


[9] 2111.12799

The Macroeconomic Effects of Corporate Tax Reforms

Using aggregate, sectoral, and firm-level data, this paper examines the effects of two major U.S. corporate tax cuts. The Tax Cuts and Jobs Act (TCJA-17) led to large shareholder payouts but modest aggregate stimulus, while Kennedy's 1960s tax cuts stimulated output and investment with minimal payout impact. To explain this divergence, I incorporate tax depreciation policy and a pass-through business sector into a neoclassical growth model. The model suggests that accelerated depreciation and a large pass-through share dampen stimulus from corporate tax rate reductions, and that Kennedy's cuts boosted output four times more per dollar of lost revenue than the TCJA-17.


[10] 2505.07820

Revisiting the Excess Volatility Puzzle Through the Lens of the Chiarella Model

We amend and extend the Chiarella model of financial markets to deal with arbitrary long-term value drifts in a consistent way. This allows us to improve upon existing calibration schemes, opening the possibility of calibrating individual monthly time series instead of classes of time series. The technique is employed on spot prices of four asset classes from ca. 1800 onward (stock indices, bonds, commodities, currencies). The so-called fundamental value is a direct output of the calibration, which allows us to (a) quantify the amount of excess volatility in these markets, which we find to be large (e.g. a factor $\approx$ 4 for stock indices) and consistent with previous estimates; and (b) determine the distribution of mispricings (i.e. the difference between market price and value), which we find in many cases to be bimodal. Both findings are strongly at odds with the Efficient Market Hypothesis. We also study in detail the 'sloppiness' of the calibration, that is, the directions in parameter space that are weakly constrained by data. The main conclusions of our study are remarkably consistent across different asset classes, and reinforce the hypothesis that the medium-term fate of financial markets is determined by a tug-of-war between trend followers and fundamentalists.
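For intuition about the tug-of-war mechanism, the following Python sketch simulates a Chiarella-type model in which the log price is pulled toward a slowly evolving fundamental value by fundamentalists and pushed by a saturating trend signal followed by trend followers. The functional form follows the standard extended Chiarella setup, but all parameter values are illustrative rather than the paper's calibrated ones.

```python
import numpy as np

# Minimal simulation sketch of an extended Chiarella-type model: log price p
# reverts toward a fundamental value v (fundamentalists) and responds to a
# saturating trend signal tanh(gamma * m), where m is an exponential moving
# average of past price changes (trend followers).  Parameters are illustrative.

rng = np.random.default_rng(42)

T, dt = 200 * 12, 1.0            # monthly steps, roughly 200 years
kappa, beta, gamma, alpha = 0.05, 0.1, 2.0, 0.2
sigma_p, sigma_v, drift_v = 0.04, 0.02, 0.002

p = np.zeros(T)                  # log price
v = np.zeros(T)                  # log fundamental value
m = 0.0                          # trend signal (EMA of price changes)

for t in range(1, T):
    v[t] = v[t - 1] + drift_v * dt + sigma_v * np.sqrt(dt) * rng.standard_normal()
    dp = (kappa * (v[t - 1] - p[t - 1]) + beta * np.tanh(gamma * m)) * dt \
         + sigma_p * np.sqrt(dt) * rng.standard_normal()
    p[t] = p[t - 1] + dp
    m += -alpha * m * dt + alpha * dp          # EMA update of the trend signal

mispricing = p - v
print("std of price changes / std of value changes (excess-volatility proxy):",
      np.std(np.diff(p)) / np.std(np.diff(v)))
print("share of time spent more than 20% away from value:",
      np.mean(np.abs(mispricing) > 0.2))
```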


[11] 2510.12049

Generative AI and Firm Productivity: Field Experiments in Online Retail

We quantify the impact of Generative Artificial Intelligence (GenAI) on firm productivity through a series of large-scale randomized field experiments involving millions of users and products at a leading cross-border online retail platform. Over six months in 2023-2024, GenAI-based enhancements were integrated into seven consumer-facing business workflows. We find that GenAI adoption significantly increases sales, with treatment effects ranging from $0\%$ to $16.3\%$, depending on GenAI's marginal contribution relative to existing firm practices. Because inputs and prices were held constant across experimental arms, these gains map directly into total factor productivity improvements. Across the four GenAI applications with positive sales effects, the implied annual incremental value is approximately $\$5$ per consumer, an economically meaningful impact given the retailer's scale and the early stage of GenAI adoption. The primary mechanism operates through higher conversion rates, consistent with GenAI reducing frictions and improving consumer experience. Importantly, these effects are not associated with worse post-purchase outcomes, as product return rates and customer ratings do not deteriorate. Finally, we document substantial demand-side heterogeneity, with larger gains for less experienced consumers. Our findings provide novel, large-scale causal evidence on the productivity effects of GenAI in online retail, highlighting both its immediate value and broader potential.


[12] 2511.13277

Stationary Distributions of the Mode-switching Chiarella Model

We derive the stationary distribution in various regimes of the extended Chiarella model of financial markets. This model is a stochastic nonlinear dynamical system that encompasses dynamical competition between a (saturating) trend-following component and a mean-reverting component. We find the so-called mispricing distribution and the trend distribution to be unimodal Gaussians in the small-noise, small-feedback limit. Slow trends yield Gaussian-cosh mispricing distributions that allow for a P-bifurcation: unimodality occurs when mean reversion is fast, bimodality when it is slow. The critical point of this bifurcation is established; it refutes previous ad-hoc reports and differs from the bifurcation condition of the dynamical system itself. For fast, weakly coupled trends, deploying the Furutsu-Novikov theorem reveals that the resulting distribution is again a unimodal Gaussian. For the same case with higher coupling we disprove another claim from the literature: bimodal trend distributions do not generally imply bimodal mispricing distributions. The latter become bimodal only for stronger trend feedback. The exact solution in this last regime unfortunately remains beyond our reach.


[13] 2512.11731

Transfer Learning (Il)liquidity

The estimation of the Risk Neutral Density (RND) implicit in option prices is challenging, especially in illiquid markets. We introduce the Deep Log-Sum-Exp Neural Network, an architecture that leverages deep and transfer learning to address RND estimation in the presence of irregular and illiquid strikes. We prove key statistical properties of the model and the consistency of the estimator. We illustrate the benefits of transfer learning for improving RND estimation under severe illiquidity through Monte Carlo simulations, and we test the method empirically on SPX data, comparing it with popular estimation methods. Overall, our framework recovers the RND under extreme illiquidity with as few as three option quotes.
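To see why a log-sum-exp style parameterization is natural here, the Python sketch below builds a call-price curve as a weighted sum of softplus (log-sum-exp) terms: such a curve is automatically smooth, decreasing, and convex in the strike, so its second strike-derivative is a nonnegative density via the Breeden-Litzenberger relation. This is only a hand-parameterized toy with hypothetical nodes and weights (and zero interest rate), not the paper's Deep Log-Sum-Exp Neural Network or its training and transfer-learning procedure.

```python
import numpy as np

# Toy log-sum-exp parameterization of a call curve:
#   C(K) = sum_i w_i * (1 / beta) * log(1 + exp(beta * (s_i - K)))
# is convex and decreasing in K, so q(K) = C''(K) is a valid (risk-neutral)
# density by Breeden-Litzenberger (discounting ignored for simplicity).
beta = 0.15
nodes = np.array([80.0, 100.0, 120.0])     # hypothetical "scenario" levels s_i
weights = np.array([0.25, 0.50, 0.25])     # hypothetical weights, summing to 1

def call_curve(K):
    z = beta * (nodes - K[:, None])
    return (weights / beta * np.logaddexp(0.0, z)).sum(axis=1)

def implied_density(K):
    # second strike-derivative of each softplus term is beta * sig * (1 - sig)
    sig = 1.0 / (1.0 + np.exp(-beta * (nodes - K[:, None])))
    return (weights * beta * sig * (1.0 - sig)).sum(axis=1)

strikes = np.linspace(20.0, 200.0, 2001)
C = call_curve(strikes)
q = implied_density(strikes)

print("curve is decreasing and convex:",
      bool(np.all(np.diff(C) < 0)), bool(np.all(np.diff(C, 2) > -1e-10)))
print("implied density integrates to ~1:",
      round(float(np.sum(q) * (strikes[1] - strikes[0])), 4))
```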


[14] 2512.24968

The Impact of LLMs on Online News Consumption and Production

Large language models (LLMs) change how consumers acquire information online; their bots also crawl news publishers' websites for training data and to answer consumer queries; and they provide tools that can lower the cost of content creation. These changes lead to predictions of adverse impact on news publishers in the form of lowered consumer demand, reduced demand for newsroom employees, and an increase in news "slop." Consequently, some publishers responded strategically by blocking LLM access to their websites using the robots.txt file standard. Using high-frequency granular data, we document four effects related to the predicted shifts in news publishing following the introduction of generative AI (GenAI). First, we find a moderate decline in traffic to news publishers occurring after August 2024. Second, using a difference-in-differences approach, we find that blocking GenAI bots can be associated with a reduction in total website traffic to large publishers compared to not blocking. Third, on the hiring side, we do not find evidence that LLMs are replacing editorial or content-production jobs yet; the share of new editorial and content-production job listings increases over time. Fourth, regarding content production, we find no evidence that large publishers increased text volume; instead, they significantly increased rich content and use more advertising and targeting technologies. Together, these findings provide early evidence of some unforeseen impacts of the introduction of LLMs on news production and consumption.


[15] 2602.08955

Platform Design, Earnings Transparency and Minimum Wage Policies: Evidence from A Natural Experiment on Lyft

We study the effects of a significant design and policy change at a major ridesharing platform that altered both provider earnings and platform transparency. We examine how it affected outcomes for drivers, riders, and the platform, and provide managerial insights on balancing competing stakeholder interests while avoiding unintended consequences. In February 2024, Lyft introduced a policy guaranteeing drivers a minimum fraction of rider payments while increasing per-ride earnings transparency. The staggered rollout, first in major markets, created a natural experiment for examining how earnings guarantees and transparency affect ride availability and driver engagement. Using trip-level data from over 47 million rides across a major market and adjacent markets over six months, we apply dynamic staggered difference-in-differences models combined with a geographic border strategy to estimate causal effects on supply, demand, ride production, and platform performance. We find that the policy led to substantial increases in driver engagement, with distinct effects from the guarantee and from transparency. Drivers increased working hours and utilization, resulting in more completed trips and higher per-hour and per-trip earnings, with stronger effects among drivers with lower pre-policy earnings and greater income uncertainty. Increased supply also generated positive spillovers on demand. We also find evidence that greater transparency may induce strategic driver behavior. In ongoing work, we develop a counterfactual simulation framework linking driver supply and rider intents to ride production, illustrating how small changes in driver choices could further amplify policy effects. Our study shows how platform-led interventions can serve as an intriguing alternative to government-led minimum-pay regulation and provides new strategic insights into managing platform change.


[16] 2304.03042

Rough volatility, path-dependent PDEs and weak rates of convergence

In the setting of stochastic Volterra equations, and in particular rough volatility models, we show that conditional expectations are the unique classical solutions to path-dependent PDEs. The latter arise from the functional Itô formula developed by [Viens, F., & Zhang, J. (2019). A martingale approach for fractional Brownian motions and related path dependent PDEs. Ann. Appl. Probab.]. We then leverage these tools to study weak rates of convergence for discretised stochastic integrals of smooth functions of a Riemann-Liouville fractional Brownian motion with Hurst parameter $H \in (0,\frac{1}{2})$. These integrals approximate log-stock prices in rough volatility models. We obtain the optimal weak error rates of order $1$ if the test function is quadratic and of order $(3H+\frac{1}{2})\wedge1$ if the test function is five times differentiable; in particular these conditions are independent of the value of $H$.
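The object being discretised can be illustrated with a short Python experiment: build a Riemann-Liouville fBm from Brownian increments via a crude left-point kernel discretisation, form the left-point Riemann sum of the stochastic integral driven by the same increments, and track a weak quantity as the grid is refined. The grid sizes, the choice $f(x)=\exp(0.3x)$, the quadratic test function, and the crude kernel discretisation are illustrative assumptions, not the scheme or the rates analysed in the paper.

```python
import numpy as np

# Toy weak-convergence experiment: I_N = sum_k f(Wh_{t_k}) (W_{t_{k+1}} - W_{t_k}),
# where Wh is a Riemann-Liouville fBm with Hurst H < 1/2 built from the same
# Brownian increments.  We report the Monte Carlo estimate of E[g(I_N)] for the
# quadratic test function g(x) = x^2 on successively finer grids.
rng = np.random.default_rng(7)
H, T, n_paths = 0.1, 1.0, 20_000
f = lambda x: np.exp(0.3 * x)        # smooth "volatility" function (illustrative)
g = np.square                        # quadratic test function

def weak_value(n_steps):
    dt = T / n_steps
    t = dt * np.arange(n_steps)
    dW = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
    # Left-point kernel discretisation of Wh_{t_k} = int_0^{t_k} (t_k - s)^(H-1/2) dW_s
    diff = np.maximum(t[:, None] - t[None, :], 0.0)
    K = np.zeros_like(diff)
    mask = diff > 0
    K[mask] = diff[mask] ** (H - 0.5)
    Wh = dW @ K.T                    # Wh[:, k] = sum_{j < k} K[k, j] * dW[:, j]
    I = np.sum(f(Wh) * dW, axis=1)   # discretised (left-point) stochastic integral
    return g(I).mean()

for n in (16, 64, 256):
    print(f"n_steps = {n:4d}   E[g(I_N)] ~ {weak_value(n):.4f}")
```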


[17] 2505.08654

Holistic Multi-Scale Inference of the Leverage Effect: Efficiency under Dependent Microstructure Noise

This paper addresses the long-standing challenge of estimating the leverage effect from high-frequency data contaminated by dependent, non-Gaussian microstructure noise. We depart from the conventional reliance on pre-averaging or volatility "plug-in" methods by introducing a holistic multi-scale framework that operates directly on the leverage effect. We propose two novel estimators: the Subsampling-and-Averaging Leverage Effect (SALE) and the Multi-Scale Leverage Effect (MSLE). Central to our approach is a shifted window technique that constructs a noise-unbiased base estimator, significantly simplifying the multi-scale architecture. We provide a rigorous theoretical foundation for these estimators, establishing central limit theorems and stable convergence results that remain valid under both noise-free and dependent-noise settings. The primary contribution to estimation efficiency is a specifically designed weighting strategy for the MSLE estimator. By optimizing the weights based on the asymptotic covariance structure across scales and incorporating finite-sample variance corrections, we achieve substantial efficiency gains over existing benchmarks. Extensive simulation studies and an empirical analysis of 30 U.S. assets demonstrate that our framework consistently yields smaller estimation errors and superior performance in realistic, noisy market environments.
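As a rough illustration of the subsampling-and-averaging idea (not the paper's SALE or MSLE construction), the Python sketch below simulates a Heston-type path with negative leverage, adds microstructure noise, estimates local spot variances on blocks, forms a crude covariation between spot-variance changes and price increments, and averages the result over several subsampling offsets. This naive version remains noisy and biased, which is exactly the kind of deficiency the paper's shifted-window base estimator and multi-scale weighting are designed to remove; all numerical choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Simulate a Heston-type path with a negative leverage effect ------------
n, dt = 23_400, 1.0 / 23_400          # one "day" of 1-second observations
kappa, theta, xi, rho = 5.0, 0.04, 0.5, -0.7
v = np.empty(n + 1); v[0] = theta     # spot variance
x = np.empty(n + 1); x[0] = np.log(100.0)
z1, z2 = rng.standard_normal(n), rng.standard_normal(n)
for i in range(n):
    dw_v = np.sqrt(dt) * z1[i]
    dw_x = np.sqrt(dt) * (rho * z1[i] + np.sqrt(1 - rho ** 2) * z2[i])
    v[i + 1] = max(v[i] + kappa * (theta - v[i]) * dt + xi * np.sqrt(v[i]) * dw_v, 1e-8)
    x[i + 1] = x[i] - 0.5 * v[i] * dt + np.sqrt(v[i]) * dw_x
y = x + 2e-4 * rng.standard_normal(n + 1)      # observed prices with microstructure noise

# --- Crude subsample-and-average leverage estimate ---------------------------
def leverage_estimate(prices, window):
    r = np.diff(prices)
    n_blocks = len(r) // window
    r = r[: n_blocks * window].reshape(n_blocks, window)
    spot_var = (r ** 2).sum(axis=1) / (window * dt)   # local spot-variance estimates
    block_ret = r.sum(axis=1)                          # block price increments
    # naive covariation between spot-variance changes and price increments;
    # without the corrections developed in the paper this is biased and noisy
    return np.sum(np.diff(spot_var) * block_ret[1:])

window, offsets = 300, range(0, 300, 60)
estimates = [leverage_estimate(y[o:], window) for o in offsets]
print("subsample-and-average estimate:", np.mean(estimates))
print("true integrated leverage <x, v>_T:", np.sum(np.diff(v) * np.diff(x)))
```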


[18] 2505.19013

Faithful Group Shapley Value

Data Shapley is an important tool for data valuation, which quantifies the contribution of individual data points to machine learning models. In practice, group-level data valuation is desirable when data providers contribute data in batches. However, we identify that existing group-level extensions of Data Shapley are vulnerable to shell-company attacks, where strategic group splitting can unfairly inflate valuations. We propose the Faithful Group Shapley Value (FGSV), which uniquely defends against such attacks. Building on original mathematical insights, we develop a provably fast and accurate approximation algorithm for computing FGSV. Empirical experiments demonstrate that our algorithm significantly outperforms state-of-the-art methods in computational efficiency and approximation accuracy, while ensuring faithful group-level valuation.
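The shell-company attack is easy to reproduce in a few lines of Python: when coalition utility is a concave function of the total data contributed and groups are valued with the standard "groups as players" Shapley value, a provider can increase its total valuation simply by splitting into two shells. The utility function and group sizes below are illustrative, and the code shows the vulnerable baseline, not the paper's FGSV.

```python
import itertools
import math
import numpy as np

def group_shapley(sizes, utility):
    """Exact Shapley values treating each data group as a single player."""
    n = len(sizes)
    values = np.zeros(n)
    for order in itertools.permutations(range(n)):
        for pos, i in enumerate(order):
            before = sum(sizes[j] for j in order[:pos])
            values[i] += utility(before + sizes[i]) - utility(before)
    return values / math.factorial(n)

utility = np.sqrt          # concave proxy for model accuracy vs. total data size

honest = group_shapley([4, 4], utility)        # provider A (size 4) and provider B (size 4)
attack = group_shapley([2, 2, 4], utility)     # A splits into two shells of size 2

print("A's value when honest      :", round(float(honest[0]), 4))
print("A's total value after split:", round(float(attack[0] + attack[1]), 4))
```

Running this, provider A's combined valuation rises after splitting (from about 1.41 to about 1.54 with these toy numbers), even though the data it contributes is unchanged; FGSV is designed so that such splits cannot pay off.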