New articles on Quantitative Finance


[1] 2604.13150

Unveiling the Nexus Between Economic Complexity and Environmental Sustainability: Evidence from BRICS-T Countries

This study analyses the impact of economic complexity on environmental performance in BRICS-T countries. The analysis uses annual data for 1999-2021, the Durbin-Hausman cointegration test, and the Augmented Mean Group (AMG) estimator; the robustness of the panel AMG results is tested with the CCEMG and CS-ARDL methods. The results indicate that economic complexity has a positive impact on environmental performance: a 1% increase in the economic complexity index raises environmental performance in BRICS-T countries by between 0.020% and 1.243%. However, economic growth, energy intensity, and population density were found to have a negative impact on environmental performance. Renewable energy use, in contrast, contributes positively to environmental performance.


[2] 2604.13224

Micro and Macro Perspectives on Production-Based Markups

We review the "production approach" to estimating markups, the ratio of price to marginal cost. The approach is uniquely scalable: it requires no model of consumer demand or market structure and applies broadly across firms, industries, and time. Our organizing insight is that the production-based markup is a residual. Like the Solow residual, it is clean in theory but potentially contaminated by misspecification and mismeasurement. This framing helps explain why small differences in implementation can produce starkly different results from the same data. In some cases, markups have risen sharply. In others, they have not. Despite the disagreements in the literature, the importance of understanding and measuring market power cannot be overstated. We provide conceptual rationales for this disagreement, offer practical guidance on data and estimation, and call for greater transparency about how much of the variation attributed to markups may instead reflect technology.


[3] 2604.13260

Which Voices Move Markets? Speaker Identity and the Cross-Section of Post-Earnings Returns

We utilize FinBERT, a domain-specific transformer model, to parse 6.5 million sentences from 16,428 S&P 500 quarterly earnings call transcripts (2015-2025) and demonstrate that post-earnings stock returns are not equally affected by all speakers in a conference call. Our section-weighted sentiment, with empirically derived speaker weights (Analyst 49%, CFO 30%, Executive 16%, Other 5%), achieves an out-of-sample Spearman IC of 0.142 versus 0.115 in-sample, generates monthly long-short alpha of 2.03% unexplained by the Fama-French five-factor model (t = 6.49), and remains significant after controlling for standardized unexpected earnings (SUE). FinBERT section-weighted sentiment entirely subsumes the Loughran-McDonald dictionary approach (FinBERT t = 5.90; LM t = 0.86 in the combined specification). Signal decay analysis and cumulative abnormal return charts confirm gradual price adjustment consistent with sluggish assimilation of soft information. All results undergo rigorous out-of-sample validation with an explicit temporal split, yielding improved rather than deteriorated predictive power.
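The speaker-weighting step can be sketched as follows. The role weights are those reported in the abstract; the role labels, sentence scores, and the renormalisation over roles actually present in a call are illustrative assumptions, not the paper's exact aggregation procedure:

```python
import numpy as np

# Speaker-role weights reported in the abstract (Analyst 49%, CFO 30%,
# Executive 16%, Other 5%); the sentence scores below are hypothetical.
WEIGHTS = {"Analyst": 0.49, "CFO": 0.30, "Executive": 0.16, "Other": 0.05}

def call_sentiment(sentences):
    """Aggregate per-sentence FinBERT-style scores into one call-level signal.

    `sentences` is a list of (speaker_role, score) pairs with score in [-1, 1].
    Each role's sentences are averaged, then roles are combined with the
    fixed weights above (renormalised over the roles actually present).
    """
    by_role = {}
    for role, score in sentences:
        by_role.setdefault(role, []).append(score)
    total_w = sum(WEIGHTS[r] for r in by_role)
    return sum(WEIGHTS[r] * np.mean(s) for r, s in by_role.items()) / total_w

example = [("CFO", 0.8), ("CFO", 0.4), ("Analyst", -0.2), ("Executive", 0.5)]
signal = call_sentiment(example)
```

Because analyst speech carries roughly half the weight, a call where analysts push back sharply can flip the signal even when management's prepared remarks are upbeat.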


[4] 2604.13334

Against a Universal Trading Strategy: No-Arbitrage, No-Free-Lunch, and Adversarial Cantor Diagonalization

We investigate the impossibility of universally winning trading strategies -- those generating strict profit across all market trajectories -- through three distinct mathematical paradigms. Fundamentally, under standard admissibility constraints, the existence of such a strategy is a strict subset of strong arbitrage, which is mathematically precluded in competitive markets admitting an equivalent martingale measure. Beyond this rigorous measure-theoretic foundation, we explore analogous limitations in two alternative modeling regimes. Combinatorially, the No-Free-Lunch theorem demonstrates that outperformance requires exploitation of non-uniform market structure, as uniform averaging precludes universal dominance. Computationally, a Turing diagonalization argument constructs an adversarial environment that defeats any computable trading algorithm, shifting the impossibility from exogenous price paths to adaptive adversaries. These mathematical limits are framed by a time-reversal heuristic that establishes a formal analogy between financial martingale measures and thermodynamic detailed balance, resolving the Maxwell's Demon analogy for markets without relying on physically irrelevant Landauer erasure costs. Using the Wheel Options Strategy as a case study, we demonstrate that strategies succeeding "for all practical purposes" (FAPP) inherently depend on transient regime assumptions, meaning their automated execution systematically amplifies tail risks.


[5] 2604.13458

Interpretable Systematic Risk around the Clock

In this paper, I present the first comprehensive, around-the-clock analysis of systematic jump risk by combining high-frequency market data with contemporaneous news narratives identified as the underlying causes of market jumps. These narratives are retrieved and classified using a state-of-the-art open-source reasoning LLM. Decomposing market risk into interpretable jump categories reveals significant heterogeneity in risk premia, with macroeconomic news commanding the largest and most persistent premium. Leveraging this insight, I construct an annually rebalanced real-time Fama-MacBeth factor-mimicking portfolio that isolates the most strongly priced jump risk, achieving a high out-of-sample Sharpe ratio and delivering significant alphas relative to standard factor models. The results highlight the value of around-the-clock analysis and LLM-based narrative understanding for identifying and managing priced risks in real time.


[6] 2604.13545

Waiting for Help: Timely Access to Psychological Support for Young Adults Exposed to Parental Substance Misuse

Access to mental health care is often rationed through waiting lists, yet there is limited causal evidence on the consequences of delayed access. We study whether eliminating waiting time for psychological support improves outcomes for young adults who grew up with parental substance misuse. Using a randomized waitlist-controlled trial in Denmark combined with survey and administrative data, we find that immediate access leads to sizable short-run improvements in psychological health. These gains persist three to four years after randomization, even after both groups have received the intervention. By contrast, we find limited evidence of large average effects on broader health or labor market outcomes. Our results highlight the importance of treatment timing in capacity-constrained settings.


[7] 2604.13597

Daycare Matching with Siblings: Social Implementation and Welfare Evaluation

In centralized assignment problems, agents may have preferences over joint rather than individual assignments, such as couples in residency matching or siblings in school choice and daycare. Standard preference estimation methods typically ignore such complementarities. This paper develops an empirical framework that explicitly incorporates them. Using data from daycare assignment in a municipality in Japan, we estimate a model in which families incur both additional commuting distance and a fixed non-distance disutility when siblings are assigned to different facilities. We find that split assignment generates a large disutility, equivalent to more than twice the average commuting distance. We then simulate counterfactual assignment policies that vary the strength of sibling priority and evaluate welfare. The sibling priority reform that we designed and that was implemented in 2024 increases welfare by 6.4% while reducing inequality in assignment rates across sibling groups; models that ignore sibling complementarities substantially understate these gains. At the same time, we uncover a clear efficiency-equity tradeoff: along the frontier, increasing mean welfare by 100 meters is associated with an increase in inequality of about 1.7 percentage points, and the welfare-maximizing policy reverses much of the reform's reduction in inequality, largely through the displacement of households without siblings.


[8] 2604.13603

On the Design of Stochastic Electricity Auctions

Electricity is typically traded in day-ahead auctions because many power system decisions, such as unit commitment, must be made in advance. However, when wind and solar generators sell power one day ahead, they face uncertainty about their actual production. In current day-ahead auctions, this uncertainty cannot be directly communicated, leading to inefficient use of renewable energy and suboptimal system decisions. We show how this problem can be addressed using the concept of equilibrium under uncertainty from microeconomic theory. In particular, we demonstrate that electricity contracts should be conditioned not only on the time and location of delivery, but also on the state of the world (e.g., whether it will be windy or calm). This requires a precise definition of the state of the world. Since there are infinitely many possible definitions, criteria are needed to select among them. We develop such criteria and show that the resulting states correspond to solutions of an optimal partitioning problem. Finally, we illustrate how these states can be computed and interpreted using a case study of offshore wind farms in the European North Sea.


[9] 2604.13798

Higher-order ATM asymptotics for the CGMY model via the characteristic function

Using only the characteristic function, we derive short-time at-the-money (ATM) call-price asymptotics for the exponential CGMY model with activity parameter $Y\in(1,2)$. The Lipton--Lewis formula expresses the normalized ATM call price, denoted $c(t,0)$, in terms of the characteristic exponent, which, upon rescaling at the rate $t^{-1/Y}$ from the $Y$-stable domain of attraction, yields $c(t,0) = d_{1} t^{1/Y} + d_{2} t + o(t)$ as $t\downarrow 0$. The first-order coefficient $d_{1}$ is the known stable limit from the domain of attraction of a symmetric $Y$-stable law, and $d_{2}$ is given by an explicit integral involving the characteristic exponent and the limiting stable exponent. We then extract closed-form higher-order coefficients by keeping the full Lipton--Lewis integrand intact and introducing a dynamic cutoff that partitions the domain into inner, core, and tail regions, establishing the expansion with controlled remainder. All coefficients are verified numerically against existing closed-form expressions where available.


[10] 2604.13896

Gender, Unpaid Work, and Social Norms in Young Italian Families: Evidence from Couples Time Diaries

Why do large gender inequalities in everyday life persist even as women strengthen their attachment to paid work? Existing evidence shows that women continue to do more unpaid work than men, but much of that evidence is based on individual diaries, says little about how inequality is jointly organized within couples, and rarely links daily time allocation to directly measured gender attitudes. This paper addresses that gap using the TIMES Observatory, an original survey of 1,928 co-resident couples with at least one child younger than 11 in Emilia-Romagna or Campania. The data combine matched partner diaries for one weekday and one weekend day with rich socio-economic information and direct measures of gender norms. We document three main findings. First, women do substantially more unpaid work and spend more time with children, while men do more paid work and enjoy more leisure without children. Second, these asymmetries remain sizeable even among dual full-time couples, implying that stronger female labor-market attachment does not by itself equalize daily life. Third, more traditional gender attitudes - especially among men - are descriptively associated with lower male participation in childcare and domestic work and with wider gaps in discretionary leisure. The analysis is descriptive rather than causal, but it shows that gender inequality within couples is visible not only in the amount of work performed, but also in the distribution of time that is genuinely discretionary.


[11] 2604.13998

The Revenue Effect of Demand Misspecification in Event Ticket Pricing

We study a finite-horizon dynamic pricing problem for event tickets with limited inventory and time-varying demand. The central practical difficulty is that the total demand function $L(t)$ is not observed directly and must be estimated from data, while pricing decisions are sensitive to its temporal shape. The paper examines how the accuracy of this estimate affects revenue. We consider a model in which sales intensity is driven by the total demand $L(t)$, a price-response function $v(p)$, and a time-dependent willingness-to-pay factor $\varphi(t)$. The factor $\varphi(t)$ plays a central role: it captures the increase in customers' willingness to pay as the event date approaches and makes the temporal profile of demand economically important for pricing. Within this framework, the updated numerical study evaluates a benchmark dynamic-programming policy across nine deterministic true-demand scenarios, a collection of feature-aware misspecifications of $L(t)$, and multiple environment regimes induced by $v(p)=e^{-\eta p}$, the deadline factor $\varphi(t)$, and inventory level $Q$. The reported summaries are based on stochastic simulation and a ratio-of-means relative-loss metric. The results show that a more accurate representation of the temporal demand profile leads to more effective pricing decisions and higher revenue. Over the full misspecification collection the aggregate relative revenue loss is $0.42\%$, the upper decile exceeds $1\%$, and the most expensive errors are omissions of late-demand components. The average effect is therefore modest but non-negligible, and it becomes stronger when deadline effects are pronounced and inventory is tight.
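A minimal simulation of the sales-intensity model described above, with intensity $L(t)\,v(p)\,\varphi(t)$ and $v(p)=e^{-\eta p}$, might look like this. All parameter values (the horizon, $\eta$, the inventory level, the flat $L(t)$, and the linear deadline factor) are hypothetical stand-ins, not the paper's calibration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not the paper's): horizon T, grid step dt,
# price sensitivity eta, inventory Q; L(t) is a flat total-demand profile
# and phi(t) raises willingness to pay as the event date approaches.
T, dt, eta, Q = 30.0, 0.1, 0.05, 200
L   = lambda t: 20.0                       # total-demand function
v   = lambda p: np.exp(-eta * p)           # price-response function
phi = lambda t: 1.0 + 0.5 * t / T          # deadline (willingness-to-pay) factor

def simulate_revenue(price, n_paths=2000):
    """Mean revenue of a fixed-price policy under the intensity L*v*phi."""
    grid = np.arange(0.0, T, dt)
    lam = np.array([L(t) * v(price) * phi(t) * dt for t in grid])
    revenue = np.zeros(n_paths)
    for i in range(n_paths):
        sales = np.minimum(rng.poisson(lam).cumsum(), Q)  # inventory cap Q
        revenue[i] = price * sales[-1]
    return revenue.mean()

rev_low, rev_high = simulate_revenue(10.0), simulate_revenue(60.0)
```

Replacing the true $L(t)$ with a misspecified estimate inside a pricing policy, and comparing revenue against the policy built on the true profile, reproduces the paper's relative-loss exercise in miniature.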


[12] 2604.14059

A Comparative Study of Dynamic Programming and Reinforcement Learning in Finite Horizon Dynamic Pricing

This paper provides a systematic comparison between Fitted Dynamic Programming (DP), where demand is estimated from data, and Reinforcement Learning (RL) methods in finite-horizon dynamic pricing problems. We analyze their performance across environments of increasing structural complexity, ranging from a single typology benchmark to multi-typology settings with heterogeneous demand and inter-temporal revenue constraints. Unlike simplified comparisons that restrict DP to low-dimensional settings, we apply dynamic programming in richer, multi-dimensional environments with multiple product types and constraints. We evaluate revenue performance, stability, constraint satisfaction behavior, and computational scaling, highlighting the trade-offs between explicit expectation-based optimization and trajectory-based learning.
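A toy version of the finite-horizon pricing DP being compared can be sketched as backward induction over (time, inventory). The Bernoulli-arrival demand model and all parameter values below are illustrative assumptions, not the paper's environments:

```python
import numpy as np

# Minimal finite-horizon dynamic-pricing DP (illustrative): T periods,
# inventory Q, one arrival per period who buys at price p with probability
# exp(-eta * p); V[t, q] is the optimal expected revenue-to-go.
T, Q, eta = 50, 10, 0.05
prices = np.linspace(1.0, 100.0, 100)
V = np.zeros((T + 1, Q + 1))       # terminal value V[T, q] = 0
policy = np.zeros((T, Q + 1))
for t in range(T - 1, -1, -1):
    for q in range(1, Q + 1):
        buy = np.exp(-eta * prices)
        # Bellman step: sell one unit at p, or carry inventory forward.
        value = buy * (prices + V[t + 1, q - 1]) + (1 - buy) * V[t + 1, q]
        V[t, q] = value.max()
        policy[t, q] = prices[value.argmax()]
```

The resulting policy exhibits the expected structure: scarcer inventory commands a higher price. An RL agent would instead estimate the same value function from sampled trajectories, which is where the stability and scaling trade-offs discussed above arise.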


[13] 2604.13311

Topological Complexity and Phase Space Stability: A Persistent Homology Approach to Cryptocurrency Risk

Traditional risk measures in finance, predominantly based on the second moment of return distributions or tail risk heuristics (VaR/CVaR), fail to account for the intrinsic geometric structure of market dynamics. This paper introduces a rigorous mathematical framework utilizing Topological Data Analysis (TDA) to quantify risk as the structural instability of the reconstructed phase space. By applying Takens' Delay Embedding Theorem to cryptocurrency log-returns, we generate a point cloud representation of the underlying attractor. We analyze the evolution of the filtration of Vietoris-Rips complexes to compute persistent homology groups $H_k$. We define a "Topological Persistence Norm" to characterize market regimes and propose a leverage calibration heuristic based on the persistence of 1-dimensional cycles. This approach provides a coordinate-free, stability-invariant metric for risk assessment that is robust to high-frequency noise.
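The first step of the pipeline, Takens' delay embedding of a return series into a point cloud, can be sketched as below. The synthetic series, embedding dimension, and lag are hypothetical choices, and the persistent-homology step (e.g. via the ripser package) is only indicated, not implemented:

```python
import numpy as np

def delay_embed(x, dim=3, tau=2):
    """Takens delay embedding: map a scalar series x_t to points
    (x_t, x_{t+tau}, ..., x_{t+(dim-1)tau}) in R^dim."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Hypothetical log-return series standing in for cryptocurrency data.
rng = np.random.default_rng(1)
prices = 100 * np.exp(np.cumsum(0.01 * rng.standard_normal(500)))
returns = np.diff(np.log(prices))
cloud = delay_embed(returns, dim=3, tau=2)
# `cloud` is the point cloud fed to a Vietoris-Rips filtration
# (e.g. ripser(cloud)['dgms'] with the ripser package) to obtain the
# persistence diagrams from which a persistence norm is computed.
```

The embedding dimension and lag would in practice be chosen by standard heuristics (false nearest neighbours, mutual information) rather than fixed a priori.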


[14] 2604.13478

Deepbullwhip: An Open-Source Simulation and Benchmarking for Multi-Echelon Bullwhip Analyses

The bullwhip effect remains operationally persistent despite decades of analytical research. Two computational deficiencies hinder progress: the absence of modular open-source simulation tools for multi-echelon inventory dynamics with asymmetric costs, and the lack of a standardized benchmarking protocol for comparing mitigation strategies across shared metrics and datasets. This paper introduces deepbullwhip, an open-source Python package that integrates a simulation engine for serial supply chains (with pluggable demand generators, ordering policies, and cost functions via abstract base classes, and a vectorized Monte Carlo engine achieving 50 to 90 times speedup) with a registry-based benchmarking framework shipping a curated catalog of ordering policies, forecasting methods, six bullwhip metrics, and demand datasets including WSTS semiconductor billings. Five sets of experiments on a four-echelon semiconductor chain demonstrate cumulative amplification of 427x (Monte Carlo mean across 1,000 paths), a stochastic filtering phenomenon at upstream tiers (CV = 0.01), super-exponential lead time sensitivity, and scalability to 20.8 million simulation cells in under 7 seconds. Benchmark experiments reveal a 155x disparity between synthetic AR(1) and real WSTS bullwhip severity under the Order-Up-To policy, and quantify the BWR-NSAmp tradeoff across ordering policies, demonstrating that no single metric captures policy quality.
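The core mechanics, a moving-average forecast feeding an order-up-to policy at each tier of a serial chain, can be sketched as follows. The demand process, forecast window, and lead time are illustrative, and this sketch is not the deepbullwhip package itself:

```python
import numpy as np

rng = np.random.default_rng(2)

def order_up_to(demand, lead_time=2, window=10):
    """One echelon: moving-average forecast + order-up-to policy.
    Returns the order stream this tier places on its upstream supplier."""
    orders = np.empty_like(demand)
    prev_target = demand[0] * (lead_time + 1)     # warm-start base-stock level
    for t in range(len(demand)):
        lo = max(0, t - window + 1)
        forecast = demand[lo : t + 1].mean()
        target = forecast * (lead_time + 1)       # order-up-to level
        orders[t] = max(0.0, demand[t] + target - prev_target)
        prev_target = target
    return orders

demand = 100 + 10 * rng.standard_normal(400)      # i.i.d. consumer demand
stream, ratios = demand, []
for tier in range(4):                             # four-echelon serial chain
    stream = order_up_to(stream)
    ratios.append(stream.var() / demand.var())    # cumulative bullwhip ratio
```

The cumulative variance ratio grows tier by tier, which is the amplification the package's metrics (and its 427x headline figure, under its own settings) quantify at scale.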


[15] 2505.13019

Characterizing asymmetric and bimodal long-term financial return distributions through quantum walks

The analysis of logarithmic return distributions defined over large time scales is crucial for understanding the long-term dynamics of asset price movements. For large time scales of the order of two trading years, the anticipated Gaussian behavior of the returns often does not emerge, and their distributions often exhibit a high level of asymmetry and bimodality. These features are inadequately captured by the majority of classical models of financial time series and return distributions. In the present analysis, we use a model based on the discrete-time quantum walk to characterize the observed asymmetry and bimodality. The quantum walk distinguishes itself from a classical diffusion process by the occurrence of interference effects, which allows for the generation of bimodal and asymmetric probability distributions. By capturing the broader trends and patterns that emerge over extended periods, this analysis complements traditional short-term models and offers opportunities to more accurately describe the probabilistic structure underlying long-term financial decisions.
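A minimal Hadamard-coin discrete-time quantum walk, the mechanism generating the bimodal distributions described above, can be simulated in a few lines. The symmetric initial coin state is a standard textbook choice, not necessarily the paper's parameterisation:

```python
import numpy as np

def hadamard_walk(steps=100):
    """Discrete-time quantum walk on the line with a Hadamard coin.
    State: amplitudes psi[pos, coin], coin 0 shifting left, coin 1 right."""
    n = 2 * steps + 1
    psi = np.zeros((n, 2), dtype=complex)
    # Symmetric initial coin state (|0> + i|1>) / sqrt(2) at the origin.
    psi[steps] = [1 / np.sqrt(2), 1j / np.sqrt(2)]
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for _ in range(steps):
        psi = psi @ H.T                   # coin toss at every position
        new = np.zeros_like(psi)
        new[:-1, 0] = psi[1:, 0]          # coin 0 component shifts left
        new[1:, 1] = psi[:-1, 1]          # coin 1 component shifts right
        psi = new
    return (np.abs(psi) ** 2).sum(axis=1)  # position distribution

p = hadamard_walk(100)
```

Unlike a classical random walk, whose distribution peaks at the origin, the interference pattern concentrates probability in two lobes near $\pm\,\mathrm{steps}/\sqrt{2}$, the bimodality exploited in the paper; an asymmetric coin or initial state skews the two lobes.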


[16] 2505.16654

Optimising the decision threshold in a weighted voting system: The case of the IMF's Board of Governors

In a weighted majority voting game, the players' weights are determined based on the constitutional planner's intentions. The weights are challenging to change in numerous cases, as they represent some desired disparity. However, the voting weights and the actual voting power do not necessarily coincide. Changing a decision threshold would offer some remedy. The International Monetary Fund (IMF) is one of the most important international organisations that uses a weighted voting system to make decisions. The voting weights in its Board of Governors depend on the quotas of the 191 member countries, which reflect their economic strengths to some extent. We analyse the connection between the decision threshold and the a priori voting power of the countries by calculating the Banzhaf indices for each threshold between 50% and 87%. The difference between quotas and voting powers is minimised if the decision threshold is 58% or 59%.
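For intuition, the Banzhaf index can be computed exactly on a toy weighted-majority game by enumerating coalitions (the IMF's 191 members would require sampling or generating-function methods instead). The weights and quota below are hypothetical:

```python
from itertools import combinations

def banzhaf(weights, quota):
    """Normalised Banzhaf power indices via exhaustive coalition enumeration.
    A player is critical in a coalition if its removal turns a winning
    coalition (weight >= quota) into a losing one."""
    n = len(weights)
    swings = [0] * n
    for r in range(n + 1):
        for coal in combinations(range(n), r):
            w = sum(weights[i] for i in coal)
            for i in coal:
                if w >= quota and w - weights[i] < quota:
                    swings[i] += 1
    total = sum(swings)
    return [s / total for s in swings]

# Toy game: weights [4, 2, 1] out of 7, quota 5.
power = banzhaf([4, 2, 1], 5)
```

Here the weight shares are 4/7, 2/7, and 1/7, but the power indices come out as 0.6, 0.2, and 0.2: the two smaller players are equally powerful despite unequal weights, which is exactly the weight-versus-power gap the paper measures across thresholds.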


[17] 2511.11364

Assessment of loan losses after default

The paper shows how to determine the loss given default (LGD) on a borrower's loan after default, with or without building a separate model. The solution evaluates LGD after default using the average maturity of the defaulted loan and knowledge of volumes, the timing of default and repayments, the rate, and other parameters in the vector of determinants. It relies on the average recovery period for defaulted loans, which is calculated in the article, and applies a general recovery model for the recovery process in the required LGD segment. Only this type of model ensures that LGD does not exceed one, which is required for calculating further estimates.


[18] 2512.20515

Modeling Bank Systemic Risk of Emerging Markets under Geopolitical Shocks: Empirical Evidence from BRICS Countries

In this study, we introduce an analytics framework, the Bank Risk Interlinkage with Dynamic Graph and Event Simulations (BRIDGES), to capture the systemic risks associated with the growing economic influence of the BRICS nations. This framework includes a Dynamic Time Warping (DTW) method to construct a dynamic network of 551 BRICS banks with their annual balance sheet data from 2008 to 2024; a trend analysis in risk ratios to detect shifts in banks' behavior; a Temporal Graph Neural Network (TGNN) to detect anomalous changes in the bank network's structural relationships; and Agent-Based Model (ABM) simulations to measure the impact of anomalous changes on network stability and assess the banking system's resilience to internal financial failure and external geopolitical shocks at the individual country level and across BRICS nations. Our simulation results highlight several important insights. The failure of the largest BRICS banks can cause more systemic damage than that of financially vulnerable or anomalous banks due to the panic effects. Moreover, compared to the failure of the largest BRICS banks, a geopolitical shock with correlated country-wide propagation can cause more systemic damage, resulting in a near-total systemic collapse. Our findings suggest that the panic over the failure of the largest BRICS banks and large-scale geopolitical shocks are the primary threats to the financial stability of the BRICS nations, which traditional bank risk analysis models might not detect.


[19] 2512.24968

Strategic Response of News Publishers to Generative AI

Generative AI can adversely impact news publishers by lowering consumer demand. It can also reduce demand for newsroom employees, and increase the creation of news "slop." However, it can also form a source of traffic referrals and an information-discovery channel that increases demand. We use high-frequency granular data to analyze the strategic response of news publishers to the introduction of Generative AI. Many publishers strategically blocked LLM access to their websites using the robots.txt file standard. Using a difference-in-differences approach, we find that large publishers who block GenAI bots experience reduced website traffic compared to not blocking. In addition, we find that large publishers shift toward richer content that is harder for LLMs to replicate, without increasing text volume. Finally, we find that the share of new editorial and content-production job postings rises over time. Together, these findings illustrate the levers that publishers choose to use to strategically respond to competitive Generative AI threats, and their consequences.


[20] 2604.10402

Regime-Aware Specialist Routing for Volatility Forecasting

Volatility forecasting becomes challenging when market conditions shift and model performance varies across regimes. Motivated by this instability, we develop a regime-aware specialist routing framework for ETF volatility forecasting. The framework uses online risk-sensitive evaluation and state-dependent gating to combine different forecasting specialists across calm and stressed market states. Using a daily panel of six ETFs under a rolling walk-forward design, we find that the strongest forecaster is regime-dependent rather than stable across all regimes. Relative to the rolling-best baseline, the proposed routing framework reduces high-volatility forecast loss by about 24% and underprediction loss by about 22%. These results suggest that specialist routing provides a practical adaptive forecasting architecture for changing market conditions.


[21] 2505.01858

Mean Field Game of Optimal Tracking Portfolio

This paper studies the mean field game (MFG) problem arising from a large population competition in fund management, featuring a new type of relative performance via the benchmark tracking. In the $n$-player model, each agent aims to minimize the expected largest shortfall of the wealth with reference to the benchmark process, which is modeled by a linear combination of the population's average wealth process and a market index process. With a continuum of agents, we formulate the MFG problem with a reflected state process. We establish the existence of the mean field equilibrium (MFE) using the partial differential equation (PDE) approach. Firstly, by applying the dual transform, the best response control of the representative agent can be characterized in analytical form in terms of a dual reflected diffusion process. As a novel contribution, we verify the consistency condition of the MFE in separated domains with the help of the duality relationship and properties of the dual process. Moreover, based on the MFE, we construct an approximate Nash equilibrium for the $n$-player game when the number $n$ is sufficiently large.


[22] 2603.05264

Asset Returns, Portfolio Choice, and Proportional Wealth Taxation

We analyse the effect of a proportional wealth tax on asset returns, portfolio choice, and asset pricing. The tax is levied annually on the market value of all holdings at a uniform rate. We show that such a tax is economically equivalent to the government acquiring a proportional stake in the investor's portfolio each period -- a form of risk sharing in which expected wealth and risk are reduced by the same factor, while the return per share is unaffected. This multiplicative separability drives four main results. First, the coefficient of variation of wealth is invariant to the tax rate. Second, the optimal portfolio weights -- and in particular the tangency portfolio -- are independent of the tax rate. Third, the wealth tax is orthogonal to portfolio choice: it induces a homothetic contraction of the opportunity set in the mean-standard deviation plane that preserves the Sharpe ratio of every portfolio. Fourth, both taxed and untaxed investors are willing to pay the same price per share for any asset. The results are derived first under geometric Brownian motion and then generalised to any return distribution in the location-scale family. A complementary Modigliani-Miller analysis confirms pricing neutrality and identifies an inconsistency in the existing literature regarding the discount rate used for after-tax cash flows. Imposing the CAPM as a special case confirms that after-tax betas equal pre-tax betas and the security market line contracts uniformly by $(1-\tau_w)$; under CRRA preferences, general-equilibrium returns and prices are unchanged. This resolves an error in Fama (2021). The neutrality results depend on universal taxation at market value and frictionless markets. We formalise three channels -- book-value taxation, liquidity frictions, and dividend extraction -- through which these conditions break neutrality.
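The multiplicative-separability argument can be sketched in one line, assuming a single-period setting in which the tax at rate $\tau_w$ is levied on end-of-period market value and applies uniformly to all assets, including the risk-free one:

```latex
W_{\tau} = (1-\tau_w)\,W_0(1+R_p)
\quad\Longrightarrow\quad
\frac{\mathbb{E}[R_p^{\tau}] - R_f^{\tau}}{\sigma(R_p^{\tau})}
= \frac{(1-\tau_w)\bigl(\mathbb{E}[R_p]-R_f\bigr)}{(1-\tau_w)\,\sigma(R_p)}
= \frac{\mathbb{E}[R_p]-R_f}{\sigma(R_p)},
```

since the after-tax return on any position is $R^{\tau} = (1-\tau_w)(1+R) - 1$, so both the risk premium over the (equally taxed) risk-free asset and the volatility scale by $(1-\tau_w)$, leaving every portfolio's Sharpe ratio, and hence the tangency portfolio, unchanged.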


[23] 2603.05277

Extensions to the Wealth Tax Neutrality Framework

Frøseth (2026; arXiv:2603.05264) shows that a proportional wealth tax on market values is neutral with respect to portfolio choice, Sharpe ratios, and equilibrium prices under CRRA preferences and geometric Brownian motion. This paper investigates the robustness of that result along two dimensions. First, we extend the neutrality frontier: portfolio neutrality -- including all intertemporal hedging demands -- is preserved under stochastic volatility (Heston and general Markov diffusions) and Epstein-Zin recursive utility, but breaks under non-homothetic preferences such as HARA. Second, we identify four channels through which implemented wealth taxes depart from neutrality even under CRRA: non-uniform assessment across asset classes, general equilibrium price effects in inelastic markets, progressive threshold structures, and endogenous labour supply. Each channel is formalised and, where possible, calibrated to the Norwegian wealth tax system. The progressive threshold introduces a tax shield that increases risk-taking near the exemption boundary -- an effect opposite in sign to the HARA distortion -- and, at the extreme, generates a participation margin at which investors exit the tax jurisdiction entirely. We formalise this tax-induced migration as the extreme response at the progressive threshold and examine the Norwegian post-2022 experience as a case study. The full framework is applied to evaluate the Saez-Zucman proposal for a global minimum wealth tax on billionaires and the related French proposal for a national minimum tax above EUR 100 million.