New articles on Quantitative Finance


[1] 2510.20032

Evaluating Local Policies in Centralized Markets

We study a policy evaluation problem in centralized markets. We show that the aggregate impact of any marginal reform, the Marginal Policy Effect (MPE), is nonparametrically identified using data from a baseline equilibrium, without additional variation in the policy rule. We achieve this by constructing the equilibrium-adjusted outcome: a policy-invariant structural object that augments an agent's outcome with the full equilibrium externality their participation imposes on others. We show that these externalities can be constructed using estimands that are already common in empirical work. The MPE is identified as the covariance between our structural outcome and the reform's direction, providing a flexible tool for optimal policy targeting and a novel bridge to the Marginal Treatment Effects literature.
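The covariance characterization of the MPE can be illustrated with a toy computation. This is a hypothetical sketch, not the paper's estimator: the agent-level outcomes, externalities, and reform direction below are invented for illustration.

```python
# Illustrative sketch (not the paper's estimator): identifying the
# Marginal Policy Effect (MPE) as the covariance between an
# equilibrium-adjusted outcome and the reform's direction.
# All agent-level numbers below are hypothetical.

def covariance(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n

# Each agent's baseline outcome, augmented with the full equilibrium
# externality their participation imposes on others.
private_outcome  = [1.0, 2.0, 1.5, 0.5]
externality      = [-0.2, 0.1, 0.0, 0.3]
adjusted_outcome = [p + e for p, e in zip(private_outcome, externality)]

# Direction of the marginal reform: how the policy change shifts
# each agent's participation at the baseline equilibrium.
reform_direction = [0.5, -0.1, 0.3, 0.2]

mpe = covariance(adjusted_outcome, reform_direction)
print(round(mpe, 4))
```

The point of the construction is that everything on the right-hand side is observable at the baseline equilibrium, so no variation in the policy rule is needed.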


[2] 2510.20047

Multivariate Variance Swap Using Generalized Variance Method for Stochastic Volatility Models

This paper develops a novel framework for modeling the variance swap of multi-asset portfolios by employing the generalized variance approach, which utilizes the determinant of the covariance matrix of the underlying assets. By specifying the distribution of the log returns of the underlying assets under the Heston and Barndorff-Nielsen and Shephard (BNS) stochastic volatility frameworks, we derive closed-form solutions for the realized variance through the computation of the generalized variance of the multi-asset covariance matrix. To evaluate the robustness of the proposed model, we conduct simulations using nine different assets obtained via the quantmod package. For a three-asset portfolio, analytical expressions for the multivariate variance swap are obtained under both the Heston and BNS models. Numerical experiments further demonstrate the effectiveness of the proposed model through parameter testing, calibration, and validation.
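The "generalized variance" at the heart of this approach is simply the determinant of the covariance matrix. A minimal sketch for a three-asset case, with made-up return series (this is only the definitional computation, not the paper's swap-pricing formulas):

```python
# Toy sketch of the generalized variance of a three-asset portfolio:
# the determinant of the sample covariance matrix of log returns.
# The return series are invented for illustration.

def cov(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / (n - 1)

def det3(m):
    # Cofactor expansion for a 3x3 matrix.
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

returns = [
    [0.01, -0.02, 0.015, 0.005, -0.01],   # asset 1 log returns
    [0.02, -0.01, 0.010, 0.000, -0.02],   # asset 2
    [-0.01, 0.01, 0.005, 0.010, 0.000],   # asset 3
]
sigma = [[cov(returns[i], returns[j]) for j in range(3)] for i in range(3)]
gen_var = det3(sigma)
print(gen_var > 0)  # positive whenever the return series are not collinear
```

Because the covariance matrix is positive semidefinite, the generalized variance is nonnegative, and it collapses to the ordinary variance in the single-asset case.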


[3] 2510.20221

FinCARE: Financial Causal Analysis with Reasoning and Evidence

Portfolio managers rely on correlation-based analysis and heuristic methods that fail to capture true causal relationships driving performance. We present a hybrid framework that integrates statistical causal discovery algorithms with domain knowledge from two complementary sources: a financial knowledge graph extracted from SEC 10-K filings and large language model reasoning. Our approach systematically enhances three representative causal discovery paradigms -- constraint-based (PC), score-based (GES), and continuous optimization (NOTEARS) -- by encoding knowledge graph constraints algorithmically and leveraging LLM conceptual reasoning for hypothesis generation. Evaluated on a synthetic financial dataset of 500 firms across 18 variables, our KG+LLM-enhanced methods demonstrate consistent improvements across all three algorithms: PC (F1: 0.622 vs. 0.459 baseline, +36%), GES (F1: 0.735 vs. 0.367, +100%), and NOTEARS (F1: 0.759 vs. 0.163, +366%). The framework enables reliable scenario analysis with mean absolute error of 0.003610 for counterfactual predictions and perfect directional accuracy for intervention effects. It also addresses critical limitations of existing methods by grounding statistical discoveries in financial domain expertise while maintaining empirical validation, providing portfolio managers with the causal foundation necessary for proactive risk management and strategic decision-making in dynamic market environments.
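One common way to encode knowledge-graph constraints algorithmically is as forbidden and required edge sets applied to a statistically proposed skeleton. The sketch below is hypothetical: the variable names, candidate edges, and constraint sets are invented, and real constraint-based implementations (e.g. PC with background knowledge) apply such restrictions during the search rather than afterwards.

```python
# Hypothetical sketch of knowledge-graph constraints for causal
# discovery: edges the KG rules out are forbidden, edges it supports
# are required. All names and edge sets are invented.

# Candidate skeleton proposed by a statistical algorithm (e.g. PC).
candidate_edges = {("rd_spend", "revenue"),
                   ("revenue", "stock_return"),
                   ("stock_return", "sector_risk")}

# Constraints derived from a financial knowledge graph.
forbidden = {("stock_return", "sector_risk")}   # KG: risk drives returns, not vice versa
required  = {("sector_risk", "stock_return")}

pruned = (candidate_edges - forbidden) | required
print(sorted(pruned))
```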


[4] 2510.20434

Market-Implied Sustainability: Insights from Funds' Portfolio Holdings

In this work, we aim to develop a market-implied sustainability score for companies, based on the extent to which a stock is over- or under-represented in sustainable funds compared to traditional ones. To identify sustainable funds, we rely on the Sustainable Finance Disclosure Regulation (SFDR), a European framework designed to clearly categorize investment funds into different classes according to their commitment to sustainability. In our analysis, we classify as sustainable those funds categorized as Article 9 - also known as "dark green" - and compare them to funds categorized as Article 8 or Article 6. We compute an SFDR Market-Implied Sustainability (SMIS) score for a large set of European companies. We then conduct an econometric analysis to identify the factors influencing SMIS and compare them with state-of-the-art ESG (Environmental, Social, and Governance) scores provided by Refinitiv. Finally, we assess the realized risk-adjusted performance of stocks using portfolio-tilting strategies. Our results show that SMIS scores deviate substantially from traditional ESG scores and that, over the period 2010-2023, companies with high SMIS have been associated with significant financial outperformance.


[5] 2510.20699

Fusing Narrative Semantics for Financial Volatility Forecasting

We introduce M2VN (Multi-Modal Volatility Network), a novel deep learning-based framework for financial volatility forecasting that unifies time series features with unstructured news data. M2VN leverages the representational power of deep neural networks to address two key challenges in this domain: (i) aligning and fusing heterogeneous data modalities (numerical financial data and textual information), and (ii) mitigating look-ahead bias that can undermine the validity of financial models. To achieve this, M2VN combines open-source market features with news embeddings generated by Time Machine GPT, a recently introduced point-in-time LLM, ensuring temporal integrity. An auxiliary alignment loss is introduced to enhance the integration of structured and unstructured data within the deep learning architecture. Extensive experiments demonstrate that M2VN consistently outperforms existing baselines, underscoring its practical value for risk management and financial decision-making in dynamic markets.
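The abstract does not specify the form of the auxiliary alignment loss; a common choice for aligning paired embeddings from two modalities is a cosine-based penalty, sketched below with invented embeddings (this is an assumption, not M2VN's actual loss).

```python
import math

# Hypothetical sketch of an auxiliary alignment loss: penalize
# misalignment between the time-series embedding and the news
# embedding of the same observation. Cosine distance is a common
# choice; the paper's exact loss is not specified here.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def alignment_loss(ts_embs, news_embs):
    # 1 - mean cosine similarity over paired embeddings.
    sims = [cosine(t, n) for t, n in zip(ts_embs, news_embs)]
    return 1.0 - sum(sims) / len(sims)

ts_embs   = [[1.0, 0.0], [0.6, 0.8]]
news_embs = [[1.0, 0.0], [0.6, 0.8]]
print(round(alignment_loss(ts_embs, news_embs), 6))  # ~0 for aligned pairs
```

In training, such a term would be added to the main forecasting loss with a weighting hyperparameter.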


[6] 2510.20748

Reinforcement Learning and Consumption-Savings Behavior

This paper demonstrates how reinforcement learning can explain two puzzling empirical patterns in household consumption behavior during economic downturns. I develop a model where agents use Q-learning with neural network approximation to make consumption-savings decisions under income uncertainty, departing from standard rational expectations assumptions. The model replicates two key findings from recent literature: (1) unemployed households with previously low liquid assets exhibit substantially higher marginal propensities to consume (MPCs) out of stimulus transfers compared to high-asset households (0.50 vs 0.34), even when neither group faces borrowing constraints, consistent with Ganong et al. (2024); and (2) households with more past unemployment experiences maintain persistently lower consumption levels after controlling for current economic conditions, a "scarring" effect documented by Malmendier and Shen (2024). Unlike existing explanations based on belief updating about income risk or ex-ante heterogeneity, the reinforcement learning mechanism generates both higher MPCs and lower consumption levels simultaneously through value function approximation errors that evolve with experience. Simulation results closely match the empirical estimates, suggesting that adaptive learning through reinforcement learning provides a unifying framework for understanding how past experiences shape current consumption behavior beyond what current economic conditions would predict.
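The mechanism can be illustrated with a stripped-down tabular version of the setup. The paper uses Q-learning with neural-network approximation; everything below (grids, utility, income process, parameters) is a simplified stand-in rather than the author's model.

```python
import random

# Minimal tabular Q-learning sketch of a consumption-savings problem.
# All grids, parameters, and the income process are invented
# simplifications of the paper's neural-network setup.

random.seed(0)
ASSETS = range(6)              # discretized asset holdings 0..5
ACTIONS = [0, 1]               # 0: consume up to 1 unit, 1: up to 2 units
BETA, ALPHA, EPS = 0.95, 0.1, 0.2

def step(assets, action):
    income = random.choice([0, 2])          # i.i.d. income shock
    consumption = min(action + 1, assets + income)
    next_assets = min(max(assets + income - consumption, 0), 5)
    reward = consumption ** 0.5             # concave utility
    return next_assets, reward

Q = {(a, c): 0.0 for a in ASSETS for c in ACTIONS}
state = 2
for _ in range(20000):
    action = (random.choice(ACTIONS) if random.random() < EPS
              else max(ACTIONS, key=lambda c: Q[(state, c)]))
    nxt, reward = step(state, action)
    target = reward + BETA * max(Q[(nxt, c)] for c in ACTIONS)
    Q[(state, action)] += ALPHA * (target - Q[(state, action)])
    state = nxt

# Learned policy: preferred consumption action at each asset level.
policy = {a: max(ACTIONS, key=lambda c: Q[(a, c)]) for a in ASSETS}
print(policy)
```

In the paper's argument, it is the approximation error in the learned value function, which evolves with each agent's experience, that generates both the elevated MPCs and the consumption scarring.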


[7] 2510.20763

Consumption-Investment Problem in Rank-Based Models

We study a consumption-investment problem in a multi-asset market where the returns follow a generic rank-based model. Our main result derives an HJB equation with Neumann boundary conditions for the value function and proves a corresponding verification theorem. The control problem is nonstandard due to the discontinuous nature of the coefficients in rank-based models, requiring a bespoke approach of independent mathematical interest. The special case of first-order models, prescribing constant drift and diffusion coefficients for the ranked returns, admits explicit solutions when the investor is (a) unconstrained, (b) subject to open market constraints, or (c) fully invested in the market. The explicit optimal strategies in all cases are related to the celebrated solution to Merton's problem, despite the intractability of constraint (b) in that setting.
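For orientation, the celebrated Merton solution referenced above takes, in the classical single-asset setting with CRRA risk aversion $\gamma$, drift $\mu$, risk-free rate $r$, and volatility $\sigma$, the constant risky-asset weight

```latex
\pi^{*} = \frac{\mu - r}{\gamma \sigma^{2}}
```

a standard fact stated here only for context; the abstract's explicit rank-based strategies are related to this form rather than given by it.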


[8] 2510.20017

Simultaneously Solving Infinitely Many LQ Mean Field Games In Hilbert Spaces: The Power of Neural Operators

Traditional mean-field game (MFG) solvers operate on an instance-by-instance basis, which becomes infeasible when many related problems must be solved (e.g., when seeking a robust description of the solution under perturbations of the dynamics or utilities, or in settings involving continuum-parameterized agents). We overcome this by training neural operators (NOs) to learn the rules-to-equilibrium map from the problem data ("rules": dynamics and cost functionals) of LQ MFGs defined on separable Hilbert spaces to the corresponding equilibrium strategy. Our main result is a statistical guarantee: an NO trained on a small number of randomly sampled rules reliably solves unseen LQ MFG variants, even in infinite-dimensional settings. The number of NO parameters needed remains controlled under appropriate rule sampling during training. Our guarantee follows from three results: (i) local-Lipschitz estimates for the highly nonlinear rules-to-equilibrium map; (ii) a universal approximation theorem using NOs with a prespecified Lipschitz regularity (unlike traditional NO results where the NO's Lipschitz constant can diverge as the approximation error vanishes); and (iii) new sample-complexity bounds for $L$-Lipschitz learners in infinite dimensions, directly applicable as the Lipschitz constants of our approximating NOs are controlled in (ii).


[9] 2510.20612

Black Box Absorption: LLMs Undermining Innovative Ideas

Large Language Models are increasingly adopted as critical tools for accelerating innovation. This paper identifies and formalizes a systemic risk inherent in this paradigm: Black Box Absorption. We define this as the process by which the opaque internal architectures of LLM platforms, often operated by large-scale service providers, can internalize, generalize, and repurpose novel concepts contributed by users during interaction. This mechanism threatens to undermine the foundational principles of innovation economics by creating severe informational and structural asymmetries between individual creators and platform operators, thereby jeopardizing the long-term sustainability of the innovation ecosystem. To analyze this challenge, we introduce two core concepts: the idea unit, representing the transportable functional logic of an innovation, and idea safety, a multidimensional standard for its protection. This paper analyzes the mechanisms of absorption and proposes a concrete governance and engineering agenda to mitigate these risks, ensuring that creator contributions remain traceable, controllable, and equitable.


[10] 2509.12558

A Note on Subadditivity of Value at Risks (VaRs): A New Connection to Comonotonicity

In this paper, we provide a new property of value at risk (VaR), which is a standard risk measure that is widely used in quantitative financial risk management. We show that the subadditivity of VaR for given loss random variables holds for any confidence level if and only if they are comonotonic. This result also gives a new equivalent condition for the comonotonicity of random vectors.
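The comonotonic direction of the result is easy to check numerically: when both losses are increasing functions of the same driver, empirical VaR is additive (subadditivity holds with equality) at every level. The sketch below is only an illustration of that fact, not the paper's proof; the loss transforms are arbitrary increasing functions.

```python
import math

# Numerical illustration (not the paper's proof): for comonotonic
# losses -- both increasing functions of the same uniform variable --
# empirical VaR adds up exactly at every confidence level.

def var(samples, alpha):
    # Lower empirical quantile of the loss distribution.
    s = sorted(samples)
    return s[math.ceil(alpha * len(s)) - 1]

n = 10_000
u = [(i + 0.5) / n for i in range(n)]       # uniform grid on (0, 1)
x = [ui ** 2 for ui in u]                   # increasing in u
y = [math.exp(ui) for ui in u]              # increasing in u -> comonotonic with x
total = [a + b for a, b in zip(x, y)]

for alpha in (0.9, 0.95, 0.99):
    assert var(total, alpha) == var(x, alpha) + var(y, alpha)
print("VaR additive at all tested levels")
```

The additivity is exact here because comonotonic losses attain their quantiles at the same point of the common driver, so the quantile of the sum is the sum of the quantiles.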


[11] 2509.21929

Optimal Consumption-Investment with Epstein-Zin Utility under Leverage Constraint

We study optimal portfolio choice under Epstein-Zin recursive utility in the presence of general leverage constraints. We first establish that the optimal value function is the unique viscosity solution to the associated Hamilton-Jacobi-Bellman (HJB) equation, by developing a new dynamic programming principle under constraints. We further demonstrate that the value function admits smoothness and characterize the optimal consumption and investment strategies. In addition, we derive explicit solutions for the optimal strategy and explicitly delineate the constrained and unconstrained regions in several special cases of the leverage constraint. Finally, we conduct a comparative analysis, highlighting the differences relative to the classical time-separable preferences and to the setting without leverage constraints.


[12] 2510.14435

Cryptocurrency as an Investable Asset Class: Coming of Age

Cryptocurrencies are coming of age. We organize empirical regularities into ten stylized facts and analyze cryptocurrency through the lens of empirical asset pricing. We find important similarities with traditional markets -- risk-adjusted performance is broadly comparable, and the cross-section of returns can be summarized by a small set of factors. However, cryptocurrency also has its own distinct character: jumps are frequent and large, and blockchain information helps drive prices. This common set of facts provides evidence that cryptocurrency is emerging as an investable asset class.


[13] 2505.15602

Deep Learning for Continuous-time Stochastic Control with Jumps

In this paper, we introduce a model-based deep-learning approach to solve finite-horizon continuous-time stochastic control problems with jumps. We iteratively train two neural networks: one to represent the optimal policy and the other to approximate the value function. Leveraging a continuous-time version of the dynamic programming principle, we derive two different training objectives based on the Hamilton-Jacobi-Bellman equation, ensuring that the networks capture the underlying stochastic dynamics. Empirical evaluations on different problems illustrate the accuracy and scalability of our approach, demonstrating its effectiveness in solving complex, high-dimensional stochastic control tasks.