New articles on Quantitative Finance


[1] 2410.14839

Multi-Task Dynamic Pricing in Credit Market with Contextual Information

We study the dynamic pricing problem faced by a broker that buys and sells a large number of financial securities in the credit market, such as corporate bonds, government bonds, loans, and other credit-related securities. One challenge in pricing these securities is their infrequent trading, which leads to insufficient data for individual pricing. However, many of these securities share structural features that can be utilized. Building on this, we propose a multi-task dynamic pricing framework that leverages these shared structures across securities, enhancing pricing accuracy through learning. In our framework, a security is fully characterized by a $d$-dimensional contextual/feature vector. The customer will buy (sell) the security from the broker if the broker quotes a price lower (higher) than that of the competitors. We assume a linear contextual model for the competitor's pricing, with parameters that are unknown a priori. The parameters for pricing different securities may or may not be similar to each other. The firm's objective is to minimize the expected regret, namely, the expected revenue loss against a clairvoyant policy that knows the parameters of the competitor's pricing model. We show that the regret of our policy is lower than that of both a policy that treats each security individually and a policy that treats all securities as identical. Moreover, the regret is bounded by $\tilde{O} ( \delta_{\max} \sqrt{T M d} + M d ) $, where $T$ is the time horizon, $M$ is the number of securities, and $\delta_{\max}$ characterizes the overall dissimilarity across securities in the basket.
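
As a rough illustration of why sharing information across securities can help (a synthetic sketch, not the paper's policy or regret analysis; all names, dimensions, and noise levels are assumptions), one can compare per-security least squares with a pooled estimate when the competitor-pricing parameters are nearly shared:

```python
import numpy as np

# Synthetic competitor quotes p = theta_m^T x + noise for M securities with
# nearly-shared parameters (small dissimilarity, i.e. small delta_max) and only
# a few observations per security, mimicking infrequent trading.
rng = np.random.default_rng(0)
d, M, n_per = 5, 20, 10
theta_common = rng.normal(size=d)
theta = theta_common + 0.05 * rng.normal(size=(M, d))

X_all, y_all, err_individual = [], [], []
for m in range(M):
    X = rng.normal(size=(n_per, d))
    y = X @ theta[m] + 0.1 * rng.normal(size=n_per)
    X_all.append(X)
    y_all.append(y)
    theta_ind, *_ = np.linalg.lstsq(X, y, rcond=None)   # estimate each security on its own
    err_individual.append(np.linalg.norm(theta_ind - theta[m]))

theta_pool, *_ = np.linalg.lstsq(np.vstack(X_all), np.concatenate(y_all), rcond=None)
err_pooled = [np.linalg.norm(theta_pool - theta[m]) for m in range(M)]
print(np.mean(err_individual), np.mean(err_pooled))     # pooling wins when securities are similar
```

In the opposite extreme, with very dissimilar parameters (large $\delta_{\max}$), the pooled estimate would be biased and the individual estimates preferable, which is the trade-off the regret bound above captures.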


[2] 2410.14841

Dynamic Factor Allocation Leveraging Regime-Switching Signals

This article explores dynamic factor allocation by analyzing the cyclical performance of factors through regime analysis. The authors focus on a U.S. equity investment universe comprising seven long-only indices representing the market and six style factors: value, size, momentum, quality, low volatility, and growth. Their approach integrates regime inferences on each factor index's active performance relative to the market into the Black-Litterman model to construct a fully-invested, long-only multi-factor portfolio. First, the authors apply the sparse jump model (SJM) to identify bull and bear market regimes for individual factors, using a feature set based on risk and return measures from historical factor active returns, as well as variables reflecting the broader market environment. The regimes identified by the SJM exhibit enhanced stability and interpretability compared to traditional methods. A hypothetical single-factor long-short strategy is then used to assess these regime inferences and fine-tune hyperparameters, resulting in positive Sharpe ratios for this strategy across all factors, with low correlation among them. These regime inferences are then incorporated into the Black-Litterman framework to dynamically adjust allocations among the seven indices, with an equally weighted (EW) portfolio serving as the benchmark. Empirical results show that the constructed multi-factor portfolio significantly improves the information ratio (IR) relative to the market, raising it from just 0.05 for the EW benchmark to approximately 0.4. When measured relative to the EW benchmark itself, the dynamic allocation achieves an IR of around 0.4 to 0.5. The strategy also enhances absolute portfolio performance across key metrics such as the Sharpe ratio and maximum drawdown.
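
A minimal sketch of the final step described above, translating per-factor bull/bear inferences into Black-Litterman views on active returns and computing the posterior mean, is given below. The covariance, equilibrium returns, view magnitudes, and uncertainties are illustrative assumptions, not the authors' calibration:

```python
import numpy as np

# Seven assets: the market index plus six style factor indices.
n = 7
Sigma = np.eye(n) * 0.04              # prior covariance (assumed)
pi = np.full(n, 0.05)                 # equilibrium expected returns (assumed)
tau = 0.05

regimes = np.array([1, 0, 1, 1, 0, 1])          # 1 = bull, 0 = bear per factor (example)
P = np.hstack([-np.ones((6, 1)), np.eye(6)])    # each view: factor i minus market
q = np.where(regimes == 1, 0.02, -0.02)         # expected active return by regime (assumed)
Omega = np.eye(6) * 0.001                       # view uncertainty (assumed)

A = np.linalg.inv(tau * Sigma)
mu_bl = np.linalg.solve(A + P.T @ np.linalg.inv(Omega) @ P,
                        A @ pi + P.T @ np.linalg.inv(Omega) @ q)
print(mu_bl.round(4))   # posterior returns to feed into a long-only, fully-invested optimizer
```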


[3] 2410.14927

Hierarchical Reinforced Trader (HRT): A Bi-Level Approach for Optimizing Stock Selection and Execution

Leveraging Deep Reinforcement Learning (DRL) in automated stock trading has shown promising results, yet its application faces significant challenges, including the curse of dimensionality, inertia in trading actions, and insufficient portfolio diversification. Addressing these challenges, we introduce the Hierarchical Reinforced Trader (HRT), a novel trading strategy employing a bi-level Hierarchical Reinforcement Learning framework. The HRT integrates a Proximal Policy Optimization (PPO)-based High-Level Controller (HLC) for strategic stock selection with a Deep Deterministic Policy Gradient (DDPG)-based Low-Level Controller (LLC) tasked with optimizing trade executions to enhance portfolio value. In our empirical analysis, the HRT agent achieves a positive and higher Sharpe ratio than standalone DRL models and the S&P 500 benchmark during both bullish and bearish market conditions. This advancement not only underscores the efficacy of incorporating hierarchical structures into DRL strategies but also mitigates the aforementioned challenges, paving the way for designing more profitable and robust trading algorithms in complex markets.
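
The bi-level split can be sketched structurally as follows; the random policies stand in for the PPO-based HLC and DDPG-based LLC, and all interfaces, dimensions, and names are assumptions rather than the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def high_level_controller(market_state):
    # Placeholder for the PPO-based HLC: emits a buy/hold/sell signal per stock.
    return rng.integers(-1, 2, size=market_state.shape[0])

def low_level_controller(signals, portfolio_state):
    # Placeholder for the DDPG-based LLC: sizes trades consistent with the HLC's directions.
    return signals * rng.uniform(0.0, 1.0, size=signals.shape)

n_stocks = 30
market_state = rng.normal(size=(n_stocks, 10))   # per-stock features at the current step
portfolio_state = np.zeros(n_stocks)             # current holdings
signals = high_level_controller(market_state)
trades = low_level_controller(signals, portfolio_state)
portfolio_state += trades                        # execute and move to the next step
```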


[4] 2410.14984

Risk Aggregation and Allocation in the Presence of Systematic Risk via Stable Laws

In order to properly manage risk, practitioners must understand the aggregate risks they are exposed to. Additionally, to properly price policies and calculate bonuses, the relative riskiness of individual business units must be well understood. Insurers and financiers are certainly interested in the properties of the sums of the risks they are exposed to and in the dependence among those risks. Realistic risk models, however, must account for a variety of phenomena: ill-defined moments, lack of elliptical dependence structures, excess kurtosis, and highly heterogeneous marginals. Equally important is the concern over industry-wide systematic risks that can affect multiple business lines at once. Many techniques of varying sophistication have been developed with all or some of these problems in mind. We propose a modification to the classical individual risk model that allows us to model company-wide losses via the class of Multivariate Stable Distributions. Stable Distributions incorporate many of the unpleasant features required for a realistic risk model while maintaining tractable aggregation and dependence results. We additionally compute the Tail Conditional Expectation of aggregate risks within the model and the corresponding allocations.
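
A crude Monte Carlo sketch of the setting (not the paper's closed-form results): each business unit's loss combines an idiosyncratic stable term with a common systematic stable shock, and the Tail Conditional Expectation of the aggregate is estimated empirically. All parameters are illustrative assumptions:

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)
alpha, n_sim, n_units = 1.8, 100_000, 3          # alpha > 1 so the tail mean exists
systematic = levy_stable.rvs(alpha, 1.0, loc=0, scale=1.0, size=n_sim, random_state=rng)
idiosyncratic = levy_stable.rvs(alpha, 1.0, loc=0, scale=1.0,
                                size=(n_sim, n_units), random_state=rng)
losses = idiosyncratic + systematic[:, None]     # stability keeps unit and aggregate losses stable
aggregate = losses.sum(axis=1)

var_99 = np.quantile(aggregate, 0.99)            # 99% Value-at-Risk of the aggregate
tce_99 = aggregate[aggregate > var_99].mean()    # empirical Tail Conditional Expectation
print(var_99, tce_99)
```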


[5] 2410.15195

Risk Premia in the Bitcoin Market

Based on options and realized returns, we analyze risk premia in the Bitcoin market through the lens of the Pricing Kernel (PK). We identify that: 1) the PK projected onto Bitcoin returns is W-shaped and steep in the negative-returns region; 2) negative Bitcoin returns account for 33% of the total Bitcoin index premium (BP), in contrast to the 70% of the S&P 500 equity premium explained by negative returns. Applying a novel clustering algorithm to the collection of estimated Bitcoin risk-neutral densities, we find that risk premia vary over time as a function of two distinct market volatility regimes. In the low-volatility regime, the PK projection is steeper for negative returns. It has a more pronounced W-shape than the unconditional one, implying particularly high BP for both extreme positive and negative returns and a high Variance Risk Premium (VRP). In high-volatility states, the BP attributable to positive and negative returns is more balanced, and the VRP is lower. Overall, Bitcoin investors are more worried about variance and downside risk in low-volatility states.
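
For readers unfamiliar with the object being estimated: in the standard textbook formulation (the paper's exact estimators may differ), the projected pricing kernel is the ratio of the option-implied risk-neutral return density to the physical density estimated from realized returns,

$$ \widehat{PK}_t(r) \;=\; \frac{q_t(r)}{p_t(r)}, $$

so a W-shape means the kernel is elevated both for large negative and large positive Bitcoin returns.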


[6] 2410.15439

The Economic Consequences of Being Widowed by War: A Life-Cycle Perspective

Despite millions of war widows worldwide, little is known about the economic consequences of being widowed by war. We use life history data from West Germany to show that war widowhood increased women's employment immediately after World War II but led to lower employment rates later in life. War widows, therefore, carried a double burden of employment and childcare while their children were young but left the workforce when their children reached adulthood. We show that the design of compensation policies likely explains this counterintuitive life-cycle pattern and examine potential spillovers to the next generation.


[7] 2410.16010

Time evaluation of portfolio for asymmetrically informed traders

We study the anticipating version of the classical portfolio optimization problem in a financial market in the presence of a trader who possesses privileged information about the future (insider information), but who is also subject to a delay in the information flow about the market conditions; hence this trader possesses asymmetric information with respect to the traditional one. We analyze it via the Russo-Vallois forward stochastic integral, i.e., using anticipating stochastic calculus, along with a white noise approach. We explicitly compute the optimal portfolios that maximize the expected logarithmic utility under different classical financial models: Black-Scholes-Merton, Heston, and Vasicek. Similar results hold for other well-known models, such as the Hull-White and the Cox-Ingersoll-Ross ones. Our comparison between the performance of the traditional trader and the insider, although only asymmetrically informed, reveals that the privileged information outweighs the delay in all cases, provided only one information flow is delayed. However, when two information flows are delayed, a competition between future information and delay magnitude enters into play, implying that the best performance depends on the parameter values. This, in turn, allows us to value future information in terms of time, and not only utility.
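
As a point of reference for the comparison (this is the classical benchmark, not the insider formula derived in the paper), the honest trader's log-optimal fraction of wealth invested in the risky asset under Black-Scholes-Merton dynamics $dS_t = \mu S_t\,dt + \sigma S_t\,dW_t$ with risk-free rate $r$ is the constant Merton proportion

$$ \pi^{*} \;=\; \frac{\mu - r}{\sigma^{2}}, $$

against which the utility achieved by the asymmetrically informed trader is measured.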


[8] 2410.16021

Stylized facts in money markets: an empirical analysis of the eurozone data

Using the secured transactions recorded within the Money Markets Statistical Reporting database of the European Central Bank, we test several stylized facts regarding the interbank market of the 47 largest banks in the eurozone. We observe that the surge in the volume of traded evergreen repurchase agreements followed the introduction of the LCR regulation, and we measure a rate of collateral re-use consistent with the literature. Regarding the topology of the interbank network, we confirm the high level of network stability but observe a higher density and a higher in- and out-degree symmetry than what is reported for unsecured markets.
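
The two network statistics mentioned above can be computed as in the following sketch, here on a synthetic directed graph of 47 nodes rather than the MMSR data; the density and the correlation between in- and out-degrees (one simple symmetry proxy, chosen here for illustration) are the quantities of interest:

```python
import networkx as nx
import numpy as np

# Stand-in for the secured interbank (repo) network of 47 banks.
G = nx.gnp_random_graph(47, 0.15, seed=0, directed=True)

density = nx.density(G)
in_deg = np.array([d for _, d in G.in_degree()])
out_deg = np.array([d for _, d in G.out_degree()])
symmetry = np.corrcoef(in_deg, out_deg)[0, 1]   # high value: banks lend and borrow symmetrically
print(density, symmetry)
```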


[9] 2410.14788

Simultaneously Solving FBSDEs with Neural Operators of Logarithmic Depth, Constant Width, and Sub-Linear Rank

Forward-backward stochastic differential equations (FBSDEs) are central in optimal control, game theory, economics, and mathematical finance. Unfortunately, the available FBSDE solvers operate on \textit{individual} FBSDEs, meaning that they cannot provide a computationally feasible strategy for solving large families of FBSDEs, as these solvers must be re-run several times. \textit{Neural operators} (NOs) offer an alternative approach for \textit{simultaneously solving} large families of FBSDEs by directly approximating the solution operator mapping \textit{inputs:} terminal conditions and dynamics of the backward process to \textit{outputs:} solutions to the associated FBSDE. Though universal approximation theorems (UATs) guarantee the existence of such NOs, these NOs are unrealistically large. We confirm that ``small'' NOs can uniformly approximate the solution operator to structured families of FBSDEs with random terminal time, uniformly on suitable compact sets determined by Sobolev norms, to any prescribed error $\varepsilon>0$, using a depth of $\mathcal{O}(\log(1/\varepsilon))$, a width of $\mathcal{O}(1)$, and a sub-linear rank, i.e., $\mathcal{O}(1/\varepsilon^r)$ for some $r<1$. This result is rooted in our second main contribution, which shows that convolutional NOs of similar depth, width, and rank can approximate the solution operator to a broad class of elliptic PDEs. A key insight here is that the convolutional layers of our NO can efficiently encode the Green's function associated with the elliptic PDEs linked to our FBSDEs. A byproduct of our analysis is the first theoretical justification for the benefit of lifting channels in NOs: they exponentially decelerate the growth rate of the NO's rank.
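
For orientation, the generic (fixed-horizon) form of the systems whose solution operator is being approximated reads, in standard notation (the paper additionally allows random terminal times),

$$
\begin{aligned}
X_t &= x + \int_0^t b(s, X_s)\,ds + \int_0^t \sigma(s, X_s)\,dW_s,\\
Y_t &= g(X_T) + \int_t^T f(s, X_s, Y_s, Z_s)\,ds - \int_t^T Z_s\,dW_s,
\end{aligned}
$$

and the operator of interest maps the terminal condition $g$ and the driver $f$ of the backward process to the solution pair $(Y, Z)$.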


[10] 2410.14985

Stochastic Loss Reserving: Dependence and Estimation

Nowadays, insurers have to account for potentially complex dependence between risks. In the field of loss reserving, there are many parametric and non-parametric models attempting to capture dependence between business lines. One common approach has been to use additive background risk models (ABRMs), which provide rich and interpretable dependence structures via a common shock model. Unfortunately, ABRMs are often restrictive. Models that capture the necessary features may have parameters that are impractical to estimate, for example models without a closed-form likelihood function because no probability density function is available (e.g., some Tweedie and stable distributions). We apply to loss reserving a modification of the continuous generalised method of moments (CGMM) of [Carrasco and Florens, 2000], which delivers estimators comparable to the MLE. We examine models such as the one proposed by [Avanzi et al., 2016] and a related but novel one derived from the stable family of distributions. Our CGMM method of estimation provides conventional non-Bayesian estimates in the case where MLEs are impractical.
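
The flavour of characteristic-function-based estimation can be conveyed by the following discretized sketch (a stand-in for, not an implementation of, the continuous GMM of [Carrasco and Florens, 2000]): fit a stable law by matching its characteristic function to the empirical one on a grid, which sidesteps the missing density. Parameterization conventions and all tuning choices here are assumptions:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import levy_stable

rng = np.random.default_rng(0)
data = levy_stable.rvs(1.7, 0.3, loc=0.0, scale=1.0, size=5_000, random_state=rng)
t_grid = np.linspace(0.05, 2.0, 40)
ecf = np.exp(1j * np.outer(t_grid, data)).mean(axis=1)      # empirical characteristic function

def stable_cf(t, alpha, beta, delta, gamma):
    # Characteristic function of a stable law (S1-type parameterization, alpha != 1).
    return np.exp(1j * delta * t
                  - (gamma * np.abs(t)) ** alpha
                  * (1 - 1j * beta * np.sign(t) * np.tan(np.pi * alpha / 2)))

def objective(params):
    alpha, beta, delta, gamma = params
    return np.sum(np.abs(ecf - stable_cf(t_grid, alpha, beta, delta, gamma)) ** 2)

fit = minimize(objective, x0=[1.5, 0.0, 0.0, 1.0],
               bounds=[(1.1, 2.0), (-1.0, 1.0), (-5.0, 5.0), (0.1, 5.0)])
print(fit.x)   # approximately recovers (alpha, beta, delta, gamma)
```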


[11] 2410.15238

Economic Anthropology in the Era of Generative Artificial Intelligence

This paper explores the intersection of economic anthropology and generative artificial intelligence (GenAI). It examines how large language models (LLMs) can simulate human decision-making and the inductive biases present in AI research. The study introduces two AI models: C.A.L.L.O.N. (Conventionally Average Late Liberal ONtology) and M.A.U.S.S. (More Accurate Understanding of Society and its Symbols). The former is trained on standard data, while the latter is adapted with anthropological knowledge. The research highlights how anthropological training can enhance LLMs' ability to recognize diverse economic systems and concepts. The findings suggest that integrating economic anthropology with AI can provide a more pluralistic understanding of economics and improve the sustainability of non-market economic systems.


[12] 2410.15286

LTPNet Integration of Deep Learning and Environmental Decision Support Systems for Renewable Energy Demand Forecasting

Against the backdrop of increasingly severe global environmental changes, accurately predicting and meeting renewable energy demands has become a key challenge for sustainable business development. Traditional energy demand forecasting methods often struggle with complex data processing and low prediction accuracy. To address these issues, this paper introduces a novel approach that combines deep learning techniques with environmental decision support systems. The model integrates advanced deep learning techniques, including LSTM and Transformer architectures, together with the PSO algorithm for parameter optimization, significantly enhancing predictive performance and practical applicability. Results show that our model achieves substantial improvements across various metrics, including a 30% reduction in MAE, a 20% decrease in MAPE, a 25% drop in RMSE, and a 35% decline in MSE. These results validate the model's effectiveness and reliability in renewable energy demand forecasting. This research provides valuable insights for applying deep learning in environmental decision support systems.
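
Since the abstract does not spell out the LTPNet layout, the following is only a structural sketch of an LSTM encoder followed by a Transformer encoder layer and a linear forecasting head; all dimensions are assumptions, and the PSO step mentioned above would tune hyperparameters such as the hidden size:

```python
import torch
import torch.nn as nn

class HybridForecaster(nn.Module):
    """Minimal LSTM + Transformer hybrid for one-step demand forecasting (illustrative)."""
    def __init__(self, n_features=8, hidden=64, n_heads=4):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.encoder = nn.TransformerEncoderLayer(d_model=hidden, nhead=n_heads,
                                                  batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, seq_len, n_features)
        h, _ = self.lstm(x)                # temporal encoding
        h = self.encoder(h)                # attention over the encoded sequence
        return self.head(h[:, -1])         # forecast from the last time step

model = HybridForecaster()
y_hat = model(torch.randn(16, 24, 8))      # e.g. batches of 24 past observations
```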


[13] 2410.15726

Reducing annotator bias by belief elicitation

Crowdsourced annotations of data play a substantial role in the development of Artificial Intelligence (AI). It is broadly recognised that annotations of text data can contain annotator bias, where systematic disagreement in annotations can be traced back to differences in the annotators' backgrounds. Being unaware of such annotator bias can lead to representational bias against minority group perspectives and therefore several methods have been proposed for recognising bias or preserving perspectives. These methods typically require either a substantial number of annotators or annotations per data instance. In this study, we propose a simple method for handling bias in annotations without requirements on the number of annotators or instances. Instead, we ask annotators about their beliefs of other annotators' judgements of an instance, under the hypothesis that these beliefs may provide more representative and less biased labels than judgements. The method was examined in two controlled, survey-based experiments involving Democrats and Republicans (n=1,590) asked to judge statements as arguments and then report beliefs about others' judgements. The results indicate that bias, defined as systematic differences between the two groups of annotators, is consistently reduced when asking for beliefs instead of judgements. Our proposed method therefore has the potential to reduce the risk of annotator bias, thereby improving the generalisability of AI systems and preventing harm to unrepresented socio-demographic groups, and we highlight the need for further studies of this potential in other tasks and downstream applications.
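
The bias measure used here (a systematic difference in label rates between the two annotator groups) can be written down in a few lines; the numbers below are simulated stand-ins, not the survey data, and serve only to show the computation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 800
judge_dem = rng.binomial(1, 0.70, n)    # direct judgements: P(label "argument"), assumed
judge_rep = rng.binomial(1, 0.50, n)
belief_dem = rng.binomial(1, 0.62, n)   # beliefs about other annotators' judgements, assumed
belief_rep = rng.binomial(1, 0.58, n)

bias_judgements = judge_dem.mean() - judge_rep.mean()
bias_beliefs = belief_dem.mean() - belief_rep.mean()
print(bias_judgements, bias_beliefs)    # in this toy setup the belief-based gap is smaller
```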


[14] 2410.15818

Three connected problems: principal with multiple agents in cooperation, Principal--Agent with McKean--Vlasov dynamics and multitask Principal--Agent

In this paper, we address three Principal--Agent problems in a moral hazard context and show that they are connected. We start by studying the problem of a Principal with multiple Agents in cooperation. The term cooperation is manifested here by the fact that the agents optimize their criteria through Pareto equilibria. We show that as the number of agents tends to infinity, the principal's value function converges to the value function of a McKean--Vlasov control problem. Using the solution to this McKean--Vlasov control problem, we derive a constructive method for obtaining approximately optimal contracts for the principal's problem with multiple agents in cooperation. In a second step, we show that the problem of a Principal with multiple Agents also converges, when the number of agents goes to infinity, towards a new Principal--Agent problem, namely the Principal--Agent problem with McKean--Vlasov dynamics. This is a Principal--Agent problem in which the agent-controlled production follows McKean--Vlasov dynamics and the contract can depend on the distribution of the production. The value function of the principal in this setting is equivalent to that of the same McKean--Vlasov control problem from the multi-agent scenario. Furthermore, we show that an optimal contract can be constructed from the solution to this McKean--Vlasov control problem. We conclude by discussing, in a simple example, the connection of these problems with the multitask Principal--Agent problem, a situation in which a principal delegates multiple, possibly correlated, tasks to a single agent.
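
For concreteness, the controlled McKean--Vlasov dynamics referred to here have the generic form (notation is standard, not taken from the paper)

$$ dX_t \;=\; b\big(t, X_t, \mathcal{L}(X_t), \alpha_t\big)\,dt \;+\; \sigma\big(t, X_t, \mathcal{L}(X_t)\big)\,dW_t, $$

where $\mathcal{L}(X_t)$ denotes the law of the production process at time $t$, $\alpha$ is the agent's effort, and the contract offered by the principal is allowed to depend on this law.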


[15] 2410.15824

Long time behavior of semi-Markov modulated perpetuity and some related processes

Examples of stochastic processes whose state space representations involve functions of an integral type structure $$I_{t}^{(a,b)}:=\int_{0}^{t}b(Y_{s})e^{-\int_{s}^{t}a(Y_{r})dr}ds, \quad t\ge 0$$ are studied under an ergodic semi-Markovian environment described by an $S$ valued jump type process $Y:=(Y_{s}:s\in\mathbb{R}^{+})$ that is ergodic with a limiting distribution $\pi\in\mathcal{P}(S)$. Under different assumptions on signs of $E_{\pi}a(\cdot):=\sum_{j\in S}\pi_{j}a(j)$ and tail properties of the sojourn times of $Y$ we obtain different long time limit results for $I^{(a,b)}_{}:=(I^{(a,b)}_{t}:t\ge 0).$ In all cases mixture type of laws emerge which are naturally represented through an affine stochastic recurrence equation (SRE) $X\stackrel{d}{=}AX+B,\,\, X\perp\!\!\!\perp (A, B)$. Examples include explicit long-time representations of pitchfork bifurcation, and regime-switching diffusions under semi-Markov modulated environments, etc.