New articles on Quantitative Finance


[1] 2409.09066

Replicating The Log of Gravity

This document replicates the main results from Santos Silva and Tenreyro (2006) in R. The original results were obtained in TSP back in 2006. The idea here is to be explicit about the conceptual approach to regression in R. For most of the replication I used base R, without external libraries except when absolutely necessary. The findings are consistent with the original article and show that the replication effort is minimal: there is no need to contact the authors for clarifications or to apply data transformations or filtering not mentioned in the article.
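
As an aside for readers who want to try the estimator themselves, the sketch below runs a Poisson pseudo-maximum-likelihood (PPML) gravity regression of the kind the paper replicates. It is not the authors' code: it is written in Python with statsmodels rather than base R, and the dataset, column names, and coefficients are synthetic stand-ins.

# PPML gravity regression on synthetic data (illustrative only).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "log_dist": rng.normal(8.0, 0.5, n),     # log bilateral distance
    "log_gdp_o": rng.normal(25.0, 1.0, n),   # log origin GDP
    "log_gdp_d": rng.normal(25.0, 1.0, n),   # log destination GDP
})
# Synthetic trade flows in levels (counts, including zeros).
mu = np.exp(0.5 + 0.8 * (df["log_gdp_o"] - 25) + 0.8 * (df["log_gdp_d"] - 25)
            - 1.0 * (df["log_dist"] - 8))
df["trade"] = rng.poisson(mu)

X = sm.add_constant(df[["log_dist", "log_gdp_o", "log_gdp_d"]])
# PPML: a Poisson GLM on trade levels with heteroskedasticity-robust standard
# errors, which stays consistent even if the Poisson variance assumption fails.
ppml = sm.GLM(df["trade"], X, family=sm.families.Poisson()).fit(cov_type="HC1")
print(ppml.params)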


[2] 2409.09091

Claims processing and costs under capacity constraints

Random delays between the occurrence of accident events and the corresponding reporting times of insurance claims are a standard feature of insurance data. The time lag between the reporting and the processing of a claim depends on whether the claim can be processed without delay as it arrives or whether it remains unprocessed for some time because of temporarily insufficient processing capacity, which is shared among all incoming claims. We aim to explain and analyze the nature of processing delays and the build-up of backlogs. We show how to select processing capacity optimally in order to minimize claims costs, taking delay-adjusted costs and fixed costs for claims settlement capacity into account. The theoretical results are combined with a large-scale numerical study that demonstrates the practical usefulness of our proposal.
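
To make the trade-off concrete, here is a toy simulation, not the authors' model: claims arrive randomly each period, a fixed number of claims can be processed per period, unprocessed claims carry over as a backlog, and we scan for the capacity that minimizes the average backlog (delay) cost plus a fixed cost per unit of capacity. All parameter values and the Python formulation are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(1)

def average_cost(capacity, arrival_rate=100, periods=2000,
                 delay_cost=1.0, capacity_cost=0.9):
    backlog = 0
    backlog_cost = 0.0
    for arrivals in rng.poisson(arrival_rate, periods):
        backlog = max(backlog + arrivals - capacity, 0)  # unprocessed claims carry over
        backlog_cost += delay_cost * backlog
    return backlog_cost / periods + capacity_cost * capacity

capacities = list(range(90, 131))
costs = [average_cost(c) for c in capacities]
best = capacities[int(np.argmin(costs))]
print(f"cost-minimizing capacity in this toy example: {best}")

In this toy setting, capacities only slightly above the mean arrival rate already keep the backlog small, while extra capacity mostly adds fixed cost; balancing the two is the basic tension the paper studies.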


[3] 2409.09179

Credit Spreads' Term Structure: Stochastic Modeling with CIR++ Intensity

This paper introduces a novel stochastic model for credit spreads. The stochastic approach leverages the diffusion of default intensities via a CIR++ model and is formulated within a risk-neutral probability space. Our research primarily addresses two gaps in the literature. The first is the lack of credit spread models founded on a stochastic basis that enables continuous modeling, as many existing models rely on factorial assumptions. The second is the limited availability of models that directly yield a term structure of credit spreads. An intermediate result of our model is the provision of a term structure for the prices of defaultable bonds. We present the model alongside an innovative, practical, and conservative calibration approach that minimizes the error between historical and theoretical volatilities of default intensities. We demonstrate the robustness of both the model and its calibration process by comparing its behavior to historical credit spread values. Our findings indicate that the model not only produces realistic credit spread term structure curves but also exhibits consistent diffusion over time. Additionally, the model accurately fits the initial term structure of implied survival probabilities and provides an analytical expression for the credit spread of any given maturity at any future time.
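
The sketch below shows, under illustrative assumptions, the basic mechanics the abstract describes: a CIR++ default intensity lambda_t = x_t + psi(t) with x_t a CIR diffusion, Monte Carlo survival probabilities, and a credit spread read off at each maturity. The parameter values, the constant shift psi, and the zero-recovery spread convention are assumptions, not the paper's calibration.

import numpy as np

rng = np.random.default_rng(2)
kappa, theta, sigma, x0 = 0.5, 0.02, 0.1, 0.01    # CIR parameters (assumed)

def psi(t):
    return 0.005                                   # deterministic shift (assumed)

def survival_curve(maturities, n_paths=20_000, dt=1 / 252):
    target_steps = {int(round(m / dt)): m for m in maturities}
    x = np.full(n_paths, x0)
    integral = np.zeros(n_paths)
    curve = {}
    for i in range(1, max(target_steps) + 1):
        t = i * dt
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        # full-truncation Euler step keeps the CIR factor from going negative
        x = x + kappa * (theta - np.maximum(x, 0.0)) * dt \
              + sigma * np.sqrt(np.maximum(x, 0.0)) * dw
        integral += (np.maximum(x, 0.0) + psi(t)) * dt
        if i in target_steps:
            curve[target_steps[i]] = float(np.exp(-integral).mean())
    return curve

maturities = [1.0, 3.0, 5.0]
surv = survival_curve(maturities)
for T in maturities:
    spread = -np.log(surv[T]) / T                  # zero-recovery credit spread proxy
    print(f"T={T:.0f}y  survival={surv[T]:.4f}  spread={spread * 1e4:.1f} bp")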


[4] 2409.09684

Anatomy of Machines for Markowitz: Decision-Focused Learning for Mean-Variance Portfolio Optimization

Markowitz laid the foundation of portfolio theory through the mean-variance optimization (MVO) framework. However, the effectiveness of MVO is contingent on the precise estimation of expected returns, variances, and covariances of asset returns, which are typically uncertain. Machine learning models are becoming useful in estimating these parameters, and such models are trained to minimize prediction errors such as mean squared error (MSE), which treats errors uniformly across assets. Recent studies have pointed out that this approach can lead to suboptimal decisions and have proposed Decision-Focused Learning (DFL) as a solution, integrating prediction and optimization to improve decision-making outcomes. While studies have shown DFL's potential to enhance portfolio performance, the detailed mechanisms by which DFL modifies prediction models for MVO remain unexplored. This study investigates how DFL adjusts stock return prediction models to optimize decisions in MVO, addressing the question: "MSE treats the errors of all assets equally, but how does DFL reduce the errors of different assets differently?" Answering this provides crucial insights into optimal stock return prediction for constructing efficient portfolios.
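
A toy calculation, not the paper's method, makes the abstract's central question concrete: two return forecasts with identical MSE can lead to very different mean-variance decisions. The expected returns, covariance matrix, and risk aversion below are made up, and the MVO step uses the unconstrained closed-form solution for simplicity.

import numpy as np

true_mu = np.array([0.06, 0.05, 0.04])            # assumed expected returns
Sigma = np.array([[0.040, 0.006, 0.004],
                  [0.006, 0.090, 0.010],
                  [0.004, 0.010, 0.010]])         # assumed covariance matrix
gamma = 5.0                                       # risk-aversion coefficient

def mvo_weights(mu_hat):
    # unconstrained mean-variance solution: w = (1/gamma) * Sigma^{-1} mu_hat
    return np.linalg.solve(Sigma, mu_hat) / gamma

def realized_utility(w):
    return true_mu @ w - 0.5 * gamma * w @ Sigma @ w

# Each forecast is off by 2% on exactly one asset, so their MSE is identical.
forecast_a = true_mu + np.array([0.00, 0.02, 0.00])  # error on the high-variance asset
forecast_b = true_mu + np.array([0.00, 0.00, 0.02])  # error on the low-variance asset

for name, f in [("A", forecast_a), ("B", forecast_b)]:
    mse = np.mean((f - true_mu) ** 2)
    u = realized_utility(mvo_weights(f))
    print(f"forecast {name}: MSE={mse:.6f}  realized utility={u:.4f}")

In this toy setting the same-sized error on the low-variance asset hurts realized utility far more, which is exactly the kind of asymmetry across assets that a decision-focused loss can respond to while MSE cannot.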


[5] 2409.09818

Revisiting the state-space model of unawareness

We propose a knowledge operator based on the agent's possibility correspondence which preserves her non-trivial unawareness within the standard state-space model. Our approach may provide a solution to the classical impossibility result that 'an unaware agent must be aware of everything'.
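
For orientation only, the standard state-space objects behind the abstract can be written as follows; this is the textbook knowledge operator together with the Modica-Rustichini unawareness operator, not the modified operator the paper proposes:

    K(E) = \{ \omega \in \Omega : P(\omega) \subseteq E \}, \qquad
    U(E) = \lnot K(E) \cap \lnot K(\lnot K(E)),

where \Omega is the state space, P is the agent's possibility correspondence, and \lnot denotes complementation in \Omega.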


[6] 2409.09955

Simulation of Public Cash Transfer Programs on US Entrepreneurs' Financing Constraint

In this paper, I conduct a policy exercise examining how much the introduction to the United States of a cash transfer program as large as Norway's lottery sector would affect startups. The key results are that public cash transfer programs (such as a lottery) do little to increase the number of new startups, but they do increase the size of startups and modestly raise aggregate productivity and output. The most important factor in whether entrepreneurs start new businesses is their ability.


[7] 2409.10331

Research and Design of a Financial Intelligent Risk Control Platform Based on Big Data Analysis and Deep Machine Learning

In the financial sector of the United States, the application of big data technology has become one of the important means for financial institutions to enhance competitiveness and reduce risk. The core objective of this article is to explore how to fully utilize big data technology to integrate the internal and external data of financial institutions and to build an efficient and reliable platform for big data collection, storage, and analysis. With the continuous expansion and innovation of financial business, traditional risk management models can no longer meet increasingly complex market demands. This article adopts big data mining and real-time stream processing technology to monitor, analyze, and raise alerts on various business data. Through statistical analysis of historical data and precise mining of customer transaction behavior and relationships, potential risks can be identified more accurately and responded to in a timely manner. This article designs and implements a financial big data intelligent risk control platform. The platform not only integrates, stores, and analyzes the internal and external data of financial institutions effectively, but also intelligently displays customer characteristics and their relationships and provides intelligent supervision of various risk information.
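
The paper does not publish code, but the kind of real-time monitoring it describes can be sketched in a few lines: a rolling statistic over a stream of transaction amounts raises an alert when a value deviates strongly from recent behavior. The window length, threshold, and synthetic data below are assumptions for illustration only.

from collections import deque
import math
import random

def monitor(stream, window=200, threshold=4.0):
    recent = deque(maxlen=window)
    for i, amount in enumerate(stream):
        if len(recent) >= 30:                       # warm-up before alerting
            mean = sum(recent) / len(recent)
            var = sum((x - mean) ** 2 for x in recent) / len(recent)
            std = math.sqrt(var)
            if std > 0 and abs(amount - mean) / std > threshold:
                yield i, amount                     # emit an alert
        recent.append(amount)

random.seed(0)
stream = [random.gauss(100, 10) for _ in range(5000)]
stream[1234] = 500                                  # inject an obvious outlier
for idx, amt in monitor(stream):
    print(f"alert: transaction {idx} with amount {amt:.2f}")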


[8] 2409.10407

Bitcoin Transaction Behavior Modeling Based on Balance Data

When analyzing the distribution of Bitcoin users' balances, we observed that it follows a log-normal pattern. Drawing parallels from the successful application of Gibrat's law of proportional growth in explaining city size and word frequency distributions, we tested whether the same principle could account for the log-normal distribution of Bitcoin balances. However, our calculations revealed that the exponent parameters in both the drift and variance terms deviate slightly from one. This suggests that Gibrat's proportional growth rule alone does not fully explain the log-normal distribution observed in Bitcoin users' balances. During our exploration, we discovered an intriguing phenomenon: Bitcoin users tend to fall into two distinct categories based on their behavior, which we refer to as "poor" and "wealthy" users. Poor users, who initially purchase only a small amount of Bitcoin, tend to buy more bitcoins first and then gradually sell off all their holdings, and the probability that they have sold everything rises over time. In contrast, wealthy users, who acquire a large amount of Bitcoin from the start, tend to sell their holdings over time; the speed at which they sell declines with time, and they ultimately retain at least a small part of their initial holdings. Interestingly, the wealthier the user, the larger the proportion of their balance they tend to sell and the greater the certainty with which they sell it. This research provides an interesting perspective for exploring Bitcoin users' behavior that may also apply to other financial markets.
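
The drift and variance exponents mentioned in the abstract can be estimated with a simple binned log-log regression; the sketch below does this on simulated balances, not on real Bitcoin data, and the simulated growth rule and bin count are assumptions. Under Gibrat's law both estimated exponents would equal one.

import numpy as np

rng = np.random.default_rng(3)
n = 200_000
balance = rng.lognormal(mean=0.0, sigma=2.0, size=n)       # synthetic balances
# proportional growth with a deliberate mild deviation from exponent one
change = 0.01 * balance ** 0.95 * rng.normal(1.0, 1.0, n)

# bin users by balance, then regress log drift and log variance on log balance
edges = np.quantile(balance, np.linspace(0, 1, 21))
bin_id = np.digitize(balance, edges[1:-1])
log_b, log_drift, log_var = [], [], []
for k in range(20):
    mask = bin_id == k
    log_b.append(np.log(balance[mask].mean()))
    log_drift.append(np.log(np.abs(change[mask].mean())))
    log_var.append(np.log(change[mask].var()))

drift_exponent = np.polyfit(log_b, log_drift, 1)[0]
diffusion_exponent = np.polyfit(log_b, log_var, 1)[0] / 2  # variance scales with twice the exponent
print(f"drift-term exponent:     {drift_exponent:.2f} (Gibrat's law predicts 1)")
print(f"diffusion-term exponent: {diffusion_exponent:.2f} (Gibrat's law predicts 1)")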


[9] 2409.10096

Robust Reinforcement Learning with Dynamic Distortion Risk Measures

In a reinforcement learning (RL) setting, the agent's optimal strategy heavily depends on her risk preferences and the underlying model dynamics of the training environment. These two aspects influence the agent's ability to make well-informed and time-consistent decisions when facing testing environments. In this work, we devise a framework to solve robust risk-aware RL problems where we simultaneously account for environmental uncertainty and risk with a class of dynamic robust distortion risk measures. Robustness is introduced by considering all models within a Wasserstein ball around a reference model. We estimate such dynamic robust risk measures using neural networks by making use of strictly consistent scoring functions, derive policy gradient formulae using the quantile representation of distortion risk measures, and construct an actor-critic algorithm to solve this class of robust risk-aware RL problems. We demonstrate the performance of our algorithm on a portfolio allocation example.
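
As a small companion to the quantile representation mentioned in the abstract, the sketch below estimates a static, non-robust distortion risk measure from samples by weighting the order statistics with the increments of a distortion function. The CVaR-style distortion, the loss sign convention, and the sample size are illustrative assumptions, not the paper's dynamic, Wasserstein-robust construction.

import numpy as np

def distortion_risk(losses, distortion):
    """Estimate rho(X) = int_0^1 F_X^{-1}(u) d gamma(u) from samples."""
    x = np.sort(losses)                                # empirical quantiles F^{-1}(u)
    u = np.linspace(0.0, 1.0, len(x) + 1)
    weights = distortion(u[1:]) - distortion(u[:-1])   # mass gamma puts on each order statistic
    return float(np.dot(weights, x))

# CVaR at level alpha corresponds to gamma(u) = max(u - alpha, 0) / (1 - alpha).
alpha = 0.9

def cvar_distortion(u):
    return np.maximum(u - alpha, 0.0) / (1.0 - alpha)

rng = np.random.default_rng(4)
losses = rng.normal(0.0, 1.0, 100_000)
print(f"estimated CVaR at {alpha:.0%}: {distortion_risk(losses, cvar_distortion):.3f}")
# roughly 1.75 for standard normal losses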