New articles on Economics


[1] 2410.14871

Learning the Effect of Persuasion via Difference-In-Differences

The persuasion rate is a key parameter for measuring the causal effect of a directional message on the recipient's behavior. Its identification has largely relied on the availability of credible instruments, but this requirement is not always met in observational studies. We therefore develop a framework for identifying, estimating, and conducting inference on average persuasion rates on the treated using a difference-in-differences approach. The average treatment effect on the treated is the standard difference-in-differences parameter, but it underestimates the persuasion rate in our setting. Our estimation and inference methods include regression-based approaches and semiparametrically efficient estimators. Beginning with the canonical two-period case, we extend the framework to staggered treatment settings, where we show how to conduct rich analyses such as event-study designs. We revisit previous studies of the British election and the Chinese curriculum reform to illustrate the usefulness of our methodology.
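
For intuition on why the ATT understates the persuasion rate, the sketch below computes a two-by-two difference-in-differences ATT and then rescales it by the estimated share of treated recipients who would not have acted absent the message. The rescaling formula and the simulated data are illustrative assumptions based on the standard definition of the persuasion rate, not the paper's exact estimand or inference procedure.

```python
# Minimal sketch: 2x2 difference-in-differences ATT and a rescaled
# "persuasion rate on the treated". The rescaling by the share of treated
# units who would not act absent the message is an assumed simplification
# of the paper's estimand, for intuition only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),   # exposed to the message
    "period": rng.integers(0, 2, n),    # 0 = pre, 1 = post
})
# Hypothetical binary outcome: baseline behaviour plus a message effect.
base = 0.30 + 0.05 * df["treated"] + 0.10 * df["period"]
effect = 0.12 * df["treated"] * df["period"]
df["y"] = rng.binomial(1, np.clip(base + effect, 0, 1))

cell = df.groupby(["treated", "period"])["y"].mean()
att_did = (cell[1, 1] - cell[1, 0]) - (cell[0, 1] - cell[0, 0])

# Untreated potential outcome for the treated in the post period,
# imputed with the parallel-trends counterfactual.
y0_treated_post = cell[1, 0] + (cell[0, 1] - cell[0, 0])
persuasion_rate_treated = att_did / (1 - y0_treated_post)

print(f"DID ATT:                    {att_did:.3f}")
print(f"Persuasion rate on treated: {persuasion_rate_treated:.3f}")
```

Because the denominator is below one, the rescaled rate exceeds the ATT, which is the sense in which the ATT underestimates persuasion.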


[2] 2410.15090

Fast and Efficient Bayesian Analysis of Structural Vector Autoregressions Using the R Package bsvars

The R package bsvars provides a wide range of tools for empirical macroeconomic and financial analyses using Bayesian Structural Vector Autoregressions. It uses frontier econometric techniques and C++ code to ensure fast and efficient estimation of these multivariate dynamic structural models, possibly with many variables, complex identification strategies, and non-linear characteristics. The models can be identified using adjustable exclusion restrictions and heteroskedastic or non-normal shocks, and they feature a flexible three-level, equation-specific, local-global hierarchical prior distribution that determines the estimated level of shrinkage of the autoregressive and structural parameters. Additionally, the package facilitates predictive and structural analyses such as impulse responses, forecast error variance and historical decompositions, forecasting, statistical verification of identification and of hypotheses on autoregressive parameters, and analyses of structural shocks, volatilities, and fitted values. These features differentiate bsvars from existing R packages that either focus on a specific structural model, do not consider heteroskedastic shocks, or lack an implementation in compiled code.
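
The package itself is written in R with a C++ back end; as a language-agnostic illustration of one identification scheme it supports (exclusion restrictions arranged recursively), the sketch below estimates a reduced-form VAR by least squares, maps reduced-form residuals into structural shocks via a lower-triangular factorization, and computes impulse responses. This is a conceptual stand-in, not the bsvars API, and it omits the Bayesian hierarchical priors and the heteroskedasticity-based identification described above.

```python
# Conceptual sketch of recursive (exclusion-restriction) SVAR identification:
# OLS reduced-form VAR(p), Cholesky factor as the structural impact matrix,
# and impulse responses by iterating the lag recursion. Not the bsvars API.
import numpy as np

def var_ols(Y, p):
    """OLS estimates of a VAR(p) with an intercept. Y is (T, k)."""
    T, k = Y.shape
    X = np.hstack([np.ones((T - p, 1))] +
                  [Y[p - j - 1:T - j - 1] for j in range(p)])
    B = np.linalg.lstsq(X, Y[p:], rcond=None)[0]        # ((1 + k*p), k)
    U = Y[p:] - X @ B                                    # reduced-form residuals
    Sigma = U.T @ U / (T - p - X.shape[1])
    A = [B[1 + j * k:1 + (j + 1) * k].T for j in range(p)]  # lag matrices
    return A, Sigma

def irf(A, Sigma, horizon):
    """Impulse responses to structural shocks under a recursive scheme."""
    k = Sigma.shape[0]
    B0inv = np.linalg.cholesky(Sigma)      # lower-triangular impact matrix
    Phi = [np.eye(k)]
    for h in range(1, horizon + 1):
        Phi.append(sum(A[j] @ Phi[h - j - 1] for j in range(min(len(A), h))))
    return np.array([P @ B0inv for P in Phi])   # (horizon+1, k, k)

rng = np.random.default_rng(1)
Y = rng.standard_normal((200, 3)).cumsum(axis=0) * 0.1  # toy data
A, Sigma = var_ols(Y, p=2)
responses = irf(A, Sigma, horizon=12)
print(responses.shape)   # (13, 3, 3): horizon x variable x shock
```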


[3] 2410.15097

Predictive Quantile Regression with High-Dimensional Predictors: The Variable Screening Approach

This paper advances a variable screening approach to enhance conditional quantile forecasts using high-dimensional predictors. We refine and augment the quantile partial correlation (QPC)-based variable screening proposed by Ma et al. (2017) to accommodate $\beta$-mixing time-series data. Our approach covers i.i.d. scenarios but also provides new convergence bounds for time-series contexts, showing that the performance of QPC-based screening depends on the degree of time-series dependence. Through Monte Carlo simulations, we validate the effectiveness of QPC-based screening under weak dependence. Our empirical assessment of variable selection for growth-at-risk (GaR) forecasting underscores the method's advantages, revealing that specific labor market determinants play a pivotal role in forecasting GaR. While prior empirical research has predominantly considered a limited set of predictors, we employ the comprehensive FRED-QD dataset, retaining a richer breadth of information for GaR forecasts.
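
As a simplified, hypothetical illustration of screening by quantile dependence, the sketch below ranks predictors by their marginal quantile correlation with the outcome; the paper's QPC screening additionally partials out other covariates and handles $\beta$-mixing dependence, neither of which is reproduced here.

```python
# Simplified sketch: rank high-dimensional predictors by the absolute
# marginal quantile correlation with the outcome at quantile tau, then keep
# the top d. This is a stripped-down stand-in for QPC screening: it omits
# the "partial" adjustment and the time-series refinements.
import numpy as np

def quantile_correlation(y, x, tau):
    """qcor_tau(y, x) = cov(psi_tau(y - Q_tau), x) / sqrt(tau(1-tau) var(x))."""
    psi = tau - (y < np.quantile(y, tau)).astype(float)
    return np.cov(psi, x)[0, 1] / np.sqrt(tau * (1 - tau) * np.var(x))

def screen(Y, X, tau=0.05, keep=10):
    scores = np.array([abs(quantile_correlation(Y, X[:, j], tau))
                       for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:keep]     # indices of top predictors

rng = np.random.default_rng(2)
n, p = 300, 200
X = rng.standard_normal((n, p))
Y = 0.8 * X[:, 3] - 1.2 * X[:, 17] + rng.standard_normal(n)  # sparse truth
print(screen(Y, X, tau=0.05, keep=5))   # should usually include 3 and 17
```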


[4] 2410.15195

Risk Premia in the Bitcoin Market

Based on options and realized returns, we analyze risk premia in the Bitcoin market through the lens of the Pricing Kernel (PK). We find that: 1) the projection of the PK onto Bitcoin returns is W-shaped and steep in the negative returns region; 2) negative Bitcoin returns account for 33% of the total Bitcoin index premium (BP), in contrast to the 70% of the S&P 500 equity premium explained by negative returns. Applying a novel clustering algorithm to the collection of estimated Bitcoin risk-neutral densities, we find that risk premia vary over time as a function of two distinct market volatility regimes. In the low-volatility regime, the PK projection is steeper for negative returns and has a more pronounced W-shape than the unconditional one, implying particularly high BP for both extreme positive and negative returns and a high Variance Risk Premium (VRP). In high-volatility states, the BP attributable to positive and negative returns is more balanced, and the VRP is lower. Overall, Bitcoin investors are more worried about variance and downside risk in low-volatility states.
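
For intuition, the projected pricing kernel is commonly estimated as the ratio of a risk-neutral density (backed out from option prices) to a physical density (estimated from realized returns). The sketch below forms such a ratio on synthetic inputs; both densities are placeholders, not estimates from option or return data, and the clustering of risk-neutral densities is not reproduced.

```python
# Toy sketch of a projected pricing kernel: ratio of a risk-neutral density
# q(r) to a physical density p(r) on a grid of returns. Inputs are synthetic;
# in practice q comes from option prices and p from kernel-smoothed realized
# returns.
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(3)
realized = rng.normal(loc=0.002, scale=0.04, size=2_000)   # fake daily returns
p_hat = gaussian_kde(realized)                             # physical density

# Placeholder risk-neutral density: shifted left with fatter tails,
# mimicking priced downside risk.
def q_hat(r):
    return (0.8 * norm.pdf(r, loc=-0.004, scale=0.045)
            + 0.2 * norm.pdf(r, loc=-0.010, scale=0.090))

grid = np.linspace(-0.20, 0.20, 401)
pricing_kernel = q_hat(grid) / p_hat(grid)

steepest = grid[np.argmax(pricing_kernel)]
print(f"PK is highest at a return of {steepest:.3f}")  # deep negative returns
```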


[5] 2410.15439

The Economic Consequences of Being Widowed by War: A Life-Cycle Perspective

Despite millions of war widows worldwide, little is known about the economic consequences of being widowed by war. We use life history data from West Germany to show that war widowhood increased women's employment immediately after World War II but led to lower employment rates later in life. War widows, therefore, carried a double burden of employment and childcare while their children were young but left the workforce when their children reached adulthood. We show that the design of compensation policies likely explains this counterintuitive life-cycle pattern and examine potential spillovers to the next generation.


[6] 2410.15634

Distributionally Robust Instrumental Variables Estimation

Instrumental variables (IV) estimation is a fundamental method in econometrics and statistics for estimating causal effects in the presence of unobserved confounding. However, challenges such as untestable model assumptions and poor finite sample properties have undermined its reliability in practice. Viewing common issues in IV estimation as distributional uncertainties, we propose DRIVE, a distributionally robust framework for classical IV estimation. When the ambiguity set is based on a Wasserstein distance, DRIVE minimizes a square root ridge regularized variant of the two-stage least squares (TSLS) objective. We develop a novel asymptotic theory for this regularized regression estimator based on the square root ridge, showing that it achieves consistency without requiring the regularization parameter to vanish. This result follows from a fundamental property of the square root ridge, which we call "delayed shrinkage". This novel property, which also holds for a class of generalized method of moments (GMM) estimators, ensures that the estimator is robust to distributional uncertainties that persist in large samples. We further derive the asymptotic distribution of Wasserstein DRIVE and propose data-driven procedures to select the regularization parameter based on theoretical results. Simulation studies confirm the superior finite sample performance of Wasserstein DRIVE. Thanks to its regularization and robustness properties, Wasserstein DRIVE may be preferable in practice, particularly when the practitioner is uncertain about model assumptions or about distributional shifts in the data.
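
The snippet below is a guess at the general shape of the regularized objective described, for illustration only: it minimizes the square root of the TSLS objective plus a ridge-type penalty on the coefficient, with the instruments, data-generating process, and grid of penalty values all made up. The exact Wasserstein DRIVE objective, its normalization, and the data-driven choice of the regularization parameter are specified in the paper.

```python
# Hypothetical sketch of a square-root-ridge-regularized TSLS objective:
#   minimize sqrt( ||P_Z (y - X b)||^2 / n ) + rho * ||b||_2 .
# This is a guess at the general shape for illustration, not the paper's
# exact Wasserstein DRIVE estimator.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n = 1_000
z = rng.standard_normal((n, 3))                 # instruments
u = rng.standard_normal(n)                      # confounder
x = z @ np.array([1.0, 0.5, -0.5]) + u + rng.standard_normal(n)
y = 2.0 * x + u + rng.standard_normal(n)        # true effect = 2
X, Z = x.reshape(-1, 1), z

P = Z @ np.linalg.solve(Z.T @ Z, Z.T)           # projection onto instruments

def drive_objective(beta, rho):
    resid = y - X @ beta
    return np.sqrt(resid @ P @ resid / n) + rho * np.linalg.norm(beta)

for rho in (0.0, 0.1, 0.5):
    fit = minimize(drive_objective, x0=np.zeros(1), args=(rho,))
    print(f"rho = {rho:>4}: beta_hat = {fit.x[0]: .3f}")
```

At rho = 0 the minimizer coincides with ordinary TSLS, so the loop only illustrates how the penalty shrinks the estimate in this toy setup.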


[7] 2410.15734

A Kernelization-Based Approach to Nonparametric Binary Choice Models

We propose a new estimator for nonparametric binary choice models that does not impose a parametric structure on either the systematic function of covariates or the distribution of the error term. A key advantage of our approach is its computational efficiency. For instance, even when assuming a normal error distribution as in probit models, commonly used sieves for approximating an unknown function of covariates can lead to a large-dimensional optimization problem when the number of covariates is moderate. Our approach, motivated by kernel methods in machine learning, views certain reproducing kernel Hilbert spaces as special sieve spaces and couples them with spectral cut-off regularization for dimension reduction. We establish the consistency of the proposed estimator for both the systematic function of covariates and the distribution function of the error term, as well as the asymptotic normality of the plug-in estimator for weighted average partial derivatives. Simulation studies show that, compared to parametric estimation methods, the proposed method effectively improves finite sample performance under misspecification and incurs only a mild efficiency loss when the model is correctly specified. Using administrative data on the grant decisions of US asylum applications in immigration courts, along with nine case-day variables on weather and pollution, we re-examine the effect of outdoor temperature on court judges' "mood" and, thus, their grant decisions.
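
A rough sketch of the kernel-plus-spectral-cut-off idea follows: build a Gram matrix, keep the leading eigenvectors as a low-dimensional feature set, and fit a binary choice model on them. A logistic link is substituted here purely for brevity, so this illustrates the dimension reduction only, not the nonparametric estimation of the error distribution.

```python
# Rough sketch: RBF Gram matrix, spectral cut-off to the leading m
# eigenvectors, then a binary choice fit on the reduced features.
# The logistic link is a placeholder; the paper estimates the error
# distribution nonparametrically.
import numpy as np
from sklearn.linear_model import LogisticRegression

def rbf_gram(X, gamma):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(5)
n, d = 500, 4
X = rng.standard_normal((n, d))
index = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2]       # nonlinear index
y = (index + rng.logistic(size=n) > 0).astype(int)

K = rbf_gram(X, gamma=0.5)
eigval, eigvec = np.linalg.eigh(K)
m = 25                                                  # spectral cut-off level
features = eigvec[:, -m:] * np.sqrt(np.clip(eigval[-m:], 0, None))

clf = LogisticRegression(max_iter=1_000).fit(features, y)
print(f"in-sample accuracy: {clf.score(features, y):.3f}")
```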


[8] 2410.15861

Analysis of short-run and long-run marginal costs of generation in the power market

In power markets, understanding the cost dynamics of electricity generation is crucial. The complexity of price formation in the power system arises from its diverse attributes, such as various generator types, each characterized by its specific fixed and variable costs as well as a different lifetime. In this paper, we adopt an approach that investigates both long-run marginal cost (LRMC) and short-run marginal cost (SRMC) in a perfectly competitive market. According to economic theory, marginal pricing serves as an effective method for determining the generation cost of electricity. This paper presents a capacity expansion model designed to evaluate the marginal cost of electricity generation from both long-term and short-term perspectives. Following a parametric analysis and the calculation of LRMCs, this study investigates the allocation of investment costs across various time periods and how these costs enter the LRMC to ensure cost recovery. An exploration of SRMCs then reveals the conditions under which LRMCs and SRMCs converge or diverge. We observe that when there is a disparity between LRMC and SRMC, setting electricity generation prices equal to SRMCs does not ensure the complete recovery of investment and operational costs. This finding has implications for market reliability and challenges pricing strategies that rely solely on SRMCs. Furthermore, our investigation highlights the significance of addressing degeneracy in power market modeling. Primal degeneracy in the SRMC model can result in multiple values of the dual variable representing the SRMC. This multiplicity creates ambiguity about the precise SRMC value, making it difficult to ascertain the correct estimate. Resolving degeneracy therefore ensures the reliability of the SRMC value and enhances the robustness and credibility of our analysis.
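
A stylized example of the LRMC/SRMC distinction can be built from a tiny capacity-expansion LP: the long-run marginal cost re-optimizes capacity after a demand change, while the short-run marginal cost holds capacity fixed at its long-run optimum. The two-technology, two-block model and all cost figures below are hypothetical, and marginal costs are read off by perturbing demand rather than from dual variables, precisely to sidestep the degeneracy issue discussed above.

```python
# Stylized two-block, two-technology capacity-expansion LP illustrating the
# LRMC/SRMC distinction. Marginal costs are approximated by the cost saving
# from serving one MW less in the peak block:
#   long run  -> capacity re-optimized (investment + operating cost saved),
#   short run -> capacity fixed at its long-run optimum (operating cost only).
# Demand is perturbed downward so the fixed-capacity problem stays feasible.
# All numbers are hypothetical.
import numpy as np
from scipy.optimize import linprog

inv = np.array([150_000.0, 50_000.0])   # $/MW-year (baseload, peaker)
opx = np.array([5.0, 80.0])             # $/MWh
hours = np.array([8260.0, 500.0])       # off-peak and peak block durations
load = np.array([70.0, 100.0])          # MW demanded in each block

def system_cost(demand, fixed_cap=None):
    # Variables: cap_base, cap_peak, then generation g[tech, block] in MW.
    c = np.concatenate([inv, np.repeat(opx, 2) * np.tile(hours, 2)])
    A_eq = [[0, 0, 1, 0, 1, 0],          # meet off-peak demand
            [0, 0, 0, 1, 0, 1]]          # meet peak demand
    b_eq = list(demand)
    A_ub = [[-1, 0, 1, 0, 0, 0],         # g[base, block] <= cap_base
            [-1, 0, 0, 1, 0, 0],
            [0, -1, 0, 0, 1, 0],         # g[peak, block] <= cap_peak
            [0, -1, 0, 0, 0, 1]]
    b_ub = [0.0] * 4
    bounds = [(0, None)] * 6
    if fixed_cap is not None:
        bounds[0] = (fixed_cap[0], fixed_cap[0])
        bounds[1] = (fixed_cap[1], fixed_cap[1])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.fun, res.x[:2]

cost0, cap0 = system_cost(load)
less_peak = load - np.array([0.0, 1.0])
lrmc = cost0 - system_cost(less_peak)[0]                 # $/MW-year of peak load
srmc = cost0 - system_cost(less_peak, fixed_cap=cap0)[0]
print(f"LRMC of peak demand: {lrmc:,.0f} $/MW-yr   "
      f"SRMC: {srmc:,.0f} $/MW-yr (gap = unrecovered investment)")
```

In this toy model the gap between the two figures is the peaking unit's annualized investment cost, illustrating why pricing at SRMC alone need not recover investment.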


[9] 2410.16017

Semiparametric Bayesian Inference for a Conditional Moment Equality Model

Conditional moment equality models are regularly encountered in empirical economics, yet they are difficult to estimate. These models map a conditional distribution of the data to a structural parameter via the restriction that a conditional mean equals zero. Using this observation, I introduce a Bayesian inference framework in which an unknown conditional distribution is replaced with a nonparametric posterior, and structural parameter inference is then performed using the implied posterior. The method has the same flexibility as frequentist semiparametric estimators and does not require converting conditional moments to unconditional moments. Importantly, I prove a semiparametric Bernstein-von Mises theorem, providing conditions under which, in large samples, the posterior for the structural parameter is approximately normal, centered at an efficient estimator, and has variance equal to the Chamberlain (1987) semiparametric efficiency bound. As byproducts, I show that Bayesian uncertainty quantification methods yield asymptotically optimal frequentist confidence sets, and I derive low-level sufficient conditions for Gaussian process priors. The latter sheds light on a key prior stability condition and connects to the numerical part of the paper, in which these priors are used to predict the welfare effects of price changes.
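
A crude caricature of the implied-posterior idea, under strong simplifying assumptions: place a nonparametric posterior on the data distribution via the Bayesian bootstrap and, for each draw, solve the moment condition at that weighted distribution to obtain a draw of the structural parameter. The paper works with conditional moments, Gaussian process priors, and efficiency theory, none of which appears here; the single unconditional linear IV moment below is chosen only to keep the sketch short.

```python
# Crude caricature of an "implied posterior": Dirichlet(1,...,1) weights on
# the observations (Bayesian bootstrap) as a nonparametric posterior for the
# data distribution, and for each draw solve the weighted moment condition
# E_w[Z (Y - theta X)] = 0 for theta.
import numpy as np

rng = np.random.default_rng(6)
n = 400
Z = rng.standard_normal(n)
U = rng.standard_normal(n)
X = Z + U + 0.5 * rng.standard_normal(n)
Y = 1.5 * X + U + rng.standard_normal(n)        # true theta = 1.5

draws = []
for _ in range(2_000):
    w = rng.dirichlet(np.ones(n))               # one draw of the DGP posterior
    theta = np.sum(w * Z * Y) / np.sum(w * Z * X)   # solves the weighted moment
    draws.append(theta)

draws = np.array(draws)
lo, hi = np.quantile(draws, [0.025, 0.975])
print(f"posterior mean {draws.mean():.3f}, 95% credible set [{lo:.3f}, {hi:.3f}]")
```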


[10] 2410.16021

Stylized facts in money markets: an empirical analysis of the eurozone data

Using the secured transactions recorded in the Money Market Statistical Reporting database of the European Central Bank, we test several stylized facts regarding the interbank market of the 47 largest banks in the eurozone. We observe that the surge in the volume of traded evergreen repurchase agreements followed the introduction of the LCR regulation, and we measure a rate of collateral re-use consistent with the literature. Regarding the topology of the interbank network, we confirm the high level of network stability but observe a higher density and a higher in- and out-degree symmetry than what is reported for unsecured markets.
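
The network statistics mentioned above are straightforward to compute; the sketch below does so with networkx on a toy directed graph of 47 banks (density, and the correlation between in- and out-degrees as a simple symmetry measure). The edge list is randomly generated, not drawn from MMSR data.

```python
# Toy directed interbank graph: density and in-/out-degree correlation.
import numpy as np
import networkx as nx

rng = np.random.default_rng(7)
banks = [f"bank_{i:02d}" for i in range(47)]
G = nx.DiGraph()
G.add_nodes_from(banks)
for lender in banks:                      # random toy lending relationships
    for borrower in rng.choice(banks, size=6, replace=False):
        if borrower != lender:
            G.add_edge(lender, borrower)

density = nx.density(G)
in_deg = np.array([G.in_degree(b) for b in banks])
out_deg = np.array([G.out_degree(b) for b in banks])
symmetry = np.corrcoef(in_deg, out_deg)[0, 1]
print(f"density = {density:.3f}, in/out-degree correlation = {symmetry:.3f}")
```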


[11] 2410.16112

Dynamic Biases of Static Panel Data Estimators

This paper identifies an important bias, termed dynamic bias, in fixed effects panel estimators that arises when dynamic feedback is ignored in the estimating equation. Dynamic feedback occurs when past outcomes affect current outcomes, a feature of many settings ranging from economic growth to agricultural and labor markets. When estimating equations omit past outcomes, dynamic bias can lead to significantly inaccurate treatment effect estimates, even with randomly assigned treatments. In simulations, this dynamic bias is larger than the Nickell bias. I show that dynamic bias stems from the estimation of the fixed effects, as their estimation generates confounding in the data. To recover consistent treatment effects, I develop a flexible estimator that provides a fixed-T bias correction. I apply this approach to study the impact of temperature shocks on GDP, a canonical example where economic theory points to an important feedback from past to future outcomes. Accounting for dynamic bias lowers the estimated effects of higher yearly temperatures on GDP growth by 10% and on GDP levels by 120%.
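
The mechanism can be illustrated with a stylized Monte Carlo: outcomes feed back through a lag, treatment is randomly assigned, and a static within (fixed-effects) regression that omits the lag is nevertheless biased for the contemporaneous effect. The simulation below only demonstrates the phenomenon; it does not implement the paper's fixed-T bias correction, and all parameter values are made up.

```python
# Stylized Monte Carlo: dynamic feedback through a lagged outcome plus a
# randomly assigned treatment; the static fixed-effects (within) estimator
# that omits the lag is biased for the contemporaneous effect.
import numpy as np

def within(y, x):
    """Within (unit-demeaned) OLS slope for a balanced panel of shape (N, T)."""
    yd = y - y.mean(axis=1, keepdims=True)
    xd = x - x.mean(axis=1, keepdims=True)
    return (xd * yd).sum() / (xd ** 2).sum()

rng = np.random.default_rng(8)
N, T, rho, tau = 500, 6, 0.6, 1.0
est = []
for _ in range(200):
    alpha = rng.standard_normal(N)                   # unit fixed effects
    D = rng.integers(0, 2, (N, T)).astype(float)     # randomly assigned
    y = np.zeros((N, T))
    y_prev = rng.standard_normal(N)
    for t in range(T):
        y[:, t] = rho * y_prev + tau * D[:, t] + alpha + rng.standard_normal(N)
        y_prev = y[:, t]
    est.append(within(y, D))

print(f"true contemporaneous effect = {tau}, "
      f"mean static-FE estimate = {np.mean(est):.3f}")
```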


[12] 2410.16203

Feedback strategies in the market with uncertainties

We explore how dynamic entry deterrence operates through feedback strategies in markets experiencing stochastic demand fluctuations. The incumbent firm, aware of its own cost structure, can deter a potential competitor by strategically adjusting prices. The potential entrant faces a one-time, irreversible decision to enter the market, incurring a fixed cost, with profits determined by market conditions and the incumbent's hidden type. Market demand follows a Chan-Karolyi-Longstaff-Sanders (CKLS) diffusion. If demand is low, the threat of entry diminishes, making deterrence less advantageous. In equilibrium, a weak incumbent may be incentivized to reveal its type by raising prices. We derive an optimal equilibrium using path integral control, in which the entrant enters once demand reaches a sufficiently high level and the weak incumbent randomizes over whether to reveal its type when demand is sufficiently low.
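
As a small illustration of the demand process and the threshold logic described, the sketch below simulates a CKLS-type diffusion with an Euler-Maruyama scheme and records the first time demand crosses a hypothetical entry threshold. The parameter values and the threshold are invented; the paper derives the equilibrium threshold and mixing behavior via path integral control.

```python
# Euler-Maruyama simulation of a CKLS-type demand process,
#   dX_t = kappa * (theta - X_t) dt + sigma * X_t^gamma dW_t,
# with a hypothetical entry threshold: the entrant moves the first time
# demand reaches the threshold. All parameter values are made up.
import numpy as np

rng = np.random.default_rng(9)
kappa, theta, sigma, gamma = 2.0, 1.0, 0.3, 0.7
x0, dt, horizon = 0.8, 1 / 252, 10.0
entry_threshold = 1.3                      # hypothetical trigger level

n_steps = int(horizon / dt)
x = np.empty(n_steps + 1)
x[0] = x0
entry_time = None
for t in range(n_steps):
    dw = rng.normal(scale=np.sqrt(dt))
    drift = kappa * (theta - x[t]) * dt
    diffusion = sigma * max(x[t], 0.0) ** gamma * dw
    x[t + 1] = max(x[t] + drift + diffusion, 1e-8)   # keep demand positive
    if entry_time is None and x[t + 1] >= entry_threshold:
        entry_time = (t + 1) * dt

msg = f"{entry_time:.2f} years" if entry_time is not None else "not triggered"
print("entry time:", msg)
```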


[13] 2410.16214

Asymmetries in Financial Spillovers

This paper analyzes nonlinearities in the international transmission of financial shocks originating in the US. To do so, we develop a flexible nonlinear multi-country model. Our framework can produce asymmetries in the responses to financial shocks with respect to shock size and sign, and over time. We show that international reactions to US-based financial shocks are asymmetric along these dimensions. In particular, we find that adverse shocks trigger stronger declines in output, inflation, and stock markets than benign shocks. Furthermore, we investigate time variation in the estimated dynamic effects and characterize the responsiveness of three major central banks to financial shocks.


[14] 2410.14904

Switchback Price Experiments with Forward-Looking Demand

We consider a retailer running a switchback experiment on the price of a single product with infinite supply. In each period, the seller chooses a price $p$ from a set of predefined prices consisting of a reference price and a few discounted price levels. The goal is to estimate the demand gradient at the reference price in order to adjust the reference price and improve revenue after the experiment. In our model, a unit mass of buyers arrives on the market in each period, with values distributed according to a time-varying process. Crucially, buyers are forward-looking with discounted utility and will choose not to purchase now if they expect to face a discounted price in the near future. We show that forward-looking demand introduces bias in naive estimators of the demand gradient due to intertemporal interference. Furthermore, we prove that no estimator using data from price experiments with only two price points can recover the correct demand gradient, even in the limit of an infinitely long experiment with an infinitesimal price discount. Moreover, we characterize the form of the bias of naive estimators. Finally, we show that with a simple three-price-level experiment, the seller can remove the bias due to strategic forward-looking behavior and construct an estimator of the demand gradient that asymptotically recovers the truth.


[15] 2410.15238

Economic Anthropology in the Era of Generative Artificial Intelligence

This paper explores the intersection of economic anthropology and generative artificial intelligence (GenAI). It examines how large language models (LLMs) can simulate human decision-making and the inductive biases present in AI research. The study introduces two AI models: C.A.L.L.O.N. (Conventionally Average Late Liberal ONtology) and M.A.U.S.S. (More Accurate Understanding of Society and its Symbols). The former is trained on standard data, while the latter is adapted with anthropological knowledge. The research highlights how anthropological training can enhance LLMs' ability to recognize diverse economic systems and concepts. The findings suggest that integrating economic anthropology with AI can provide a more pluralistic understanding of economics and improve the sustainability of non-market economic systems.


[16] 2410.15286

LTPNet: Integration of Deep Learning and Environmental Decision Support Systems for Renewable Energy Demand Forecasting

Against the backdrop of increasingly severe global environmental change, accurately predicting and meeting renewable energy demand has become a key challenge for sustainable business development. Traditional energy demand forecasting methods often struggle with complex data processing and low prediction accuracy. To address these issues, this paper introduces a novel approach that combines deep learning techniques with environmental decision support systems. The model integrates advanced deep learning techniques, including LSTM and Transformer networks, with a particle swarm optimization (PSO) algorithm for parameter optimization, significantly enhancing predictive performance and practical applicability. Results show that our model achieves substantial improvements across various metrics, including a 30% reduction in MAE, a 20% decrease in MAPE, a 25% drop in RMSE, and a 35% decline in MSE. These results validate the model's effectiveness and reliability in renewable energy demand forecasting. This research provides valuable insights for applying deep learning in environmental decision support systems.
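
A hypothetical sketch of the kind of architecture described, an LSTM front end followed by a Transformer encoder and a linear forecasting head, is given below in PyTorch. All layer sizes are made up, and the PSO-based parameter optimization mentioned above is not reproduced.

```python
# Hypothetical LSTM + Transformer-encoder forecaster; sizes are illustrative.
import torch
import torch.nn as nn

class LstmTransformerForecaster(nn.Module):
    def __init__(self, n_features, hidden=64, heads=4, layers=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        enc_layer = nn.TransformerEncoderLayer(
            d_model=hidden, nhead=heads, dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.head = nn.Linear(hidden, 1)          # next-period demand

    def forward(self, x):                         # x: (batch, seq_len, n_features)
        h, _ = self.lstm(x)                       # (batch, seq_len, hidden)
        h = self.encoder(h)                       # self-attention over the sequence
        return self.head(h[:, -1, :]).squeeze(-1) # forecast from the last step

model = LstmTransformerForecaster(n_features=8)
dummy = torch.randn(32, 24, 8)                    # 32 series, 24 lags, 8 features
print(model(dummy).shape)                         # torch.Size([32])
```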


[17] 2410.15726

Reducing annotator bias by belief elicitation

Crowdsourced annotations of data play a substantial role in the development of Artificial Intelligence (AI). It is broadly recognised that annotations of text data can contain annotator bias, where systematic disagreement in annotations can be traced back to differences in the annotators' backgrounds. Being unaware of such annotator bias can lead to representational bias against minority group perspectives, and several methods have therefore been proposed for recognising bias or preserving perspectives. These methods typically require either a substantial number of annotators or a substantial number of annotations per data instance. In this study, we propose a simple method for handling bias in annotations without requirements on the number of annotators or instances. Instead, we ask annotators about their beliefs about other annotators' judgements of an instance, under the hypothesis that these beliefs may provide more representative and less biased labels than the judgements themselves. The method was examined in two controlled, survey-based experiments involving Democrats and Republicans (n=1,590) who were asked to judge statements as arguments and then report beliefs about others' judgements. The results indicate that bias, defined as systematic differences between the two groups of annotators, is consistently reduced when asking for beliefs instead of judgements. Our proposed method therefore has the potential to reduce the risk of annotator bias, thereby improving the generalisability of AI systems and preventing harm to unrepresented socio-demographic groups. We highlight the need for further studies of this potential in other tasks and downstream applications.
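
The bias measure described, a systematic gap in average labels between the two annotator groups, reduces to a simple computation; the sketch below evaluates it once for direct judgements and once for elicited beliefs. The data frame is filled with randomly generated placeholder labels, so the output implies nothing about the study's findings.

```python
# Group-gap bias measure, computed for judgements and for elicited beliefs.
# Labels below are random placeholders, not the study's data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(11)
n = 200
toy = pd.DataFrame({
    "group": np.repeat(["dem", "rep"], n // 2),
    "judgement": rng.integers(0, 2, n),   # did the annotator call it an argument?
    "belief": rng.integers(0, 2, n),      # belief about others' judgement
})

def group_gap(df, col):
    means = df.groupby("group")[col].mean()
    return abs(means["dem"] - means["rep"])

print("bias from judgements:", group_gap(toy, "judgement"))
print("bias from beliefs:   ", group_gap(toy, "belief"))
```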


[18] 2410.15818

Three connected problems: principal with multiple agents in cooperation, Principal-Agent with McKean-Vlasov dynamics, and multitask Principal-Agent

In this paper, we address three Principal-Agent problems in a moral hazard context and show that they are connected. We start by studying the problem of a Principal with multiple Agents in cooperation. Cooperation is manifested here by the fact that the agents optimize their criteria through Pareto equilibria. We show that as the number of agents tends to infinity, the principal's value function converges to the value function of a McKean-Vlasov control problem. Using the solution to this McKean-Vlasov control problem, we derive a constructive method for obtaining approximately optimal contracts for the principal's problem with multiple agents in cooperation. In a second step, we show that the problem of a Principal with multiple Agents also converges, as the number of agents goes to infinity, towards a new Principal-Agent problem: the Principal-Agent problem with McKean-Vlasov dynamics. This is a Principal-Agent problem in which the agent-controlled production follows McKean-Vlasov dynamics and the contract can depend on the distribution of the production. The value function of the principal in this setting is equivalent to that of the same McKean-Vlasov control problem from the multi-agent scenario. Furthermore, we show that an optimal contract can be constructed from the solution to this McKean-Vlasov control problem. We conclude by discussing, in a simple example, the connection of these problems with the multitask Principal-Agent problem, a situation in which a principal delegates multiple, possibly correlated tasks to a single agent.


[19] 2410.15938

Quantifying world geography as seen through the lens of Soviet propaganda

Cultural data typically contains a variety of biases. In particular, geographical locations are unequally portrayed in media, creating a distorted representation of the world. Identifying and measuring such biases is crucial to understand both the data and the socio-cultural processes that have produced them. Here we propose to measure geographical biases in a large historical news media corpus by studying the representation of cities. Leveraging ideas from quantitative urban science, we develop a mixed quantitative-qualitative procedure that allows us to obtain robust quantitative estimates of the biases; these estimates can then be interpreted qualitatively, resulting in a hermeneutic feedback loop. We apply this procedure to a corpus of the Soviet newsreel series 'Novosti Dnya' (News of the Day) and show that city representation grows super-linearly with city size and is further biased by city specialization and geographical location. This allows us to systematically identify geographical regions that are emphasized, explicitly or covertly, by Soviet propaganda and to quantify their importance.
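
The super-linear scaling claim corresponds to fitting an exponent in a power law relating media mentions to city size; a common way to do this is a log-log regression, sketched below on simulated city data. The population and mention counts are synthetic, not taken from the Novosti Dnya corpus, and the paper's full mixed quantitative-qualitative procedure is not reproduced.

```python
# Log-log regression of media mentions on city size: an exponent beta > 1
# indicates super-linear growth of attention with city size. Data are
# simulated for illustration.
import numpy as np

rng = np.random.default_rng(10)
population = rng.lognormal(mean=12, sigma=1.2, size=300)
true_beta = 1.3
mentions = np.exp(-12 * true_beta + true_beta * np.log(population)
                  + 0.4 * rng.standard_normal(300))

beta, intercept = np.polyfit(np.log(population), np.log(mentions), deg=1)
print(f"estimated scaling exponent beta = {beta:.2f}  (beta > 1: super-linear)")
```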