New articles on Economics


[1] 2508.19326

Delegated Contracting

A principal seeks to contract with an agent but must do so through an informed delegate. Although the principal cannot directly mediate the interaction, she can constrain the menus of contracts the delegate may offer. We show that the principal can implement any outcome that is implementable through a direct mechanism satisfying dominant strategy incentive compatibility and ex-post participation for the agent. We apply this result to several settings. First, we show that a government that delegates procurement to a budget-indulgent agency should delegate an interval of screening contracts. Second, we show that a seller can delegate sales to an intermediary without revenue loss, provided she can commit to a return policy. Third, in contrast to centralized mechanism design, we demonstrate that no partnership can be efficiently dissolved in the absence of a mediator. Finally, we discuss when delegated contracting obstructs efficiency, and when choosing the right delegate may help restore it.


[2] 2508.19553

How much does SNAP Matter? SNAP's Effects on Food Security

The Supplemental Nutrition Assistance Program (SNAP) aims to improve the food security of low-income households in the U.S. A new, continuous food security measure, the Probability of Food Security (PFS), which proxies for the official food security measure but can be constructed over longer periods, enables the study of SNAP's effects on the intensive margin. Using variation in state-level SNAP administrative policies as an instrument for individual SNAP participation, I find that SNAP has no significant effect on estimated food security on average, both in the entire population and in the low-income population, defined as those whose income falls below 130% of the poverty line at least once during the study period. SNAP has stronger positive effects on those whose estimated food security status lies in the middle of the distribution, but no significant effects in the tails of the distribution.
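
A minimal sketch of the instrumental-variables idea described in this abstract, using simulated data and hypothetical variable names (policy, snap, pfs); it is not the paper's specification, which involves the PFS construction, controls, and sample restrictions.

    # Just-identified IV / 2SLS sketch: a state policy instrument for SNAP
    # participation, with a continuous food security proxy as the outcome.
    # All data and coefficients below are simulated placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 5_000

    policy = rng.binomial(1, 0.5, n)           # instrument: state policy leniency (hypothetical)
    u = rng.normal(size=n)                     # unobserved confounder
    snap = (0.4 * policy + 0.5 * u + rng.normal(size=n) > 0.5).astype(float)  # endogenous participation
    pfs = 0.1 * snap - 0.3 * u + rng.normal(scale=0.5, size=n)                # outcome: PFS-style proxy

    X = np.column_stack([np.ones(n), snap])    # second-stage regressors (constant + participation)
    Z = np.column_stack([np.ones(n), policy])  # instruments (constant + policy)

    # Just-identified IV estimator: beta = (Z'X)^{-1} Z'y
    beta = np.linalg.solve(Z.T @ X, Z.T @ pfs)
    print(f"IV estimate of the SNAP effect on PFS: {beta[1]:.3f}")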


[3] 2508.19585

Preference for Verifiability

Decision makers may face situations in which they cannot observe the consequences that result from their actions. In such decisions, motivations other than the expected utility of consequences may play a role. The present paper axiomatically characterizes a decision model in which the decision maker cares about whether it can be ex post verified that a good consequence has been achieved. Preferences over acts uniquely characterize a set of events that the decision maker expects to be able to verify in case they occur. The decision maker chooses the act that maximizes the expectation, taken across verifiable events, of the utility of the worst possible consequence that may have occurred. For example, a firm choosing between carbon emission reduction technologies may find that some technologies leave more ex post uncertainty about the level of emission reduction than others. The firm may care about proving to its stakeholders that a certain amount of carbon reduction has been achieved and may employ privately obtained evidence to do so. It may choose technologies that are less efficient in expectation if the carbon reduction they achieve can be better verified with the evidence it expects to obtain.


[4] 2508.19625

Training for Obsolescence? The AI-Driven Education Trap

Artificial intelligence simultaneously transforms human capital production in schools and its demand in labor markets. Analyzing these effects in isolation can lead to a significant misallocation of educational resources. We model an educational planner whose decision to adopt AI is driven by its teaching productivity, failing to internalize AI's future wage-suppressing effect on those same skills. Our core assumption, motivated by a pilot survey, is that there is a positive correlation between these two effects. This drives our central proposition: this information failure creates a skill mismatch that monotonically increases with AI prevalence. Extensions show the mismatch is exacerbated by the neglect of unpriced non-cognitive skills and by a school's endogenous over-investment in AI. Our findings caution that policies promoting AI in education, if not paired with forward-looking labor market signals, may paradoxically undermine students' long-term human capital, especially if reliance on AI crowds out the development of unpriced non-cognitive skills, such as persistence, that are forged through intellectual struggle.


[5] 2508.19628

The Walras-Bowley Lecture: Fragmentation of Matching Markets and How Economics Can Help Integrate Them

Fragmentation of matching markets is a ubiquitous problem across countries and across applications. To study the implications of fragmentation and the possibilities for integration, we first document and discuss a variety of fragmentation cases in practice, such as school choice and medical residency matching. Using a real-life dataset of daycare matching markets in Japan, we then empirically evaluate the impact of interregional transfers of students by estimating student utility functions under a variety of specifications and using them for counterfactual simulation. Our simulation compares a fully integrated market with a partially integrated one subject to a "balancedness" constraint -- for each region, the inflow of students from other regions must equal the outflow to other regions. We find that partial integration achieves 39.2 to 59.6% of the increase in child welfare that can be attained under full integration, which is equivalent to a 3.3 to 4.9% reduction in travel time. The percentage decrease in the unmatch rate is 40.0 to 52.8% under partial integration compared to full integration. The results suggest that even in environments where full integration is not a realistic option, partial integration, i.e., integration that respects the balancedness constraint, has the potential to recover a nontrivial portion of the loss from fragmentation.
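
A small illustration of the balancedness constraint mentioned above: for every region, the inflow of students from other regions must equal the outflow to other regions. The toy matching below is hypothetical, not the estimated daycare assignment.

    # Check balancedness of inter-regional flows implied by a matching.
    # Each entry is (home_region, assigned_region) for one matched child (hypothetical).
    from collections import Counter

    matching = [("A", "A"), ("A", "B"), ("B", "A"), ("B", "C"), ("C", "B"), ("C", "C")]

    inflow = Counter(dst for src, dst in matching if src != dst)    # children arriving from outside
    outflow = Counter(src for src, dst in matching if src != dst)   # children leaving their region

    regions = {r for pair in matching for r in pair}
    balanced = all(inflow[r] == outflow[r] for r in regions)
    print("inflow:", dict(inflow), "outflow:", dict(outflow), "balanced:", balanced)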


[6] 2508.19676

Dynamic Delegation with Reputation Feedback

We study dynamic delegation with reputation feedback: a long-lived expert advises a sequence of implementers whose effort responds to current reputation, altering outcome informativeness and belief updates. We solve for a recursive, belief-based equilibrium and show that advice is a reputation-dependent cutoff in the expert's signal. A diagnosticity condition - failures at least as informative as successes - implies reputational conservatism: the cutoff (weakly) rises with reputation. Comparative statics are transparent: greater private precision or a higher good-state prior lowers the cutoff, whereas patience (value curvature) raises it. Reputation is a submartingale under competent types and a supermartingale under less competent types; we separate boundary hitting into learning (news generated infinitely often) versus no-news absorption. A success-contingent bonus implements any target experimentation rate with a plug-in calibration in a Gaussian benchmark. The framework yields testable predictions and a measurement map for surgery (operate vs. conservative care).


[7] 2508.19682

Public Persuasion with Endogenous Fact-Checking

We study public persuasion when a sender faces a mass audience that can verify the state at heterogeneous costs. The sender commits ex ante to a public information policy but must satisfy an ex post truthfulness constraint on verifiable content (EPIC). Receivers verify selectively, generating a verifying mass that depends on the public posterior mu. This yields an indirect value v(mu;F) and a concavification problem under implementability. Our main result is a reverse comparative static: when verification becomes cheaper (an FOSD improvement in F), v becomes more concave and the optimal public signal is strictly less informative (Blackwell). Intuitively, greater verifiability makes extreme claims invite scrutiny, so the sender optimally coarsens information - "confusion as strategy." We extend the model to two ex post instruments: falsification (continuous manipulation) and violence (a fixed-cost discrete tool), and characterize threshold substitutions from persuasion to manipulation and repression. The framework speaks to propaganda under improving fact-checking.


[8] 2508.19707

Risky Advice and Reputational Bias

We study expert advice under reputational incentives, with sell-side equity research as the lead application. A long-lived analyst receives a continuous private signal about a binary payoff and recommends a risky (Buy) or safe action. Recommendations and outcomes are public, and clients' implementation effort depends on current reputation. In a recursive, belief-based equilibrium: (i) advice follows a cutoff in the signal; (ii) under a simple diagnosticity asymmetry, the cutoff is (weakly) increasing in reputation (reputational conservatism); and (iii) comparative statics are transparent - higher signal precision or a higher success prior lowers the cutoff, whereas stronger career concerns raise it. A success-contingent bonus implements any target experimentation rate via a closed-form mapping. The model predicts that high-reputation analysts make fewer risky calls yet attain higher conditional hit rates, and it clarifies how committee thresholds and monitoring regimes shift behavior.


[9] 2508.19837

Contesting fake news

We model competition on a credence goods market governed by an imperfect label, signaling high quality, as a rank-order tournament between firms. In this market interaction, asymmetric firms jointly and competitively control the aggregate precision of a label ranking the competitors' qualities by releasing individual information. While the labels and the aggregated information they are based on can be seen as a public good guiding the consumers' purchasing decisions, individual firms have incentives to strategically amplify or counteract the competitors' information emission, thereby manipulating the aggregate precision of product labeling, i.e., the underlying ranking's discriminatory power. Elements of the introduced theory are applicable to several (credence-good) industries that employ labels or rankings, including academic departments, "green" certification, movies, and investment opportunities.


[10] 2508.19853

Inference on Partially Identified Parameters with Separable Nuisance Parameters: a Two-Stage Method

This paper develops a two-stage method for inference on partially identified parameters in moment inequality models with separable nuisance parameters. In the first stage, the nuisance parameters are estimated separately, and in the second stage, the identified set for the parameters of interest is constructed using a refined chi-squared test with variance correction that accounts for the first-stage estimation error. We establish the asymptotic validity of the proposed method under mild conditions and characterize its finite-sample properties. The method is broadly applicable to models where direct elimination of nuisance parameters is difficult or introduces conservativeness. Its practical performance is illustrated through an application: structural estimation of entry and exit costs in the U.S. vehicle market based on Wollmann (2018).
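
A schematic sketch of the two-stage logic in this abstract, under strong simplifying assumptions: a separable nuisance parameter is estimated in a first stage, plugged into hypothetical sample moment inequalities, and a candidate value of the parameter of interest is retained when a variance-corrected chi-squared-type criterion falls below its critical value. This is not the paper's exact test statistic, variance correction, or critical value.

    # Two-stage moment-inequality sketch with a simplified variance correction.
    # Moment functions, data, and the chi-squared critical value are illustrative only.
    import numpy as np
    from scipy import stats

    def in_identified_set(theta, data, alpha=0.05):
        # Stage 1: estimate the separable nuisance parameter (here simply a mean).
        gamma_hat = data[:, 0].mean()

        # Stage 2: sample moment inequalities g(theta, gamma_hat) >= 0 (hypothetical).
        g = np.column_stack([data[:, 1] - theta,
                             theta - data[:, 1] + gamma_hat])
        n, k = g.shape
        gbar = g.mean(axis=0)

        # Variance correction: the second moment depends on gamma_hat with unit
        # derivative, so its variance is inflated by first-stage sampling noise
        # (a simplified correction that ignores cross-covariances).
        jac = np.array([0.0, 1.0])
        V = np.cov(g, rowvar=False) + np.outer(jac, jac) * data[:, 0].var()

        # Chi-squared-type criterion that penalizes only violated inequalities.
        t = np.sqrt(n) * np.minimum(gbar, 0.0) / np.sqrt(np.diag(V))
        return float(t @ t) <= stats.chi2.ppf(1 - alpha, df=k)

    rng = np.random.default_rng(1)
    data = rng.normal(loc=[1.0, 2.0], scale=1.0, size=(500, 2))
    grid = np.linspace(0.0, 4.0, 41)
    print("retained values:", [round(float(th), 2) for th in grid if in_identified_set(th, data)])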


[11] 2508.20053

Misperception and informativeness in statistical discrimination

We study the interplay of information and prior (mis)perceptions in a Phelps-Aigner-Cain-type model of statistical discrimination in the labor market. We decompose the effect on average pay of an increase in how informative observables are about workers' skill into a non-negative instrumental component, reflecting increased surplus due to better matching of workers with tasks, and a perception-correcting component capturing how extra information diminishes the importance of prior misperceptions about the distribution of skills in the worker population. We sign the perception-correcting term: it is non-negative (non-positive) if the population was ex-ante under-perceived (over-perceived). We then consider the implications for pay gaps between equally-skilled populations that differ in information, perceptions, or both, and identify conditions under which improving information narrows pay gaps.


[12] 2508.20069

There must be an error here! Experimental evidence on coding errors' biases

Quantitative research relies heavily on coding, and coding errors are relatively common even in published research. In this paper, we examine whether individuals are more or less likely to check their code depending on the results they obtain. We test this hypothesis in a randomized experiment embedded in the recruitment process for research positions at a large international economic organization. In a coding task designed to assess candidates' programming abilities, we randomize whether participants obtain an expected or unexpected result if they commit a simple coding error. We find that individuals are 20% more likely to detect coding errors when they lead to unexpected results. This asymmetry in error detection, which depends on the results the errors generate, suggests that coding errors may lead to biased findings in scientific research.


[13] 2508.20075

Predicting Qualification Thresholds in UEFA's incomplete round-robin tournaments via a Dixon and Coles Model

In the 2024/25 season, UEFA introduced the incomplete round-robin format in the Champions League and Europa League, replacing the traditional group stage with a single league phase table of all 36 teams. Now, the top eight qualify directly for the round of 16, while teams ranked 9th-24th enter a play-off round. Existing simulation-based analyses, such as those by Opta, provide guidance on the points presumably needed for qualification but show discrepancies when compared with actual outcomes in the first season. We address this gap using a bivariate Dixon and Coles model to account for the frequency of draws, with team strengths proxied by Elo ratings. This framework allows us to simulate matches and derive qualification thresholds for direct and play-off advancement. Our results offer scientific guidance for clubs and managers, improving strategic decision-making under the new UEFA club competition formats.
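
A hedged sketch of the simulation pipeline this abstract describes, not the authors' calibration: hypothetical Elo ratings are mapped to Poisson scoring rates via an assumed functional form, the Dixon and Coles tau term adjusts the probabilities of low-scoring results, and repeated simulation of a 36-team league phase with 8 matches per team yields empirical points totals at the 8th and 24th places.

    # Dixon-Coles-style league phase simulation with assumed parameter values.
    import numpy as np
    from scipy.stats import poisson

    rng = np.random.default_rng(7)
    N_TEAMS, RHO, MAX_GOALS = 36, -0.1, 8          # rho and the goal cap are assumed values
    elo = rng.normal(1700, 120, N_TEAMS)           # hypothetical Elo ratings

    def dc_pmf(lam_h, lam_a, rho=RHO):
        # Joint pmf of (home, away) goals: independent Poissons with the
        # Dixon-Coles tau adjustment to the 0-0, 1-0, 0-1, 1-1 cells, renormalized.
        g = np.arange(MAX_GOALS + 1)
        p = np.outer(poisson.pmf(g, lam_h), poisson.pmf(g, lam_a))
        p[0, 0] *= 1 - lam_h * lam_a * rho
        p[0, 1] *= 1 + lam_h * rho
        p[1, 0] *= 1 + lam_a * rho
        p[1, 1] *= 1 - rho
        return p / p.sum()

    def rates(elo_h, elo_a):
        # Map the Elo difference to expected goals (an assumed functional form
        # with a built-in home advantage).
        edge = 10 ** ((elo_h - elo_a) / 800)
        return 1.4 * edge, 1.2 / edge

    # A simple 8-regular schedule (4 home, 4 away per team); not the actual UEFA draw.
    fixtures = [(i, (i + d) % N_TEAMS) for i in range(N_TEAMS) for d in range(1, 5)]

    def simulate_season():
        pts = np.zeros(N_TEAMS, dtype=int)
        for h, a in fixtures:
            pmf = dc_pmf(*rates(elo[h], elo[a]))
            idx = rng.choice(pmf.size, p=pmf.ravel())
            gh, ga = divmod(int(idx), MAX_GOALS + 1)
            if gh > ga:
                pts[h] += 3
            elif gh < ga:
                pts[a] += 3
            else:
                pts[h] += 1
                pts[a] += 1
        return np.sort(pts)[::-1]

    cut8, cut24 = zip(*((s[7], s[23]) for s in (simulate_season() for _ in range(500))))
    print("typical points of 8th place :", np.percentile(cut8, [25, 50, 75]))
    print("typical points of 24th place:", np.percentile(cut24, [25, 50, 75]))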


[14] 2011.04306

Intensity-Efficient Allocations

This paper proposes a refinement of Pareto-efficient allocations for situations where, in addition to having ordinal preferences, agents also have ordinal intensities: they can make comparisons such as "I prefer a to b more than I prefer c to d", without necessarily being able to quantify them. A rank-based criterion for interpersonal comparisons of such ordinal intensities is introduced for this new analytical environment. Building on this, an allocation is defined to be intensity-efficient if it is Pareto-efficient with respect to the agents' preferences and also such that when another Pareto-efficient allocation assigns the same pairs of items to the same pairs of agents but in a "flipped" way, the former allocation assigns the commonly preferred item in every such pair to the agent who prefers it more. Conditions are established under which such Pareto-refining allocations exist. The potential usefulness of this theory in matching problems is illustrated with a quadratic-time extension of the Random Priority (RP) algorithm that returns an allocation which intensity-dominates RP's Pareto-efficient one.
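
For reference, a minimal sketch of the baseline Random Priority (random serial dictatorship) mechanism that the paper's quadratic-time extension builds on; the intensity-based refinement step itself is not reproduced here, and the preferences are hypothetical.

    # Random Priority: agents draw a uniformly random order; each agent, in turn,
    # takes their most preferred item among those still available.
    import random

    def random_priority(preferences, seed=None):
        # preferences: dict mapping each agent to a list of items, most preferred first.
        rng = random.Random(seed)
        agents = list(preferences)
        rng.shuffle(agents)
        remaining = set().union(*map(set, preferences.values()))
        allocation = {}
        for agent in agents:
            pick = next(item for item in preferences[agent] if item in remaining)
            allocation[agent] = pick
            remaining.remove(pick)
        return allocation

    prefs = {"Ann": ["a", "b", "c"], "Bob": ["a", "c", "b"], "Cid": ["b", "a", "c"]}
    print(random_priority(prefs, seed=1))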


[15] 2504.13375

Pricing AI Model Accuracy

This paper examines the market for AI models in which firms compete to provide accurate model predictions and consumers exhibit heterogeneous preferences for model accuracy. We develop a consumer-firm duopoly model to analyze how competition affects firms' incentives to improve model accuracy. Each firm aims to minimize its model's error, but this choice can often be suboptimal. Counterintuitively, we find that in a competitive market, firms that improve overall accuracy do not necessarily improve their profits. Rather, each firm's optimal decision is to invest further in the error dimension where it has a competitive advantage. By decomposing model errors into false positive and false negative rates, firms can reduce errors in each dimension through investments. Firms are strictly better off investing in their superior dimension and strictly worse off investing in their inferior dimension. Profitable investments adversely affect consumers but increase overall welfare.
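
A small numerical illustration of the error decomposition used above: overall model error splits into a false positive rate and a false negative rate, the two dimensions along which firms can invest. The labels and predictions below are simulated placeholders.

    # Decompose overall classification error into FPR and FNR components.
    import numpy as np

    rng = np.random.default_rng(5)
    truth = rng.binomial(1, 0.4, 10_000)                             # hypothetical ground truth
    pred = np.where(truth == 1,
                    rng.binomial(1, 0.85, truth.size),               # sensitivity of the model (assumed)
                    rng.binomial(1, 0.10, truth.size))               # false alarm propensity (assumed)

    fpr = ((pred == 1) & (truth == 0)).sum() / (truth == 0).sum()    # false positive rate
    fnr = ((pred == 0) & (truth == 1)).sum() / (truth == 1).sum()    # false negative rate
    err = (pred != truth).mean()                                     # overall error rate

    # In-sample identity: error = P(y=0)*FPR + P(y=1)*FNR
    print(f"FPR={fpr:.3f}  FNR={fnr:.3f}  overall error={err:.3f}  "
          f"check={(truth == 0).mean() * fpr + (truth == 1).mean() * fnr:.3f}")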


[16] 2508.13076

The purpose of an estimator is what it does: Misspecification, estimands, and over-identification

In over-identified models, misspecification -- the norm rather than the exception -- fundamentally changes what estimators estimate. Different estimators imply different estimands rather than different efficiency for the same target. A review of recent applications of generalized method of moments in the American Economic Review suggests widespread acceptance of this fact: There is little formal specification testing and widespread use of estimators that would be inefficient were the model correct, including the use of "hand-selected" moments and weighting matrices. Motivated by these observations, we review and synthesize recent results on estimation under model misspecification, providing guidelines for transparent and robust empirical research. We also provide a new theoretical result, showing that Hansen's J-statistic measures, asymptotically, the range of estimates achievable at a given standard error. Given the widespread use of inefficient estimators and the resulting researcher degrees of freedom, we thus particularly recommend the broader reporting of J-statistics.
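
A minimal sketch of computing Hansen's J-statistic in an over-identified linear GMM/IV model, the quantity whose broader reporting the paper recommends; the data-generating process, instrument set, and two-step weighting below are illustrative, not the paper's applications.

    # Two-step GMM for a linear IV model with 3 instruments and 1 endogenous regressor,
    # followed by Hansen's J-statistic (chi-squared with 2 degrees of freedom if the
    # model is correctly specified). All data are simulated.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 2_000
    z = rng.normal(size=(n, 3))                       # instruments
    u = rng.normal(size=n)                            # structural error
    x = z @ np.array([1.0, 0.5, 0.2]) + 0.8 * u + rng.normal(size=n)
    y = 2.0 * x + u                                   # true coefficient: 2.0

    def gmm_beta(W):
        # Linear GMM estimator for the moments E[z_i (y_i - x_i beta)] = 0 with weight W.
        zx, zy = z.T @ x / n, z.T @ y / n
        return (zx @ W @ zy) / (zx @ W @ zx)

    beta1 = gmm_beta(np.linalg.inv(z.T @ z / n))      # first step: 2SLS-type weighting
    g1 = z * (y - x * beta1)[:, None]
    beta2 = gmm_beta(np.linalg.inv(g1.T @ g1 / n))    # second step: efficient weighting

    g2 = z * (y - x * beta2)[:, None]                 # moment contributions at beta2
    S = g2.T @ g2 / n
    gbar = g2.mean(axis=0)
    J = n * gbar @ np.linalg.inv(S) @ gbar            # Hansen's J-statistic
    print(f"two-step GMM estimate: {beta2:.3f}, J-statistic: {J:.2f}")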


[17] 2308.00913

The Bayesian Context Trees State Space Model for time series modelling and forecasting

A hierarchical Bayesian framework is introduced for developing tree-based mixture models for time series, partly motivated by applications in finance and forecasting. At the top level, meaningful discrete states are identified as appropriately quantised values of some of the most recent samples. At the bottom level, a different, arbitrary base model is associated with each state. This defines a very general framework that can be used in conjunction with any existing model class to build flexible and interpretable mixture models. We call this the Bayesian Context Trees State Space Model, or the BCT-X framework. Appropriate algorithmic tools are described, which allow for effective and efficient Bayesian inference and learning; these algorithms can be updated sequentially, facilitating online forecasting. The utility of the general framework is illustrated in the particular instances when AR or ARCH models are used as base models. The latter results in a mixture model that offers a powerful way of modelling the well-known volatility asymmetries in financial data, revealing a novel, important feature of stock market index data, in the form of an enhanced leverage effect. In forecasting, the BCT-X methods are found to outperform several state-of-the-art techniques, both in terms of accuracy and computational requirements.
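
A toy sketch of the state-space idea described above, not the full BCT-X machinery: the most recent sample is quantised into a small discrete alphabet to define the state, and a separate AR(1)-style base model is fitted to the observations occurring in each state. The Bayesian context-tree prior, sequential updating, and model averaging of the actual framework are omitted, and the series and quantisation thresholds are hypothetical.

    # Quantised-state mixture of simple AR(1)-style base models (toy illustration).
    import numpy as np

    rng = np.random.default_rng(11)
    x = np.cumsum(rng.normal(size=1_000)) * 0.1 + rng.normal(size=1_000)   # synthetic series

    def quantise(v, edges=(-0.5, 0.5)):
        # Map the previous sample to a discrete state: 0 (low), 1 (middle), 2 (high).
        return int(np.searchsorted(edges, v))

    # Group one-step transitions x[t-1] -> x[t] by the state of x[t-1].
    states = np.array([quantise(v) for v in x[:-1]])
    fits = {}
    for s in np.unique(states):
        prev, nxt = x[:-1][states == s], x[1:][states == s]
        slope, intercept = np.polyfit(prev, nxt, 1)   # least-squares AR(1)-style fit for this state
        fits[s] = (intercept, slope)

    # One-step-ahead forecast: apply the base model associated with the current state.
    current_state = quantise(x[-1])
    c, phi = fits[current_state]
    print(f"state {current_state}: one-step forecast = {c + phi * x[-1]:.3f}")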