New articles on Economics


[1] 2507.08159

Long-term Health and Human Capital Effects of Early-Life Economic Conditions

We study the long-term health and human capital impacts of local economic conditions experienced during the first 1,000 days of life. We combine historical data on monthly unemployment rates in urban England and Wales, 1952-1967, with data from the UK Biobank on later-life outcomes. Leveraging variation in unemployment driven by national industry-specific shocks weighted by each industry's local importance, we find no evidence that small, common fluctuations in local economic conditions during the early-life period affect health or human capital in older age.
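
For readers unfamiliar with this identification strategy, the paragraph above describes a shift-share (Bartik-style) exposure measure. A minimal sketch follows, assuming hypothetical column names and toy numbers rather than the paper's data.

    import pandas as pd

    # Minimal shift-share (Bartik) exposure sketch: national industry-specific
    # shocks weighted by each area's baseline industry shares. All column names
    # and numbers are illustrative placeholders, not the paper's data.
    def bartik_exposure(shares, shocks):
        # shares: rows of (area, industry, base_share); shocks: (industry, national_shock)
        merged = shares.merge(shocks, on="industry")
        merged["contribution"] = merged["base_share"] * merged["national_shock"]
        return merged.groupby("area")["contribution"].sum()

    shares = pd.DataFrame({
        "area": ["A", "A", "B", "B"],
        "industry": ["manufacturing", "services", "manufacturing", "services"],
        "base_share": [0.7, 0.3, 0.2, 0.8],
    })
    shocks = pd.DataFrame({"industry": ["manufacturing", "services"],
                           "national_shock": [2.0, -0.5]})
    print(bartik_exposure(shares, shocks))  # area A is far more exposed to the manufacturing shock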


[2] 2507.08222

Do Temporary Workers Face Higher Wage Markdowns? Evidence from India's Automotive Sector

Are temporary workers subject to different wage markdowns than permanent workers? This paper examines productivity, output markups, and wage markdowns in India's automotive sector during 2000--2020. I develop a model integrating CES production, nested logit labor supply, and differentiated labor market conduct: Nash-Bertrand wage setting for temporary workers versus Nash bargaining for unionized permanent workers. Results reveal declining output markups as marginal costs outpace prices through productivity deceleration. Rising labor-augmenting productivity cannot offset declining Hicks-neutral productivity, reducing overall TFP. Labor market power substantially compresses worker compensation: wage markdowns persist at 40% for temporary workers and 10% for permanent workers.
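
For reference, a wage markdown is often defined as the proportional gap between the marginal revenue product of labor (MRPL) and the wage; the paper's exact convention may differ. Under this common definition, the figures above imply:

    \[ \nu = \frac{\mathrm{MRPL} - w}{\mathrm{MRPL}}, \qquad \nu = 0.40 \;\Rightarrow\; w = 0.6\,\mathrm{MRPL}, \qquad \nu = 0.10 \;\Rightarrow\; w = 0.9\,\mathrm{MRPL}. \]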


[3] 2507.08244

Advancing AI Capabilities and Evolving Labor Outcomes

This study investigates the labor market consequences of AI by analyzing near real-time changes in employment status and work hours across occupations in relation to advances in AI capabilities. We construct a dynamic Occupational AI Exposure Score based on a task-level assessment using state-of-the-art AI models, including ChatGPT 4o and Anthropic Claude 3.5 Sonnet. We introduce a five-stage framework that evaluates how AI's capability to perform tasks in occupations changes as technology advances from traditional machine learning to agentic AI. The Occupational AI Exposure Scores are then linked to the US Current Population Survey, allowing for near real-time analysis of employment, unemployment, work hours, and full-time status. We conduct a first-differenced analysis comparing the period from October 2022 to March 2023 with the period from October 2024 to March 2025. Higher exposure to AI is associated with reduced employment, higher unemployment rates, and shorter work hours. We also observe some evidence of increased secondary job holding and a decrease in full-time employment among certain demographics. These associations are more pronounced among older and younger workers, men, and college-educated individuals. College-educated workers tend to experience smaller declines in employment but are more likely to see changes in work intensity and job structure. In addition, occupations that rely heavily on complex reasoning and problem-solving tend to experience larger declines in full-time work and overall employment in association with rising AI exposure. In contrast, those involving manual physical tasks appear less affected. Overall, the results suggest that AI-driven shifts in labor are occurring along both the extensive margin (unemployment) and the intensive margin (work hours), with varying effects across occupational task content and demographics.
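
As an illustration of the kind of first-differenced, occupation-level association described above, here is a minimal sketch; the exposure scores, outcome changes, and variable names are simulated placeholders, not the study's data or specification.

    import numpy as np
    import statsmodels.api as sm

    # Minimal sketch of a first-differenced, occupation-level regression: the
    # change in an outcome (e.g., the employment rate) between the two windows
    # is regressed on the occupational AI exposure score. All values below are
    # simulated placeholders.
    rng = np.random.default_rng(0)
    n_occ = 200
    exposure = rng.uniform(0, 1, n_occ)                    # AI exposure score per occupation
    d_outcome = -0.5 * exposure + rng.normal(0, 1, n_occ)  # simulated change in outcome

    X = sm.add_constant(exposure)
    fit = sm.OLS(d_outcome, X).fit(cov_type="HC1")         # heteroskedasticity-robust SEs
    print(fit.params, fit.pvalues)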


[4] 2507.08512

From Revolution to Ruin: An Empirical Analysis of Yemen's State Collapse

We assess the broad repercussions of Yemen's 2011 revolution and subsequent civil war on its macroeconomic trajectories, human development, and quality of governance by constructing counterfactual benchmarks using a balanced panel of 37 developing countries over 1990-2022. Drawing on matrix-completion estimators with alternative shrinkage regimes and a LASSO-augmented synthetic-control method, we generate Yemen's hypothetical no-conflict paths for key macroeconomic aggregates, demographic and health indicators, and governance metrics. Across the full spectrum of methods, the conflict's outbreak corresponds with a dramatic reversal of economic and institutional development. We find that output and income experience an unprecedented contraction, investment and trade openness deteriorate sharply, and gains in life expectancy and human development are broadly reversed. Simultaneously, measures of political accountability, administrative capacity, rule of law, and corruption control collapse, reflecting systemic institutional breakdown. The concordance of results across a variety of empirical strategies attests to the robustness of our estimates.
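
The counterfactual construction described above can be illustrated, in heavily simplified form, with a LASSO-based synthetic-control sketch: regress the treated unit's pre-conflict outcome path on donor countries' paths and project the fitted combination forward. The snippet uses simulated data and is not the paper's estimator (which also includes matrix-completion methods with alternative shrinkage regimes).

    import numpy as np
    from sklearn.linear_model import Lasso

    # Heavily simplified LASSO-based synthetic-control sketch: fit sparse donor
    # weights on the pre-conflict period, then project a hypothetical
    # "no-conflict" path into the post-conflict period. Data are simulated.
    rng = np.random.default_rng(1)
    T_pre, T_post, n_donors = 20, 10, 37
    donors_pre = rng.normal(size=(T_pre, n_donors))
    treated_pre = donors_pre[:, :3].mean(axis=1) + rng.normal(0, 0.1, size=T_pre)
    donors_post = rng.normal(size=(T_post, n_donors))

    sc = Lasso(alpha=0.05).fit(donors_pre, treated_pre)   # sparse donor weights
    counterfactual_post = sc.predict(donors_post)         # hypothetical no-conflict path
    print(counterfactual_post.round(2))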


[5] 2507.08764

Propensity score with factor loadings: the effect of the Paris Agreement

Factor models for longitudinal data, where policy adoption is unconfounded with respect to a low-dimensional set of latent factor loadings, have become increasingly popular for causal inference. Most existing approaches, however, rely on a finite-sample causal framework or computationally intensive methods, limiting their applicability and external validity. In this paper, we propose a novel causal inference method for panel data based on inverse propensity score weighting, where the propensity score is a function of latent factor loadings, within a super-population framework of causal inference. The approach relaxes the traditional restrictive assumptions of causal panel methods, while offering advantages in terms of causal interpretability, policy relevance, and computational efficiency. Under standard assumptions, we outline a three-step estimation procedure for the ATT and derive its large-sample properties using M-estimation theory. We apply the method to assess the causal effect of the Paris Agreement, a policy aimed at fostering the transition to a low-carbon economy, on European stock returns. Our empirical results suggest a statistically significant and negative short-run effect on the stock returns of firms that issued green bonds.
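
For orientation, a standard Hajek-type inverse-propensity-weighted ATT estimator, with the propensity score evaluated at estimated factor loadings \(\hat\lambda_i\), takes the form below; this is a generic weighting formula rather than the paper's exact three-step procedure.

    \[
      \widehat{\mathrm{ATT}}
      = \frac{\sum_i D_i Y_i}{\sum_i D_i}
      - \frac{\sum_i (1 - D_i)\,\frac{\hat e(\hat\lambda_i)}{1 - \hat e(\hat\lambda_i)}\, Y_i}
             {\sum_i (1 - D_i)\,\frac{\hat e(\hat\lambda_i)}{1 - \hat e(\hat\lambda_i)}},
    \]

where \(D_i\) indicates policy adoption, \(Y_i\) is the outcome, and \(\hat e(\cdot)\) is the estimated propensity score.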


[6] 2507.08019

Signal or Noise? Evaluating Large Language Models in Resume Screening Across Contextual Variations and Human Expert Benchmarks

This study investigates whether large language models (LLMs) exhibit consistent behavior (signal) or random variation (noise) when screening resumes against job descriptions, and how their performance compares to human experts. Using controlled datasets, we tested three LLMs (Claude, GPT, and Gemini) across contexts (No Company, Firm1 [MNC], Firm2 [Startup], Reduced Context) with identical and randomized resumes, benchmarked against three human recruitment experts. Analysis of variance revealed significant mean differences in four of eight LLM-only conditions and consistently significant differences between LLM and human evaluations (p < 0.01). Paired t-tests showed GPT adapts strongly to company context (p < 0.001), Gemini partially (p = 0.038 for Firm1), and Claude minimally (p > 0.1), while all LLMs differed significantly from human experts across contexts. Meta-cognition analysis highlighted adaptive weighting patterns that differ markedly from human evaluation approaches. Findings suggest LLMs offer interpretable patterns with detailed prompts but diverge substantially from human judgment, informing their deployment in automated hiring systems.
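
As a pointer to the kind of test reported above, a paired t-test compares the same resumes scored by one model under two contexts. The sketch below uses simulated scores and hypothetical condition names, not the study's data.

    import numpy as np
    from scipy import stats

    # Illustrative paired t-test: the same resumes scored by one model under two
    # contexts (e.g., "No Company" vs. "Firm1"). Scores are simulated placeholders.
    rng = np.random.default_rng(42)
    scores_no_company = rng.normal(70, 5, size=30)
    scores_firm1 = scores_no_company + rng.normal(2, 3, size=30)  # simulated context shift

    t_stat, p_value = stats.ttest_rel(scores_firm1, scores_no_company)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")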


[7] 2112.03872

Nonparametric Treatment Effect Identification in School Choice

This paper studies nonparametric identification and estimation of causal effects in centralized school assignment. In many centralized assignment algorithms, students are subjected to both lottery-driven variation and regression discontinuity (RD) driven variation. We characterize the full set of identified atomic treatment effects (aTEs), defined as the conditional average treatment effect between a pair of schools, given student characteristics. Atomic treatment effects are the building blocks of more aggregated notions of treatment contrasts, and common approaches to estimating aggregations of aTEs can mask important heterogeneity. In particular, many aggregations of aTEs put zero weight on aTEs driven by RD variation, and estimators of such aggregations put asymptotically vanishing weight on the RD-driven aTEs. We provide a diagnostic and recommend new aggregation schemes. Lastly, we provide estimators and accompanying asymptotic results for inference for those aggregations.


[8] 2305.07362

Advancing the analysis of resilience of global phosphate flows

This paper introduces a novel method for estimating material flows, with a focus on tracing phosphate flows from mining countries to those using phosphate in agricultural production. Our approach integrates data on phosphate rock extraction, fertilizer use, and international trade of phosphate-related products. A key advantage of this method is that it does not require detailed data on material concentrations, as these are indirectly estimated within the model. We demonstrate that our model can reconstruct country-level phosphate flow matrices with a high degree of accuracy, thereby enhancing traditional material flow analyses. This method bridges the gap between conventional material flow analysis and the economic analysis of resilience of national supply chains, and it is applicable not only to phosphorus but also to other resource flows. We show how the estimated flows can support country-specific assessments of supply security: while global phosphate flows appear moderately concentrated, country-level analyses reveal significant disparities in import dependencies and, in some cases, substantially higher supplier concentration.
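
The supplier-concentration comparisons mentioned above are commonly quantified with a Herfindahl-Hirschman index (HHI) over import shares; whether the paper uses exactly this index is not stated, so the sketch below is only illustrative.

    import numpy as np

    # Herfindahl-Hirschman index (HHI) over a country's phosphate import shares,
    # one standard measure of supplier concentration. The share vectors are
    # illustrative, not estimates from the paper.
    def hhi(import_values):
        shares = np.asarray(import_values, dtype=float)
        shares = shares / shares.sum()    # convert to import shares
        return float(np.sum(shares ** 2)) # from 1/n (diversified) to 1 (single supplier)

    print(hhi([60, 30, 10]))      # one dominant supplier -> 0.46
    print(hhi([25, 25, 25, 25]))  # evenly split across four suppliers -> 0.25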


[9] 2401.07337

Individual and Collective Welfare in Risk Sharing with Many States

We study efficient risk sharing among risk-averse agents in an economy with a large, finite number of states. Following a random shock to an initial agreement, agents may renegotiate. If they require a minimal utility improvement to accept a new deal, we show the probability of finding a mutually acceptable allocation vanishes exponentially as the state space grows. This holds regardless of agents' degree of risk aversion. In a two-agent multiple-priors model, we find that the potential for Pareto-improving trade requires that at least one agent's set of priors has a vanishingly small measure. Our results hinge on the ``shape does not matter'' message of high-dimensional isoperimetric inequalities.


[10] 2407.21119

Potential weights and implicit causal designs in linear regression

When we interpret linear regression as estimating causal effects justified by quasi-experimental treatment variation, what do we mean? This paper characterizes the necessary implications when linear regressions are interpreted causally. A minimal requirement for causal interpretation is that the regression estimates some contrast of individual potential outcomes under the true treatment assignment process. This requirement implies linear restrictions on the true distribution of treatment. Solving these linear restrictions leads to a set of implicit designs. Implicit designs are plausible candidates for the true design if the regression were to be causal. The implicit designs serve as a framework that unifies and extends existing theoretical results across starkly distinct settings (including multiple treatment, panel, and instrumental variables). They lead to new theoretical insights for widely used but less understood specifications.


[11] 2501.05022

An Instrumental Variables Approach to Testing Firm Conduct under a Bertrand-Nash Framework

Understanding firm conduct is crucial for industrial organization and antitrust policy. In this article, we develop a testing procedure based on the Rivers and Vuong non-nested model selection framework. Unlike existing methods that require estimating the demand and supply system, our approach compares the model fit of two first-stage price regressions. Through an extensive Monte Carlo study, we demonstrate that our test performs comparably to, or outperforms, existing methods in detecting collusion across various collusive scenarios. The results are robust to model misspecification, alternative functional forms for instruments, and data limitations. By simplifying the diagnosis of firm behavior, our method offers researchers and regulators an efficient tool for assessing industry conduct under a Bertrand oligopoly framework. Additionally, our approach offers a practical guideline for enhancing the strength of BLP-style instruments in demand estimation: once collusion is detected, researchers are advised to incorporate the product characteristics of colluding partners into own-firm instruments while excluding them from other-firm instruments.


[12] 2502.07126

Decision theory and the "almost implies near" phenomenon

This paper explores relaxing behavioral axioms in decision theory. We demonstrate that when a preference approximately satisfies a key axiom (like independence or stationarity), its utility representation is close to a standard model. The degree of the axiom's violation quantitatively determines this proximity. Interestingly, we show that in some cases, a relaxed axiom, when combined with another property like homotheticity, still implies an exact representation. We establish these ``almost implies near'' results for choice under risk, uncertainty, and intertemporal choice, connecting axiomatic deviations to the notion of approximate optimization.


[13] 2503.23524

Reinterpreting demand estimation

This paper bridges the demand estimation and causal inference literatures by interpreting nonparametric structural assumptions as restrictions on counterfactual outcomes. It offers nontrivial and equivalent restatements of key demand estimation assumptions in the Neyman-Rubin potential outcomes model, for both settings with market-level data (Berry and Haile, 2014) and settings with demographic-specific market shares (Berry and Haile, 2024). The reformulation highlights a latent homogeneity assumption underlying structural demand models: The relationship between counterfactual outcomes is assumed to be identical across markets. This assumption is strong, but necessary for identification of market-level counterfactuals. Viewing structural demand models as misspecified but approximately correct reveals a tradeoff between specification flexibility and robustness to latent homogeneity.


[14] 2504.11436

Shifting Work Patterns with Generative AI

We present evidence on how generative AI changes the work patterns of knowledge workers using data from a 6-month-long, cross-industry, randomized field experiment. Half of the 7,137 workers in the study received access to a generative AI tool integrated into the applications they already used for emails, document creation, and meetings. We find that access to the AI tool during the first year of its release primarily impacted behaviors that workers could change independently and not behaviors that require coordination to change: workers who used the tool in more than half of the sample weeks spent 3.6 fewer hours, or 31% less time on email each week (intent to treat estimate is 1.3 hours) and completed documents moderately faster, but did not significantly change time spent in meetings.
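
A back-of-envelope consistency check of the figures above (assuming both refer to the same baseline): 3.6 fewer hours amounting to 31% less email time implies a baseline of roughly \(3.6 / 0.31 \approx 11.6\) hours of email per week among regular users of the tool.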


[15] 2507.05844

Beyond Scalars: Zonotope-Valued Utility for Representation of Multidimensional Incomplete Preferences

In this paper, I propose a new framework for representing multidimensional incomplete preferences through zonotope-valued utilities, addressing the shortcomings of traditional scalar and vector-based models in decision theory. Traditional approaches assign single numerical values to alternatives, failing to capture the complexity of preferences where alternatives remain incomparable due to conflicting criteria across multiple dimensions. Our method maps each alternative to a zonotope, a convex geometric object in \(\mathbb{R}^m\) formed by Minkowski sums of intervals, which encapsulates the multidimensional structure of preferences with mathematical rigor. The set-valued nature of these payoffs stems from multiple sources, including non-probabilistic uncertainty, such as imprecise utility evaluation due to incomplete information about criteria weights, and probabilistic uncertainty arising from stochastic decision environments. By decomposing preference relations into interval orders and utilizing an extended set difference operator, we establish a rigorous axiomatization that defines preference as one alternative's zonotope differing from another's within the non-negative orthant of \(\mathbb{R}^m\). This framework generalizes existing representations and provides a visually intuitive and theoretically robust tool for modeling trade-offs across dimensions while some alternatives remain incomparable.
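
To make the geometric object concrete, the sketch below enumerates the candidate vertices of a zonotope written as a Minkowski sum of segments in \(\mathbb{R}^2\); it is purely illustrative and does not implement the paper's axiomatization or its extended set-difference operator.

    import itertools
    import numpy as np

    # Illustrative zonotope in R^2 as a Minkowski sum of line segments:
    # Z = {c + sum_k t_k * g_k : t_k in [0, 1]} for generators g_k. Enumerating
    # the 0/1 coefficient combinations yields candidate vertices whose convex
    # hull is the zonotope.
    def zonotope_candidate_vertices(center, generators):
        points = []
        for coeffs in itertools.product([0.0, 1.0], repeat=len(generators)):
            points.append(center + sum(t * g for t, g in zip(coeffs, generators)))
        return np.unique(np.round(np.array(points), 10), axis=0)

    center = np.array([0.0, 0.0])
    generators = [np.array([1.0, 0.0]), np.array([0.5, 1.0]), np.array([0.0, 0.5])]
    print(zonotope_candidate_vertices(center, generators))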


[16] 2402.07521

A step towards the integration of machine learning and classic model-based survey methods

The usage of machine learning methods in traditional surveys, including official statistics, is still very limited. Therefore, we propose a predictor supported by these algorithms, which can be used to predict any population or subpopulation characteristics. Machine learning methods have already been shown to be very powerful in identifying and modelling complex and nonlinear relationships between variables, which means they have very good properties in the case of strong departures from the classic assumptions. Therefore, we analyse the performance of our proposal under a different set-up which, in our opinion, is of greater importance in real-life surveys. We study only small departures from the assumed model to show that our proposal is a good alternative, even in comparison with optimal methods under the model. Moreover, we propose a method for the ex ante accuracy estimation of machine learning predictors, giving the possibility of comparing their accuracy with that of classic methods. The solution to this problem is indicated in the literature as one of the key issues in integrating these approaches. The simulation studies are based on a real, longitudinal dataset, where the prediction of subpopulation characteristics is considered.


[17] 2504.13959

AI Safety Should Prioritize the Future of Work

Current efforts in AI safety prioritize filtering harmful content, preventing manipulation of human behavior, and eliminating existential risks in cybersecurity or biosecurity. While pressing, this narrow focus overlooks critical human-centric considerations that shape the long-term trajectory of a society. In this position paper, we identify the risks of overlooking the impact of AI on the future of work and recommend comprehensive transition support towards the evolution of meaningful labor with human agency. Through the lens of economic theories, we highlight the intertemporal impacts of AI on human livelihood and the structural changes in labor markets that exacerbate income inequality. Additionally, the closed-source approach of major stakeholders in AI development resembles rent-seeking behavior through exploiting resources, breeding mediocrity in creative labor, and monopolizing innovation. To address this, we argue in favor of a robust international copyright anatomy supported by implementing collective licensing that ensures fair compensation mechanisms for using data to train AI models. We strongly recommend a pro-worker framework of global AI governance to enhance shared prosperity and economic justice while reducing technical debt.