New articles on Economics


[1] 2605.12284

A Grid-Rate Condition for Valid Uniform Inference

Estimating a continuous function $F: \mathcal{X} \to \mathbb{R}$ involves specifying $L_n^d$ nodes on $\mathcal{X} \subset \mathbb{R}^d$ for estimation and uniform inference. While asymptotically valid inference requires $L_n$ to increase with $n$, existing fixed-$L$ rules of thumb and heuristic data-driven approaches lack formal justification. This paper shows that, for functions within a Donsker class, the simple grid-growth condition $L_n = \omega(r_n^{1/4})$ is sufficient for valid uniform inference on twice continuously differentiable functions estimable at the $r_n^{1/2}$ rate. This condition ensures that the approximation error is asymptotically negligible relative to the stochastic variation of the empirical process.
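
To make the condition concrete, here is a minimal Python sketch of a compliant grid-size rule, assuming a $\sqrt{n}$-estimable function so that $r_n = n$; the logarithmic slack factor is our illustrative choice, not a recommendation from the paper:

    import math

    def grid_size(n):
        # Assumed setting: r_n = n (a root-n estimable function), so the
        # condition L_n = omega(r_n^{1/4}) asks that L_n grow strictly
        # faster than n^{1/4}.  Any diverging slack factor works; log n
        # is an illustrative choice.
        return math.ceil(n ** 0.25 * math.log(n))

    # The ratio L_n / n^{1/4} diverges, so the grid-rate condition holds:
    for n in (10**3, 10**4, 10**5):
        print(n, grid_size(n), grid_size(n) / n ** 0.25)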


[2] 2605.11157

The Price of Proportional Representation in Temporal Voting

We study proportional representation in the temporal voting model, where collective decisions are made repeatedly over a fixed time horizon. Prior work has extensively investigated how proportional representation axioms from multiwinner voting (e.g., justified representation (JR) and its variants) can be adapted, satisfied, and verified in this setting. However, much less is understood about their interaction with social welfare. In this work, we quantify the efficiency cost of enforcing proportionality. We formalize the welfare-proportionality tension via the worst-case ratio between the maximum achievable utilitarian welfare and the maximum welfare attainable subject to a proportionality axiom. We show that imposing proportional representation in the temporal setting can incur a growing, yet sublinear, welfare loss as the number of voters or rounds increases. We further identify a clean separation among axioms: for JR, the welfare loss diminishes as the time horizon grows and vanishes asymptotically, whereas for stronger axioms this conflict persists even with many rounds. Moreover, we prove that welfare maximization under each axiom is NP-hard and APX-hard, even under static preferences and bounded-degree approvals, and provide fixed-parameter tractable algorithms under several natural structural parameters.
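
On small instances, the worst-case ratio can be computed by brute force. A minimal Python sketch with the proportionality axiom supplied as a black-box predicate (the instance encoding and function names are ours, not the paper's):

    from itertools import product

    def price_of_axiom(approvals, n_candidates, satisfies_axiom):
        """approvals[v][t]: set of candidates voter v approves in round t.
        Returns (max welfare) / (max welfare over axiom-satisfying outcomes),
        assuming at least one outcome satisfies the axiom."""
        n_rounds = len(approvals[0])
        best, best_fair = 0, 0
        for outcome in product(range(n_candidates), repeat=n_rounds):
            welfare = sum(outcome[t] in A_v[t]
                          for A_v in approvals for t in range(n_rounds))
            best = max(best, welfare)
            if satisfies_axiom(outcome, approvals):
                best_fair = max(best_fair, welfare)
        return best / best_fair

Plugging in a JR-style predicate for satisfies_axiom yields the ratio studied above on any fixed instance; the exhaustive enumeration is purely illustrative, consistent with the paper's hardness results.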


[3] 2605.11180

The Value of Information: A Puzzle

We show that under mild assumptions, the total value of information to informed traders in the market can be measured by the covariance between price changes and order flow. This covariance captures noise trader losses, which equal informed trader gains when market making is competitive. We estimate the value of information, using high-frequency data on US equities, at about $3.5 million per year for the average stock. The aggregate value of information is about 0.04% of market cap, which is considerably lower than the 0.67% in fees investors pay each year searching for superior returns (French 2008). We discuss potential resolutions for these puzzling findings.
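
A minimal sketch of the headline measurement, assuming prices and signed order flow have already been constructed at a common sampling frequency (trade signing, aggregation across stocks, and dollar scaling follow choices made in the paper and are not shown):

    import numpy as np

    def info_value_per_period(price, signed_flow):
        """Sample covariance between price changes and contemporaneous signed
        order flow, the covariance the abstract identifies with the value of
        information to informed traders."""
        dp = np.diff(np.asarray(price, dtype=float))
        q = np.asarray(signed_flow, dtype=float)[1:]  # flow over the same intervals
        return np.cov(dp, q, ddof=1)[0, 1]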


[4] 2605.11350

Human-AI Productivity Paradoxes: Modeling the Interplay of Skill, Effort, and AI Assistance

Generative Artificial Intelligence (AI) tools are being rapidly adopted in the workplace and in education, yet the empirical evidence on AI's impact remains mixed. We propose a model of human-AI interaction to better understand and analyze several mechanisms by which AI affects productivity. In our setup, human agents with varying skill levels exert utility-maximizing effort to produce task outcomes with AI assistance. We find that incorporating endogeneity in either skill development or AI unreliability can induce a productivity paradox: increased levels of AI assistance may degrade productivity, leading to potentially significant shortfalls. Moreover, we examine the long-term distributional effect of AI on skill, and demonstrate that skill polarization can emerge in steady state when accounting for heterogeneity in AI literacy -- the agent's capability to identify and adapt to inaccurate AI outputs. Our results elucidate several mechanisms that may explain the emergence of human-AI productivity paradoxes and skill polarization, and identify simple measures that characterize when they arise.


[5] 2605.11736

Approximate Strategyproofness in Approval-based Budget Division

In approval-based budget division, the task is to allocate a divisible resource to the candidates based on the voters' approval preferences over the candidates. For this setting, Brandl et al. [2021] have shown that no distribution rule can be strategyproof, efficient, and fair at the same time. In this paper, we aim to circumvent this impossibility theorem by focusing on approximate strategyproofness. To this end, we analyze the incentive ratio of distribution rules, which quantifies the maximum multiplicative utility gain a voter can achieve by manipulating. While it turns out that several classical rules have a large incentive ratio, we prove that the Nash product rule ($\mathsf{NASH}$) has an incentive ratio of $2$, thereby demonstrating that we can bypass the impossibility of Brandl et al. by relaxing strategyproofness. Moreover, we show that an incentive ratio of $2$ is optimal subject to some of the fairness and efficiency properties of $\mathsf{NASH}$, and that the positive result for the Nash product rule even holds when voters may report arbitrary concave utility functions. Finally, we complement our results with an experimental analysis.
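
On small instances, the incentive ratio of the Nash product rule can be checked directly. A brute-force Python sketch over a discretized budget simplex (the grid approximation and helper names are ours; the paper's analysis is exact):

    from itertools import combinations
    import numpy as np

    def simplex_grid(m, k):
        """All integer compositions of k into m parts (budget grid, step 1/k)."""
        if m == 1:
            yield (k,)
            return
        for j in range(k + 1):
            for rest in simplex_grid(m - 1, k - j):
                yield (j,) + rest

    def nash_rule(approvals, m, k):
        """Grid approximation of NASH: maximise the sum of log-utilities,
        a voter's utility being the budget share on approved candidates."""
        best, best_val = None, -np.inf
        for point in simplex_grid(m, k):
            x = np.array(point) / k
            utils = np.array([x[list(A)].sum() for A in approvals])
            if utils.min() <= 0:
                continue
            val = np.log(utils).sum()
            if val > best_val:
                best, best_val = x, val
        return best

    def incentive_ratio(approvals, m, k=30):
        """Max multiplicative gain any voter obtains from misreporting."""
        truthful = nash_rule(approvals, m, k)
        ratio = 1.0
        for v, A_v in enumerate(approvals):
            u_true = truthful[list(A_v)].sum()
            for r in range(1, m + 1):
                for report in combinations(range(m), r):
                    x = nash_rule(approvals[:v] + [set(report)] + approvals[v + 1:], m, k)
                    ratio = max(ratio, x[list(A_v)].sum() / u_true)
        return ratio

    print(incentive_ratio([{0}, {0, 1}, {2}], m=3))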


[6] 2605.12094

Bayesian Persuasion with a Risk-Conscious Receiver

We study Bayesian persuasion when the receiver evaluates actions by reward-side Conditional Value-at-Risk (CVaR) rather than expected utility. CVaR preferences break the standard action-based direct-recommendation reduction: merging signals that recommend the same action can change the receiver's tail-risk ranking and destroy incentive compatibility. We show that this failure does not imply intractability in the explicit finite-state model. Each CVaR action value is max-affine in the posterior, and refining recommendations by the active affine piece yields an active-facet revelation principle and an exact polynomial-size linear program. We further identify a representation boundary: listed polyhedral risks remain tractable by the same LP, whereas succinctly represented facet families make exact persuasion NP-hard. Finally, we give a finite-precision approximation scheme for risk preferences determined by finitely many stable posterior statistics.
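
The max-affine structure is visible from the Rockafellar-Uryasev representation of lower-tail CVaR. A minimal Python sketch for a finite state space (notation ours):

    import numpy as np

    def cvar(posterior, rewards, alpha):
        """Lower-tail CVaR_alpha of an action's reward under `posterior`,
        via CVaR_alpha(X) = max_t [ t - (1/alpha) E[(t - X)^+] ].
        With finitely many states the maximum is attained at some reward
        level t = rewards[w]; each fixed t gives a function affine in the
        posterior, so the action value is max-affine in the posterior."""
        mu = np.asarray(posterior, dtype=float)
        r = np.asarray(rewards, dtype=float)
        return max(t - mu @ np.maximum(t - r, 0.0) / alpha for t in r)

Refining a recommendation by which affine piece (which $t$) is active is exactly the extra information that the active-facet revelation principle retains.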


[7] 2407.21198

Lattice operations for the pairwise stable set in many-to-many markets via re-equilibration dynamics

We compute the lattice operations for the (pairwise) stable set in many-to-many matching markets when only path independence of agents' choice functions is imposed. To do this, we first show that the sets of firm-quasi-stable and worker-quasi-stable many-to-many matchings form lattices. Then, we construct Tarski operators on these lattices whose fixed points coincide with the set of stable matchings, and show that iterating these operators from suitable quasi-stable matchings yields the lattice operations in the stable set. These operators resemble lay-off and vacancy-chain dynamics, respectively.


[8] 2409.17035

Scaling up to the cloud: Cloud technology use and growth rates in small and large firms

Recent empirical evidence shows that investments in ICT disproportionately improve the performance of larger firms versus smaller ones. However, not all ICTs are alike, as they differ in their impact on firms' organisational structure. We investigate the effect of the use of cloud services on the long-run size growth rate of French firms. We find that cloud services positively impact firms' growth rates, with smaller firms benefiting more than larger ones. Our findings suggest cloud technologies help reduce barriers to digitalisation, which especially affect smaller firms. By lowering these barriers, cloud adoption enhances scalability and unlocks untapped growth potential.


[9] 2410.07906

Structural Change, Employment, and Inequality in Europe: an Economic Complexity Approach

Structural change consists of industrial diversification towards more productive, knowledge-intensive activities. However, changes in the productive structure are inherently linked to job creation and income distribution. In this paper, we investigate the consequences of structural change, defined in terms of labour shifts towards more complex industries, for employment growth, wage inequality, and the functional distribution of income. The analysis is conducted for European countries using data on disaggregated industrial employment shares over the period 2010-2018. First, we identify patterns of industrial specialisation by validating a country-industry employment matrix using a bipartite weighted configuration model (BiWCM). Second, we introduce a country-level measure of labour-weighted Fitness, which can be decomposed so as to isolate a component that identifies the movement of labour towards more complex industries, which we define as structural change. Third, we link structural change to i) employment growth, ii) wage inequality, and iii) the labour share of the economy. The results indicate that our structural change measure is negatively associated with employment growth. However, it is also associated with lower income inequality. As countries move to more complex industries, they drop the least complex ones, so the (low-paid) jobs in the least complex sectors disappear. Finally, structural change predicts a higher labour share of the economy; however, this is likely due to rising salaries rather than to job creation.
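
For context, here is a Python sketch of the standard Fitness-Complexity iteration (Tacchella et al.) on which country-level Fitness measures are based; the paper's labour-weighted variant and its structural-change decomposition differ in details not shown here:

    import numpy as np

    def fitness_complexity(M, n_iter=200):
        """Standard Fitness-Complexity map on a binary country-industry
        matrix M (no empty rows or columns assumed):
            F_c <- sum_i M_ci Q_i,   Q_i <- 1 / sum_c M_ci (1 / F_c),
        each vector renormalised to unit mean at every step."""
        F = np.ones(M.shape[0])
        Q = np.ones(M.shape[1])
        for _ in range(n_iter):
            F_new = M @ Q
            Q_new = 1.0 / (M.T @ (1.0 / F))
            F = F_new / F_new.mean()
            Q = Q_new / Q_new.mean()
        return F, Q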


[10] 2504.01829

Revealed Bayesian Persuasion

How does one empirically test the hypothesis that a decision maker (DM) is being influenced by information via Bayesian persuasion? In this paper, I consider a DM whose state-dependent preferences are known to an analyst, who sees the conditional distribution of choices given the state. I provide necessary and sufficient conditions for the dataset to be consistent with the DM being Bayesian persuaded by an unobserved sender who generates a distribution of signals to maximize the sender's ex-ante expected payoff. I thereby provide a tool for empirical work on information design.
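
One ingredient of such a test is easy to state in code: obedience of the posteriors induced by the observed choices. A Python sketch of this necessary condition (the paper's full characterization also involves sender optimality, which is not captured here):

    import numpy as np

    def obedient(prior, choice_given_state, utility, tol=1e-9):
        """Treating each chosen action as a signal, the Bayes posterior it
        induces must make that action optimal for the DM.
        prior: (S,); choice_given_state: (S, A), rows P(a | s);
        utility: (A, S), the DM's known state-dependent payoffs."""
        joint = np.asarray(prior)[:, None] * np.asarray(choice_given_state)
        for a in range(joint.shape[1]):
            mass = joint[:, a].sum()
            if mass == 0:
                continue                       # action never chosen
            posterior = joint[:, a] / mass
            values = utility @ posterior       # expected payoff of each action
            if values[a] < values.max() - tol:
                return False                   # DM would deviate from a
        return True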


[11] 2508.12471

Do High-Premium Fields Buffer Labor Market Shocks? Evidence from India

Do high-return fields of study provide greater protection in the labor market during crises? I construct pre-pandemic premia for major technical fields in India and examine whether workers in higher-premium fields experienced more resilient labor market outcomes during COVID-19. Using a difference-in-differences design with continuous treatment, I find that field-premium advantages did not emerge immediately at the onset of the pandemic but emerged through gradual adjustment during later phases.


[12] 2509.06697

Neural ARFIMA model for forecasting BRIC exchange rates with long memory

Accurate forecasting of exchange rates remains a persistent challenge, particularly for emerging economies such as Brazil, Russia, India, and China (BRIC). These series exhibit long memory and nonlinearity that conventional time series models struggle to capture. Exchange rate dynamics are further influenced by several key drivers, including global economic policy uncertainty, US equity market volatility, US monetary policy uncertainty, oil price growth rates, and short-term interest rates. These empirical complexities underscore the need for a flexible framework that can jointly accommodate long memory, nonlinearity, and the influence of external drivers. We propose a Neural AutoRegressive Fractionally Integrated Moving Average (NARFIMA) model that combines the long-memory structure of ARFIMA with the nonlinear learning capability of neural networks while incorporating exogenous variables. We establish asymptotic stationarity of NARFIMA and quantify forecast uncertainty using conformal prediction intervals. Empirical results show that NARFIMA consistently outperforms benchmark methods in forecasting BRIC exchange rates.
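
The long-memory backbone of such a model is the fractional differencing filter $(1-L)^d$. A minimal Python sketch of the filter (the neural component and the exogenous drivers of NARFIMA are beyond this snippet):

    import numpy as np

    def frac_diff_weights(d, n):
        """Weights of (1 - L)^d: w_0 = 1, w_k = w_{k-1} (k - 1 - d) / k."""
        w = np.empty(n)
        w[0] = 1.0
        for k in range(1, n):
            w[k] = w[k - 1] * (k - 1 - d) / k
        return w

    def frac_diff(x, d):
        """Fractionally differenced series sum_k w_k x_{t-k} (expanding window)."""
        x = np.asarray(x, dtype=float)
        w = frac_diff_weights(d, len(x))
        return np.array([w[:t + 1][::-1] @ x[:t + 1] for t in range(len(x))])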


[13] 2510.14285

Debiased Kernel Estimation of Spot Volatility in the Presence of Infinite Variation Jumps

Volatility estimation is a central problem in financial econometrics, but becomes particularly challenging when jump activity is high, a phenomenon observed empirically in highly traded financial securities. In this paper, we revisit the problem of spot volatility estimation for an Itô semimartingale with jumps of unbounded variation. We construct truncated kernel-based estimators and debiased variants that extend rate-optimal spot volatility estimation to a wider range of jump activity indices, from the previously available bound $Y<4/3$ to $Y<20/11$. Rate-suboptimal CLTs are also established for $Y>20/11$. Compared with earlier work, our approach achieves smaller asymptotic variances through the use of more general kernels and an optimal choice for the bandwidth convergence rate, and also has broader applicability under more flexible model assumptions. A comprehensive simulation study confirms that our procedures outperform competing methods in finite samples.
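
The baseline object being refined is the truncated kernel estimator of spot variance. A minimal Python sketch with an Epanechnikov kernel (the paper's debiased variants, more general kernels, and the tuned bandwidth and truncation rates that deliver the stated results are not reproduced here):

    import numpy as np

    def spot_vol(X, t_obs, t, h, u):
        """sigma^2(t) estimate: (1/h) sum_i K((t_i - t)/h) (dX_i)^2 1{|dX_i| <= u}.
        X: log-prices observed at times t_obs; h: bandwidth; u: truncation
        level, typically of order Delta_n^varpi for some varpi in (0, 1/2),
        which discards large increments attributable to jumps."""
        dX = np.diff(np.asarray(X, dtype=float))
        s = (np.asarray(t_obs)[:-1] - t) / h
        K = np.where(np.abs(s) <= 1.0, 0.75 * (1.0 - s**2), 0.0) / h
        return float(np.sum(K * dX**2 * (np.abs(dX) <= u)))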


[14] 2512.06946

Testing the Significance of the Difference-in-Differences Coefficient via Doubly Randomised Inference

This article develops a significance test for the Difference-in-Differences (DiD) estimator based on dual-margin randomisation, in which both the treatment and time indicators are independently permuted to generate an empirical null distribution of the DiD estimator. We situate the proposal explicitly within the landscape of existing inference methods for the DiD estimator, including OLS-based $t$-tests, heteroskedasticity-robust standard errors, cluster-robust variance estimators (CRVE), and the recently proposed jackknife standard errors of Hansen (2025). We show that CRVE-based procedures can be severely anti-conservative in small samples, motivating a nonparametric alternative. We formally characterise the permutation space induced by dual randomisation, showing that it expands by a factor of $\binom{n}{n_T}$ relative to single-margin permutation tests, and provide an information-theoretic justification for balanced Bernoulli reshuffling. A controlled simulation study, augmented with robustness experiments under non-Gaussian and heteroskedastic errors, demonstrates that the doubly randomised test maintains accurate empirical size at all sample sizes considered, while HC0 and CRVE1 $t$-tests are substantially anti-conservative at small $n$. Crucially, this size inflation of the parametric tests is driven by the leverage structure of the regressor matrix rather than by the error variance: heteroskedasticity-robust standard errors do not directly address the leverage-driven finite-sample distortion documented here, whereas randomisation-based inference is insulated from both error-distributional and variance-structural departures by construction. Power costs relative to the Hansen jackknife test are real but bounded, and become negligible as $n$ grows. The proposed procedure is implemented in the sigDD R package and validated on four empirical datasets from the applied economics literature.
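
The core of the procedure is straightforward to sketch; here is an illustrative Python version for the 2x2 cross-sectional case (the sigDD R package implements the paper's exact procedure; the helper names are ours):

    import numpy as np

    def did_stat(y, treat, post):
        """2x2 DiD: (mean_T,post - mean_T,pre) - (mean_C,post - mean_C,pre)."""
        m = lambda a, b: y[(treat == a) & (post == b)].mean()
        return (m(1, 1) - m(1, 0)) - (m(0, 1) - m(0, 0))

    def doubly_randomised_pvalue(y, treat, post, n_perm=4999, seed=0):
        """Empirical two-sided p-value from independently permuting the
        treatment and time indicators (assumes all four cells remain
        non-empty under permutation)."""
        rng = np.random.default_rng(seed)
        obs = did_stat(y, treat, post)
        null = np.array([did_stat(y, rng.permutation(treat), rng.permutation(post))
                         for _ in range(n_perm)])
        return (1 + np.sum(np.abs(null) >= abs(obs))) / (1 + n_perm)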


[15] 2601.02964

How Many Mechanisms? Measuring Parsimony in Risky Choice

Behavioral theories rest on parsimony: a small number of mechanisms organizing many decisions. We define a Maximum Rule Concentration Index that measures how parsimoniously a dataset of risky choices can be organized through a library of simple, parameter-free decision rules drawn from canonical behavioral theories: salience, regret, disappointment, modal-payoff focusing, extreme-outcome screening, and limited attention. Applied to three lottery-choice datasets, the index reveals detectable parsimony: for a majority of subjects, observed concentration exceeds what standard utility models generate on the same menus. The concentration is organized around salience thinking, modal-payoff focusing, and regret.


[16] 2605.09642

From Expansion to Consolidation: Socio-Spatial Contagion Dynamics in Off-Grid PV Adoption

In traditional rural societies, where social ties are embedded in physical space, the diffusion of emerging technologies may be amplified through socio-spatial contagion (SSC). Such processes may play a key role in accelerating residential PV adoption in off-grid regions. Yet empirical evidence on SSC in PV adoption remains largely limited to affluent, grid-connected settings, while off-grid regions often lack systematic installation records. To address these gaps, we use a deep learning segmentation model to extract PV installations from a decade-long series of remote sensing imagery across 507 off-grid settlement clusters (hereafter, communities). This enables data-driven spatio-temporal point pattern inference of SSC in data-scarce contexts. SSC is quantified through the range and intensity of clustering of new installations around prior adopters, and the dynamics of these dimensions are linked to adoption outcomes. We find that SSC is nearly ubiquitous, often spanning most of a community's spatial extent, while exhibiting substantial heterogeneity in intensity. Although SSC intensifies over time, its effects remain temporally concentrated, peaking within 1 to 2 years of nearby installations and weakening thereafter. SSC intensity is positively associated with adoption rates in both cross-sectional and temporal analyses. However, the relationship between SSC range and adoption changes over time: in early diffusion phases, adoption growth is associated with range expansion, whereas in later phases it is associated with range contraction. This shift reflects a transition from clustering to consolidation of installations. These findings highlight the potential of seeding interventions to accelerate PV diffusion in off-grid regions.
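
A stripped-down Python version of the clustering summary, assuming coordinates of prior adopters, new installations, and all candidate structures are available (the paper's inference relies on spatio-temporal point-pattern models; this Monte Carlo contrast is only a rough illustration of the intensity dimension):

    import numpy as np

    def ssc_intensity(prior_xy, new_xy, candidate_xy, radius, n_sim=999, seed=0):
        """Share of new installations within `radius` of a prior adopter,
        against a null of random placement over candidate structures."""
        rng = np.random.default_rng(seed)
        prior = np.asarray(prior_xy)
        cand = np.asarray(candidate_xy)
        def share_near(pts):
            d = np.linalg.norm(pts[:, None, :] - prior[None, :, :], axis=2)
            return (d.min(axis=1) <= radius).mean()
        observed = share_near(np.asarray(new_xy))
        null = [share_near(cand[rng.choice(len(cand), len(new_xy), replace=False)])
                for _ in range(n_sim)]
        p_value = (1 + sum(s >= observed for s in null)) / (1 + n_sim)
        return observed, p_value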