New articles on Economics


[1] 2602.09189

Affirmative Action in India with Horizontal Reservations

India implements the world's most complex affirmative action program through vertical and horizontal reservations. Although applicants can belong to at most one vertical category, they can qualify for multiple horizontal reservation categories simultaneously. We examine resource allocation problems in India, where horizontal reservations follow a hierarchical structure within a one-to-all horizontal matching framework. We introduce the hierarchical choice rule and show that it selects the most meritorious set of applicants. We thoroughly analyze the properties of the aggregate choice rule, which comprises hierarchical choice rules across all vertical categories. We show that the generalized deferred acceptance mechanism, when coupled with this aggregate choice rule, is the unique stable and strategy-proof mechanism that eliminates justified envy.


[2] 2602.09237

Sign-Dependent Spillovers of Global Monetary Policy

This paper examines the sign-dependent international spillovers of Federal Reserve and European Central Bank monetary policy shocks. Using a consistent high-frequency identification of pure monetary policy shocks across 44 advanced and non-advanced economies and the methodology of Caravello and Martinez-Bruera (2024), we document strong asymmetries in international transmission. Linear specifications mask these effects: contractionary shocks generate large and significant deteriorations in financial conditions, economic activity, and international trade abroad, while expansionary shocks yield little to no measurable improvement. Our results are robust across samples, identification strategies, and the framework proposed by Ben Zeev et al. (2023).


[3] 2602.09362

Behavioral Economics of AI: LLM Biases and Corrections

Do generative AI models, particularly large language models (LLMs), exhibit systematic behavioral biases in economic and financial decisions? If so, how can these biases be mitigated? Drawing on the cognitive psychology and experimental economics literatures, we conduct the most comprehensive set of experiments to date (originally designed to document human biases) on prominent LLM families across model versions and scales. We document systematic patterns in LLM behavior. In preference-based tasks, responses become more human-like as models become more advanced or larger, while in belief-based tasks, advanced large-scale models frequently generate rational responses. Prompting LLMs to make rational decisions reduces biases.


[4] 2602.09382

Initial-Condition-Robust Inference in Autoregressive Models

This paper considers confidence intervals (CIs) for the autoregressive (AR) parameter in an AR model whose AR parameter may be close to or equal to one. Existing CIs rely on the assumption of a stationary or fixed initial condition to obtain correct asymptotic coverage and good finite sample coverage. When this assumption fails, their coverage can be quite poor. In this paper, we introduce a new CI for the AR parameter whose coverage probability is completely robust to the initial condition, both asymptotically and in finite samples. This CI pays only a small price in terms of its length when the initial condition is stationary or fixed. The new CI is also robust to conditional heteroskedasticity of the errors.
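
A toy Monte Carlo sketch of the coverage issue described above (not the paper's CI): it checks how the finite-sample coverage of a textbook normal-approximation CI for the AR(1) coefficient varies with the initial condition. The DGP, sample size, and parameter values are illustrative assumptions.

import numpy as np

def ar1_ci_covers(rho, T, y0, rng, z=1.96):
    # simulate y_t = rho * y_{t-1} + eps_t from the given initial condition
    eps = rng.standard_normal(T)
    y = np.empty(T + 1)
    y[0] = y0
    for t in range(T):
        y[t + 1] = rho * y[t] + eps[t]
    x, ynext = y[:-1], y[1:]
    rho_hat = (x @ ynext) / (x @ x)                     # OLS estimate of rho
    resid = ynext - rho_hat * x
    se = np.sqrt((resid @ resid) / (T - 1) / (x @ x))   # conventional standard error
    return abs(rho_hat - rho) <= z * se                 # does the nominal 95% CI cover?

rng = np.random.default_rng(0)
rho, T, reps = 0.98, 100, 5000
for y0 in (0.0, 20.0):                                  # small vs. large fixed initial condition
    coverage = np.mean([ar1_ci_covers(rho, T, y0, rng) for _ in range(reps)])
    print(f"y0 = {y0:5.1f}: empirical coverage of the nominal 95% CI = {coverage:.3f}")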


[5] 2602.09406

Selective Disclosure in Overlapping Generations

We develop an overlapping generations model where each agent observes a verifiable private signal about the state and, with positive probability, also receives signals disclosed by his predecessor. The agent then takes an action and decides which signals to pass on. Each agent's action has a positive externality on his predecessor and his optimal action increases in his belief about the state. We show that as the communication friction vanishes, agents become increasingly selective in disclosing information. As the probability that messages reach the next generation approaches one, all signals except those with the highest likelihood ratio will be concealed in equilibrium.


[6] 2602.09490

Robust Trust

An agent chooses an action using her private information combined with recommendations from an informed but potentially misaligned adviser. With a known alignment probability, the adviser reports his signal truthfully; with remaining probability, the adviser can send an arbitrary message. We characterize the decision rule that maximizes the agent's worst-case expected payoff. Every optimal rule admits a trust region representation in belief space: advice is taken at face value when it induces a posterior within the trust region; otherwise, the agent acts as if the posterior were on the trust region's boundary. We derive thresholds on the alignment probability above which the adviser's presence strictly benefits the agent and fully characterize the solution in binary-state as well as binary-action environments.


[7] 2602.09728

Competitive Credit and Present Bias: A Stochastic Discounting Approach

A prominent theme in behavioural contract theory is the study of present-biased agents represented through quasi-hyperbolic discounting. In a model of competitive credit provision, we study an alternative to this framework in which the agent has a private stochastic discount factor and may overestimate the likelihood of more patient values. Agent preferences, however, are time-consistent. While a limiting case of our model corresponds to a "fully naive" agent in work on quasi-hyperbolic discounting, another is the case where the agent has correct beliefs about future discounting. In equilibrium, the agent selects options with earlier consumption when less patient discount factors are realised, but is penalised by receiving worse terms. Our model thus accounts for an important feature of equilibrium contracts identified in Heidhues and Kőszegi (2010). Unlike Heidhues and Kőszegi, our framework often predicts excessively backloaded consumption, including when the agent holds correct beliefs about future discounting.


[8] 2602.09967

Incentive Pareto Efficiency in Monopoly Insurance Markets with Adverse Selection

We study a monopolistic insurance market with hidden information, where the agent's type $\theta$ is private information that is unobservable to the insurer, and it is drawn from a continuum of types. The hidden type affects both the loss distribution and the risk attitude of the agent. Within this framework, we show that a menu of contracts is incentive efficient if and only if it maximizes social welfare, subject to incentive compatibility and individual rationality constraints. This equivalence holds for general concave utility functionals. In the special case of Yaari Dual Utility, we provide a semi-explicit characterization of optimal incentive-efficient menus of contracts. We do this under two different settings: (i) the first assumes that types are ordered in a way such that larger values of $\theta$ correspond to more risk-averse types who face stochastically larger losses; whereas (ii) the second assumes that larger values of $\theta$ correspond to less risk-averse types who face stochastically larger losses. In both settings, the structure of optimal incentive-efficient menus of contracts depends on the level of the social welfare weight. Moreover, at the optimum, higher types receive greater coverage in exchange for higher premia. Additionally, optimal menus leave the lowest type indifferent, with the insurer absorbing all surplus from the lowest type; and they exhibit efficiency at the top, that is, the highest type receives full coverage.


[9] 2602.09608

Designing a Token Economy: Incentives, Governance, and Tokenomics

In recent years, tokenomic systems (decentralized systems that use cryptographic tokens to represent value and rights) have evolved considerably. Growing complexity in incentive structures has expanded the applicability of blockchain beyond purely transactional use. Existing research predominantly examines token economies within specific use cases, proposes conceptual frameworks, or studies isolated aspects such as governance, incentive design, and tokenomics. However, the literature offers limited empirically grounded, end-to-end guidance that integrates these dimensions into a coherent, step-by-step design approach informed by concrete token-economy development efforts. To address this gap, this paper presents the Token Economy Design Method (TEDM), a design-science artifact that synthesizes stepwise design propositions for token-economy design across incentives, governance, and tokenomics. TEDM is derived through an iterative qualitative synthesis of prior contributions and refined through a co-designed case. The artifact is formatively evaluated via the Currynomics case study and additional expert interviews. Currynomics is an ecosystem that maintains the Redcurry stablecoin, using real estate as the underlying asset. TEDM is positioned as reusable design guidance that facilitates the analysis of foundational requirements of tokenized ecosystems. The distinctive feature of the proposed approach is its focus on the socio-technical context of the system and on the early stages of its design.


[10] 2602.09969

Causal Identification in Multi-Task Demand Learning with Confounding

We study a canonical multi-task demand learning problem motivated by retail pricing, in which a firm seeks to estimate heterogeneous linear price-response functions across a large collection of decision contexts. Each context is characterized by rich observable covariates yet typically exhibits only limited historical price variation, motivating the use of multi-task learning to borrow strength across tasks. A central challenge in this setting is endogeneity: historical prices are chosen by managers or algorithms and may be arbitrarily correlated with unobserved, task-level demand determinants. Under such confounding by latent fundamentals, commonly used approaches, such as pooled regression and meta-learning, fail to identify causal price effects. We propose a new estimation framework that achieves causal identification despite arbitrary dependence between prices and latent task structure. Our approach, Decision-Conditioned Masked-Outcome Meta-Learning (DCMOML), involves carefully designing the information set of a meta-learner to leverage cross-task heterogeneity while accounting for endogenous decision histories. Under a mild restriction on price adaptivity in each task, we establish that this method identifies the conditional mean of the task-specific causal parameters given the designed information set. Our results provide guarantees for large-scale demand estimation with endogenous prices and small per-task samples, offering a principled foundation for deploying causal, data-driven pricing models in operational environments.


[11] 2602.10053

The Architecture of Illusion: Network Opacity and Strategic Escalation

Standard models of bounded rationality typically assume agents either possess accurate knowledge of the population's reasoning abilities (Cognitive Hierarchy) or hold dogmatic, degenerate beliefs (Level-$k$). We introduce the ``Connected Minds'' model, which unifies these frameworks by integrating iterative reasoning with a parameterized network bias. We posit that agents do not observe the global population; rather, they observe a sample biased by their network position, governed by a locality parameter $p$ representing algorithmic ranking, social homophily, or information disclosure. We show that this parameter acts as a continuous bridge: the model collapses to the myopic Level-$k$ recursion as networks become opaque ($p \to 0$) and recovers the standard Cognitive Hierarchy model under full transparency ($p=1$). Theoretically, we establish that network opacity induces a \emph{Sophisticated Bias}, causing agents to systematically overestimate the cognitive depth of their opponents while preserving the log-concavity of belief distributions. This makes $p$ an actionable lever: a planner or platform can tune transparency -- globally or by segment (a personalized $p_k$) -- to shape equilibrium behavior. From a mechanism design perspective, we derive the \emph{Escalation Principle}: in games of strategic complements, restricting information can maximize aggregate effort by trapping agents in echo chambers where they compete against hallucinated, high-sophistication peers. Conversely, we identify a \emph{Transparency Reversal} for coordination games, where maximizing network visibility is required to minimize variance and stabilize outcomes. Our results suggest that network topology functions as a cognitive zoom lens, determining whether agents behave as local imitators or global optimizers.
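
A purely illustrative sketch of the interpolation described above: the snippet mixes Cognitive Hierarchy beliefs (weight p) with dogmatic Level-k beliefs (weight 1-p) in a p-beauty contest, so that beliefs collapse to Level-k as p goes to 0 and to Cognitive Hierarchy at p = 1. The mixture form, the Poisson rate, and the game are assumptions for exposition, not the paper's "Connected Minds" specification.

import numpy as np
from scipy.stats import poisson

def actions(p_local, tau=1.5, r=2/3, K=10):
    # beauty contest: the target is r times the average guess on [0, 100]
    a = np.empty(K + 1)
    a[0] = 50.0                                     # level-0: mean of uniform guesses
    f = poisson.pmf(np.arange(K + 1), tau)          # Cognitive Hierarchy level frequencies
    for k in range(1, K + 1):
        ch = f[:k] / f[:k].sum()                    # CH: truncated beliefs over levels < k
        lk = np.zeros(k); lk[-1] = 1.0              # Level-k: all mass on level k-1
        belief = p_local * ch + (1 - p_local) * lk  # assumed interpolation via locality p
        a[k] = r * (belief @ a[:k])                 # best response to the believed average
    return a

for p_local in (0.0, 0.5, 1.0):
    print(p_local, np.round(actions(p_local)[1:6], 2))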


[12] 2111.12799

The Macroeconomic Effects of Corporate Tax Reforms

Using aggregate, sectoral, and firm-level data, this paper examines the effects of two major U.S. corporate tax cuts. The Tax Cuts and Jobs Act (TCJA-17) led to large shareholder payouts but modest aggregate stimulus, while Kennedy's 1960s tax cuts stimulated output and investment with minimal payout impact. To explain this divergence, I incorporate tax depreciation policy and a pass-through business sector into a neoclassical growth model. The model suggests that accelerated depreciation and a large pass-through share dampen stimulus from corporate tax rate reductions, and that Kennedy's cuts boosted output four times more per dollar of lost revenue than the TCJA-17.


[13] 2205.11684

Desirable Rankings

We study the problem of aggregating individual preferences over alternatives into a collective ranking. A distinctive feature of our setting is that agents are matched to alternatives. Applications include rankings of colleges or academic journals. The foundation of our approach is that alternatives agents desire -- that is, those they rank above their match -- should also be ranked higher socially. We introduce axioms to formalize this idea and call rankings that satisfy them desirable. We develop an algorithm to construct desirable rankings and prove that, as the market becomes large, desirable rankings converge to the true underlying ranking of the alternatives by quality. We support this convergence result through simulations and demonstrate the practical usefulness of our approach by ranking Chilean medical programs with data from their centralized admission system. Finally, we show that our approach outperforms two benchmarks: revealed-preference rankings and Borda counts.


[14] 2406.01398

Local non-bossiness

The student-optimal stable mechanism (DA), the most popular mechanism in school choice, is the only one that is stable and strategy-proof. However, when DA is implemented, a student can change the schools of others without changing her own. We show that this drawback is limited: a student cannot change her schoolmates while remaining at the same school. We refer to this new property as local non-bossiness and use it to provide a new characterization of DA that does not rely on stability. Furthermore, we show that local non-bossiness plays a crucial role in providing incentives to be truthful when students have preferences over their colleagues. As long as students first consider the school to which they are assigned and then their schoolmates, DA induces the only stable and strategy-proof mechanism. There is limited room to expand this preference domain without compromising the existence of a stable and strategy-proof mechanism.


[15] 2502.09569

Statistical Equilibrium of Optimistic Beliefs

We study finite normal-form games in which payoffs are subject to random perturbations and players face uncertainty about how these shocks co-move across actions, an ambiguity that naturally arises when only realized (not counterfactual) payoffs are observed. We introduce the Statistical Equilibrium of Optimistic Beliefs (SE-OB), inspired by discrete choice theory. We model players as \textit{optimistic better responders}: they face ambiguity about the dependence structure (copula) of payoff perturbations across actions and resolve this ambiguity by selecting, from a belief set, the joint distribution that maximizes the expected value of the best perturbed payoff. Given this optimistic belief, players choose actions according to the induced random-utility choice rule. We define SE-OB as a fixed point of this two-step response mapping. SE-OB generalizes the Nash equilibrium and the structural quantal response equilibrium. We establish existence under standard regularity conditions on belief sets. For the economically important class of marginal belief sets, that is, the set of all joint distributions with fixed action-wise marginals, optimistic belief selection reduces to an optimal coupling problem, and SE-OB admits a characterization via Nash equilibrium of a smooth regularized game, yielding tractability and enabling computation. We characterize the relationship between SE-OB and existing equilibrium notions and illustrate its empirical relevance in simulations, where it captures systematic violations of independence of irrelevant alternatives that standard logit-based models fail to explain.
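
To make the special case concrete, here is a toy fixed-point computation of the logit (structural) quantal response equilibrium, which corresponds to SE-OB with independent Gumbel perturbations and no optimism over the copula. The game (a symmetric prisoner's dilemma) and the precision parameter are illustrative choices, not taken from the paper.

import numpy as np

A = np.array([[3.0, 0.0], [5.0, 1.0]])   # row player's payoffs; symmetric game

def logit(u, lam):
    z = np.exp(lam * (u - u.max()))       # logit choice probabilities with precision lam
    return z / z.sum()

def qre(lam, iters=500):
    p = np.full(2, 0.5)                   # row player's mixed strategy
    q = np.full(2, 0.5)                   # column player's mixed strategy
    for _ in range(iters):
        p = logit(A @ q, lam)             # row's expected payoffs against q
        q = logit(A @ p, lam)             # by symmetry, column's expected payoffs against p
    return p, q

for lam in (0.1, 1.0, 10.0):
    p, q = qre(lam)
    print(f"lambda={lam:5.1f}  row={np.round(p, 3)}  col={np.round(q, 3)}")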


[16] 2505.05341

Robust Learning with Private Information

Firms increasingly delegate decisions to learning algorithms in platform markets. Standard algorithms perform well when platform policies are stationary, but firms often face ambiguity about whether policies are stationary or adapt strategically to their behavior. When policies adapt, efficient learning under stationarity may backfire: it may reveal a firm's persistent private information, allowing the platform to personalize terms and extract information rents. We study a repeated screening problem in which an agent with a fixed private type commits ex ante to a learning algorithm, facing ambiguity about the principal's policy. We show that a broad class of standard algorithms, including all no-external-regret algorithms, can be manipulated by adaptive principals and permit asymptotic full surplus extraction. We then construct a misspecification-robust learning algorithm that treats stationarity as a testable hypothesis. It achieves the optimal payoff under stationarity at the minimax-optimal rate, while preventing dynamic rent extraction: against any adaptive principal, each type's long-run utility is at least its utility under the menu that maximizes revenue under the principal's prior.


[17] 2505.05603

Nonparametric Testability of Slutsky Symmetry

Economic theory implies strong limitations on what types of consumption behavior are considered rational. Rationality implies that the Slutsky matrix, which captures the substitution effects of compensated price changes on demand for different goods, is symmetric and negative semi-definite. While empirically informed versions of negative semi-definiteness have been shown to be nonparametrically testable, the analogous question for Slutsky symmetry has remained open. Recently, it has even been shown that the symmetry condition is not testable via the average Slutsky matrix, prompting conjectures about its non-testability. We settle this question by deriving nonparametric conditional quantile restrictions on observable data that constitute a testable implication of Slutsky symmetry in an empirical setting with individual heterogeneity and endogeneity. The theoretical contribution is a multivariate generalization of identification results for partial effects in nonseparable models without monotonicity, which is of independent interest. This result has implications for different areas in econometric theory, including nonparametric welfare analysis with individual heterogeneity for which, in the case of more than two goods, the symmetry condition introduces nonlinear correction factors.
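
For reference, in standard textbook notation (not necessarily the paper's), the Slutsky matrix of a demand system $x(p, w)$ with price vector $p$ and wealth $w$, together with the two rationality restrictions mentioned above, is:

\[
  S_{ij}(p, w) \;=\; \frac{\partial x_i(p, w)}{\partial p_j}
  \;+\; x_j(p, w)\,\frac{\partial x_i(p, w)}{\partial w},
  \qquad
  S_{ij}(p, w) = S_{ji}(p, w) \ \text{for all } i, j,
  \qquad
  v^{\top} S(p, w)\, v \le 0 \ \text{for all } v.
\]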


[18] 2510.12049

Generative AI and Firm Productivity: Field Experiments in Online Retail

We quantify the impact of Generative Artificial Intelligence (GenAI) on firm productivity through a series of large-scale randomized field experiments involving millions of users and products at a leading cross-border online retail platform. Over six months in 2023-2024, GenAI-based enhancements were integrated into seven consumer-facing business workflows. We find that GenAI adoption significantly increases sales, with treatment effects ranging from $0\%$ to $16.3\%$, depending on GenAI's marginal contribution relative to existing firm practices. Because inputs and prices were held constant across experimental arms, these gains map directly into total factor productivity improvements. Across the four GenAI applications with positive sales effects, the implied annual incremental value is approximately $\$5$ per consumer, an economically meaningful impact given the retailer's scale and the early stage of GenAI adoption. The primary mechanism operates through higher conversion rates, consistent with GenAI reducing frictions and improving consumer experience. Importantly, these effects are not associated with worse post-purchase outcomes, as product return rates and customer ratings do not deteriorate. Finally, we document substantial demand-side heterogeneity, with larger gains for less experienced consumers. Our findings provide novel, large-scale causal evidence on the productivity effects of GenAI in online retail, highlighting both its immediate value and broader potential.


[19] 2512.24968

The Impact of LLMs on Online News Consumption and Production

Large language models (LLMs) change how consumers acquire information online; their bots also crawl news publishers' websites for training data and to answer consumer queries; and they provide tools that can lower the cost of content creation. These changes lead to predictions of adverse impact on news publishers in the form of lowered consumer demand, reduced demand for newsroom employees, and an increase in news "slop." Consequently, some publishers strategically responded by blocking LLM access to their websites using the robots.txt file standard. Using high-frequency granular data, we document four effects related to the predicted shifts in news publishing following the introduction of generative AI (GenAI). First, we find a moderate decline in traffic to news publishers occurring after August 2024. Second, using a difference-in-differences approach, we find that blocking GenAI bots appears to be associated with a reduction in total website traffic to large publishers, relative to not blocking. Third, on the hiring side, we do not find evidence that LLMs are replacing editorial or content-production jobs yet. The share of new editorial and content-production job listings increases over time. Fourth, regarding content production, we find no evidence that large publishers increased text volume; instead, they significantly increased rich content and use more advertising and targeting technologies. Together, these findings provide early evidence of some unforeseen impacts of the introduction of LLMs on news production and consumption.


[20] 2601.01421

A multi-self model of self-punishment

We investigate the choice of a decision maker (DM) who harms herself by maximizing in each menu some distortion of her true preference, in which the first i alternatives are moved, in reverse order, to the bottom. This pattern has no empirical power, but it allows us to define a degree of self-punishment, which measures the extent of the denial of pleasure adopted by the DM. We characterize irrational choices displaying the lowest degree of self-punishment, and we fully identify the preferences that explain the DM's picks by a minimal denial of pleasure. These choice data account for some well-known selection biases, such as second-best procedures and handicapped avoidance. Necessary and sufficient conditions for the estimation of the degree of self-punishment of a choice are singled out. Moreover, the linear orders whose harmful distortions justify choice data are partially elicited. Finally, we offer a simple characterization of the choice behavior that exhibits the highest degree of self-punishment, and we show that this subclass comprises almost all choices.


[21] 2601.07752

A Unified Framework for Debiased Machine Learning: Riesz Representer Fitting under Bregman Divergence

Estimating the Riesz representer is central to debiased machine learning for causal and structural parameter estimation. We propose generalized Riesz regression, a unified framework for estimating the Riesz representer by fitting a representer model via Bregman divergence minimization. This framework includes various divergences as special cases, such as the squared distance and the Kullback--Leibler (KL) divergence, where the former recovers Riesz regression and the latter recovers tailored loss minimization. Under suitable pairs of divergence and model specifications (link functions), the dual problems of the Riesz representer fitting problem correspond to covariate balancing, which we call automatic covariate balancing. Moreover, under the same specifications, the sample average of outcomes weighted by the estimated Riesz representer satisfies Neyman orthogonality even without estimating the regression function, a property we call automatic Neyman orthogonalization. This property not only reduces the estimation error of Neyman orthogonal scores but also clarifies a key distinction between debiased machine learning and targeted maximum likelihood estimation (TMLE). Our framework can also be viewed as a generalization of density ratio fitting under Bregman divergences to Riesz representer estimation, and it applies beyond density ratio estimation. We provide convergence analyses for both reproducing kernel Hilbert space (RKHS) and neural network model classes. A Python package for generalized Riesz regression is released as genriesz and is available at this https URL.
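
A toy illustration of the squared-distance special case (a hand-rolled sketch, not the genriesz package API): it fits a linear Riesz representer model for the average treatment effect functional $m(W; g) = g(1, X) - g(0, X)$ by minimizing the empirical loss $E_n[\alpha(W)^2 - 2\, m(W; \alpha)]$. The data-generating process and basis are assumptions for exposition.

import numpy as np

rng = np.random.default_rng(0)
n = 5000
X = rng.standard_normal((n, 2))
e = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))   # propensity score
D = rng.binomial(1, e)                                   # treatment indicator

def basis(d, x):
    # simple feature map b(d, x); richer bases or neural nets fit the same loss
    return np.column_stack([np.ones(len(x)), d, x, d[:, None] * x])

b = basis(D, X)
b1, b0 = basis(np.ones(n), X), basis(np.zeros(n), X)
# minimize E_n[ alpha(W)^2 - 2 (alpha(1, X) - alpha(0, X)) ] over alpha = b @ theta
theta = np.linalg.solve(b.T @ b / n, (b1 - b0).mean(axis=0))
alpha_hat = b @ theta
alpha_true = D / e - (1 - D) / (1 - e)                   # known ATE representer
print("corr(alpha_hat, alpha_true) =",
      round(float(np.corrcoef(alpha_hat, alpha_true)[0, 1]), 3))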


[22] 2602.08955

Platform Design, Earnings Transparency and Minimum Wage Policies: Evidence from A Natural Experiment on Lyft

We study the effects of a significant design and policy change at a major ridesharing platform that altered both provider earnings and platform transparency. We examine how the change affected outcomes for drivers, riders, and the platform, and we draw managerial insights on balancing competing stakeholder interests while avoiding unintended consequences. In February 2024, Lyft introduced a policy guaranteeing drivers a minimum fraction of rider payments while increasing per-ride earnings transparency. The staggered rollout, first in major markets, created a natural experiment to examine how earnings guarantees and transparency affect ride availability and driver engagement. Using trip-level data from over 47 million rides across a major market and adjacent markets over six months, we apply dynamic staggered difference-in-differences models combined with a geographic border strategy to estimate causal effects on supply, demand, ride production, and platform performance. We find that the policy led to substantial increases in driver engagement, with distinct effects from the guarantee and transparency. Drivers increased working hours and utilization, resulting in more completed trips and higher per-hour and per-trip earnings, with stronger effects among drivers with lower pre-policy earnings and greater income uncertainty. Increased supply also generated positive spillovers on demand. We also find evidence that greater transparency may induce strategic driver behavior. In ongoing work, we develop a counterfactual simulation framework linking driver supply and rider intents to ride production, illustrating how small changes in driver choices could further amplify policy effects. Our study shows how platform-led interventions present an intriguing alternative to government-led minimum pay regulation and provide new strategic insights into managing platform change.


[23] 2505.07820

Revisiting the Excess Volatility Puzzle Through the Lens of the Chiarella Model

We amend and extend the Chiarella model of financial markets to deal with arbitrary long-term value drifts in a consistent way. This allows us to improve upon existing calibration schemes, opening the possibility of calibrating individual monthly time series instead of classes of time series. The technique is employed on spot prices of four asset classes from ca. 1800 onward (stock indices, bonds, commodities, currencies). The so-called fundamental value is a direct output of the calibration, which allows us to (a) quantify the amount of excess volatility in these markets, which we find to be large (e.g. a factor $\approx$ 4 for stock indices) and consistent with previous estimates; and (b) determine the distribution of mispricings (i.e. the difference between market price and value), which we find in many cases to be bimodal. Both findings are strongly at odds with the Efficient Market Hypothesis. We also study in detail the 'sloppiness' of the calibration, that is, the directions in parameter space that are weakly constrained by data. The main conclusions of our study are remarkably consistent across different asset classes, and reinforce the hypothesis that the medium-term fate of financial markets is determined by a tug-of-war between trend followers and fundamentalists.
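
A schematic simulation of Chiarella-type dynamics, purely to illustrate the tug-of-war mentioned above: fundamentalists pull the price toward a (here random-walk) fundamental value, while trend followers react, through a saturating function, to an exponentially weighted trend signal. The functional forms and parameter values are illustrative assumptions, not the paper's calibrated model.

import numpy as np

rng = np.random.default_rng(1)
T, dt = 5000, 1.0
kappa, beta, gamma, omega, sigma = 0.02, 0.05, 1.0, 0.05, 0.3   # assumed parameters
p = np.zeros(T)            # log price
v = np.zeros(T)            # fundamental value (random walk)
m = 0.0                    # exponentially weighted trend signal
for t in range(1, T):
    v[t] = v[t - 1] + 0.01 * rng.standard_normal()
    m = (1 - omega) * m + omega * (p[t - 1] - p[max(t - 2, 0)])
    dp = (kappa * (v[t - 1] - p[t - 1])          # fundamentalist mean reversion
          + beta * np.tanh(gamma * m)            # saturating trend-following demand
          + sigma * rng.standard_normal())       # noise
    p[t] = p[t - 1] + dt * dp
mispricing = p - v
print("std of mispricing:", round(float(mispricing.std()), 2))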


[24] 2505.08654

Holistic Multi-Scale Inference of the Leverage Effect: Efficiency under Dependent Microstructure Noise

This paper addresses the long-standing challenge of estimating the leverage effect from high-frequency data contaminated by dependent, non-Gaussian microstructure noise. We depart from the conventional reliance on pre-averaging or volatility "plug-in" methods by introducing a holistic multi-scale framework that operates directly on the leverage effect. We propose two novel estimators: the Subsampling-and-Averaging Leverage Effect (SALE) and the Multi-Scale Leverage Effect (MSLE). Central to our approach is a shifted window technique that constructs a noise-unbiased base estimator, significantly simplifying the multi-scale architecture. We provide a rigorous theoretical foundation for these estimators, establishing central limit theorems and stable convergence results that remain valid under both noise-free and dependent-noise settings. The primary contribution to estimation efficiency is a specifically designed weighting strategy for the MSLE estimator. By optimizing the weights based on the asymptotic covariance structure across scales and incorporating finite-sample variance corrections, we achieve substantial efficiency gains over existing benchmarks. Extensive simulation studies and an empirical analysis of 30 U.S. assets demonstrate that our framework consistently yields smaller estimation errors and superior performance in realistic, noisy market environments.


[25] 2505.19013

Faithful Group Shapley Value

Data Shapley is an important tool for data valuation, which quantifies the contribution of individual data points to machine learning models. In practice, group-level data valuation is desirable when data providers contribute data in batches. However, we identify that existing group-level extensions of Data Shapley are vulnerable to shell company attacks, where strategic group splitting can unfairly inflate valuations. We propose Faithful Group Shapley Value (FGSV) that uniquely defends against such attacks. Building on original mathematical insights, we develop a provably fast and accurate approximation algorithm for computing FGSV. Empirical experiments demonstrate that our algorithm significantly outperforms state-of-the-art methods in computational efficiency and approximation accuracy, while ensuring faithful group-level valuation.
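
For context, here is a toy Monte Carlo implementation of the standard group-level extension of Data Shapley (each provider's batch treated as one player), i.e. the baseline notion the abstract argues is gameable; the utility function is a stand-in, and the FGSV construction itself is not reproduced here.

import numpy as np

def utility(groups):
    # toy "model value": diminishing returns in the total amount of data contributed
    n = sum(len(g) for g in groups)
    return np.sqrt(n)

def group_shapley(data_groups, n_perm=2000, seed=0):
    rng = np.random.default_rng(seed)
    k = len(data_groups)
    phi = np.zeros(k)
    for _ in range(n_perm):
        order = rng.permutation(k)               # random arrival order of groups
        coalition, prev = [], utility([])
        for g in order:
            coalition.append(data_groups[g])
            cur = utility(coalition)
            phi[g] += cur - prev                 # marginal contribution of group g
            prev = cur
    return phi / n_perm

groups = [list(range(10)), list(range(10, 15)), list(range(15, 30))]
print(np.round(group_shapley(groups), 3))

Re-running the same computation after splitting one provider's batch into several "shell" groups illustrates the kind of manipulation the abstract describes.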


[26] 2508.13366

Monotonic Path-Specific Effects: Application to Estimating Educational Returns

Conventional research on educational effects typically either employs a "years of schooling" measure of education, or dichotomizes attainment as a point-in-time treatment. Yet, such a conceptualization of education is misaligned with the sequential process by which individuals make educational transitions. In this paper, I propose a causal mediation framework for the study of educational effects on outcomes such as earnings. The framework considers the effect of a given educational transition as operating indirectly, via progression through subsequent transitions, as well as directly, net of these transitions. I demonstrate that the average treatment effect (ATE) of education can be additively decomposed into mutually exclusive components that capture these direct and indirect effects. The decomposition has several special properties which distinguish it from conventional mediation decompositions of the ATE, properties which facilitate less restrictive identification assumptions as well as identification of all causal paths in the decomposition. An analysis of the returns to high school completion in the NLSY97 cohort suggests that the payoff to a high school degree stems overwhelmingly from its direct labor market returns. Mediation via college attendance, completion and graduate school attendance is small because of individuals' low counterfactual progression rates through these subsequent transitions.


[27] 2508.21536

Triply Robust Panel Estimators

This paper studies estimation of causal effects in a panel data setting. We introduce a new estimator, the Triply RObust Panel (TROP) estimator, that combines (i) a flexible model for the potential outcomes based on a low-rank factor structure on top of a two-way fixed-effects specification, with (ii) unit weights intended to upweight units similar to the treated units and (iii) time weights intended to upweight time periods close to the treated time periods. We study the performance of the estimator in a set of simulations designed to closely match several commonly studied real data sets. We find that there is substantial variation in the performance of the estimators across the settings considered. The proposed estimator outperforms two-way fixed-effects/difference-in-differences, synthetic control, matrix completion, and synthetic difference-in-differences estimators. We investigate what features of the data generating process lead to this performance, and assess the relative importance of the three components of the proposed estimator. We have two recommendations. Our preferred strategy is that researchers use simulations closely matched to the data they are interested in, along the lines discussed in this paper, to investigate which estimators work well in their particular setting. A simpler approach is to use more robust estimators such as synthetic difference-in-differences or the new triply robust panel estimator, which we find to substantially outperform two-way fixed-effects estimators in many empirically relevant settings.


[28] 2512.14609

Asymptotic Inference for Rank Correlations

Kendall's tau and Spearman's rho are widely used tools for measuring dependence. Surprisingly, when it comes to asymptotic inference for these rank correlations, some fundamental results and methods have not yet been developed, in particular for discrete random variables and in the time series case, and concerning variance estimation in general. Consequently, asymptotic confidence intervals are not available. We provide a comprehensive treatment of asymptotic inference for classical rank correlations, including Kendall's tau, Spearman's rho, Goodman-Kruskal's gamma, Kendall's tau-b, and grade correlation. We derive asymptotic distributions for both iid and time series data, resorting to asymptotic results for U-statistics, and introduce consistent variance estimators. This enables the construction of confidence intervals and tests, generalizes classical results for continuous random variables and leads to corrected versions of widely used tests of independence. We analyze the finite-sample performance of our variance estimators, confidence intervals, and tests in simulations and illustrate their use in case studies.
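
As a minimal illustration for iid data (the paper's asymptotic variance estimators for ties and time series are not reproduced here), the snippet below computes Kendall's tau with a nonparametric pairs-bootstrap confidence interval; the data-generating process is illustrative.

import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
n = 300
x = rng.standard_normal(n)
y = 0.5 * x + rng.standard_normal(n)

tau_hat = kendalltau(x, y)[0]                    # point estimate of Kendall's tau
boot = np.empty(2000)
for b in range(boot.size):
    idx = rng.integers(0, n, n)                  # resample (x, y) pairs with replacement
    boot[b] = kendalltau(x[idx], y[idx])[0]
lo, hi = np.quantile(boot, [0.025, 0.975])
print(f"tau = {tau_hat:.3f}, 95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")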