New articles on Economics


[1] 2306.03960

Information aggregation with delegation of votes

Liquid democracy is a system that combines aspects of direct democracy and representative democracy by allowing voters either to vote directly or to delegate their votes to others. In this paper we study the information aggregation properties of liquid democracy in a setting with heterogeneously informed truth-seeking voters -- who want the election outcome to match an underlying state of the world -- and partisan voters. We establish that liquid democracy admits equilibria that improve welfare and information aggregation over direct and representative democracy when voters' preferences and information precisions are publicly or privately known. Liquid democracy also admits equilibria that do worse than the other two systems. We discuss features of efficient and inefficient equilibria and provide conditions under which voters can coordinate on the efficient equilibria more easily in liquid democracy than in the other two systems.
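
As a rough illustration of the aggregation forces at play (a minimal Monte Carlo sketch, not the paper's equilibrium model; all parameter values are hypothetical), the following compares direct majority voting by heterogeneously informed truth-seeking voters with a naive delegation rule in which poorly informed voters hand their votes to the single best-informed voter:

```python
import numpy as np

rng = np.random.default_rng(0)

def p_correct(n_voters=101, n_trials=5000, delegate=False):
    """Probability that the majority outcome matches the binary state."""
    correct = 0
    for _ in range(n_trials):
        state = rng.integers(0, 2)                      # underlying state
        prec = rng.uniform(0.5, 0.9, n_voters)          # P(signal = state)
        signal = np.where(rng.random(n_voters) < prec, state, 1 - state)
        weight = np.ones(n_voters)
        if delegate:
            weak = prec < 0.6                           # naive delegation rule
            weight[weak] = 0.0
            weight[np.argmax(prec)] += weak.sum()       # votes pile on the expert
        votes_for_1 = weight[signal == 1].sum()
        outcome = int(votes_for_1 > weight.sum() / 2)
        correct += (outcome == state)
    return correct / n_trials

print("direct:   ", p_correct())
print("delegated:", p_correct(delegate=True))
```

Depending on the parameters, concentrating votes on a well-informed voter can either beat or lose to the wisdom of the crowd, which is consistent with the paper's finding that liquid democracy admits both welfare-improving and welfare-reducing equilibria.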


[2] 2306.04065

Sustainability criterion implied externality pricing for resource extraction

A dynamic model is constructed that generalises the Hartwick and Van Long (2020) endogenous discounting setup by introducing externalities, and we ask what implications this has for optimal natural resource extraction with constant consumption. It is shown that a modified form of the Hotelling and Hartwick rules holds in which the externality component of price is a specific function of the instantaneous user costs and cross-price elasticities. It is demonstrated that the externality-adjusted marginal user cost of remaining natural reserves is equal to the marginal user cost of extracted resources invested in human-made reproducible capital. This lends itself to a discrete form with a readily intuitive economic interpretation that illuminates the stepwise impact of externality pricing on optimal extraction schedules.
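
For readers without the background, the two classical benchmarks being modified can be stated as follows (a minimal sketch; the paper's externality-adjusted versions involve specific functions of user costs and cross-price elasticities that the abstract does not reproduce):

```latex
% Hotelling rule: the marginal user cost (resource rent)
% \lambda_t = p_t - c'(q_t) grows at the interest rate r
% along an optimal extraction path:
\[
  \frac{\dot{\lambda}_t}{\lambda_t} = r .
\]
% Hartwick rule: investing resource rents in human-made
% reproducible capital K_t sustains constant consumption C_t:
\[
  \dot{K}_t = \lambda_t q_t \quad \Longrightarrow \quad \dot{C}_t = 0 ,
\]
% where q_t is the extraction rate and p_t the resource price.
```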


[3] 2306.04135

Semiparametric Discrete Choice Models for Bundles

We propose methods of estimation and inference for semiparametric discrete choice models for bundles in both cross-sectional and panel data settings. Our matching-based identification approach permits certain forms of heteroskedasticity and arbitrary correlation in the disturbances across choices. For the cross-sectional model, we propose a kernel-weighted rank procedure and show the validity of the nonparametric bootstrap for inference. For the panel data model, we propose localized maximum score estimators and show that the numerical bootstrap is a valid inference method. Monte Carlo experiments demonstrate that our proposed estimation and inference procedures perform adequately in finite samples.
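
To fix ideas, here is a stylized kernel-weighted rank objective for a scalar linear index (a hedged sketch of the general idea only; the paper's bundle estimator, its matching variables, and its weighting scheme are specified in the paper). Pairs of observations are weighted by how close they are in the matching covariates W, and the objective rewards rankings of the index that agree with rankings of the outcomes:

```python
import numpy as np

def kw_rank_objective(beta, X, W, y, h=0.5):
    """Kernel-weighted rank objective:
    sum over pairs (i, j) of K((W_i - W_j)/h) * 1{y_i > y_j} * 1{X_i'b > X_j'b}.

    Rank objectives are step functions of beta, so in practice they are
    maximized by grid or derivative-free search rather than gradients."""
    idx = X @ beta
    n = len(y)
    total = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            k = np.exp(-0.5 * np.sum(((W[i] - W[j]) / h) ** 2))  # Gaussian kernel
            total += k * float(y[i] > y[j]) * float(idx[i] > idx[j])
    return total
```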


[4] 2306.04177

Semiparametric Efficiency Gains from Parametric Restrictions on the Generalized Propensity Score

Knowledge of the propensity score weakly improves efficiency when estimating causal parameters, but what kind of knowledge is more useful? To examine this, we first derive the semiparametric efficiency bound for multivalued treatment effects when the propensity score is correctly specified by a parametric model. We then reveal which parametric structures on the propensity score enhance efficiency even when the model is large. Finally, we apply the general theory to a stratified experiment setup and find that knowing the strata improves efficiency, especially when each stratum component is small.
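
For orientation, the classical binary-treatment benchmark that this generalizes is Hahn's (1998) bound: under unconfoundedness, the semiparametric efficiency bound for the average treatment effect $\tau = E[Y(1) - Y(0)]$ is

```latex
\[
  V_{\mathrm{eff}}
  = E\!\left[ \frac{\sigma_1^2(X)}{e(X)}
            + \frac{\sigma_0^2(X)}{1 - e(X)}
            + \bigl(\tau(X) - \tau\bigr)^2 \right],
\]
% where e(X) = P(D = 1 | X) is the propensity score,
% \sigma_d^2(X) = Var(Y(d) | X), and \tau(X) = E[Y(1) - Y(0) | X].
```

Knowledge of $e(X)$ does not lower this particular bound, although it does for other estimands such as the effect on the treated; which restrictions help, and for which multivalued-treatment estimands, is the question the paper takes up.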


[5] 2306.04285

Dynamic Programming on a Quantum Annealer: Solving the RBC Model

We introduce a novel approach to solving dynamic programming problems, such as those in many economic models, on a quantum annealer, a specialized device that performs combinatorial optimization. Quantum annealers attempt to solve an NP-hard problem by starting in a quantum superposition of all states and generating candidate global solutions in milliseconds, irrespective of problem size. Using existing quantum hardware, we achieve an order-of-magnitude speed-up in solving the real business cycle model over benchmarks in the literature. We also provide a detailed introduction to quantum annealing and discuss its potential use for more challenging economic problems.
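
To make the mapping to annealing hardware concrete, here is a minimal sketch (not the paper's RBC formulation; the values and penalty weight are hypothetical) of how a single discrete choice over a grid can be posed as a QUBO with a one-hot constraint, using D-Wave's open-source dimod package. ExactSolver is a classical stand-in for the quantum sampler:

```python
import numpy as np
import dimod

# Toy one-period problem: pick exactly one grid point k to maximize v[k].
v = np.array([0.10, 0.40, 0.35, 0.20])  # hypothetical values on the grid
n = len(v)
P = 10.0                                # penalty enforcing the one-hot choice

# QUBO for: minimize -sum_k v[k] x_k + P * (sum_k x_k - 1)^2.
# Expanding the penalty (x^2 = x for binaries) gives these coefficients.
Q = {}
for i in range(n):
    Q[(i, i)] = -v[i] - P               # linear terms
    for j in range(i + 1, n):
        Q[(i, j)] = 2.0 * P             # pairwise penalty terms

bqm = dimod.BinaryQuadraticModel.from_qubo(Q, offset=P)
best = dimod.ExactSolver().sample(bqm).first.sample  # hardware: a quantum sampler
print("chosen grid point:", [k for k, x in best.items() if x == 1])
```

A dynamic program stacks one such one-hot choice per state and period, with additional quadratic terms linking periods; the paper's actual encoding of the RBC model is more involved.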


[6] 2306.04462

An Empirical Study of Obstacle Preemption in the Supreme Court

The Supreme Court's federal preemption decisions are notoriously unpredictable. Traditional left-right voting alignments break down in the face of competing ideological pulls. The breakdown of predictable voting blocs leaves the business interests most affected by federal preemption uncertain of the scope of potential liability to injured third parties and unsure even of whether state or federal law will govern future claims. This empirical analysis of the Court's decisions over the last fifteen years sheds light on the Court's unique voting alignments in obstacle preemption cases. A surprising anti-obstacle-preemption coalition is emerging as Justice Thomas gradually positions himself alongside the Court's liberals in a five-justice voting bloc opposing obstacle preemption.


[7] 2306.04463

Calibrating Chevron for Preemption

Now almost three decades since its seminal Chevron decision, the Supreme Court has yet to articulate how that case's doctrine of deference to agency statutory interpretations relates to one of the most compelling federalism issues of our time: regulatory preemption of state law. Should courts defer to preemptive agency interpretations under Chevron, or do preemption's federalism implications demand a less deferential approach? Commentators have provided no shortage of possible solutions, but thus far the Court has resisted all of them. This Article makes two contributions to the debate. First, through a detailed analysis of the Court's recent agency-preemption decisions, I trace its hesitancy to adopt any of the various proposed rules to its high regard for congressional intent where areas of traditional state sovereignty are at risk. Recognizing that congressional intent to delegate preemptive authority varies from case to case, the Court has hesitated to adopt an across-the-board rule. Any such rule would constrain the Court and risk mismatch with congressional intent -- a risk it accepts under Chevron generally but which it finds particularly troublesome in the delicate area of federal preemption. Second, building on this previously underappreciated factor in the Court's analysis, I suggest a novel solution of variable deference that avoids the inflexibility inherent in an across-the-board rule while providing greater predictability than the Court's current haphazard approach. The proposed rule would grant full Chevron-style deference in those cases where congressional delegative intent is most likely -- where Congress has expressly preempted some state law and the agency interpretation merely resolves preemptive scope -- while withholding deference in those cases where Congress has remained completely silent as to preemption and delegative intent is least likely.


[8] 2306.04494

Evaluating the Impact of Regulatory Policies on Social Welfare in Difference-in-Difference Settings

Quantifying the impact of regulatory policies on social welfare generally requires the identification of counterfactual distributions. Many of these policies (e.g. minimum wages or minimum working time) generate mass points and/or discontinuities in the outcome distribution. Existing approaches in the difference-in-difference literature cannot accommodate these discontinuities while accounting for selection on unobservables and non-stationary outcome distributions. We provide a unifying partial identification result that can account for these features. Our main identifying assumption is the stability of the dependence (copula) between the distribution of the untreated potential outcome and group membership (treatment assignment) across time. Exploiting this copula stability assumption allows us to provide an identification result that is invariant to monotonic transformations. We provide sharp bounds on the counterfactual distribution of the treatment group suitable for any outcome, whether discrete, continuous, or mixed. Our bounds collapse to the point-identification result in Athey and Imbens (2006) for continuous outcomes with strictly increasing distribution functions. We illustrate our approach and the informativeness of our bounds by analyzing the impact of an increase in the legal minimum wage using data from a recent minimum wage study (Cengiz, Dube, Lindner, and Zipperer, 2019).
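
For reference, the point-identification result the bounds collapse to: in the changes-in-changes model of Athey and Imbens (2006), with continuous outcomes and strictly increasing distribution functions, the counterfactual distribution of the untreated potential outcome for the treatment group in the post period is

```latex
\[
  F_{Y^N_{11}}(y) \;=\; F_{Y_{10}}\!\Bigl( F_{Y_{00}}^{-1}\bigl( F_{Y_{01}}(y) \bigr) \Bigr),
\]
% where F_{Y_{gt}} is the observed outcome CDF for group g (1 = treated)
% in period t (1 = post), and Y^N is the untreated potential outcome.
```

Mass points and discontinuities break the invertibility this formula relies on, which is what motivates the paper's partial-identification approach.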


[9] 2306.04562

International Spillovers of ECB Interest Rates: Monetary Policy & Information Effects

This paper shows that disregarding the information effects around European Central Bank monetary policy announcements biases estimates of their international spillovers. Using data from 23 economies, both emerging and advanced, I show that an identification strategy that disentangles pure monetary policy shocks from information effects leads to international spillovers on industrial production, exchange rates and equity indexes that are two to three times larger in magnitude than those arising under the standard high-frequency identification strategy. The bias is driven by pure monetary policy and information effects having intuitively opposite international spillovers. The results hold across a battery of robustness checks: for sub-samples of ``close'' and ``further away'' countries, for both emerging and advanced economies, using local projection techniques, and for alternative methods of controlling for information effects. I argue that this bias may have led the previous literature to find little or no international spillovers of ECB rates.
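
A common way to implement this disentangling in the literature is the sign-restriction logic of Jarociński and Karadi (2020) (a hedged sketch of that general scheme; whether this paper uses exactly this classification is specified in the paper itself): within the high-frequency announcement window, a rate surprise accompanied by an equity move of the opposite sign is labeled a pure monetary policy shock, while same-signed co-movement is labeled an information shock.

```python
import numpy as np

def classify_surprises(rate_surprise, stock_surprise):
    """'Poor man's sign restriction': label each announcement window.

    Opposite-signed co-movement (rates up, stocks down) -> pure policy;
    same-signed co-movement (rates up, stocks up)       -> information."""
    rate = np.asarray(rate_surprise)
    stock = np.asarray(stock_surprise)
    pure_policy = rate * stock < 0
    information = rate * stock > 0
    return pure_policy, information

# Hypothetical window-level surprises around ECB announcements:
rates = np.array([0.05, -0.03, 0.02, 0.04])
stocks = np.array([-0.40, -0.20, 0.30, -0.10])
policy, info = classify_surprises(rates, stocks)
print("pure policy windows:", np.where(policy)[0])
print("information windows:", np.where(info)[0])
```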


[10] 2306.04587

Trade-off between manipulability and dictatorial power: a proof of the Gibbard-Satterthwaite Theorem

By endowing the class of tops-only and efficient social choice rules with a dual order structure that exploits the trade-off between the different degrees of manipulability and of dictatorial power that rules allow agents to have, we provide a proof of the Gibbard-Satterthwaite Theorem.
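
For completeness, the statement being proved (in a standard formulation):

```latex
% Gibbard-Satterthwaite Theorem. Let A be a finite set of alternatives
% with |A| >= 3, and let f be a social choice rule, defined on all
% profiles of strict preferences over A, whose range is all of A. Then
\[
  f \ \text{strategy-proof} \;\Longrightarrow\; f \ \text{dictatorial.}
\]
```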


[11] 2306.04606

Network-based Representations and Dynamic Discrete Choice Models for Multiple Discrete Choice Analysis

In many choice modeling applications, demand is characterized as multiple discrete, meaning that people choose multiple items simultaneously. The analysis and prediction of choice behavior in multiple discrete choice situations pose several challenges. To address them, we propose in this paper a random utility maximization (RUM) based model that treats each subset of choice alternatives as a composite alternative, from which individuals choose according to the RUM framework. While this offers a natural and intuitive way to model multiple-choice behavior, the large number of subsets of choices makes estimation and application intractable. To overcome this challenge, we introduce directed acyclic graph (DAG) based representations of choices in which each node of the DAG carries an elemental alternative together with additional information such as the number of elemental alternatives selected so far. Our innovation is to show that the multiple-choice model is equivalent to a recursive route choice model on the DAG, which leads to new efficient estimation algorithms based on dynamic programming. In addition, the DAG representations enable us to adapt advanced route choice models to capture correlation between subset choice alternatives. Numerical experiments based on synthetic and real datasets show many advantages of our modeling approach and of the proposed estimation algorithms.
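
As a minimal illustration of the dynamic-programming idea (a sketch with independent utilities and a simple cardinality cap; the paper's DAG construction and correlation structures are richer), the log-partition over all subsets of at most K items can be computed by backward recursion over nodes (item index, number chosen so far), so that every root-to-sink path corresponds to one subset -- the recursive route choice view:

```python
import numpy as np

def log_partition(u, K):
    """Log of the sum over all subsets S with |S| <= K of exp(sum_{i in S} u[i]).

    Node (i, k) of the DAG means: items 0..i-1 decided, k items chosen.
    Each node has a 'skip' arc and (if k < K) a 'take' arc."""
    n = len(u)
    V = np.full((n + 1, K + 1), -np.inf)
    V[n, :] = 0.0                                   # sink: empty completion
    for i in range(n - 1, -1, -1):
        for k in range(K + 1):
            skip = V[i + 1, k]
            take = u[i] + V[i + 1, k + 1] if k < K else -np.inf
            V[i, k] = np.logaddexp(skip, take)
    return V[0, 0]

u = np.array([0.5, -0.2, 1.0, 0.1])                 # hypothetical utilities
logZ = log_partition(u, K=2)
# P(choose S) = exp(u[S].sum() - logZ); e.g. the bundle {0, 2}:
print(np.exp(u[[0, 2]].sum() - logZ))
```

The recursion costs O(nK) value-function evaluations instead of enumerating subsets explicitly, which is the source of the tractability the abstract describes.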


[12] 2306.04305

Self-Resolving Prediction Markets for Unverifiable Outcomes

Prediction markets elicit and aggregate beliefs by paying agents based on how close their predictions are to a verifiable future outcome. However, the outcomes of many important questions are difficult to verify or unverifiable, in that the ground truth may be hard or impossible to access. Examples include questions about causal effects, where it is infeasible or unethical to run randomized trials; crowdsourcing and content moderation tasks, where it is prohibitively expensive to verify the ground truth; and questions asked over long time horizons, where the delay until the realization of the outcome skews agents' incentives to report their true beliefs. We present a novel and perhaps counterintuitive result showing that it is possible to run an $\varepsilon$-incentive compatible prediction market that elicits and efficiently aggregates information from a pool of agents without observing the outcome, by paying agents the negative cross-entropy between their prediction and that of a carefully chosen reference agent. Our key insight is that a reference agent with access to more information can serve as a reasonable proxy for the ground truth. We use this insight to propose self-resolving prediction markets that terminate with some probability after every report and pay all but a few agents based on the final prediction. We show that it is an $\varepsilon$-Perfect Bayesian Equilibrium for all agents to report truthfully in our mechanism and to believe that all other agents report truthfully. Although primarily of interest for unverifiable outcomes, this design is also applicable to verifiable outcomes.
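
One plausible reading of the payment rule, as a hedged sketch (the exact mechanism, including how the reference agent is chosen and how termination probabilities are set, is specified in the paper): pay each agent the negative cross-entropy of the better-informed reference report under the agent's own reported distribution.

```python
import numpy as np

def payment(report, reference, eps=1e-12):
    """Negative cross-entropy payment: sum_x reference(x) * log report(x).

    Like the log score, this is maximized in expectation by reporting
    one's true posterior -- but no realized outcome is ever needed,
    since the reference report stands in for the ground truth."""
    report = np.clip(np.asarray(report, dtype=float), eps, 1.0)
    return float(np.sum(np.asarray(reference, dtype=float) * np.log(report)))

# Hypothetical binary-outcome example with a better-informed reference:
print(payment([0.70, 0.30], [0.80, 0.20]))   # calibrated report
print(payment([0.99, 0.01], [0.80, 0.20]))   # overconfident report earns less
```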