New articles on Economics


[1] 2407.17523

How does the national new area impact the local economy? -- An empirical analysis from Zhoushan

To empirically study the policy impact of a National New Area on the local economy, this paper evaluates the effect of the Zhoushan Archipelago New Area on the local GDP growth rate and economic efficiency. By collecting input and output data from 20 prefecture-level cities in Jiangsu, Zhejiang, and Anhui provinces from 1995 to 2015, we estimate the economic efficiency of these cities using data envelopment analysis. Subsequently, we construct counterfactuals for Zhoushan by selecting comparable cities from the dataset, excluding Zhoushan, and applying a panel data approach. The difference between the actual and counterfactual values of the GDP growth rate and economic efficiency in Zhoushan is analyzed to determine the treatment effect of the National New Area policy. The research reveals that in the first four years, the New Area policy enhanced Zhoushan's economic efficiency but negatively affected its GDP growth rate; this influence gradually disappeared thereafter. Further analysis suggests that the policy's effect on the GDP growth rate varies with the level of regional economic development, having a more substantial impact in less developed areas. We therefore conclude that establishing a New Area in relatively undeveloped regions is more advantageous.
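
As an illustration of the counterfactual construction, the following is a minimal sketch of the panel-data approach (in the spirit of Hsiao, Ching, and Wan): fit the pre-treatment relation between the treated city and its comparables, then predict the post-treatment path. All data and names are hypothetical, and the DEA efficiency step is omitted.

    import numpy as np

    def counterfactual_effect(y_treated, Y_controls, n_pre):
        """Fit the pre-treatment relation between the treated city and the
        controls, then predict the counterfactual post-treatment path."""
        X_pre = np.column_stack([np.ones(n_pre), Y_controls[:n_pre]])
        beta, *_ = np.linalg.lstsq(X_pre, y_treated[:n_pre], rcond=None)
        n_post = len(y_treated) - n_pre
        X_post = np.column_stack([np.ones(n_post), Y_controls[n_pre:]])
        return y_treated[n_pre:] - X_post @ beta  # actual minus counterfactual

    # Simulated example: 21 years (1995-2015), 5 comparable control cities,
    # treatment (the New Area designation) starting in the 17th year.
    rng = np.random.default_rng(0)
    Y_controls = rng.normal(0.10, 0.02, size=(21, 5))
    y_treated = Y_controls.mean(axis=1) + rng.normal(0, 0.005, size=21)
    print(counterfactual_effect(y_treated, Y_controls, n_pre=16))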


[2] 2407.17589

Diversity in Choice as Majorization

We use majorization to model comparative diversity in school choice. A population of agents is more diverse than another population of agents if its distribution over groups is less concentrated: being less concentrated takes a specific mathematical meaning borrowed from the theory of majorization. We adapt the standard notion of majorization in order to favor arbitrary distributional objectives, such as population-level distributions over race/ethnicity or socioeconomic status. With school admissions in mind, we axiomatically characterize choice rules that are consistent with modified majorization, and constitute a principled method for admitting a diverse population of students into a school. Two important advantages of our approach are that majorization provides a natural notion of diversity, and that our axioms are independent of any exogenous priority ordering. We compare our choice rule to the leading proposal in the literature, ``reserves and quotas,'' and find ours to be more flexible.
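
To make the comparative notion concrete, here is a minimal sketch of a standard majorization check between two group distributions. The paper's modified majorization, which accommodates arbitrary distributional targets, goes beyond this; the names and numbers below are illustrative.

    import numpy as np

    def majorizes(q, p, tol=1e-12):
        """True if q majorizes p, i.e. p is at least as diverse (less
        concentrated) as q; assumes both sum to the same total."""
        p, q = np.sort(p)[::-1], np.sort(q)[::-1]
        return bool((np.cumsum(p) <= np.cumsum(q) + tol).all())

    # A uniform distribution over 4 groups is majorized by a concentrated one.
    uniform = np.array([0.25, 0.25, 0.25, 0.25])
    skewed = np.array([0.70, 0.20, 0.05, 0.05])
    print(majorizes(skewed, uniform))  # True: uniform is more diverse
    print(majorizes(uniform, skewed))  # False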


[3] 2407.17731

Optimal Trade and Industrial Policies in the Global Economy: A Deep Learning Framework

We propose a deep learning framework, DL-opt, designed to efficiently solve for optimal policies in quantifiable general equilibrium trade models. DL-opt integrates (i) a nested fixed point (NFXP) formulation of the optimization problem, (ii) automatic implicit differentiation to enhance gradient descent for solving unilateral optimal policies, and (iii) a best-response dynamics approach for finding Nash equilibria. Utilizing DL-opt, we solve for non-cooperative tariffs and industrial subsidies across 7 economies and 44 sectors, incorporating sectoral external economies of scale. Our quantitative analysis reveals significant sectoral heterogeneity in Nash policies: Nash industrial subsidies increase with scale elasticities, whereas Nash tariffs decrease with trade elasticities. Moreover, we show that global dual competition, involving both tariffs and industrial subsidies, results in lower tariffs and higher welfare outcomes compared to a global tariff war. These findings highlight the importance of considering sectoral heterogeneity and policy combinations in understanding global economic competition.
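
The best-response dynamics in component (iii) can be illustrated on a toy two-country tariff game; the quadratic welfare function below is purely hypothetical and stands in for the model's general equilibrium payoffs.

    from scipy.optimize import minimize_scalar

    def welfare(own_tariff, other_tariff):
        # Hypothetical payoff: terms-of-trade gain net of own distortion
        # and of the retaliation externality.
        return own_tariff - own_tariff**2 - 0.5 * own_tariff * other_tariff

    def best_response(other_tariff):
        res = minimize_scalar(lambda t: -welfare(t, other_tariff),
                              bounds=(0.0, 1.0), method="bounded")
        return res.x

    t1, t2 = 0.5, 0.5
    for _ in range(100):  # iterate best responses to a fixed point
        t1_new, t2_new = best_response(t2), best_response(t1)
        if max(abs(t1_new - t1), abs(t2_new - t2)) < 1e-8:
            break
        t1, t2 = t1_new, t2_new
    print(f"Nash tariffs: {t1:.4f}, {t2:.4f}")  # converges to (0.4, 0.4)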


[4] 2407.17884

Generalization of Zhou fixed point theorem

We give two generalizations of the Zhou fixed point theorem. They weaken the subcompleteness condition on the values of the correspondence and relax its ascending condition. As an application, we derive a generalization of Topkis's theorem on the existence and order structure of the set of Nash equilibria of supermodular games.
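
The constructive content of such results can be illustrated on a finite lattice: iterating an increasing map from the bottom element converges upward to its least fixed point, the idea behind Tarski/Topkis-style existence arguments. The map below is a hypothetical stand-in for a joint best response in a supermodular game.

    import numpy as np

    K = 10  # lattice {0, ..., K}^2 with the componentwise order

    def F(x):
        # Hypothetical increasing map: each coordinate's "best response"
        # is nondecreasing in the other coordinate.
        return np.minimum(K, np.array([(x[1] + 3) // 2, (x[0] + 5) // 2]))

    x = np.zeros(2, dtype=int)  # bottom element of the lattice
    while True:
        x_next = F(x)
        if np.array_equal(x_next, x):  # fixed point reached
            break
        x = x_next
    print(x)  # least fixed point of F, here [3 4]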


[5] 2407.17888

Enhanced power enhancements for testing many moment equalities: Beyond the $2$- and $\infty$-norm

Tests based on the $2$- and $\infty$-norm have received considerable attention in high-dimensional testing problems, as they are powerful against dense and sparse alternatives, respectively. The power enhancement principle of Fan et al. (2015) combines these two norms to construct tests that are powerful against both types of alternatives. Nevertheless, the $2$- and $\infty$-norm are just two out of the whole spectrum of $p$-norms on which one can base a test. In the context of testing whether a candidate parameter satisfies a large number of moment equalities, we construct a test that harnesses the strength of all $p$-norms with $p\in[2, \infty]$. As a result, this test is consistent against strictly more alternatives than any test based on a single $p$-norm. In particular, our test is consistent against more alternatives than tests based on the $2$- and $\infty$-norm, which is what most implementations of the power enhancement principle target. We illustrate the scope of our general results by using them to construct a test that simultaneously dominates the Anderson-Rubin test (based on $p=2$) and tests based on the $\infty$-norm in terms of consistency in the linear instrumental variable model with many (weak) instruments.
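
A minimal sketch of the idea, combining only the two endpoint norms via a Bonferroni-style level split (the paper's construction, which uses the full range $p\in[2, \infty]$, is strictly more ambitious; everything below is illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    d, alpha, n_sim = 100, 0.05, 10_000

    # Simulate critical values for each norm under the null, splitting
    # the level alpha across the two statistics.
    null = rng.standard_normal((n_sim, d))
    c2 = np.quantile(np.linalg.norm(null, 2, axis=1), 1 - alpha / 2)
    cinf = np.quantile(np.linalg.norm(null, np.inf, axis=1), 1 - alpha / 2)

    def combined_test(z):
        """z: standardized sample moment vector; True means reject."""
        return np.linalg.norm(z, 2) > c2 or np.linalg.norm(z, np.inf) > cinf

    # A sparse alternative (one large moment violation) that the
    # sup-norm component is designed to catch.
    z = np.zeros(d)
    z[0] = 5.0
    print(combined_test(z + rng.standard_normal(d)))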


[6] 2407.18206

Starting Small: Prioritizing Safety over Efficacy in Randomized Experiments Using the Exact Finite Sample Likelihood

We use the exact finite sample likelihood and statistical decision theory to answer questions of ``why?'' and ``what should you have done?'' using data from randomized experiments and a utility function that prioritizes safety over efficacy. We propose a finite sample Bayesian decision rule and a finite sample maximum likelihood decision rule. We show that in finite samples of size 2 to 50, it is possible for these rules to achieve better performance according to established maximin and maximum regret criteria than a rule based on the Boole-Fréchet-Hoeffding bounds. We also propose a finite sample maximum likelihood criterion. We apply our rules and criterion to an actual clinical trial that yielded a promising estimate of efficacy; our results point to safety as a reason why results were mixed in subsequent trials.
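
As a rough illustration of a Bayesian decision rule that prioritizes safety over efficacy, consider a Beta-Binomial sketch; the prior, the utility weight, and the zero status-quo threshold are hypothetical choices, not the paper's specification.

    from scipy.stats import beta

    def expected_utility(n, n_cured, n_harmed, w_safety=3.0, prior=(1.0, 1.0)):
        """Posterior-mean utility of adopting the treatment: reward the
        cure probability, penalize the harm probability more heavily."""
        a, b = prior
        p_cure = beta.mean(a + n_cured, b + n - n_cured)
        p_harm = beta.mean(a + n_harmed, b + n - n_harmed)
        return p_cure - w_safety * p_harm

    # Adopt only if expected utility beats the status quo (utility 0).
    n, n_cured, n_harmed = 20, 12, 3
    print("adopt" if expected_utility(n, n_cured, n_harmed) > 0 else "do not adopt")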


[7] 2407.17489

Collective Attention in Human-AI Teams

How does the presence of an AI assistant affect the collective attention of a team? We study 20 human teams of 3-4 individuals paired with a voice-only AI assistant during a challenging puzzle task. Teams are randomly assigned to an AI assistant with a human- or robotic-sounding voice that provides either helpful or misleading information about the task. Treating each individual AI interjection as a treatment intervention, we identify the causal effects of the AI on dynamic group processes involving language use. Our findings demonstrate that the AI significantly affects what teams discuss, how they discuss it, and the alignment of their mental models. Teams adopt AI-introduced language both for terms directly related to the task and for peripheral terms, even when they (a) recognize the unhelpful nature of the AI, (b) do not consider the AI a genuine team member, and (c) do not trust the AI. This process of language adaptation appears to be automatic, despite doubts about the AI's competence. The presence of an AI assistant thus shapes team collective attention by modulating various aspects of shared cognition. This study contributes to human-AI teaming research by highlighting collective attention as a central mechanism through which AI systems in team settings influence team performance. Understanding this mechanism will help CSCW researchers design AI systems that enhance team collective intelligence by optimizing collective attention.
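
The identification strategy, treating each AI interjection as an intervention, can be sketched as a before/after comparison of AI-introduced terms in the team's speech; the transcript format and window size below are hypothetical.

    def adoption_effect(utterances, interjections, window=10):
        """utterances: chronological list of (speaker, set_of_tokens);
        interjections: list of (index, terms_introduced_by_AI) pairs.
        Returns, per interjection, the change in the share of human
        utterances using the AI's terms."""
        def rate(chunk, terms):
            human = [toks for spk, toks in chunk if spk != "AI"]
            return sum(bool(terms & toks) for toks in human) / max(1, len(human))
        effects = []
        for idx, ai_terms in interjections:
            pre = utterances[max(0, idx - window):idx]
            post = utterances[idx + 1:idx + 1 + window]
            effects.append(rate(post, ai_terms) - rate(pre, ai_terms))
        return effects  # positive values indicate adoption of AI language

    # Toy example: the team starts saying "rotate" after the AI introduces it.
    log = [("A", {"move", "piece"}), ("B", {"try", "corner"}),
           ("AI", {"rotate", "piece"}), ("A", {"rotate", "it"}),
           ("B", {"rotate", "corner"})]
    print(adoption_effect(log, [(2, {"rotate"})], window=2))  # [1.0]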