New articles on econ


[1] 2007.01952

Binary Relations in Mathematical Economics: On the Continuity, Additivity and Monotonicity Postulates in Eilenberg, Villegas and DeGroot

This chapter examines how positivity and order play out in two important questions in mathematical economics, and in so doing, subjects the postulates of continuity, additivity and monotonicity to closer scrutiny. Two sets of results are offered: the first departs from Eilenberg's (1941) necessary and sufficient conditions on the topology under which an anti-symmetric, complete, transitive and continuous binary relation exists on a topologically connected space; and the second, from DeGroot's (1970) result concerning an additivity postulate that ensures that a complete binary relation on a $\sigma$-algebra is transitive. These results are framed in the registers of order, topology, algebra and measure theory, and also, beyond mathematics, in economics: the exploitation of Villegas' notion of monotonic continuity by Arrow-Chichilnisky in the context of Savage's theorem in decision theory, and the extension of Diamond's impossibility result in social choice theory by Basu-Mitra. As such, this chapter has a synthetic and expository motivation, and can be read as a plea for inter-disciplinary conversations, connections and collaboration.


[2] 2007.02435

Forecasting with Bayesian Grouped Random Effects in Panel Data

In this paper, we estimate and leverage a latent constant group structure to generate point, set, and density forecasts for short dynamic panel data. We implement a nonparametric Bayesian approach to simultaneously identify coefficients and group membership in the random effects, which are heterogeneous across groups but fixed within a group. This method allows us to incorporate subjective prior knowledge on the group structure, which potentially improves predictive accuracy. In Monte Carlo experiments, we demonstrate that our Bayesian grouped random effects (BGRE) estimators produce accurate estimates and deliver predictive gains over standard panel data estimators. With a data-driven group structure, the BGRE estimators match the clustering accuracy of the unsupervised machine learning algorithm K-means and outperform K-means in a two-step procedure. In the empirical analysis, we apply our method to forecast the investment rate across a broad range of firms and illustrate that the estimated latent group structure improves forecasts relative to standard panel data estimators.
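The two-step benchmark mentioned above (estimate unit-level effects, then cluster them with K-means) can be sketched in a few lines. The panel design, the two group intercepts, and the one-dimensional K-means routine below are illustrative assumptions, not the paper's BGRE estimator.

```python
import random

random.seed(0)

# Toy panel: N units over T periods, two latent groups with intercepts
# 0 and 3 (assumed values, chosen only to make the clusters separable).
N, T = 20, 30
true_group = [i % 2 for i in range(N)]
intercept = {0: 0.0, 1: 3.0}
panel = [[intercept[true_group[i]] + random.gauss(0, 1) for _ in range(T)]
         for i in range(N)]

# Step 1: unit-specific means, i.e. noisy estimates of group intercepts.
means = [sum(row) / T for row in panel]

# Step 2: one-dimensional K-means with k = 2 on the estimated means.
def kmeans_1d(xs, k=2, iters=50):
    centers = [min(xs), max(xs)]
    labels = []
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: (x - centers[j]) ** 2)
                  for x in xs]
        for j in range(k):
            members = [x for x, l in zip(xs, labels) if l == j]
            if members:
                centers[j] = sum(members) / len(members)
    return labels, centers

labels, centers = kmeans_1d(means)
# Clustering accuracy up to label permutation.
accuracy = max(
    sum(l == g for l, g in zip(labels, true_group)),
    sum(l != g for l, g in zip(labels, true_group)),
) / N
print(round(accuracy, 2))
```

With well-separated intercepts the two-step route recovers the groups; the paper's point is that a one-step Bayesian approach does at least as well while also delivering density forecasts.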


[3] 2007.02588

Spectral Targeting Estimation of $\lambda$-GARCH models

This paper presents a novel estimator of orthogonal GARCH models, which combines (eigenvalue and eigenvector) targeting estimation with stepwise (univariate) estimation; we term this the spectral targeting estimator. This two-step estimator is consistent under finite second-order moments, while asymptotic normality holds under finite fourth-order moments. The estimator is especially well suited for modelling larger portfolios: we compare the empirical performance of the spectral targeting estimator to that of the quasi-maximum likelihood estimator for five portfolios of 25 assets. The spectral targeting estimator dominates in terms of computational complexity, being up to 57 times faster in estimation, while both estimators produce similar out-of-sample forecasts, indicating that the spectral targeting estimator is well suited for high-dimensional empirical applications.
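A minimal sketch of the two steps, assuming a toy two-asset system so the eigen step stays closed-form: target the spectral decomposition of the sample covariance, rotate returns into principal components, then run a univariate GARCH(1,1) variance filter per component. The GARCH parameters below are fixed for illustration rather than estimated stepwise as in the paper.

```python
import math
import random

random.seed(1)

# Two correlated toy return series; the paper targets larger portfolios,
# but a 2x2 system keeps the eigendecomposition closed-form.
T = 500
z1 = [random.gauss(0, 1) for _ in range(T)]
z2 = [0.6 * a + 0.8 * random.gauss(0, 1) for a in z1]

def cov(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)

# Step 1 (spectral targeting): eigenvalues and rotation angle of the
# 2x2 sample covariance matrix, in closed form.
s11, s22, s12 = cov(z1, z1), cov(z2, z2), cov(z1, z2)
tr, det = s11 + s22, s11 * s22 - s12 ** 2
lam1 = tr / 2 + math.sqrt(tr ** 2 / 4 - det)
lam2 = tr / 2 - math.sqrt(tr ** 2 / 4 - det)
theta = 0.5 * math.atan2(2 * s12, s11 - s22)
c, s = math.cos(theta), math.sin(theta)

# Rotate returns into the (orthogonal) principal components.
p1 = [c * a + s * b for a, b in zip(z1, z2)]
p2 = [-s * a + c * b for a, b in zip(z1, z2)]

# Step 2 (stepwise): univariate GARCH(1,1) variance recursion per
# component; omega/alpha/beta are illustrative, not fitted.
def garch_filter(x, omega, alpha, beta):
    h = [cov(x, x)]
    for t in range(1, len(x)):
        h.append(omega + alpha * x[t - 1] ** 2 + beta * h[-1])
    return h

h1 = garch_filter(p1, omega=0.05 * lam1, alpha=0.05, beta=0.90)
h2 = garch_filter(p2, omega=0.05 * lam2, alpha=0.05, beta=0.90)
print(round(lam1, 2), round(lam2, 2))
```

The rotated components are sample-uncorrelated by construction, which is what makes the subsequent univariate, component-by-component estimation legitimate and fast.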


[4] 2007.02653

Teacher-to-classroom assignment and student achievement

We study the effects of counterfactual teacher-to-classroom assignments on average student achievement in elementary and middle schools in the US. We use the Measures of Effective Teaching (MET) experiment to semiparametrically identify the average reallocation effects (AREs) of such assignments. Our findings suggest that changes in within-district teacher assignments could have appreciable effects on student achievement. Unlike policies which require hiring additional teachers (e.g., class-size reduction measures), or those aimed at changing the stock of teachers (e.g., VAM-guided teacher tenure policies), alternative teacher-to-classroom assignments are resource neutral; they raise student achievement through a more efficient deployment of existing teachers.
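The idea of a reallocation gain can be illustrated with assumed match-specific outcomes: compare the average achievement under a status-quo teacher-to-classroom assignment against the best permutation. The numbers and the brute-force search below are purely hypothetical; the paper's contribution is identifying such average reallocation effects semiparametrically from data.

```python
from itertools import permutations

# Hypothetical match-specific outcomes: y[t][c] is average achievement
# when teacher t is assigned to classroom c (complementarities make the
# assignment matter, not just teacher quality).
y = [[0.50, 0.20, 0.10],
     [0.30, 0.40, 0.20],
     [0.10, 0.30, 0.60]]

def avg_outcome(assignment):
    # assignment[t] = classroom of teacher t
    return sum(y[t][c] for t, c in enumerate(assignment)) / len(assignment)

status_quo = (1, 2, 0)  # an arbitrary assumed current assignment
best = max(permutations(range(3)), key=avg_outcome)
gain = avg_outcome(best) - avg_outcome(status_quo)
print(best, round(gain, 2))
```

The reassignment is resource neutral, as the abstract stresses: the same three teachers are deployed, only the matching changes.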


[5] 2007.02739

Semi-nonparametric Latent Class Choice Model with a Flexible Class Membership Component: A Mixture Model Approach

This study presents a semi-nonparametric Latent Class Choice Model (LCCM) with a flexible class membership component. The proposed model formulates the latent classes using mixture models as an alternative approach to the traditional random utility specification, with the aim of comparing the two approaches on various measures including prediction accuracy and representation of heterogeneity in the choice process. Mixture models are parametric model-based clustering techniques that have been widely used in areas such as machine learning, data mining and pattern recognition for clustering and classification problems. An Expectation-Maximization (EM) algorithm is derived for the estimation of the proposed model. Using two different case studies on travel mode choice behavior, the proposed model is compared to traditional discrete choice models on the basis of parameter estimates' signs, value of time, statistical goodness-of-fit measures, and cross-validation tests. Results show that mixture models improve the overall performance of latent class choice models by providing better out-of-sample prediction accuracy in addition to better representations of heterogeneity without weakening the behavioral and economic interpretability of the choice models.
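A minimal EM iteration for a mixture-model membership component, assuming a one-dimensional two-component Gaussian mixture as a deliberately simple stand-in for the paper's class-membership model:

```python
import math
import random

random.seed(2)

# Toy data from two latent classes (e.g., a continuous traveller
# attribute with two segments); the class counts and means are assumed.
data = ([random.gauss(0, 1) for _ in range(200)] +
        [random.gauss(4, 1) for _ in range(200)])

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# EM for a two-component Gaussian mixture.
pi, mu, sigma = 0.5, [min(data), max(data)], [1.0, 1.0]
for _ in range(50):
    # E-step: posterior probability of class 0 for each observation.
    resp = []
    for x in data:
        w1 = pi * normal_pdf(x, mu[0], sigma[0])
        w2 = (1 - pi) * normal_pdf(x, mu[1], sigma[1])
        resp.append(w1 / (w1 + w2))
    # M-step: re-estimate weight, means, and standard deviations.
    n1 = sum(resp)
    n2 = len(data) - n1
    pi = n1 / len(data)
    mu[0] = sum(r * x for r, x in zip(resp, data)) / n1
    mu[1] = sum((1 - r) * x for r, x in zip(resp, data)) / n2
    sigma[0] = math.sqrt(sum(r * (x - mu[0]) ** 2 for r, x in zip(resp, data)) / n1)
    sigma[1] = math.sqrt(sum((1 - r) * (x - mu[1]) ** 2 for r, x in zip(resp, data)) / n2)

print(round(min(mu), 1), round(max(mu), 1))
```

In the paper's LCCM, the E-step responsibilities additionally involve class-specific choice likelihoods; the alternation between posterior membership probabilities and parameter updates is the same.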


[6] 2007.01896

Spatial Iterated Prisoner's Dilemma as a Transformation Semigroup

The prisoner's dilemma (PD) is a game-theoretic model studied in a wide array of fields to understand the emergence of cooperation between rational self-interested agents. In this work, we formulate a spatial iterated PD as a discrete-event dynamical system in which agents play the game in each time-step, and analyse it algebraically using Krohn-Rhodes algebraic automata theory via a computational implementation of the holonomy decomposition of transformation semigroups. In each iteration, all players adopt the most profitable strategy in their immediate neighbourhood. Perturbations resetting the strategy of a given player provide additional generating events for the dynamics. Our initial study shows that algebraic structure, including natural subsystems comprising permutation groups acting on the spatial distributions of strategies, arises in certain parameter regimes of the pay-off matrix and is absent in others. Differences in the number of group levels in the holonomy decomposition (an upper bound on Krohn-Rhodes complexity) are revealed as more pools of reversibility appear when the temptation to defect is at an intermediate level. The algebraic structure uncovered by this analysis can be interpreted to shed light on the dynamics of the spatial iterated PD.
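The local update rule described above (each player adopts the most profitable strategy in its immediate neighbourhood) can be sketched on a ring of players. The pay-off values and the first-index tie-breaking below are illustrative assumptions; the paper's algebraic analysis of the resulting transformation semigroup is not attempted here.

```python
# Standard PD pay-offs: reward, sucker, temptation, punishment.
# These particular values are one assumed parameterization.
R, S, T, P = 3, 0, 5, 1

def payoff(a, b):  # a, b in {'C', 'D'}; returns a's pay-off against b
    return {('C', 'C'): R, ('C', 'D'): S,
            ('D', 'C'): T, ('D', 'D'): P}[(a, b)]

def step(strat):
    """One synchronous update on a ring of players."""
    n = len(strat)
    # Each player plays both neighbours and accumulates a score.
    score = [payoff(strat[i], strat[(i - 1) % n]) +
             payoff(strat[i], strat[(i + 1) % n]) for i in range(n)]
    # Each player copies the strategy of the highest-scoring player in
    # its neighbourhood (itself included; ties break to the left).
    new = []
    for i in range(n):
        nb = [(i - 1) % n, i, (i + 1) % n]
        new.append(strat[max(nb, key=lambda j: score[j])])
    return new

state = list("CCCCDCCCC")  # a single defector among cooperators
state = step(state)
print("".join(state))      # the defector converts its neighbours
```

With this temptation level the lone defector scores highest and spreads; varying T relative to R is what moves the system between the parameter regimes the abstract describes.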


[7] 2007.02141

Off-Policy Exploitability-Evaluation and Equilibrium-Learning in Two-Player Zero-Sum Markov Games

Off-policy evaluation (OPE) is the problem of evaluating new policies using historical data obtained under a different policy. Off-policy learning (OPL), on the other hand, is the problem of finding an optimal policy using such historical data. Most recent OPE and OPL studies have focused on the single-player case rather than on settings with two or more players. In this study, we propose methods for OPE and OPL in two-player zero-sum Markov games. For OPE, we estimate exploitability, a metric often used to measure how close a strategy profile is to a Nash equilibrium in two-player zero-sum games. For OPL, we calculate maximin policies as Nash equilibrium strategies over the historical data. We prove exploitability estimation error bounds for OPE and regret bounds for OPL based on the doubly robust and double reinforcement learning estimators. Finally, we demonstrate the effectiveness and performance of the proposed methods through experiments.
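When the game itself is known, exploitability for a two-player zero-sum matrix game can be computed directly: it is the gap between each player's best-response payoff and the profile's payoff, and it is zero exactly at a Nash equilibrium. The sketch below illustrates the metric, not the paper's off-policy estimator of it.

```python
def exploitability(A, p1, p2):
    """Exploitability of mixed strategies (p1, p2) in the zero-sum
    game with row-player pay-off matrix A."""
    # Row player's expected pay-off for each pure row against p2.
    row_payoffs = [sum(A[i][j] * p2[j] for j in range(len(p2)))
                   for i in range(len(A))]
    # Row player's expected pay-off under p1 for each pure column.
    col_payoffs = [sum(p1[i] * A[i][j] for i in range(len(p1)))
                   for j in range(len(A[0]))]
    # Best response for row minus best response for column (who
    # minimizes the row player's pay-off).
    return max(row_payoffs) - min(col_payoffs)

# Rock-paper-scissors: uniform play is the unique Nash equilibrium.
rps = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]
uniform = [1 / 3] * 3
biased = [0.5, 0.25, 0.25]
print(exploitability(rps, uniform, uniform))  # 0.0 at equilibrium
print(exploitability(rps, biased, biased))    # 0.5 away from it
```

The paper's problem is estimating this quantity when the transition and reward structure of the Markov game are only observed through logged play under a behaviour policy.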


[8] 2007.02411

Robust Causal Inference Under Covariate Shift via Worst-Case Subpopulation Treatment Effects

We propose the worst-case treatment effect (WTE) across all subpopulations of a given size, a conservative notion of topline treatment effect. Compared to the average treatment effect (ATE), which relies solely on the covariate distribution of the collected data, the WTE is robust to unanticipated covariate shifts, and ensures that positive findings guarantee uniformly valid treatment effects over underrepresented minority groups. We develop a semiparametrically efficient estimator for the WTE, leveraging machine learning-based estimates of heterogeneous treatment effects and propensity scores. By virtue of satisfying a key (Neyman) orthogonality property, our estimator enjoys central limit behavior---oracle rates with true nuisance parameters---even when estimates of nuisance parameters converge at slower rates. For both observational and randomized studies, we prove that our estimator achieves the optimal asymptotic variance, by establishing a semiparametric efficiency lower bound. On real datasets where robustness to covariate shift is of core concern, we illustrate the non-robustness of the ATE under even mild distributional shift, and demonstrate that the WTE guards against brittle findings that are invalidated by unanticipated covariate shifts.
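Why the WTE is more conservative than the ATE can be seen with a plug-in toy: given (assumed known) unit-level effects, the worst-case average over any subpopulation of fraction alpha is the mean of the lowest alpha-share of effects, a CVaR-type quantity. The effects below are made up, and the paper's semiparametric machinery for estimating them from data is not reproduced here.

```python
def worst_case_effect(effects, alpha):
    """Mean effect over the worst-off subpopulation of fraction alpha."""
    k = max(1, int(len(effects) * alpha))
    worst = sorted(effects)[:k]  # the k smallest unit-level effects
    return sum(worst) / k

# Hypothetical unit-level treatment effects.
effects = [2.0, 1.5, 1.0, 0.5, -0.5, -1.0, 3.0, 0.0, 1.2, 0.8]
ate = sum(effects) / len(effects)
wte = worst_case_effect(effects, alpha=0.2)
print(round(ate, 2), round(wte, 2))  # a positive ATE can hide a negative WTE
```

A covariate shift that over-samples the worst-off 20% would flip the sign of the topline finding, which is exactly the brittleness the WTE is designed to guard against.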


[9] 2007.02726

Bridging the COVID-19 Data and the Epidemiological Model using Time Varying Parameter SIRD Model

This paper extends the canonical epidemiological model, the SIRD model, to allow for time-varying parameters for real-time measurement of the stance of the COVID-19 pandemic. Time variation in the model parameters is captured using the generalized autoregressive score modelling structure designed for the typically daily count data related to the pandemic. The resulting specification permits a flexible yet parsimonious model structure with a very low computational cost. This is especially crucial at the onset of the pandemic, when data is scarce and uncertainty is abundant. Full-sample results show that countries including the US, Brazil and Russia are still not able to contain the pandemic, with the US having the worst performance. Furthermore, Iran and South Korea are likely to experience a second wave of the pandemic. A real-time exercise shows that the proposed structure delivers timely and precise information on the current stance of the pandemic ahead of competitors that use rolling windows. This, in turn, translates into accurate short-term predictions of active cases. We further modify the model to allow for unreported cases. Results suggest that the effect of the presence of these cases on the estimation results diminishes towards the end of the sample as the number of tests increases.
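The deterministic core being extended is the discrete-time SIRD recursion. A sketch with an assumed, smoothly decaying infection rate standing in for the paper's score-driven parameter dynamics (all parameter values are illustrative):

```python
import math

def sird(beta, gamma=0.1, nu=0.01, I0=0.001, steps=120):
    """Discrete-time SIRD in population shares with time-varying
    infection rate beta(t); gamma = recovery rate, nu = death rate."""
    S, I, R, D = 1 - I0, I0, 0.0, 0.0
    path = []
    for t in range(steps):
        new_inf = beta(t) * S * I
        new_rec = gamma * I
        new_dead = nu * I
        S, I = S - new_inf, I + new_inf - new_rec - new_dead
        R, D = R + new_rec, D + new_dead
        path.append((S, I, R, D))
    return path

# Infection rate declining over time, e.g. as containment measures bind.
path = sird(beta=lambda t: 0.4 * math.exp(-0.02 * t))
S, I, R, D = path[-1]
print(round(S + I + R + D, 6), round(D, 4))
```

In the paper, beta (and the other rates) are not a fixed schedule but are updated each day from the score of the count-data likelihood, which is what makes the stance of the pandemic measurable in real time.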


[10] 2007.02823

Dynamic Awareness

We investigate how to model the beliefs of an agent who becomes more aware. We use the framework of Halpern and Rego (2013) by adding probability, and define a notion of a model transition that describes constraints on how, if an agent becomes aware of a new formula $\phi$ in state $s$ of a model $M$, she transitions to state $s^*$ in a model $M^*$. We then discuss how such a model can be applied to information disclosure.