Prior spending-shutoff experiments in search advertising have found that paid ads cannibalize organic traffic. But it is unclear whether the same is true for other high-volume advertising channels, such as mobile display advertising. We therefore analyzed a large-scale spending-shutoff experiment by a US-based mobile game developer, GameSpace. Contrary to previous findings, we found that paid advertising boosts organic installs rather than cannibalizing them. Specifically, every $100 spent on ads is associated with 37 paid and 3 organic installs. The complementarity between paid ads and organic installs is corroborated by evidence of temporal and cross-platform spillover effects: ad spending today is associated with additional paid and organic installs tomorrow, and impressions on one platform lead to clicks on other platforms. Our findings demonstrate that, due to these spillover effects, mobile app install advertising is about 7.5% more effective than paid install metrics alone indicate, suggesting that mobile app developers are under-investing in marketing.
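As a back-of-envelope check on the headline figure in the abstract above, the short sketch below recomputes the spillover uplift from the rounded counts quoted there (37 paid and 3 organic installs per $100). The rounded counts imply roughly 8%; the paper's figure of about 7.5% presumably comes from unrounded estimates.

```python
# Back-of-envelope check using the rounded counts from the abstract.
paid_per_100 = 37      # paid installs per $100 of ad spend
organic_per_100 = 3    # ad-driven organic installs per $100 of ad spend

# Extra effectiveness relative to what paid install metrics alone show.
uplift = organic_per_100 / paid_per_100
print(f"spillover uplift: {uplift:.1%}")  # ~8% with rounded counts; ~7.5% with unrounded estimates
```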
We consider finite-player games with incomplete information in which players may hold mutually inconsistent beliefs without a common prior. We introduce absolute continuity of beliefs, extending the classical notion of absolutely continuous information in Milgrom and Weber (1985), and prove that a Bayesian equilibrium exists under broad conditions. Applying these results to games with rich type spaces that accommodate infinite belief hierarchies, we show that, for a wide class of games, when the analyst's game has a type space satisfying absolute continuity of beliefs, the actual game played according to the belief hierarchies induced by that type space has a Bayesian equilibrium. We provide examples that illustrate practical applications of our findings.
This paper develops a general framework for dynamic models in which individuals simultaneously make both discrete and continuous choices. The framework accommodates a wide range of unobserved heterogeneity. I show that such models are nonparametrically identified. Based on constructive identification arguments, I build a novel two-step estimation method in the lineage of Hotz and Miller (1993) and Arcidiacono and Miller (2011), extended to simultaneous discrete-continuous choice. In the first step, I recover the (type-dependent) optimal choices with an expectation-maximization algorithm and instrumental-variable quantile regression. In the second step, I estimate the primitives of the model, taking the estimated optimal choices as given. The method is especially attractive for complex dynamic models because it significantly reduces the computational burden of estimation relative to full-solution methods.
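To make the two-step idea concrete, here is a minimal, self-contained sketch of the first step only, under strong simplifying assumptions: two latent types, a static linear policy per type, and a weighted least-squares M-step standing in for the paper's instrumental-variable quantile regression. The data-generating process and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: two latent types, each with its own continuous
# policy y = a_k + b_k * x + eps (a stand-in for the type-dependent
# optimal choices in a dynamic discrete-continuous model).
n = 5000
x = rng.uniform(0, 1, n)
types = rng.integers(0, 2, n)
a_true, b_true = np.array([1.0, -0.5]), np.array([2.0, 0.5])
y = a_true[types] + b_true[types] * x + 0.3 * rng.standard_normal(n)

# First step (illustrative): EM for a mixture of two linear regressions.
X = np.column_stack([np.ones(n), x])
pi = np.array([0.5, 0.5])                      # type shares
a, b = np.array([0.0, 1.0]), np.array([1.0, 1.0])
sigma = np.array([1.0, 1.0])

for _ in range(200):
    # E-step: posterior type probabilities ("responsibilities").
    dens = np.empty((n, 2))
    for k in range(2):
        resid = y - (a[k] + b[k] * x)
        dens[:, k] = pi[k] * np.exp(-0.5 * (resid / sigma[k]) ** 2) / sigma[k]
    resp = dens / dens.sum(axis=1, keepdims=True)

    # M-step: weighted least squares per type (the paper uses IV
    # quantile regression here; WLS keeps the sketch self-contained).
    for k in range(2):
        w = resp[:, k]
        WX = X * w[:, None]
        coef = np.linalg.solve(WX.T @ X, WX.T @ y)
        a[k], b[k] = coef
        r = y - X @ coef
        sigma[k] = np.sqrt((w * r ** 2).sum() / w.sum())
    pi = resp.mean(axis=0)

print("intercepts:", a, "slopes:", b, "type shares:", pi)
# The second step would then estimate the model's primitives taking
# these recovered type-dependent policies as given.
```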
Multilateral index numbers, such as those used to make international comparisons of prices and income, are fundamental objects in economics. However, these numbers are often challenging to interpret in terms of economic welfare, as a data-dependent 'taste bias' can arise for these indices that distorts measurement of price and income levels and dispersion. To study this problem, I develop a means of appraising indices' economic interpretability using non-parametric bounds on a true cost-of-living index. These bounds improve upon their classical counterparts, define a new class of indices, and can correct existing indices for preference misspecification. In my main application I appraise existing international comparison methods. I find that taste bias generally leads to overestimates (underestimates) of price (income) levels. Superlative indices, such as the Fisher-GEKS index, lie within the permitted bounds more frequently than non-superlative methods, but the mean size of this bias is modest in all examined cases. My results can thus be interpreted as supporting the economic validity of the myriad multilateral methods in practical use.
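For readers unfamiliar with the classical bounds that the paper tightens, the sketch below computes the textbook Laspeyres and Paasche indices, which bracket a true cost-of-living index under standard assumptions, together with the bilateral Fisher index that underlies GEKS-type multilateral comparisons. The two-good data are made up for illustration, and none of this reproduces the paper's improved bounds.

```python
import numpy as np

def laspeyres(p0, p1, q0):
    """Base-period-weighted price index (classical upper bound on the
    cost-of-living index evaluated at base-period utility)."""
    return (p1 @ q0) / (p0 @ q0)

def paasche(p0, p1, q1):
    """Comparison-period-weighted price index (classical lower bound
    evaluated at comparison-period utility)."""
    return (p1 @ q1) / (p0 @ q1)

def fisher(p0, p1, q0, q1):
    """Geometric mean of the two; the bilateral building block of
    GEKS-type multilateral indices."""
    return np.sqrt(laspeyres(p0, p1, q0) * paasche(p0, p1, q1))

# Toy two-good example (hypothetical prices and quantities).
p0, p1 = np.array([1.0, 2.0]), np.array([1.5, 2.2])
q0, q1 = np.array([10.0, 5.0]), np.array([8.0, 6.0])
print(paasche(p0, p1, q1), fisher(p0, p1, q0, q1), laspeyres(p0, p1, q0))
```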
In this paper, we conduct a simulation study to evaluate conventional meta-regression approaches (study-level random, fixed, and mixed effects) against seven specifications, new to meta-regression, that control for joint heterogeneity in location and time (including one that we introduce). We systematically vary heterogeneity levels to assess statistical power, estimator bias, and model robustness for each specification. The assessment focuses on three aspects: performance under joint heterogeneity in location and time, the effectiveness of our proposed specifications incorporating location fixed effects and study-level fixed effects with a time trend, and guidelines for model selection. The results show that, when heterogeneity is present in both dimensions, modeling it jointly improves performance relative to modeling only one type of heterogeneity.
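The snippet below is a toy version of the kind of comparison this study runs: it simulates effect sizes with heterogeneity in both location and time and then contrasts a pooled inverse-variance-weighted meta-regression with one of the proposed specifications (location fixed effects plus a time trend). The data-generating process, sample sizes, and parameter values are all hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Hypothetical primary-study estimates with heterogeneity in both
# location and time.
rows = []
for loc in range(20):                       # 20 locations
    loc_shift = rng.normal(0, 0.3)          # location heterogeneity
    for _ in range(30):                     # 30 estimates per location
        year = int(rng.integers(2000, 2020))
        true_effect = 0.5 + loc_shift + 0.02 * (year - 2000)   # time trend
        se = rng.uniform(0.05, 0.3)
        rows.append(dict(effect=true_effect + rng.normal(0, se),
                         se=se, loc=loc, year=year))
df = pd.DataFrame(rows)
w = 1.0 / df["se"] ** 2                     # inverse-variance weights

# Pooled meta-regression (ignores both dimensions of heterogeneity)...
pooled = smf.wls("effect ~ 1", data=df, weights=w).fit()
# ...versus location fixed effects with a time trend.
fe_trend = smf.wls("effect ~ C(loc) + year", data=df, weights=w).fit()

print("pooled mean effect:", round(pooled.params["Intercept"], 3))
print("estimated time trend:", round(fe_trend.params["year"], 3))
```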
Machine learning models are widely recognized for their strong forecasting performance. To maintain that performance in streaming-data settings, they have to be monitored and frequently re-trained. This can be done with machine learning operations (MLOps) techniques under the supervision of an MLOps engineer. However, in digital platform settings, where the number of data streams is typically large and unstable, standard monitoring becomes either suboptimal or too labor-intensive for the MLOps engineer. As a consequence, companies often fall back on very simple, worse-performing ML models without monitoring. We solve this problem by adopting a design science approach and introducing a new monitoring framework, the Machine Learning Monitoring Agent (MLMA), designed to work at scale for any ML model at reasonable labor cost. A key feature of our framework is test-based automated re-training against a data-adaptive reference loss batch. The MLOps engineer is kept in the loop via key metrics and also acts, proactively or retrospectively, to maintain the performance of the ML model in the production stage. We conduct a large-scale test at a last-mile delivery platform to empirically validate our monitoring framework.
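The abstract does not spell out the MLMA's internals, so the sketch below only illustrates the general idea of test-based automated re-training: compare a recent batch of losses against a data-adaptive reference batch with a one-sided test and trigger re-training when recent losses are significantly worse. The specific test, significance level, and batching rule are assumptions made for illustration, not the framework's actual design.

```python
import numpy as np
from scipy import stats

def should_retrain(reference_losses, recent_losses, alpha=0.01):
    """Illustrative re-training trigger: flag the model when recent
    per-observation losses are statistically worse than a reference
    loss batch (one-sided Mann-Whitney U test)."""
    _, p_value = stats.mannwhitneyu(recent_losses, reference_losses,
                                    alternative="greater")
    return p_value < alpha

# Hypothetical usage inside a monitoring loop for one data stream.
rng = np.random.default_rng(2)
reference = rng.exponential(1.0, size=500)   # losses at deployment time
drifted = rng.exponential(1.4, size=200)     # losses after concept drift
if should_retrain(reference, drifted):
    print("re-train the model and refresh the reference loss batch")
```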
We study a variation of the price competition model à la Bertrand, in which firms must offer menus of contracts that obey monotonicity constraints, e.g., wages that rise with worker productivity to comport with equal pay legislation. While such constraints limit firms' ability to undercut their competitors, we show that Bertrand's classic result still holds: competition drives firm profits to zero and leads to efficient allocations without rationing. Our findings suggest that Bertrand's logic extends to a broader variety of markets, including labor and product markets that are subject to real-world constraints on pricing across workers and products.
A solution concept that is a refinement of Nash equilibria selects for each finite game a nonempty collection of closed and connected subsets of Nash equilibria as solutions. We impose three axioms for such solution concepts. The axiom of backward induction requires each solution to contain a quasi-perfect equilibrium. Two invariance axioms posit that solutions of a game are the same as those of a game obtained by the addition of strategically irrelevant strategies and players. Stability satisfies these axioms; and any solution concept that satisfies them must, for generic extensive-form games, select from among its stable outcomes. A strengthening of the two invariance axioms provides an analogous axiomatization of components of equilibria with a nonzero index.
In science and social science, we often wish to explain why an outcome differs between two populations. For instance, if a jobs program benefits members of one city more than another, is that due to differences in the program participants (the covariates) or in the local labor markets (the outcomes given covariates)? The Kitagawa-Oaxaca-Blinder (KOB) decomposition is a standard tool in econometrics that explains the difference in the mean outcome across two populations. However, the KOB decomposition assumes a linear relationship between covariates and outcomes, while the true relationship may be meaningfully nonlinear. Modern machine learning boasts a variety of nonlinear functional decompositions of the relationship between outcomes and covariates in one population. It seems natural to extend the KOB decomposition using these functional decompositions. We observe that a successful extension should not attribute the differences to covariates -- or, respectively, to outcomes given covariates -- if those are the same in the two populations. Unfortunately, we demonstrate that, even in simple examples, two common decompositions -- functional ANOVA and Accumulated Local Effects -- can attribute differences to outcomes given covariates even when these are identical in the two populations. We characterize when functional ANOVA misattributes, and we give a general property that any discrete decomposition must satisfy to avoid misattribution. We show that if the decomposition is independent of its input distribution, it does not misattribute. We further conjecture that misattribution arises in any reasonable additive decomposition that depends on the distribution of the covariates.
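For reference, the standard two-fold linear KOB decomposition that the paper seeks to generalize can be written in a few lines. The sketch below uses population B's coefficients as the reference and splits the mean gap into a covariates ("explained") part and a coefficients ("outcomes given covariates") part; the function name and the toy data are illustrative.

```python
import numpy as np

def kob_two_fold(X_a, y_a, X_b, y_b):
    """Two-fold Kitagawa-Oaxaca-Blinder decomposition of the gap in
    mean outcomes between populations A and B, with B's coefficients
    as the reference structure."""
    Xa = np.column_stack([np.ones(len(y_a)), X_a])
    Xb = np.column_stack([np.ones(len(y_b)), X_b])
    beta_a = np.linalg.lstsq(Xa, y_a, rcond=None)[0]
    beta_b = np.linalg.lstsq(Xb, y_b, rcond=None)[0]
    xbar_a, xbar_b = Xa.mean(axis=0), Xb.mean(axis=0)
    explained = (xbar_a - xbar_b) @ beta_b       # differences in covariates
    unexplained = xbar_a @ (beta_a - beta_b)     # differences in outcomes given covariates
    return y_a.mean() - y_b.mean(), explained, unexplained

# Toy example with a single covariate (hypothetical data).
rng = np.random.default_rng(3)
X_a = rng.normal(1.0, 1.0, 1000)
y_a = 2.0 + 1.5 * X_a + rng.normal(0, 1, 1000)
X_b = rng.normal(0.0, 1.0, 1000)
y_b = 2.0 + 1.0 * X_b + rng.normal(0, 1, 1000)
gap, explained, unexplained = kob_two_fold(X_a, y_a, X_b, y_b)
print(gap, explained + unexplained)   # the two parts sum to the mean gap
```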