To study the causes of the 2021 Great Resignation, we use text analysis to investigate changes in work- and quit-related posts on Reddit between 2018 and 2021. We find that the evolution of the Reddit discourse resembles the dynamics of the U.S. quit and layoff rates. Furthermore, when the COVID-19 pandemic started, conversations related to working from home, switching jobs, work-related distress, and mental health increased. We distinguish between general work-related and specific quit-related discourse changes using a difference-in-differences method. Our main finding is that mental health and work-related distress topics increased disproportionately among quit-related posts after the onset of the pandemic, likely contributing to the Great Resignation. Some relief came in early-to-mid-2021, when these concerns receded alongside improving labor market conditions. Our study validates the use of forums such as Reddit for studying emerging economic phenomena in real time, complementing traditional labor market surveys and administrative data.
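The difference-in-differences comparison can be sketched on toy numbers. This is a minimal illustration of the method, not the paper's actual estimates: the topic shares below are hypothetical, with quit-related posts as the "treated" group and general work-related posts as the "control" group.

```python
# Toy difference-in-differences on topic shares (all numbers hypothetical).
# "quit" = quit-related posts (treated), "work" = general work-related posts
# (control); "pre"/"post" = before/after the pandemic onset.
shares = {
    ("quit", "pre"): 0.05,   # hypothetical share of posts on mental health
    ("quit", "post"): 0.12,
    ("work", "pre"): 0.04,
    ("work", "post"): 0.07,
}

def did(shares):
    """(treated_post - treated_pre) - (control_post - control_pre)."""
    treated = shares[("quit", "post")] - shares[("quit", "pre")]
    control = shares[("work", "post")] - shares[("work", "pre")]
    return treated - control

print(round(did(shares), 3))  # 0.04: the increase specific to quit-related posts
```

Subtracting the control-group change nets out the general work-related discourse shift, isolating the disproportionate increase among quit-related posts.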

We investigate Gale's important paper published in 1960. This paper presents a candidate demand function that satisfies the weak axiom of revealed preference but is suspected not to be the demand function of any weak order. We examine this paper and first scrutinize what Gale actually proved. We then identify a gap in Gale's argument and show that he failed to establish that this candidate is not a demand function. Next, we present three complete proofs of Gale's claim. First, we construct a proof that would have been available in 1960, using a fact that Gale himself demonstrated. Second, we give a modern and simple proof using Shephard's lemma. Third, we construct a proof along the lines that Gale originally conceived. Our conclusion is as follows: although Gale did not, in 1960, prove that the candidate demand function he constructed is not a demand function, he proved it in substance, and thus the credit for finding a candidate demand function that satisfies the weak axiom but is not a demand function fairly belongs to Gale.
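For reference, the standard textbook statement of the weak axiom for a demand function is the following (this is the usual formulation; whether it matches Gale's exact 1960 setting is an assumption here):

```latex
% Weak axiom of revealed preference (WARP) for a demand function x(p, w):
% if the bundle chosen at prices/wealth (p', w') was affordable at (p, w)
% but was not chosen there, then the bundle chosen at (p, w) must not be
% affordable at (p', w').
\[
x(p, w) \neq x(p', w') \quad\text{and}\quad p \cdot x(p', w') \leq w
\;\Longrightarrow\; p' \cdot x(p, w) > w'.
\]
```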

One challenge in the estimation of financial market agent-based models (FABMs) is to infer reliable insights from numerical simulations validated against only a single observed time series. Ergodicity (besides stationarity) is a strong precondition for any estimation; however, it has not been systematically explored and is often simply presumed. For finite sample lengths and limited computational resources, empirical estimation always takes place in pre-asymptopia. Broken ergodicity must therefore be considered the rule, but it remains largely unclear how to deal with the remaining uncertainty in non-ergodic observables. Here we show how an understanding of the ergodic properties of moment functions can help to improve the estimation of (F)ABMs. We run Monte Carlo experiments and study the convergence behaviour of moment functions of two prototype models. We find infeasibly long convergence times for most moments. Choosing an efficient mix of ensemble size and simulated time length guided our estimation and might help in general.
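The ensemble-size versus time-length trade-off can be illustrated on a toy process. This sketch uses an ergodic AR(1), not one of the paper's prototype models: a fixed simulation budget R * T is split either into one long run or into many short runs, and both allocations estimate the same second moment.

```python
import numpy as np

# Toy illustration (not the paper's models): estimating the second moment of
# an ergodic AR(1) under a fixed simulation budget R * T, split between
# ensemble size R and simulated time length T.
rng = np.random.default_rng(0)

def ar1(phi, sigma, T, rng):
    """Simulate a stationary AR(1): x_t = phi * x_{t-1} + sigma * eps_t."""
    x = np.empty(T)
    x[0] = rng.normal(0.0, sigma / np.sqrt(1 - phi**2))  # stationary start
    for t in range(1, T):
        x[t] = phi * x[t - 1] + sigma * rng.normal()
    return x

phi, sigma = 0.9, 1.0
true_var = sigma**2 / (1 - phi**2)  # stationary second moment, ~5.26

budget = 100_000
estimates = {}
for R, T in [(1, budget), (100, budget // 100)]:
    est = np.mean([np.mean(ar1(phi, sigma, T, rng) ** 2) for _ in range(R)])
    estimates[(R, T)] = est
    print(f"R={R:4d}, T={T:6d}: second-moment estimate = {est:.3f}")
# For an ergodic moment, both allocations approach the same value; for a
# non-ergodic moment, only the ensemble average would remain informative.
```

With broken ergodicity the two allocations diverge, which is why the convergence behaviour of each moment function has to be checked before committing the simulation budget.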

The productivity of a common pool of resources may degrade when overly exploited by a number of selfish investors, a situation known as the tragedy of the commons (TOC). Without regulations, agents optimize the size of their individual investments into the commons by balancing the costs incurred against the returns received. The resulting Nash equilibrium involves a self-consistency loop between individual investment decisions and the state of the commons. As a consequence, several non-trivial properties emerge. For $N$ investing actors we prove rigorously that typical payoffs do not scale as $1/N$, the expected result for cooperating agents, but as $(1/N)^2$. Payoffs are hence functionally reduced, a situation we denote catastrophic poverty. This occurs despite the fact that the cumulative investment remains finite when $N\to\infty$; catastrophic poverty arises instead from an increasingly fine-tuned balance between returns and costs. In addition, we point out that a finite number of oligarchs may be present. Oligarchs are characterized by payoffs that remain finite and do not decrease when $N$ increases. Our results hold for generic classes of models, including convex and moderately concave cost functions. For strongly concave cost functions the Nash equilibrium undergoes a collective reorganization, being characterized instead by entry barriers and sudden-death forced market exits.
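The $(1/N)^2$ scaling can be checked in a minimal special case. The linear-decline model below is an illustrative assumption of this sketch, not the paper's generic class: per-unit returns fall linearly with total investment, which admits a closed-form symmetric Nash equilibrium.

```python
# Minimal commons model (an illustrative assumption, not the paper's general
# class): agent i invests x_i at unit cost c; the per-unit return declines
# linearly with total investment X, so payoff_i = x_i * (1 - X) - c * x_i.
# First-order condition: 1 - X - x_i - c = 0; symmetric Nash: x = (1-c)/(N+1).
c = 0.5

def symmetric_nash(N, c):
    x = (1 - c) / (N + 1)          # individual investment
    X = N * x                      # cumulative investment
    payoff = x * (1 - X) - c * x   # equals (1 - c)^2 / (N + 1)^2
    return x, X, payoff

for N in (10, 100, 1000):
    x, X, payoff = symmetric_nash(N, c)
    print(f"N={N:5d}  total X={X:.3f}  payoff={payoff:.2e}")
# A tenfold increase in N shrinks payoffs roughly a hundredfold (the (1/N)^2
# law), while cumulative investment X stays finite, approaching 1 - c.
```

The fine-tuning is visible in the payoff formula: the per-unit margin $1 - X - c$ itself shrinks like $1/N$, so the payoff is a product of two vanishing factors rather than a fixed share of a finite pie.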

Algorithmic fairness is a new interdisciplinary field of study focused on how to measure whether a process, or algorithm, may unintentionally produce unfair outcomes, as well as whether or how the potential unfairness of such processes can be mitigated. Statistical discrimination describes a set of informational issues that can induce rational (i.e., Bayesian) decision-making to lead to unfair outcomes even in the absence of discriminatory intent. In this article, we provide overviews of these two related literatures and draw connections between them. The comparison illustrates both the conflict between rationality and fairness and the importance of endogeneity (e.g., "rational expectations" and "self-fulfilling prophecies") in defining and pursuing fairness. We argue that, taken in concert, the two traditions suggest the value of considering new fairness notions that explicitly account for how the individual characteristics an algorithm intends to measure may change in response to the algorithm.

"Banning the Box" refers to a policy campaign aimed at prohibiting employers from soliciting applicant information that could be used to statistically discriminate against categories of applicants (in particular, those with criminal records). In this article, we examine how the concealing or revealing of informative features about an applicant's identity affects hiring both directly and, in equilibrium, by possibly changing applicants' incentives to invest in human capital. We show that there exist situations in which an employer and an applicant are in agreement about whether to ban the box. Specifically, depending on the structure of the labor market, banning the box can be (1) Pareto dominant, (2) Pareto dominated, (3) benefit the applicant while harming the employer, or (4) benefit the employer while harming the applicant. Our results have policy implications spanning beyond employment decisions, including the use of credit checks by landlords and standardized tests in college admissions.

We characterize the full classes of M-estimators for semiparametric models of general functionals by formally connecting the theory of consistent loss functions from forecast evaluation with the theory of M-estimation. This novel characterization result opens up the possibility for theoretical research on efficient and equivariant M-estimation and, more generally, it allows existing results on loss functions from the forecast evaluation literature to be leveraged in estimation theory.
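A standard instance of this connection can be sketched numerically. The pinball (quantile) loss is the textbook example of a consistent scoring function from forecast evaluation, and minimizing its sample average is the corresponding M-estimator of a quantile; the data and the crude grid search below are purely illustrative.

```python
import numpy as np

# The pinball loss is a strictly consistent scoring function for the
# alpha-quantile, so minimizing its sample average is an M-estimator
# of that quantile. Toy Gaussian data and a crude grid search.
def pinball(theta, y, alpha):
    """Average pinball (quantile) loss of the constant forecast theta."""
    u = y - theta
    return np.mean(np.where(u >= 0, alpha * u, (alpha - 1) * u))

rng = np.random.default_rng(42)
y = rng.normal(loc=1.0, scale=2.0, size=10_000)
alpha = 0.9

grid = np.linspace(y.min(), y.max(), 2001)
theta_hat = grid[int(np.argmin([pinball(t, y, alpha) for t in grid]))]

print(theta_hat, np.quantile(y, alpha))  # the two values nearly coincide
```

The M-estimate recovers the empirical 0.9-quantile up to grid resolution; other consistent losses for the same functional yield alternative M-estimators, which is what opens the door to efficiency comparisons.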

We study generic inference on identified linear functionals of nonunique nuisances defined as solutions to underidentified conditional moment restrictions. This problem appears in a variety of applications, including nonparametric instrumental variable models, proximal causal inference under unmeasured confounding, and missing-not-at-random data with shadow variables. Although the linear functionals of interest, such as the average treatment effect, are identifiable under suitable conditions, nonuniqueness of the nuisances poses serious challenges to statistical inference, since in this setting common nuisance estimators can be unstable and lack fixed limits. In this paper, we propose penalized minimax estimators for the nuisance functions and show they enable valid inference in this challenging setting. The proposed nuisance estimators can accommodate flexible function classes, and importantly, they can converge to fixed limits determined by the penalization, regardless of whether the nuisances are unique or not. We use the penalized nuisance estimators to form a debiased estimator for the linear functional of interest and prove its asymptotic normality under generic high-level conditions, which yields asymptotically valid confidence intervals.
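The debiasing step can be illustrated in the simplest case, the average treatment effect under unconfoundedness. The sketch below plugs the *true* nuisances into the standard doubly robust (AIPW) estimator on synthetic data; it shows only the generic debiased form, not the paper's penalized minimax nuisance estimation.

```python
import numpy as np

# Debiased (AIPW) estimator of the average treatment effect, with oracle
# nuisances for illustration; in the paper these would be replaced by
# penalized minimax nuisance estimates. Synthetic data, not from the paper.
rng = np.random.default_rng(7)
n = 50_000
X = rng.normal(size=n)
e = 1 / (1 + np.exp(-X))             # propensity score P(A=1 | X)
A = rng.binomial(1, e)
tau = 2.0                            # true average treatment effect
Y = tau * A + X + rng.normal(size=n)

m1 = tau + X                         # outcome regression E[Y | A=1, X]
m0 = X                               # outcome regression E[Y | A=0, X]

# Debiased estimator: plug-in term plus inverse-propensity residual correction.
psi = m1 - m0 + A * (Y - m1) / e - (1 - A) * (Y - m0) / (1 - e)
ate_hat = psi.mean()
se = psi.std(ddof=1) / np.sqrt(n)
print(f"ATE estimate {ate_hat:.3f} +/- {1.96 * se:.3f}")  # interval covers tau
```

The residual-correction terms make the estimator first-order insensitive to nuisance errors, which is what lets penalized nuisance estimates with fixed (but possibly nonunique-nuisance-selecting) limits still deliver asymptotically normal inference.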