A repo trade involves the sale of a security coupled with a contract to repurchase it at a later time. Following the 2008 financial crisis, accounting standards were updated to require repo intermediaries, mostly banks, to increase recorded assets at the time of the first transaction. Concurrently, US bank regulators implemented a supplementary leverage ratio constraint that caps the volume of assets a bank is allowed to record. The interaction of the new accounting rules and bank regulations limits the volume of repo trades that banks can intermediate. To reduce the balance-sheet impact of repo, the SEC has mandated that banks centrally clear all Treasuries trades. Central clearing achieves multilateral netting but shifts counterparty risk onto the clearinghouse, which can distort monitoring incentives and raise trading costs through fees. We present RepoMech, a method that avoids these pitfalls by multilaterally netting repo trades without altering counterparty risk.
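As a generic illustration of the balance-sheet arithmetic behind multilateral netting (not the RepoMech mechanism itself, whose details are not reproduced here), the sketch below nets a small matrix of hypothetical bilateral repo obligations; all figures are made up.

```python
import numpy as np

# Generic illustration of multilateral netting of bilateral repo exposures
# (not the RepoMech mechanism itself): gross[i, j] is what dealer i owes
# dealer j at the unwind leg. Netting each dealer's payables against its
# receivables shrinks the recorded balance-sheet footprint without changing
# any dealer's net position. All figures are hypothetical.
gross = np.array([[  0., 100.,   0.],
                  [ 30.,   0.,  80.],
                  [ 50.,  20.,   0.]])

gross_footprint = gross.sum()                          # assets booked bilaterally: 280.0
net_position = gross.sum(axis=0) - gross.sum(axis=1)   # receivables minus payables
net_footprint = net_position[net_position > 0].sum()   # 20.0 after netting

print("gross obligations booked:", gross_footprint)
print("net positions by dealer: ", net_position)       # [-20.  10.  10.]
print("netted footprint:        ", net_footprint)
```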
To evaluate the effectiveness of a counterfactual policy, it is often necessary to extrapolate treatment effects on compliers to broader populations. This extrapolation relies on exogenous variation in instruments, which is often weak in practice. Limited variation leads to invalid confidence intervals that are typically too short, a problem that classical methods cannot reliably detect; for instance, the F-test may falsely conclude that the instruments are strong. Consequently, I develop inference results that remain valid even with limited variation in the instruments. These results yield asymptotically valid confidence sets for various linear functionals of marginal treatment effects, including LATE, ATE, ATT, and policy-relevant treatment effects, regardless of identification strength. This is the first paper to provide weak-instrument-robust inference results for this class of parameters. Finally, I illustrate my results using data from Agan, Doleac, and Harvey (2023) to analyze counterfactual policies that change prosecutors' leniency and their effects on reducing recidivism.
We study settings in which a researcher has an instrumental variable (IV) and seeks to evaluate the effects of a counterfactual policy that alters treatment assignment, such as a directive encouraging randomly assigned judges to release more defendants. We develop a general and computationally tractable framework for computing sharp bounds on the effects of such policies. Our approach does not require the often tenuous IV monotonicity assumption. Moreover, for an important class of policy exercises, we show that IV monotonicity -- while crucial for a causal interpretation of two-stage least squares -- does not tighten the bounds on the counterfactual policy impact. We analyze the identifying power of alternative restrictions, including the policy invariance assumption used in the marginal treatment effect literature, and develop a relaxation of this assumption. We illustrate our framework using applications to quasi-random assignment of bail judges in New York City and prosecutors in Massachusetts.
We develop an axiomatic theory of Automated Market Makers (AMMs) for local energy sharing markets and analyze the Markov Perfect Equilibrium of the resulting economy as a Mean-Field Game. In this game, heterogeneous prosumers solve a Bellman equation to optimize energy consumption, storage, and exchanges. Our axioms identify a class of mechanisms with linear, Lipschitz-continuous payment functions in which prices decrease with the aggregate supply-to-demand ratio of energy. We prove that batch execution and concentrated liquidity allow standard design conditions from decentralized finance (quasi-concavity, monotonicity, and homotheticity) to be used to construct AMMs that satisfy our axioms. The resulting AMMs are budget-balanced and achieve ex-ante efficiency, in contrast to the strategy-proof, ex-post optimal VCG mechanism. Since the AMM implements a Potential Game, we solve for its equilibrium by first computing the social planner's optimum and then decentralizing the allocation. Numerical experiments using data from the Paris administrative region suggest that the prosumer community can achieve gains from trade of up to 40% relative to the grid-only benchmark.
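For intuition, here is a minimal numerical sketch of a batch-executed payment rule whose per-unit price is linear and Lipschitz and decreases in the aggregate supply-to-demand ratio; the specific linear form and the price bounds p_min and p_max are assumptions of our own for illustration, not the characterization in the paper.

```python
# Illustrative sketch of a batch-executed payment rule for a local
# energy-sharing market: the per-unit price is linear and Lipschitz in the
# aggregate supply-to-demand ratio and decreases as supply becomes abundant.
# The functional form and the price bounds are illustrative assumptions.

def batch_price(total_supply_kwh, total_demand_kwh, p_min=0.05, p_max=0.25):
    """Per-kWh community price, decreasing in the supply/demand ratio,
    bounded by an assumed feed-in tariff (p_min) and retail tariff (p_max)."""
    if total_demand_kwh <= 0:
        return p_min
    ratio = total_supply_kwh / total_demand_kwh
    # Linear interpolation: scarce supply (ratio = 0) prices at retail,
    # balanced or surplus supply (ratio >= 1) prices at the feed-in floor.
    return p_max - (p_max - p_min) * min(ratio, 1.0)


def settle_batch(offers_kwh, bids_kwh):
    """Budget-balanced settlement: sellers receive and buyers pay the same
    per-unit price on the traded quantity, so payments net to zero."""
    supply, demand = sum(offers_kwh), sum(bids_kwh)
    price = batch_price(supply, demand)
    traded = min(supply, demand)
    return price, traded


if __name__ == "__main__":
    price, traded = settle_batch(offers_kwh=[3.0, 2.5], bids_kwh=[4.0, 3.0])
    print(f"clearing price: {price:.3f} per kWh, traded: {traded:.1f} kWh")
```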
How should nations price carbon? This paper examines how the treatment of global inequality, captured by regional welfare weights, affects optimal carbon prices. I develop theory to identify the conditions under which accounting for differences in marginal utilities of consumption across countries leads to more stringent global climate policy in the absence of international transfers. I further establish a connection between the optimal uniform carbon prices implied by different welfare weights and heterogeneous regional preferences over climate policy stringency. In calibrated simulations, I find that accounting for global inequality reduces optimal global emissions relative to an inequality-insensitive benchmark. This holds both when carbon prices are regionally differentiated, with emissions 21% lower, and when they are constrained to be globally uniform, with the uniform carbon price 15% higher.
This paper introduces a novel framework for analysing equilibrium in structured production systems with a static social division of labour, distinguishing between consumption goods traded in competitive markets and intermediate goods exchanged through bilateral relationships. We develop the concept of viability -- the requirement that all producers earn positive incomes -- as a foundational equilibrium prerequisite. Our main theoretical contribution establishes that acyclic production systems -- those without circular conversion processes among goods -- are always viable, a condition that implies coherence. We characterise completely viable systems through input restrictions, demonstrating that prohibiting consumption goods from serving as inputs to other consumption goods is necessary for viable prices to exist for all consumption-good price vectors. The analysis reveals fundamental relationships between the architecture of production systems and economic sustainability. The framework bridges Leontief-Sraffa production theory with modern network economics while capturing institutional realities of contemporary production systems. It also contributes to the literature on the existence of a positive output price system and the Hawkins-Simon condition.
Why are interventions with weak evidence still adopted? We study charitable incentives for physical activity in Japan using three linked methods: a randomized field experiment (N=808), a stakeholder belief survey of local government officials and private-sector employees (N=2,400), and a conjoint experiment on policy choice. Financial incentives increase daily steps by about 1,000, whereas charitable incentives deliver a precisely estimated null. Nonetheless, stakeholders greatly overpredict the effects of charitable incentives on walking, participation, and prosociality. Conjoint choices show that policymakers value step gains alongside other outcomes, and that these multiple objectives shape policy choice. Adoption thus reflects multidimensional beliefs and objectives, highlighting policy selection as a scaling challenge.
In light of the recent convergence between Agentic AI and our field of Algorithmization, this paper seeks to restore conceptual clarity and provide a structured analytical framework for an increasingly fragmented discourse. First, (a) it examines the contemporary landscape and proposes precise definitions for the key notions involved, ranging from intelligence to Agentic AI. Second, (b) it reviews our prior body of work to contextualize the evolution of methodologies and technological advances developed over the past decade, highlighting their interdependencies and cumulative trajectory. Third, (c) by distinguishing the Machine and the Learning efforts within the field of Machine Learning, (d) it introduces the first Machine in Machine Learning (M1) as the underlying platform enabling today's LLM-based Agentic AI, conceptualized as an extension of B2C information-retrieval user experiences now being repurposed for B2B transformation. Building on this distinction, (e) the white paper develops the notion of the second Machine in Machine Learning (M2) as the architectural prerequisite for holistic, production-grade B2B transformation, characterizing it as Strategies-based Agentic AI and grounding its definition in the structural barriers to entry that such systems must overcome to be operationally viable. Further, (f) it offers conceptual and technical insight into what appears to be the first fully realized implementation of an M2. Finally, drawing on the demonstrated track record of the previous two decades of professional and academic experience in developing the foundational architectures of Algorithmization, (g) it outlines a forward-looking research and transformation agenda for the coming two decades.
Consumer regret is a widespread post-purchase emotion that significantly impacts satisfaction, product returns, complaint behavior, and customer loyalty. Despite its prevalence, there is limited understanding of why certain consumers experience regret more frequently as a chronic aspect of their engagement in the marketplace. This study explores the antecedents of consumer regret frequency by integrating decision agency, status signaling motivations, and online shopping preferences into a cohesive framework. By analyzing survey data (n=338), we assess whether consumers' perceived agency and decision-making orientation correlate with the frequency of regret, and whether tendencies towards status-related consumption and preferences for online shopping environments exacerbate regret through mechanisms such as increased social comparison, expanded choice sets, and continuous exposure to alternative offers. The findings reveal that regret frequency is significantly linked to individual differences in decision-related orientations and status signaling, with a preference for online shopping further contributing to regret-prone consumption behaviors. These results extend the scope of regret and cognitive dissonance research beyond isolated decision episodes by emphasizing regret frequency as a persistent consumer outcome. From a managerial standpoint, the findings suggest that retailers can alleviate regret-driven dissatisfaction by enhancing decision support, minimizing choice overload, and developing post-purchase reassurance strategies tailored to regret-prone segments.
Large language models (LLMs) change how consumers acquire information online; their bots also crawl news publishers' websites for training data and to answer consumer queries; and they provide tools that can lower the cost of content creation. These changes lead to predictions of adverse impacts on news publishers in the form of lowered consumer demand, reduced demand for newsroom employees, and an increase in news "slop." Consequently, some publishers strategically responded by blocking LLM access to their websites using the robots.txt file standard. Using high-frequency granular data, we document four effects related to the predicted shifts in news publishing following the introduction of generative AI (GenAI). First, we find a consistent and moderate decline in traffic to news publishers after August 2024. Second, using a difference-in-differences approach, we find that blocking GenAI bots can have adverse effects on large publishers, reducing total website traffic by 23% and real consumer traffic by 14% compared to not blocking. Third, on the hiring side, we do not find evidence that LLMs are replacing editorial or content-production jobs yet; the share of new editorial and content-production job listings increases over time. Fourth, regarding content production, we find no evidence that large publishers increased text volume; instead, they significantly increased rich content and adopted more advertising and targeting technologies. Together, these findings provide early evidence of some unforeseen impacts of the introduction of LLMs on news production and consumption.
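For concreteness, the blocking decision is expressed through robots.txt directives. The sketch below, using only the Python standard library, shows how such directives gate a crawler by user agent; the agent names and rules are illustrative and are not drawn from any publisher in the data.

```python
# Minimal sketch of how a publisher's robots.txt directives gate GenAI
# crawlers, using the Python standard library. The user-agent names and
# rules below are illustrative, not taken from any specific publisher.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(ROBOTS_TXT)

# A blocked GenAI crawler vs. an ordinary crawler on the same article URL.
print(parser.can_fetch("GPTBot", "/2024/08/some-article"))     # False
print(parser.can_fetch("Googlebot", "/2024/08/some-article"))  # True
```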
We consider the extent to which we can learn from a completely randomized experiment whether everyone has treatment effects that are weakly of the same sign, a condition we call monotonicity. From a classical sampling perspective, it is well-known that monotonicity is untestable. By contrast, we show from the design-based perspective -- in which the units in the population are fixed and only treatment assignment is stochastic -- that the distribution of treatment effects in the finite population (and hence whether monotonicity holds) is formally identified. We argue, however, that the usual definition of identification is unnatural in the design-based setting because it imagines knowing the distribution of outcomes over different treatment assignments for the same units. We thus evaluate the informativeness of the data by the extent to which it enables frequentist testing and Bayesian updating. We show that frequentist tests can have nontrivial power against some alternatives, but power is generically limited. Likewise, we show that there exist (non-degenerate) Bayesian priors that never update about whether monotonicity holds. We conclude that, despite the formal identification result, the ability to learn about monotonicity from data in practice is severely limited.
Estimating the means of multiple binomial outcomes is a common problem in many applications -- assessing intergenerational mobility of census tracts, estimating the prevalence of infectious diseases across countries, and measuring click-through rates for different demographic groups. The standard approach is to report the plain average of each outcome. Despite its simplicity, these estimates are noisy when the sample sizes or mean parameters are small. In contrast, Empirical Bayes (EB) methods can improve average accuracy by borrowing information across tasks. However, EB methods require a Bayesian model in which the parameters are sampled from a prior distribution that, unlike in the commonly studied Gaussian case, is unidentified due to the discreteness of binomial measurements. Even if the prior distribution is known, computation is difficult when the sample sizes are heterogeneous, as there is no simple joint conjugate prior for the sample size and mean parameter. In this paper, we consider the compound decision framework, which treats the sample sizes and mean parameters as fixed quantities. We develop an approximate Stein's Unbiased Risk Estimator (SURE) for the average mean squared error, applicable to any given class of estimators. For a class of machine-learning-assisted linear shrinkage estimators, we establish asymptotic optimality, regret bounds, and valid inference. Unlike existing work, we work with the binomials directly without resorting to Gaussian approximations, which allows us to handle small sample sizes and/or mean parameters in both one-sample and two-sample settings. We demonstrate our approach using three datasets on firm discrimination, education outcomes, and innovation rates.
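As a simple point of reference (not the paper's machine-learning-assisted estimator or its SURE tuning), the sketch below applies a generic precision-weighted linear shrinkage to binomial proportions with heterogeneous sample sizes.

```python
import numpy as np

# Illustrative linear shrinkage for binomial means with heterogeneous sample
# sizes: each raw proportion x_i/n_i is pulled toward the pooled mean with a
# weight based on its estimated sampling variance. This is a generic sketch,
# not the estimator or the SURE criterion developed in the paper.

def shrink_binomial_means(x, n):
    x, n = np.asarray(x, float), np.asarray(n, float)
    p_hat = x / n                              # raw (noisy) estimates
    p_bar = x.sum() / n.sum()                  # pooled shrinkage target
    # Method-of-moments split of total variance into signal and noise parts.
    sampling_var = p_hat * (1 - p_hat) / n
    signal_var = max(np.var(p_hat) - sampling_var.mean(), 1e-8)
    weight = signal_var / (signal_var + sampling_var)   # per-unit, in (0, 1]
    return weight * p_hat + (1 - weight) * p_bar


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_p = rng.beta(2, 8, size=200)
    n = rng.integers(5, 30, size=200)
    x = rng.binomial(n, true_p)
    shrunk = shrink_binomial_means(x, n)
    print("MSE raw:   ", np.mean((x / n - true_p) ** 2))
    print("MSE shrunk:", np.mean((shrunk - true_p) ** 2))
```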
We introduce a Modewise Additive Factor Model (MAFM) for matrix-valued time series that captures row-specific and column-specific latent effects through an additive structure, offering greater flexibility than multiplicative frameworks such as Tucker and CP factor models. In MAFM, each observation decomposes into a row-factor component, a column-factor component, and noise, allowing distinct sources of variation along different modes to be modeled separately. We develop a computationally efficient two-stage estimation procedure: Modewise Inner-product Eigendecomposition (MINE) for initialization, followed by Complement-Projected Alternating Subspace Estimation (COMPAS) for iterative refinement. The key methodological innovation is that orthogonal complement projections completely eliminate cross-modal interference when estimating each loading space. We establish convergence rates for the estimated factor loading matrices under suitable conditions. We further derive asymptotic distributions for the loading matrix estimators and develop consistent covariance estimators, yielding a data-driven inference framework that enables confidence interval construction and hypothesis testing. As a technical contribution of independent interest, we establish matrix Bernstein inequalities for quadratic forms of dependent matrix time series. Numerical experiments on synthetic and real data demonstrate the advantages of the proposed method over existing approaches.
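In notation of our own choosing (the paper's exact formulation may differ), the additive structure can be sketched as

\[
Y_t \;=\; R\,F_t \;+\; G_t\,C^{\top} \;+\; E_t, \qquad t = 1,\dots,T,
\]

where $Y_t$ is the $p \times q$ observation, $R$ ($p \times k$) and $C$ ($q \times r$) are row and column loading matrices, $F_t$ and $G_t$ are latent factor matrices driving row-wise and column-wise variation, and $E_t$ is noise; a multiplicative Tucker-type matrix factor model would instead take the form $R\,F_t\,C^{\top} + E_t$.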
This paper surveys the literature on theories of discrimination, focusing mainly on new contributions. Recent theories expand on the traditional taste-based and statistical discrimination frameworks by considering specific features of learning and signaling environments, often using novel information- and mechanism-design language; analyzing learning and decision making by algorithms; and introducing agents with behavioral biases and misspecified beliefs. An online appendix attempts to narrow the gap between the economic perspective on ``theories of discrimination'' and the broader study of discrimination in the social science literature by identifying a class of models of discriminatory institutions, made up of theories of discriminatory social norms and discriminatory institutional design.
We develop empirical models that efficiently process large amounts of unstructured product data (text, images, prices, quantities) to produce accurate hedonic price estimates and derived indices. To achieve this, we generate abstract product attributes (or ``features'') from descriptions and images using deep neural networks. These attributes are then used to estimate the hedonic price function. To demonstrate the effectiveness of this approach, we apply the models to Amazon's data for first-party apparel sales and estimate hedonic prices. The resulting models have very high out-of-sample predictive accuracy, with $R^2$ ranging from $80\%$ to $90\%$. Finally, we construct the AI-based hedonic Fisher price index, chained at the year-over-year frequency, and contrast it with the CPI and other electronic indices.
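The pipeline can be sketched generically as follows: embed unstructured descriptions into numeric features, fit a flexible regression for (log) price, and evaluate out-of-sample $R^2$. The TF-IDF features, ridge regression, and toy catalogue below are stand-ins for the authors' deep neural embeddings and Amazon data.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Generic sketch of the hedonic pipeline: unstructured product text -> numeric
# attributes -> regression of log price -> out-of-sample R^2. TF-IDF + ridge
# and the toy data below are stand-ins for the deep neural embeddings of text
# and images and the Amazon apparel data used in the paper.
rng = np.random.default_rng(0)
descriptions = ["slim fit cotton t-shirt", "wool blend winter coat",
                "running shoes with breathable mesh", "classic denim jacket"] * 50
log_prices = np.log([12.0, 89.0, 55.0, 45.0] * 50) + rng.normal(0, 0.1, 200)

X = TfidfVectorizer().fit_transform(descriptions)       # product "attributes"
X_train, X_test, y_train, y_test = train_test_split(X, log_prices, random_state=0)

hedonic_fn = RidgeCV(alphas=np.logspace(-3, 3, 13)).fit(X_train, y_train)
print("out-of-sample R^2:", round(r2_score(y_test, hedonic_fn.predict(X_test)), 3))
```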
We study social learning in which agents weight neighbors' opinions differently based on their degrees, capturing situations in which agents place more trust in well-connected individuals or, conversely, discount their influence. We derive asymptotic properties of learning outcomes in large stochastic networks and analyze how the weighting rule affects societal wisdom and convergence speed. We find that assigning greater weight to higher-degree neighbors harms wisdom but has a non-monotonic effect on convergence speed, depending on the diversity of views within high- and low-degree groups, highlighting a potential trade-off between convergence speed and wisdom.
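A minimal simulation of the weighting rule, under assumptions of our own: each agent averages neighbors' opinions with weights proportional to the neighbor's degree raised to a power alpha, so alpha > 0 favors well-connected neighbors and alpha < 0 discounts them. This is illustrative only; the paper studies asymptotics in large stochastic networks.

```python
import numpy as np

# Degree-weighted DeGroot-style updating: the weight agent i places on
# neighbor j is proportional to d_j ** alpha, then row-normalized.
# Illustrative sketch, not the paper's model.

def degree_weighted_updating(adj, initial_opinions, alpha, n_rounds=50):
    deg = adj.sum(axis=1)
    W = adj * deg[None, :] ** alpha        # weight on neighbor j: d_j ** alpha
    W = W / W.sum(axis=1, keepdims=True)   # row-stochastic updating matrix
    x = initial_opinions.copy()
    for _ in range(n_rounds):
        x = W @ x
    return x


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 200
    adj = (rng.random((n, n)) < 0.05).astype(float)
    adj = np.maximum(adj, adj.T)           # undirected random graph
    np.fill_diagonal(adj, 1.0)             # keep own opinion in the average
    truth = 0.0
    signals = truth + rng.normal(0, 1, n)  # noisy initial views of the truth
    for alpha in (-1.0, 0.0, 1.0):
        consensus = degree_weighted_updating(adj, signals, alpha).mean()
        print(f"alpha={alpha:+.0f}: distance of consensus from truth "
              f"{abs(consensus - truth):.3f}")
```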
We characterize single-item auction formats that are shill-proof in the sense that a profit-maximizing seller has no incentive to submit shill bids. We distinguish between strong shill-proofness, in which a seller with full knowledge of bidders' valuations can never profit from shilling, and weak shill-proofness, which requires only that the expected equilibrium profit from shilling is non-positive. The Dutch auction (with a suitable reserve) is the unique (revenue-)optimal and strongly shill-proof auction. Any deterministic auction can satisfy at most two of the three properties in the set {static, strategy-proof, weakly shill-proof}. Our main results extend to settings with affiliated and interdependent values.
We compute the lattice operations for the (pairwise) stable set in many-to-many matching markets where only path-independence on agents' choice functions is imposed. To do this, we construct Tarski operators defined on the lattices of worker-quasi-stable and firm-quasi-stable matchings. These operators resemble lay-off and vacancy chain dynamics, respectively.
Standard optimal growth models implicitly impose a ``perpetual existence'' constraint, which can ethically justify infinite misery in stagnant economies. This paper investigates the optimal longevity of a dynasty within a Critical-Level Utilitarian (CLU) framework. By treating the planning horizon as an endogenous choice variable, we establish a structural isomorphism between static population ethics and dynamic growth theory. Our analysis derives closed-form solutions for optimal consumption and longevity in a roundabout production economy. We show that under low productivity, a finite horizon is structurally optimal to avoid the creation of lives not worth living. This result suggests that the termination of a dynasty can be interpreted not as a failure of sustainability, but as an ``altruistic termination'' to prevent intergenerational suffering. We also highlight an ethical asymmetry: while a finite horizon is optimal for declining economies, growing economies under intergenerational equity demand the ultimate sacrifice from the current generation.
We examine whether gender norms - proxied by the outcome of Switzerland's 1981 public referendum on constitutional gender equality - continue to shape local female startup activity today, despite substantial population changes over the past four decades. Using startup data for all Swiss municipalities from 2016 to 2023, we find that municipalities that historically expressed stronger support for gender equality have significantly higher present-day women-to-men startup ratios. The estimated elasticity of this ratio with respect to the share of "yes" votes in the 1981 referendum is 0.165. This finding is robust to controlling for a subsequent referendum on gender roles, a rich set of municipality-specific characteristics, and contemporary policy measures. The relationship between historical voting outcomes and current women's entrepreneurship is stronger in municipalities with greater population stability - measured by the share of residents born locally - and in municipalities where residents are less likely to report a religious affiliation. While childcare spending is not statistically related to startup rates on its own, it is positively associated with the women-to-men startup ratio when interacted with historical gender norms, consistent with both formal and informal support mechanisms jointly shaping women's entrepreneurial activity.
This paper introduces a new framework for quantile regression with multivariate outcomes, termed multivariate quantile regression (MQR), based on the multivariate distribution function. In contrast to existing approaches -- such as directional quantiles, vector quantile regression, or copula-based methods -- MQR defines quantiles through the conditional probability structure of the joint conditional distribution function. The method constructs multivariate quantile curves using sequential univariate quantile regressions derived from conditioning mechanisms, allowing for an intuitive interpretation and flexible estimation of marginal effects. The paper develops the theoretical foundations of MQR, including asymptotic properties of the estimators. In simulation exercises, the estimator demonstrates robust finite-sample performance across different dependence structures. As an empirical application, the MQR framework is applied to the analysis of exchange rate pass-through in Argentina from 2004 to 2024.
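In the bivariate case, and in notation of our own (the paper's construction may differ in its details), the sequential conditioning can be read as

\[
q_1(\tau_1 \mid x) \;=\; F^{-1}_{Y_1 \mid X}(\tau_1 \mid x), \qquad
q_2(\tau_1, \tau_2 \mid x) \;=\; F^{-1}_{Y_2 \mid Y_1, X}\bigl(\tau_2 \mid Y_1 = q_1(\tau_1 \mid x),\, x\bigr),
\]

so each coordinate of the multivariate quantile curve is obtained from a univariate quantile regression that conditions on the fitted quantiles of the preceding coordinates.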
In this paper we consider two generalizations of Lancaster's (Review of Economic Studies, 2002) Modified ML estimator (MMLE) for the panel AR(1) model with fixed effects, arbitrary initial conditions, and possibly covariates, when the time dimension, T, is fixed. When the autoregressive parameter rho=1, the limiting modified profile log-likelihood function for this model has a stationary point of inflection and rho is first-order underidentified but second-order identified. We show that, unlike the Random Effects and Transformed MLEs for this type of model, the generalized MMLEs are uniquely defined in finite samples w.p.1. for any value of |rho| <= 1. When rho=1, the rate of convergence of the MMLEs is N^{1/4}, where N is the cross-sectional dimension of the panel. We derive the limiting distributions of the MMLEs when rho=1; they are generally asymmetric. We also show that Quasi LM tests of hypotheses that include a restriction on rho, based on the modified profile log-likelihood function and its expected rather than observed Hessian, as well as confidence regions obtained by inverting these tests, have correct asymptotic size in a uniform sense when |rho| <= 1. Finally, we investigate the finite sample properties of the MMLEs and the QLM test in a Monte Carlo study.
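For reference, and with covariates omitted, the baseline model can be written in our notation as

\[
y_{it} \;=\; \rho\, y_{i,t-1} + \eta_i + \varepsilon_{it}, \qquad i = 1,\dots,N,\ \ t = 1,\dots,T,
\]

with T fixed and N large, individual fixed effects $\eta_i$, and arbitrary initial conditions $y_{i0}$; broadly speaking, Lancaster's modification adjusts the profile log-likelihood in $\rho$ to remove the first-order incidental-parameter bias that arises from profiling out the $\eta_i$.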
How does targeted advertising influence electoral outcomes? This paper presents a one-dimensional spatial model of voting in which a privately informed challenger persuades voters to support him over the status quo. I show that targeted advertising enables the challenger to persuade voters with opposing preferences and swing elections decided by such voters; under simple majority, the challenger can defeat the status quo even when it is located at the median voter's bliss point. Ex-ante commitment power is unnecessary -- the challenger succeeds by strategically revealing different pieces of verifiable information to different voters. Publicizing all political ads would mitigate the negative effects of targeted advertising and help voters collectively make the right choice.
Some well-known solutions for cooperative games with transferable utility (TU-games), such as the Banzhaf value, the Myerson value, and the Aumann-Dreze value, fail to satisfy efficiency. Although these solutions have desirable normative properties, their inefficiency motivates the search for a systematic method to restore efficiency while preserving the underlying normative structure. This paper proposes the efficient extension operator as a general approach: it extends any underlying solution to an efficient one. We introduce novel axioms for such operators and characterize the egalitarian surplus sharing method and the proportional sharing method in a unified manner. As applications, we demonstrate the generality of our method by developing an efficient-fair extension of solutions for TU-games with communication networks, as well as a variant for TU-games with coalition structures.
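As a concrete illustration in our notation (the paper's axiomatic characterization is more general), the two sharing methods distribute the efficiency gap of an underlying solution $\varphi$ as

\[
\psi^{\mathrm{ES}}_i(v) \;=\; \varphi_i(v) + \frac{1}{n}\Bigl(v(N) - \sum_{j \in N} \varphi_j(v)\Bigr), \qquad
\psi^{\mathrm{P}}_i(v) \;=\; \frac{\varphi_i(v)}{\sum_{j \in N} \varphi_j(v)}\, v(N) \quad \Bigl(\text{when } \textstyle\sum_{j \in N} \varphi_j(v) \neq 0\Bigr),
\]

both of which sum to $v(N)$ and hence restore efficiency while retaining the structure of the underlying solution.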
The growth of large-scale AI systems is increasingly constrained by infrastructure limits: power availability, thermal and water constraints, interconnect scaling, memory pressure, data-pipeline throughput, and rapidly escalating lifecycle cost. Across hyperscale clusters these constraints interact, yet the metrics used to track them remain fragmented. Existing metrics, ranging from facility measures (PUE) and rack power density to network metrics (all-reduce latency), data-pipeline measures, and financial metrics (TCO series), each capture only their own domain and provide no integrated view of how physical, computational, and economic constraints interact. This fragmentation obscures the structural relationships among energy, computation, and cost, preventing coherent optimization across these domains and hiding how bottlenecks emerge, propagate, and jointly determine the efficiency frontier of AI infrastructure. This paper develops an integrated framework that unifies these disparate metrics through a three-domain semantic classification and a six-layer architectural decomposition, producing a 6x3 taxonomy that maps how metrics and the constraints they capture propagate across the AI infrastructure stack. The taxonomy is grounded in a systematic review and meta-analysis of metrics with economic and financial relevance, identifying the most widely used measures, their research intensity, and their cross-domain interdependencies. Building on this evidence base, the Metric Propagation Graph (MPG) formalizes cross-layer dependencies, enabling system-wide interpretation, composite-metric construction, and multi-objective optimization of energy, carbon, and cost. The framework offers a coherent foundation for benchmarking, cluster design, capacity planning, and lifecycle economic analysis by linking physical operations, computational efficiency, and cost outcomes within a unified analytic structure.
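A toy sketch of the idea follows; the metric names, domain tags, and edges are illustrative assumptions, not the taxonomy or the MPG defined in the paper.

```python
import networkx as nx

# Toy sketch of a metric propagation graph: nodes are metrics tagged with a
# domain (physical, computational, economic); directed edges record
# "feeds into" dependencies across layers. Metrics and edges are illustrative
# assumptions only.
mpg = nx.DiGraph()
for metric, domain in [("rack_power_density", "physical"), ("PUE", "physical"),
                       ("all_reduce_latency", "computational"),
                       ("tokens_per_joule", "computational"),
                       ("energy_cost", "economic"), ("TCO", "economic")]:
    mpg.add_node(metric, domain=domain)

mpg.add_edges_from([("rack_power_density", "PUE"), ("PUE", "energy_cost"),
                    ("all_reduce_latency", "tokens_per_joule"),
                    ("tokens_per_joule", "energy_cost"),
                    ("energy_cost", "TCO")])

# Changes propagate downstream in dependency order.
print(list(nx.topological_sort(mpg)))
print(sorted(nx.descendants(mpg, "PUE")))   # metrics affected by a PUE change
```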
Despite their evolution from early copper-token schemes to sophisticated digital solutions, loyalty programs remain predominantly closed ecosystems, with brands retaining full control over all components. Coalition loyalty programs emerged to enable cross-brand interoperability, but approximately 60% fail within 10 years despite theoretical advantages rooted in network economics. This paper demonstrates that coalition failures stem from fundamental architectural limitations of centralized operator models rather than from operational deficiencies, and argues further that neither closed nor coalition systems can scale in intelligence-driven paradigms where AI agents mediate commerce and demand trustless, protocol-based coordination that existing architectures cannot provide. We propose a hybrid framework in which brands maintain sovereign control over their programs while enabling cross-brand interoperability through trustless exchange mechanisms. Our framework preserves the advantages of closed systems while enabling the benefits of open systems, without the structural problems that doom traditional coalitions. We derive a mathematical pricing model that accounts for empirically validated market factors while enabling fair value exchange across interoperable reward systems.