New articles on Economics


[1] 2604.13150

Unveiling the Nexus Between Economic Complexity and Environmental Sustainability: Evidence from BRICS-T Countries

This study analyses the impact of economic complexity on environmental performance in BRICS-T countries. The analysis uses annual data for the period 1999-2021, the Durbin-Hausman cointegration test, and the Augmented Mean Group (AMG) estimator; the robustness of the panel AMG results is checked with the CCEMG and CS-ARDL methods. The results indicate that economic complexity has a positive impact on environmental performance: a 1% increase in the economic complexity index raises environmental performance in BRICS-T countries by between 0.020% and 1.243%. Economic growth, energy intensity, and population density, by contrast, have a negative impact on environmental performance, while renewable energy use contributes positively.


[2] 2604.13188

Is Productivity Advantage of Cities Really Down To Mean and Variance?

Firms in denser areas are more productive, a pattern attributed to agglomeration economies and firm selection. To disentangle these two channels, the popular approach of Combes et al. (2012, ECTA) critically assumes that total factor productivity (TFP) distributions in denser and less dense areas are identical up to mean, variance, and left-tail truncation. We empirically validate this assumption using Spanish administrative firm-level data and recent econometric methods adapted to noisy TFP estimates. We find that TFP distributions are indeed statistically identical up to these parameters, validating the use of such productivity decompositions. Furthermore, mean and variance alone suffice to capture the differences in all sectors. Accordingly, the productivity advantage of cities may be entirely due to agglomeration rather than stronger selection, suggesting that policymakers should focus on policies targeting agglomeration. Finally, our approach extends to related contexts, such as differences in worker skill distributions.


[3] 2604.13224

Micro and Macro Perspectives on Production-Based Markups

We review the "production approach" to estimating markups, the ratio of price to marginal cost. The approach is uniquely scalable: it requires no model of consumer demand or market structure and applies broadly across firms, industries, and time. Our organizing insight is that the production-based markup is a residual. Like the Solow residual, it is clean in theory but potentially contaminated by misspecification and mismeasurement. This framing helps explain why small differences in implementation can produce starkly different results from the same data. In some cases, markups have risen sharply. In others, they have not. Despite the disagreements in the literature, the importance of understanding and measuring market power cannot be overstated. We provide conceptual rationales for this disagreement, offer practical guidance on data and estimation, and call for greater transparency about how much of the variation attributed to markups may instead reflect technology.


[4] 2604.13399

Root-$n$ Asymptotically Normal Maximum Score Estimation

The maximum score method (Manski, 1975, 1985) is a powerful approach for binary choice models, yet it is known to face both practical and theoretical challenges. In particular, the estimator converges at a slower-than-root-$n$ rate to a nonstandard limiting distribution. We investigate conditions under which strictly concave surrogate score functions can be employed to achieve identification through a smooth criterion function. This criterion enables root-$n$ convergence to a normal limiting distribution. While the conditions that guarantee these desired properties are nontrivial, we characterize them in terms of primitive conditions. Extensive simulation studies support the root-$n$ convergence rate, the asymptotic normality, and the validity of standard inference methods.
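
As a rough illustration of the surrogate idea (not the paper's specific construction), the sketch below estimates a binary-choice index by minimizing a smooth, strictly convex surrogate criterion, here the logistic negative log-likelihood, and then imposes the scale normalization familiar from maximum score. The data-generating design and the choice of logistic surrogate are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 2))
beta0 = np.array([1.0, -0.5])                   # true index coefficients
y = (X @ beta0 + rng.logistic(size=n) > 0).astype(float)

def neg_surrogate(b):
    """Negative logistic log-likelihood: one smooth surrogate for the
    (discontinuous) maximum score objective."""
    z = np.where(y == 1, X @ b, -(X @ b))
    return np.sum(np.log1p(np.exp(-z)))

b_hat = minimize(neg_surrogate, np.zeros(2), method="BFGS").x
b_hat /= np.abs(b_hat[0])                       # maximum-score-style scale norm
print("normalized estimate:", b_hat.round(3), "  truth:", beta0)
```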


[5] 2604.13545

Waiting for Help: Timely Access to Psychological Support for Young Adults Exposed to Parental Substance Misuse

Access to mental health care is often rationed through waiting lists, yet there is limited causal evidence on the consequences of delayed access. We study whether eliminating waiting time for psychological support improves outcomes for young adults who grew up with parental substance misuse. Using a randomized waitlist-controlled trial in Denmark combined with survey and administrative data, we find that immediate access leads to sizable short-run improvements in psychological health. These gains persist three to four years after randomization, even after both groups have received the intervention. By contrast, we find limited evidence of large average effects on broader health or labor market outcomes. Our results highlight the importance of treatment timing in capacity-constrained settings.


[6] 2604.13597

Daycare Matching with Siblings: Social Implementation and Welfare Evaluation

In centralized assignment problems, agents may have preferences over joint rather than individual assignments, such as couples in residency matching or siblings in school choice and daycare. Standard preference estimation methods typically ignore such complementarities. This paper develops an empirical framework that explicitly incorporates them. Using data from daycare assignment in a municipality in Japan, we estimate a model in which families incur both additional commuting distance and a fixed non-distance disutility when siblings are assigned to different facilities. We find that split assignment generates a large disutility, equivalent to more than twice the average commuting distance. We then simulate counterfactual assignment policies that vary the strength of sibling priority and evaluate welfare. The sibling priority reform that we designed and that was implemented in 2024 increases welfare by 6.4% while reducing inequality in assignment rates across sibling groups; models that ignore sibling complementarities substantially understate these gains. At the same time, we uncover a clear efficiency-equity tradeoff: along the frontier, increasing mean welfare by 100 meters is associated with an increase in inequality of about 1.7 percentage points, and the welfare-maximizing policy reverses much of the reform's reduction in inequality, largely through the displacement of households without siblings.


[7] 2604.13603

On the Design of Stochastic Electricity Auctions

Electricity is typically traded in day-ahead auctions because many power system decisions, such as unit commitment, must be made in advance. However, when wind and solar generators sell power one day ahead, they face uncertainty about their actual production. In current day-ahead auctions, this uncertainty cannot be directly communicated, leading to inefficient use of renewable energy and suboptimal system decisions. We show how this problem can be addressed using the concept of equilibrium under uncertainty from microeconomic theory. In particular, we demonstrate that electricity contracts should be conditioned not only on the time and location of delivery, but also on the state of the world (e.g., whether it will be windy or calm). This requires a precise definition of the state of the world. Since there are infinitely many possible definitions, criteria are needed to select among them. We develop such criteria and show that the resulting states correspond to solutions of an optimal partitioning problem. Finally, we illustrate how these states can be computed and interpreted using a case study of offshore wind farms in the European North Sea.
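
One way to picture the partitioning step: compress continuous wind scenarios into a few discrete states of the world. The toy sketch below uses 1-D k-means, which minimizes within-state variance, as one simple partitioning criterion; the paper develops its own criteria, and the Weibull wind distribution here is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
wind = rng.weibull(2.0, 10_000) * 8.0   # hypothetical wind-speed scenarios (m/s)

def partition_1d(x, k, iters=50):
    """1-D k-means: group scenarios into k 'states of the world' by
    minimizing within-state variance (one simple partitioning criterion)."""
    centers = np.quantile(x, (np.arange(k) + 0.5) / k)   # spread-out init
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return np.sort(centers)

print("3-state partition (calm / moderate / windy):",
      partition_1d(wind, 3).round(2))
```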


[8] 2604.13794

Balanced Contributions in Networks and Games with Externalities

For networks with externalities, where each component's worth may depend on the full network structure, balanced contributions and fairness lead to distinct component-efficient allocation rules. We characterize the unique component-efficient allocation rule satisfying balanced contributions -- the BCE rule. Existence is the main challenge: balanced contributions must hold on every edge, but the construction uses only spanning-tree edges. A cycle-sum identity bridges this gap by reducing balanced contributions on non-tree edges to relations in proper subnetworks. The BCE rule coincides with the Myerson value for TU games and with its generalization by Jackson--Wolinsky for network games without externalities; it recovers the externality-free value on the complete network; and -- unlike the fairness-based FCE rule -- it does not reduce to a graph-free formula applied to the graph-restricted game.
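
For reference, the baseline object the BCE rule reduces to without externalities can be computed by brute force: the Myerson value is the Shapley value of the graph-restricted game. The sketch below does this for a hypothetical three-player line network; it does not implement the BCE rule itself, which handles externalities.

```python
from itertools import permutations
from math import factorial

import networkx as nx

def myerson_value(G, v):
    """Myerson value: the Shapley value of the graph-restricted game, where
    a coalition's worth is the sum of v over its connected components in G."""
    players = list(G.nodes)

    def restricted(S):
        return sum(v(c) for c in nx.connected_components(G.subgraph(S)))

    phi = dict.fromkeys(players, 0.0)
    for order in permutations(players):          # brute force: n! orderings
        S = set()
        for p in order:
            before = restricted(S)
            S.add(p)
            phi[p] += restricted(S) - before     # marginal contribution
    return {p: x / factorial(len(players)) for p, x in phi.items()}

# toy line network a - b - c with a superadditive component worth
G = nx.Graph([("a", "b"), ("b", "c")])
v = lambda S: len(S) ** 2 if len(S) > 1 else 0
print(myerson_value(G, v))   # b, the connector, earns the largest share
```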


[9] 2604.13896

Gender, Unpaid Work, and Social Norms in Young Italian Families: Evidence from Couples Time Diaries

Why do large gender inequalities in everyday life persist even as women strengthen their attachment to paid work? Existing evidence shows that women continue to do more unpaid work than men, but much of that evidence is based on individual diaries, says little about how inequality is jointly organized within couples, and rarely links daily time allocation to directly measured gender attitudes. This paper addresses that gap using the TIMES Observatory, an original survey of 1,928 co-resident couples with at least one child younger than 11 in Emilia-Romagna or Campania. The data combine matched partner diaries for one weekday and one weekend day with rich socio-economic information and direct measures of gender norms. We document three main findings. First, women do substantially more unpaid work and spend more time with children, while men do more paid work and enjoy more leisure without children. Second, these asymmetries remain sizeable even among dual full-time couples, implying that stronger female labor-market attachment does not by itself equalize daily life. Third, more traditional gender attitudes - especially among men - are descriptively associated with lower male participation in childcare and domestic work and with wider gaps in discretionary leisure. The analysis is descriptive rather than causal, but it shows that gender inequality within couples is visible not only in the amount of work performed, but also in the distribution of time that is genuinely discretionary.


[10] 2604.13998

The Revenue Effect of Demand Misspecification in Event Ticket Pricing

We study a finite-horizon dynamic pricing problem for event tickets with limited inventory and time-varying demand. The central practical difficulty is that the total demand function $L(t)$ is not observed directly and must be estimated from data, while pricing decisions are sensitive to its temporal shape. The paper examines how the accuracy of this estimate affects revenue. We consider a model in which sales intensity is driven by the total demand $L(t)$, a price-response function $v(p)$, and a time-dependent willingness-to-pay factor $\varphi(t)$. The factor $\varphi(t)$ plays a central role: it captures the increase in customers' willingness to pay as the event date approaches and makes the temporal profile of demand economically important for pricing. Within this framework, the updated numerical study evaluates a benchmark dynamic-programming policy across nine deterministic true-demand scenarios, a collection of feature-aware misspecifications of $L(t)$, and multiple environment regimes induced by $v(p)=e^{-\eta p}$, the deadline factor $\varphi(t)$, and inventory level $Q$. The reported summaries are based on stochastic simulation and a ratio-of-means relative-loss metric. The results show that a more accurate representation of the temporal demand profile leads to more effective pricing decisions and higher revenue. Over the full misspecification collection the aggregate relative revenue loss is $0.42\%$, the upper decile exceeds $1\%$, and the most expensive errors are omissions of late-demand components. The average effect is therefore modest but non-negligible, and it becomes stronger when deadline effects are pronounced and inventory is tight.
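
A stripped-down version of the exercise can be sketched as follows: sales arrive with Poisson intensity $L(t)\,v(p)\,\varphi(t)$, $v(p)=e^{-\eta p}$, and prices follow a simple inventory-pacing heuristic (not the paper's dynamic-programming benchmark) computed from either the true or a misspecified $L(t)$ that omits the late surge. All parameter values and demand shapes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
T, Q, eta = 20, 100, 0.5            # periods, tickets, price sensitivity

L_true = lambda t: 10.0 + 25.0 * (t / T) ** 3    # true demand: late surge
L_flat = lambda t: 15.0                          # misspecified: surge omitted
phi = lambda t: 1.0 + 0.5 * t / T                # deadline willingness to pay

def simulate(L_model):
    """Price to pace inventory using L_model; sell under the true demand."""
    q, revenue = Q, 0.0
    for t in range(T):
        if q == 0:
            break
        target = q / (T - t)                     # desired sales this period
        # solve L(t) * phi(t) * exp(-eta * p) = target for p, floored at 0
        p = -np.log(min(1.0, target / (L_model(t) * phi(t)))) / eta
        sales = min(q, rng.poisson(L_true(t) * phi(t) * np.exp(-eta * p)))
        revenue += p * sales
        q -= sales
    return revenue

r_true = np.mean([simulate(L_true) for _ in range(2000)])
r_flat = np.mean([simulate(L_flat) for _ in range(2000)])
print(f"relative revenue loss from misspecification: {1 - r_flat / r_true:.2%}")
```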


[11] 2604.14059

A Comparative Study of Dynamic Programming and Reinforcement Learning in Finite Horizon Dynamic Pricing

This paper provides a systematic comparison between Fitted Dynamic Programming (DP), where demand is estimated from data, and Reinforcement Learning (RL) methods in finite-horizon dynamic pricing problems. We analyze their performance across environments of increasing structural complexity, ranging from a single typology benchmark to multi-typology settings with heterogeneous demand and inter-temporal revenue constraints. Unlike simplified comparisons that restrict DP to low-dimensional settings, we apply dynamic programming in richer, multi-dimensional environments with multiple product types and constraints. We evaluate revenue performance, stability, constraint satisfaction behavior, and computational scaling, highlighting the trade-offs between explicit expectation-based optimization and trajectory-based learning.
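
As a minimal instance of the DP side of the comparison, the sketch below solves a single-typology finite-horizon pricing problem by backward induction over a price grid, assuming a known exponential demand intensity; in the fitted-DP setting this intensity would first be estimated from data. All parameters are hypothetical.

```python
import numpy as np
from scipy.stats import poisson

T, Q, eta = 10, 30, 0.5                 # horizon, inventory, price sensitivity
prices = np.linspace(0.5, 5.0, 10)      # discrete price grid

V = np.zeros((T + 1, Q + 1))            # V[t, q]: revenue-to-go, t periods left
policy = np.zeros((T + 1, Q + 1))
for t in range(1, T + 1):
    for q in range(1, Q + 1):
        vals = []
        for p in prices:
            lam = 8.0 * np.exp(-eta * p)         # Poisson demand rate at p
            d = np.arange(q + 1)                 # sales are capped at stock q
            pr = poisson.pmf(d, lam)
            pr[-1] += poisson.sf(q, lam)         # lump demand > q into d = q
            vals.append(float(pr @ (p * d + V[t - 1, q - d])))
        V[t, q] = max(vals)
        policy[t, q] = prices[int(np.argmax(vals))]

print(f"optimal expected revenue: {V[T, Q]:.2f}")
print("full-stock prices, T periods left down to 1:", policy[T:0:-1, Q])
```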


[12] 2604.13478

Deepbullwhip: An Open-Source Simulation and Benchmarking for Multi-Echelon Bullwhip Analyses

The bullwhip effect remains operationally persistent despite decades of analytical research. Two computational deficiencies hinder progress: the absence of modular open-source simulation tools for multi-echelon inventory dynamics with asymmetric costs, and the lack of a standardized benchmarking protocol for comparing mitigation strategies across shared metrics and datasets. This paper introduces deepbullwhip, an open-source Python package that integrates a simulation engine for serial supply chains (with pluggable demand generators, ordering policies, and cost functions via abstract base classes, and a vectorized Monte Carlo engine achieving 50 to 90 times speedup) with a registry-based benchmarking framework shipping a curated catalog of ordering policies, forecasting methods, six bullwhip metrics, and demand datasets including WSTS semiconductor billings. Five sets of experiments on a four-echelon semiconductor chain demonstrate cumulative amplification of 427x (Monte Carlo mean across 1,000 paths), a stochastic filtering phenomenon at upstream tiers (CV = 0.01), super-exponential lead time sensitivity, and scalability to 20.8 million simulation cells in under 7 seconds. Benchmark experiments reveal a 155x disparity between synthetic AR(1) and real WSTS bullwhip severity under the Order-Up-To policy, and quantify the BWR-NSAmp tradeoff across ordering policies, demonstrating that no single metric captures policy quality.
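
The mechanism being simulated can be sketched in a few lines (this is an illustration, not the deepbullwhip API): a serial chain in which each tier forecasts inbound demand with a moving average, follows an order-up-to policy, and passes its orders upstream as the next tier's demand; the bullwhip ratio is order variance over consumer-demand variance.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_chain(demand, echelons=4, lead_time=2, window=12):
    """Each tier forecasts its inbound demand with a moving average, follows
    an order-up-to policy, and its orders become the next tier's demand."""
    series = [demand]
    for _ in range(echelons):
        d = series[-1]
        orders = np.empty_like(d)
        S_prev = d[0] * (lead_time + 1)
        for t in range(len(d)):
            hist = d[max(0, t - window):t + 1]
            S = hist.mean() * (lead_time + 1) + 2.0 * hist.std()  # base stock
            orders[t] = max(0.0, d[t] + S - S_prev)
            S_prev = S
        series.append(orders)
    return series

# AR(1) consumer demand
T, mu, rho = 200, 50.0, 0.7
demand = np.empty(T)
demand[0] = mu
for t in range(1, T):
    demand[t] = mu + rho * (demand[t - 1] - mu) + rng.normal(0, 5)

series = simulate_chain(demand)
for k in range(1, len(series)):
    bwr = series[k][12:].var() / demand[12:].var()   # drop the burn-in window
    print(f"echelon {k}: bullwhip ratio = {bwr:.2f}")
```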


[13] 2604.13890

Sandpile Economics: Theory, Identification, and Evidence

Why do capitalist economies recurrently generate crises whose severity is disproportionate to the size of the triggering shock? This paper proposes a structural answer grounded in the evolutionary geometry of production networks. As economies evolve through specialization, integration, and competitive selection, their inter-sectoral linkages drift toward configurations of increasing geometric fragility, eventually crossing a threshold beyond which small disturbances generate disproportionately large cascades. We introduce Sandpile Economics, a formal framework that interprets macroeconomic instability as an emergent property of disequilibrium production networks. The key state variable is the Forman--Ricci curvature of the input--output graph, capturing local substitution possibilities when supply chains are disrupted. We show that when curvature falls below an endogenous threshold, the distribution of cascade sizes follows a power law with tail index $\alpha \in (1,2)$, implying a regime of unbounded amplification. The underlying mechanism is evolutionary: specialization reduces input substitutability, pushing the economy toward criticality, while crisis episodes induce endogenous network reconfiguration and path dependence. These dynamics are inherently non-ergodic and cannot be captured by representative-agent frameworks. Empirically, using global input--output data, we document that production networks operate in persistently negative curvature regimes and that curvature robustly predicts medium-run output dynamics. A one-standard-deviation increase in curvature is associated with higher cumulative growth over three-year horizons, and curvature systematically outperforms standard network metrics in explaining cross-country differences in resilience.
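
For intuition, the simplest combinatorial form of Forman--Ricci curvature on an unweighted graph edge is $F(u,v) = 4 - \deg(u) - \deg(v)$. The paper works with weighted, directed input--output data, but the toy computation below on a hypothetical sector graph shows the basic quantity.

```python
import networkx as nx

# hypothetical sector graph standing in for an input-output network
G = nx.Graph([("agri", "food"), ("food", "retail"), ("energy", "food"),
              ("energy", "metals"), ("metals", "machinery"),
              ("machinery", "agri"), ("machinery", "retail")])

def forman_curvature(G, u, v):
    """Combinatorial Forman-Ricci curvature of an unweighted edge."""
    return 4 - G.degree(u) - G.degree(v)

for u, v in G.edges:
    print(f"{u:9s}-- {v:9s}: F = {forman_curvature(G, u, v)}")

avg = sum(forman_curvature(G, u, v) for u, v in G.edges) / G.number_of_edges()
print(f"network average curvature: {avg:.2f}")
```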


[14] 2312.05593

Benign Overfitting in Economic Forecasting via Noise Regularization

This paper studies linear overparameterized models in economic forecasting and highlights that including noise variables (regressors with no predictive power) regularizes the estimator. We consider a setting where both the outcome variable and the high-dimensional predictors are driven by a small number of latent factors, and show that the linear forecast model is dense rather than sparse. It turns out that a ridgeless regression augmented with noise predictors attains the same asymptotic forecast accuracy as an oracle with known true factors, without estimating the factors or assuming them to be strong. The gain comes from shrinkage of the eigenvalues of the design matrix, which reduces the out-of-sample variance. In contrast, perfect variable selection that removes noise variables can worsen forecasts when the number of retained predictors is comparable to the sample size. Empirically, we apply this approach to forecasting U.S. inflation, international GDP growth, and the U.S. equity risk premium, finding that noise regularization improves and stabilizes predictive performance.
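
The mechanism is straightforward to demonstrate: append pure-noise columns to the design matrix and fit the minimum-norm (ridgeless) interpolator via the pseudoinverse. The sketch below does this on a simulated factor DGP; the size of the gain depends on the signal-to-noise configuration, and all dimensions here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_test, p, k, extra = 100, 50, 150, 3, 400

# factor DGP: predictors and outcome both load on k latent factors
F = rng.normal(size=(n + n_test, k))
X = F @ rng.normal(size=(k, p)) + rng.normal(size=(n + n_test, p))
y = F @ np.array([1.0, -0.5, 0.8]) + rng.normal(scale=0.5, size=n + n_test)

def ridgeless_mse(Xtr, ytr, Xte, yte):
    beta = np.linalg.pinv(Xtr) @ ytr        # minimum-norm interpolator
    return float(np.mean((Xte @ beta - yte) ** 2))

Xtr, Xte, ytr, yte = X[:n], X[n:], y[:n], y[n:]
Ztr, Zte = rng.normal(size=(n, extra)), rng.normal(size=(n_test, extra))

print("ridgeless MSE, original predictors:",
      round(ridgeless_mse(Xtr, ytr, Xte, yte), 3))
print("ridgeless MSE, plus noise columns: ",
      round(ridgeless_mse(np.hstack([Xtr, Ztr]), ytr,
                          np.hstack([Xte, Zte]), yte), 3))
```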


[15] 2505.16654

Optimising the decision threshold in a weighted voting system: The case of the IMF's Board of Governors

In a weighted majority voting game, the players' weights are determined by the constitutional planner's intentions. In many cases the weights are difficult to change, as they encode some intended disparity. However, voting weights and actual voting power do not necessarily coincide, and changing the decision threshold offers some remedy. The International Monetary Fund (IMF) is one of the most important international organisations that uses a weighted voting system to make decisions. The voting weights in its Board of Governors depend on the quotas of the 191 member countries, which reflect their economic strengths to some extent. We analyse the connection between the decision threshold and the a priori voting power of the countries by calculating the Banzhaf indices for each threshold between 50% and 87%. The difference between quotas and voting powers is minimised if the decision threshold is 58% or 59%.
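
The underlying computation is standard and can be reproduced on a small hypothetical game by brute force over coalitions (the 191-member IMF calculation requires smarter methods, such as generating functions):

```python
from itertools import combinations

def banzhaf(weights, threshold):
    """Normalized Banzhaf indices by brute force: player i is a swing in a
    winning coalition that would lose without i."""
    n = len(weights)
    swings = [0] * n
    for r in range(1, n + 1):
        for coal in combinations(range(n), r):
            total = sum(weights[i] for i in coal)
            if total < threshold:
                continue
            for i in coal:
                if total - weights[i] < threshold:
                    swings[i] += 1
    s = sum(swings)
    return [x / s for x in swings]

weights = [40, 30, 20, 10]                  # hypothetical quota shares (%)
for q in (51, 60, 70, 80):
    print(f"threshold {q}%:", [round(b, 3) for b in banzhaf(weights, q)])
```

At the 51% threshold this game gives the 30- and 20-weight players identical power, a standard illustration of why weights and voting power diverge and why the threshold matters.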


[16] 2506.14078

Temporal Disaggregation of GDP: When Does Machine Learning Help?

We propose a modular framework for temporal disaggregation of quarterly GDP into monthly frequency, in which the regression step accommodates any supervised learning model while Mariano-Murasawa reconciliation enforces quarterly consistency. Comparing Chow-Lin, Elastic Net, XGBoost, and a Multi-Layer Perceptron across four countries, we find that regularization, not nonlinearity, drives the gains: Elastic Net achieves $R^2 = 0.87$ for the United States when lagged indicators are included, while nonlinear models cannot overcome the variance cost of small quarterly samples. We formalize this tradeoff through regime-switching bias and ridge-regularization results.
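
The reconciliation step can be illustrated with a deliberately crude stand-in: scale each quarter's three monthly predictions so they aggregate exactly to the observed quarterly total. The Mariano-Murasawa procedure used in the paper is model-based and more sophisticated; the numbers below are invented.

```python
import numpy as np

def proportional_reconcile(monthly_pred, quarterly_obs):
    """Scale each quarter's three monthly predictions so they sum to the
    observed quarterly total (a crude stand-in for model-based reconciliation)."""
    monthly_pred = np.asarray(monthly_pred, dtype=float).reshape(-1, 3)
    factors = np.asarray(quarterly_obs, dtype=float) / monthly_pred.sum(axis=1)
    return (monthly_pred * factors[:, None]).ravel()

pred = [98, 101, 103, 104, 102, 100]         # monthly indicator-based predictions
gdp_q = [300, 309]                           # observed quarterly GDP
print(proportional_reconcile(pred, gdp_q))   # months now aggregate exactly
```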


[17] 2510.22841

Detection Boundaries for Panel Slope Homogeneity Tests Under Small-Group Heterogeneity

Empirical researchers often use slope-homogeneity tests to assess whether slopes can be treated as common across units. A key difficulty is that heterogeneity may be concentrated in a small number of units, so that a failure to reject homogeneity may reflect limited power rather than true homogeneity. We quantify this issue by analyzing the power of standard slope-homogeneity tests under doubly local alternatives - alternatives in which only small groups of units depart from the common slope and the magnitude of the deviations shrinks with sample size. We characterize detectability as a function of panel dimensions, the size of the departing groups, and the rate at which deviations shrink. The results tell the researcher clearly when homogeneity tests are informative and when they will miss small-group heterogeneity. A Monte Carlo study confirms the theory.
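
The power problem is easy to reproduce in a toy Monte Carlo: run a Swamy-type dispersion test when only a handful of units depart from the common slope. The design below (N, T, group size, deviation) is illustrative, not the paper's.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)

def swamy_stat(y, x):
    """Swamy-type dispersion statistic for unit-by-unit OLS slopes."""
    N, T = y.shape
    b, v = np.empty(N), np.empty(N)
    for i in range(N):
        xi = x[i] - x[i].mean()
        yi = y[i] - y[i].mean()
        b[i] = (xi @ yi) / (xi @ xi)
        resid = yi - b[i] * xi
        v[i] = (resid @ resid) / (T - 2) / (xi @ xi)   # Var(b_i)
    b_w = np.sum(b / v) / np.sum(1.0 / v)              # precision-weighted mean
    return np.sum((b - b_w) ** 2 / v), N - 1

def reject_rate(N=50, T=30, n_dev=0, delta=0.0, reps=500):
    rejections = 0
    for _ in range(reps):
        x = rng.normal(size=(N, T))
        beta = np.full(N, 1.0)
        beta[:n_dev] += delta            # a small group departs from the slope
        y = beta[:, None] * x + rng.normal(size=(N, T))
        S, df = swamy_stat(y, x)
        rejections += S > chi2.ppf(0.95, df)
    return rejections / reps

print("size (H0 true):          ", reject_rate())
print("power, 3 units deviating:", reject_rate(n_dev=3, delta=0.5))
```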


[18] 2512.24968

Strategic Response of News Publishers to Generative AI

Generative AI can adversely impact news publishers by lowering consumer demand. It can also reduce demand for newsroom employees and increase the creation of news "slop." However, it can also form a source of traffic referrals and an information-discovery channel that increases demand. We use high-frequency granular data to analyze the strategic response of news publishers to the introduction of Generative AI. Many publishers strategically blocked LLM access to their websites using the robots.txt file standard. Using a difference-in-differences approach, we find that large publishers who block GenAI bots experience reduced website traffic compared to not blocking. In addition, we find that large publishers shift toward richer content that is harder for LLMs to replicate, without increasing text volume. Finally, we find that the share of new editorial and content-production job postings rises over time. Together, these findings illustrate the levers publishers use to respond strategically to competitive Generative AI threats, and their consequences.


[19] 2604.11243

Knowledge Compounding: An Empirical Economic Analysis of Self-Evolving Knowledge Wikis under the Agentic ROI Framework

Building on the Agentic ROI framework proposed by Liu et al. (2026), this paper introduces knowledge compounding as a new measurable concept in the empirical economics of LLM agents and validates it through a controlled four-query experiment on Qing Claw, an industrial-grade C# reimplementation of the OpenClaw multi-agent framework. Our central theoretical claim is that the cost term in the original Agentic ROI equation contains an unexamined assumption -- that task costs are mutually independent. This assumption holds under the traditional retrieval-augmented generation (RAG) paradigm but breaks down once a persistent, structured knowledge layer is introduced. We propose a dynamic Agentic ROI model in which cost is treated as a time-varying function Cost(t) governed by a knowledge-base coverage rate H(t). Empirical results from four sequential queries on the same domain yield a cumulative token consumption of 47K under the compounding regime versus 305K under a matched RAG baseline -- a savings of 84.6%. Calibrated 30-day projections indicate cumulative savings of 53.7% under medium topic concentration and 81.3% under high concentration, with the gap widening monotonically over time. We further identify three microeconomic mechanisms underlying the compounding effect: (i) one-time INGEST amortized over N retrievals, (ii) auto-feedback of high-value answers into synthesis pages, and (iii) write-back of external search results into entity pages. The theoretical contribution of this paper is a recategorization of LLM tokens from consumables to capital goods, shifting the economic discussion from static marginal cost analysis to dynamic capital accumulation. The engineering contribution is a minimal reproducible implementation in approximately 200 lines of C#, which we believe is the first complete industrial-grade reference implementation of Karpathy's (2026) LLM Wiki paradigm.
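
A toy version of the dynamic cost model might look as follows, with per-query cost falling as knowledge-base coverage H(t) rises toward a topic-concentration ceiling. Every number here (token costs, coverage increments, the ceiling) is an invented assumption, not the paper's calibration.

```python
import numpy as np

np.random.seed(0)

# Illustrative assumptions only: not the paper's calibration.
queries = 30 * 20            # 30 days x 20 queries/day
rag_cost = 10_000            # tokens per query under the RAG baseline
hit_cost = 1_500             # tokens when the wiki already covers the topic
ingest_cost = 12_000         # one-time tokens to write a new wiki page
ceiling = 0.8                # topic-concentration ceiling on coverage H(t)

def cumulative_tokens(compounding):
    H, total = 0.0, 0
    for _ in range(queries):
        if compounding and np.random.random() < H:
            total += hit_cost                    # knowledge-base hit
        else:
            total += rag_cost + (ingest_cost if compounding else 0)
            H = min(ceiling, H + 0.01)           # coverage compounds on misses
    return total

rag, wiki = cumulative_tokens(False), cumulative_tokens(True)
print(f"30-day token savings: {1 - wiki / rag:.1%}")
```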


[20] 2604.12611

Distributional Change in Ordinal Data with Missing Observations: Minimal Mobility and Partial Identification

Empirical analyses often compare distributions of ordinal variables across groups or over time using repeated cross-sectional data, where only marginal distributions are observed and missing data are pervasive. As a result, the joint distribution linking these marginals is not identified, making it difficult to assess how observed differences arise. This paper studies how distributional change can be measured and interpreted under such limited information. I show that the $L_1$ distance between cumulative distribution functions admits an optimal transport representation as the minimal reallocation of probability mass across ordered categories. This representation delivers both a scalar measure of discrepancy and a structured description of how distributional change must occur, which I refer to as minimal-mobility configurations. To address missing data, I adopt a partial identification approach and construct sharp bounds on the marginal distributions. These bounds induce identified sets for both the discrepancy measure and the associated minimal-mobility configurations, providing inference that is robust to nonresponse and a transparent basis for assessing sensitivity to missing data. An empirical illustration using data from the \emph{Arab Barometer} demonstrates how the framework can be used in practice to quantify and interpret distributional change under limited information.
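
The central identity is easy to verify numerically: for distributions on ordered categories, the $L_1$ distance between CDFs equals the minimal total of mass times category steps needed to transform one distribution into the other. A toy check with hypothetical category shares:

```python
import numpy as np

def l1_cdf_distance(p, q):
    """L1 distance between the CDFs of two ordinal distributions; equals the
    minimal mass-times-steps reallocation across ordered categories
    (the 1-Wasserstein distance on the ordered support)."""
    return float(np.abs(np.cumsum(p) - np.cumsum(q)).sum())

before = np.array([0.10, 0.30, 0.40, 0.20])   # shares over 4 ordered categories
after = np.array([0.05, 0.25, 0.40, 0.30])
# prints 0.25: e.g. move 0.05 up three steps (0.15) and 0.05 up two steps (0.10)
print(l1_cdf_distance(before, after))
```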


[21] 2512.20515

Modeling Bank Systemic Risk of Emerging Markets under Geopolitical Shocks: Empirical Evidence from BRICS Countries

In this study, we introduce an analytics framework, the Bank Risk Interlinkage with Dynamic Graph and Event Simulations (BRIDGES), to capture the systemic risks associated with the growing economic influence of the BRICS nations. This framework includes a Dynamic Time Warping (DTW) method to construct a dynamic network of 551 BRICS banks with their annual balance sheet data from 2008 to 2024; a trend analysis of risk ratios to detect shifts in banks' behavior; a Temporal Graph Neural Network (TGNN) to detect anomalous changes in the bank network's structural relationships; and Agent-Based Model (ABM) simulations to measure the impact of anomalous changes on network stability and assess the banking system's resilience to internal financial failure and external geopolitical shocks at the individual country level and across BRICS nations. Our simulation results highlight several important insights. The failure of the largest BRICS banks can cause more systemic damage than that of financially vulnerable or anomalous banks due to panic effects. Moreover, compared to the failure of the largest BRICS banks, a geopolitical shock with correlated country-wide propagation can cause more systemic damage, resulting in a near-total systemic collapse. Our findings suggest that panic over the failure of the largest BRICS banks and large-scale geopolitical shocks are the primary threats to the financial stability of the BRICS nations, which traditional bank risk analysis models might not detect.
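
Of the BRIDGES components, the DTW building block is the most self-contained. A textbook implementation, applied here to two hypothetical bank risk-ratio paths rather than the paper's data, looks like this:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# hypothetical risk-ratio paths of two banks over 2008-2024 (17 years);
# bank_b tracks bank_a with a one-year lag, so the DTW distance stays small
bank_a = np.array([12, 11, 10, 10, 11, 12, 13, 13, 12, 11, 10, 9, 9, 10, 11, 12, 12.0])
bank_b = np.array([13, 12, 11, 10, 10, 11, 12, 13, 13, 12, 11, 10, 9, 9, 10, 11, 12.0])
print(f"DTW distance: {dtw_distance(bank_a, bank_b):.1f}")
```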


[22] 2603.05264

Asset Returns, Portfolio Choice, and Proportional Wealth Taxation

We analyse the effect of a proportional wealth tax on asset returns, portfolio choice, and asset pricing. The tax is levied annually on the market value of all holdings at a uniform rate. We show that such a tax is economically equivalent to the government acquiring a proportional stake in the investor's portfolio each period -- a form of risk sharing in which expected wealth and risk are reduced by the same factor, while the return per share is unaffected. This multiplicative separability drives four main results. First, the coefficient of variation of wealth is invariant to the tax rate. Second, the optimal portfolio weights -- and in particular the tangency portfolio -- are independent of the tax rate. Third, the wealth tax is orthogonal to portfolio choice: it induces a homothetic contraction of the opportunity set in the mean-standard deviation plane that preserves the Sharpe ratio of every portfolio. Fourth, both taxed and untaxed investors are willing to pay the same price per share for any asset. The results are derived first under geometric Brownian motion and then generalised to any return distribution in the location-scale family. A complementary Modigliani-Miller analysis confirms pricing neutrality and identifies an inconsistency in the existing literature regarding the discount rate used for after-tax cash flows. Imposing the CAPM as a special case confirms that after-tax betas equal pre-tax betas and the security market line contracts uniformly by $(1-\tau_w)$; under CRRA preferences, general-equilibrium returns and prices are unchanged. This resolves an error in Fama (2021). The neutrality results depend on universal taxation at market value and frictionless markets. We formalise three channels -- book-value taxation, liquidity frictions, and dividend extraction -- through which departures from these conditions break neutrality.
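
The multiplicative-separability argument can be verified in a few lines: taxing wealth at rate $\tau_w$ each period makes terminal wealth exactly $(1-\tau_w)^T$ times its untaxed value, path by path, so the coefficient of variation is unchanged. The return distribution and parameters below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
T, paths, tau = 10, 100_000, 0.02       # years, simulations, 2% wealth tax

R = rng.normal(0.06, 0.15, size=(paths, T))        # illustrative risky returns

untaxed = np.prod(1 + R, axis=1)                   # terminal wealth, no tax
taxed = np.prod((1 - tau) * (1 + R), axis=1)       # tax as a proportional stake

cv = lambda w: w.std() / w.mean()
print(f"CV of terminal wealth, untaxed: {cv(untaxed):.4f}")
print(f"CV of terminal wealth, taxed:   {cv(taxed):.4f}")
print(f"std of taxed/untaxed ratio: {(taxed / untaxed).std():.2e}")  # ~0
```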