New articles on Economics


[1] 2501.06270

Sectorial Exclusion Criteria in the Marxist Analysis of the Average Rate of Profit: The United States Case (1960-2020)

The long-term estimation of the Marxist average rate of profit does not adhere to a theoretically grounded standard regarding which economic activities should or should not be included for such purposes. This matters because methodological non-uniformity can be a significant source of overestimation or underestimation, yielding a less accurate reflection of capital accumulation dynamics. This research aims to provide a standard Marxist decision criterion regarding the inclusion and exclusion of economic activities in the calculation of the Marxist average rate of profit for United States economic sectors from 1960 to 2020, based on the Marxist definition of productive labor, its location in the circuit of capital, and its relationship with the production of surplus value. Using wavelet transforms based on Daubechies filters with increased symmetry, empirical mode decomposition, a Hodrick-Prescott filter embedded in an unobserved components model, and a wide variety of unit root tests, the internal theoretical consistency of the presented criteria is evaluated. The objective consistency of the theory is also evaluated using a dynamic factor auto-regressive model, Principal Component Analysis, Singular Value Decomposition, and Backward Elimination with Linear and Generalized Linear Models. The results are consistent, both theoretically and econometrically, with the logic of Marx's political economy.
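To make the filtering step concrete, here is a minimal, hypothetical sketch (not the authors' code) of two of the tools named above, a Hodrick-Prescott trend/cycle decomposition of a sector-level profit-rate series and a PCA across sectors; the sector names and simulated series are placeholders.

    # Hypothetical illustration: HP decomposition and PCA on simulated
    # sector-level profit-rate series (annual, 1960-2020).
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.filters.hp_filter import hpfilter
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    years = pd.RangeIndex(1960, 2021)
    sectors = ["manufacturing", "construction", "transport", "trade"]  # placeholders
    profit_rates = pd.DataFrame(
        {s: 0.15 + 0.02 * rng.standard_normal(len(years)).cumsum() / 10 for s in sectors},
        index=years,
    )

    # Hodrick-Prescott decomposition (lamb=6.25 is a common choice for annual data)
    cycle, trend = hpfilter(profit_rates["manufacturing"], lamb=6.25)

    # PCA across the sectoral series: the first component summarizes the common
    # movement used to compare alternative inclusion/exclusion criteria.
    pca = PCA(n_components=2)
    scores = pca.fit_transform(profit_rates - profit_rates.mean())
    print(pca.explained_variance_ratio_)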


[2] 2501.06404

A Hybrid Framework for Reinsurance Optimization: Integrating Generative Models and Reinforcement Learning

Reinsurance optimization is critical for insurers to manage risk exposure, ensure financial stability, and maintain solvency. Traditional approaches often struggle with dynamic claim distributions, high-dimensional constraints, and evolving market conditions. This paper introduces a novel hybrid framework that integrates Generative Models, specifically Variational Autoencoders (VAEs), with Reinforcement Learning (RL) using Proximal Policy Optimization (PPO). The framework enables dynamic and scalable optimization of reinsurance strategies by combining the generative modeling of complex claim distributions with the adaptive decision-making capabilities of reinforcement learning. The VAE component generates synthetic claims, including rare and catastrophic events, addressing data scarcity and variability, while the PPO algorithm dynamically adjusts reinsurance parameters to maximize surplus and minimize ruin probability. The framework's performance is validated through extensive experiments, including out-of-sample testing, stress-testing scenarios (e.g., pandemic impacts, catastrophic events), and scalability analysis across portfolio sizes. Results demonstrate its superior adaptability, scalability, and robustness compared to traditional optimization techniques, achieving higher final surpluses and computational efficiency. Key contributions include the development of a hybrid approach for high-dimensional optimization, dynamic reinsurance parameterization, and validation against stochastic claim distributions. The proposed framework offers a transformative solution for modern reinsurance challenges, with potential applications in multi-line insurance operations, catastrophe modeling, and risk-sharing strategy design.
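As a rough illustration of how such a framework can be wired together (a sketch under our own assumptions, not the paper's implementation), the following gymnasium-style environment models surplus dynamics under a proportional reinsurance treaty; log-normal draws stand in for the VAE-generated claims, and all parameter values are illustrative.

    import numpy as np
    import gymnasium as gym
    from gymnasium import spaces

    class ReinsuranceEnv(gym.Env):
        """Toy surplus process: the agent picks the retention share each period."""
        def __init__(self, premium=12.0, reins_loading=1.2, horizon=40):
            super().__init__()
            self.premium, self.reins_loading, self.horizon = premium, reins_loading, horizon
            # action: share of each claim retained by the insurer, in [0, 1]
            self.action_space = spaces.Box(0.0, 1.0, shape=(1,), dtype=np.float32)
            # observation: current surplus and time remaining
            self.observation_space = spaces.Box(-np.inf, np.inf, shape=(2,), dtype=np.float32)

        def reset(self, seed=None, options=None):
            super().reset(seed=seed)
            self.surplus, self.t = 50.0, 0
            return np.array([self.surplus, self.horizon], dtype=np.float32), {}

        def step(self, action):
            retention = float(np.clip(action[0], 0.0, 1.0))
            claim = float(self.np_random.lognormal(mean=2.0, sigma=0.8))  # stand-in for a VAE sample
            expected_claim = np.exp(2.0 + 0.8 ** 2 / 2)
            reins_premium = self.reins_loading * (1 - retention) * expected_claim
            self.surplus += self.premium - reins_premium - retention * claim
            self.t += 1
            ruined = self.surplus < 0
            reward = -100.0 if ruined else 0.01 * self.surplus  # penalize ruin, reward surplus
            terminated = ruined or self.t >= self.horizon
            obs = np.array([self.surplus, self.horizon - self.t], dtype=np.float32)
            return obs, reward, terminated, False, {}

    # A PPO agent (e.g., stable-baselines3's PPO with an MLP policy) could then be
    # trained on this environment to choose the retention level dynamically.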


[3] 2501.06473

Endogenous Persistence at the Effective Lower Bound

We develop a perfect foresight method to solve models with an interest rate lower bound constraint that nests OccBin/DynareOBC as well as the pen-and-paper solutions of Eggertsson (2010) and Mertens (2014) as special cases. Our method generalizes the pen-and-paper solutions by allowing for endogenous persistence while maintaining tractability and interpretability. We prove that our method necessarily gives stable multipliers. We use it to solve a New Keynesian model with habit formation and government spending, which we match to expectations data from the Great Recession. We find an output multiplier of government spending close to 1 for the US and Japan.
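As a generic illustration of the occasionally binding constraint such methods handle (our notation, not the paper's full model, which also features habit formation and government spending), the policy rate obeys

    i_t = \max\!\left\{ \underline{i},\; r^{*} + \phi_\pi \pi_t + \phi_y \hat{y}_t \right\}, \qquad \phi_\pi > 1,

so the equilibrium system is piecewise linear, and a perfect foresight solution amounts to finding the dates at which the constraint binds and solving the linear dynamics on each segment.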


[4] 2501.06575

Is the Monetary Transmission Mechanism Broken? Time for People's Quantitative Easing

The monetary transmission channel is disrupted by many factors, especially securitization and liquidity traps. In our study we estimate the effect of securitization on the interest elasticity and identify whether a liquidity trap occurred during 1954Q3-2019Q3. The yield curve inversion mechanism shows that economic cycles are very sensitive to decreasing bank profitability. However, there is no evidence that restoring banks' profits will ensure a strong recovery. We therefore investigate the weak effect of Quantitative Easing (QE) on economic growth and analyze whether securitization and liquidity traps posed challenges to QE or whether the mainstream theory itself is flawed. We examine the main weaknesses of QE, namely the speculative behavior induced by artificially low rates and its unequal distribution. We propose a new form of QE that would relieve households rather than reward banks for the risky behavior they engaged in before the recession.


[5] 2501.06584

A novel approach to assessing corporate sustainable economic value

The goal of this study is to propose a new concept, Sustainable Economic Value, to define it logically, and to build a simplified model for its evaluation.


[6] 2501.06587

Optimizing Financial Data Analysis: A Comparative Study of Preprocessing Techniques for Regression Modeling of Apple Inc.'s Net Income and Stock Prices

This article presents a comprehensive methodology for processing financial datasets of Apple Inc., encompassing quarterly income and daily stock prices, spanning from March 31, 2009, to December 31, 2023. Leveraging 60 observations for quarterly income and 3774 observations for daily stock prices, sourced from Macrotrends and Yahoo Finance respectively, the study outlines five distinct datasets crafted through varied preprocessing techniques. Through detailed explanations of aggregation, interpolation (linear, polynomial, and cubic spline), and lagged-variable methods, the study elucidates the steps taken to transform raw data into analytically rich datasets. Subsequently, the article turns to regression analysis, aiming to determine which of the five data processing methods best suits capital market analysis, by employing both linear and polynomial regression models on each preprocessed dataset and evaluating their performance using a range of metrics, including cross-validation score, MSE, MAE, RMSE, R-squared, and Adjusted R-squared. The research findings reveal that linear interpolation with polynomial regression emerges as the top-performing method, with the lowest validation MSE and MAE values alongside the highest R-squared and Adjusted R-squared values.
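For concreteness, here is a minimal, hypothetical sketch of one of the preprocessing/modeling combinations described above (linear interpolation of quarterly income to daily frequency followed by polynomial regression with cross-validation); the series below are synthetic placeholders, not the Macrotrends/Yahoo Finance data.

    import numpy as np
    import pandas as pd
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    quarters = pd.date_range("2009-03-31", "2023-12-31", freq="Q")
    income = pd.Series(rng.normal(20, 3, len(quarters)).cumsum(), index=quarters)

    days = pd.date_range("2009-03-31", "2023-12-31", freq="B")
    price = pd.Series(50 + rng.normal(0, 1, len(days)).cumsum(), index=days)

    # Linear interpolation of quarterly income onto the daily price index
    income_daily = (income.reindex(income.index.union(days))
                          .interpolate(method="time")
                          .reindex(days))

    X = income_daily.to_numpy().reshape(-1, 1)
    y = price.to_numpy()

    # Degree-2 polynomial regression evaluated by cross-validated MSE
    model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
    scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_squared_error")
    print("CV MSE:", -scores.mean())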


[7] 2501.06777

Identification and Estimation of Simultaneous Equation Models Using Higher-Order Cumulant Restrictions

Identifying structural parameters in linear simultaneous equation models is a fundamental challenge in economics and related fields. Recent work leverages higher-order distributional moments, exploiting the fact that non-Gaussian data carry more structural information than the Gaussian framework. While many of these contributions still require zero-covariance assumptions for structural errors, this paper shows that such an assumption can be dispensed with. Specifically, we demonstrate that under any diagonal higher-cumulant condition, the structural parameter matrix can be identified by solving an eigenvector problem. This yields a direct identification argument and motivates a simple sample-analogue estimator that is both consistent and asymptotically normal. Moreover, when uncorrelatedness may still be plausible -- such as in vector autoregression models -- our framework offers a transparent way to test for it, all within the same higher-order orthogonality setting employed by earlier studies. Monte Carlo simulations confirm desirable finite-sample performance, and we further illustrate the method's practical value in two empirical applications.
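As a stylized statement of the setting (our notation, not necessarily the paper's), consider

    B y_t = \varepsilon_t, \qquad y_t = B^{-1} \varepsilon_t =: A\,\varepsilon_t,
    \operatorname{cum}\!\left(\varepsilon_{it}, \varepsilon_{jt}, \varepsilon_{kt}\right) = 0 \quad \text{unless } i = j = k,

where the structural errors are non-Gaussian. The diagonality of such higher-order cumulant tensors, rather than a diagonal covariance matrix, is what allows A (and hence the structural parameters) to be recovered, up to the usual scaling and ordering normalizations, from an eigenvector problem.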


[8] 2501.06873

Causal Claims in Economics

We analyze over 44,000 NBER and CEPR working papers from 1980 to 2023 using a custom language model to construct knowledge graphs that map economic concepts and their relationships. We distinguish between general claims and those documented via causal inference methods (e.g., DiD, IV, RDD, RCTs). We document a substantial rise in the share of causal claims, from roughly 4% in 1990 to nearly 28% in 2020, reflecting the growing influence of the "credibility revolution." We find that causal narrative complexity (e.g., the depth of causal chains) strongly predicts both publication in top-5 journals and higher citation counts, whereas non-causal complexity tends to be uncorrelated or negatively associated with these outcomes. Novelty is also pivotal for top-5 publication, but only when grounded in credible causal methods: introducing genuinely new causal edges or paths markedly increases both the likelihood of acceptance at leading outlets and long-run citations, while non-causal novelty exhibits weak or even negative effects. Papers engaging with central, widely recognized concepts tend to attract more citations, highlighting a divergence between factors driving publication success and long-term academic impact. Finally, bridging underexplored concept pairs is rewarded primarily when grounded in causal methods, yet such gap filling exhibits no consistent link with future citations. Overall, our findings suggest that methodological rigor and causal innovation are key drivers of academic recognition, but sustained impact may require balancing novel contributions with conceptual integration into established economic discourse.


[9] 2501.07141

Knowledge Phenomenology Research of Future Industrial Iconic Product Innovation

Iconic products, as innovative carriers supporting the development of future industries, are key breakthrough points for driving the transformation of new quality productive forces. This article is grounded in the philosophy of technology and examines the evolution of human civilization to accurately identify the patterns of product innovation. By integrating theories from systems science, it analyzes the intrinsic logical differences between traditional products and iconic products. The study finds that iconic products are based on a comprehensive knowledge system that integrates explicit and tacit knowledge, enabling them to adapt to complex dynamic environments. Therefore, based on the method of phenomenological essence reduction and the process of specialized knowledge acquisition, this study establishes the first principle of knowledge phenomenology: "knowledge generation-moving from the tacit to the explicit-moving from the explicit to the tacit-fusion of the explicit and tacit." Grounded in knowledge phenomenology, it reconstructs the product design evolution process and establishes a forward innovative design framework for iconic products, consisting of "design problem space-explicit knowledge space-tacit knowledge space-innovative solution space." Furthermore, based on FBS design theory, it develops a disruptive technology innovation forecasting framework of "technology problem space-knowledge base prediction-application scenario prediction-coupled technology prediction," which collectively advances the innovation systems engineering of iconic products. In light of the analysis of the global future industrial competitive landscape, it proposes a strategy for enhancing embodied intelligence in iconic products.


[10] 2501.07178

The Spoils of Algorithmic Collusion: Profit Allocation Among Asymmetric Firms

We study the propensity of independent algorithms to collude in repeated Cournot duopoly games. Specifically, we investigate the predictive power of different oligopoly and bargaining solutions regarding the effect of asymmetry between firms. We find that both consumers and firms can benefit from asymmetry. Algorithms produce more competitive outcomes when firms are symmetric, but less when they are very asymmetric. Although the static Nash equilibrium underestimates the effect on total quantity and overestimates the effect on profits, it delivers surprisingly accurate predictions in terms of total welfare. The best description of our results is provided by the equal relative gains solution. In particular, we find algorithms to agree on profits that are on or close to the Pareto frontier for all degrees of asymmetry. Our results suggest that the common belief that symmetric industries are more prone to collusion may no longer hold when algorithms increasingly drive managerial decisions.
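For reference, the static Nash benchmark in a standard linear Cournot duopoly with asymmetric constant marginal costs c_i (textbook notation, not necessarily the paper's parameterization) is

    p = a - b\,(q_1 + q_2), \qquad q_i^{N} = \frac{a - 2c_i + c_j}{3b}, \qquad \pi_i^{N} = b\,\big(q_i^{N}\big)^{2}, \quad i \neq j,

so cost asymmetry shifts output and profit toward the low-cost firm; the abstract evaluates the algorithms' outcomes against this benchmark and against bargaining solutions such as equal relative gains.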


[11] 2501.07235

Entry deterrence by exploiting economies of scope in data aggregation

We model a market for data where an incumbent and a challenger compete for data from a producer. The incumbent has access to an exclusive data producer, and it uses this exclusive access, together with economies of scope in the aggregation of the data, as a strategy against the potential entry by the challenger. We assess the incumbent's incentives to either deter or accommodate the entry of the challenger. We show that the incumbent will accommodate when the exclusive access is costly and the economies of scope are low, and that it will blockade or deter entry otherwise. The results would justify an access regulation that incentivizes the entry of the challenger, e.g., by increasing production costs for the exclusive data.


[12] 2501.07309

Making Tennis Fairer: The Grand Tiebreaker

Tennis, like other games and sports, is governed by rules, including the rules that determine the winner of points, games, sets, and matches. If the two players are equally skilled -- each has the same probability of winning a point when serving or when receiving -- we show that each has an equal chance of winning games, sets, and matches, whether or not sets go to a tiebreak. However, in a women's match that is decided by 2 out of 3 sets, and a men's match that is decided by 3 out of 5 sets, it is possible that the player who wins the most games may not be the player who wins the match. We calculate the probability that this happens and show that it has actually occurred -- most notably, in the 2019 men's Wimbledon final between Novak Djokovic and Roger Federer, which took almost five hours to complete and is considered one of the greatest tennis matches ever (Djokovic won). We argue that the discrepancy between the game winner and the match winner, when it occurs, should be resolved by a Grand Tiebreak (GT) -- played according to the rules of tiebreaks in sets -- because each player has a valid claim to being called the rightful winner. A GT would have the salutary effect of inducing each player to contest every game, even every point, lest he/she win in sets but lose more games. This would make competition keener throughout a match and probably decrease the need for a GT, because the game and set winner would more likely coincide when the players fight hard for every point.
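A textbook building block for such calculations (not the paper's specific derivation) is the probability that a player who wins each point independently with probability p wins a game, sketched below.

    # Standard tennis arithmetic: probability of winning a game from a
    # per-point win probability p, accounting for deuce.
    def game_win_prob(p):
        q = 1.0 - p
        deuce = p**2 / (p**2 + q**2)          # probability of winning from deuce
        return (p**4                           # win 4-0
                + 4 * p**4 * q                 # win 4-1
                + 10 * p**4 * q**2             # win 4-2
                + 20 * p**3 * q**3 * deuce)    # reach deuce (3-3), then win

    print(game_win_prob(0.5))   # 0.5: equally skilled players split games evenly
    print(game_win_prob(0.55))  # a modest point edge becomes a larger game edge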


[13] 2501.07386

Forecasting for monetary policy

This paper discusses three key themes in forecasting for monetary policy highlighted in the Bernanke (2024) review: the challenges in economic forecasting, the conditional nature of central bank forecasts, and the importance of forecast evaluation. In addition, a formal evaluation of the Bank of England's inflation forecasts indicates that, despite the large forecast errors in recent years, they were still accurate relative to common benchmarks.


[14] 2501.07410

Anonymous Attention and Abuse

We analyze the content of the anonymous online discussion forum Economics Job Market Rumors (EJMR) and document its evolving interactions with external information sources. We focus on three key aspects: the prevalence and impact of links to external domains, the surge in discussions driven by Twitter posts since 2018, and the categorization of individuals whose tweets are most frequently discussed on EJMR. Using data on linked domains, we show how these trends reflect broader changes in the economics profession's digital footprint. Our analysis sheds light on EJMR's informational role but also raises questions about inclusivity and professional ethics in economics.


[15] 2501.07514

Estimating Sequential Search Models Based on a Partial Ranking Representation

Consumers are increasingly shopping online, and more and more datasets documenting consumer search are becoming available. While sequential search models provide a framework for utilizing such data, they present empirical challenges. A key difficulty arises from the inequality conditions implied by these models, which depend on multiple unobservables revealed during the search process and necessitate solving or simulating high-dimensional integrals for likelihood-based estimation methods. This paper introduces a novel representation of inequalities implied by a broad class of sequential search models, demonstrating that the empirical content of such models can be effectively captured through a specific partial ranking of available actions. This representation reduces the complexity caused by unobservables and provides a tractable expression for joint probabilities. Leveraging this insight, we propose a GHK-style simulation-based likelihood estimator that is simpler to implement than existing ones. It offers greater flexibility for handling incomplete search data, incorporating additional ranking information, and accommodating complex search processes, including those involving product discovery. We show that the estimator achieves robust performance while maintaining relatively low computational costs, making it a practical and versatile tool for researchers and practitioners.
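To illustrate the flavor of a GHK-style simulator for ranking probabilities (a hypothetical sketch under Gaussian utilities, not the paper's estimator), the following computes P(u1 > u2 > u3) by sequential sampling from truncated normals; all numbers are placeholders.

    import numpy as np
    from scipy.stats import norm

    def ghk_ranking_prob(mu, Sigma, n_draws=10_000, seed=0):
        """GHK estimate of P(u1 > u2 > u3) for u ~ N(mu, Sigma)."""
        rng = np.random.default_rng(seed)
        D = np.array([[1.0, -1.0, 0.0],
                      [0.0, 1.0, -1.0]])           # pairwise differences
        m = D @ mu
        L = np.linalg.cholesky(D @ Sigma @ D.T)     # lower-triangular factor

        u = rng.uniform(size=n_draws)
        # first difference positive: m[0] + L[0,0]*e1 > 0, i.e. e1 > a1
        a1 = -m[0] / L[0, 0]
        w1 = 1.0 - norm.cdf(a1)
        e1 = norm.ppf(norm.cdf(a1) + u * w1)        # truncated-normal draw above a1
        # second difference positive, conditional on e1
        a2 = -(m[1] + L[1, 0] * e1) / L[1, 1]
        w2 = 1.0 - norm.cdf(a2)
        return float(np.mean(w1 * w2))

    mu = np.array([1.0, 0.5, 0.0])
    Sigma = np.array([[1.0, 0.3, 0.1],
                      [0.3, 1.0, 0.2],
                      [0.1, 0.2, 1.0]])
    print(ghk_ranking_prob(mu, Sigma))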


[16] 2501.07550

disco: Distributional Synthetic Controls

The method of synthetic controls is widely used for evaluating causal effects of policy changes in settings with observational data. Often, researchers aim to estimate the causal impact of policy interventions on a treated unit at an aggregate level while also possessing data at a finer granularity. In this article, we introduce the new disco command, which implements the Distributional Synthetic Controls method introduced in Gunsilius (2023). This command allows researchers to construct entire synthetic distributions for the treated unit based on an optimally weighted average of the distributions of the control units. Several aggregation schemes are provided to facilitate clear reporting of the distributional effects of the treatment. The package offers both quantile-based and CDF-based approaches, comprehensive inference procedures via bootstrap and permutation methods, and visualization capabilities. We empirically illustrate the use of the package by replicating the results in Van Dijcke et al. (2024).
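A minimal, hypothetical sketch of the quantile-based idea (in Python rather than Stata, and not the disco command itself): choose simplex weights on the control units so that the weighted average of their quantile functions matches the treated unit's quantile function as closely as possible.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)
    treated = rng.normal(0.0, 1.0, 500)              # placeholder outcome draws
    controls = [rng.normal(m, s, 500) for m, s in [(0.2, 1.1), (-0.3, 0.9), (0.5, 1.3)]]

    grid = np.linspace(0.01, 0.99, 99)
    q_treated = np.quantile(treated, grid)
    Q_controls = np.column_stack([np.quantile(c, grid) for c in controls])

    def loss(w):
        # distance between the treated quantile function and the weighted
        # average of the control quantile functions
        return np.mean((Q_controls @ w - q_treated) ** 2)

    n = Q_controls.shape[1]
    res = minimize(loss, x0=np.full(n, 1.0 / n),
                   bounds=[(0.0, 1.0)] * n,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
                   method="SLSQP")
    weights = res.x
    synthetic_quantiles = Q_controls @ weights       # synthetic distribution for the treated unit
    print(weights.round(3))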


[17] 2501.06221

Optimizing Supply Chain Networks with the Power of Graph Neural Networks

Graph Neural Networks (GNNs) have emerged as transformative tools for modeling complex relational data, offering unprecedented capabilities in tasks like forecasting and optimization. This study investigates the application of GNNs to demand forecasting within supply chain networks using the SupplyGraph dataset, a benchmark for graph-based supply chain analysis. By leveraging advanced GNN methodologies, we enhance the accuracy of forecasting models, uncover latent dependencies, and address temporal complexities inherent in supply chain operations. Comparative analyses demonstrate that GNN-based models significantly outperform traditional approaches, including Multilayer Perceptrons (MLPs) and Graph Convolutional Networks (GCNs), particularly in single-node demand forecasting tasks. The integration of graph representation learning with temporal data highlights GNNs' potential to revolutionize predictive capabilities for inventory management, production scheduling, and logistics optimization. This work underscores the pivotal role of forecasting in supply chain management and provides a robust framework for advancing research and applications in this domain.
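As a rough sketch of one graph-based baseline of the kind compared above (a two-layer GCN for node-level demand regression; the graph, features, and hyperparameters are placeholders, not the SupplyGraph benchmark):

    import torch
    import torch.nn.functional as F
    from torch_geometric.nn import GCNConv

    class DemandGNN(torch.nn.Module):
        def __init__(self, in_dim, hidden_dim=32):
            super().__init__()
            self.conv1 = GCNConv(in_dim, hidden_dim)
            self.conv2 = GCNConv(hidden_dim, 1)   # one demand forecast per node

        def forward(self, x, edge_index):
            h = F.relu(self.conv1(x, edge_index))
            return self.conv2(h, edge_index).squeeze(-1)

    # Toy supply-chain graph: 4 nodes (e.g., products/plants), directed edges as
    # relations, features standing in for lagged demand; everything is synthetic.
    x = torch.randn(4, 8)
    edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]], dtype=torch.long)
    y = torch.randn(4)

    model = DemandGNN(in_dim=8)
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    for _ in range(100):
        opt.zero_grad()
        loss = F.mse_loss(model(x, edge_index), y)
        loss.backward()
        opt.step()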


[18] 2501.06248

Utility-inspired Reward Transformations Improve Reinforcement Learning Training of Language Models

Current methods that train large language models (LLMs) with reinforcement learning feedback often resort to averaging the outputs of multiple reward functions during training. This overlooks crucial aspects of individual reward dimensions and inter-reward dependencies, which can lead to sub-optimal outcomes in generations. In this work, we show how linear aggregation of rewards exhibits some vulnerabilities that can lead to undesired properties of generated text. We then propose a transformation of reward functions inspired by the economic theory of utility functions (specifically the Inada conditions), which enhances sensitivity to low reward values while diminishing sensitivity to already high values. We compare our approach to existing baseline methods that linearly aggregate rewards and show how the Inada-inspired reward feedback is superior to traditional weighted averaging. We quantitatively and qualitatively analyse the differences between the methods, and see that models trained with Inada-transformations score as more helpful while being less harmful.
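A minimal, hypothetical sketch of the contrast drawn above (the paper's exact transformation may differ): a concave transform satisfying Inada-style conditions, such as r^alpha with 0 < alpha < 1, is steep near zero and flat for high rewards, so a very low score on one dimension cannot be fully offset by a high score on another the way it can under linear averaging.

    import numpy as np

    def linear_aggregate(rewards, weights):
        # plain weighted average of reward dimensions
        return np.dot(weights, rewards)

    def inada_aggregate(rewards, weights, alpha=0.5, eps=1e-6):
        # concave power transform: marginal value grows without bound as r -> 0
        # and shrinks toward 0 as r grows, mimicking Inada-type conditions
        return np.dot(weights, np.power(np.asarray(rewards) + eps, alpha))

    w = np.array([0.5, 0.5])
    balanced = np.array([0.5, 0.5])       # e.g., helpfulness and harmlessness scores
    lopsided = np.array([0.95, 0.05])

    print(linear_aggregate(balanced, w), linear_aggregate(lopsided, w))   # both 0.5
    print(inada_aggregate(balanced, w), inada_aggregate(lopsided, w))     # balanced scores higher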


[19] 2501.06834

LLMs Model Non-WEIRD Populations: Experiments with Synthetic Cultural Agents

Despite its importance, studying economic behavior across diverse, non-WEIRD (Western, Educated, Industrialized, Rich, and Democratic) populations presents significant challenges. We address this issue by introducing a novel methodology that uses Large Language Models (LLMs) to create synthetic cultural agents (SCAs) representing these populations. We subject these SCAs to classic behavioral experiments, including the dictator and ultimatum games. Our results demonstrate substantial cross-cultural variability in experimental behavior. Notably, for populations with available data, SCAs' behaviors qualitatively resemble those of real human subjects. For unstudied populations, our method can generate novel, testable hypotheses about economic behavior. By integrating AI into experimental economics, this approach offers an effective and ethical method to pilot experiments and refine protocols for hard-to-reach populations. Our study provides a new tool for cross-cultural economic studies and demonstrates how LLMs can help experimental behavioral research.


[20] 2501.06969

Doubly Robust Inference on Causal Derivative Effects for Continuous Treatments

Statistical methods for causal inference with continuous treatments mainly focus on estimating the mean potential outcome function, commonly known as the dose-response curve. However, it is often not the dose-response curve but its derivative function that signals the treatment effect. In this paper, we investigate nonparametric inference on the derivative of the dose-response curve with and without the positivity condition. Under the positivity and other regularity conditions, we propose a doubly robust (DR) inference method for estimating the derivative of the dose-response curve using kernel smoothing. When the positivity condition is violated, we demonstrate the inconsistency of conventional inverse probability weighting (IPW) and DR estimators, and introduce novel bias-corrected IPW and DR estimators. In all settings, our DR estimator achieves asymptotic normality at the standard nonparametric rate of convergence. Additionally, our approach reveals an interesting connection to nonparametric support and level set estimation problems. Finally, we demonstrate the applicability of our proposed estimators through simulations and a case study of evaluating a job training program.
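As a stylized statement of the target parameter (our notation; the paper's contribution is the doubly robust, kernel-smoothed inference for it, with and without positivity): under unconfoundedness the dose-response curve and its derivative are

    m(a) = \mathbb{E}\big[\,\mathbb{E}[Y \mid A = a, X]\,\big], \qquad \theta(a) = m'(a) = \frac{\partial}{\partial a}\,\mathbb{E}\big[\,\mathbb{E}[Y \mid A = a, X]\,\big],

and it is the derivative θ(a), rather than m(a) itself, that the abstract highlights as the signal of the treatment effect.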