This study investigates the relationship between corporate digital innovation and Environmental, Social, and Governance (ESG) performance, with a specific focus on the mediating role of generative artificial intelligence (GAI) technology adoption. Using a comprehensive panel dataset of 8,000 observations from the CSMAR and WIND databases spanning 2015 to 2023, we employ multiple econometric techniques to examine this relationship. Our findings reveal that digital innovation significantly enhances corporate ESG performance, with GAI technology adoption serving as a crucial mediating mechanism. Specifically, digital innovation positively influences GAI technology adoption, which subsequently improves ESG performance. Furthermore, our heterogeneity analysis indicates that this relationship varies across firm size, industry type, and ownership structure. Finally, our results remain robust after addressing potential endogeneity concerns through instrumental variable estimation, propensity score matching, and difference-in-differences approaches. This research contributes to the growing literature on technology-driven sustainability transformations and offers practical implications for corporate strategy and policy development in promoting sustainable business practices through technological advancement.
This paper identifies and analyzes six key strategies used to exploit the Eurosystem's financial mechanisms, and attempts a quantitative reconstruction: inflating TARGET balances, leveraging collateral swaps followed by defaults, diluting self-imposed regulatory rules, issuing money through Emergency Liquidity Assistance (ELA), acquisitions facilitated via the Agreement on Net Financial Assets (ANFA), and the perpetual (re)issuance of sovereign bonds as collateral. The paper argues that these practices stem from systemic vulnerabilities or deliberate opportunism within the Eurosystem. While it does not advocate for illicit activities, the paper highlights significant weaknesses in the current structure and concludes that comprehensive reforms are urgently needed.
We consider a monopoly insurance market with a risk-neutral profit-maximizing insurer and a consumer with Yaari Dual Utility preferences that distort the given continuous loss distribution. The insurer observes the loss distribution but not the risk attitude of the consumer, proxied by a distortion function drawn from a continuum of types. We characterize the profit-maximizing, incentive-compatible, and individually rational menus of insurance contracts, show that equilibria are separating, and provide key properties thereof. Notably, insurance coverage and premia are monotone in the level of risk aversion; the most risk-averse consumer receives full insurance $(\textit{efficiency at the top})$; the monopoly absorbs all surplus from the least risk-averse consumer; and consumers with a higher level of risk aversion induce a higher expected profit for the insurer. Under certain regularity conditions, equilibrium contracts can be characterized in terms of the marginal loss retention per type of consumer, and they consist of menus of layered deductible contracts, where each such layered structure is determined by the risk type of the consumer. In addition, we examine the effect of a fixed insurance provision cost on equilibria. We show that if the fixed cost is prohibitively high, then there will be no $\textit{ex ante}$ gains from trade. However, when trade occurs, separating equilibrium contracts always outperform pooling equilibrium contracts, and they are identical to those obtained in the absence of fixed costs, except that part of the menu is excluded. The excluded contracts are those designed for consumers with relatively lower risk aversion, who are less valuable to the insurer. Finally, we characterize incentive-efficient menus of contracts in the context of an arbitrary type space.
This paper explores how ethical consumption can transform democratic governance toward sustainability by challenging traditional economic models centered on utility and efficiency. As societal values shift toward transparency, equity, and environmental responsibility, ethical consumers increasingly influence markets. Drawing on White's Kantian economic framework and Inglehart's theory of value change, the paper proposes a model integrating moral imperatives into economic theory. Using a vector bundle approach, it captures evolving ethical preferences, advocating for an inclusive, sustainability-focused economic paradigm aligned with post-materialist values.
This paper was prepared as a comment on "Dynamic Causal Effects in a Nonlinear World: the Good, the Bad, and the Ugly" by Michal Kolesár and Mikkel Plagborg-Møller. We make three comments, including a novel contribution to the literature, showing how a reasonable economic interpretation can potentially be restored for average-effect estimators with negative weights.
A central objective of international large-scale assessment (ILSA) studies is to generate knowledge about the probability distribution of student achievement in each education system participating in the assessment. In this article, we study one of the most fundamental threats that these studies face when justifying the conclusions reached about these distributions: the problem that arises from student non-participation during data collection. ILSA studies have traditionally employed a narrow range of strategies to address non-participation. We examine this problem using tools developed within the framework of partial identification that we tailor to the problem at hand. We demonstrate this approach with application to the International Computer and Information Literacy Study in 2018. By doing so, we bring to the field of ILSA an alternative strategy for identification and estimation of population parameters of interest.
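The partial-identification logic behind this approach can be illustrated with textbook worst-case (Manski-style) bounds on a mean outcome under nonresponse. This is a minimal sketch with made-up numbers, not the tailored bounds developed in the article:

```python
# Worst-case (Manski-style) bounds on mean achievement when some students
# do not participate. Illustrative only: the article tailors sharper
# partial-identification bounds to the ILSA setting.
def worst_case_bounds(scores, response_rate, y_min, y_max):
    """Bound E[Y] when Y is observed only for responders.

    scores: outcomes for responding students
    response_rate: fraction of the population that responded
    y_min, y_max: logical range of the score scale
    """
    observed_mean = sum(scores) / len(scores)
    lower = response_rate * observed_mean + (1 - response_rate) * y_min
    upper = response_rate * observed_mean + (1 - response_rate) * y_max
    return lower, upper

# Hypothetical scores on a 0-1000 scale with 80% participation
lo, hi = worst_case_bounds([400, 500, 600], response_rate=0.8,
                           y_min=0.0, y_max=1000.0)
# bound width = (1 - response_rate) * (y_max - y_min)
```

The width of the bounds grows linearly in the nonresponse rate, which is what makes non-participation a first-order threat to inference.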
We introduce a novel perspective by linking ordered probabilistic choice to copula theory, a mathematical framework for modeling dependencies in multivariate distributions. Each representation of ordered probabilistic choice behavior can be associated with a copula, enabling the analysis of representations through established results from copula theory. We provide functional forms to describe the "extremal" representations of an ordered probabilistic choice behavior and their distinctive structural properties. The resulting functional forms act as an "identification method" that uniquely generates heterogeneous choice types and their weights. These results provide valuable tools for analysts to identify micro-level behavioral heterogeneity from macro-level observable data.
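Two benchmark copulas and the Fréchet-Hoeffding bounds illustrate the machinery the paper builds on. A textbook sketch only; the paper's "extremal" representations are specific constructions beyond these examples:

```python
# Benchmark copulas and the Frechet-Hoeffding bounds that frame any
# "extremal" analysis of dependence. Textbook definitions, not the
# paper's representations.
def pi_copula(u, v):
    """Independence copula: C(u, v) = u * v."""
    return u * v

def m_copula(u, v):
    """Comonotone copula (upper Frechet-Hoeffding bound): min(u, v)."""
    return min(u, v)

def w_copula(u, v):
    """Countermonotone copula (lower Frechet-Hoeffding bound): max(u + v - 1, 0)."""
    return max(u + v - 1.0, 0.0)

# Every bivariate copula C satisfies W(u, v) <= C(u, v) <= M(u, v).
```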
We study dynamic decentralized two-sided matching in which players may encounter unanticipated experiences. As they become aware of these experiences, they may change their preferences over players on the other side of the market. Consequently, they may get ``divorced'' and rematch with other agents, which may lead to further unanticipated experiences, and so on. A matching is stable if there is an absence of pairwise common belief in blocking. Stable matchings can be destabilized by unanticipated experiences. Yet, we show that there exist self-confirming outcomes that are stable and do not lead to further unanticipated experiences. We introduce a natural decentralized matching process that, at each period, assigns probability $1 - \varepsilon$ to the satisfaction of a mutual optimal blocking pair (if one exists) and picks any optimal blocking pair otherwise. The parameter $\varepsilon$ is interpreted as a friction of the matching market. We show that for any decentralized matching process, frictions are necessary for convergence to stability, even without unawareness. Our process converges to self-confirming stable outcomes. Further, we allow for bilateral communication/flirting that changes awareness, and say that a matching is flirt-proof stable if there is no communication leading to pairwise common belief in blocking. We show that our natural decentralized matching process converges to flirt-proof self-confirming outcomes.
This paper analyzes Structural Vector Autoregressions (SVARs) where identification of structural parameters holds locally but not globally. In this case there exists a set of isolated structural parameter points that are observationally equivalent under the imposed restrictions. Although the data do not inform us which observationally equivalent point should be selected, the common frequentist practice is to obtain one as a maximum likelihood estimate and perform impulse response analysis accordingly. For Bayesians, the lack of global identification translates to non-vanishing sensitivity of the posterior to the prior, and the multi-modal likelihood gives rise to computational challenges as posterior sampling algorithms can fail to explore all the modes. This paper overcomes these challenges by proposing novel estimation and inference procedures. We characterize a class of identifying restrictions and circumstances that deliver local but non-global identification, and the resulting number of observationally equivalent parameter values. We propose algorithms to exhaustively compute all admissible structural parameters given reduced-form parameters and utilize them to sample from the multi-modal posterior. In addition, viewing the set of observationally equivalent parameter points as the identified set, we develop Bayesian and frequentist procedures for inference on the corresponding set of impulse responses. An empirical example illustrates our proposal.
Generative Pre-trained Transformers (GPTs), particularly Large Language Models (LLMs) like ChatGPT, have proven effective in content generation and productivity enhancement. However, legal risks associated with these tools lead to adoption variance and concealment of AI use within organizations. This study examines the impact of disclosure on ChatGPT adoption in legal, audit and advisory roles in consulting firms through the lens of agency theory. We conducted a survey experiment to evaluate agency costs in the context of unregulated corporate use of ChatGPT, with a particular focus on how mandatory disclosure influences information asymmetry and misaligned interests. Our findings indicate that in the absence of corporate regulations, such as an AI policy, firms may incur agency costs, which can hinder the full benefits of GPT adoption. While disclosure policies reduce information asymmetry, they do not significantly lower overall agency costs due to managers undervaluing analysts' contributions with GPT use. Finally, we examine the scope of existing regulations in Europe and the United States regarding disclosure requirements, explore the sharing of risk and responsibility within firms, and analyze how incentive mechanisms promote responsible AI adoption.
We propose a formal model for counterfactual estimation with unobserved confounding in "data-rich" settings, i.e., where there are a large number of units and a large number of measurements per unit. Our model provides a bridge between the structural causal model view of causal inference common in the graphical models literature with that of the latent factor model view common in the potential outcomes literature. We show how classic models for potential outcomes and treatment assignments fit within our framework. We provide an identification argument for the average treatment effect, the average treatment effect on the treated, and the average treatment effect on the untreated. For any estimator that has a fast enough estimation error rate for a certain nuisance parameter, we establish it is consistent for these various causal parameters. We then show principal component regression is one such estimator that leads to consistent estimation, and we analyze the minimal smoothness required of the potential outcomes function for consistency.
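Principal component regression, the consistent estimator highlighted above, can be sketched on simulated low-rank data. Dimensions, noise levels, and the factor structure below are made up; the paper's analysis concerns error rates and smoothness conditions this toy omits:

```python
import numpy as np

# Principal component regression (PCR): regress the outcome on the top-k
# principal components of the covariate matrix. Minimal sketch on
# simulated low-rank data; not the paper's estimator or data.
def pcr_fit_predict(X, y, k):
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = U[:, :k] * S[:k]                      # scores on the top-k components
    beta, *_ = np.linalg.lstsq(Z, y - y.mean(), rcond=None)
    return y.mean() + Z @ beta                # in-sample fitted values

rng = np.random.default_rng(0)
f = rng.normal(size=(200, 2))                 # latent factors
X = f @ rng.normal(size=(2, 10)) + 0.1 * rng.normal(size=(200, 10))
y = f[:, 0] + 0.1 * rng.normal(size=200)
yhat = pcr_fit_predict(X, y, k=2)
```

Because the measurements are (approximately) low-rank, the top principal components recover the latent factor space and the regression on component scores fits the outcome well.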
We investigate the impact of the policy-driven expansion of the biodiesel and renewable diesel industry on local soybean prices. Because soybean oil is a key feedstock for biodiesel and renewable diesel, significant investments have been made in new soybean crush facilities and the expansion of existing ones. We quantify the effect of both new and existing soybean plants on the soybean basis using panel data and a difference-in-differences approach. The data available on new plants do not allow us to identify any statistically significant impacts. However, existing plants increase the basis by 9.20 to 23.36 cents per bushel, with the effect diminishing with distance. These results suggest that biofuel policies play a role in supporting rural economies and carry relevant policy implications.
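The identification strategy can be illustrated with the canonical 2x2 difference-in-differences comparison. The numbers below are hypothetical, not the paper's soybean-basis data or estimates:

```python
# Canonical 2x2 difference-in-differences: the effect is the change over
# time for treated units minus the change for control units. Hypothetical
# basis values (cents/bushel) near vs. far from a crush plant; not the
# paper's data or estimates.
def did_2x2(treated_pre, treated_post, control_pre, control_post):
    return (treated_post - treated_pre) - (control_post - control_pre)

effect = did_2x2(treated_pre=-20.0, treated_post=-5.0,
                 control_pre=-22.0, control_post=-19.0)
# effect = 15.0 - 3.0 = 12.0 cents per bushel
```

Subtracting the control group's change nets out common time trends, so the remaining difference is attributed to plant proximity under the parallel-trends assumption.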
When is random choice generated by a decision maker (DM) who is Bayesian-persuaded by a sender? In this paper, I consider a DM whose state-dependent preferences are known to an analyst, yet who chooses stochastically as a function of the state. I provide necessary and sufficient conditions for the dataset to be consistent with the DM being Bayesian persuaded by an unobserved sender who generates a distribution of signals to optimize the sender's ex-ante expected payoff.
Graphon games are a class of games with a continuum of agents, introduced to approximate the strategic interactions in large network games. The first result of this study is an equilibrium existence theorem in graphon games, under the same conditions as those in network games. We prove the existence of an equilibrium in a graphon game with an infinite-dimensional strategy space, under the continuity and quasi-concavity of the utility functions. The second result characterizes Nash equilibria in graphon games as the limit points of asymptotic Nash equilibria in large network games. If a sequence of large network games converges to a graphon game, any convergent sequence of asymptotic Nash equilibria in these large network games also converges to a Nash equilibrium of the graphon game. In addition, for any graphon game and its equilibrium, there exists a sequence of large network games that converges to the graphon game and has asymptotic Nash equilibria converging to the equilibrium. These results suggest that the concept of a graphon game is an idealized limit of large network games as the number of players tends to infinity.
Removing carbon dioxide from the atmosphere may slow climate change and ocean acidification. My approach converts atmospheric carbon dioxide into graphite (CD2G). The net profit for this conversion is ~$381/ton CO2 removed from the atmosphere. At the gigaton scale, CD2G factories will increase the affordability and availability of graphite. Since graphite can be used to make thermal batteries and electrodes for fuel cells and batteries, CD2G factories will help lower the cost of storing renewable energy, which will accelerate the transition to renewable energy. Replacing fossil fuel energy with renewable energy will slow the release of carbon dioxide to the atmosphere, also slowing climate change. Converting atmospheric carbon dioxide into graphite will both generate a profit and slow climate change.
This Element offers a practical guide to estimating conditional marginal effects (how treatment effects vary with a moderating variable) using modern statistical methods. Commonly used approaches, such as linear interaction models, often suffer from unclarified estimands, limited overlap, and restrictive functional forms. This guide begins by clearly defining the estimand and presenting the main identification results. It then reviews and improves upon existing solutions, such as the semiparametric kernel estimator, and introduces robust estimation strategies, including augmented inverse propensity score weighting with Lasso selection (AIPW-Lasso) and double machine learning (DML) with modern algorithms. Each method is evaluated through simulations and empirical examples, with practical recommendations tailored to sample size and research context. All tools are implemented in the accompanying interflex package for R.
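A minimal sketch of the AIPW building block on simulated data with correctly specified nuisance functions; the Element pairs AIPW with Lasso selection or DML and targets conditional (moderated) rather than unconditional effects:

```python
import numpy as np

# Augmented inverse propensity weighting (AIPW) for the average treatment
# effect. Sketch with known nuisances on simulated data; the Element's
# estimators plug in Lasso- or ML-estimated nuisances and condition on
# a moderator.
def aipw_ate(y, d, ps, mu1, mu0):
    """y: outcomes, d: 0/1 treatment, ps: propensity scores,
    mu1/mu0: outcome-model predictions under treatment/control."""
    term1 = mu1 + d * (y - mu1) / ps
    term0 = mu0 + (1 - d) * (y - mu0) / (1 - ps)
    return np.mean(term1 - term0)

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)
d = rng.binomial(1, 0.5, size=n)
y = 2.0 * d + x + 0.5 * rng.normal(size=n)    # true ATE = 2
est = aipw_ate(y, d, np.full(n, 0.5), mu1=x + 2.0, mu0=x)
```

The augmentation term makes the estimator doubly robust: it remains consistent if either the propensity score or the outcome model is correctly specified.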
Convex Hull (CH) pricing, used in US electricity markets and raising interest in Europe, is a pricing rule designed to handle markets with non-convexities such as startup costs and minimum up and down times. In such markets, the market operator makes side payments to generators to cover lost opportunity costs, and CH prices minimize the total "lost opportunity costs", which include both actual losses and missed profit opportunities. These prices can also be obtained by solving a (partial) Lagrangian dual of the original mixed-integer program, where power balance constraints are dualized. Computing CH prices then amounts to minimizing a sum of nonsmooth convex objective functions, where each term depends only on a single generator. The subgradient of each of those terms can be obtained independently by solving smaller mixed-integer programs. In this work, we benchmark a large panel of first-order methods to solve the above dual CH pricing problem. We test several dual methods, most of which not previously considered for CH pricing, namely a proximal variant of the bundle level method, subgradient methods with three different stepsize strategies, two recent parameter-free methods and an accelerated gradient method combined with smoothing. We compare those methods on two representative sets of real-world large-scale instances and complement the comparison with a (Dantzig-Wolfe) primal column generation method shown to be efficient at computing CH prices, for reference. Our numerical experiments show that the bundle proximal level method and two variants of the subgradient method perform the best among all dual methods and compare favorably with the Dantzig-Wolfe primal method.
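The dual structure (minimizing a sum of per-generator nonsmooth convex terms) can be sketched with a plain subgradient method on a toy piecewise-linear objective. In the actual problem, each term's value and subgradient come from solving a per-generator mixed-integer subproblem; the pieces below are made up:

```python
# Plain subgradient method on a sum of piecewise-linear convex terms,
# mimicking the dual CH-pricing structure in which each generator
# contributes one nonsmooth term. Toy instance with made-up pieces.
def f_and_subgrad(lam, terms):
    """Each term is max_j (a_j * lam + b_j); a subgradient is a_j of an active piece."""
    val, grad = 0.0, 0.0
    for pieces in terms:
        a, b = max(pieces, key=lambda p: p[0] * lam + p[1])
        val += a * lam + b
        grad += a
    return val, grad

terms = [[(-1.0, 0.0), (1.0, -2.0)],   # kink at lam = 1
         [(-1.0, 1.0), (2.0, -5.0)]]   # kink at lam = 2
lam = 0.0
for t in range(1, 201):                # diminishing stepsize 1/t
    _, g = f_and_subgrad(lam, terms)
    lam -= g / t
val, _ = f_and_subgrad(lam, terms)     # f attains its minimum (value -1) on [1, 2]
```

The benchmarked methods (bundle level, parameter-free, smoothing-based) refine this basic scheme with better stepsize control and stability.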
Empirical likelihood serves as a powerful tool for constructing confidence intervals in nonparametric regression and regression discontinuity designs (RDD). The original empirical likelihood framework can be naturally extended to these settings using local linear smoothers, with Wilks' theorem holding only when an undersmoothed bandwidth is selected. However, the generalization of bias-corrected versions of empirical likelihood under more realistic conditions is non-trivial and has remained an open challenge in the literature. This paper provides a satisfactory solution by proposing a novel approach, referred to as robust empirical likelihood, designed for nonparametric regression and RDD. The core idea is to construct robust weights that simultaneously achieve bias correction and account for the additional variability introduced by the estimated bias, thereby enabling valid confidence interval construction without additional estimation steps. We demonstrate that Wilks' phenomenon still holds under weaker conditions in nonparametric regression as well as sharp and fuzzy RDD settings. Extensive simulation studies confirm the effectiveness of our proposed approach, showing superior performance over existing methods in terms of coverage probabilities and interval lengths. Moreover, the proposed procedure exhibits robustness to bandwidth selection, making it a flexible and reliable tool for empirical analyses. The practical usefulness is further illustrated through applications to two real datasets.
This paper studies non-parametric estimation and uniform inference for the conditional quantile regression function (CQRF) with covariates measured with error. We consider the case where the distribution of the measurement error is unknown and allowed to be either ordinary or super smooth. We estimate the density of the measurement error using repeated measurements and propose a deconvolution kernel estimator for the CQRF. We derive the uniform Bahadur representation of the proposed estimator and construct uniform confidence bands for the CQRF, uniformly over all covariates and a set of quantile indices, and establish the theoretical validity of the proposed inference. A data-driven approach for selecting the tuning parameter is also included. Monte Carlo simulations and a real data application demonstrate the usefulness of the proposed method.