This article is part of a Living Literature Review exploring topics related to intellectual property, focusing on insights from the economic literature. Our aim is to provide a clear and non-technical introduction to patent rights, making them accessible to graduate students, legal scholars and practitioners, policymakers, and anyone curious about the subject.
We study monotone persuasion in the linear case, where posterior distributions over states are summarized by their mean. We solve the two leading cases where optimal unrestricted signals can be nonmonotone. First, if the objective is S-shaped and the state is discrete, then optimal monotone signals are upper censorship, whereas optimal unrestricted signals may require randomization. Second, if the objective is M-shaped and the state is continuous, then optimal monotone signals are interval disclosure, whereas optimal unrestricted signals may require nonmonotone pooling. We illustrate our results with an application to media censorship.
This paper introduces the two-way common causal covariates (CCC) assumption, which is necessary for an unbiased estimate of the ATT when using time-varying covariates in existing difference-in-differences methods. The two-way CCC assumption requires that the effect of the covariates remains the same across groups and over time. This assumption is implicit in previous literature but has not been explicitly addressed. Through theoretical proofs and a Monte Carlo simulation study, we show that the standard TWFE and CS-DID estimators are biased when the two-way CCC assumption is violated. We propose a new estimator, the Intersection Difference-in-Differences estimator (DID-INT), which can provide an unbiased estimate of the ATT under two-way CCC violations. DID-INT can also identify the ATT under heterogeneous treatment effects and staggered treatment rollout. The estimator relies on parallel trends in the residuals of the outcome variable after appropriately adjusting for covariates; this covariate residualization can recover parallel trends that are hidden from conventional estimators.
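The residualization idea can be illustrated with a minimal sketch (a hypothetical two-period, two-group setup with a constant covariate effect; this is not the paper's DID-INT estimator, just the intuition of differencing residuals rather than raw outcomes):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical two-period, two-group panel with a time-varying covariate x
group = rng.integers(0, 2, n)              # 1 = treated
x0 = rng.normal(size=n)                    # covariate, period 0
x1 = x0 + rng.normal(scale=0.5, size=n)    # covariate, period 1
tau = 1.5                                  # true ATT (assumed)

# Covariate effect (2.0) is the same in both groups and periods: two-way CCC holds
y0 = 2.0 * x0 + 0.3 * group + rng.normal(size=n)
y1 = 2.0 * x1 + 0.3 * group + tau * group + rng.normal(size=n)

def residualize(y, x):
    """OLS-residualize the outcome on an intercept and the covariate."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Difference-in-differences on the residuals instead of the raw outcomes
r0, r1 = residualize(y0, x0), residualize(y1, x1)
diff = r1 - r0
att_hat = diff[group == 1].mean() - diff[group == 0].mean()
print(round(att_hat, 2))  # close to the true ATT of 1.5
```

Differencing the raw outcomes would mix the treatment effect with the covariate dynamics; residualizing first strips out the covariate channel before the comparison.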
A national carbon peak trajectory and optimized provincial carbon allocations are crucial for mitigating regional inequality within the commercial building sector during China's transition to carbon neutrality. This study proposes a top-down model to evaluate carbon trajectories of operational commercial buildings up to 2060. Scenario analysis via Monte Carlo simulation assesses carbon peak values and the corresponding peaking years, and the resulting carbon allocation schemes are optimized both nationwide and provincially. The results reveal that (1) the nationwide carbon peak for commercial building operations is projected to reach 890 (±50) megatons of carbon dioxide (MtCO2) by 2028 (±3.7 years) under the business-as-usual scenario, with only a 7.87% probability of achieving the carbon peak under the decarbonization scenario. (2) Significant disparities will exist among provinces: Shandong's carbon peak, projected at 69.6 (±4.0) MtCO2 by 2029, is roughly 11 times Ningxia's peak of 6.0 (±0.3) MtCO2 by 2027. (3) Guided by the principle of maximizing emission reduction potential, the optimal provincial allocation scheme identifies the three provinces requiring the largest reductions in the commercial sector: Xinjiang (5.6 MtCO2), Shandong (4.8 MtCO2), and Henan (4.7 MtCO2). Overall, this study offers optimized provincial carbon allocation strategies for the commercial building sector in China via dynamic scenario simulations, with the goal of meeting the carbon peak target and progressing toward a low-carbon future for the building sector.
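The Monte Carlo logic behind peak estimates of this kind can be sketched with a toy trajectory (an assumed functional form and invented parameters, not the paper's top-down model): sample uncertain growth and decline rates, compute each trajectory's peak, and summarize across draws.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy emissions path E(t) = E0 * exp(g*t - 0.5*d*t^2): growth rate g is
# gradually overtaken by a deceleration d, producing an interior peak.
# All parameter values below are illustrative assumptions.
t = np.arange(0, 41)                      # years after 2020
E0 = 700.0                                # base-year emissions, MtCO2 (assumed)

peak_vals, peak_years = [], []
for _ in range(5000):
    g = rng.normal(0.04, 0.01)            # early growth rate (assumed)
    d = rng.normal(0.005, 0.001)          # deceleration (assumed)
    e = E0 * np.exp(g * t - 0.5 * d * t**2)
    peak_vals.append(e.max())             # peak value of this draw
    peak_years.append(2020 + t[e.argmax()])  # peaking year of this draw

# Means and spreads across draws play the role of the "890 (±50) by 2028" summaries
print(round(np.mean(peak_vals)), round(np.mean(peak_years), 1))
```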
We propose a computationally straightforward test for the linearity of a spatial interaction function. Such functions arise commonly, either as practitioner-imposed specifications or due to optimizing behaviour by agents. Our test is nonparametric, but based on the Lagrange Multiplier principle and reminiscent of the Ramsey RESET approach. This entails estimation only under the null hypothesis, which yields an easy-to-estimate linear spatial autoregressive model. Monte Carlo simulations show excellent size control and power. An empirical study with Finnish data illustrates the test's practical usefulness, shedding light on debates on the presence of tax competition among neighbouring municipalities.
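The RESET/LM logic can be shown in a stripped-down, non-spatial sketch (this omits the spatial autoregressive structure entirely): fit the linear model under the null, then test whether powers of the fitted values help explain the residuals, with n·R² of the auxiliary regression as the statistic.

```python
import numpy as np

def lm_reset(x, y):
    """RESET-style LM test: fit y on [1, x] under the null of linearity,
    then check whether fitted^2 and fitted^3 explain the residuals."""
    n = y.size
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted = X @ beta
    resid = y - fitted
    # Auxiliary regression of residuals on null regressors plus fitted powers
    Z = np.column_stack([X, fitted**2, fitted**3])
    gamma, *_ = np.linalg.lstsq(Z, resid, rcond=None)
    ssr = ((resid - Z @ gamma) ** 2).sum()
    r2 = 1.0 - ssr / (resid**2).sum()     # resid has mean zero (intercept in X)
    return n * r2                         # asymptotically chi^2(2) under the null

rng = np.random.default_rng(1)
x = rng.normal(size=500)
lm_linear = lm_reset(x, 1.0 + 2.0 * x + rng.normal(size=500))         # null true
lm_nonlin = lm_reset(x, 1.0 + 2.0 * x + x**2 + rng.normal(size=500))  # null false
print(lm_linear, lm_nonlin)   # small under the null, large under nonlinearity
```

Only the null model is ever estimated, which is what makes the approach computationally cheap; the abstract's test applies the same principle to a linear spatial autoregressive null.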
Recent pledges to triple global nuclear capacity by 2050 suggest a "nuclear renaissance," bolstered by reactor concepts such as sodium-cooled fast reactors, high-temperature reactors, and molten salt reactors. These technologies claim to address the challenges of today's high-capacity light-water reactors, i.e., cost overruns, delays, and social acceptance, while also offering additional non-electrical applications. However, this analysis reveals that none of these concepts currently meet the prerequisites of affordability, competitiveness, or commercial availability; we do not assess social acceptability. The cost analysis reveals optimistic FOAK cost assumptions of 5,623 to 9,511 USD per kW, and NOAK cost projections as low as 1,476 USD per kW. At FOAK cost, the applied energy system model includes no nuclear power capacity, indicating that significant cost reductions would be required for these technologies to contribute to energy system decarbonization. In low-cost scenarios, reactors capable of producing high-temperature heat become competitive with other low-carbon technologies. We conclude that, for reactor capacities to increase significantly, a focus on certain technology lines is necessary. However, until a concept becomes viable and commercially available, policymakers should prioritize existing technologies to decarbonize energy systems.
We present a novel Bayesian framework for quantifying uncertainty in portfolio temperature alignment models, leveraging the X-Degree Compatibility (XDC) approach with the scientifically validated Finite Amplitude Impulse Response (FaIR) climate model. This framework significantly advances the widely adopted linear approaches that use the Transient Climate Response to Cumulative CO2 Emissions (TCRE). Developed in collaboration with right°, one of the pioneering companies in portfolio temperature alignment, our methodology addresses key sources of uncertainty, including parameter variability and input emission data across diverse decarbonization pathways. By employing adaptive Markov Chain Monte Carlo (MCMC) methods, we provide robust parametric uncertainty quantification for the FaIR model. To enhance computational efficiency, we integrate a deep learning-based emulator, enabling near real-time simulations. Through practical examples, we demonstrate how this framework improves climate risk management and decision-making in portfolio construction by treating uncertainty as a critical feature rather than a constraint. Moreover, our approach identifies the primary sources of uncertainty, offering valuable insights for future research.
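Parametric uncertainty quantification via MCMC can be illustrated on a deliberately tiny stand-in for a climate model (a one-parameter linear warming relation with invented numbers, not FaIR, and plain random-walk Metropolis rather than the adaptive samplers the abstract refers to):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy forward model: warming = theta * cumulative_emissions + noise.
# theta_true and sigma are illustrative assumptions.
emissions = np.linspace(0.0, 10.0, 50)
theta_true, sigma = 0.45, 0.2
y = theta_true * emissions + rng.normal(scale=sigma, size=emissions.size)

def log_post(theta):
    # Flat prior on theta, Gaussian likelihood (up to an additive constant)
    return -0.5 * ((y - theta * emissions) ** 2).sum() / sigma**2

# Random-walk Metropolis: accept a proposal with probability
# min(1, posterior ratio); the retained draws approximate the posterior.
theta, samples = 1.0, []
lp = log_post(theta)
for _ in range(5000):
    prop = theta + rng.normal(scale=0.05)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)

post = np.array(samples[1000:])   # discard burn-in
print(round(post.mean(), 2), round(post.std(), 3))  # posterior mean and spread
```

The posterior spread of `theta` is the parametric-uncertainty output; in the framework described above, the same role is played by the joint posterior over the FaIR parameters, with an emulator replacing the expensive forward model inside the likelihood.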
We present the results of an experiment documenting racial bias on Meta's Advertising Platform in Brazil and the United States. We find that darker skin complexions are penalized, leading to real economic consequences. For every $1,000 an advertiser spends on ads featuring models with lighter skin complexions, that advertiser would have to spend $1,159 to achieve the same level of engagement using photos of models with darker skin complexions. Meta's budget optimization tool reinforces these viewer biases. When pictures of models with light and dark complexions are allocated a shared budget, Meta funnels roughly 64% of the budget towards photos featuring lighter skin complexions.
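The equivalent-spend figure follows from simple back-of-envelope arithmetic (the per-dollar engagement rates below are illustrative placeholders chosen to reproduce the headline ratio, not the study's data):

```python
# If lighter-skin ads earn e_light engagements per dollar and darker-skin ads
# earn e_dark, matching the engagement of a $1,000 lighter-skin budget costs
# 1000 * e_light / e_dark dollars.
e_light = 1.159   # hypothetical engagements per dollar, lighter-skin models
e_dark = 1.0      # hypothetical engagements per dollar, darker-skin models

equivalent_spend = 1000 * e_light / e_dark
print(round(equivalent_spend))  # 1159
```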
We analyze French housing market prices in the period 1970-2022, with high-resolution data from 2018 to 2022. The spatial correlation of the observed price field exhibits the logarithmic decay characteristic of the two-dimensional random diffusion equation, in which local interactions can create long-range correlations. We introduce a stylized model, used in the past to model spatial regularities in voting patterns, that accounts for both spatial and temporal correlations with reasonable parameter values. Our analysis reveals that price shocks are persistent in time and their amplitude is strongly heterogeneous in space. Our study confirms and quantifies the diffusive nature of housing prices that was anticipated long ago (Clapp et al. 1994, Pollakowski et al. 1997), albeit on much more restricted, local data sets.
In socioeconomic systems, nonequilibrium dynamics naturally stem from the generically non-reciprocal interactions between self-interested agents, whereas equilibrium descriptions often only apply to scenarios where individuals act with the common good in mind. We bridge these two contrasting paradigms by studying a Sakoda-Schelling occupation model with both individualistic and altruistic agents, who, in isolation, follow nonequilibrium and equilibrium dynamics respectively. We investigate how the relative fraction of these two populations impacts the behavior of the system. In particular, we find that when fluctuations in the agents' decision-making process are small (high rationality), even a moderate fraction of altruistic agents mitigates the sub-optimal concentration of individualists in dense clusters. In the regime where fluctuations carry more weight (low rationality), on the other hand, altruism progressively allows the agents to coordinate in a way that is significantly more robust, which we understand by reducing the model to a single effective population studied through the lens of active matter physics. We highlight that localizing the altruistic intervention at the right point in space may be paramount for its effectiveness.