Localisation and circularity in perishable food supply chains are essential for sustainability. Poor allocation of time-sensitive food leads to waste, higher transport emissions, and unnecessary long-distance sourcing. Algorithms used in digital trading platforms and allocation systems can help address these problems by improving how local supply is matched with demand under real operational constraints. This paper examines localisation and circularity in the UK apple supply chain. Apples are an informative case because they are perishable, consumed fresh as dessert fruit, used as inputs across multiple food industries, and generate valuable by-products. We present a weighted-sum mixed-integer linear programming formulation for supply-demand allocation. The model encodes a single global objective with explicit weights on four operational criteria: price matching, quantity alignment, freshness requirements, and geographic distance. These weights make priorities explicit and adjustable, enabling transparent balancing between economic and sustainability considerations. The framework also supports the circulation of unallocated supply across allocation cycles. Using a realistic apple supply-demand dataset, we evaluate allocation outcomes under different priority settings. Results indicate that allocation outcomes are strongly shaped by both the priority settings and the structure of the underlying supply network.
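The weighted-sum idea can be sketched in a few lines. This is a toy illustration only, not the paper's MILP: the supplier/buyer data, the weights, and the penalty for unallocated supply are all invented, and brute-force enumeration stands in for a proper MILP solver.

```python
from itertools import product

# All data below is hypothetical, for illustration only.
suppliers = {
    "farm_a": {"price": 0.80, "qty": 100, "freshness_days": 10, "loc": (0.0, 0.0)},
    "farm_b": {"price": 0.95, "qty": 60,  "freshness_days": 20, "loc": (5.0, 0.0)},
}
buyers = {
    "juicer": {"price": 0.85, "qty": 80, "min_freshness": 7,  "loc": (1.0, 1.0)},
    "grocer": {"price": 1.00, "qty": 50, "min_freshness": 14, "loc": (4.0, 1.0)},
}

# Explicit, adjustable weights on the four criteria named in the abstract.
W = {"price": 1.0, "qty": 1.0, "fresh": 1.0, "dist": 0.5}

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def cost(s, b):
    """Weighted-sum mismatch cost of assigning supplier s to buyer b."""
    if s["freshness_days"] < b["min_freshness"]:
        return float("inf")  # hard freshness requirement
    return (W["price"] * abs(s["price"] - b["price"])
            + W["qty"] * abs(s["qty"] - b["qty"]) / 100
            + W["fresh"] * (s["freshness_days"] - b["min_freshness"]) / 30
            + W["dist"] * dist(s["loc"], b["loc"]))

# Brute-force over one-to-one assignments (a MILP solver would scale this).
best, best_cost = None, float("inf")
for assign in product(list(suppliers) + [None], repeat=len(buyers)):
    used = [a for a in assign if a]
    if len(set(used)) != len(used):
        continue  # each supplier serves at most one buyer in this toy model
    c = sum(cost(suppliers[a], buyers[b]) if a else 10.0  # unallocated penalty
            for a, b in zip(assign, buyers))
    if c < best_cost:
        best, best_cost = dict(zip(buyers, assign)), c

print(best)
```

Raising `W["dist"]` pushes the solution toward local matches; lowering it lets price alignment dominate, which is the kind of priority trade-off the abstract describes.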
Purpose: This paper examines the prevalence of long COVID across different demographic groups in the U.S. and the extent to which workers with impairments associated with long COVID have engaged in pandemic-related remote work. Methods: We use the U.S. Household Pulse Survey to evaluate the proportion of all adults who self-reported (1) having had long COVID and (2) having activity limitations due to long COVID. We also use data from the U.S. Current Population Survey to estimate linear probability regressions for the likelihood of pandemic-related remote work among workers with and without disabilities. Results: Findings indicate that women, Hispanic people, sexual and gender minorities, individuals without four-year college degrees, and people with preexisting disabilities are more likely to have long COVID and to have activity limitations from long COVID. Remote work is a reasonable arrangement for people with such activity limitations and may be an unintentional accommodation for some people who have undisclosed disabilities. However, this study shows that people with disabilities were less likely than people without disabilities to perform pandemic-related remote work. Conclusion: The data suggest this disparity persists because people with disabilities are clustered in jobs that are not amenable to remote work. Employers need to consider other accommodations, especially shorter workdays and flexible scheduling, to hire and retain employees who are struggling with the impacts of long COVID.
Background. Long COVID symptoms (which include brain fog, depression, and fatigue) range from mild to debilitating. Some U.S. health surveys have found that women, lower income individuals, and those with less education are overrepresented among adults with long COVID, but these studies do not address intersectionality. Methods. We use 10 rounds of Household Pulse Survey (HPS) data from 2022 to 2023 to perform an intersectional analysis using descriptive statistics that evaluate the prevalence of long COVID and the interference of long COVID symptoms with day-to-day activities. We also estimate multivariate logistic regressions that relate the odds of having long COVID and activity limitations due to long COVID to a set of individual characteristics and intersections of sex, race/ethnicity, education, and sexual orientation and gender identity. Results. Women, some people of color, sexual and gender minorities, and people without college degrees are more likely to have long COVID and to have activity limitations from long COVID. Intersectional analysis reveals a striking step-like pattern: college-educated men have the lowest prevalence of long COVID while women without college educations have the highest prevalence. Daily activity limitations are more evenly distributed across demographics, but a different step-like pattern is present: fewer women with degrees have activity limitations while limitations are more widespread among men without degrees. Regression results confirm that the odds of long COVID are higher for women, people with less education, Hispanic people, and sexual and gender minorities, while results for the intersectional effects are more nuanced. Conclusions. Results point to systematic disparities in health, highlighting the need for policies that increase access to quality healthcare, strengthen the social safety net, and reduce economic precarity.
This article extends the analysis of Atkinson, Foley, and Ganz in "Beyond the Spoiler Effect: Can Ranked-Choice Voting Solve the Problem of Political Polarization?". Their work uses a one-dimensional spatial model based on survey data from the Cooperative Election Survey (CES) to examine how instant-runoff voting (IRV) and Condorcet methods promote candidate moderation. Their model assumes an idealized electoral environment in which, among other simplifications, all voters possess complete information regarding candidates' ideological positions and all voters provide complete preference rankings. Under these assumptions, their results indicate that Condorcet methods tend to yield winners who are substantially more moderate than those produced by IRV. We construct new models based on CES data that take into account more realistic voter behavior, such as the presence of partial ballots. Our general finding is that under more realistic models the differences between Condorcet methods and IRV largely disappear, implying that in real-world settings the moderating effect of Condorcet methods may not be nearly as strong as what is suggested by more theoretical models.
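The mechanism that separates the two rules, including the handling of partial ballots, can be reproduced with a minimal tabulation sketch. The electorate below is invented (a classic "center squeeze", not the CES data): a compromise candidate C wins every head-to-head matchup yet is eliminated first under IRV.

```python
from collections import Counter

def irv_winner(ballots):
    """Instant-runoff voting; a partial ballot becomes exhausted once all
    of its ranked candidates have been eliminated."""
    remaining = {c for b in ballots for c in b}
    while remaining:
        tally = Counter()
        for b in ballots:
            for c in b:
                if c in remaining:
                    tally[c] += 1  # highest-ranked surviving candidate
                    break
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()) or len(remaining) == 1:
            return leader
        remaining.discard(min(remaining, key=lambda c: tally.get(c, 0)))

def condorcet_winner(ballots):
    """Candidate who beats every rival head-to-head; on each ballot, ranked
    candidates beat unranked ones, and unranked pairs are ties."""
    cands = {c for b in ballots for c in b}
    def pref(x, y):
        return sum(1 for b in ballots
                   if x in b and (y not in b or b.index(x) < b.index(y)))
    for x in cands:
        if all(pref(x, y) > pref(y, x) for y in cands if y != x):
            return x
    return None  # no Condorcet winner exists (a majority cycle)

# Center squeeze with two partial ballots: L 4, R 3, C 2 first-place votes.
ballots = [["L", "C", "R"]] * 4 + [["R", "C", "L"]] * 3 + [["C", "R"]] * 2
```

Here `irv_winner` eliminates C first and elects R, while `condorcet_winner` elects C; the article's question is how often such divergences survive once realistic ballot behavior is modeled.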
Some investors argue that as the number of investors using the same strategy grows, profit per investor declines. Other investors, particularly those using technical analysis, deliberately adopt the same strategies and parameters as their peers and claim that doing so improves returns. These arguments conflict: one holds that sharing a strategy decreases profits, the other that it increases them. Neither claim, however, has been investigated. In this study, we build an agent-based artificial financial market model (ABAFMM) by adding "additional agents" (AAs), comprising additional fundamental agents (AFAs) and additional technical agents (ATAs), to a prior model. The AFAs (ATAs) trade according to a simple fundamental (technical) strategy with a single parameter. We investigate the earnings of AAs as their number increases. We find that increasing the number of AFAs stabilizes market prices, which decreases their profits, whereas increasing the number of ATAs destabilizes market prices, which increases their profits.
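The stabilising/destabilising mechanism can be illustrated with a deliberately simple price-impact simulation. This is not the ABAFMM itself; all coefficients and the demand rules are invented for illustration.

```python
import random
import statistics

def simulate(n_fund, n_tech, steps=500, fundamental=100.0, seed=0):
    """Toy price dynamics: fundamental agents push the price toward its
    fundamental value, technical agents chase the most recent price move."""
    rng = random.Random(seed)
    prices = [fundamental, fundamental]
    for _ in range(steps):
        p, prev = prices[-1], prices[-2]
        fund_demand = n_fund * 0.01 * (fundamental - p)  # mean-reverting
        trend = 1 if p > prev else -1 if p < prev else 0
        tech_demand = n_tech * 0.05 * trend              # momentum-chasing
        prices.append(p + fund_demand + tech_demand + rng.gauss(0, 0.2))
    returns = [b - a for a, b in zip(prices, prices[1:])]
    return statistics.pstdev(returns)  # volatility of price changes

vol_fund_heavy = simulate(n_fund=50, n_tech=5)   # many fundamental agents
vol_tech_heavy = simulate(n_fund=5, n_tech=50)   # many technical agents
```

Consistent with the abstract's finding, the fundamental-heavy market is far less volatile than the technical-heavy one, and in the full model that volatility difference is what drives the opposite profit effects.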
We propose a Random Rule Model (RRM) in which behavior is generated by switching among a small library of transparent, parameter-free decision rules. A differentiable gate learns environment-dependent rule propensities, producing an interpretable mixture over named procedures. We develop a global identification theory based on two verifiable conditions on the observed support. Applied to 10,000 binary lottery problems, rule-gating substantially outperforms structured neural benchmarks based on expected utility and prospect theory, approaching the most flexible benchmark while remaining highly restrictive under permutation-fit tests, and retains predictive content on an independent dataset. Mechanism diagnostics reveal that extreme-outcome screening, salience, and attention rules carry the largest responsibility weights, with systematic shifts along tradeoff complexity and dispersion asymmetry. Robustness checks confirm that the findings are not driven by the ex-ante library choice, marginal dominance relationships, or the availability of additional regressors.
We introduce inference methods for score decompositions, which partition scoring functions for predictive assessment into three interpretable components: miscalibration, discrimination, and uncertainty. Our estimation and inference rely on a linear recalibration of the forecasts, which is applicable to general multi-step ahead point forecasts such as means and quantiles due to its validity for both smooth and non-smooth scoring functions. This approach ensures desirable finite-sample properties, enables asymptotic inference, and establishes a direct connection to the classical Mincer-Zarnowitz regression. The resulting inference framework facilitates tests for equal forecast calibration or discrimination, which yield three key advantages. They enhance the information content of predictive ability tests by decomposing scores, deliver higher statistical power in certain scenarios, and formally connect scoring-function-based evaluation to traditional calibration tests, such as financial backtests. Applications demonstrate the method's utility. We find that for survey inflation forecasts, discrimination abilities can differ significantly even when overall predictive ability does not. In an application to financial risk models, our tests provide deeper insights into the calibration and information content of volatility and Value-at-Risk forecasts. By disentangling forecast accuracy from backtest performance, the method exposes critical shortcomings in current banking regulation.
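For the special case of probability forecasts under the Brier score, the three-component partition is the classical Murphy decomposition, which the sketch below computes by grouping observations on identical forecast values (the paper's recalibration-based estimator generalises this idea beyond binned probabilities; the example data are invented).

```python
from collections import defaultdict

def brier_decomposition(forecasts, outcomes):
    """Murphy decomposition of the mean Brier score:
       score = miscalibration (reliability) - discrimination (resolution)
               + uncertainty."""
    n = len(forecasts)
    base_rate = sum(outcomes) / n
    groups = defaultdict(list)
    for f, o in zip(forecasts, outcomes):
        groups[f].append(o)
    # Reliability: squared gap between each forecast and its realised frequency.
    rel = sum(len(os) * (f - sum(os) / len(os)) ** 2
              for f, os in groups.items()) / n
    # Resolution: how far conditional frequencies spread around the base rate.
    res = sum(len(os) * (sum(os) / len(os) - base_rate) ** 2
              for os in groups.values()) / n
    unc = base_rate * (1 - base_rate)
    score = sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / n
    return score, rel, res, unc

# Perfectly calibrated but only moderately discriminating forecasts:
score, rel, res, unc = brier_decomposition(
    [0.2] * 5 + [0.8] * 5, [0, 0, 0, 0, 1, 1, 1, 1, 0, 1])
```

In this example reliability is exactly zero (the 0.2 forecasts verify 20% of the time, the 0.8 forecasts 80%), so the entire score is uncertainty minus resolution; the paper's tests ask whether such components differ significantly across forecasters.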
Advanced space technology systems often face high fixed costs, can serve limited non-government demand, and are significantly driven by non-market motivations. While increased entrepreneurial activity and national ambitions in space have encouraged planners at public space agencies to develop markets around such systems, the very factors that make the recent growth of the space economy so remarkable also challenge planners' efforts to develop and sustain markets for space-related goods and services. I propose a graphical framework to visualize the number of competitors a market can sustain as a function of the industry's cost structure; the distribution of government support across direct purchases, direct investments, and shared infrastructure; and the magnitude of non-government demand. Building on public goods theory, the framework shows how marginal dollars invested in shared infrastructure can create non-rival benefits supporting more competitors per dollar than direct purchases or subsidies. I demonstrate the framework with a stylized application inspired by NASA's Commercial LEO Destinations program. Under cost and demand conditions consistent with public data, independent stations generate industry-wide losses of $355 million annually, while shared core infrastructure enables industry-wide profits of $154 million annually. I also outline key directions for future research on public investment and market development strategies for advanced technologies.
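The break-even logic behind the framework can be captured in a short function. The numbers below are hypothetical, not from NASA's program or the paper's data; the point is only the non-rivalry mechanism.

```python
def sustainable_competitors(revenue_pool, margin, fixed_cost, infra_offset=0.0):
    """How many firms can each break even when a revenue pool (government
    purchases plus non-government demand) is split evenly?  `infra_offset`
    models non-rival shared infrastructure: one public dollar lowers EVERY
    firm's fixed cost, unlike a direct purchase captured by a single firm."""
    per_firm_fixed = fixed_cost - infra_offset
    if per_firm_fixed <= 0:
        return float("inf")  # fixed costs fully socialised
    return int(revenue_pool * margin // per_firm_fixed)

# Hypothetical industry: $1,000M revenue pool, 40% margin, $100M fixed cost.
standalone = sustainable_competitors(1000, 0.40, 100)
with_shared_infra = sustainable_competitors(1000, 0.40, 100, infra_offset=50)
```

Here a shared-infrastructure subsidy that halves each firm's fixed cost doubles the number of sustainable competitors (from 4 to 8), whereas the same dollars spent as a direct purchase would raise only one firm's revenue; the paper's graphical framework traces this comparison across cost structures.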
We study the economic viability of liquidity provision in decentralised exchanges (DEXs) within a structural framework in which market outcomes are endogenous. We formulate strategic interactions as a sequential game: a risk-averse liquidity provider (LP) sets the supply of liquidity in the DEX and a costly dynamic replication strategy in a centralised exchange (CEX), price-sensitive traders determine trading volumes, and arbitrageurs align prices. We establish existence of equilibrium under general trading functions. We show that DEX liquidity depth is a central instrument for risk management, because the LP adjusts liquidity ex ante to manage exposure. In addition to the classical trade-off between liquidity demand and adverse selection, we identify two further determinants of the viability of liquidity provision: the ratio of risk aversion to replication costs and private information. The ratio governs the aggressiveness of replication: greater relative risk aversion reduces risk but also lowers equilibrium liquidity and its mean profitability. Private information has a non-monotonic effect. For moderate price movements, speculative benefits increase liquidity. For large price movements, anticipated adverse selection and replication costs lead to thinner markets.
The growing adoption of artificial intelligence (AI) technologies has heightened interest in the labor market value of AI-related skills, yet causal evidence on their role in hiring decisions remains scarce. This study examines whether AI skills serve as a positive hiring signal and whether they can offset conventional disadvantages such as older age or lower formal education. We conducted an experimental survey with 1,725 recruiters from the United Kingdom, the United States, and Germany. Using a paired conjoint design, recruiters evaluated hypothetical candidates represented by synthetically designed resumes. Across three occupations (graphic design, office assistance, and software engineering), AI skills significantly increase interview invitation probabilities by approximately 8 to 15 percentage points, compared with candidates without such skills. AI credentials, such as university- or company-backed skill certificates, only lead to a moderate increase in invitation probabilities compared with self-declaration of AI skills. AI skills also partially or fully offset disadvantages related to age and lower education, with effects strongest for office assistants, for whom formal AI certificates play a significant additional compensatory role. Effects are weaker for graphic designers, consistent with more skeptical recruiter attitudes toward AI in creative work. Finally, recruiters' own background and AI usage significantly moderate these effects. Overall, the findings demonstrate that AI skills function as a powerful hiring signal and can mitigate traditional labor market disadvantages, with implications for workers' skill-acquisition strategies and firms' recruitment practices.
The classical theory of efficient allocations of an aggregate endowment in a pure-exchange economy has hitherto primarily focused on the Pareto-efficiency of allocations, under the implicit assumption that transfers between agents are frictionless, and hence costless to the economy. In this paper, we argue that certain transfers cause frictions that result in costs to the economy. We show that these frictional costs are tantamount to a form of subadditivity of the cost of transferring endowments between agents. We suggest an axiomatic study of allocation mechanisms, that is, the mechanisms that transform feasible allocations into other feasible allocations, in the presence of such transfer costs. Among other results, we provide an axiomatic characterization of those allocation mechanisms that admit representations as robust (worst-case) linear allocation mechanisms, as well as those mechanisms that admit representations as worst-case conditional expectations. We call the latter Robust Conditional Mean Allocation mechanisms, and we relate our results to the literature on (decentralized) risk sharing within a pool of agents.
Cryptocurrency markets are highly volatile and influenced by both price trends and market sentiment, making effective portfolio management challenging. This paper proposes a dynamic cryptocurrency portfolio strategy that integrates technical indicators and sentiment analysis to enhance investment decision-making. Market momentum is captured using the 14-day Relative Strength Index (RSI) and Simple Moving Average (SMA), while sentiment signals are extracted from news articles with VADER and further validated using the Google Gemini large language model. These signals are incorporated into expected return estimates and used in a constrained mean-variance optimization framework. Backtesting across multiple cryptocurrencies shows that the integrated approach outperforms traditional benchmarks, including momentum strategy, Bitcoin Long-Short strategy, and an equal-weighted portfolio, achieving stronger risk-adjusted returns and more consistent cumulative growth. Furthermore, comparing the sentiment-only and technical-only strategies shows that incorporating sentiment information alongside technical indicators can lead to more consistent performance gains. However, the strategies exhibit substantial drawdowns that coincide with known periods of market stress, indicating that additional risk-management components are required to improve stability.
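A minimal sketch of the signal pipeline described above: a simple-average RSI and a long-only mean-variance step solved by grid search. The grid search stands in for the paper's constrained optimizer, the RSI variant is the simple-average (not Wilder-smoothed) form, and all inputs are invented.

```python
def rsi(prices, period=14):
    """Relative Strength Index over the last `period` price moves
    (simple-average variant; Wilder's smoothing would differ slightly)."""
    gains = [max(b - a, 0) for a, b in zip(prices, prices[1:])]
    losses = [max(a - b, 0) for a, b in zip(prices, prices[1:])]
    avg_gain = sum(gains[-period:]) / period
    avg_loss = sum(losses[-period:]) / period
    if avg_loss == 0:
        return 100.0  # all moves were gains
    return 100 - 100 / (1 + avg_gain / avg_loss)

def mean_variance_weights(mu, cov, risk_aversion=3.0, step=0.01):
    """Two-asset long-only mean-variance: maximise mu.w - lambda * w'Cov w
    by grid search over the weight simplex."""
    best_w, best_val = (0.0, 1.0), float("-inf")
    for i in range(int(1 / step) + 1):
        w = (i * step, 1 - i * step)
        ret = w[0] * mu[0] + w[1] * mu[1]
        var = (w[0] ** 2 * cov[0][0] + w[1] ** 2 * cov[1][1]
               + 2 * w[0] * w[1] * cov[0][1])
        val = ret - risk_aversion * var
        if val > best_val:
            best_w, best_val = w, val
    return best_w

# In the full strategy, expected returns would blend an RSI/SMA signal with a
# sentiment score; here both inputs are made up.
trend_rsi = rsi(list(range(1, 21)))  # steadily rising prices -> RSI of 100
weights = mean_variance_weights((0.10, 0.05), [[0.04, 0.0], [0.0, 0.04]])
```

With equal variances and uncorrelated assets, the optimizer tilts toward the higher-expected-return asset without going all-in, which is the risk-adjusted behavior the backtests reward.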