New articles on Statistics


[1] 2507.08811

Optimal estimators for threshold-based quality measures

We consider a problem in parametric estimation: given $n$ samples from an unknown distribution, we want to estimate which distribution, from a given one-parameter family, produced the data. Following Schulman and Vazirani, we evaluate an estimator in terms of the chance of being within a specified tolerance of the correct answer, in the worst case. We provide optimal estimators for several families of distributions on $\mathbb{R}$. We prove that for distributions on a compact space, there is always an optimal estimator that is translation-invariant, and we conjecture that this conclusion also holds for any distribution on $\mathbb{R}$. By contrast, we give an example showing it does not hold for a certain distribution on an infinite tree.


[2] 2507.08814

Mapping Dengue Vulnerability in Recife, Brazil: Socioeconomic Insights from PCA and Robust Regression

Based on approximately 90,000 confirmed dengue cases reported in Recife - a major city in northeastern Brazil - between 2015 and 2024, we conducted a neighborhood-level spatial analysis. Socioeconomic and demographic indicators from the 2022 Brazilian Census were integrated to explore factors associated with the spatial distribution of dengue incidence. To address multicollinearity and reduce dimensionality, we applied Principal Component Analysis (PCA) to the explanatory variables. Using the resulting components, we built predictive models via Ordinary Least Squares (OLS), robust regression, and Random Forest algorithms. The OLS model explained 60.4% of the variance in case density (cases per square kilometer), while the robust model - more resilient to outliers - accounted for 43.2%. The Random Forest model, capturing nonlinear patterns, achieved 37.3%. Despite some localized gains from nonlinearity, linear models showed greater overall stability and interpretability. Using PCA scores, we constructed a dengue risk ranking of neighborhoods and compared it to the actual 2024 distribution, achieving an 83.5% match in relative ordering. Our findings indicate that census-based socioeconomic data, when combined with dimensionality reduction and predictive modeling, can effectively estimate urban dengue risk and guide spatially targeted public health strategies.
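
As a rough illustration of the PCA-then-regression workflow described above (a minimal sketch on synthetic data; the sizes, variables, and results are hypothetical and not the authors' code), the snippet below reduces neighborhood-level covariates with PCA and compares OLS, robust (Huber), and random forest fits on the resulting scores:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression, HuberRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_neighborhoods, n_indicators = 90, 8                   # hypothetical sizes
X = rng.normal(size=(n_neighborhoods, n_indicators))    # socioeconomic indicators
y = X @ rng.normal(size=n_indicators) + rng.normal(scale=0.5, size=n_neighborhoods)  # case-density proxy

# Reduce multicollinearity by regressing on a few principal component scores.
Z = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(X))

for name, model in [("OLS", LinearRegression()),
                    ("Huber (robust)", HuberRegressor()),
                    ("Random forest", RandomForestRegressor(random_state=0))]:
    model.fit(Z, y)
    print(f"{name:15s} in-sample R^2 = {r2_score(y, model.predict(Z)):.3f}")
```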


[3] 2507.08896

Predictive Causal Inference via Spatio-Temporal Modeling and Penalized Empirical Likelihood

This study introduces an integrated framework for predictive causal inference designed to overcome limitations inherent in conventional single model approaches. Specifically, we combine a Hidden Markov Model (HMM) for spatial health state estimation with a Multi Task and Multi Graph Convolutional Network (MTGCN) for capturing temporal outcome trajectories. The framework treats temporal and spatial information asymmetrically, regarding them as endogenous variables in the outcome regression and as exogenous variables in the propensity score model, thereby expanding the standard doubly robust treatment effect estimation to jointly enhance bias correction and predictive accuracy. To demonstrate its utility, we focus on clinical domains such as cancer, dementia, and Parkinson's disease, where treatment effects are challenging to observe directly. Simulation studies are conducted to emulate latent disease dynamics and evaluate the model performance under varying conditions. Overall, the proposed framework advances predictive causal inference by structurally adapting to spatiotemporal complexities common in biomedical data.


[4] 2507.08906

Physics-informed machine learning: A mathematical framework with applications to time series forecasting

Physics-informed machine learning (PIML) is an emerging framework that integrates physical knowledge into machine learning models. This physical prior often takes the form of a partial differential equation (PDE) system that the regression function must satisfy. In the first part of this dissertation, we analyze the statistical properties of PIML methods. In particular, we study the properties of physics-informed neural networks (PINNs) in terms of approximation, consistency, overfitting, and convergence. We then show how PIML problems can be framed as kernel methods, making it possible to apply the tools of kernel ridge regression to better understand their behavior. In addition, we use this kernel formulation to develop novel physics-informed algorithms and implement them efficiently on GPUs. The second part explores industrial applications in forecasting energy signals during atypical periods. We present results from the Smarter Mobility challenge on electric vehicle charging occupancy and examine the impact of mobility on electricity demand. Finally, we introduce a physics-constrained framework for designing and enforcing constraints in time series, applying it to load forecasting and tourism forecasting in various countries.


[5] 2507.08921

Are Betting Markets Better than Polling in Predicting Political Elections?

Political elections are one of the most significant aspects of what constitutes the fabric of the United States. In recent history, typical polling estimates have largely lacked precision in predicting election outcomes, which has not only caused uncertainty for American voters, but has also impacted campaign strategies, spending, and fundraising efforts. One intriguing aspect of traditional polling is the types of questions that are asked -- the questions largely focus on asking individuals who they intend to vote for. However, they don't always probe who voters think will win -- regardless of who they want to win. In contrast, online betting markets allow individuals to wager money on who they expect to win, which may capture who individuals think will win in an especially salient manner. The current study used both descriptive and predictive analytics to determine whether data from Polymarket, the world's largest online betting market, provided insights that differed from traditional presidential polling. Overall, findings suggest that Polymarket was superior to polling in predicting the outcome of the 2024 presidential election, particularly in swing states. Results are in alignment with research on "Wisdom of Crowds" theory, which suggests that large groups of people are often accurate in predicting outcomes, even if they are not necessarily experts or closely aligned with the issue at hand. Overall, our results suggest that betting markets, such as Polymarket, could be employed to predict presidential elections and/or other real-world events. However, future investigations are needed to fully unpack and understand the current study's intriguing results, including alignment with Wisdom of Crowds theory and portability to other events.


[6] 2507.08922

The Bayesian Approach to Continual Learning: An Overview

Continual learning is an online paradigm where a learner continually accumulates knowledge from different tasks encountered over sequential time steps. Importantly, the learner is required to extend and update its knowledge without forgetting the learning experience acquired in the past, and while avoiding the need to retrain from scratch. Given its sequential nature and its resemblance to the way humans think, continual learning offers an opportunity to address several challenges which currently stand in the way of widening the range of applicability of deep models to further real-world problems. The continual need to update the learner with sequentially arriving data strikes an inherent congruence between continual learning and Bayesian inference, which provides a principled platform for updating a model's prior beliefs given new data without completely forgetting the knowledge acquired from the old data. This survey inspects different settings of Bayesian continual learning, namely task-incremental learning and class-incremental learning. We begin by discussing definitions of continual learning along with its Bayesian setting, as well as the links with related fields, such as domain adaptation, transfer learning and meta-learning. Afterwards, we introduce a taxonomy offering a comprehensive categorization of algorithms belonging to the Bayesian continual learning paradigm. Meanwhile, we analyze the state-of-the-art while zooming in on some of the most prominent Bayesian continual learning algorithms to date. Furthermore, we shed some light on links between continual learning and developmental psychology, and correspondingly introduce analogies between both fields. We follow that with a discussion of current challenges, and finally conclude with potential areas for future research on Bayesian continual learning.
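
The core Bayesian mechanism the survey is organized around, reusing the posterior after one task as the prior for the next, can be sketched with conjugate Bayesian linear regression (a toy illustration under assumed Gaussian models and made-up data, not any specific algorithm from the survey):

```python
import numpy as np

def posterior_update(mean, prec, X, y, noise_var=0.25):
    """Conjugate Gaussian update: return the posterior (mean, precision) given new data."""
    prec_new = prec + X.T @ X / noise_var
    mean_new = np.linalg.solve(prec_new, prec @ mean + X.T @ y / noise_var)
    return mean_new, prec_new

rng = np.random.default_rng(1)
w_true = np.array([1.0, -2.0])
mean, prec = np.zeros(2), np.eye(2)       # initial prior N(0, I)

for task in range(3):                     # tasks arrive sequentially
    X = rng.normal(size=(20, 2))
    y = X @ w_true + rng.normal(scale=0.5, size=20)
    mean, prec = posterior_update(mean, prec, X, y)   # old posterior becomes the new prior
    print(f"after task {task}: posterior mean = {mean.round(3)}")
```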


[7] 2507.08994

Fixed-Confidence Multiple Change Point Identification under Bandit Feedback

Piecewise constant functions describe a variety of real-world phenomena in domains ranging from chemistry to manufacturing. In practice, it is often required to confidently identify the locations of the abrupt changes in these functions as quickly as possible. For this, we introduce a fixed-confidence piecewise constant bandit problem. Here, we sequentially query points in the domain and receive noisy evaluations of the function under bandit feedback. We provide instance-dependent lower bounds for the complexity of change point identification in this problem. These lower bounds illustrate that an optimal method should focus its sampling efforts adjacent to each of the change points, and the number of samples around each change point should be inversely proportional to the magnitude of the change. Building on this, we devise a simple and computationally efficient variant of Track-and-Stop and prove that it is asymptotically optimal in many regimes. We support our theoretical findings with experimental results in synthetic environments demonstrating the efficiency of our method.


[8] 2507.09007

Possibilistic inferential models: a review

An inferential model (IM) is a model describing the construction of provably reliable, data-driven uncertainty quantification and inference about relevant unknowns. IMs and Fisher's fiducial argument have similar objectives, but a fundamental distinction between the two is that the former doesn't require that uncertainty quantification be probabilistic, offering greater flexibility and allowing for a proof of its reliability. Important recent developments have been made thanks in part to newfound connections with the imprecise probability literature, in particular, possibility theory. The brand of possibilistic IMs studied here are straightforward to construct, have very strong frequentist-like reliability properties, and offer fully conditional, Bayesian-like (imprecise) probabilistic reasoning. This paper reviews these key recent developments, describing the new theory, methods, and computational tools. A generalization of the basic possibilistic IM is also presented, making new and unexpected connections with ideas in modern statistics and machine learning, e.g., bootstrap and conformal prediction.


[9] 2507.09032

Modeling Latent Underdispersion with Discrete Order Statistics

The Poisson distribution is the default choice of likelihood for probabilistic models of count data. However, due to the equidispersion constraint of the Poisson, such models may have predictive uncertainty that is artificially inflated. While overdispersion has been extensively studied, conditional underdispersion -- where latent structure renders data more regular than Poisson -- remains underexplored, in part due to the lack of tractable modeling tools. We introduce a new class of models based on discrete order statistics, where observed counts are assumed to be an order statistic (e.g., minimum, median, maximum) of i.i.d. draws from some discrete parent, such as the Poisson or negative binomial. We develop a general data augmentation scheme that is modular with existing tools tailored to the parent distribution, enabling parameter estimation or posterior inference in a wide range of such models. We characterize properties of Poisson and negative binomial order statistics, exposing interpretable knobs on their dispersion. We apply our framework to four case studies -- i.e., to commercial flight times, COVID-19 case counts, Finnish bird abundance, and RNA sequencing data -- and illustrate the flexibility and generality of the proposed framework. Our results suggest that order statistic models can be built, used, and interpreted in much the same way as commonly-used alternatives, while often obtaining better fit, and offer promise in the wide range of applications in which count data arise.
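
A quick way to see the underdispersion that order-statistic likelihoods can capture is a Monte Carlo comparison of a Poisson draw against the median of several Poisson draws (a sketch only, not the paper's software or data):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, n_parent, n_sims = 10.0, 5, 200_000

draws = rng.poisson(lam, size=(n_sims, n_parent))
single = draws[:, 0]                   # an ordinary Poisson observation
median = np.median(draws, axis=1)      # order statistic: median of 5 i.i.d. Poisson draws

for name, x in [("Poisson(10)", single), ("median of 5 Poisson(10)", median)]:
    print(f"{name:24s} mean = {x.mean():5.2f}  var = {x.var():5.2f}  "
          f"dispersion = {x.var() / x.mean():.2f}")   # dispersion < 1 => underdispersed
```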


[10] 2507.09046

Hierarchical Bayesian Modeling of Total Column Ozone: Unraveling Equatorial Variability over Ethiopia Using Satellite Data and Multisource Covariates

Understanding the spatiotemporal dynamics of total column ozone (TCO) is critical for monitoring ultraviolet (UV) exposure and ozone trends, particularly in equatorial regions where variability remains underexplored. This study investigates monthly TCO over Ethiopia (2012-2022) using a Bayesian hierarchical model implemented via Integrated Nested Laplace Approximation (INLA). The model incorporates nine environmental covariates, capturing meteorological, stratospheric, and topographic influences alongside spatiotemporal random effects. Spatial dependence is modeled using the Stochastic Partial Differential Equation (SPDE) approach, while temporal autocorrelation is handled through an autoregressive structure. The model shows strong predictive accuracy, with correlation coefficients of 0.94 (training) and 0.91 (validation), and RMSE values of 3.91 DU and 4.45 DU, respectively. Solar radiation, stratospheric temperature, and the Quasi-Biennial Oscillation are positively associated with TCO, whereas surface temperature, precipitation, humidity, water vapor, and altitude exhibit negative associations. Random effects highlight persistent regional clusters and seasonal peaks during summer. These findings provide new insights into regional ozone behavior over complex equatorial terrains, contributing to the understanding of the equatorial ozone paradox. The approach demonstrates the utility of combining satellite observations with environmental data in data-scarce regions, supporting improved UV risk monitoring and climate-informed policy planning.


[11] 2507.09057

A monotone single index model for spatially-referenced multistate current status data

Assessment of multistate disease progression is commonplace in biomedical research, such as in periodontal disease (PD). However, the presence of multistate current status endpoints, where only a single snapshot of each subject's progression through disease states is available at a random inspection time after a known starting state, complicates the inferential framework. In addition, these endpoints can be clustered, and spatially associated, where a group of proximally located teeth (within subjects) may experience similar PD status, compared to those distally located. Motivated by a clinical study recording PD progression, we propose a Bayesian semiparametric accelerated failure time model with an inverse-Wishart proposal for accommodating (spatial) random effects, and flexible errors that follow a Dirichlet process mixture of Gaussians. For clinical interpretability, the systematic component of the event times is modeled using a monotone single index model, with the (unknown) link function estimated via a novel integrated basis expansion and basis coefficients endowed with constrained Gaussian process priors. In addition to establishing parameter identifiability, we present scalable computing via a combination of elliptical slice sampling, fast circulant embedding techniques, and smoothing of hard constraints, leading to straightforward estimation of parameters, and state occupation and transition probabilities. Using synthetic data, we study the finite sample properties of our Bayesian estimates, and their performance under model misspecification. We also illustrate our method via application to the real clinical PD dataset.


[12] 2507.09077

Convex Clustering

This survey reviews a clustering method based on solving a convex optimization problem. Despite the plethora of existing clustering methods, convex clustering has several uncommon features that distinguish it from prior art. The optimization problem is free of spurious local minima, and its unique global minimizer is stable with respect to all its inputs, including the data, a tuning parameter, and weight hyperparameters. Its single tuning parameter controls the number of clusters and can be chosen using standard techniques from penalized regression. We give intuition into the behavior and theory for convex clustering as well as practical guidance. We highlight important algorithms and give insight into how their computational costs scale with the problem size. Finally, we highlight the breadth of its uses and flexibility to be combined and integrated with other inferential methods.
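
For concreteness, the convex clustering problem can be written down and handed to a generic convex solver on toy data (a minimal sketch with uniform weights and hypothetical values; the algorithms reviewed in the survey are far more scalable):

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc, 0.2, size=(5, 2)) for loc in ((0, 0), (3, 3))])
n, p = X.shape
gamma = 0.3                                  # single tuning parameter

U = cp.Variable((n, p))                      # one centroid per observation
fit = 0.5 * cp.sum_squares(X - U)
fusion = sum(cp.norm(U[i] - U[j]) for i in range(n) for j in range(i + 1, n))
cp.Problem(cp.Minimize(fit + gamma * fusion)).solve()

# Observations whose centroids coincide (numerically) share a cluster;
# increasing gamma fuses more centroids and hence yields fewer clusters.
centroids = np.round(U.value, 2)
print("distinct centroids:", len(np.unique(centroids, axis=0)))
```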


[13] 2507.09093

Optimal High-probability Convergence of Nonlinear SGD under Heavy-tailed Noise via Symmetrization

We study the high-probability convergence of SGD-type methods for non-convex optimization in the presence of heavy-tailed noise. To combat the heavy-tailed noise, a general black-box nonlinear framework is considered, subsuming nonlinearities like sign, clipping, normalization and their smooth counterparts. Our first result shows that nonlinear SGD (N-SGD) achieves the rate $\widetilde{\mathcal{O}}(t^{-1/2})$, for any noise with unbounded moments and a symmetric probability density function (PDF). Crucially, N-SGD has exponentially decaying tails, matching the performance of linear SGD under light-tailed noise. To handle non-symmetric noise, we propose two novel estimators, based on the idea of noise symmetrization. The first, dubbed Symmetrized Gradient Estimator (SGE), assumes a noiseless gradient at any reference point is available at the start of training, while the second, dubbed Mini-batch SGE (MSGE), uses mini-batches to estimate the noiseless gradient. Combined with the nonlinear framework, we get N-SGE and N-MSGE methods, respectively, both achieving the same convergence rate and exponentially decaying tails as N-SGD, while allowing for non-symmetric noise with unbounded moments and PDF satisfying a mild technical condition, with N-MSGE additionally requiring bounded noise moment of order $p \in (1,2]$. Compared to works assuming noise with bounded $p$-th moment, our results: 1) are based on a novel symmetrization approach; 2) provide a unified framework and relaxed moment conditions; 3) imply optimal oracle complexity of N-SGD and N-SGE, strictly better than existing works when $p < 2$, while the complexity of N-MSGE is close to existing works. Compared to works assuming symmetric noise with unbounded moments, we: 1) provide a sharper analysis and improved rates; 2) facilitate state-dependent symmetric noise; 3) extend the strong guarantees to non-symmetric noise.
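
A toy sketch of the nonlinear framework, here with the clipping nonlinearity on a one-dimensional non-convex objective and Student-t noise with infinite variance (illustrative only; the objective, step size, and clipping level are assumptions, and nothing here reproduces the paper's guarantees):

```python
import numpy as np

def grad(x):                       # gradient of the non-convex f(x) = x^2 + 3 sin^2(x)
    return 2 * x + 3 * np.sin(2 * x)

def run(nonlinearity, steps=5000, lr=0.01, seed=0):
    rng = np.random.default_rng(seed)
    x = 3.0
    for _ in range(steps):
        g = grad(x) + rng.standard_t(df=1.5)   # heavy-tailed noise (infinite variance)
        x -= lr * nonlinearity(g)
    return x

clip = lambda g, tau=1.0: g * min(1.0, tau / (abs(g) + 1e-12))   # clipping nonlinearity
print("plain SGD   ends at x =", round(run(lambda g: g), 3))
print("clipped SGD ends at x =", round(run(clip), 3))            # global minimum is x = 0
```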


[14] 2507.09103

CoVAE: Consistency Training of Variational Autoencoders

Current state-of-the-art generative approaches frequently rely on a two-stage training procedure, where an autoencoder (often a VAE) first performs dimensionality reduction, followed by training a generative model on the learned latent space. While effective, this introduces computational overhead and increased sampling times. We challenge this paradigm by proposing Consistency Training of Variational AutoEncoders (CoVAE), a novel single-stage generative autoencoding framework that adopts techniques from consistency models to train a VAE architecture. The CoVAE encoder learns a progressive series of latent representations with increasing encoding noise levels, mirroring the forward processes of diffusion and flow matching models. This sequence of representations is regulated by a time-dependent $\beta$ parameter that scales the KL loss. The decoder is trained using a consistency loss with variational regularization, which reduces to a conventional VAE loss at the earliest latent time. We show that CoVAE can generate high-quality samples in one or a few steps without the use of a learned prior, significantly outperforming equivalent VAEs and other single-stage VAE methods. Our approach provides a unified framework for autoencoding and diffusion-style generative modeling and a viable route to one-step, high-performance generative autoencoding. Our code is publicly available at this https URL.


[15] 2507.09110

Sharp Trade-Offs in High-Dimensional Inference via 2-Level SLOPE

Among techniques for high-dimensional linear regression, Sorted L-One Penalized Estimation (SLOPE) generalizes the LASSO via an adaptive $l_1$ regularization that applies heavier penalties to larger coefficients in the model. To achieve such adaptivity, SLOPE requires the specification of a complex hierarchy of penalties, i.e., a monotone penalty sequence in $\mathbb{R}^p$, in contrast to a single penalty scalar for LASSO. Tuning this sequence when $p$ is large poses a challenge, as brute force search over a grid of values is computationally prohibitive. In this work, we study the 2-level SLOPE, an important subclass of SLOPE, with only three hyperparameters. We demonstrate both empirically and analytically that 2-level SLOPE not only preserves the advantages of general SLOPE -- such as improved mean squared error and overcoming the Donoho-Tanner power limit -- but also exhibits computational benefits by reducing the penalty hyperparameter space. In particular, we prove that 2-level SLOPE admits a sharp, theoretically tight characterization of the trade-off between true positive proportion (TPP) and false discovery proportion (FDP), contrasting with general SLOPE where only upper and lower bounds are known. Empirical evaluations further underscore the effectiveness of 2-level SLOPE in settings where predictors exhibit high correlation, when the noise is large, or when the underlying signal is not sparse. Our results suggest that 2-level SLOPE offers a robust, scalable alternative to both LASSO and general SLOPE, making it particularly suited for practical high-dimensional data analysis.
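
A minimal sketch of what a 2-level penalty sequence looks like and how the sorted-L1 (SLOPE) penalty is evaluated (the parameterization below, two penalty levels plus a split point, is chosen for illustration and need not match the paper's three hyperparameters exactly):

```python
import numpy as np

def two_level_sequence(p, lam_high, lam_low, k):
    """Monotone penalty sequence: lam_high for the k largest coefficients, lam_low elsewhere."""
    assert lam_high >= lam_low >= 0 and 1 <= k <= p
    return np.concatenate([np.full(k, lam_high), np.full(p - k, lam_low)])

def slope_penalty(beta, lam):
    """Sorted-L1 penalty: sum_i lam_i * |beta|_(i), with |beta| sorted in decreasing order."""
    return np.sum(lam * np.sort(np.abs(beta))[::-1])

p = 10
lam = two_level_sequence(p, lam_high=2.0, lam_low=0.5, k=3)
beta = np.array([5.0, -3.0, 0.0, 1.0, 0.0, 0.0, -0.2, 0.0, 0.0, 4.0])
print("penalty sequence:", lam)
print("SLOPE penalty   :", slope_penalty(beta, lam))
```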


[16] 2507.09119

A Moment-Based Generalization to Post-Prediction Inference

Artificial intelligence (AI) and machine learning (ML) are increasingly used to generate data for downstream analyses, yet naively treating these predictions as true observations can lead to biased results and incorrect inference. Wang et al. (2020) proposed a method, post-prediction inference, which calibrates inference by modeling the relationship between AI/ML-predicted and observed outcomes in a small, gold-standard sample. Since then, several methods have been developed for inference with predicted data. We revisit Wang et al. in light of these recent developments. We reflect on their assumptions and offer a simple extension of their method which relaxes these assumptions. Our extension (1) yields unbiased point estimates under standard conditions and (2) incorporates a simple scaling factor to preserve calibration variability. In extensive simulations, we show that our method maintains nominal Type I error rates, reduces bias, and achieves proper coverage.
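
The problem being addressed, bias from treating AI/ML predictions as observed outcomes, is easy to reproduce on synthetic data (a sketch of the setup only, with made-up generating parameters; it does not implement the proposed estimator):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_gold, n_unlab = 150, 5000
x = rng.normal(size=n_gold + n_unlab)
y = 1.5 * x + rng.normal(size=x.size)                        # true outcome (slope 1.5)
y_hat = 0.7 * y + 0.4 + rng.normal(scale=0.6, size=x.size)   # imperfect ML predictions

gold = sm.OLS(y[:n_gold], sm.add_constant(x[:n_gold])).fit()        # small labeled sample only
naive = sm.OLS(y_hat[n_gold:], sm.add_constant(x[n_gold:])).fit()   # predictions treated as outcomes
print(f"gold-standard only: slope = {gold.params[1]:.2f} (SE {gold.bse[1]:.2f})")
print(f"naive plug-in     : slope = {naive.params[1]:.2f} (SE {naive.bse[1]:.2f})  <- biased")
```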


[17] 2507.09121

Poisson Approximate Likelihood versus the block particle filter for a spatiotemporal measles model

Filtering algorithms for high-dimensional nonlinear non-Gaussian partially observed stochastic processes provide access to the likelihood function and hence enable likelihood-based or Bayesian inference for this methodologically challenging class of models. A novel Poisson approximate likelihood (PAL) filter was introduced by Whitehouse et al. (2023). PAL employs a Poisson approximation to conditional densities, offering a fast approximation to the likelihood function for a certain subset of partially observed Markov process models. PAL was demonstrated on an epidemiological metapopulation model for measles, specifically, a spatiotemporal model for disease transmission within and between cities. At face value, Table 3 of Whitehouse et al. (2023) suggests that PAL considerably outperforms previous analyses as well as an ARMA benchmark model. We show that PAL does not outperform a block particle filter and that the lookahead component of PAL was implemented in a way that introduces substantial positive bias in the log-likelihood estimates. Therefore, the results of Table 3 of Whitehouse et al. (2023) do not accurately represent the true capabilities of PAL.


[18] 2507.09128

A Generalization Theory for Zero-Shot Prediction

A modern paradigm for generalization in machine learning and AI consists of pre-training a task-agnostic foundation model, generally obtained using self-supervised and multimodal contrastive learning. The resulting representations can be used for prediction on a downstream task for which no labeled data is available. We present a theoretical framework to better understand this approach, called zero-shot prediction. We identify the target quantities that zero-shot prediction aims to learn, or learns in passing, and the key conditional independence relationships that enable its generalization ability.


[19] 2507.09148

A Randomized Algorithm for Sparse PCA based on the Basic SDP Relaxation

Sparse Principal Component Analysis (SPCA) is a fundamental technique for dimensionality reduction, and is NP-hard. In this paper, we introduce a randomized approximation algorithm for SPCA, which is based on the basic SDP relaxation. Our algorithm has an approximation ratio of at most the sparsity constant with high probability, if called enough times. Under a technical assumption, which is consistently satisfied in our numerical tests, the average approximation ratio is also bounded by $\mathcal{O}(\log{d})$, where $d$ is the number of features. We show that this technical assumption is satisfied if the SDP solution is low-rank, or has exponentially decaying eigenvalues. We then present a broad class of instances for which this technical assumption holds. We also demonstrate that in a covariance model, which generalizes the spiked Wishart model, our proposed algorithm achieves a near-optimal approximation ratio. We demonstrate the efficacy of our algorithm through numerical results on real-world datasets.
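
A generic sketch of the basic SDP relaxation of sparse PCA with a simple randomized rounding step, written with cvxpy for illustration (the covariance, rounding rule, and number of draws are assumptions; this is not the paper's algorithm and carries none of its approximation guarantees):

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
d, k = 8, 3                                    # number of features, sparsity level
v = np.zeros(d); v[:k] = 1 / np.sqrt(k)        # planted sparse direction
Sigma = np.eye(d) + 4.0 * np.outer(v, v)       # spiked covariance matrix

X = cp.Variable((d, d), PSD=True)              # relaxation of the rank-one matrix vv^T
constraints = [cp.trace(X) == 1, cp.sum(cp.abs(X)) <= k]
cp.Problem(cp.Maximize(cp.trace(Sigma @ X)), constraints).solve()

# Randomized rounding: draw from N(0, X), keep the k largest entries, renormalize.
w, V = np.linalg.eigh(X.value)
L = V * np.sqrt(np.clip(w, 0, None))
best_val, best_x = -np.inf, None
for _ in range(50):
    g = L @ rng.normal(size=d)
    x = np.zeros(d)
    idx = np.argsort(-np.abs(g))[:k]
    x[idx] = g[idx]
    x /= np.linalg.norm(x)
    if x @ Sigma @ x > best_val:
        best_val, best_x = x @ Sigma @ x, x
print("explained variance:", round(best_val, 3), " support:", np.nonzero(best_x)[0])
```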


[20] 2507.09156

Robust designs for Gaussian process emulation of computer experiments

We study in this paper two classes of experimental designs, support points and projected support points, which can provide robust and effective emulation of computer experiments with Gaussian processes. These designs have two important properties that are appealing for surrogate modeling of computer experiments. First, the proposed designs are robust: they enjoy good emulation performance over a wide class of smooth and rugged response surfaces. Second, they can be efficiently generated for large designs in high dimensions using difference-of-convex programming. In this work, we present a theoretical framework that investigates the above properties, then demonstrate their effectiveness for Gaussian process emulation in a suite of numerical experiments.


[21] 2507.09178

The BdryMatérn GP: Reliable incorporation of boundary information on irregular domains for Gaussian process modeling

Gaussian processes (GPs) are broadly used as surrogate models for expensive computer simulators of complex phenomena. However, a key bottleneck is that training data must be generated from this expensive simulator and can thus be highly limited. A promising solution is to supplement the learning model with boundary information from scientific knowledge. However, despite recent work on boundary-integrated GPs, such models largely cannot accommodate boundary information on irregular (i.e., non-hypercube) domains, and do not provide sample path smoothness control or approximation error analysis, both of which are important for reliable surrogate modeling. We thus propose a novel BdryMatérn GP modeling framework, which can reliably integrate Dirichlet, Neumann and Robin boundaries on an irregular connected domain with a boundary set that is twice-differentiable almost everywhere. Our model leverages a new BdryMatérn covariance kernel derived in path integral form via a stochastic partial differential equation formulation. Similar to the GP with Matérn kernel, we prove that sample paths from the BdryMatérn GP satisfy the desired boundaries with smoothness control on their derivatives. We further present an efficient approximation procedure for the BdryMatérn kernel using finite element modeling with rigorous error analysis. Finally, we demonstrate the effectiveness of the BdryMatérn GP in a suite of numerical experiments on incorporating broad boundaries on irregular domains.


[22] 2507.09302

The Multiplicative Instrumental Variable Model

The instrumental variable (IV) design is a common approach to address hidden confounding bias. For validity, an IV must impact the outcome only through its association with the treatment. In addition, IV identification has required a homogeneity condition such as monotonicity or no unmeasured common effect modifier between the additive effect of the treatment on the outcome, and that of the IV on the treatment. In this work, we introduce a novel identifying condition of no multiplicative interaction between the instrument and the unmeasured confounder in the treatment model, which we establish nonparametrically identifies the average treatment effect on the treated (ATT). For inference, we propose an estimator that is multiply robust and semiparametric efficient, while allowing for the use of machine learning to adaptively estimate required nuisance functions via cross-fitting. Finally, we illustrate the methods in extended simulations and an application on the causal impact of a job training program on subsequent earnings.


[23] 2507.09317

Uncovering symmetric and asymmetric species associations from community and environmental data

There is little doubt that biotic interactions shape community assembly and ultimately the spatial co-variations between species. There is hope that the signal of these biotic interactions can be observed and retrieved by investigating the spatial associations between species while accounting for the direct effects of the environment. By definition, biotic interactions can be both symmetric and asymmetric. Yet, most models that attempt to retrieve species associations from co-occurrence or co-abundance data internally assume symmetric relationships between species. Here, we propose and validate a machine-learning framework able to retrieve bidirectional associations by analyzing species community and environmental data. Our framework (1) models pairwise species associations as directed influences from a source to a target species, parameterized with two species-specific latent embeddings: the effect of the source species on the community, and the response of the target species to the community; and (2) jointly fits these associations within a multi-species conditional generative model with different modes of interactions between environmental drivers and biotic associations. Using both simulated and empirical data, we demonstrate the ability of our framework to recover known asymmetric and symmetric associations and highlight the properties of the learned association networks. By comparing our approach to other existing models such as joint species distribution models and probabilistic graphical models, we show its superior capacity to retrieve symmetric and asymmetric interactions. The framework is intuitive, modular and broadly applicable across various taxonomic groups.


[24] 2507.09358

An Integrated and Coherent Framework for Point Estimation and Hypothesis Testing with Concurrent Controls in Platform Trials

A platform trial with a master protocol provides an infrastructure to ethically and efficiently evaluate multiple treatment options in multiple diseases. Given that certain study drugs can enter or exit a platform trial, the randomization ratio may change over time, and this potential modification is not necessarily dependent on accumulating outcomes data. It is recommended that the analysis should account for time periods with different randomization ratios, with possible approaches such as Inverse Probability of Treatment Weighting (IPTW) or a weighted approach by the time period. To guide practical implementation, we specifically investigate the relationship between these two estimators, and further derive an optimal estimator within this class to gain efficiency. Practical guidance is provided on how to construct estimators based on observed data to approximate this unknown weight. The connection between the proposed method and weighted least squares is also studied. We conduct simulation studies to demonstrate that the proposed method can control the type I error rate with a reduced estimation bias, and can also achieve satisfactory power and mean squared error (MSE) with computational efficiency. Another appealing feature of our framework is the ability to provide consistent conclusions for both point estimation and hypothesis testing. This is critical to the interpretation of clinical trial results. The proposed method is further applied to the Accelerating COVID-19 Therapeutic Interventions and Vaccines (ACTIV) platform trial.
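
The two estimators being related, IPTW with period-specific assignment probabilities and a weighted-by-period difference of means, can be sketched on toy data (illustrative only, with made-up periods and effects; the paper's optimal weighting is not implemented here):

```python
import numpy as np

rng = np.random.default_rng(0)
true_effect = 0.3
z_list, y_list, p_list, period_diffs, period_sizes = [], [], [], [], []

# Two time periods with different randomization ratios (1:1, then 2:1).
for k, (n, p_treat) in enumerate([(200, 0.5), (300, 2 / 3)]):
    z = rng.binomial(1, p_treat, n)                      # treatment assignment
    y = true_effect * z + 0.2 * k + rng.normal(size=n)   # outcome with a period effect
    z_list.append(z)
    y_list.append(y)
    p_list.append(np.full(n, p_treat))
    period_diffs.append(y[z == 1].mean() - y[z == 0].mean())
    period_sizes.append(n)

z, y, p = np.concatenate(z_list), np.concatenate(y_list), np.concatenate(p_list)

# IPTW: weight each subject by the inverse of its period-specific assignment probability.
w = z / p + (1 - z) / (1 - p)
iptw = (np.sum(w * z * y) / np.sum(w * z)
        - np.sum(w * (1 - z) * y) / np.sum(w * (1 - z)))

# Weighted-by-period: within-period arm differences combined by period sample size.
by_period = np.average(period_diffs, weights=period_sizes)
print(f"IPTW estimate      : {iptw:.3f}")
print(f"weighted-by-period : {by_period:.3f}   (true effect {true_effect})")
```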


[25] 2507.09370

A Latent Position Co-Clustering Model for Multiplex Networks

Multiplex networks are increasingly common across diverse domains, motivating the development of clustering methods that uncover patterns at multiple levels. Existing approaches typically focus on clustering either entire networks or nodes within a single network. We address the lack of a unified latent space framework for simultaneous network- and node-level clustering by proposing a latent position co-clustering model (LaPCoM), based on a hierarchical mixture-of-mixtures formulation. LaPCoM enables co-clustering of networks and their constituent nodes, providing joint dimension reduction and two-level cluster detection. At the network level, it identifies global homogeneity in topological patterns by grouping networks that share similar latent representations. At the node level, it captures local connectivity and community patterns. The model adopts a Bayesian nonparametric framework using a mixture of finite mixtures, which places priors on the number of clusters at both levels and incorporates sparse priors to encourage parsimonious clustering. Inference is performed via Markov chain Monte Carlo with automatic selection of the number of clusters. LaPCoM accommodates both binary and count-valued multiplex data. Simulation studies and comparisons with existing methods demonstrate accurate recovery of latent structure and clusters. Applications to real-world social multiplexes reveal interpretable network-level clusters aligned with context-specific patterns, and node-level clusters reflecting social patterns and roles.


[26] 2507.09380

Robust Spatiotemporal Epidemic Modeling with Integrated Adaptive Outlier Detection

In epidemic modeling, outliers can distort parameter estimation and ultimately lead to misguided public health decisions. Although there are existing robust methods that can mitigate this distortion, the ability to simultaneously detect outliers is equally vital for identifying potential disease hotspots. In this work, we introduce a robust spatiotemporal generalized additive model (RST-GAM) to address this need. We accomplish this with a mean-shift parameter to quantify and adjust for the effects of outliers and rely on adaptive Lasso regularization to model the sparsity of outlying observations. We use univariate polynomial splines and bivariate penalized splines over triangulations to estimate the functional forms and a data-thinning approach for data-adaptive weight construction. We derive a scalable proximal algorithm to estimate model parameters by minimizing a convex negative log-quasi-likelihood function. Our algorithm uses adaptive step-sizes to ensure global convergence of the resulting iterate sequence. We establish error bounds and selection consistency for the estimated parameters and demonstrate our model's effectiveness through numerical studies under various outlier scenarios. Finally, we demonstrate the practical utility of RST-GAM by analyzing county-level COVID-19 infection data in the United States, highlighting its potential to inform public health decision-making.


[27] 2507.09388

Optimal Differentially Private Ranking from Pairwise Comparisons

Data privacy is a central concern in many applications involving ranking from incomplete and noisy pairwise comparisons, such as recommendation systems, educational assessments, and opinion surveys on sensitive topics. In this work, we propose differentially private algorithms for ranking based on pairwise comparisons. Specifically, we develop and analyze ranking methods under two privacy notions: edge differential privacy, which protects the confidentiality of individual comparison outcomes, and individual differential privacy, which safeguards potentially many comparisons contributed by a single individual. Our algorithms--including a perturbed maximum likelihood estimator and a noisy count-based method--are shown to achieve minimax optimal rates of convergence under the respective privacy constraints. We further demonstrate the practical effectiveness of our methods through experiments on both simulated and real-world data.
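
A generic noisy count-based release (not necessarily the paper's exact mechanism) can be sketched as follows: add Laplace noise to each item's pairwise win count and rank by the noisy counts.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_comparisons, epsilon = 5, 2000, 1.0
strength = np.array([2.0, 1.5, 1.0, 0.5, 0.0])        # Bradley-Terry-style scores (assumed)

wins = np.zeros(n_items)
for _ in range(n_comparisons):
    i, j = rng.choice(n_items, size=2, replace=False)
    p_i_wins = 1 / (1 + np.exp(strength[j] - strength[i]))
    winner = i if rng.random() < p_i_wins else j
    wins[winner] += 1

# One comparison changes two win counts by at most 1 each, so the L1 sensitivity of the
# win-count vector is 2; Laplace noise with scale 2/epsilon then gives epsilon-edge
# differential privacy for this release (sketch-level accounting only).
noisy_wins = wins + rng.laplace(scale=2 / epsilon, size=n_items)
print("true ranking   :", np.argsort(-wins))
print("private ranking:", np.argsort(-noisy_wins))
```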


[28] 2507.09468

Semiparametric Regression Models for Explanatory Variables with Missing Data due to Detection Limit

Detection limit (DL) has become an increasingly ubiquitous issue in statistical analyses of biomedical studies, such as cytokine, metabolite and protein analysis. In regression analysis, if an explanatory variable is left-censored due to concentrations below the DL, one may limit analyses to observed data. In many studies, additional, or surrogate, variables are available to model, and incorporating such auxiliary modeling information into the regression model can improve statistical power. Although methods have been developed along this line, almost all are limited to parametric models for both the regression and left-censored explanatory variable. While some recent work has considered semiparametric regression for the censored DL-affected explanatory variable, the regression of primary interest remains parametric, which not only makes it prone to biased estimates, but also suffers from high computational cost and inefficiency due to maximizing an extremely complex likelihood function and bootstrap inference. In this paper, we propose a new approach by considering semiparametric generalized linear models (SPGLM) for the primary regression and parametric or semiparametric models for the DL-affected explanatory variable. The semiparametric-semiparametric combination provides the most robust inference, while the semiparametric-parametric combination enables more efficient inference. The proposed approach is also much easier to implement and allows for leveraging sample splitting and cross fitting (SSCF) to improve computational efficiency in variance estimation. In particular, our approach improves computational efficiency over the bootstrap by a factor of 450. We use simulated and real study data to illustrate the approach.


[29] 2507.09494

An Algorithm for Identifying Interpretable Subgroups With Elevated Treatment Effects

We introduce an algorithm for identifying interpretable subgroups with elevated treatment effects, given an estimate of individual or conditional average treatment effects (CATE). Subgroups are characterized by "rule sets" -- easy-to-understand statements of the form (Condition A AND Condition B) OR (Condition C) -- which can capture high-order interactions while retaining interpretability. Our method complements existing approaches for estimating the CATE, which often produce high-dimensional and uninterpretable results, by summarizing and extracting critical information from fitted models to aid decision making, policy implementation, and scientific understanding. We propose an objective function that trades off subgroup size and effect size, and varying the hyperparameter that controls this trade-off results in a "frontier" of Pareto optimal rule sets, none of which dominates the others across all criteria. Valid inference is achievable through sample splitting. We demonstrate the utility and limitations of our method using simulated and empirical examples.


[30] 2507.09559

The Use of Variational Inference for Lifetime Data with Spatial Correlations

Lifetime data with spatial correlations are often collected for analysis in modern engineering, clinical, and medical applications. For such spatial lifetime data, statistical models usually account for the spatial dependence through spatial random effects, such as the cumulative exposure model and the proportional hazards model. For these models, the Bayesian estimation is commonly used for model inference, but often encounters computational challenges when the number of spatial locations is large. The conventional Markov Chain Monte Carlo (MCMC) methods for sampling the posterior can be time-consuming. In this case-study paper, we investigate the capability of variational inference (VI) for the model inference on spatial lifetime data, aiming for a good balance between the estimation accuracy and computational efficiency. Specifically, the VI methods with different divergence metrics are investigated for the spatial lifetime models. In the case study, the Titan GPU lifetime data and the pine tree lifetime data are used to examine the VI methods in terms of their computational advantage and estimation accuracy.


[31] 2507.09584

Edgeworth corrections for the spiked eigenvalues of non-Gaussian sample covariance matrices with applications

Yang and Johnstone (2018) established an Edgeworth correction for the largest sample eigenvalue in a spiked covariance model under the assumption of Gaussian observations, leaving the extension to non-Gaussian settings as an open problem. In this paper, we address this issue by establishing first-order Edgeworth expansions for spiked eigenvalues in both single-spike and multi-spike scenarios with non-Gaussian data. Leveraging these expansions, we construct more accurate confidence intervals for the population spiked eigenvalues and propose a novel estimator for the number of spikes. Simulation studies demonstrate that our proposed methodology outperforms existing approaches in both robustness and accuracy across a wide range of settings, particularly in low-dimensional cases.


[32] 2507.09634

Correction for Weak IV Bias and Winner's Curse in Mendelian Randomization Egger Regression: Rerandomized Egger estimator

In two-sample Mendelian randomization (MR), Egger regression is widely used as a sensitivity analysis when directional pleiotropy is detected. However, the increasing complexity of modern MR studies, characterized by many weak instruments, renders the original Egger method less efficient. We first identify the source of weak instrument bias in Egger regression and introduce a debiased Egger (dEgger) estimator that restores consistency and asymptotic normality under substantially weaker conditions. To boost statistical power and ensure the validity of results, we then embed a random instrument selection procedure and present the rerandomized Egger (REgger) estimator along with an associated directional pleiotropy test. Recognizing the challenge of obtaining closed-form variances, we derive simple regression-residual-based variance estimators by truncating higher-order terms. The REgger estimator simultaneously removes the weak instrument bias and winner's curse while retaining robustness to directional pleiotropy, and is asymptotically normal when the effective sample size and post-selection instrument count are sufficiently large. Under balanced pleiotropy, REgger matches the rerandomized inverse-variance-weighted estimator, differing only in having marginally wider confidence intervals; under directional pleiotropy, it achieves substantially greater precision. Extensive simulations and real-data analyses confirm REgger's superior statistical properties, making it a valuable addition to two-sample MR sensitivity analyses.
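
For reference, classical MR-Egger regression, the baseline this paper corrects, is a weighted regression of SNP-outcome effects on SNP-exposure effects with an intercept (a sketch on simulated summary statistics; it does not implement the proposed REgger estimator):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
m, causal_effect, pleiotropy = 100, 0.4, 0.02
beta_x = np.abs(rng.normal(0.2, 0.1, m))               # SNP-exposure effects (oriented positive)
se_y = np.full(m, 0.02)                                # SNP-outcome standard errors
beta_y = causal_effect * beta_x + pleiotropy + rng.normal(0, se_y)   # SNP-outcome effects

# Weighted regression with intercept; the intercept captures directional pleiotropy.
fit = sm.WLS(beta_y, sm.add_constant(beta_x), weights=1 / se_y**2).fit()
print(f"Egger intercept (directional pleiotropy): {fit.params[0]:.3f}")
print(f"Egger slope (causal effect estimate)    : {fit.params[1]:.3f}")
```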


[33] 2507.09717

Signed Graph Learning: Algorithms and Theory

Real-world data is often represented through the relationships between data samples, forming a graph structure. In many applications, it is necessary to learn this graph structure from the observed data. Current graph learning research has primarily focused on unsigned graphs, which consist only of positive edges. However, many biological and social systems are better described by signed graphs that account for both positive and negative interactions, capturing similarity and dissimilarity between samples. In this paper, we develop a method for learning signed graphs from a set of smooth signed graph signals. Specifically, we employ the net Laplacian as a graph shift operator (GSO) to define smooth signed graph signals as the outputs of a low-pass signed graph filter defined by the net Laplacian. The signed graph is then learned by formulating a non-convex optimization problem where the total variation of the observed signals is minimized with respect to the net Laplacian. The proposed problem is solved using the alternating direction method of multipliers (ADMM), and a fast algorithm that reduces the per-iteration complexity of ADMM from quadratic to linear in the number of nodes is introduced. Furthermore, theoretical proofs of convergence for the algorithm and a bound on the estimation error of the learned net Laplacian as a function of sample size, number of nodes, and graph topology are provided. Finally, the proposed method is evaluated on simulated data and a gene regulatory network inference problem, and compared to existing signed graph learning methods.
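
The net Laplacian and the smoothness (total variation) measure it induces can be written in a few lines (definitions only, under the standard convention that the net degree is the sum of signed edge weights; this is not the paper's learning algorithm):

```python
import numpy as np

# Signed adjacency matrix: positive edges encode similarity, negative edges dissimilarity.
A = np.array([[0.0,  1.0, -0.5],
              [1.0,  0.0,  0.8],
              [-0.5, 0.8,  0.0]])
D_net = np.diag(A.sum(axis=1))       # net degrees (positive minus negative weight)
L_net = D_net - A                    # net Laplacian, used as the graph shift operator

x = np.array([1.0, 1.2, -0.9])       # a graph signal on the three nodes
print("net Laplacian:\n", L_net)
print("total variation x^T L x =", float(x @ L_net @ x))
```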


[34] 2507.09718

Bridging Structural Causal Inference and Machine Learning: The S-DIDML Estimator for Heterogeneous Treatment Effects

In response to the increasing complexity of policy environments and the proliferation of high-dimensional data, this paper introduces the S-DIDML estimator, a structurally grounded and semiparametrically flexible framework for causal inference. By embedding Difference-in-Differences (DID) logic within a Double Machine Learning (DML) architecture, the S-DIDML approach combines the strengths of temporal identification, machine learning-based nuisance adjustment, and orthogonalized estimation. We begin by identifying critical limitations in existing methods, including the lack of structural interpretability in ML models, instability of classical DID under high-dimensional confounding, and the temporal rigidity of standard DML frameworks. Building on recent advances in staggered adoption designs and Neyman orthogonalization, S-DIDML offers a five-step estimation pipeline that enables robust estimation of heterogeneous treatment effects (HTEs) while maintaining interpretability and scalability. Demonstrative applications are discussed across labor economics, education, taxation, and environmental policy. The proposed framework contributes to the methodological frontier by offering a blueprint for policy-relevant, structurally interpretable, and statistically valid causal analysis in complex data settings.


[35] 2507.09740

Discovering Governing Equations in the Presence of Uncertainty

In the study of complex dynamical systems, understanding and accurately modeling the underlying physical processes is crucial for predicting system behavior and designing effective interventions. Yet real-world systems exhibit pronounced input (or system) variability and are observed through noisy, limited data conditions that confound traditional discovery methods that assume fixed-coefficient deterministic models. In this work, we theorize that accounting for system variability together with measurement noise is the key to consistently discovering the governing equations underlying dynamical systems. As such, we introduce a stochastic inverse physics-discovery (SIP) framework that treats the unknown coefficients as random variables and infers their posterior distribution by minimizing the Kullback-Leibler divergence between the push-forward of the posterior samples and the empirical data distribution. Benchmarks on four canonical problems -- the Lotka-Volterra predator-prey system (multi- and single-trajectory), the historical Hudson Bay lynx-hare data, the chaotic Lorenz attractor, and fluid infiltration in porous media using low- and high-viscosity liquids -- show that SIP consistently identifies the correct equations and lowers coefficient root-mean-square error by an average of 82% relative to the Sparse Identification of Nonlinear Dynamics (SINDy) approach and its Bayesian variant. The resulting posterior distributions yield 95% credible intervals that closely track the observed trajectories, providing interpretable models with quantified uncertainty. SIP thus provides a robust, data-efficient approach for consistent physics discovery in noisy, variable, and data-limited settings.
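
For context, the SINDy-style baseline mentioned above can be sketched on the Lotka-Volterra system: simulate data, build a candidate library, and run sequentially thresholded least squares (a minimal illustration of the comparison method with assumed coefficients and library, not the proposed SIP framework):

```python
import numpy as np
from scipy.integrate import solve_ivp

def lotka_volterra(t, s, a=1.0, b=0.4, c=0.4, d=1.0):
    x, y = s
    return [a * x - b * x * y, c * x * y - d * y]

t = np.linspace(0, 15, 1500)
sol = solve_ivp(lotka_volterra, (0, 15), [3.0, 1.0], t_eval=t, rtol=1e-8, atol=1e-8)
X = sol.y.T                                   # states (x, y) over time
dX = np.gradient(X, t, axis=0)                # numerical time derivatives

Theta = np.column_stack([X[:, 0], X[:, 1], X[:, 0] * X[:, 1]])   # candidate library [x, y, x*y]

def stlsq(Theta, dx, threshold=0.05, iters=10):
    """Sequentially thresholded least squares: zero out small coefficients and refit."""
    xi = np.linalg.lstsq(Theta, dx, rcond=None)[0]
    for _ in range(iters):
        xi[np.abs(xi) < threshold] = 0.0
        big = np.abs(xi) >= threshold
        xi[big] = np.linalg.lstsq(Theta[:, big], dx, rcond=None)[0]
    return xi

for k, name in enumerate(["dx/dt", "dy/dt"]):
    coef = stlsq(Theta, dX[:, k])
    print(name, "~", dict(zip(["x", "y", "x*y"], coef.round(2))))
```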


[36] 2507.09787

Fixed-Point Estimation of the Drift Parameter in Stochastic Differential Equations Driven by Rough Multiplicative Fractional Noise

We investigate the problem of estimating the drift parameter from $N$ independent copies of the solution of a stochastic differential equation driven by a multiplicative fractional Brownian noise with Hurst parameter $H\in (1/3,1)$. Building on a least-squares-type object involving the Skorokhod integral, a key challenge consists in approximating this unobservable quantity with a computable fixed-point estimator, which requires addressing the correction induced by replacing the Skorokhod integral with its pathwise counterpart. To this end, a crucial technical contribution of this work is the reformulation of the Malliavin derivative of the process in a way that does not depend explicitly on the driving noise, enabling control of the approximation error in the multiplicative setting. For the case $H\in (1/3,1/2]$, we further exploit results on two-dimensional Young integrals to manage the more intricate correction term that appears. As a result, we establish the well-posedness of a fixed-point estimator for any $H\in (1/3,1)$, together with both an asymptotic confidence interval and a non-asymptotic risk bound. Finally, a numerical study illustrates the good practical performance of the proposed estimator.


[37] 2507.09800

FLAT: Fused Lasso Regression with Adaptive Minimum Spanning Tree with Applications on Thermohaline Circulation

Spatial heterogeneity widely exists in many applications, such as in ocean science, where the temperature-salinity (T-S) relationship in thermohaline circulation varies across different geographical locations and depths. While spatial regression models are powerful tools for this purpose, they often face challenges in simultaneously estimating spatial parameters, detecting heterogeneity boundaries, and performing adaptive modeling, especially in complex systems. This paper proposes a Fused Lasso regression model with an Adaptive minimum spanning Tree (FLAT) to address these challenges in a unified framework. Specifically, FLAT constructs an adaptive minimum spanning tree guided by both spatial proximity and coefficient dissimilarity, and incorporates a spatial heterogeneity penalty to capture the underlying structure. A subsequent spatial clustering algorithm then identifies discrete heterogeneity boundaries, such as oceanic thermohaline fronts. Numerical simulations confirm that FLAT significantly outperforms classic spatial regression models in both coefficient estimation and heterogeneity detection. An empirical analysis with Atlantic Ocean data further demonstrates FLAT's capability to elucidate region-specific thermohaline compensation mechanisms and to detect surfaces with inverse T-S relationships. These findings advance the mechanistic understanding of T-S compensation dynamics in the Antarctic Intermediate Water region.


[38] 2507.09807

Discrete Hamiltonian-Assisted Metropolis Sampling

Gradient-based Markov Chain Monte Carlo methods have recently received much attention for sampling discrete distributions, with interesting connections to their continuous counterparts. For example, there are two discrete analogues of the Metropolis-adjusted Langevin Algorithm (MALA). As motivated by Hamiltonian-Assisted Metropolis Sampling (HAMS), we propose Discrete HAMS (DHAMS), a discrete sampler which, for the first time, not only exploits gradient information but also incorporates a Gaussian momentum variable and samples a Hamiltonian as an augmented distribution. DHAMS is derived through several steps, including an auxiliary-variable proposal scheme, negation and gradient correction for the momentum variable, and over-relaxation for the state variable. Two distinctive properties are achieved simultaneously. One is generalized detailed balance, which enables irreversible exploration of the target distribution. The other is a rejection-free property for a target distribution with a linear potential function. In experiments involving both ordinal and binary distributions, DHAMS algorithms consistently yield superior performance compared with existing algorithms.


[39] 2507.09809

Exploring the effects of mechanical ventilator settings with modified vector-valued treatment policies

Mechanical ventilation is critical for managing respiratory failure, but inappropriate ventilator settings can lead to ventilator-induced lung injury (VILI), increasing patient morbidity and mortality. Evaluating the causal impact of ventilator settings is challenging due to the complex interplay of multiple treatment variables and strong confounding due to ventilator guidelines. In this paper, we propose a modified vector-valued treatment policy (MVTP) framework coupled with energy balancing weights to estimate causal effects involving multiple continuous ventilator parameters simultaneously in addition to sensitivity analysis to unmeasured confounding. Our approach mitigates common challenges in causal inference for vector-valued treatments, such as infeasible treatment combinations, stringent positivity assumptions, and interpretability concerns. Using the MIMIC-III database, our analyses suggest that equal reductions in the total power of ventilation (i.e., the mechanical power) through different ventilator parameters result in different expected patient outcomes. Specifically, lowering airway pressures may yield greater reductions in patient mortality compared to proportional adjustments of tidal volume alone. Moreover, controlling for respiratory-system compliance and minute ventilation, we found a significant benefit of reducing driving pressure in patients with acute respiratory distress syndrome (ARDS). Our analyses help shed light on the contributors to VILI.


[40] 2507.09824

Spatial Dependencies in Item Response Theory: Gaussian Process Priors for Geographic and Cognitive Measurement

Measurement validity in Item Response Theory depends on appropriately modeling dependencies between items when these reflect meaningful theoretical structures rather than random measurement error. In ecological assessment, citizen scientists identifying species across geographic regions exhibit systematic spatial patterns in task difficulty due to environmental factors. Similarly, in Author Recognition Tests, literary knowledge organizes by genre, where familiarity with science fiction authors systematically predicts recognition of other science fiction authors. Current spatial Item Response Theory methods, represented by the 1PLUS, 2PLUS, and 3PLUS model family, address these dependencies but remain limited by (1) binary response restrictions, and (2) conditional autoregressive priors that impose rigid local correlation assumptions, preventing effective modeling of complex spatial relationships. Our proposed method, Spatial Gaussian Process Item Response Theory (SGP-IRT), addresses these limitations by replacing conditional autoregressive priors with flexible Gaussian process priors that adapt to complex dependency structures while maintaining principled uncertainty quantification. SGP-IRT accommodates polytomous responses and models spatial dependencies in both geographic and abstract cognitive spaces, where items cluster by theoretical constructs rather than physical proximity. Simulation studies demonstrate improved parameter recovery, particularly for item difficulty estimation. Empirical applications show enhanced recovery of meaningful difficulty surfaces and improved measurement precision across psychological, educational, and ecological research applications.


[41] 2507.09828

Regret Analysis of Posterior Sampling-Based Expected Improvement for Bayesian Optimization

Bayesian optimization is a powerful tool for optimizing an expensive-to-evaluate black-box function. In particular, the effectiveness of expected improvement (EI) has been demonstrated in a wide range of applications. However, theoretical analyses of EI are limited compared with other theoretically established algorithms. This paper analyzes a randomized variant of EI, which evaluates the EI from the maximum of the posterior sample path. We show that this posterior sampling-based random EI achieves sublinear Bayesian cumulative regret bounds under the assumption that the black-box function follows a Gaussian process. Finally, we demonstrate the effectiveness of the proposed method through numerical experiments.
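
A minimal sketch of the acquisition rule described above, under assumed choices (an RBF kernel, noise-free observations, a small candidate grid): one posterior sample path is drawn, its maximum serves as the improvement threshold, and the closed-form EI is maximized over the candidates.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def rbf(a, b, ls=0.2):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

# observed data (assumed toy objective) and candidate points
X = np.array([0.1, 0.4, 0.8]); y = np.sin(3 * X)
Xc = np.linspace(0, 1, 200)

K = rbf(X, X) + 1e-8 * np.eye(len(X))
Ks = rbf(Xc, X)
Kinv = np.linalg.inv(K)
mu = Ks @ Kinv @ y                                # GP posterior mean at candidates
cov = rbf(Xc, Xc) - Ks @ Kinv @ Ks.T              # GP posterior covariance at candidates
sd = np.sqrt(np.clip(np.diag(cov), 1e-12, None))

# one posterior sample path; its maximum plays the role of the EI threshold
path = rng.multivariate_normal(mu, cov + 1e-8 * np.eye(len(Xc)))
threshold = path.max()

z = (mu - threshold) / sd
ei = (mu - threshold) * norm.cdf(z) + sd * norm.pdf(z)   # closed-form expected improvement
x_next = Xc[np.argmax(ei)]
print("next evaluation point:", x_next)
```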


[42] 2507.09889

High-Dimensional Multi-Study Multi-Modality Covariate-Augmented Generalized Factor Model

Latent factor models that integrate data from multiple sources/studies or modalities have garnered considerable attention across various disciplines. However, existing methods predominantly focus either on multi-study integration or multi-modality integration, rendering them insufficient for analyzing the diverse modalities measured across multiple studies. To address this limitation and cater to practical needs, we introduce a high-dimensional generalized factor model that seamlessly integrates multi-modality data from multiple studies, while also accommodating additional covariates. We conduct a thorough investigation of the identifiability conditions to enhance the model's interpretability. To tackle the complexity of high-dimensional nonlinear integration caused by four large latent random matrices, we utilize a variational lower bound to approximate the observed log-likelihood by employing a variational posterior distribution. By profiling the variational parameters, we establish the asymptotic properties of estimators for model parameters using M-estimation theory. Furthermore, we devise a computationally efficient variational EM algorithm to execute the estimation process and a criterion to determine the optimal number of both study-shared and study-specific factors. Extensive simulation studies and a real-world application show that the proposed method significantly outperforms existing methods in terms of estimation accuracy and computational efficiency. The R package for the proposed method is publicly accessible at this https URL.


[43] 2507.09905

Statistical Inference for Conditional Group Distributionally Robust Optimization with Cross-Entropy Loss

In multi-source learning with discrete labels, distributional heterogeneity across domains poses a central challenge to developing predictive models that transfer reliably to unseen domains. We study multi-source unsupervised domain adaptation, where labeled data are drawn from multiple source domains and only unlabeled data are available from the target domain. To address potential distribution shifts, we propose a novel Conditional Group Distributionally Robust Optimization (CG-DRO) framework that learns a classifier by minimizing the worst-case cross-entropy loss over the convex combinations of the conditional outcome distributions from the sources. To solve the resulting minimax problem, we develop an efficient Mirror Prox algorithm, where we employ a double machine learning procedure to estimate the risk function. This ensures that the errors of the machine learning estimators for the nuisance models enter only at higher-order rates, thereby preserving statistical efficiency under covariate shift. We establish fast statistical convergence rates for the estimator by constructing two surrogate minimax optimization problems that serve as theoretical bridges. A distinguishing challenge for CG-DRO is the emergence of nonstandard asymptotics: the empirical estimator may fail to converge to a standard limiting distribution due to boundary effects and system instability. To address this, we introduce a perturbation-based inference procedure that enables uniformly valid inference, including confidence interval construction and hypothesis testing.


[44] 2507.09983

Gradient boosted multi-population mortality modelling with high-frequency data

High-frequency mortality data remains an understudied yet critical research area. While its analysis can reveal short-term health impacts of climate extremes and enable more timely mortality forecasts, its complex temporal structure poses significant challenges to traditional mortality models. To leverage the power of high-frequency mortality data, this paper introduces a novel integration of gradient boosting techniques into traditional stochastic mortality models under a multi-population setting. Our key innovation lies in using the Li and Lee model as the weak learner within the gradient boosting framework, replacing conventional decision trees. Empirical studies are conducted using weekly mortality data from 30 countries (Human Mortality Database, 2015--2019). The proposed methodology not only enhances model fit by accurately capturing underlying mortality trends and seasonal patterns, but also achieves superior forecast accuracy, compared to the benchmark models. We also investigate a key challenge in multi-population mortality modelling: how to select appropriate sub-populations with sufficiently similar mortality experiences. A comprehensive clustering exercise is conducted based on mortality improvement rates and seasonal strength. The results demonstrate the robustness of our proposed model, yielding stable forecast accuracy under different clustering configurations.


[45] 2507.10019

Sampling-Based Estimation of Jaccard Containment and Similarity

This paper addresses the problem of estimating the containment and similarity between two sets using only random samples from each set, without relying on sketches or full data access. The study introduces a binomial model for predicting the overlap between samples, demonstrating that it is both accurate and practical when sample sizes are small compared to the original sets. The paper compares this model to previous approaches and shows that it provides better estimates under the considered conditions. It also analyzes the statistical properties of the estimator, including error bounds and sample size requirements needed to achieve a desired level of accuracy and confidence. The framework is extended to estimate set similarity, and the paper provides guidance for applying these methods in large scale data systems where only partial or sampled data is available.
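
As a hedged illustration of the sampling-only setting (the paper's exact binomial estimator may differ), the sketch below estimates containment from the overlap of two uniform random samples, using the approximation that the sample overlap is Binomial(n_A, C * n_B / N_B) when the set sizes are known; the toy sets and sample sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# assumed toy sets with known sizes
A = set(range(0, 80_000))
B = set(range(50_000, 150_000))
N_A, N_B = len(A), len(B)

n_A, n_B = 2_000, 2_000
S_A = set(rng.choice(list(A), size=n_A, replace=False))
S_B = set(rng.choice(list(B), size=n_B, replace=False))

# sample overlap ~ Binomial(n_A, C * n_B / N_B), where C = |A ∩ B| / |A| is the containment
overlap = len(S_A & S_B)
C_hat = (overlap / n_A) * (N_B / n_B)             # moment-matching containment estimate
inter_hat = C_hat * N_A                           # implied intersection size
J_hat = inter_hat / (N_A + N_B - inter_hat)       # implied Jaccard similarity

print(f"containment estimate {C_hat:.3f} (true {len(A & B) / N_A:.3f})")
print(f"Jaccard estimate {J_hat:.3f} (true {len(A & B) / len(A | B):.3f})")
```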


[46] 2507.10041

An Accurate Discretized Approach to Parameter Estimation in the CKLS Model via the CIR Framework

This paper provides insight into the estimation and asymptotic behavior of parameters in interest rate models, focusing primarily on the Cox-Ingersoll-Ross (CIR) process and its extension -- the more general Chan-Karolyi-Longstaff-Sanders (CKLS) framework ($\alpha\in[0.5,1]$). The CIR process is widely used in modeling interest rates that exhibit mean reversion. As an extension of the CIR model, the CKLS model serves as a foundational case for analyzing more complex dynamics. We employ Euler-Maruyama discretization to transform the continuous-time stochastic differential equations (SDEs) of these models into a discretized form that facilitates efficient simulation and estimation of parameters using linear regression techniques. We establish the strong consistency and asymptotic normality of the estimators for the drift and volatility parameters, providing a theoretical underpinning for the parameter estimation process. Additionally, we explore the boundary behavior of these models, particularly in the context of unattainability at zero and infinity, by examining the scale and speed density functions associated with generalized SDEs involving polynomial drift and diffusion terms. Furthermore, we derive sufficient conditions for the existence of a stationary distribution within the CKLS framework and the corresponding stationary density function, and discuss its dependence on model parameters for $\alpha\in[0.5,1]$.
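
A minimal sketch of the Euler-Maruyama-plus-linear-regression idea for the CIR special case ($\alpha = 0.5$), with placeholder parameter values: dividing the discretized increment by $\sqrt{r_t}$ turns drift estimation into ordinary least squares, and the residual scale recovers the volatility.

```python
import numpy as np

rng = np.random.default_rng(3)

# placeholder "true" CIR parameters: dr = kappa*(theta - r)dt + sigma*sqrt(r)dW
kappa, theta, sigma = 1.5, 0.04, 0.2
dt, n = 1 / 252, 10_000

r = np.empty(n); r[0] = 0.03
for t in range(n - 1):                       # Euler-Maruyama simulation
    r[t + 1] = max(r[t] + kappa * (theta - r[t]) * dt
                   + sigma * np.sqrt(r[t] * dt) * rng.normal(), 1e-8)

# regression form: (r_{t+1}-r_t)/sqrt(r_t) = b1 * dt/sqrt(r_t) + b2 * dt*sqrt(r_t) + noise
y = np.diff(r) / np.sqrt(r[:-1])
X = np.column_stack([dt / np.sqrt(r[:-1]), dt * np.sqrt(r[:-1])])
(b1, b2), *_ = np.linalg.lstsq(X, y, rcond=None)

kappa_hat = -b2                               # drift parameters from the OLS coefficients
theta_hat = b1 / kappa_hat
sigma_hat = np.std(y - X @ np.array([b1, b2])) / np.sqrt(dt)
print(kappa_hat, theta_hat, sigma_hat)
```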


[47] 2507.10077

New Equivalence Tests for Hardy-Weinberg Equilibrium and Multiple Alleles

We consider testing equivalence to Hardy-Weinberg Equilibrium in the case of multiple alleles. Two different test statistics are proposed for this test problem. The asymptotic distribution of the test statistics is derived. The corresponding tests can be carried out using an asymptotic approximation. Alternatively, the variance of the test statistics can be estimated by the bootstrap method. The proposed tests are applied to three real data sets. The finite sample performance of the tests is studied by simulations, which are inspired by the real data sets.


[48] 2507.10154

Simulating Biases for Interpretable Fairness in Offline and Online Classifiers

Predictive models often reinforce, through skewed decisions, biases that were originally embedded in their training data. In such cases, mitigation methods are critical to ensure that, regardless of the prevailing disparities, model outcomes are adjusted to be fair. To assess this, datasets can be systematically generated with specific biases to train machine learning classifiers. Then, predictive outcomes can aid in the understanding of this bias-embedding process. Hence, an agent-based model (ABM), depicting a loan application process that represents various systemic biases across two demographic groups, was developed to produce synthetic datasets. Then, by applying classifiers trained on them to predict loan outcomes, we can assess how biased data leads to unfairness. This highlights a main contribution of this work: a framework for synthetic dataset generation with controllable bias injection. We also contribute a novel explainability technique, which shows how mitigations affect the way classifiers leverage data features, via second-order Shapley values. In experiments, both offline and online learning approaches are employed. Mitigations are applied at different stages of the modelling pipeline, such as during pre-processing and in-processing.
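
The sketch below is not the authors' agent-based model; it is a minimal tabular illustration of controllable bias injection across two demographic groups (underrepresentation, a proxy feature, and label bias), with all rates and coefficients chosen as placeholder assumptions.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n = 10_000

# two demographic groups; group 1 is underrepresented (an assumed, controllable bias)
group = rng.choice([0, 1], size=n, p=[0.8, 0.2])
income = rng.normal(50 + 5 * (group == 0), 15, size=n)   # proxy bias: income correlates with group
credit = rng.normal(600, 50, size=n)

# "true" repayment probability depends only on income and credit score
p_repay = 1 / (1 + np.exp(-(0.03 * (income - 50) + 0.01 * (credit - 600))))
repaid = rng.uniform(size=n) < p_repay

# label bias: a fraction of group-1 repayments are recorded as defaults
label = repaid.copy()
flip = (group == 1) & repaid & (rng.uniform(size=n) < 0.15)
label[flip] = False

df = pd.DataFrame({"group": group, "income": income, "credit": credit, "label": label})
print(df.groupby("group")["label"].mean())    # observed outcome rates now differ by group
```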


[49] 2507.10201

History Matching under Uncertainty of Geological Scenarios with Implicit Geological Realism Control with Generative Deep Learning and Graph Convolutions

The graph-based variational autoencoder represents an architecture that can handle the uncertainty of different geological scenarios, such as depositional or structural, through the concept of a lower-dimensional latent space. The main difference from recent studies is the utilisation of a graph-based approach in reservoir modelling instead of the more traditional lattice-based deep learning methods. We provide a solution to implicitly control the geological realism through the latent variables of a generative model and geodesic metrics. Our experiments on AHM with a synthetic dataset consisting of 3D realisations of channelised geological representations, covering two distinct scenarios with one and two channels, show the viability of the approach. We offer an in-depth analysis of the latent space using tools such as PCA, t-SNE, and TDA to illustrate its structure.


[50] 2507.10220

Low-Dose Tomography of Random Fields and the Problem of Continuous Heterogeneity

We consider the problem of nonparametric estimation of the conformational variability in a population of related structures, based on low-dose tomography of a random sample of representative individuals. In this context, each individual represents a random perturbation of a common template and is imaged noisily and discretely at but a few projection angles. Such problems arise in the cryo Electron Microscopy of structurally heterogeneous biological macromolecules. We model the population as a random field, whose mean captures the typical structure, and whose covariance reflects the heterogeneity. We show that consistent estimation is achievable with as few as two projections per individual, and derive uniform convergence rates reflecting how the various parameters of the problem affect statistical efficiency, and their trade-offs. Our analysis formulates the domain of the forward operator to be a reproducing kernel Hilbert space, where we establish representer and Mercer theorems tailored to the question at hand. This allows us to exploit pooling estimation strategies central to functional data analysis, illustrating their versatility in a novel context. We provide an efficient computational implementation using tensorized Krylov methods and demonstrate the performance of our methodology by way of simulation.


[51] 2507.10269

The efficiencies of pilot feasibility trials in rare diseases using Bayesian methods

Pilot feasibility studies play a pivotal role in the development of clinical trials for rare diseases, where small populations and slow recruitment often threaten trial viability. While such studies are commonly used to assess operational parameters, they also offer a valuable opportunity to inform the design and analysis of subsequent definitive trials, particularly through the use of Bayesian methods. In this paper, we demonstrate how data from a single, protocol-aligned pilot study can be incorporated into a definitive trial using robust meta-analytic-predictive priors. We focus on the case of a binary efficacy outcome, motivated by a feasibility trial of intravenous immunoglobulin tapering in autoimmune inflammatory myopathies. Through simulation studies, we evaluate the operating characteristics of trials informed by pilot data, including sample size, expected trial duration, and the probability of meeting recruitment targets. Our findings highlight the operational and ethical advantages of leveraging pilot data via robust Bayesian priors, and offer practical guidance for their application in rare disease settings.
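
A minimal sketch of a robust meta-analytic-predictive prior for a binary endpoint, assuming hypothetical pilot counts and a vague Beta(1, 1) robustification component: conjugacy gives the posterior mixture weight via beta-binomial marginal likelihoods (the prior weight and all counts are placeholders, not values from the motivating trial).

```python
import numpy as np
from scipy.special import betaln
from scipy.stats import beta

# hypothetical pilot data: 7 responders out of 20 -> informative Beta(1+7, 1+13) component
a1, b1 = 1 + 7, 1 + 13
w = 0.8                      # prior weight on the pilot-informed component (robustness parameter)
a0, b0 = 1.0, 1.0            # vague component

def log_marglik(a, b, y, m):
    # beta-binomial marginal likelihood (the binomial coefficient cancels across components)
    return betaln(a + y, b + m - y) - betaln(a, b)

# hypothetical definitive-trial data: 18 responders out of 40
y, m = 18, 40

lw1 = np.log(w) + log_marglik(a1, b1, y, m)
lw0 = np.log(1 - w) + log_marglik(a0, b0, y, m)
w_post = np.exp(lw1 - np.logaddexp(lw1, lw0))    # updated mixture weight

post_mean = (w_post * beta.mean(a1 + y, b1 + m - y)
             + (1 - w_post) * beta.mean(a0 + y, b0 + m - y))
print(f"posterior weight on pilot component: {w_post:.2f}, "
      f"posterior mean response rate: {post_mean:.3f}")
```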


[52] 2507.10303

MF-GLaM: A multifidelity stochastic emulator using generalized lambda models

Stochastic simulators exhibit intrinsic stochasticity due to unobservable, uncontrollable, or unmodeled input variables, resulting in random outputs even at fixed input conditions. Such simulators are common across various scientific disciplines; however, emulating their entire conditional probability distribution is challenging, as it is a task traditional deterministic surrogate modeling techniques are not designed for. Additionally, accurately characterizing the response distribution can require prohibitively large datasets, especially for computationally expensive high-fidelity (HF) simulators. When lower-fidelity (LF) stochastic simulators are available, they can enhance limited HF information within a multifidelity surrogate modeling (MFSM) framework. While MFSM techniques are well-established for deterministic settings, constructing multifidelity emulators to predict the full conditional response distribution of stochastic simulators remains a challenge. In this paper, we propose multifidelity generalized lambda models (MF-GLaMs) to efficiently emulate the conditional response distribution of HF stochastic simulators by exploiting data from LF stochastic simulators. Our approach builds upon the generalized lambda model (GLaM), which represents the conditional distribution at each input by a flexible, four-parameter generalized lambda distribution. MF-GLaMs are non-intrusive, requiring no access to the internal stochasticity of the simulators nor multiple replications of the same input values. We demonstrate the efficacy of MF-GLaM through synthetic examples of increasing complexity and a realistic earthquake application. Results show that MF-GLaMs can achieve improved accuracy at the same cost as single-fidelity GLaMs, or comparable performance at significantly reduced cost.


[53] 2507.10373

Post-reduction inference for confidence sets of models

Sparsity in a regression context makes the model itself an object of interest, pointing to a confidence set of models as the appropriate presentation of evidence. A difficulty in areas such as genomics, where the number of candidate variables is vast, arises from the need for preliminary reduction prior to the assessment of models. The present paper considers a resolution using inferential separations fundamental to the Fisherian approach to conditional inference, namely, the sufficiency/co-sufficiency separation, and the ancillary/co-ancillary separation. The advantage of these separations is that no direction for departure from any hypothesised model is needed, avoiding issues that would otherwise arise from using the same data for reduction and for model assessment. In idealised cases with no nuisance parameters, the separations extract all the information in the data, solely for the purpose for which it is useful, without loss or redundancy. The extent to which estimation of nuisance parameters affects the idealised information extraction is illustrated in detail for the normal-theory linear regression model, extending immediately to a log-normal accelerated-life model for time-to-event outcomes. This idealised analysis provides insight into when sample-splitting is likely to perform as well as, or better than, the co-sufficient or ancillary tests, and when it may be unreliable. The considerations involved in extending the detailed implementation to canonical exponential-family and more general regression models are briefly discussed. As part of the analysis for the Gaussian model, we introduce a modified version of the refitted cross-validation estimator of Fan et al. (2012), whose distribution theory is exact in an appropriate conditional sense.


[54] 2507.10388

Semiparametric empirical likelihood inference for abundance from one-inflated capture-recapture data

Abundance estimation from capture-recapture data is of great importance in many disciplines. Analysis of capture-recapture data is often complicated by the existence of one-inflation and heterogeneity problems. Simultaneously taking these issues into account, existing abundance estimation methods are usually constructed on the basis of conditional likelihood (CL) under one-inflated zero-truncated count models. However, the resulting Horvitz-Thompson-type estimators may be unstable, and the resulting Wald-type confidence intervals may exhibit severe undercoverage. In this paper, we propose a semiparametric empirical likelihood (EL) approach to abundance estimation under one-inflated binomial and Poisson regression models. We show that the maximum EL estimator for the abundance follows an asymptotically normal distribution and that the EL ratio statistic of abundance follows a limiting chi-square distribution with one degree of freedom. To facilitate computation of the EL method, we develop an expectation-maximization (EM) algorithm, and establish its appealing convergence property. We also propose a new score test for the existence of one-inflation and prove its asymptotic normality. Our simulation studies indicate that compared with CL-based methods, the maximum EL estimator has a smaller mean square error, the EL ratio confidence interval has a remarkable gain in coverage probability, and the proposed score test is more powerful. The advantages of the proposed approaches are further demonstrated by analyses of prinia data from Hong Kong and drug user data from Bangkok.


[55] 2507.10404

Two-step semiparametric empirical likelihood inference from capture-recapture data with missing covariates

Missing covariates are not uncommon in capture-recapture studies. When covariate information is missing at random in capture-recapture data, an empirical full likelihood method has been demonstrated to outperform conditional-likelihood-based methods in abundance estimation. However, the fully observed covariates must be discrete, and the method is not directly applicable to continuous-time capture-recapture data. Based on the Binomial and Poisson regression models, we propose a two-step semiparametric empirical likelihood approach for abundance estimation in the presence of missing covariates, regardless of whether the fully observed covariates are discrete or continuous. We show that the maximum semiparametric empirical likelihood estimators for the underlying parameters and the abundance are asymptotically normal, and more efficient than the counterpart for a completely known non-missingness probability. After scaling, the empirical likelihood ratio test statistic for abundance follows a limiting chi-square distribution with one degree of freedom. The proposed approach is further extended to one-inflated count regression models, and a score-like test is constructed to assess whether one-inflation exists among the number of captures. Our simulation shows that, compared with the previous method, the proposed method not only performs better in correcting bias, but also has a more accurate coverage in the presence of fully observed continuous covariates, although there may be a slight efficiency loss when the fully observed covariates are only discrete. The performance of the new method is illustrated by an analysis of the Hong Kong prinia data.


[56] 2507.10443

Information Must Flow: Recursive Bootstrapping for Information Bottleneck in Optimal Transport

We present the Context-Content Uncertainty Principle (CCUP), a unified framework that models cognition as the directed flow of information between high-entropy context and low-entropy content. Inference emerges as a cycle of bidirectional interactions, bottom-up contextual disambiguation paired with top-down content reconstruction, which resolves the Information Bottleneck in Optimal Transport (iBOT). Implemented via Rao-Blackwellized variational entropy minimization, CCUP steers representations toward minimal joint uncertainty while preserving inferential directionality. Local cycle completion underpins temporal bootstrapping, chaining simulations to refine memory, and spatial bootstrapping, enabling compositional hierarchical inference. We prove a Delta Convergence Theorem showing that recursive entropy minimization yields delta-like attractors in latent space, stabilizing perceptual schemas and motor plans. Temporal bootstrapping through perception-action loops and sleep-wake consolidation further transforms episodic traces into semantic knowledge. Extending CCUP, each hierarchical level performs delta-seeded inference: low-entropy content seeds diffuse outward along goal-constrained paths shaped by top-down priors and external context, confining inference to task-relevant manifolds and circumventing the curse of dimensionality. Building on this, we propose that language emerges as a symbolic transport system, externalizing latent content to synchronize inference cycles across individuals. Together, these results establish iBOT as a foundational principle of information flow in both individual cognition and collective intelligence, positioning recursive inference as the structured conduit through which minds adapt, align, and extend.


[57] 2507.10465

Flexible Modeling of Multivariate Skewed and Heavy-Tailed Data via a Non-Central Skew t Distribution: Application to Tumor Shape Data

We propose a flexible formulation of the multivariate non-central skew t (NCST) distribution, defined by scaling skew-normal random vectors with independent chi-squared variables. This construction extends the classical multivariate t family by allowing both asymmetry and non-centrality, which provides an alternative to existing skew t models that often rely on restrictive assumptions for tractability. We derive key theoretical properties of the NCST distribution, which include its moment structure, affine transformation behavior, and the distribution of quadratic forms. Due to the lack of a closed-form density, we implement a Monte Carlo likelihood approximation to enable maximum likelihood estimation and evaluate its performance through simulation studies. To demonstrate practical utility, we apply the NCST model to breast cancer diagnostic data, modeling multiple features of tumor shape. The NCST model achieves a superior fit based on information criteria and visual diagnostics, particularly in the presence of skewness and heavy tails, compared to standard alternatives, including the multivariate normal, skew normal, and Azzalini's skew $t$ distribution. Our findings suggest that the NCST distribution offers a useful and interpretable choice for modeling complex multivariate data, which highlights promising directions for future development in likelihood inference, Bayesian computation, and applications involving asymmetry and non-Gaussian dependence.
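
A minimal sketch of the stated construction (a skew-normal vector scaled by an independent chi-squared variable), using the hidden-truncation representation of the skew-normal; the skewness, correlation, and non-centrality values are placeholders, and the exact parameterization in the paper may differ.

```python
import numpy as np

rng = np.random.default_rng(5)

d, nu, n = 3, 5, 50_000
delta = np.array([0.8, -0.3, 0.5])        # per-coordinate skewness (placeholder)
m = np.array([1.0, 0.0, -0.5])            # non-centrality vector (placeholder)
Psi = np.array([[1.0, 0.4, 0.2],          # correlation of the symmetric part (placeholder)
                [0.4, 1.0, 0.1],
                [0.2, 0.1, 1.0]])

# hidden-truncation skew-normal: Z_j = delta_j |U0| + sqrt(1 - delta_j^2) U_j
U0 = np.abs(rng.normal(size=(n, 1)))
U = rng.multivariate_normal(np.zeros(d), Psi, size=n)
Z = delta * U0 + np.sqrt(1 - delta**2) * U

# scale by an independent chi-squared variable for heavy tails and non-centrality
W = rng.chisquare(nu, size=(n, 1))
X = (Z + m) / np.sqrt(W / nu)

print(X.mean(axis=0))                      # skewed, heavy-tailed, shifted samples
```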


[58] 2507.10511

Constructing Confidence Intervals for Infinite-Dimensional Functional Parameters by Highly Adaptive Lasso

Estimating the conditional mean function is a central task in statistical learning. In this paper, we consider estimation and inference for a nonparametric class of real-valued càdlàg functions with bounded sectional variation (Gill et al., 1995), using the Highly Adaptive Lasso (HAL) (van der Laan, 2015; Benkeser and van der Laan, 2016; van der Laan, 2023), a flexible empirical risk minimizer over linear combinations of tensor products of zero- or higher-order spline basis functions under an L1 norm constraint. Building on recent theoretical advances in asymptotic normality and uniform convergence rates for higher-order spline HAL estimators (van der Laan, 2023), this work focuses on constructing robust confidence intervals for HAL-based conditional mean estimators. To address regularization bias, we propose a targeted HAL with a debiasing step to remove bias for the conditional mean, and also consider a relaxed HAL estimator to reduce bias. We also introduce both global and local undersmoothing strategies to adaptively select the working model, reducing bias relative to variance. Combined with delta-method-based variance estimation, we construct confidence intervals for conditional means based on HAL. Through simulations, we evaluate combinations of estimation and model selection strategies, showing that our methods substantially reduce bias and yield confidence intervals with coverage rates close to nominal levels across scenarios. We also provide recommendations for different estimation objectives and illustrate the generality of our framework by applying it to estimate conditional average treatment effect (CATE) functions, highlighting how HAL-based inference extends to other infinite-dimensional, non-pathwise differentiable parameters.


[59] 2505.16921

A NuSTAR study of quasi-periodic oscillations from the ultraluminous X-ray sources in M82

The study of quasi-periodic oscillations in X-ray binaries provides valuable insights into the physics of accretion around compact objects. The M82 galaxy hosts two ultraluminous X-ray sources (ULXs), one of which is suspected to harbor an intermediate-mass black hole. Using 39 NuSTAR observations acquired between 2014--2024, we investigate the aperiodic X-ray variability in M82. In particular, we study in detail the evolution of the QPO from M82 X-1 in the range 20--300 mHz. We do not find additional timing features in the data, besides a frequent broad noise component at lower frequencies. The QPO behaves similarly to other classes of low-frequency oscillations in accreting compact objects, both black holes and neutron stars.


[60] 2507.08828

Recurrent Expansion: A Pathway Toward the Next Generation of Deep Learning

This paper introduces Recurrent Expansion (RE) as a new learning paradigm that advances beyond conventional Machine Learning (ML) and Deep Learning (DL). While DL focuses on learning from static data representations, RE proposes an additional dimension: learning from the evolving behavior of models themselves. RE emphasizes multiple mappings of data through identical deep architectures and analyzes their internal representations (i.e., feature maps) in conjunction with observed performance signals such as loss. By incorporating these behavioral traces, RE enables iterative self-improvement, allowing each model version to gain insight from its predecessors. The framework is extended through Multiverse RE (MVRE), which aggregates signals from parallel model instances, and further through Heterogeneous MVRE (HMVRE), where models of varying architectures contribute diverse perspectives. A scalable and adaptive variant, Sc-HMVRE, introduces selective mechanisms and scale diversity for real-world deployment. Altogether, RE presents a shift in DL: from purely representational learning to behavior-aware, self-evolving systems. It lays the groundwork for a new class of intelligent models capable of reasoning over their own learning dynamics, offering a path toward scalable, introspective, and adaptive artificial intelligence. A simple code example to support beginners in running their own experiments is provided in the Code Availability Section of this paper.


[61] 2507.08835

Representation learning with a transformer by contrastive learning for money laundering detection

The present work tackles the money laundering detection problem. A new procedure is introduced which exploits structured time series of both qualitative and quantitative data by means of a transformer neural network. The first step of this procedure aims at learning representations of time series through contrastive learning (without any labels). The second step leverages these representations to generate a money laundering scoring of all observations. A two-threshold approach is then introduced, which ensures a controlled false-positive rate by means of the Benjamini-Hochberg (BH) procedure. Experiments confirm that the transformer is able to produce general representations that succeed in exploiting money laundering patterns with minimal supervision from domain experts. They also illustrate the superior ability of the new procedure to detect non-fraudsters as well as fraudsters, while keeping the false-positive rate under control. This greatly contrasts with rule-based procedures or those based on LSTM architectures.
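
The false-positive control step relies on the standard Benjamini-Hochberg procedure; the sketch below shows how scores could be converted to empirical p-values against an assumed legitimate-account null and then thresholded with BH (the score distributions here are placeholders, not the transformer's output).

```python
import numpy as np

rng = np.random.default_rng(6)

# assumed setup: scores from known-legitimate accounts form an empirical null,
# scores from new accounts are converted to p-values and flagged via BH
null_scores = rng.normal(0, 1, size=5_000)
new_scores = np.concatenate([rng.normal(0, 1, size=950), rng.normal(3, 1, size=50)])

# empirical p-value: fraction of null scores at least as extreme as the observed score
p = (1 + (null_scores[None, :] >= new_scores[:, None]).sum(axis=1)) / (1 + len(null_scores))

def benjamini_hochberg(p, alpha=0.05):
    """Return a boolean mask of discoveries at FDR level alpha."""
    n = len(p)
    order = np.argsort(p)
    thresh = alpha * np.arange(1, n + 1) / n
    passed = p[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    mask = np.zeros(n, dtype=bool)
    mask[order[:k]] = True
    return mask

flags = benjamini_hochberg(p, alpha=0.05)
print(f"{flags.sum()} accounts flagged for review")
```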


[62] 2507.08838

wd1: Weighted Policy Optimization for Reasoning in Diffusion Language Models

Improving the reasoning capabilities of diffusion-based large language models (dLLMs) through reinforcement learning (RL) remains an open problem. The intractability of the dLLM likelihood function necessitates approximating the current, old, and reference policy likelihoods at each policy optimization step. This reliance introduces additional computational overhead and leads to potentially large bias -- particularly when approximation errors occur in the denominator of policy ratios used for importance sampling. To mitigate these issues, we introduce $\mathtt{wd1}$, a novel policy optimization approach that reformulates the objective as a weighted likelihood, requiring only a single approximation for the current parametrized policy likelihood. Experiments on widely used reasoning benchmarks demonstrate that $\mathtt{wd1}$, without supervised fine-tuning (SFT) or any supervised data, outperforms existing RL methods for dLLMs, achieving up to 16% higher accuracy. $\mathtt{wd1}$ delivers additional computational gains, including reduced training time and fewer function evaluations (NFEs) per gradient step. These findings, combined with the simplicity of the method's implementation and R1-Zero-like training (no SFT), position $\mathtt{wd1}$ as a more effective and efficient method for applying RL to dLLM reasoning.


[63] 2507.08858

Foundation models for time series forecasting: Application in conformal prediction

The zero-shot capabilities of foundation models (FMs) for time series forecasting offer promising potential in conformal prediction, as most of the available data can be allocated to calibration. This study compares the performance of Time Series Foundation Models (TSFMs) with traditional methods, including statistical models and gradient boosting, within a conformal prediction setting. Our findings highlight two key advantages of TSFMs. First, when the volume of data is limited, TSFMs provide more reliable conformalized prediction intervals than classic models, thanks to their superior predictive accuracy. Second, the calibration process is more stable because more data are used for calibration. Moreover, the fewer data are available, the more pronounced these benefits become, as classic models require a substantial amount of data for effective training. These results underscore the potential of foundation models in improving conformal prediction reliability in time series applications, particularly in data-constrained cases. All the code to reproduce the experiments is available.
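
A minimal sketch of split conformal prediction wrapped around an arbitrary point forecaster, where a naive seasonal forecast stands in for a zero-shot foundation model; the series, horizon, and coverage level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# toy series; a naive seasonal forecast stands in for the foundation-model forecaster
y = 10 + np.sin(np.arange(600) * 2 * np.pi / 24) + rng.normal(0, 0.3, size=600)
season = 24

def forecast(history, horizon):
    # zero-shot style: no fitting, just repeat the last observed season
    return np.tile(history[-season:], horizon // season + 1)[:horizon]

# split: most of the data is used for calibration, as the abstract suggests
cal_start, test_start, h = 100, 500, 24
residuals = []
for t in range(cal_start, test_start - h, h):
    pred = forecast(y[:t], h)
    residuals.extend(np.abs(y[t:t + h] - pred))
residuals = np.array(residuals)

alpha = 0.1
n = len(residuals)
q = np.quantile(residuals, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))   # conformal quantile

pred = forecast(y[:test_start], h)
lower, upper = pred - q, pred + q
actual = y[test_start:test_start + h]
print("empirical coverage on the next window:",
      np.mean((actual >= lower) & (actual <= upper)))
```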


[64] 2507.08861

On the under-reaching phenomenon in message-passing neural PDE solvers: revisiting the CFL condition

This paper proposes sharp lower bounds for the number of message passing iterations required in graph neural networks (GNNs) when solving partial differential equations (PDEs). This significantly reduces the need for exhaustive hyperparameter tuning. Bounds are derived for the three fundamental classes of PDEs (hyperbolic, parabolic and elliptic) by relating the physical characteristics of the problem in question to the message-passing requirement of GNNs. In particular, we investigate the relationship between the physical constants of the equations governing the problem, the spatial and temporal discretisation and the message passing mechanisms in GNNs. When the number of message passing iterations is below these proposed limits, information does not propagate efficiently through the network, resulting in poor solutions, even for deep GNN architectures. In contrast, when the suggested lower bound is satisfied, the GNN parameterisation allows the model to accurately capture the underlying phenomenology, resulting in solvers of adequate accuracy. Examples are provided for four different equations, illustrating the sharpness of the proposed lower bounds.


[65] 2507.08866

Underrepresentation, Label Bias, and Proxies: Towards Data Bias Profiles for the EU AI Act and Beyond

Undesirable biases encoded in the data are key drivers of algorithmic discrimination. Their importance is widely recognized in the algorithmic fairness literature, as well as legislation and standards on anti-discrimination in AI. Despite this recognition, data biases remain understudied, hindering the development of computational best practices for their detection and mitigation. In this work, we present three common data biases and study their individual and joint effect on algorithmic discrimination across a variety of datasets, models, and fairness measures. We find that underrepresentation of vulnerable populations in training sets is less conducive to discrimination than conventionally affirmed, while combinations of proxies and label bias can be far more critical. Consequently, we develop dedicated mechanisms to detect specific types of bias, and combine them into a preliminary construct we refer to as the Data Bias Profile (DBP). This initial formulation serves as a proof of concept for how different bias signals can be systematically documented. Through a case study with popular fairness datasets, we demonstrate the effectiveness of the DBP in predicting the risk of discriminatory outcomes and the utility of fairness-enhancing interventions. Overall, this article bridges algorithmic fairness research and anti-discrimination policy through a data-centric lens.


[66] 2507.08867

Mind the Gap: Navigating Inference with Optimal Transport Maps

Machine learning (ML) techniques have recently enabled enormous gains in sensitivity across the sciences. In particle physics, much of this progress has relied on excellent simulations of a wide range of physical processes. However, due to the sophistication of modern ML algorithms and their reliance on high-quality training samples, discrepancies between simulation and experimental data can significantly limit the effectiveness of ML techniques. In this work, we present a solution to this ``mis-specification'' problem: a calibration approach based on optimal transport, which we apply to high-dimensional simulations for the first time. We demonstrate the performance of our approach through jet tagging, using a CMS-inspired dataset. A 128-dimensional internal jet representation from a powerful general-purpose classifier is studied; after calibrating this internal ``latent'' representation, we find that a wide variety of quantities derived from it for downstream tasks are also properly calibrated: using this calibrated high-dimensional representation, powerful new applications of jet flavor information can be utilized in LHC analyses. This is a key step toward allowing properly-calibrated ``foundation models'' in particle physics. More broadly, this calibration framework is applicable to correcting high-dimensional simulations across the sciences.


[67] 2507.08913

Revisiting Convergence: Shuffling Complexity Beyond Lipschitz Smoothness

Shuffling-type gradient methods are favored in practice for their simplicity and rapid empirical performance. Despite extensive development of convergence guarantees under various assumptions in recent years, most require the Lipschitz smoothness condition, which is often not met in common machine learning models. We highlight this issue with specific counterexamples. To address this gap, we revisit the convergence rates of shuffling-type gradient methods without assuming Lipschitz smoothness. Using our stepsize strategy, the shuffling-type gradient algorithm not only converges under weaker assumptions but also matches the current best-known convergence rates, thereby broadening its applicability. We prove the convergence rates for nonconvex, strongly convex, and non-strongly convex cases, each under both random reshuffling and arbitrary shuffling schemes, under a general bounded variance condition. Numerical experiments further validate the performance of our shuffling-type gradient algorithm, underscoring its practical efficacy.
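
For concreteness, the sketch below contrasts random reshuffling with a fixed (arbitrary) permutation on a least-squares toy problem; the diminishing stepsize is a placeholder, not the authors' stepsize strategy, and the quadratic objective happens to be smooth even though the paper's analysis does not require smoothness.

```python
import numpy as np

rng = np.random.default_rng(8)

# least-squares toy problem
n, d = 200, 10
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true + 0.1 * rng.normal(size=n)

def shuffling_sgd(epochs=50, lr0=0.1, reshuffle=True):
    x = np.zeros(d)
    perm = rng.permutation(n)                       # fixed permutation if reshuffle=False
    for e in range(epochs):
        if reshuffle:
            perm = rng.permutation(n)               # random reshuffling: new permutation each epoch
        lr = lr0 / (e + 1)                          # placeholder diminishing stepsize
        for i in perm:
            grad = (A[i] @ x - b[i]) * A[i]         # per-sample gradient
            x -= lr * grad
    return x

for mode, label in [(True, "random reshuffling"), (False, "fixed permutation")]:
    x_hat = shuffling_sgd(reshuffle=mode)
    print(label, "error:", np.linalg.norm(x_hat - x_true))
```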


[68] 2507.08956

Beyond Scores: Proximal Diffusion Models

Diffusion models have quickly become some of the most popular and powerful generative models for high-dimensional data. The key insight that enabled their development was the realization that access to the score -- the gradient of the log-density at different noise levels -- allows for sampling from data distributions by solving a reverse-time stochastic differential equation (SDE) via forward discretization, and that popular denoisers allow for unbiased estimators of this score. In this paper, we demonstrate that an alternative, backward discretization of these SDEs, using proximal maps in place of the score, leads to theoretical and practical benefits. We leverage recent results in proximal matching to learn proximal operators of the log-density and, with them, develop Proximal Diffusion Models (ProxDM). Theoretically, we prove that $\widetilde{O}(d/\sqrt{\varepsilon})$ steps suffice for the resulting discretization to generate an $\varepsilon$-accurate distribution w.r.t. the KL divergence. Empirically, we show that two variants of ProxDM achieve significantly faster convergence within just a few sampling steps compared to conventional score-matching methods.


[69] 2507.08963

Stochastic Approximation with Block Coordinate Optimal Stepsizes

We consider stochastic approximation with block-coordinate stepsizes and propose adaptive stepsize rules that aim to minimize the expected distance from the next iterate to an optimal point. These stepsize rules employ online estimates of the second moment of the search direction along each block coordinate. The popular Adam algorithm can be interpreted as a particular heuristic for such estimation. By leveraging a simple conditional estimator, we derive a new method that obtains comparable performance as Adam but requires less memory and fewer hyper-parameters. We prove that this family of methods converges almost surely to a small neighborhood of the optimal point, and the radius of the neighborhood depends on the bias and variance of the second-moment estimator. Our analysis relies on a simple aiming condition that assumes neither convexity nor smoothness, thus has broad applicability.


[70] 2507.08965

Theory-Informed Improvements to Classifier-Free Guidance for Discrete Diffusion Models

Classifier-Free Guidance (CFG) is a widely used technique for conditional generation and improving sample quality in continuous diffusion models, and recent works have extended it to discrete diffusion. This paper theoretically analyzes CFG in the context of masked discrete diffusion, focusing on the role of guidance schedules. Our analysis shows that high guidance early in sampling (when inputs are heavily masked) harms generation quality, while late-stage guidance has a larger effect. These findings provide a theoretical explanation for empirical observations in recent studies on guidance schedules. The analysis also reveals an imperfection in current CFG implementations. These implementations can unintentionally cause imbalanced transitions, such as unmasking too rapidly during the early stages of generation, which degrades the quality of the resulting samples. To address this, we draw insight from the analysis and propose a novel classifier-free guidance mechanism empirically applicable to any discrete diffusion. Intuitively, our method smooths the transport between the data distribution and the initial (masked/uniform) distribution, which results in improved sample quality. Remarkably, our method is achievable via a simple one-line code change. The efficacy of our method is empirically demonstrated with experiments on ImageNet (masked discrete diffusion) and QM9 (uniform discrete diffusion).


[71] 2507.08977

Simulation as Supervision: Mechanistic Pretraining for Scientific Discovery

Scientific modeling faces a core limitation: mechanistic models offer interpretability but collapse under real-world complexity, while machine learning models are flexible but require large labeled datasets, cannot infer unobservable quantities, and operate as black boxes. We introduce Simulation-Grounded Neural Networks (SGNNs), a general framework that uses mechanistic simulations as training data for neural networks. SGNNs are pretrained on synthetic corpora spanning diverse model structures, parameter regimes, stochasticity, and observational artifacts. We evaluated SGNNs across scientific disciplines and modeling tasks, and found that SGNNs achieved state-of-the-art results across settings: for prediction tasks, they nearly tripled COVID-19 forecasting skill versus CDC baselines, reduced chemical yield prediction error by one third, and maintained accuracy in ecological forecasting where task specific models failed. For inference tasks, SGNNs also accurately classified the source of information spread in simulated social networks and enabled supervised learning for unobservable targets, such as estimating COVID-19 transmissibility more accurately than traditional methods even in early outbreaks. Finally, SGNNs enable back-to-simulation attribution, a new form of mechanistic interpretability. Given real world input, SGNNs retrieve simulations based on what the model has learned to see as most similar, revealing which underlying dynamics the model believes are active. This provides process-level insight -- what the model thinks is happening -- not just which features mattered. SGNNs unify scientific theory with deep learning flexibility and unlock a new modeling paradigm -- transforming simulations from rigid, post hoc tools into flexible sources of supervision, enabling robust, interpretable inference even when ground truth is missing.


[72] 2507.09043

Shortening the Trajectories: Identity-Aware Gaussian Approximation for Efficient 3D Molecular Generation

Gaussian-based Probabilistic Generative Models (GPGMs) generate data by reversing a stochastic process that progressively corrupts samples with Gaussian noise. While these models have achieved state-of-the-art performance across diverse domains, their practical deployment remains constrained by the high computational cost of long generative trajectories, which often involve hundreds to thousands of steps during training and sampling. In this work, we introduce a theoretically grounded and empirically validated framework that improves generation efficiency without sacrificing training granularity or inference fidelity. Our key insight is that for certain data modalities, the noising process causes data to rapidly lose its identity and converge toward a Gaussian distribution. We analytically identify a characteristic step at which the data has acquired sufficient Gaussianity, and then replace the remaining generation trajectory with a closed-form Gaussian approximation. Unlike existing acceleration techniques that coarsen the trajectories by skipping steps, our method preserves the full resolution of learning dynamics while avoiding redundant stochastic perturbations between `Gaussian-like' distributions. Empirical results across multiple data modalities demonstrate substantial improvements in both sample quality and computational efficiency.


[73] 2507.09061

Imitation Learning in Continuous Action Spaces: Mitigating Compounding Error without Interaction

We study the problem of imitating an expert demonstrator in a continuous state-and-action dynamical system. While imitation learning in discrete settings such as autoregressive language modeling has seen immense success and popularity in recent years, imitation in physical settings such as autonomous driving and robot learning has proven comparably more complex due to the compounding errors problem, often requiring elaborate set-ups to perform stably. Recent work has demonstrated that even in benign settings, exponential compounding errors are unavoidable when learning solely from expert-controlled trajectories, suggesting the need for more advanced policy parameterizations or data augmentation. To this end, we present minimal interventions that provably mitigate compounding errors in continuous state-and-action imitation learning. When the system is open-loop stable, we prescribe "action chunking," i.e., predicting and playing sequences of actions in open-loop; when the system is possibly unstable, we prescribe "noise injection," i.e., adding noise during expert demonstrations. These interventions align with popular choices in modern robot learning, though the benefits we derive are distinct from the effects they were designed to target. Our results draw insights and tools from both control theory and reinforcement learning; however, our analysis reveals novel considerations that do not naturally arise when either literature is considered in isolation.


[74] 2507.09087

Deep Reinforcement Learning with Gradient Eligibility Traces

Achieving fast and stable off-policy learning in deep reinforcement learning (RL) is challenging. Most existing methods rely on semi-gradient temporal-difference (TD) methods for their simplicity and efficiency, but are consequently susceptible to divergence. While more principled approaches like Gradient TD (GTD) methods have strong convergence guarantees, they have rarely been used in deep RL. Recent work introduced the Generalized Projected Bellman Error (GPBE), enabling GTD methods to work efficiently with nonlinear function approximation. However, that work is limited to one-step methods, which are slow at credit assignment and require a large number of samples. In this paper, we extend the GPBE objective to support multistep credit assignment based on the $\lambda$-return and derive three gradient-based methods that optimize this new objective. We provide both a forward-view formulation compatible with experience replay and a backward-view formulation compatible with streaming algorithms. Finally, we evaluate the proposed algorithms and show that they outperform both PPO and StreamQ in MuJoCo and MinAtar environments, respectively. Code available at this https URL\_algos


[75] 2507.09091

Continuous-Time Signal Decomposition: An Implicit Neural Generalization of PCA and ICA

We generalize low-rank decomposition problems, such as principal and independent component analysis (PCA, ICA), to continuous-time vector-valued signals, and provide a model-agnostic implicit neural signal representation framework that learns numerical approximations to solve the problem. Modeling signals as continuous-time stochastic processes, we unify the approaches to both the PCA and ICA problems in the continuous setting through a contrast function term in the network loss, enforcing the desired statistical properties of the source signals (decorrelation, independence) learned in the decomposition. This extension to a continuous domain allows the application of such decompositions to point clouds and irregularly sampled signals where standard techniques are not applicable.


[76] 2507.09127

A Study of Value-Aware Eigenoptions

Options, which impose an inductive bias toward temporal and hierarchical structure, offer a powerful framework for reinforcement learning (RL). While effective in sequential decision-making, they are often handcrafted rather than learned. Among approaches for discovering options, eigenoptions have shown strong performance in exploration, but their role in credit assignment remains underexplored. In this paper, we investigate whether eigenoptions can accelerate credit assignment in model-free RL, evaluating them in tabular and pixel-based gridworlds. We find that pre-specified eigenoptions aid not only exploration but also credit assignment, whereas online discovery can bias the agent's experience too strongly and hinder learning. In the context of deep RL, we also propose a method for learning option-values under non-linear function approximation, highlighting the impact of termination conditions on performance. Our findings reveal both the promise and complexity of using eigenoptions, and options more broadly, to simultaneously support credit assignment and exploration in reinforcement learning.


[77] 2507.09151

Convergence Rate of the Solution of Multi-marginal Schrodinger Bridge Problem with Marginal Constraints from SDEs

In this paper, we investigate the multi-marginal Schrodinger bridge (MSB) problem whose marginal constraints are marginal distributions of a stochastic differential equation (SDE) with a constant diffusion coefficient and a time-dependent drift term. As the number $m$ of marginal constraints increases, we prove that the solution of the corresponding MSB problem converges to the law of the solution of the SDE at the rate of $O(m^{-1})$, in the sense of KL divergence. Our result extends the work of \cite{agarwal2024iterated} to the case where the drift of the underlying stochastic process is time-dependent.


[78] 2507.09177

Continual Reinforcement Learning by Planning with Online World Models

Continual reinforcement learning (CRL) refers to a naturalistic setting where an agent needs to endlessly evolve, by trial and error, to solve multiple tasks that are presented sequentially. One of the largest obstacles to CRL is that the agent may forget how to solve previous tasks when learning a new task, known as catastrophic forgetting. In this paper, we propose to address this challenge by planning with online world models. Specifically, we learn a Follow-The-Leader shallow model online to capture the world dynamics, in which we plan using model predictive control to solve a set of tasks specified by any reward functions. The online world model is immune to forgetting by construction with a proven regret bound of $\mathcal{O}(\sqrt{K^2D\log(T)})$ under mild assumptions. The planner searches actions solely based on the latest online model, thus forming a FTL Online Agent (OA) that updates incrementally. To assess OA, we further design Continual Bench, a dedicated environment for CRL, and compare with several strong baselines under the same model-planning algorithmic framework. The empirical results show that OA learns continuously to solve new tasks while not forgetting old skills, outperforming agents built on deep world models with various continual learning techniques.


[79] 2507.09181

Generalized Orlicz premia

We introduce a generalized version of Orlicz premia, based on possibly non-convex loss functions. We show that this generalized definition covers a variety of relevant examples, such as the geometric mean and the expectiles, while at the same time retaining a number of relevant properties. We establish that cash-additivity leads to $L^p$-quantiles, extending a classical result on 'collapse to the mean' for convex Orlicz premia. We then focus on the geometrically convex case, discussing the dual representation of generalized Orlicz premia and comparing it with a multiplicative form of the standard dual representation for the convex case. Finally, we show that generalized Orlicz premia arise naturally as the only elicitable, positively homogeneous, monotone and normalized functionals.


[80] 2507.09211

Capturing Unseen Spatial Extremes Through Knowledge-Informed Generative Modeling

Observed records of climate extremes provide an incomplete picture of risk, missing "unseen" extremes that exceed historical bounds. In parallel, neglecting spatial dependence undervalues the risk of synchronized hazards that amplify impacts. To address these challenges, we develop DeepX-GAN (Dependence-Enhanced Embedding for Physical eXtremes - Generative Adversarial Network), a knowledge-informed deep generative model designed to better capture the spatial structure of rare extremes. The zero-shot generalizability of DeepX-GAN enables simulation of unseen extremes that fall outside historical experience yet remain statistically plausible. We define two types of unseen extremes: "checkmate" extremes that directly hit targets, and "stalemate" extremes that narrowly miss. These unrealized scenarios expose latent risks in fragile systems and may reinforce a false sense of resilience if overlooked. Near misses, in particular, can prompt either proactive adaptation or dangerous complacency, depending on how they are interpreted. Applying DeepX-GAN to the Middle East and North Africa (MENA), we find that these unseen extremes disproportionately affect regions with high vulnerability and low socioeconomic readiness, but differ in urgency and interpretation. Future warming could expand and redistribute these unseen extremes, with emerging exposure hotspots in Indo-Pakistan and Central Africa. This distributional shift highlights critical blind spots in conventional hazard planning and underscores the need to develop spatially adaptive policies that anticipate emergent risk hotspots rather than simply extrapolating from historical patterns.


[81] 2507.09212

Warm Starts Accelerate Generative Modelling

Iterative generative models, like diffusion and flow-matching, create high-fidelity samples by progressively refining a noise vector into data. However, this process is notoriously slow, often requiring hundreds of function evaluations. We introduce the warm-start model, a simple, deterministic model that dramatically accelerates conditional generation by providing a better starting point. Instead of starting generation from an uninformed $\mathcal{N}(0, I)$ prior, our warm-start model predicts an informed prior $\mathcal{N}(\mu, \sigma)$, whose moments are conditioned on the input context. This "warm start" substantially reduces the distance the generative process must traverse, particularly when the conditioning information is strongly informative. On tasks like image inpainting, our method achieves results competitive with a 1000-step DDPM baseline using only 11 total function evaluations (1 for the warm start, 10 for generation). A simple conditional normalization trick makes our method compatible with any standard generative model and sampler without modification, allowing it to be combined with other efficient sampling techniques for further acceleration. Our implementation is available at this https URL.
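
A schematic of the warm-start idea, with every component a placeholder rather than the authors' architecture: a deterministic network predicts $(\mu, \sigma)$ from the conditioning context, the sampler is initialized from $\mathcal{N}(\mu, \mathrm{diag}(\sigma^2))$ instead of $\mathcal{N}(0, I)$, and only a few refinement steps follow.

```python
import numpy as np

rng = np.random.default_rng(9)
d = 64                                             # data dimensionality (placeholder)

def warm_start_model(context):
    """Placeholder for the deterministic warm-start network: predicts an informed prior."""
    mu = 0.9 * context                             # e.g. inpainting: start near the observed pixels
    sigma = 0.2 * np.ones(d)                       # predicted per-dimension scale
    return mu, sigma

def refine_step(x, t):
    """Placeholder for one step of a pretrained iterative sampler (diffusion / flow update)."""
    return x - 0.1 * x * (t / 10)

context = rng.normal(size=d)                       # conditioning information (e.g. a masked image)

# cold start would be rng.normal(size=d); warm start uses the predicted prior instead
mu, sigma = warm_start_model(context)              # 1 function evaluation for the warm start
x = mu + sigma * rng.normal(size=d)

for t in range(10, 0, -1):                         # only a few refinement steps are then needed
    x = refine_step(x, t)
```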


[82] 2507.09213

Optimizing Basis Function Selection in Constructive Wavelet Neural Networks and Its Applications

The wavelet neural network (WNN), which learns an unknown nonlinear mapping from data, has been widely used in signal processing and time-series analysis. However, challenges in constructing accurate wavelet bases and high computational costs limit its application. This study introduces a constructive WNN that selects initial bases and, during training, introduces new bases to reach a predefined accuracy while reducing computational costs. For the first time, we analyze the frequency of unknown nonlinear functions and select appropriate initial wavelets based on their primary frequency components by estimating the energy of the spatial frequency component. This leads to a novel constructive framework consisting of a frequency estimator and a wavelet-basis increase mechanism to prioritize high-energy bases, significantly improving computational efficiency. The theoretical foundation defines the necessary time-frequency range for high-dimensional wavelets at a given accuracy. The framework's versatility is demonstrated through four examples: estimating unknown static mappings from offline data, combining two offline datasets, identifying time-varying mappings from time-series data, and capturing nonlinear dependencies in real time-series data. These examples showcase the framework's broad applicability and practicality. All the code will be released at this https URL.


[83] 2507.09247

A CLuP algorithm to practically achieve $\sim 0.76$ SK--model ground state free energy

We consider algorithmic determination of the $n$-dimensional Sherrington-Kirkpatrick (SK) spin glass model ground state free energy. It corresponds to a binary maximization of an indefinite quadratic form and under the \emph{worst case} principles of the classical NP complexity theory it is hard to approximate within a $\log(n)^{const.}$ factor. On the other hand, the SK's random nature allows (polynomial) spectral methods to \emph{typically} approach the optimum within a constant factor. Naturally one is left with the fundamental question: can the residual (constant) \emph{computational gap} be erased? Following the success of \emph{Controlled Loosening-up} (CLuP) algorithms in planted models, we here devise a simple practical CLuP-SK algorithmic procedure for (non-planted) SK models. To analyze the \emph{typical} success of the algorithm we associate to it (random) CLuP-SK models. Further connecting to recent random processes studies [94,97], we characterize the models and the CLuP-SK algorithm via fully lifted random duality theory (fl RDT) [98]. Moreover, running the algorithm we demonstrate that its performance is in excellent agreement with theoretical predictions. In particular, already for $n$ on the order of a few thousand, CLuP-SK achieves $\sim 0.76$ ground state free energy and remarkably closely approaches the theoretical $n\rightarrow\infty$ limit $\approx 0.763$. For all practical purposes, this renders computing the SK model's near ground state free energy a \emph{typically} easy problem.


[84] 2507.09252

TPP-SD: Accelerating Transformer Point Process Sampling with Speculative Decoding

We propose TPP-SD, a novel approach that accelerates Transformer temporal point process (TPP) sampling by adapting speculative decoding (SD) techniques from language models. By identifying the structural similarities between thinning algorithms for TPPs and speculative decoding for language models, we develop an efficient sampling framework that leverages a smaller draft model to generate multiple candidate events, which are then verified by the larger target model in parallel. TPP-SD maintains the same output distribution as autoregressive sampling while achieving significant acceleration. Experiments on both synthetic and real datasets demonstrate that our approach produces samples from identical distributions as standard methods, but with 2-6$\times$ speedup. Our ablation studies analyze the impact of hyperparameters such as draft length and draft model size on sampling efficiency. TPP-SD bridges the gap between powerful Transformer TPP models and the practical need for rapid sequence sampling.
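
For intuition, here is a minimal sketch of the discrete speculative-decoding acceptance rule that TPP-SD adapts to event sequences; `p_target` and `q_draft` are placeholder probability vectors over the next event type, not the paper's Transformer TPP models.

    import numpy as np

    def accept_or_resample(x, p_target, q_draft, rng=np.random.default_rng()):
        # accept the draft sample x with probability min(1, p(x)/q(x))
        if rng.uniform() < min(1.0, p_target[x] / q_draft[x]):
            return x
        # otherwise resample from the renormalised residual max(p - q, 0),
        # which preserves the target distribution exactly
        residual = np.maximum(p_target - q_draft, 0.0)
        return rng.choice(len(p_target), p=residual / residual.sum())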


[85] 2507.09347

A Framework for Predictive Directional Trading Based on Volatility and Causal Inference

Purpose: This study introduces a novel framework for identifying and exploiting predictive lead-lag relationships in financial markets. We propose an integrated approach that combines advanced statistical methodologies with machine learning models to enhance the identification and exploitation of predictive relationships between equities. Methods: We employed a Gaussian Mixture Model (GMM) to cluster nine prominent stocks based on their mid-range historical volatility profiles over a three-year period. From the resulting clusters, we constructed a multi-stage causal inference pipeline, incorporating the Granger Causality Test (GCT), a customised Peter-Clark Momentary Conditional Independence (PCMCI) test, and Effective Transfer Entropy (ETE) to identify robust, predictive linkages. Subsequently, Dynamic Time Warping (DTW) and a K-Nearest Neighbours (KNN) classifier were utilised to determine the optimal time lag for trade execution. The resulting strategy was rigorously backtested. Results: The proposed volatility-based trading strategy, tested from 8 June 2023 to 12 August 2023, demonstrated substantial efficacy. The portfolio yielded a total return of 15.38%, significantly outperforming the 10.39% return of a comparative Buy-and-Hold strategy. Key performance metrics, including a Sharpe Ratio up to 2.17 and a win rate up to 100% for certain pairs, confirmed the strategy's viability. Conclusion: This research contributes a systematic and robust methodology for identifying profitable trading opportunities derived from volatility-based causal relationships. The findings have significant implications for both academic research in financial modelling and the practical application of algorithmic trading, offering a structured approach to developing resilient, data-driven strategies.
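
A minimal sketch of the first two stages of such a pipeline (volatility-based clustering followed by pairwise Granger screening), assuming a returns DataFrame `rets` with one column per stock; the window length, lag order, and threshold below are illustrative choices, not the paper's settings.

    import pandas as pd
    from sklearn.mixture import GaussianMixture
    from statsmodels.tsa.stattools import grangercausalitytests

    def cluster_and_screen(rets: pd.DataFrame, n_clusters=3, maxlag=5, alpha=0.05):
        # stage 1: cluster stocks by a simple historical volatility summary
        vol_profile = rets.rolling(21).std().mean().to_frame()
        labels = GaussianMixture(n_clusters, random_state=0).fit_predict(vol_profile.values)
        cluster_of = dict(zip(rets.columns, labels))
        # stage 2: within each cluster, screen directed pairs with a Granger test
        links = []
        for a in rets.columns:
            for b in rets.columns:
                if a != b and cluster_of[a] == cluster_of[b]:
                    res = grangercausalitytests(rets[[b, a]].dropna(), maxlag=maxlag, verbose=False)
                    pval = min(r[0]["ssr_ftest"][1] for r in res.values())
                    if pval < alpha:
                        links.append((a, b, pval))  # candidate: a Granger-causes b
        return labels, links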


[86] 2507.09353

Impute With Confidence: A Framework for Uncertainty Aware Multivariate Time Series Imputation

Time series data with missing values is common across many domains. Healthcare presents special challenges due to prolonged periods of sensor disconnection. In such cases, having a confidence measure for imputed values is critical. Most existing methods either overlook model uncertainty or lack mechanisms to estimate it. To address this gap, we introduce a general framework that quantifies and leverages uncertainty for selective imputation. By focusing on values the model is most confident in, highly unreliable imputations are avoided. Our experiments on multiple EHR datasets, covering diverse types of missingness, demonstrate that selectively imputing less-uncertain values not only reduces imputation errors but also improves downstream tasks. Specifically, we show performance gains in a 24-hour mortality prediction task, underscoring the practical benefit of incorporating uncertainty into time series imputation.


[87] 2507.09445

Fourier Basis Mapping: A Time-Frequency Learning Framework for Time Series Forecasting

The integration of Fourier transform and deep learning opens new avenues for time series forecasting. We reconsider the Fourier transform from a basis functions perspective. Specifically, the real and imaginary parts of the frequency components can be regarded as the coefficients of cosine and sine basis functions at tiered frequency levels, respectively. We find that existing Fourier-based methods face inconsistent starting cycles and inconsistent series length issues. They fail to interpret frequency components precisely and overlook temporal information. Accordingly, the novel Fourier Basis Mapping (FBM) method addresses these issues by integrating time-frequency features through Fourier basis expansion and mapping in the time-frequency space. Our approach extracts explicit frequency features while preserving temporal characteristics. FBM supports plug-and-play integration with various types of neural networks by only adjusting the first initial projection layer for better performance. First, we propose FBM-L, FBM-NL, and FBM-NP to enhance linear, MLP-based, and Transformer-based models, respectively, demonstrating the effectiveness of time-frequency features. Next, we propose a synergetic model architecture, termed FBM-S, which decomposes the seasonal, trend, and interaction effects into three separate blocks, each designed to model time-frequency features in a specialized manner. Finally, we introduce several techniques tailored for time-frequency features, including interaction masking, centralization, patching, rolling window projection, and multi-scale down-sampling. The results are validated on diverse real-world datasets for both long-term and short-term forecasting tasks with SOTA performance.
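
As a minimal sketch of the underlying idea (not the released FBM code), the feature map below treats the real and imaginary parts of the rFFT as cosine and sine basis coefficients and concatenates them with the raw window, so that downstream layers see both temporal and frequency views.

    import numpy as np

    def fourier_basis_features(window: np.ndarray) -> np.ndarray:
        # rFFT coefficients of the input window, normalised by its length
        coeffs = np.fft.rfft(window) / len(window)
        # real parts act as cosine-basis coefficients, (negated) imaginary
        # parts as sine-basis coefficients in the usual expansion convention
        cos_coeffs, sin_coeffs = coeffs.real, -coeffs.imag
        return np.concatenate([window, cos_coeffs, sin_coeffs])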


[88] 2507.09473

Incentive-Aware Dynamic Resource Allocation under Long-Term Cost Constraints

Motivated by applications such as cloud platforms allocating GPUs to users or governments deploying mobile health units across competing regions, we study the dynamic allocation of a reusable resource to strategic agents with private valuations. Our objective is to simultaneously (i) maximize social welfare, (ii) satisfy multi-dimensional long-term cost constraints, and (iii) incentivize truthful reporting. We begin by numerically evaluating primal-dual methods widely used in constrained online optimization and find them to be highly fragile in strategic settings -- agents can easily manipulate their reports to distort future dual updates for future gain. To address this vulnerability, we develop an incentive-aware framework that makes primal-dual methods robust to strategic behavior. Our design combines epoch-based lazy updates -- where dual variables remain fixed within each epoch -- with randomized exploration rounds that extract approximately truthful signals for learning. Leveraging carefully designed online learning subroutines that can be of independent interest for dual updates, our mechanism achieves $\tilde{\mathcal{O}}(\sqrt{T})$ social welfare regret, satisfies all cost constraints, and ensures incentive alignment. This matches the performance of non-strategic allocation approaches while being robust to strategic agents.


[89] 2507.09678

Conformal Prediction for Privacy-Preserving Machine Learning

We investigate the integration of Conformal Prediction (CP) with supervised learning on deterministically encrypted data, aiming to bridge the gap between rigorous uncertainty quantification and privacy-preserving machine learning. Using AES-encrypted variants of the MNIST dataset, we demonstrate that CP methods remain effective even when applied directly in the encrypted domain, owing to the preservation of data exchangeability under fixed-key encryption. We test traditional $p$-value-based against $e$-value-based conformal predictors. Our empirical evaluation reveals that models trained on deterministically encrypted data retain the ability to extract meaningful structure, achieving 36.88\% test accuracy -- significantly above random guessing (9.56\%) observed with per-instance encryption. Moreover, $e$-value-based CP achieves predictive set coverage of over 60\% with 4.3 loss-threshold calibration, correctly capturing the true label in 4888 out of 5000 test cases. In contrast, the $p$-value-based CP yields smaller predictive sets but with reduced coverage accuracy. These findings highlight both the promise and limitations of CP in encrypted data settings and underscore critical trade-offs between prediction set compactness and reliability. Our work sets a foundation for principled uncertainty quantification in secure, privacy-aware learning systems.
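
A minimal sketch of split conformal prediction with a p-value-style score, assuming a fitted classifier exposing predict_proba and a held-out calibration set; nothing here is specific to the encrypted-MNIST setup or to the e-value variant.

    import numpy as np

    def conformal_prediction_set(model, X_cal, y_cal, x_test, alpha=0.1):
        # nonconformity score: 1 - predicted probability of the true class
        cal_scores = 1.0 - model.predict_proba(X_cal)[np.arange(len(y_cal)), y_cal]
        n = len(cal_scores)
        # finite-sample-corrected conformal quantile
        level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
        qhat = np.quantile(cal_scores, level, method="higher")
        test_scores = 1.0 - model.predict_proba(x_test.reshape(1, -1))[0]
        return np.where(test_scores <= qhat)[0]  # labels kept in the prediction set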


[90] 2507.09694

Frequency-aware Surrogate Modeling With SMT Kernels For Advanced Data Forecasting

This paper introduces a comprehensive open-source framework for developing correlation kernels, with a particular focus on user-defined kernels and compositions of kernels for surrogate modeling. By advancing kernel-based modeling techniques, we incorporate frequency-aware elements that effectively capture complex mechanical behaviors and time-frequency dynamics intrinsic to aircraft systems. Traditional kernel functions, often limited to exponential-based methods, are extended to include a wider range of kernels such as exponential squared sine and rational quadratic kernels, along with their respective first- and second-order derivatives. The proposed methodologies are first validated on a sinus cardinal test case and then applied to forecasting Mauna Loa carbon dioxide (CO$_2$) concentrations and airline passenger traffic. All these advancements are integrated into the open-source Surrogate Modeling Toolbox (SMT 2.0), providing a versatile platform for both standard and customizable kernel configurations. Furthermore, the framework enables the combination of various kernels to leverage their unique strengths into composite models tailored to specific problems. The resulting framework offers a flexible toolset for engineers and researchers, paving the way for numerous future applications in metamodeling for complex, frequency-sensitive domains.


[91] 2507.09711

Phase transition of the Sinkhorn-Knopp algorithm

The matrix scaling problem, particularly the Sinkhorn-Knopp algorithm, has been studied for over 60 years. In practice, the algorithm often yields high-quality approximations within just a few iterations. Theoretically, however, the best-known upper bound places it in the class of pseudopolynomial-time approximation algorithms. Meanwhile, the lower-bound landscape remains largely unexplored. Two fundamental questions persist: what accounts for the algorithm's strong empirical performance, and can a tight bound on its iteration count be established? For an $n\times n$ matrix, its normalized version is obtained by dividing each entry by its largest entry. We say that a normalized matrix has a density $\gamma$ if there exists a constant $\rho > 0$ such that one row or column has exactly $\lceil \gamma n \rceil$ entries with values at least $\rho$, and every other row and column has at least $\lceil \gamma n \rceil$ such entries. For the upper bound, we show that the Sinkhorn-Knopp algorithm produces a nearly doubly stochastic matrix in $O(\log n - \log \varepsilon)$ iterations and $\widetilde{O}(n^2)$ time for all nonnegative square matrices whose normalized version has a density $\gamma > 1/2$. Such matrices cover both the algorithm's principal practical inputs and its typical theoretical regime, and the $\widetilde{O}(n^2)$ runtime is optimal. For the lower bound, we establish a tight bound of $\widetilde{\Omega}\left(n^{1/2}/\varepsilon\right)$ iterations for positive matrices under the $\ell_2$-norm error measure. Moreover, for every $\gamma < 1/2$, there exists a matrix with density $\gamma$ for which the algorithm requires $\Omega\left(n^{1/2}/\varepsilon\right)$ iterations. In summary, our results reveal a sharp phase transition in the Sinkhorn-Knopp algorithm at the density threshold $\gamma = 1/2$.
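
For reference, the iteration analysed here is simply alternating row and column normalisation of a nonnegative matrix; a minimal sketch, with the stopping rule expressed as a check on the row sums:

    import numpy as np

    def sinkhorn_knopp(A: np.ndarray, eps: float = 1e-6, max_iter: int = 10_000):
        A = A.astype(float).copy()
        for _ in range(max_iter):
            A /= A.sum(axis=1, keepdims=True)  # scale rows to sum to 1
            A /= A.sum(axis=0, keepdims=True)  # scale columns to sum to 1
            # stop once the row sums are also within eps of 1
            if np.abs(A.sum(axis=1) - 1.0).max() < eps:
                break
        return A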


[92] 2507.09732

Continental scale habitat modelling with artificial intelligence and multimodal earth observation

Habitats integrate the abiotic conditions and biophysical structures that support biodiversity and sustain nature's contributions to people. As these ecosystems face mounting pressure from human activities, accurate, high-resolution habitat maps are essential for effective conservation and restoration. Yet current maps often fall short in thematic or spatial resolution because they must (1) model several mutually exclusive habitat types that co-occur across landscapes and (2) cope with severe class imbalance that complicates multi-class training. Here, we evaluated how high-resolution remote sensing (RS) data and Artificial Intelligence (AI) tools can improve habitat classification over large geographic extents at fine thematic resolution. Using vegetation plots from the European Vegetation Archive, we modelled Level 3 EUNIS habitats across Europe and assessed multiple modelling strategies against independent validation datasets. Strategies that exploited the hierarchical nature of habitat nomenclatures resolved classification ambiguities, especially in fragmented landscapes. Integrating multi-spectral (MSI) and synthetic aperture radar (SAR) imagery, particularly through Earth Observation Foundation models, enhanced within-formation discrimination and overall performance. Finally, ensemble machine learning that corrects class imbalance boosted accuracy further. Our methodological framework is transferable beyond Europe and adaptable to other classification systems. Future research should advance temporal modelling of dynamic habitats, extend to habitat segmentation and quality assessment, and exploit next-generation EO data paired with higher-quality in-situ observations.


[93] 2507.09808

Frank-Wolfe Recursions for the Emergency Response Problem on Measure Spaces

We consider an optimization problem over measures for emergency response to out-of-hospital cardiac arrest (OHCA), where the goal is to allocate volunteer resources across a spatial region to minimize the probability of death. The problem is infinite-dimensional and poses challenges for analysis and computation. We first establish structural properties, including convexity of the objective functional, compactness of the feasible set, and existence of optimal solutions. We also derive the influence function, which serves as the first-order variational object in our optimization framework. We then adapt and analyze a fully-corrective Frank-Wolfe (fc-FW) algorithm that operates directly on the infinite-dimensional problem without discretization or parametric approximation. We show a form of convergence even when subproblems are not solved to global optimality. Our full implementation of fc-FW demonstrates complex solution structure even in simple discrete cases, reveals nontrivial volunteer allocations in continuous cases, and scales to realistic urban scenarios using OHCA data from the city of Auckland, New Zealand. Finally, we show that when volunteer travel is modeled through the $L_1$ norm, the influence function is piecewise strictly concave, enabling fast computation via support reduction. The proposed framework and analysis extend naturally to a broad class of $P$-means problems.


[94] 2507.09846

Through the River: Understanding the Benefit of Schedule-Free Methods for Language Model Training

As both model and dataset sizes continue to scale rapidly, conventional pretraining strategies with fixed compute budgets, such as cosine learning rate schedules, are increasingly inadequate for large-scale training. Recent alternatives, including warmup-stable-decay (WSD) schedules and weight averaging, offer greater flexibility. However, WSD relies on explicit decay phases to track progress, while weight averaging addresses this limitation at the cost of additional memory. In search of a more principled and scalable alternative, we revisit the Schedule-Free (SF) method [Defazio et al., 2024], which has shown strong empirical performance across diverse settings. We show that SF-AdamW effectively navigates the "river" structure of the loss landscape without decay phases or auxiliary averaging, making it particularly suitable for continuously scaling training workloads. To understand this behavior, we conduct a theoretical and empirical analysis of SF dynamics, revealing that it implicitly performs weight averaging without memory overhead. Guided by this analysis, we propose a refined variant of SF that improves robustness to momentum and performs better under large batch sizes, addressing key limitations of the original method. Together, these results establish SF as a practical, scalable, and theoretically grounded approach for language model training.
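
To make the update concrete, here is a minimal sketch of the Schedule-Free recursion in its SGD form, following the description in Defazio et al. [2024] (the paper above analyses the AdamW variant); the step size and beta below are placeholders. The averaged iterate x is the one used for evaluation, consistent with the point that averaging happens implicitly without memory beyond the two iterates.

    def schedule_free_sgd_step(x, z, grad_fn, t, lr=1.0, beta=0.9):
        # the gradient is evaluated at the interpolation y between the base
        # iterate z and the running average x
        y = (1 - beta) * z + beta * x
        z = z - lr * grad_fn(y)          # base (SGD) update
        c = 1.0 / (t + 1)
        x = (1 - c) * x + c * z          # online average of the z iterates
        return x, z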


[95] 2507.09888

NeuTSFlow: Modeling Continuous Functions Behind Time Series Forecasting

Time series forecasting is a fundamental task with broad applications, yet conventional methods often treat data as discrete sequences, overlooking their origin as noisy samples of continuous processes. Crucially, discrete noisy observations cannot uniquely determine a continuous function; instead, they correspond to a family of plausible functions. Mathematically, time series can be viewed as noisy observations of a continuous function family governed by a shared probability measure. Thus, the forecasting task can be framed as learning the transition from the historical function family to the future function family. This reframing introduces two key challenges: (1) How can we leverage discrete historical and future observations to learn the relationships between their underlying continuous functions? (2) How can we model the transition path in function space from the historical function family to the future function family? To address these challenges, we propose NeuTSFlow, a novel framework that leverages Neural Operators to facilitate flow matching for learning path of measure between historical and future function families. By parameterizing the velocity field of the flow in infinite-dimensional function spaces, NeuTSFlow moves beyond traditional methods that focus on dependencies at discrete points, directly modeling function-level features instead. Experiments on diverse forecasting tasks demonstrate NeuTSFlow's superior accuracy and robustness, validating the effectiveness of the function-family perspective.


[96] 2507.09916

Solving dynamic portfolio selection problems via score-based diffusion models

In this paper, we tackle the dynamic mean-variance portfolio selection problem in a {\it model-free} manner, based on (generative) diffusion models. We propose using data sampled from the real model $\mathcal P$ (which is unknown) with limited size to train a generative model $\mathcal Q$ (from which we can easily and adequately sample). With adaptive training and sampling methods that are tailor-made for time series data, we obtain quantification bounds between $\mathcal P$ and $\mathcal Q$ in terms of the adapted Wasserstein metric $\mathcal{A}W_2$. Importantly, the proposed adapted sampling method also facilitates {\it conditional sampling}. In the second part of this paper, we establish the stability of the mean-variance portfolio optimization problem under $\mathcal{A}W_2$. Then, combining the error bounds and the stability result, we propose a policy gradient algorithm based on the generative environment, in which our innovative adapted sampling method provides approximate scenario generators. We illustrate the performance of our algorithm on both simulated and real data. For real data, the algorithm based on the generative environment produces portfolios that beat several important baselines, including the Markowitz portfolio, the equal-weight (naive) portfolio, and the S\&P 500.


[97] 2507.09940

Long-Tailed Data Classification by Increasing and Decreasing Neurons During Training

In conventional deep learning, the number of neurons typically remains fixed during training. However, insights from biology suggest that the human hippocampus continuously generates and prunes neurons over the course of learning, implying that a flexible allocation of capacity can enhance performance. Real-world datasets often exhibit class imbalance, where certain classes have far fewer samples than others, which significantly reduces recognition accuracy for minority classes when relying on fixed-size networks. To address this challenge, we propose a method that periodically adds and removes neurons during training, thereby boosting representational power for minority classes. By retaining critical features learned from majority classes while selectively increasing neurons for underrepresented classes, our approach dynamically adjusts capacity during training. Importantly, while the number of neurons changes throughout training, the final network size and structure remain unchanged, ensuring efficiency and compatibility with existing architectures. Through experiments on three different datasets and five representative models, we demonstrate that the proposed method outperforms fixed-size networks and achieves even greater accuracy when combined with other imbalance-handling techniques. Our results underscore the effectiveness of dynamic, biologically inspired network designs in improving performance on class-imbalanced data.


[98] 2507.09952

Radial Neighborhood Smoothing Recommender System

Recommender systems inherently exhibit a low-rank structure in latent space. A key challenge is to define meaningful and measurable distances in the latent space to capture user-user, item-item, and user-item relationships effectively. In this work, we establish that distances in the latent space can be systematically approximated using row-wise and column-wise distances in the observed matrix, providing a novel perspective on distance estimation. To refine the distance estimation, we introduce a correction based on an empirical variance estimator to account for noise-induced non-centrality. The novel distance estimation enables a more structured approach to constructing neighborhoods, leading to the Radial Neighborhood Estimator (RNE), which constructs neighborhoods by including both overlapped and partially overlapped user-item pairs and employs neighborhood smoothing via localized kernel regression to improve imputation accuracy. We provide a theoretical asymptotic analysis of the proposed estimator. We perform evaluations on both simulated and real-world datasets, demonstrating that RNE achieves superior performance compared to existing collaborative filtering and matrix factorization methods. While our primary focus is on distance estimation in latent space, we find that RNE also mitigates the ``cold-start'' problem.


[99] 2507.10088

Towards High Supervised Learning Utility Training Data Generation: Data Pruning and Column Reordering

Tabular data synthesis for supervised learning ('SL') model training is gaining popularity in industries such as healthcare, finance, and retail. Despite progress in tabular data generators, models trained with synthetic data often underperform compared to those trained with original data. This low SL utility of synthetic data stems from exaggerated class imbalance and SL data relationships overlooked by tabular generators. To address these challenges, we draw inspiration from techniques in emerging data-centric artificial intelligence and introduce Pruning and ReOrdering ('PRRO'), a novel pipeline that integrates data-centric techniques into tabular data synthesis. PRRO incorporates data pruning to guide the table generator towards observations with a high signal-to-noise ratio, ensuring that the class distribution of synthetic data closely matches that of the original data. In addition, PRRO employs a column reordering algorithm to align the data modeling structure of generators with that of SL models. These two modules enable PRRO to optimize the SL utility of synthetic data. Empirical experiments on 22 public datasets show that synthetic data generated using PRRO enhances predictive performance compared to data generated without PRRO. Specifically, replacing original data with PRRO-generated synthetic data yields an average improvement of 26.74% and up to 871.46%, while appending PRRO-generated synthetic data to the original data yields an average improvement of 6.13% and up to 200.32%. Furthermore, experiments on six highly imbalanced datasets show that PRRO enables the generator to produce synthetic data with a class distribution that more closely resembles the original data, achieving a similarity improvement of 43%. Through PRRO, we foster a seamless integration of data synthesis into subsequent SL prediction, promoting quality and accessible data analysis.


[100] 2507.10132

Wavelet-Enhanced Neural ODE and Graph Attention for Interpretable Energy Forecasting

Accurate forecasting of energy demand and supply is critical for optimizing sustainable energy systems, yet it is challenged by the variability of renewable sources and dynamic consumption patterns. This paper introduces a neural framework that integrates continuous-time Neural Ordinary Differential Equations (Neural ODEs), graph attention, multi-resolution wavelet transformations, and adaptive learning of frequencies to address the issues of time series prediction. The model employs a robust ODE solver, using the Runge-Kutta method, paired with graph-based attention and residual connections to better understand both structural and temporal patterns. Through wavelet-based feature extraction and adaptive frequency modulation, it adeptly captures and models diverse, multi-scale temporal dynamics. When evaluated across seven diverse datasets: ETTh1, ETTh2, ETTm1, ETTm2 (electricity transformer temperature), and Waste, Solar, and Hydro (renewable energy), this architecture consistently outperforms state-of-the-art baselines in various forecasting metrics, proving its robustness in capturing complex temporal dependencies. Furthermore, the model enhances interpretability through SHAP analysis, making it suitable for sustainable energy applications.


[101] 2507.10184

Fractional Cointegration of Geometric Functionals

In this paper, we show that geometric functionals (e.g., excursion area, boundary length) evaluated on excursion sets of sphere-cross-time long memory random fields can exhibit fractional cointegration, meaning that some of their linear combinations have shorter memory than the original vector. These results prove the existence of long-run equilibrium relationships between functionals evaluated at different threshold values; as a statistical application, we discuss a frequency-domain estimator for the Adler-Taylor metric factor, i.e., the variance of the field's gradient. Our results are illustrated also by Monte Carlo simulations.


[102] 2507.10215

A Graph Sufficiency Perspective for Neural Networks

This paper analyzes neural networks through graph variables and statistical sufficiency. We interpret neural network layers as graph-based transformations, where neurons act as pairwise functions between inputs and learned anchor points. Within this formulation, we establish conditions under which layer outputs are sufficient for the layer inputs, that is, each layer preserves the conditional distribution of the target variable given the input variable. Under dense anchor point assumptions, we prove that asymptotic sufficiency holds in the infinite-width limit and is preserved throughout training. To align more closely with practical architectures, we further show that sufficiency can be achieved with finite-width networks by assuming region-separated input distributions and constructing appropriate anchor points. Our framework covers fully connected layers, general pairwise functions, ReLU and sigmoid activations, and convolutional neural networks. This work bridges statistical sufficiency, graph-theoretic representations, and deep learning, providing a new statistical understanding of neural networks.


[103] 2507.10419

Multiple Choice Learning of Low Rank Adapters for Language Modeling

We propose LoRA-MCL, a training scheme that extends next-token prediction in language models with a method designed to decode diverse, plausible sentence continuations at inference time. Traditional language modeling is an intrinsically ill-posed problem: given a context, multiple futures may be equally plausible. Our approach leverages Multiple Choice Learning (MCL) and the Winner-Takes-All (WTA) loss to efficiently handle ambiguity through Low-Rank Adaptation (LoRA). We provide a theoretical interpretation of applying Multiple Choice Learning to Language Modeling, assuming the data is generated from a mixture of distributions. To illustrate the proposed approach, we use data sampled from mixtures of Markov chains. We then demonstrate with extensive experiments on real-world visual and audio captioning tasks that our method achieves high diversity and relevance in generated outputs.
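
As a minimal sketch of the Winner-Takes-All training signal that underlies MCL (independent of the LoRA-specific machinery), assuming `losses` holds, for each example, the negative log-likelihood of the target under each of the K hypothesis heads; the relaxation factor eps is an illustrative choice.

    import torch

    def relaxed_wta_loss(losses: torch.Tensor, eps: float = 0.1) -> torch.Tensor:
        # losses: [batch, K]; the best hypothesis per example gets weight 1 - eps,
        # the remaining K - 1 hypotheses share eps (relaxed winner-takes-all)
        K = losses.shape[-1]
        winner = losses.argmin(dim=-1, keepdim=True)
        weights = torch.full_like(losses, eps / max(K - 1, 1))
        weights.scatter_(-1, winner, 1.0 - eps)
        return (weights.detach() * losses).sum(dim=-1).mean()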


[104] 2507.10425

Non-exchangeable Conformal Prediction with Optimal Transport: Tackling Distribution Shifts with Unlabeled Data

Conformal prediction is a distribution-free uncertainty quantification method that has gained popularity in the machine learning community due to its finite-sample guarantees and ease of use. Its most common variant, dubbed split conformal prediction, is also computationally efficient as it boils down to collecting statistics of the model predictions on some calibration data not yet seen by the model. Nonetheless, these guarantees only hold if the calibration and test data are exchangeable, a condition that is difficult to verify and often violated in practice due to so-called distribution shifts. The literature is rife with methods to mitigate the loss in coverage in this non-exchangeable setting, but these methods require some prior information on the type of distribution shift to be expected at test time. In this work, we study this problem via a new perspective, through the lens of optimal transport, and show that it is possible to estimate the loss in coverage and mitigate it in case of distribution shift.


[105] 2507.10531

Quantitative central limit theorems for exponential random graphs

Ferromagnetic exponential random graph models (ERGMs) are nonlinear exponential tilts of Erdős-Rényi models, under which the presence of certain subgraphs such as triangles may be emphasized. These models are mixtures of metastable wells which each behave macroscopically like new Erdős-Rényi models themselves, exhibiting the same laws of large numbers for the overall edge count as well as all subgraph counts. However, the microscopic fluctuations of these quantities remained elusive for some time. Building on a recent breakthrough by Fang, Liu, Shao and Zhao [FLSZ24] driven by Stein's method, we prove quantitative central limit theorems (CLTs) for these quantities and more in metastable wells under ferromagnetic ERGMs. One main novelty of our results is that they apply also in the supercritical (low temperature) regime of parameters, which has previously been relatively unexplored. To accomplish this, we develop a novel probabilistic technique based on the careful analysis of the evolution of relevant quantities under the ERGM Glauber dynamics. Our technique allows us to deliver the main input to the method developed by [FLSZ24], which is the fact that the fluctuations of subgraph counts are driven by those of the overall edge count. This was first shown for the triangle count by Sambale and Sinulis [SS20] in the Dobrushin (very high temperature) regime via functional-analytic methods. We feel our technique clarifies the underlying mechanisms at play, and it also supplies improved bounds on the Wasserstein and Kolmogorov distances between the observables at hand and the limiting Gaussians, as compared to the results of [FLSZ24] in the subcritical (high temperature) regime beyond the Dobrushin regime. Moreover, our technique is flexible enough to also yield quantitative CLTs for vertex degrees and local subgraph counts, which have not appeared before in any parameter regime.


[106] 1809.03905

Mapping food insecurity in the Brazilian Amazon using a spatial item factor analysis model

Food insecurity, a latent construct defined as the lack of consistent access to sufficient and nutritious food, is a pressing global issue with serious health and social justice implications. Item factor analysis is commonly used to study such latent constructs, but it typically assumes independence between sampling units. In the context of food insecurity, this assumption is often unrealistic, as food access is linked to socio-economic conditions and social relations that are spatially structured. To address this, we propose a spatial item factor analysis model that captures spatial dependence, allowing us to predict latent factors at unsampled locations and identify food insecurity hotspots. We develop a Bayesian sampling scheme for inference and illustrate the explanatory strength of our model by analysing household perceptions of food insecurity in Ipixuna, a remote river-dependent urban centre in the Brazilian Amazon. Our approach is implemented in the R package spifa, with further details provided in the Supplementary Material. This spatial extension offers policymakers and researchers a stronger tool for understanding and addressing food insecurity to locate and prioritise areas in greatest need. Our proposed methodology can be applied more widely to other spatially structured latent constructs.


[107] 1910.02170

Donor's Deferral and Return Behavior: Partial Identification from a Regression Discontinuity Design with Manipulation

Volunteer labor can temporarily yield lower benefits to charities than its costs. In such instances, organizations may wish to defer volunteer donations to a later date. Exploiting a discontinuity in blood donations' eligibility criteria, we show that deferring donors reduces their future volunteerism. In our setting, medical staff manipulates donors' reported hemoglobin levels over a threshold to facilitate donation. Such manipulation invalidates standard regression discontinuity design. To circumvent this issue, we propose a procedure for obtaining partial identification bounds where manipulation is present. Our procedure is applicable in various regression discontinuity settings where the running variable is manipulated.


[108] 2201.09350

Elementary proofs of several results on false discovery rate

We collect self-contained elementary proofs of four results in the literature on the false discovery rate of the Benjamini-Hochberg (BH) procedure for independent or positive-regression dependent p-values, the Benjamini-Yekutieli correction for arbitrarily dependent p-values, and the e-BH procedure for arbitrarily dependent e-values. As a corollary, the above proofs also lead to some inequalities of Simes and Hommel.
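
For reference, the BH step-up procedure whose FDR control is proved here takes only a few lines; a minimal sketch:

    import numpy as np

    def benjamini_hochberg(pvals: np.ndarray, alpha: float = 0.05) -> np.ndarray:
        # reject the k smallest p-values, where k is the largest index whose
        # sorted p-value falls below its step-up threshold alpha * k / m
        m = len(pvals)
        order = np.argsort(pvals)
        below = pvals[order] <= alpha * np.arange(1, m + 1) / m
        reject = np.zeros(m, dtype=bool)
        if below.any():
            k = np.max(np.nonzero(below))
            reject[order[: k + 1]] = True
        return reject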


[109] 2206.05974

Deep Neural Network Based Accelerated Failure Time Models using Rank Loss

An accelerated failure time (AFT) model assumes a log-linear relationship between failure times and a set of covariates. In contrast to other popular survival models that work on hazard functions, the covariates act directly on the failure times, which makes the interpretation intuitive. The semiparametric AFT model that does not specify the error distribution is flexible and robust to departures from the distributional assumption. Owing to these desirable features, this class of models has been considered a promising alternative to the popular Cox model in the analysis of censored failure time data. However, these AFT models typically assume a linear predictor for the mean, and little research has addressed the nonlinearity of predictors when modeling the mean. Deep neural networks (DNNs) have received considerable attention over the past decades and have achieved remarkable success in a variety of fields. DNNs have a number of notable advantages and have been shown to be particularly useful in addressing nonlinearity. Taking advantage of this, we propose to apply DNNs in fitting AFT models using a Gehan-type loss combined with a sub-sampling technique. Finite-sample properties of the proposed DNN and rank-based AFT model (DeepR-AFT) are investigated via an extensive simulation study. DeepR-AFT shows superior performance over its parametric and semiparametric counterparts when the predictor is nonlinear. For linear predictors, DeepR-AFT performs better when the dimension of the covariates is large. The proposed DeepR-AFT is illustrated using two real datasets, which demonstrate its superiority.
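
A minimal sketch of a Gehan-type rank loss of the kind such a network can be trained with, assuming log failure/censoring times `log_t`, event indicators `delta` (1 if the event was observed), and network predictions `pred`; the smoothing and sub-sampling details of DeepR-AFT are omitted.

    import torch

    def gehan_loss(pred, log_t, delta):
        # residuals e_i = log T_i - f(x_i); for every observed event i,
        # penalise pairs (i, j) whose residual e_j exceeds e_i
        e = log_t - pred
        diff = e.unsqueeze(0) - e.unsqueeze(1)        # diff[i, j] = e_j - e_i
        return (delta.unsqueeze(1) * torch.clamp(diff, min=0)).mean()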


[110] 2210.11611

3D Bivariate Spatial Modelling of Argo Ocean Temperature and Salinity Profiles

Variables contained within the global oceans can detect and reveal the effects of the warming climate, as the oceans absorb huge amounts of solar energy. Hence, information regarding the joint spatial distribution of ocean variables is critical for understanding the climate. In this paper, we investigate the spatial dependence structure between ocean temperature and salinity using data harvested from the Argo program and construct a bivariate spatial model for the data that cover the surface to the ocean's interior. We develop a flexible class of multivariate nonstationary covariance models defined in 3-dimensional (3D) space (longitude $\times$ latitude $\times$ depth) that allow the variances and correlation to vary with ocean depth. These models describe the joint spatial distribution of the two variables while incorporating the underlying vertical structure of the ocean. We apply this framework to temperature and salinity data from Argo floats. To manage the computational challenges posed by the large volume of the Argo data, we apply the Vecchia approximation to the likelihood functions. We demonstrate that the proposed bivariate covariance is able to describe the complex vertical cross-covariance structure between the original processes as well as their first- and second-order differentiations, while existing bivariate models, including the bivariate Matérn, poorly fit the empirical cross-covariance structure.


[111] 2212.10406

GEEPERs: Principal Stratification using Principal Scores and Stacked Estimating Equations

Principal stratification is a framework for making sense of causal effects conditioned on variables that themselves may have been affected by treatment. For instance, one component of an educational computer application is the availability of ``bottom-out'' hints that provide the answer. In a recent experiment evaluating such an application against alternative programs without bottom-out hints, researchers may be interested in estimating separate average treatment effects for students who, if given the opportunity, would request bottom-out hints frequently, and for students who would not. Most principal stratification estimators rely on strong structural or modeling assumptions, and many require advanced statistical training to fit and check. In this paper, we introduce a new M-estimation principal effect estimator for one-way noncompliance based on a binary indicator. Estimates may be computed using conventional regressions (though the standard errors require a specialized sandwich formula) and do not rely on distributional assumptions. We present a simulation study that demonstrates the novel method's greater robustness compared to popular alternatives and illustrate the method through two real-data analyses.


[112] 2303.07152

Score Attack: A Lower Bound Technique for Optimal Differentially Private Learning

Achieving optimal statistical performance while ensuring the privacy of personal data is a challenging yet crucial objective in modern data analysis. However, characterizing the optimality, particularly the minimax lower bound, under privacy constraints is technically difficult. To address this issue, we propose a novel approach called the score attack, which provides a lower bound on the differential-privacy-constrained minimax risk of parameter estimation. The score attack method is based on the tracing attack concept in differential privacy and can be applied to any statistical model with a well-defined score statistic. It can optimally lower bound the minimax risk of estimating unknown model parameters, up to a logarithmic factor, while ensuring differential privacy for a range of statistical problems. We demonstrate the effectiveness and optimality of this general method in various examples, such as the generalized linear model in both classical and high-dimensional sparse settings, the Bradley-Terry-Luce model for pairwise comparisons, and non-parametric regression over the Sobolev class.


[113] 2309.16861

Demystifying Spatial Confounding

Spatial confounding is a fundamental issue in spatial regression models which arises because spatial random effects, included to approximate unmeasured spatial variation, are typically not independent of covariates in the model. This can lead to significant bias in covariate effect estimates. The problem is complex and has been the topic of extensive research with sometimes puzzling and seemingly contradictory results. Here, we develop a broad theoretical framework that brings mathematical clarity to the mechanisms of spatial confounding, providing explicit analytical expressions for the resulting bias. We see that the problem is directly linked to spatial smoothing and identify exactly how the size and occurrence of bias relate to the features of the spatial model as well as the underlying confounding scenario. Using our results, we can explain subtle and counter-intuitive behaviours. Finally, we propose a general approach for dealing with spatial confounding bias in practice, applicable for any spatial model specification. When a covariate has non-spatial information, we show that a general form of the so-called spatial+ method can be used to eliminate bias. When no such information is present, the situation is more challenging but, under the assumption of unconfounded high frequencies, we develop a procedure in which multiple capped versions of spatial+ are applied to assess the bias in this case. We illustrate our approach with an application to air temperature in Germany.


[114] 2310.14890

Bounding the Worst-class Error: A Boosting Approach

This paper tackles the problem of the worst-class error rate, instead of the standard error rate averaged over all classes. For example, a three-class classification task with class-wise error rates of 10%, 10%, and 40% has a worst-class error rate of 40%, whereas the average is 20% under the class-balanced condition. The worst-class error is important in many applications. For example, in a medical image classification task, it would not be acceptable for the malignant tumor class to have a 40% error rate while the benign and healthy classes have 10% error rates. To avoid overfitting in worst-class error minimization using Deep Neural Networks (DNNs), we design a problem formulation for bounding the worst-class error instead of achieving zero worst-class error. Moreover, to correctly bound the worst-class error, we propose a boosting approach which ensembles DNNs. We derive training and generalization bounds on the worst-class error. Experimental results show that the algorithm lowers worst-class test error rates while avoiding overfitting to the training set.


[115] 2310.17165

Price Experimentation and Interference

In this paper, we examine the biases that arise when firms run A/B tests on continuous parameters to estimate global treatment effects on performance metrics of interest; we particularly focus on price experiments to measure the price impact on quantity demanded and on profit. In canonical A/B experimental estimators, biases emerge due to interference between market participants. We employ structural modeling and differential calculus to derive intuitive characterizations of these biases. We then specialize our general model to the standard revenue-management pricing problem. This setting highlights a fundamental risk innate to A/B pricing experiments: that the canonical estimator for the expected change in profits, counterintuitively, can have the \emph{wrong} sign in expectation. In other words, following the guidance of canonical estimators may lead firms to move prices (or fees) in the wrong direction, inadvertently decreasing profits. We introduce a novel debiasing technique for these canonical experiments, requiring only that firms equally split units between treatment and control. We apply these results to a two-sided market model, and demonstrate how the "change of sign" regime depends on market factors such as the supply/demand imbalance and the price markup. We conclude by calibrating our two-sided market model to published empirical estimates from Airbnb marketplaces, demonstrating that estimators with the wrong sign are not a knife-edge issue, and that they may be prevalent enough to be of concern to practitioners.


[116] 2408.02060

Winners with Confidence: Discrete Argmin Inference with an Application to Model Selection

We study the problem of finding the index of the minimum value of a vector from noisy observations. This problem is relevant in population/policy comparison, discrete maximum likelihood, and model selection. We develop an asymptotically normal test statistic, even in high-dimensional settings and with potentially many ties in the population mean vector, by integrating concepts and tools from cross-validation and differential privacy. The key technical ingredient is a central limit theorem for globally dependent data. We also propose practical ways to select the tuning parameter that adapts to the signal landscape. Numerical experiments and data examples demonstrate the ability of the proposed method to achieve a favorable bias-variance trade-off in practical scenarios.


[117] 2408.10650

Principal component analysis for max-stable distributions

Principal component analysis (PCA) is one of the most popular dimension reduction techniques in statistics and is especially powerful when a multivariate distribution is concentrated near a lower-dimensional subspace. Multivariate extreme value distributions have turned out to provide challenges for the application of PCA since their constrained support impedes the detection of lower-dimensional structures and heavy tails can imply that second moments do not exist, thereby preventing the application of classical variance-based techniques for PCA. We adapt PCA to max-stable distributions using a regression setting and employ max-linear maps to project the random vector to a lower-dimensional space while preserving max-stability. We also provide a characterization of those distributions which allow for a perfect reconstruction from the lower-dimensional representation. Finally, we demonstrate how an optimal projection matrix can be consistently estimated and show viability in practice with a simulation study and an application to a benchmark dataset.


[118] 2409.18209

A Unified View on Learning Unnormalized Distributions via Noise-Contrastive Estimation

This paper studies a family of estimators based on noise-contrastive estimation (NCE) for learning unnormalized distributions. The main contribution of this work is to provide a unified perspective on various methods for learning unnormalized distributions, which have been independently proposed and studied in separate research communities, through the lens of NCE. This unified view offers new insights into existing estimators. Specifically, for exponential families, we establish the finite-sample convergence rates of the proposed estimators under a set of regularity assumptions, most of which are new.
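
As a reference point, the classical binary NCE objective for an unnormalised model can be written as below; `log_unnorm` is the model's unnormalised log-density (with any learnable log-partition term folded in) and `log_noise` the known noise log-density, both placeholder callables.

    import math
    import torch
    import torch.nn.functional as F

    def nce_loss(log_unnorm, log_noise, x_data, x_noise, nu=1.0):
        # classify data vs. noise via G(x) = log p_model(x) - log p_noise(x) - log(nu),
        # where nu is the ratio of noise samples to data samples
        g_data = log_unnorm(x_data) - log_noise(x_data) - math.log(nu)
        g_noise = log_unnorm(x_noise) - log_noise(x_noise) - math.log(nu)
        # -log sigmoid(g) on data, -log(1 - sigmoid(g)) on noise
        return F.softplus(-g_data).mean() + nu * F.softplus(g_noise).mean()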


[119] 2409.19241

Estimating Interpretable Heterogeneous Treatment Effect with Causal Subgroup Discovery in Survival Outcomes

Estimating heterogeneous treatment effects (HTE) for survival outcomes has gained increasing attention, as it captures the variation in treatment efficacy across patients or subgroups in delaying disease progression. However, most existing methods focus on post-hoc subgroup identification rather than simultaneously estimating HTE and selecting relevant subgroups. In this paper, we propose an interpretable HTE estimation framework that integrates three meta-learners which simultaneously estimate the conditional average treatment effect (CATE) for survival outcomes and identify predictive subgroups. We evaluated the performance of our method through comprehensive simulation studies across various randomized clinical trial (RCT) settings. Additionally, we demonstrated its application in a large RCT for age-related macular degeneration (AMD), a polygenic progressive eye disease, to estimate the HTE of an antioxidant and mineral supplement on time-to-AMD progression and to identify genetics-based subgroups with enhanced treatment effects. Our method offers a direct interpretation of the estimated HTE and provides evidence to support precision healthcare.


[120] 2412.06528

Highest Posterior Density Intervals of Unimodal Distributions As Analogues to Profile Likelihood Ratio Confidence Intervals

In Bayesian statistics, the highest posterior density (HPD) interval is often used to describe properties of a posterior distribution. As a method for estimating confidence intervals (CIs), the HPD has two main desirable properties. Firstly, it is the shortest interval to have a specified coverage probability. Secondly, every point inside the HPD interval has a density greater than every point outside the interval. However, the HPD interval is sometimes criticized for not being transformation invariant. We make the case that under certain conditions the HPD interval is a natural analog to the frequentist profile likelihood ratio confidence interval (LRCI). Our main result is to derive a proof showing that under specified conditions, the HPD interval with respect to the density mode is transformation invariant for monotonic functions in a manner similar to a profile LRCI.
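
When the posterior is represented by Monte Carlo draws and is unimodal, the HPD interval is simply the shortest interval containing the target probability mass; a minimal sketch:

    import numpy as np

    def hpd_interval(samples: np.ndarray, prob: float = 0.95):
        # shortest interval covering `prob` of the sorted draws
        # (valid for unimodal posteriors)
        x = np.sort(samples)
        n = len(x)
        k = int(np.ceil(prob * n))              # draws the interval must contain
        widths = x[k - 1:] - x[: n - k + 1]     # widths of all candidate intervals
        i = int(np.argmin(widths))
        return x[i], x[i + k - 1]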


[121] 2412.15243

Asymptotic efficiency of inferential models and a possibilistic Bernstein--von Mises theorem

The inferential model (IM) framework offers an alternative to the classical probabilistic (e.g., Bayesian and fiducial) uncertainty quantification in statistical inference. A key distinction is that classical uncertainty quantification takes the form of precise probabilities and offers only limited large-sample validity guarantees, whereas the IM's uncertainty quantification is imprecise in such a way that exact, finite-sample valid inference is possible. But is the IM's imprecision and finite-sample validity compatible with statistical efficiency? That is, can IMs be both finite-sample valid and asymptotically efficient? This paper gives an affirmative answer to this question via a new possibilistic Bernstein--von Mises theorem that parallels a fundamental Bayesian result. Among other things, our result shows that the IM solution is efficient in the sense that, asymptotically, its credal set is the smallest that contains the Gaussian distribution with variance equal to the Cramer--Rao lower bound. Moreover, a corresponding version of this new Bernstein--von Mises theorem is presented for problems that involve the elimination of nuisance parameters, which settles an open question concerning the relative efficiency of profiling-based versus extension-based marginalization strategies.


[122] 2501.06133

Testing conditional independence under isotonicity

We propose a test of the conditional independence of random variables $X$ and $Y$ given $Z$ under the additional assumption that $X$ is stochastically increasing in $Z$. The well-documented hardness of testing conditional independence means that some further restriction on the null hypothesis parameter space is required, but in contrast to existing approaches based on parametric models, smoothness assumptions, or approximations to the conditional distribution of $X$ given $Z$ and/or $Y$ given $Z$, our test requires only the stochastic monotonicity assumption. Our procedure, called PairSwap-ICI, determines the significance of a statistic by randomly swapping the $X$ values within ordered pairs of $Z$ values. The matched pairs and the test statistic may depend on both $Y$ and $Z$, providing the analyst with significant flexibility in constructing a powerful test. Our test offers finite-sample Type I error control, and provably achieves high power against a large class of alternatives that are not too close to the null. We validate our theoretical findings through a series of simulations and real data experiments.
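
A simplified sketch of the pair-swap idea, assuming observations sorted by Z are paired consecutively and a user-supplied statistic T(x, y, z) that is large under dependence; the authors' actual pairing and statistic choices can be more elaborate.

    import numpy as np

    def pairswap_test(x, y, z, statistic, n_swaps=999, rng=np.random.default_rng(0)):
        order = np.argsort(z)
        x, y, z = x[order], y[order], z[order]
        t_obs = statistic(x, y, z)
        exceed = 0
        for _ in range(n_swaps):
            xs = x.copy()
            # swap the X values within each consecutive Z-ordered pair w.p. 1/2
            for i in range(0, len(x) - 1, 2):
                if rng.uniform() < 0.5:
                    xs[i], xs[i + 1] = xs[i + 1], xs[i]
            exceed += statistic(xs, y, z) >= t_obs
        pval = (1 + exceed) / (n_swaps + 1)
        return t_obs, pval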


[123] 2501.13535

LITE: Efficiently Estimating Gaussian Probability of Maximality

We consider the problem of computing the probability of maximality (PoM) of a Gaussian random vector, i.e., the probability for each dimension to be maximal. This is a key challenge in applications ranging from Bayesian optimization to reinforcement learning, where the PoM not only helps with finding an optimal action, but yields a fine-grained analysis of the action domain, crucial in tasks such as drug discovery. Existing techniques are costly, scaling polynomially in computation and memory with the vector size. We introduce LITE, the first approach for estimating Gaussian PoM with almost-linear time and memory complexity. LITE achieves SOTA accuracy on a number of tasks, while being in practice several orders of magnitude faster than the baselines. This also translates to a better performance on downstream tasks such as entropy estimation and optimal control of bandits. Theoretically, we cast LITE as entropy-regularized UCB and connect it to prior PoM estimators.


[124] 2501.14142

Gaussian Rank Verification

Statistical experiments often seek to identify random variables with the largest population means. This inferential task, known as rank verification, has been well-studied on Gaussian data with equal variances. This work provides the first treatment of the unequal variances case, utilizing ideas from the selective inference literature. We design a hypothesis test that verifies the rank of the largest observed value without losing power due to multiple testing corrections. This test is subsequently extended for two procedures: Identifying some number of correctly-ordered Gaussian means, and validating the top-K set. The testing procedures are validated on NHANES survey data.


[125] 2501.17463

Nonparametric Smoothing of Directional and Axial Data

We discuss generalized linear models for directional data where the conditional distribution of the response is a von Mises-Fisher distribution in arbitrary dimension or a Bingham distribution on the unit circle. To do this properly, we parametrize von Mises-Fisher distributions by Euclidean parameters and investigate computational aspects of this parametrization. Then we modify this approach for local polynomial regression as a means of nonparametric smoothing of directional data. The methods are illustrated with simulated data and a data set from planetary sciences involving covariate vectors on a sphere with axial response.


[126] 2503.21968

GLM Inference with AI-Generated Synthetic Data Using Misspecified Linear Regression

Data privacy concerns have led to the growing interest in synthetic data, which strives to preserve the statistical properties of the original dataset while ensuring privacy by excluding real records. Recent advances in deep neural networks and generative artificial intelligence have facilitated the generation of synthetic data. However, although prediction with synthetic data has been the focus of recent research, statistical inference with synthetic data remains underdeveloped. In particular, in many settings, including generalized linear models (GLMs), the estimator obtained using synthetic data converges much more slowly than in standard settings. To address these limitations, we propose a method that leverages summary statistics from the original data. Using a misspecified linear regression estimator, we then develop inference that greatly improves the convergence rate and restores the standard root-$n$ behavior for GLMs.


[127] 2503.24004

Multivariate Species Sampling Models

Species sampling processes have long served as the fundamental framework for modeling random discrete distributions and exchangeable sequences. However, data arising from distinct but related sources require a broader notion of probabilistic invariance, making partial exchangeability a natural choice. Countless models for partially exchangeable data, collectively known as dependent nonparametric priors, have been proposed. These include hierarchical, nested and additive processes, widely used in statistics and machine learning. Still, a unifying framework is lacking, and key questions about their underlying learning mechanisms remain unanswered. We fill this gap by introducing multivariate species sampling models, a new general class of nonparametric priors that encompasses most existing finite- and infinite-dimensional dependent processes. They are characterized by the induced partially exchangeable partition probability function encoding their multivariate clustering structure. We establish their core distributional properties and analyze their dependence structure, demonstrating that borrowing of information across groups is entirely determined by shared ties. This provides new insights into the underlying learning mechanisms, offering, for instance, a principled rationale for the previously unexplained correlation structure observed in existing models. Beyond providing a cohesive theoretical foundation, our approach serves as a constructive tool for developing new models and opens novel research directions to capture richer dependence structures beyond the framework of multivariate species sampling processes.


[128] 2504.07426

Conditional Data Synthesis Augmentation

Reliable machine learning and statistical analysis rely on diverse, well-distributed training data. However, real-world datasets are often limited in size and exhibit underrepresentation across key subpopulations, leading to biased predictions and reduced performance, particularly in supervised tasks such as classification. To address these challenges, we propose Conditional Data Synthesis Augmentation (CoDSA), a novel framework that leverages generative models, such as diffusion models, to synthesize high-fidelity data for improving model performance across multimodal domains including tabular, textual, and image data. CoDSA generates synthetic samples that faithfully capture the conditional distributions of the original data, with a focus on under-sampled or high-interest regions. Through transfer learning, CoDSA fine-tunes pre-trained generative models to enhance the realism of synthetic data and increase sample density in sparse areas. This process preserves inter-modal relationships, mitigates data imbalance, improves domain adaptation, and boosts generalization. We also introduce a theoretical framework that quantifies the statistical accuracy improvements enabled by CoDSA as a function of synthetic sample volume and targeted region allocation, providing formal guarantees of its effectiveness. Extensive experiments demonstrate that CoDSA consistently outperforms non-adaptive augmentation strategies and state-of-the-art baselines in both supervised and unsupervised settings.


[129] 2504.11775

Discrimination-free Insurance Pricing with Privatized Sensitive Attributes

Fairness has emerged as a critical consideration in the landscape of machine learning algorithms, particularly as AI continues to transform decision-making across societal domains. To ensure that these algorithms are free from bias and do not discriminate against individuals based on sensitive attributes such as gender and race, the field of algorithmic bias has introduced various fairness concepts, along with methodologies to achieve these notions in different contexts. Despite the rapid advancement, not all sectors have embraced these fairness principles to the same extent. One specific sector that merits attention in this regard is insurance. Within the realm of insurance pricing, fairness is defined through a distinct and specialized framework. Consequently, achieving fairness according to established notions does not automatically ensure fair pricing in insurance. In particular, regulators are increasingly emphasizing transparency in pricing algorithms and imposing constraints on insurance companies on the collection and utilization of sensitive consumer attributes. These factors present additional challenges in the implementation of fairness in pricing algorithms. To address these complexities and comply with regulatory demands, we propose an efficient method for constructing fair models that are tailored to the insurance domain, using only privatized sensitive attributes. Notably, our approach ensures statistical guarantees, does not require direct access to sensitive attributes, and adapts to varying transparency requirements, addressing regulatory demands while ensuring fairness in insurance pricing.


[130] 2504.19450

Signal detection from spiked noise via asymmetrization

The signal plus noise model $H=S+Y$ is a fundamental model in signal detection when a low rank signal $S$ is polluted by noise $Y$. In the high-dimensional setting, one often uses the leading singular values and corresponding singular vectors of $H$ to conduct statistical inference on the signal $S$. In particular, when $Y$ consists of iid random entries, the singular values of $S$ can be estimated from those of $H$ as long as the signal $S$ is strong enough. However, when the $Y$ entries are heteroscedastic or heavy-tailed, this standard approach may fail. In this work, we consider a situation that can easily arise with heteroscedastic or heavy-tailed noise but is particularly difficult to address using the singular value approach, namely, when the noise $Y$ itself may create spiked singular values. It has been a recurring question how to distinguish the signal $S$ from the spikes in $Y$, as this seems impossible by examining the leading singular values of $H$. Inspired by the work \cite{CCF21}, we turn to study the eigenvalues of an asymmetrized model when two samples $H_1=S+Y_1$ and $H_2=S+Y_2$ are available. We show that by looking into the leading eigenvalues (in magnitude) of the asymmetrized model $H_1H_2^*$, one can easily detect $S$. We will primarily discuss the heteroscedastic case and then discuss the extension to the heavy-tailed case. As a byproduct, we also derive the fundamental result regarding outliers of non-Hermitian random matrices in \cite{Tao} under a minimal second moment condition.


[131] 2505.02197

Central limit theorems under non-stationarity via relative weak convergence

Statistical inference for non-stationary data is hindered by the failure of classical central limit theorems (CLTs), not least because there is no fixed Gaussian limit to converge to. To resolve this, we introduce relative weak convergence, an extension of weak convergence that compares a statistic or process to a sequence of evolving processes. Relative weak convergence retains the essential consequences of classical weak convergence and coincides with it under stationarity. Crucially, it applies in general non-stationary settings where classical weak convergence fails. We establish concrete relative CLTs for random vectors and empirical processes, along with sequential, weighted, and bootstrap variants, that parallel the state-of-the-art in stationary settings. Our framework and results offer simple, plug-in replacements for classical CLTs whenever stationarity is untenable, as illustrated by applications in nonparametric trend estimation and hypothesis testing.


[132] 2505.07729

Nonparametric Instrumental Variable Inference with Many Weak Instruments

We study inference on linear functionals in the nonparametric instrumental variable (NPIV) problem with a discretely-valued instrument under a many-weak-instruments asymptotic regime, where the number of instrument values grows with the sample size. A key motivating example is estimating long-term causal effects in a new experiment with only short-term outcomes, using past experiments to instrument for the effect of short- on long-term outcomes. Here, the assignment to a past experiment serves as the instrument: we have many past experiments but only a limited number of units in each. Since the structural function is nonparametric but constrained by only finitely many moment restrictions, point identification typically fails. To address this, we consider linear functionals of the minimum-norm solution to the moment restrictions, which is always well-defined. As the number of instrument levels grows, these functionals define an approximating sequence to a target functional, replacing point identification with a weaker asymptotic notion suited to discrete instruments. Extending the Jackknife Instrumental Variable Estimator (JIVE) beyond the classical parametric setting, we propose npJIVE, a nonparametric estimator for solutions to linear inverse problems with many weak instruments. We construct automatic debiased machine learning estimators for linear functionals of both the structural function and its minimum-norm projection, and establish their efficiency in the many-weak-instruments regime. To do so, we develop a general semiparametric efficiency theory for regular estimators under weak identification and many-weak-instrument asymptotics.


[133] 2505.09075

Risk Bounds For Distributional Regression

This work examines risk bounds for nonparametric distributional regression estimators. For convex-constrained distributional regression, general upper bounds are established for the continuous ranked probability score (CRPS) and the worst-case mean squared error (MSE) across the domain. These theoretical results are applied to isotonic and trend filtering distributional regression, yielding convergence rates consistent with those for mean estimation. Furthermore, a general upper bound is derived for distributional regression under non-convex constraints, with a specific application to neural network-based estimators. Comprehensive experiments on both simulated and real data validate the theoretical contributions, demonstrating their practical effectiveness.


[134] 2506.07816

Accelerating Constrained Sampling: A Large Deviations Approach

The problem of sampling a target probability distribution on a constrained domain arises in many applications including machine learning. For constrained sampling, various Langevin algorithms such as projected Langevin Monte Carlo (PLMC) based on the discretization of reflected Langevin dynamics (RLD) and more generally skew-reflected non-reversible Langevin Monte Carlo (SRNLMC) based on the discretization of skew-reflected non-reversible Langevin dynamics (SRNLD) have been proposed and studied in the literature. This work focuses on the long-time behavior of SRNLD, where a skew-symmetric matrix is added to RLD. Although acceleration for SRNLD has been studied, it is not clear how one should design the skew-symmetric matrix in the dynamics to achieve good performance in practice. We establish a large deviation principle (LDP) for the empirical measure of SRNLD when the skew-symmetric matrix is chosen such that its product with the inward unit normal vector field on the boundary is zero. By explicitly characterizing the rate functions, we show that this choice of the skew-symmetric matrix accelerates the convergence to the target distribution compared to RLD and reduces the asymptotic variance. Numerical experiments for SRNLMC based on the proposed skew-symmetric matrix show superior performance, validating the theoretical findings from large deviations theory.


[135] 2506.17229

Coupled Entropy: A Goldilocks Generalization for Nonextensive Statistical Mechanics

Evidence is presented that the accuracy of the Nonextensive Statistical Mechanics framework is improved using the coupled entropy, which carefully establishes the physical measures of complex systems. While Nonextensive Statistical Mechanics (NSM) has developed into a powerful toolset, questions have persisted as to how to evaluate whether its proposed solutions properly characterize the uncertainty of heavy-tailed distributions. The entropy of the generalized Pareto distribution (GPD) is $1+\kappa+\ln\sigma$, where $\kappa$ is the shape or nonlinear coupling and $\sigma$ is the scale. A generalized entropy should retain the uncertainty due to the scale, while minimizing the dependence on the nonlinear coupling. The Tsallis entropy of the GPD instead subtracts a function of the inverse-scale and converges to one as $\kappa\rightarrow\infty$. Colloquially, the Tsallis entropy is too cold. The normalized Tsallis entropy (NTE) rectifies the positive dependence on the scale but introduces a nonlinear term multiplying the scale and the coupling, making it too hot. The coupled entropy measures the uncertainty of the GPD to be $1+\ln_{\frac{\kappa}{1+\kappa}}\sigma=1+\frac{1+\kappa}{\kappa}(\sigma^{\frac{\kappa}{1+\kappa}}-1)$, which converges to $\sigma$ as $\kappa\rightarrow\infty$. One could say that the coupled entropy allows scientists, engineers, and analysts to eat their porridge, confident that its measure of uncertainty reflects the mathematical physics of the scale of non-exponential distributions while minimizing the dependence on the shape or nonlinear coupling. The training of the coupled variational autoencoder is an example of the unique ability of the coupled entropy to improve the performance of complex systems.
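
The closed forms quoted above are easy to check numerically; the short Python sketch below (function names are ours) evaluates the stated GPD entropy and the coupled entropy of the GPD and illustrates the limiting behaviour as $\kappa$ grows.

import numpy as np

def gpd_entropy(kappa, sigma):
    # Differential entropy of the generalized Pareto distribution as quoted: 1 + kappa + ln(sigma).
    return 1.0 + kappa + np.log(sigma)

def coupled_entropy_gpd(kappa, sigma):
    # Coupled entropy of the GPD as stated in the abstract:
    # 1 + ((1 + kappa) / kappa) * (sigma**(kappa / (1 + kappa)) - 1),
    # which tends to sigma as kappa -> infinity.
    return 1.0 + (1.0 + kappa) / kappa * (sigma ** (kappa / (1.0 + kappa)) - 1.0)

print(round(coupled_entropy_gpd(1e6, sigma=3.0), 3))  # approximately sigma = 3 for large kappa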


[136] 2506.21278

Hyperspherical Variational Autoencoders Using Efficient Spherical Cauchy Distribution

We propose a novel variational autoencoder (VAE) architecture that employs a spherical Cauchy (spCauchy) latent distribution. Unlike traditional Gaussian latent spaces or the widely used von Mises-Fisher (vMF) distribution, spCauchy provides a more natural hyperspherical representation of latent variables, better capturing directional data while maintaining flexibility. Its heavy-tailed nature prevents over-regularization, ensuring efficient latent space utilization while offering a more expressive representation. Additionally, spCauchy circumvents the numerical instabilities inherent to vMF, which arise from computing normalization constants involving Bessel functions. Instead, it enables a fully differentiable and efficient reparameterization trick via Möbius transformations, allowing for stable and scalable training. The KL divergence can be computed through a rapidly converging power series, eliminating concerns of underflow or overflow associated with evaluation of ratios of hypergeometric functions. These properties make spCauchy a compelling alternative for VAEs, offering both theoretical advantages and practical efficiency in high-dimensional generative modeling.


[137] 2506.22925

Confidence sequences with informative, bounded-influence priors

Confidence sequences are collections of confidence regions that simultaneously cover the true parameter for every sample size at a prescribed confidence level. Tightening these sequences is of practical interest and can be achieved by incorporating prior information through the method of mixture martingales. However, confidence sequences built from informative priors are vulnerable to misspecification and may become vacuous when the prior is poorly chosen. We study this trade-off for Gaussian observations with known variance. By combining the method of mixtures with a global informative prior whose tails are polynomial or exponential and the extended Ville's inequality, we construct confidence sequences that are sharper than their non-informative counterparts whenever the prior is well specified, yet remain bounded under arbitrary misspecification. The theory is illustrated with several classical priors.


[138] 2507.04560

A Test for Jumps in Metric-Space Conditional Means

Standard methods for detecting discontinuities in conditional means are not applicable to outcomes that are complex, non-Euclidean objects like distributions, networks, or covariance matrices. This article develops a nonparametric test for jumps in conditional means when outcomes lie in a non-Euclidean metric space. Using local Fréchet regression, the method estimates a mean path on either side of a candidate cutoff. This extends existing $k$-sample tests to a non-parametric regression setting with metric-space valued outcomes. I establish the asymptotic distribution of the test and its consistency against contiguous alternatives. For this, I derive a central limit theorem for the local estimator of the conditional Fréchet variance and a consistent estimator of its asymptotic variance. Simulations confirm nominal size control and robust power in finite samples. Two empirical illustrations demonstrate the method's ability to reveal discontinuities missed by scalar-based tests. I find sharp changes in (i) work-from-home compositions at an income threshold for non-compete enforceability and (ii) national input-output networks following the loss of preferential U.S. trade access. These findings show the value of analyzing regression outcomes in their native metric spaces.


[139] 2507.08773

Total/dual correlation/coherence, redundancy/synergy, complexity, and O-information for real and complex valued multivariate data

Firstly, assuming Gaussianity, equations for the following information theory measures are presented: total correlation/coherence (TC), dual total correlation/coherence (DTC), O-information, TSE complexity, and redundancy-synergy index (RSI). Since these measures are functions of the covariance matrix $S$ and its inverse $S^{-1}$, the associated Wishart and inverse-Wishart distributions are of note. DTC is shown to be the Kullback-Leibler (KL) divergence for the inverse-Wishart pair $S^{-1}$ and its diagonal matrix $D=\mathrm{diag}(S^{-1})$, shedding light on its interpretation as a measure of "total partial correlation", $-\ln\det P$, with test hypothesis $H_0: P=I$, where $P$ is the standardized inverse covariance, i.e. $P=D^{-1/2}S^{-1}D^{-1/2}$. The second aim of this paper is to introduce a generalization of all these measures for structured groups of variables. For instance, consider three or more groups, each consisting of three or more variables, with predominant redundancy within each group, but with synergistic interactions between groups. O-information will miss the between-group synergy (since redundancy occurs more often in the system). In contrast, the structured O-information measure presented here will correctly report predominant synergy between groups. This is a relevant generalization towards structured multivariate information measures. A third aim is the presentation of a framework for quantifying the contribution of "connections" between variables to the system's TC, DTC, O-information, and TSE complexity. A fourth aim is to present a generalization of the redundancy-synergy index for quantifying the contribution of a group of variables to the system's redundancy-synergy balance. Finally, it is shown that the expressions derived here directly apply to data from several other elliptical distributions. All program codes, data files, and executables are available (this https URL).
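
Under Gaussianity the quantities above reduce to simple determinant formulas; the NumPy sketch below (function names are ours, and multiplicative conventions such as factors of 1/2 may differ from the paper) computes the total correlation and the "total partial correlation" $-\ln\det P$ from a covariance matrix.

import numpy as np

def gaussian_total_correlation(S):
    # Total correlation of a Gaussian with covariance S (in nats):
    # 0.5 * (sum_i ln S_ii - ln det S).
    return 0.5 * (np.sum(np.log(np.diag(S))) - np.linalg.slogdet(S)[1])

def total_partial_correlation(S):
    # -ln det P with P = D^{-1/2} S^{-1} D^{-1/2} and D = diag(S^{-1}),
    # as described in the abstract; scaling conventions may differ.
    S_inv = np.linalg.inv(S)
    d = np.diag(S_inv)
    P = S_inv / np.sqrt(np.outer(d, d))
    return -np.linalg.slogdet(P)[1]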


[140] 2110.14842

Towards the ultimate limits of quantum channel discrimination and quantum communication

Distinguishability is fundamental to information theory and extends naturally to quantum systems. While quantum state discrimination is well understood, quantum channel discrimination remains challenging due to the dynamic nature of channels and the variety of discrimination strategies. This work advances the understanding of quantum channel discrimination and its fundamental limits. We develop new tools for quantum divergences, including sharper bounds on the quantum hypothesis testing relative entropy and additivity results for channel divergences. We establish a quantum Stein's lemma for memoryless channel discrimination, and link the strong converse property to the asymptotic equipartition property and continuity of divergences. Notably, we prove the equivalence of exponentially strong converse properties under coherent and sequential strategies. We further explore the interplay among operational regimes, discrimination strategies, and channel divergences, deriving exponents in various settings and contributing to a unified framework for channel discrimination. Finally, we recast quantum communication tasks as discrimination problems, uncovering deep connections between channel capacities, channel discrimination, and the mathematical structure of channel divergences. These results bridge two core areas of quantum information theory and offer new insights for future exploration.


[141] 2307.16463

Don't be so negative! Score-based Generative Modeling with Oracle-assisted Guidance

Score-based diffusion models are a powerful class of generative models, widely utilized across diverse domains. Despite significant advancements in large-scale tasks such as text-to-image generation, their application to constrained domains has received considerably less attention. This work addresses model learning in a setting where, in addition to the training dataset, there further exists side-information in the form of an oracle that can label samples as being outside the support of the true data generating distribution. Specifically we develop a new denoising diffusion probabilistic modeling methodology, Gen-neG, that leverages this additional side-information. Gen-neG builds on classifier guidance in diffusion models to guide the generation process towards the positive support region indicated by the oracle. We empirically establish the utility of Gen-neG in applications including collision avoidance in self-driving simulators and safety-guarded human motion generation.


[142] 2310.15512

Inference for Rank-Rank Regressions

The slope coefficient in a rank-rank regression is a popular measure of intergenerational mobility. In this article, we first show that commonly used inference methods for this slope parameter are invalid. Second, when the underlying distribution is not continuous, the OLS estimator and its asymptotic distribution may be highly sensitive to how ties in the ranks are handled. Motivated by these findings we develop a new asymptotic theory for the OLS estimator in a general class of rank-rank regression specifications without imposing any assumptions about the continuity of the underlying distribution. We then extend the asymptotic theory to other regressions involving ranks that have been used in empirical work. Finally, we apply our new inference methods to two empirical studies on intergenerational mobility, highlighting the practical implications of our theoretical findings.


[143] 2310.19603

Transformers Can Solve Non-Linear and Non-Markovian Filtering Problems in Continuous Time For Conditionally Gaussian Signals

The use of attention-based deep learning models in stochastic filtering, e.g. transformers and deep Kalman filters, has recently come into focus; however, the potential for these models to solve stochastic filtering problems remains largely unknown. The paper provides an affirmative answer to this open problem in the theoretical foundations of machine learning by showing that a class of continuous-time transformer models, called \textit{filterformers}, can approximately implement the conditional law of a broad class of non-Markovian and conditionally Gaussian signal processes given noisy continuous-time (possibly non-Gaussian) measurements. Our approximation guarantees hold uniformly over sufficiently regular compact subsets of continuous-time paths, where the worst-case 2-Wasserstein distance between the true optimal filter and our deep learning model quantifies the approximation error. Our construction relies on two new customizations of the standard attention mechanism: The first can losslessly adapt to the characteristics of a broad range of paths since we show that the attention mechanism implements bi-Lipschitz embeddings of sufficiently regular sets of paths into low-dimensional Euclidean spaces; thus, it incurs no ``dimension reduction error''. The latter attention mechanism is tailored to the geometry of Gaussian measures in the $2$-Wasserstein space. Our analysis relies on new stability estimates of robust optimal filters in the conditionally Gaussian setting.


[144] 2312.13195

Principal Component Copulas for Capital Modelling and Systemic Risk

We introduce a class of copulas that we call Principal Component Copulas (PCCs). This class combines the strong points of copula-based techniques with principal component analysis (PCA), which results in flexibility when modelling tail dependence along the most important directions in high-dimensional data. We obtain theoretical results for PCCs that are important for practical applications. In particular, we derive tractable expressions for the high-dimensional copula density, which can be represented in terms of characteristic functions. We also develop algorithms to perform Maximum Likelihood and Generalized Method of Moment estimation in high-dimensions and show very good performance in simulation experiments. Finally, we apply the copula to the international stock market to study systemic risk. We find that PCCs lead to excellent performance on measures of systemic risk due to their ability to distinguish between parallel and orthogonal movements in the global market, which have a different impact on systemic risk and diversification. As a result, we consider the PCC promising for capital models, which financial institutions use to protect themselves against systemic risk.


[145] 2404.08073

Spurious Stationarity and Hardness Results for Bregman Proximal-Type Algorithms

Bregman proximal-type algorithms (BPs), such as mirror descent, have become popular tools in machine learning and data science for exploiting problem structures through non-Euclidean geometries. In this paper, we show that BPs can get trapped near a class of non-stationary points, which we term spurious stationary points. Such stagnation can persist for any finite number of iterations if the gradient of the Bregman kernel is not Lipschitz continuous, even in convex problems. The root cause lies in a fundamental contrast in descent behavior between Euclidean and Bregman geometries: while Euclidean gradient descent ensures sufficient decrease near any non-stationary point, BPs may exhibit arbitrarily slow decrease around spurious stationary points. As a result, commonly used Bregman-based stationarity measures, such as the relative change in terms of Bregman divergence, can vanish near spurious stationary points. This may misleadingly suggest convergence, even when the iterates remain far from any true stationary point. Our analysis further reveals that spurious stationary points are not pathological, but rather occur generically in a broad class of nonconvex problems with polyhedral constraints. Taken together, our findings reveal a serious blind spot in Bregman-based optimization methods and call for new theoretical tools and algorithmic safeguards to ensure reliable convergence.


[146] 2405.10289

On the Uniform Convergence of Subdifferentials in Stochastic Optimization and Learning

We investigate the uniform convergence of subdifferential mappings from empirical risk to population risk in nonsmooth, nonconvex stochastic optimization. This question is key to understanding how empirical stationary points approximate population ones, yet characterizing this convergence remains a fundamental challenge due to the set-valued and nonsmooth nature of subdifferentials. This work establishes a general reduction principle: for weakly convex stochastic objectives, over any open subset of the domain, we show that a uniform bound on the convergence of selected subgradients (chosen arbitrarily from the subdifferential sets) yields a corresponding uniform bound on the Hausdorff distance between the subdifferentials. This deterministic result reduces the study of set-valued subdifferential convergence to simpler vector-valued subgradient convergence. We apply this reduction to derive sharp uniform convergence rates for subdifferential mappings in stochastic convex-composite optimization, without relying on differentiability assumptions on the population risk. These guarantees clarify the landscape of nonsmooth empirical objectives and offer new insight into the geometry of optimization problems arising in robust statistics and related applications.


[147] 2406.17709

MGA-Net: A Novel Mask-Guided Attention Neural Network for Precision Neonatal Brain Imaging

In this study, we introduce MGA-Net, a novel mask-guided attention neural network, which extends the U-net model for precision neonatal brain imaging. MGA-Net is designed to extract the brain from other structures and reconstruct high-quality brain images. The network employs a common encoder and two decoders: one for brain mask extraction and the other for brain region reconstruction. A key feature of MGA-Net is its high-level mask-guided attention module, which leverages features from the brain mask decoder to enhance image reconstruction. To enable the same encoder and decoder to process both MRI and ultrasound (US) images, MGA-Net integrates sinusoidal positional encoding. This encoding assigns distinct positional values to MRI and US images, allowing the model to effectively learn from both modalities. Consequently, features learned from a single modality can aid in learning a modality with less available data, such as US. We extensively validated the proposed MGA-Net on diverse and independent datasets from varied clinical settings and neonatal age groups. The metrics used for assessment included the DICE similarity coefficient, recall, and accuracy for image segmentation; structural similarity for image reconstruction; and root mean squared error for total brain volume estimation from 3D ultrasound images. Our results demonstrate that MGA-Net significantly outperforms traditional methods, offering superior performance in brain extraction and segmentation while achieving high precision in image reconstruction and volumetric analysis. Thus, MGA-Net represents a robust and effective preprocessing tool for MRI and 3D ultrasound images, marking a significant advance in neuroimaging that enhances both research and clinical diagnostics in the neonatal period and beyond. The code is available at this https URL


[148] 2407.02419

Quantum Curriculum Learning

Quantum machine learning (QML) requires significant quantum resources to address practical real-world problems. When the underlying quantum information exhibits hierarchical structures in the data, limitations persist in training complexity and generalization. Research should prioritize both the efficient design of quantum architectures and the development of learning strategies to optimize resource usage. We propose a framework called quantum curriculum learning (Q-CurL) for quantum data, where the curriculum introduces simpler tasks or data to the learning model before progressing to more challenging ones. Q-CurL exhibits robustness to noise and data limitations, which is particularly relevant for current and near-term noisy intermediate-scale quantum devices. We achieve this through a curriculum design based on quantum data density ratios and a dynamic learning schedule that prioritizes the most informative quantum data. Empirical evidence shows that Q-CurL significantly enhances training convergence and generalization for unitary learning and improves the robustness of quantum phase recognition tasks. Q-CurL is effective with physical learning applications in physics and quantum chemistry.


[149] 2407.07290

Causal Discovery-Driven Change Point Detection in Time Series

Change point detection in time series aims to identify moments when the probability distribution of time series changes. It is widely applied in many areas, such as human activity sensing and medical science. In the context of multivariate time series, this typically involves examining the joint distribution of multiple variables: If the distribution of any one variable changes, the entire time series undergoes a distribution shift. However, in practical applications, we may be interested only in certain components of the time series, exploring abrupt changes in their distributions while accounting for the presence of other components. Here, assuming an underlying structural causal model that governs the time-series data generation, we address this task by proposing a two-stage non-parametric algorithm that first learns parts of the causal structure through constraint-based discovery methods, and then employs conditional relative Pearson divergence estimation to identify the change points. The conditional relative Pearson divergence quantifies the distribution difference between consecutive segments in the time series, while the causal discovery method allows a focus on the causal mechanism, facilitating access to independent and identically distributed (IID) samples. Theoretically, the typical assumption of samples being IID in conventional change point detection methods can be relaxed based on the Causal Markov Condition. Through experiments on both synthetic and real-world datasets, we validate the correctness and utility of our approach.


[150] 2408.05700

Quantification of Interdependent Emotion Dynamics in Online Interactions

A growing share of human interactions now occurs online, where the expression and perception of emotions are often amplified and distorted. Yet, the interplay between different emotions and the extent to which they are driven by external stimuli or social feedback remains poorly understood. We calibrate a multivariate Hawkes self-exciting point process to model the temporal expression of six basic emotions in YouTube Live chats. This framework captures both temporal and cross-emotional dependencies while allowing us to disentangle the influence of video content (exogenous) from peer interactions (endogenous). We find that emotional expressions are up to four times more strongly driven by peer interaction than by video content. Positivity is more contagious, spreading three times more readily, whereas negativity is more memorable, lingering nearly twice as long. Moreover, we observe asymmetric cross-excitation, with negative emotions frequently triggering positive ones, a pattern consistent with trolling dynamics, but not the reverse. These findings highlight the central role of social interaction in shaping emotional dynamics online and the risks of emotional manipulation as human-chatbot interactions become increasingly realistic.
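
To make the model concrete, a multivariate Hawkes intensity with exponential kernels can be written as in the sketch below; this is a generic textbook parameterization with our own variable names, not the calibrated specification used in the paper, where the baseline rate plays the role of the exogenous (video-driven) component and the excitation matrix captures endogenous cross-excitation between emotions.

import numpy as np

def hawkes_intensity(t, mu, alpha, beta, event_times):
    # lambda_i(t) = mu_i + sum_j sum_{t_jk < t} alpha[i, j] * exp(-beta[i, j] * (t - t_jk)),
    # where mu is the (D,) baseline, alpha and beta are (D, D) arrays, and
    # event_times is a list of D arrays of past event times (one per emotion).
    lam = np.array(mu, dtype=float)
    for j, times_j in enumerate(event_times):
        past = np.asarray(times_j, dtype=float)
        past = past[past < t]
        if past.size:
            lam += alpha[:, j] * np.exp(-np.outer(beta[:, j], t - past)).sum(axis=1)
    return lam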


[151] 2409.09243

Unconditional Randomization Tests for Interference

Researchers are often interested in the existence and extent of interference between units when conducting causal inference or designing policy. However, testing for interference presents significant econometric challenges, particularly due to complex clustering patterns and dependencies that can invalidate standard methods. This paper introduces the pairwise imputation-based randomization test (PIRT), a general and robust framework for assessing the existence and extent of interference in experimental settings. PIRT employs unconditional randomization testing and pairwise comparisons, enabling straightforward implementation and ensuring finite-sample validity under minimal assumptions about network structure. The method's practical value is demonstrated through an application to a large-scale policing experiment in Bogota, Colombia (Blattman et al., 2021), which evaluates the effects of hotspot policing on crime at the street segment level. The analysis reveals that increased police patrolling in hotspots significantly displaces violent crime, but not property crime. Simulations calibrated to this context further underscore the power and robustness of PIRT.


[152] 2410.18076

Leveraging Skills from Unlabeled Prior Data for Efficient Online Exploration

Unsupervised pretraining has been transformative in many supervised domains. However, applying such ideas to reinforcement learning (RL) presents a unique challenge in that fine-tuning does not involve mimicking task-specific data, but rather exploring and locating the solution through iterative self-improvement. In this work, we study how unlabeled offline trajectory data can be leveraged to learn efficient exploration strategies. While prior data can be used to pretrain a set of low-level skills, or as additional off-policy data for online RL, it has been unclear how to combine these ideas effectively for online exploration. Our method SUPE (Skills from Unlabeled Prior data for Exploration) demonstrates that a careful combination of these ideas compounds their benefits. Our method first extracts low-level skills using a variational autoencoder (VAE), and then pseudo-labels unlabeled trajectories with optimistic rewards and high-level action labels, transforming prior data into high-level, task-relevant examples that encourage novelty-seeking behavior. Finally, SUPE uses these transformed examples as additional off-policy data for online RL to learn a high-level policy that composes pretrained low-level skills to explore efficiently. In our experiments, SUPE consistently outperforms prior strategies across a suite of 42 long-horizon, sparse-reward tasks. Code: this https URL.


[153] 2410.18164

TabDPT: Scaling Tabular Foundation Models on Real Data

Tabular data is one of the most ubiquitous sources of information worldwide, spanning a wide variety of domains. This inherent heterogeneity has slowed the development of Tabular Foundation Models (TFMs) capable of fast generalization to unseen datasets. In-Context Learning (ICL) has recently emerged as a promising solution for TFMs, enabling dynamic adaptation to new tasks without additional tuning. While many studies have attempted to re-purpose large language models for tabular ICL, they have had limited success, so recent works have focused on developing tabular-specific foundation models. In this work, we propose an approach to combine ICL-based retrieval with self supervised learning to train tabular foundation models. We also investigate the utility of real vs. synthetic data for model pre-training, and show that real data can contain useful signal not easily captured in synthetic training. Specifically, we show that incorporating real data during the pre-training phase can lead to significantly faster training and better downstream generalization to unseen data. Our resulting model, TabDPT, achieves top performance on both regression (CTR23) and classification (CC18) benchmarks. Importantly, we also demonstrate that with our pre-training procedure, scaling both model and data size leads to consistent performance improvements that follow power laws. This echoes scaling laws in LLMs and other foundation models, and suggests that Internet-scale TFMs can be achievable. We open-source our full pipeline: inference code including trained model weights can be found at this http URL, and the training code to reproduce experiments can be found at this http URL.


[154] 2410.19333

Most Swiss-system tournaments are unfair: Evidence from chess

The Swiss-system is an increasingly popular tournament format as it provides an attractive trade-off between the number of matches and ranking accuracy. However, little research considers the optimal design of Swiss-system tournaments. We contribute to this topic by empirically investigating the fairness of 52 Swiss-system chess competitions containing an odd (9 or 11) number of rounds, where about half of the players have an extra game with the white pieces. It is verified that they often enjoy a significant advantage: they are expected to score more points and have higher chances of performing above certain thresholds. A potential solution could be to organise Swiss-system tournaments with an even number of rounds and guarantee a balanced colour assignment for all players using a recently proposed pairing mechanism.


[155] 2412.00648

DFRot: Achieving Outlier-Free and Massive Activation-Free for Rotated LLMs with Refined Rotation

Rotating the activation and weight matrices to reduce the influence of outliers in large language models (LLMs) has recently attracted significant attention, particularly in the context of model quantization. Prior studies have shown that in low-precision quantization scenarios, such as 4-bit weights and 4-bit activations (W4A4), randomized Hadamard transforms can achieve significantly higher accuracy than randomized orthogonal transforms. Notably, the reason behind this phenomenon remains unknown. In this paper, we find that these transformations show substantial improvement in eliminating outliers for common tokens and achieve similar quantization error. The primary reason for the accuracy difference lies in the fact that randomized Hadamard transforms can slightly reduce the quantization error for tokens with massive activations while randomized orthogonal transforms increase the quantization error. Due to the extreme rarity of these tokens and their critical impact on model accuracy, we consider this a long-tail optimization problem, and therefore construct a simple yet effective method: a weighted loss function. Additionally, we propose an optimization strategy for the rotation matrix that involves alternating optimization of quantization parameters while employing orthogonal Procrustes transforms to refine the rotation matrix. This makes the distribution of the rotated activation values more conducive to quantization, especially for tokens with massive activations. Our method enhances rotated LLMs by making them dual free, i.e., Outlier-Free and Massive Activation-Free, and is accordingly dubbed DFRot. Extensive experiments demonstrate the effectiveness and efficiency of DFRot. By tuning the rotation matrix using just a single sample, DFRot achieves a perplexity improvement of 0.98 and 0.95 on W4A4KV4 and W4A4KV16, respectively, for LLaMA3-70B, a model known for its quantization challenges.


[156] 2501.00555

Prune 'n Predict: Optimizing LLM Decision-making with Conformal Prediction

Large language models (LLMs) are empowering decision-making in several applications, including tool or API usage and answering multiple-choice questions (MCQs). However, incorrect outputs pose significant risks in high-stakes domains like healthcare and finance. To quantify LLM uncertainty and thereby mitigate these risks, recent works employ conformal prediction (CP), a model- and distribution-agnostic framework that uses LLM outputs to generate a \emph{prediction set} containing the true answer with high probability. Leveraging CP, we propose \emph{conformal revision of questions} (CROQ), which revises the question by narrowing down the available choices to those in the prediction set and asking the LLM the revised question. We expect LLMs to be more accurate on revised questions with fewer choices. Furthermore, we expect CROQ to be effective when the prediction sets from CP are small. Commonly used logit scores often lead to large sets, diminishing CROQ's effectiveness. To overcome this, we propose CP-OPT, an optimization framework to learn scores that minimize set sizes while maintaining coverage. Our extensive experiments on MMLU, ToolAlpaca, and TruthfulQA datasets with multiple LLMs show that CROQ improves accuracy over the standard inference, with more pronounced gains when paired with CP-OPT.
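
The prediction-set construction that CROQ starts from can be sketched with a standard split-conformal recipe; the code below uses the simple score of one minus the softmax probability of the true answer (a placeholder choice, since the paper's CP-OPT learns scores that shrink the sets), and all names are ours. CROQ would then re-ask each question restricted to the returned choices.

import numpy as np

def conformal_answer_sets(cal_logits, cal_labels, test_logits, alpha=0.1):
    # Split-conformal prediction sets for multiple-choice questions.
    def softmax(s):
        e = np.exp(s - s.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)
    cal_p = softmax(np.asarray(cal_logits, dtype=float))
    scores = 1.0 - cal_p[np.arange(len(cal_labels)), cal_labels]  # nonconformity on calibration data
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    test_p = softmax(np.asarray(test_logits, dtype=float))
    return [np.where(1.0 - p <= q)[0] for p in test_p]  # choices kept for each test question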


[157] 2501.08411

BiDepth: A Bidirectional-Depth Neural Network for Spatio-Temporal Prediction

Accurate spatial-temporal (ST) prediction for dynamic systems, such as urban mobility and weather patterns, is crucial but hindered by complex ST correlations and the challenge of concurrently modeling long-term trends with short-term fluctuations. Existing methods often falter in these areas. This paper proposes the BiDepth Multimodal Neural Network (BDMNN), which integrates two key innovations: 1) a bidirectional depth modulation mechanism that dynamically adjusts network depth to comprehensively capture both long-term seasonality and immediate short-term events; and 2) a novel convolutional self-attention cell (CSAC). Critically, unlike many attention mechanisms that can lose spatial acuity, our CSAC is specifically designed to preserve crucial spatial relationships throughout the network, akin to standard convolutional layers, while simultaneously capturing temporal dependencies. Evaluated on real-world urban traffic and precipitation datasets, BDMNN demonstrates significant accuracy improvements, achieving a 12% Mean Squared Error (MSE) reduction in urban traffic prediction and a 15% improvement in precipitation forecasting over leading deep learning benchmarks like ConvLSTM, using comparable computational resources. These advancements offer robust ST forecasting for smart city management, disaster prevention, and resource optimization.


[158] 2501.12596

Adapting OpenAI's CLIP Model for Few-Shot Image Inspection in Manufacturing Quality Control: An Expository Case Study with Multiple Application Examples

This expository paper introduces a simplified approach to image-based quality inspection in manufacturing using OpenAI's CLIP (Contrastive Language-Image Pretraining) model adapted for few-shot learning. While CLIP has demonstrated impressive capabilities in general computer vision tasks, its direct application to manufacturing inspection presents challenges due to the domain gap between its training data and industrial applications. We evaluate CLIP's effectiveness through five case studies: metallic pan surface inspection, 3D printing extrusion profile analysis, stochastic textured surface evaluation, automotive assembly inspection, and microstructure image classification. Our results show that CLIP can achieve high classification accuracy with relatively small learning sets (50-100 examples per class) for single-component and texture-based applications. However, the performance degrades with complex multi-component scenes. We provide a practical implementation framework that enables quality engineers to quickly assess CLIP's suitability for their specific applications before pursuing more complex solutions. This work establishes CLIP-based few-shot learning as an effective baseline approach that balances implementation simplicity with robust performance, demonstrated in several manufacturing quality control applications.


[159] 2502.10826

Improved Offline Contextual Bandits with Second-Order Bounds: Betting and Freezing

We consider off-policy selection and learning in contextual bandits, where the learner aims to select or train a reward-maximizing policy using data collected by a fixed behavior policy. Our contribution is two-fold. First, we propose a novel off-policy selection method that leverages a new betting-based confidence bound applied to an inverse propensity weight sequence. Our theoretical analysis reveals that this method achieves a significantly improved, variance-adaptive guarantee over prior work. Second, we propose a novel and generic condition on the optimization objective for off-policy learning that strikes a different balance between bias and variance. One special case, which we call freezing, tends to induce low variance, which is preferred in small-data regimes. Our analysis shows that it matches the best existing guarantees. In our empirical study, our selection method outperforms existing methods, and freezing exhibits improved performance in small-sample regimes.
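
For context, the inverse propensity weight sequence referred to above underlies the standard off-policy value estimate sketched below (illustrative only, with our own function name; the paper's contribution is the betting-based confidence bound wrapped around these weights, not this estimator).

import numpy as np

def ipw_value_estimate(rewards, target_probs, behavior_probs):
    # Average of (pi(a_t | x_t) / mu(a_t | x_t)) * r_t over the logged rounds,
    # where pi is the target policy and mu the behavior policy.
    w = np.asarray(target_probs, dtype=float) / np.asarray(behavior_probs, dtype=float)
    return float(np.mean(w * np.asarray(rewards, dtype=float)))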


[160] 2503.05979

Learning-Order Autoregressive Models with Application to Molecular Graph Generation

Autoregressive models (ARMs) have become the workhorse for sequence generation tasks, since many problems can be modeled as next-token prediction. While there appears to be a natural ordering for text (i.e., left-to-right), for many data types, such as graphs, the canonical ordering is less obvious. To address this problem, we introduce a variant of ARM that generates high-dimensional data using a probabilistic ordering that is sequentially inferred from data. This model incorporates a trainable probability distribution, referred to as an order-policy, that dynamically decides the autoregressive order in a state-dependent manner. To train the model, we introduce a variational lower bound on the log-likelihood, which we optimize with stochastic gradient estimation. We demonstrate experimentally that our method can learn meaningful autoregressive orderings in image and graph generation. On the challenging domain of molecular graph generation, we achieve state-of-the-art results on the QM9 and ZINC250k benchmarks, evaluated across key metrics for distribution similarity and drug-likeness.


[161] 2503.19126

Tractable downfall of basis pursuit in structured sparse optimization

The problem of finding the sparsest solution to a linear underdetermined system of equations, often appearing, e.g., in data analysis, optimal control, system identification or sensor selection problems, is considered. This non-convex problem is commonly solved by convexification via $\ell_1$-norm minimization, known as basis pursuit (BP). In this work, a class of structured matrices, representing the system of equations, is introduced for which (BP) tractably fails to recover the sparsest solution. In particular, this enables efficient identification of matrix columns corresponding to unrecoverable non-zero entries of the sparsest solution and determination of the uniqueness of such a solution. These deterministic guarantees contrast with popular probabilistic ones and provide insights into the a priori design of sparse optimization problems. As our matrix structures appear naturally in optimal control problems, we exemplify our findings based on a fuel-optimal control problem for a class of discrete-time linear time-invariant systems.
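
For reference, basis pursuit itself is the linear program sketched below, written in the standard way via the split x = u - v with SciPy's linprog; this is not code from the paper, whose point is precisely to characterize structured matrices on which this convexification fails.

import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    # min ||x||_1 subject to A x = b, as an LP in (u, v) with x = u - v and u, v >= 0.
    m, n = A.shape
    res = linprog(c=np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=[(0, None)] * (2 * n), method="highs")
    return res.x[:n] - res.x[n:]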


[162] 2504.08438

Diffusion Models for Robotic Manipulation: A Survey

Diffusion generative models have demonstrated remarkable success in visual domains such as image and video generation. They have also recently emerged as a promising approach in robotics, especially in robot manipulations. Diffusion models leverage a probabilistic framework, and they stand out with their ability to model multi-modal distributions and their robustness to high-dimensional input and output spaces. This survey provides a comprehensive review of state-of-the-art diffusion models in robotic manipulation, including grasp learning, trajectory planning, and data augmentation. Diffusion models for scene and image augmentation lie at the intersection of robotics and computer vision for vision-based tasks to enhance generalizability and data scarcity. This paper also presents the two main frameworks of diffusion models and their integration with imitation learning and reinforcement learning. In addition, it discusses the common architectures and benchmarks and points out the challenges and advantages of current state-of-the-art diffusion-based methods.


[163] 2504.11130

Divergence of Empirical Neural Tangent Kernel in Classification Problems

This paper demonstrates that in classification problems, fully connected neural networks (FCNs) and residual neural networks (ResNets) cannot be approximated by kernel logistic regression based on the Neural Tangent Kernel (NTK) under overtraining (i.e., when training time approaches infinity). Specifically, when using the cross-entropy loss, regardless of how large the network width is (as long as it is finite), the empirical NTK diverges from the NTK on the training samples as training time increases. To establish this result, we first demonstrate the strict positive definiteness of the NTKs for multi-layer FCNs and ResNets. Then, we prove that during training with the cross-entropy loss, the neural network parameters diverge if the smallest eigenvalue of the empirical NTK matrix (Gram matrix) with respect to training samples is bounded below by a positive constant. This behavior contrasts sharply with the lazy training regime commonly observed in regression problems. Consequently, using a proof by contradiction, we show that the empirical NTK does not uniformly converge to the NTK across all times on the training samples as the network width increases. We validate our theoretical results through experiments on both synthetic data and the MNIST classification task. This finding implies that NTK theory is not applicable in this context, with significant theoretical implications for understanding neural networks in classification problems.
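
As a concrete reference point, the empirical NTK Gram matrix on a set of inputs is simply the matrix of inner products of per-example parameter gradients; the NumPy sketch below computes it analytically for a scalar-output two-layer tanh network, a toy stand-in for the FCNs and ResNets analyzed in the paper (names and architecture are ours).

import numpy as np

def empirical_ntk_two_layer(X, W1, b1, w2):
    # Network: f(x) = w2 . tanh(W1 x + b1) + b2, with X of shape (n, d),
    # W1 of shape (m, d), b1 and w2 of shape (m,). The gradient wrt the output
    # bias b2 is identically 1, so it appears as a column of ones below.
    # Entry (i, j) of the result is the inner product of parameter gradients at examples i and j.
    H = np.tanh(X @ W1.T + b1)          # (n, m) hidden activations
    D = (1.0 - H ** 2) * w2             # (n, m) gradients wrt hidden pre-activations
    grads = np.concatenate(
        [
            (D[:, :, None] * X[:, None, :]).reshape(len(X), -1),  # d f / d W1
            D,                                                     # d f / d b1
            H,                                                     # d f / d w2
            np.ones((len(X), 1)),                                  # d f / d b2
        ],
        axis=1,
    )
    return grads @ grads.T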


[164] 2505.02636

Phase retrieval and matrix sensing via benign and overparametrized nonconvex optimization

We study a nonconvex optimization algorithmic approach to phase retrieval and the more general problem of semidefinite low-rank matrix sensing. Specifically, we analyze the nonconvex landscape of a quartic Burer-Monteiro factored least-squares optimization problem. We develop a new analysis framework, taking advantage of the semidefinite problem structure, to understand the properties of second-order critical points -- specifically, whether they (approximately) recover the ground truth matrix. We show that it can be helpful to (mildly) overparametrize the problem, that is, to optimize over matrices of higher rank than the ground truth. We then apply this framework to several well-studied problem instances: in addition to recovering existing state-of-the-art phase retrieval landscape guarantees (without overparametrization), we show that overparametrizing by a factor at most logarithmic in the dimension allows recovery with optimal statistical sample complexity and error for the problems of (1) phase retrieval with sub-Gaussian measurements and (2) more general semidefinite matrix sensing with rank-1 Gaussian measurements. Previously, such statistical results had been shown only for estimators based on semidefinite programming. More generally, our analysis is partially based on the powerful method of convex dual certificates, suggesting that it could be applied to a much wider class of problems.


[165] 2505.05602

HiBayES: A Hierarchical Bayesian Modeling Framework for AI Evaluation Statistics

As Large Language Models (LLMs) and other AI systems evolve, robustly estimating their capabilities from inherently stochastic outputs while systematically quantifying uncertainty in these estimates becomes increasingly important. Further, advanced AI evaluations often have a nested hierarchical structure, exhibit high levels of complexity, and come with high costs in testing the most advanced AI systems. To address these challenges, we introduce HiBayES, a generalizable Hierarchical Bayesian modeling framework for AI Evaluation Statistics. HiBayES supports robust inferences in classical question-answer benchmarks and advanced agentic evaluations, particularly in low-data scenarios (e.g., < 20 data points per evaluation). Built on Generalized Linear Models (GLMs), Bayesian data analysis, and formal model comparison, HiBayES provides principled uncertainty quantification and robust parameter estimation. This paper offers a comprehensive introduction to HiBayES, including illustrative examples, comparisons to conventional statistical methods, and practical guidance for implementing multilevel Bayesian GLMs. Additionally, we provide a HiBayES software package [4] (Beta version) for out-of-the-box implementation.


[166] 2506.01393

Improved Regret Bounds for Gaussian Process Upper Confidence Bound in Bayesian Optimization

This paper addresses the Bayesian optimization problem (also referred to as the Bayesian setting of the Gaussian process bandit), where the learner seeks to minimize the regret under a function drawn from a known Gaussian process (GP). Under a Matérn kernel with a certain degree of smoothness, we show that the Gaussian process upper confidence bound (GP-UCB) algorithm achieves $\tilde{O}(\sqrt{T})$ cumulative regret with high probability. Furthermore, our analysis yields $O(\sqrt{T \ln^2 T})$ regret under a squared exponential kernel. These results fill the gap between the existing regret upper bound for GP-UCB and the best-known bound provided by Scarlett (2018). The key idea in our proof is to capture the concentration behavior of the input sequence realized by GP-UCB, enabling a more refined analysis of the GP's information gain.
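
For concreteness, a hedged sketch of the GP-UCB loop on a one-dimensional toy objective is given below; it is not the paper's code, and the objective, candidate grid, noise level, and $\beta_t$ schedule are illustrative assumptions:

    # GP-UCB: at each round, query the point maximizing mean + sqrt(beta_t) * std.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    rng = np.random.default_rng(0)
    f = lambda x: np.sin(3 * x) + 0.5 * x            # stand-in for the unknown GP sample
    grid = np.linspace(0.0, 3.0, 300).reshape(-1, 1)
    X, y = [np.array([[1.5]])], [f(1.5) + 0.1 * rng.normal()]

    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=0.1 ** 2)
    for t in range(1, 31):
        gp.fit(np.vstack(X), np.array(y))
        mean, std = gp.predict(grid, return_std=True)
        beta_t = 2.0 * np.log(grid.shape[0] * t ** 2)     # a common theory-style schedule
        x_next = grid[np.argmax(mean + np.sqrt(beta_t) * std)]
        X.append(x_next.reshape(1, 1))
        y.append(f(x_next[0]) + 0.1 * rng.normal())

    print("best observed value:", max(y))

The paper's contribution is not the algorithm itself but a sharper regret analysis of it, driven by the concentration behavior of the queried input sequence.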


[167] 2506.07614

Poisson Midpoint Method for Log Concave Sampling: Beyond the Strong Error Lower Bounds

We study the problem of sampling from strongly log-concave distributions over $\mathbb{R}^d$ using the Poisson midpoint discretization (a variant of the randomized midpoint method) for overdamped/underdamped Langevin dynamics. We prove its convergence in the 2-Wasserstein distance ($W_2$), achieving a cubic speedup in the dependence on the target accuracy ($\epsilon$) over the Euler-Maruyama discretization, surpassing existing bounds for randomized midpoint methods. Notably, in the case of underdamped Langevin dynamics, we demonstrate that the complexity of $W_2$ convergence is much smaller than the complexity lower bounds for convergence in $L^2$ strong error established in the literature.
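
The flavor of the midpoint idea can be seen in a hedged toy comparison below; this implements a randomized-midpoint-style step for overdamped Langevin dynamics on a Gaussian target, not the paper's Poisson midpoint scheme, and the step size and horizon are assumptions:

    # Euler-Maruyama vs. a randomized-midpoint-style step for
    # dX = -grad U(X) dt + sqrt(2) dB with U(x) = x^2 / 2 (target N(0, 1)).
    import numpy as np

    rng = np.random.default_rng(0)
    grad_U = lambda x: x                          # strongly log-concave toy potential
    h, n_steps, n_chains = 0.5, 200, 20000

    def euler_maruyama(x):
        return x - h * grad_U(x) + np.sqrt(2 * h) * rng.normal(size=x.shape)

    def randomized_midpoint(x):
        u = rng.uniform(size=x.shape)             # random point inside the step
        w1 = np.sqrt(2 * u * h) * rng.normal(size=x.shape)        # Brownian part up to u*h
        w2 = np.sqrt(2 * (1 - u) * h) * rng.normal(size=x.shape)  # remaining increment
        x_mid = x - u * h * grad_U(x) + w1
        return x - h * grad_U(x_mid) + w1 + w2

    x_em = np.zeros(n_chains)
    x_rm = np.zeros(n_chains)
    for _ in range(n_steps):
        x_em = euler_maruyama(x_em)
        x_rm = randomized_midpoint(x_rm)

    # The target variance is 1; the midpoint-style step shows smaller
    # discretization bias than Euler-Maruyama at the same step size.
    print("EM variance:", x_em.var(), " midpoint variance:", x_rm.var())

The cubic speedup claimed above is an asymptotic statement in the target accuracy; the toy run only illustrates the reduced per-step bias that such schemes exploit.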


[168] 2506.08874

On Limiting Probability Distributions of Higher Order Markov Chains

The limiting probability distribution is one of the key characteristics of a Markov chain, since it captures the chain's long-term behavior. In this paper, for a higher order Markov chain, we establish some properties related to its exact limiting probability distribution, including a sufficient condition for the existence of such a distribution. Our results extend the corresponding conclusions on first order chains. In addition, they complement existing results for higher order chains, which rely on approximation schemes or two-phase power iterations. Several illustrative examples are also given.
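
A standard device behind such results, shown here as a hedged illustration rather than the paper's construction, is to lift a higher order chain to a first order chain on tuples of states and read off the limiting distribution from the lifted chain's stationary distribution; the transition probabilities below are arbitrary assumptions:

    # Second-order chain on {0, 1}: P[i, j, k] = Pr(X_{t+1}=k | X_{t-1}=i, X_t=j).
    import numpy as np

    P = np.array([[[0.7, 0.3], [0.4, 0.6]],
                  [[0.2, 0.8], [0.5, 0.5]]])
    S = P.shape[0]

    # Lifted first-order chain on pairs: (i, j) -> (j, k) with probability P[i, j, k].
    Q = np.zeros((S * S, S * S))
    for i in range(S):
        for j in range(S):
            for k in range(S):
                Q[i * S + j, j * S + k] = P[i, j, k]

    # Stationary distribution of the lifted chain via power iteration.
    pi = np.full(S * S, 1.0 / (S * S))
    for _ in range(10000):
        pi = pi @ Q
    pi /= pi.sum()

    # Marginalizing over the previous state gives the limiting distribution of X_t.
    print("limiting distribution:", pi.reshape(S, S).sum(axis=0))

The interest of the paper lies in conditions under which such an exact limiting distribution exists for the higher order chain itself, rather than in the numerical computation.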


[169] 2506.12569

Moment Restrictions for Nonlinear Panel Data Models with Feedback

Many panel data methods, while allowing for general dependence between covariates and time-invariant agent-specific heterogeneity, place strong a priori restrictions on feedback: how past outcomes, covariates, and heterogeneity map into future covariate levels. Ruling out feedback entirely, as often occurs in practice, is unattractive in many dynamic economic settings. We provide a general characterization of all feedback and heterogeneity robust (FHR) moment conditions for nonlinear panel data models and present constructive methods to derive feasible moment-based estimators for specific models. We also use our moment characterization to compute semiparametric efficiency bounds, allowing for a quantification of the information loss associated with accommodating feedback, as well as providing insight into how to construct estimators with good efficiency properties in practice. Our results apply both to the finite-dimensional parameter indexing the parametric part of the model and to estimands that involve averages over the distribution of unobserved heterogeneity. We illustrate our methods by providing a complete characterization of all FHR moment functions in the multi-spell mixed proportional hazards model. We compute efficient moment functions for both model parameters and average effects in this setting.
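
As a textbook point of reference (a linear special case, not the paper's general characterization): in the model $y_{it} = x_{it}'\beta + \alpha_i + u_{it}$ with sequential exogeneity $E[u_{it} \mid x_i^t, y_i^{t-1}, \alpha_i] = 0$, covariates may respond freely to past outcomes and to $\alpha_i$, yet first-differencing removes the heterogeneity and lagged covariates remain valid instruments, yielding the feedback-robust moment conditions $E[x_{is}(\Delta y_{it} - \Delta x_{it}'\beta)] = 0$ for all $s \le t-1$. The FHR characterization described above concerns the full set of such moment functions in general nonlinear models.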


[170] 2506.15723

Modern approaches to building interpretable models of the property market using machine learning on the base of mass cadastral valuation

In this article, we review modern approaches to building interpretable models of property markets using machine learning, based on the mass valuation of property in the Primorye region, Russia. A researcher lacking expertise in this topic encounters numerous difficulties when trying to build a good model, chiefly because of the large gap between noisy real market data and the idealized data common in machine learning tutorials. This paper covers all stages of modeling: collection of initial data, identification of outliers, the search for and analysis of patterns in the data, the formation and final choice of price factors, the building of the model, and the evaluation of its efficiency. For each stage, we highlight potential issues and describe sound methods for overcoming them, using actual examples. We show that combining classical linear regression with geostatistical interpolation methods makes it possible to build an effective model for land parcels. For flats, where many objects are attributed to a single spatial point, applying geostatistical methods is difficult; we therefore suggest linear regression with automatic generation and selection of additional rules based on decision trees, the so-called RuleFit method (see the sketch below). Thus we show that, despite the strong restriction of interpretability, which matters in practice (for example, in legal matters), it is still possible to build effective models of real property markets.
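
A simplified RuleFit-style sketch is given below; it is hedged and not the authors' pipeline: the data and model sizes are assumptions, and a random forest stands in for the usual boosted-tree rule generator:

    # Linear terms augmented with binary rule features extracted from shallow
    # trees; a sparse linear model then selects among them (synthetic data).
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(0)
    n = 500
    X = rng.normal(size=(n, 4))                    # e.g. area, floor, age, distance
    y = 3 * X[:, 0] - 2 * X[:, 1] + 4 * (X[:, 2] > 0.5) * (X[:, 3] < 0) + rng.normal(size=n)

    # Shallow trees act as a rule generator; each leaf defines an interpretable rule.
    forest = RandomForestRegressor(n_estimators=30, max_depth=2, random_state=0).fit(X, y)
    leaves = forest.apply(X)                       # leaf index per sample per tree

    # One-hot encode leaf membership as binary rule features.
    rules = np.concatenate(
        [(leaves[:, t:t + 1] == np.unique(leaves[:, t])).astype(float)
         for t in range(leaves.shape[1])], axis=1)

    # Sparse linear fit over original features plus rules keeps the model interpretable.
    features = np.hstack([X, rules])
    model = LassoCV(cv=5).fit(features, y)
    print("nonzero coefficients:", int(np.sum(model.coef_ != 0)), "of", features.shape[1])

The appeal in a legal or appraisal context is that every selected term is either a raw price factor or an explicit if-then rule read off a shallow tree.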


[171] 2506.22566

Exploration Behavior of Untrained Policies

Exploration remains a fundamental challenge in reinforcement learning (RL), particularly in environments with sparse or adversarial reward structures. In this work, we study how the architecture of deep neural policies implicitly shapes exploration before training. We theoretically and empirically demonstrate strategies for generating ballistic or diffusive trajectories from untrained policies in a toy model. Using the theory of infinite-width networks and a continuous-time limit, we show that untrained policies return correlated actions and result in non-trivial state-visitation distributions. We discuss the distributions of the corresponding trajectories for a standard architecture, revealing insights into inductive biases for tackling exploration. Our results establish a theoretical and experimental framework for using policy initialization as a design tool to understand exploration behavior in early training.
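
The distinction between ballistic and diffusive exploration can be illustrated with a hedged toy rollout (not the paper's model): a 1-D point mass driven either by an untrained MLP policy, whose actions are correlated along the trajectory, or by i.i.d. random actions; architecture, step size, and horizon are assumptions:

    # Mean squared displacement (MSD) over an ensemble of untrained policies:
    # roughly ~t^2 (ballistic) for the smooth deterministic policy, ~t (diffusive)
    # for i.i.d. actions.
    import numpy as np
    import torch

    torch.manual_seed(0)
    n_policies, T = 200, 100
    rng = np.random.default_rng(0)

    def untrained_policy():
        return torch.nn.Sequential(
            torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))

    msd_policy, msd_random = np.zeros(T), np.zeros(T)
    for _ in range(n_policies):
        net, x_p, x_r = untrained_policy(), 0.0, 0.0
        for t in range(T):
            with torch.no_grad():
                a = net(torch.tensor([[x_p]], dtype=torch.float32)).item()
            x_p += 0.1 * a                          # state-dependent, correlated action
            x_r += 0.1 * rng.choice([-1.0, 1.0])    # i.i.d. action
            msd_policy[t] += x_p ** 2 / n_policies
            msd_random[t] += x_r ** 2 / n_policies

    # Ballistic growth roughly quadruples the MSD from T/2 to T; diffusive growth doubles it.
    print("policy growth:", msd_policy[-1] / msd_policy[T // 2 - 1],
          "random growth:", msd_random[-1] / msd_random[T // 2 - 1])

This mirrors, in miniature, the claim that the inductive bias of the architecture already shapes state-visitation behavior before any training has occurred.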


[172] 2507.05806

Predicting Graph Structure via Adapted Flux Balance Analysis

Many dynamic processes, such as telecommunication and transport networks, can be described through discrete time series of graphs. Modelling the dynamics of such time series enables prediction of graph structure at future time steps, which can be used in applications such as detection of anomalies. Existing approaches for graph prediction have limitations, such as assuming that the vertices do not change between consecutive graphs. To address this, we propose to exploit time series prediction methods in combination with an adapted form of flux balance analysis (FBA), a linear programming method originating from biochemistry. FBA is adapted to incorporate various constraints applicable to the scenario of growing graphs. Empirical evaluations on synthetic datasets (constructed via the Preferential Attachment model) and real datasets (UCI Message, HePH, Facebook, Bitcoin) demonstrate the efficacy of the proposed approach.
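
For readers unfamiliar with FBA, the generic linear program it is built on looks as follows; this is a hedged sketch of classic FBA (not the paper's adapted constraints for growing graphs), with a toy stoichiometric matrix and bounds chosen purely for illustration:

    # Classic FBA: maximize a linear objective over fluxes v subject to the
    # steady-state balance S v = 0 and box constraints on each flux.
    import numpy as np
    from scipy.optimize import linprog

    # Rows: balanced quantities (metabolites in biochemistry); columns: fluxes.
    S = np.array([[1, -1,  0,  0],
                  [0,  1, -1, -1]])
    c = np.array([0, 0, 0, 1])                     # maximize the last flux
    bounds = [(0, 10)] * S.shape[1]

    # linprog minimizes, so negate the objective to maximize c @ v.
    res = linprog(-c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds)
    print("optimal fluxes:", res.x, "objective value:", c @ res.x)

In the paper's setting, the balance constraints and bounds are adapted to graph growth and combined with time series forecasts; the sketch above only shows the underlying optimization template.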