Gaussian graphical models are widely used to infer dependence structures. Bayesian methods are appealing because they quantify the uncertainty associated with structural learning, i.e., the plausibility of conditional independence statements given the data, and with parameter estimates. However, computational demands have limited their application when the number of variables is large, which has prompted the use of pseudo-Bayesian approaches. We propose fully Bayesian algorithms that provably scale to high dimensions when the data-generating precision matrix is sparse, at a cost similar to that of the best pseudo-Bayesian methods. The first is a Metropolis-Hastings-within-block-Gibbs algorithm that performs row-wise updates of the precision matrix using local moves. The second is a global proposal that adds or removes multiple edges within a row, which helps explore multi-modal posteriors. We obtain spectral gap bounds for both samplers that are dimension-free in suitable settings. We also provide worst-case polynomial bounds on per-iteration costs, though in practice the cost is lower when sparse linear algebra is exploited. Our examples show that the methods extend the applicability of exact Bayesian inference from roughly 100 to roughly 1,000 variables (equivalently, from about 5,000 to about 500,000 candidate edges).
This paper introduces a new stochastic diffusion process to model electricity production from natural gas sources (as a percentage of total electricity production) in the United States. The method employs trend-function analysis to generate fits and forecasts with both conditional and unconditional estimated trend functions. Parameters are estimated using the maximum likelihood (ML) method, based on discretely sampled paths of the variable "electricity production from natural gas sources in the United States", with annual data from 1990 to 2021. The results show that the proposed model fits the data well and provides dependable medium-term forecasts for 2022-2023.
We consider the problem of optimal linear estimation of functionals that depend on the unknown values of a random field $\zeta(t,x)$ which is mean-square continuous, periodically correlated with respect to the time argument $t\in\mathbb R$, and isotropic on the unit sphere ${S_n}$ with respect to the spatial argument $x\in{S_n}$. Estimates are based on observations of the field $\zeta(t,x)+\theta(t,x)$ at points $(t,x):t<0,x\in S_{n}$, where $\theta(t,x)$ is a random field uncorrelated with $\zeta(t,x)$ that is likewise mean-square continuous, periodically correlated with respect to the time argument $t\in\mathbb R$, and isotropic on the sphere ${S_n}$ with respect to the spatial argument $x\in{S_n}$. Formulas for calculating the mean-square errors and the spectral characteristics of the optimal linear estimates of the functionals are derived in the case of spectral certainty, where the spectral densities of the fields are exactly known. Formulas that determine the least favourable spectral densities and the minimax (robust) spectral characteristics are proposed in the case where the spectral densities are not exactly known but a class of admissible spectral densities is given.
Within the biological, physical, and social sciences, there are two broad quantitative traditions: statistical and mathematical modeling. Both traditions share the common pursuit of advancing scientific knowledge, but they have developed largely independently, using distinct languages and inferential frameworks. This paper uses the notion of identification from causal inference, a field originating from the statistical modeling tradition, to develop a shared language. I first review foundational identification results for statistical models and then extend these ideas to mathematical models. Central to this framework is the use of bounds, i.e., ranges of plausible numerical values, to analyze both statistical and mathematical models. I discuss the implications of this perspective for the interpretation, comparison, and integration of different modeling approaches, and illustrate the framework with a simple pharmacodynamic model for hypertension. To conclude, I describe areas where the approach taken here should be extended in the future. By formalizing connections between statistical and mathematical modeling, this work contributes to a shared framework for quantitative science. My hope is that it will advance interactions between these two traditions.
We develop a Gaussian process framework for learning interaction kernels in multi-species interacting particle systems from trajectory data. Such systems provide a canonical setting for multiscale modeling, where simple microscopic interaction rules generate complex macroscopic behaviors. While our earlier work established a Gaussian process approach and convergence theory for single-species systems, later extended to second-order models with alignment and energy-type interactions, the multi-species setting introduces new challenges: heterogeneous populations interact both within and across species, the number of unknown kernels grows, and asymmetric interactions such as predator-prey dynamics must be accommodated. We formulate the learning problem in a nonparametric Bayesian setting and establish rigorous statistical guarantees. Our analysis shows recoverability of the interaction kernels, provides quantitative error bounds, and proves statistical optimality of posterior estimators, thereby unifying and generalizing previous single-species theory. Numerical experiments confirm the theoretical predictions and demonstrate the effectiveness of the proposed approach, highlighting its advantages over existing kernel-based methods. This work contributes a complete statistical framework for data-driven inference of interaction laws in multi-species systems, advancing the broader multiscale modeling program of connecting microscopic particle dynamics with emergent macroscopic behavior.
Objectives: Unsupervised learning with electronic health record (EHR) data has shown promise for phenotype discovery, but approaches typically disregard existing clinical information, limiting interpretability. We operationalize a Bayesian latent class framework for phenotyping that incorporates domain-specific knowledge to improve the clinical meaningfulness of EHR-derived phenotypes, and illustrate its utility by identifying an asthma sub-phenotype informed by features of Type 2 (T2) inflammation. Materials and methods: We present a framework for incorporating clinical knowledge into a Bayesian latent class model via informative priors to guide unsupervised clustering toward clinically relevant subgroups. This approach models missingness, accounting for potential missing-not-at-random patterns, and provides patient-level probabilities for phenotype assignment with uncertainty. Using reusable and flexible code, we applied the model to a large asthma EHR cohort, specifying informative priors for T2 inflammation-related features and weakly informative priors for other clinical variables, allowing the data to inform posterior distributions. Results and Conclusion: Using encounter data from January 2017 to February 2024 for 44,642 adult asthma patients, we found a bimodal posterior distribution of phenotype assignment, indicating clear class separation. The T2 inflammation-informed class (38.7%) was characterized by elevated eosinophil levels and allergy markers, plus high healthcare utilization and medication use, despite weakly informative priors on the latter variables. These patterns suggest an "uncontrolled T2-high" sub-phenotype. This demonstrates how our Bayesian latent class modeling approach supports hypothesis generation and cohort identification in EHR-based studies of heterogeneous diseases without well-established phenotype definitions.
Time-series forecasting increasingly demands not only accurate observational predictions but also causal forecasting under interventional and counterfactual queries in multivariate systems. We present DoFlow, a flow-based generative model defined over a causal DAG that delivers coherent observational and interventional predictions, as well as counterfactuals, through the natural encoding and decoding mechanism of continuous normalizing flows (CNFs). We also provide a supporting counterfactual recovery result under certain assumptions. Beyond forecasting, DoFlow provides explicit likelihoods of future trajectories, enabling principled anomaly detection. Experiments on synthetic datasets with various causal DAGs and real-world hydropower and cancer treatment time series show that DoFlow achieves accurate system-wide observational forecasting, enables causal forecasting over interventional and counterfactual queries, and effectively detects anomalies. This work contributes to the broader goal of unifying causal reasoning and generative modeling for complex dynamical systems.
Estimating environmental exposures from multi-source data is central to public health research and policy. Integrating data from satellite products and ground monitors is increasingly used to produce exposure surfaces. However, spatio-temporal misalignment, often induced by missing data, introduces substantial uncertainty and reduces predictive accuracy. We propose a Bayesian weighted predictor regression framework that models spatio-temporal relationships when predictors are observed on irregular supports or have substantial missing data, and are not concurrent with the outcome. The key feature of our model is a spatio-temporal kernel that aggregates the predictor over local space-time neighborhoods, built directly into the likelihood, eliminating any separate gap-filling or forced data-alignment stage. We introduce a numerical approximation using a Voronoi-based spatial quadrature combined with irregular temporal increments for estimation under data missingness and misalignment. We show that misspecification of the spatial and temporal lags induces bias in the mean and parameter estimates, indicating the need for principled parameter selection. Simulation studies confirm these theoretical findings: careful tuning is critical to control bias and achieve accurate prediction, while the proposed quadrature performs well under severe missingness. As an illustrative application, we estimate fine particulate matter (PM$_{2.5}$) in northern California using satellite-derived aerosol optical depth (AOD) and wildfire smoke plume indicators. Relative to a traditional collocated linear model, our approach improves out-of-sample predictive performance (over a 50\% increase in R$^2$), reduces uncertainty, and yields robust temporal predictions and spatial surface estimates. Our framework is extensible to additional spatio-temporally varying covariates and other kernel families.
This paper outlines a grammar of data analysis, as distinct from grammars of data manipulation, in which the primitives are metrics and dimensions. We describe a Python implementation of this grammar called Meterstick, which is agnostic to the underlying data source, whether a DataFrame or a SQL database.
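For intuition, here is a toy sketch of such a grammar in Python over a pandas DataFrame. The `Metric` class, `Sum` helper, and operator overloading are hypothetical illustrations of the metrics-and-dimensions idea, not Meterstick's actual API.

```python
import pandas as pd

class Metric:
    """A named aggregation; '/' composes metrics into ratios, and
    compute_on evaluates a metric per dimension slice."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn

    def __truediv__(self, other):
        return Metric(f"{self.name}/{other.name}",
                      lambda df: self.fn(df) / other.fn(df))

    def compute_on(self, df, split_by=None):
        if split_by is None:
            return pd.Series({self.name: self.fn(df)})
        return df.groupby(split_by).apply(self.fn).rename(self.name)

def Sum(col):
    return Metric(f"sum({col})", lambda df: df[col].sum())

ctr = Sum("clicks") / Sum("impressions")   # a metric built in the grammar
df = pd.DataFrame({"country": ["US", "US", "DE"],
                   "clicks": [3, 1, 2], "impressions": [10, 5, 8]})
print(ctr.compute_on(df, split_by="country"))  # CTR per country dimension
```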
Wavelet transforms are a widely used technique for decomposing a signal into coefficient vectors that correspond to distinct frequency/scale bands while retaining time localization. This property enables an adaptive analysis of signals at different scales, capturing both temporal and spectral patterns. By examining how correlations between two signals vary across these scales, we obtain a more nuanced understanding of their relationship than is possible from a single global correlation measure. In this work, we expand on the theory of wavelet-based correlations already used in the literature and elaborate on wavelet correlograms, partial wavelet correlations, and additive wavelet correlations using the Pearson and Kendall definitions. We use both orthogonal and non-decimated discrete wavelet transforms, and assess the robustness of these correlations under different wavelet bases. Simulation studies are conducted to illustrate these methods, and we conclude with applications to real-world datasets.
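As a minimal sketch of the scale-wise idea, assuming the PyWavelets package, the function below computes a Pearson wavelet correlogram from an orthogonal DWT; swapping in `scipy.stats.kendalltau` or `pywt.swt` gives the Kendall and non-decimated variants.

```python
import numpy as np
import pywt

def wavelet_correlogram(x, y, wavelet="db4", level=4):
    """Pearson correlation between the detail coefficients of two signals,
    computed separately at each scale of an orthogonal DWT."""
    cx = pywt.wavedec(x, wavelet, level=level)
    cy = pywt.wavedec(y, wavelet, level=level)
    # cx[0] is the coarsest approximation; cx[1:] are details, coarse to fine
    return {f"scale_{level - j}": np.corrcoef(a, b)[0, 1]
            for j, (a, b) in enumerate(zip(cx[1:], cy[1:]))}
```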
Early intervention in neurodegenerative diseases requires identifying periods before diagnosis when decline is rapid enough to detect whether a therapy is slowing progression. Since rapid decline typically occurs close to diagnosis, identifying these periods requires knowing each patient's time of diagnosis. Yet many patients exit studies before diagnosis, making time of diagnosis right-censored by time of study exit -- creating a right-censored covariate problem when estimating decline. Existing estimators either assume noninformative covariate censoring, where time of study exit is independent of time of diagnosis, or allow informative covariate censoring but require correctly specifying how these times are related. We developed SPIRE (Semi-Parametric Informative Right-censored covariate Estimator), a super doubly robust estimator that remains consistent without correctly specifying the densities governing time of diagnosis or time of study exit. Typical double robustness requires at least one density to be correct; SPIRE requires neither. When both densities are correctly specified, SPIRE achieves semiparametric efficiency. We also developed a test for detecting informative covariate censoring. Simulations with 85% right-censoring demonstrated SPIRE's robustness, efficiency, and reliable detection of informative covariate censoring. Applied to Huntington disease data, SPIRE handled informative covariate censoring appropriately and remained consistent regardless of density specification, providing a reliable tool for early intervention.
Reliable outlier detection in high-dimensional data is crucial in modern science, yet it remains a challenging task. Traditional methods often break down in these settings due to their reliance on asymptotic behaviors with respect to sample size under fixed dimension. Furthermore, many modern alternatives introduce sophisticated statistical treatments and computational complexities. To overcome these issues, our approach leverages intuitive geometric properties of high-dimensional space, effectively turning the curse of dimensionality into an advantage. We propose two new outlyingness statistics based on an observation's relational patterns with all other points, measured via pairwise distances or inner products. We establish a theoretical foundation for our statistics, demonstrating that as the dimension grows, they create a non-vanishing margin that asymptotically separates outliers from non-outliers. Based on this foundation, we develop practical outlier detection procedures, including a simple clustering-based algorithm and a distribution-free test using random rotations. Through simulation experiments and real data applications, we demonstrate that our proposed methods achieve a superior balance between detection power and false positive control, outperforming existing methods and establishing their practical utility in high-dimensional settings.
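A minimal sketch of one such relational statistic (mean pairwise distance; an inner-product variant would instead use entries of X @ X.T), together with a simple largest-gap cutoff standing in for the clustering step:

```python
import numpy as np

def outlyingness(X):
    """Relational outlyingness scores: each observation's mean pairwise
    Euclidean distance to all other points."""
    sq = (X ** 2).sum(axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2 * X @ X.T, 0.0)
    np.fill_diagonal(d2, np.nan)        # exclude self-distances
    return np.nanmean(np.sqrt(d2), axis=1)

def flag(scores):
    """Flag points above the largest gap in the sorted scores (a crude
    stand-in for a clustering-based cutoff)."""
    s = np.sort(scores)
    cut = s[np.argmax(np.diff(s))]
    return scores > cut
```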
When releasing binary proportions computed using sensitive data, several government agencies and other data stewards protect confidentiality of the underlying values by ensuring the released statistics satisfy differential privacy. Typically, this is done by adding carefully chosen noise to the sample proportion computed using the confidential data. In this article, we describe and compare methods for turning this differentially private proportion into an interval estimate for an underlying population probability. Specifically, we consider differentially private versions of the Wald and Wilson intervals, Bayesian credible intervals based on denoising the differentially private proportion, and an exact interval motivated by the Clopper-Pearson confidence interval. We examine the repeated-sampling performance of the intervals in simulation studies under both the Laplace mechanism and the discrete Gaussian mechanism across a range of privacy guarantees. We find that while several methods offer reasonable performance, the Bayesian credible intervals are the most attractive.
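To fix ideas, here is a sketch of the simplest such construction: a Wald-type interval whose standard error adds the Laplace noise variance to the usual binomial variance. A proportion has sensitivity 1/n, so Laplace noise with scale 1/(n*epsilon) yields epsilon-DP; this is a textbook-style illustration, not the paper's exact procedures.

```python
import numpy as np
from scipy.stats import norm

def dp_wald_interval(p_hat_dp, n, epsilon, alpha=0.05):
    """Wald-type CI for a proportion released via the Laplace mechanism:
    the Laplace noise has variance 2 / (n * epsilon)**2, which is added
    to the binomial variance p(1-p)/n."""
    z = norm.ppf(1 - alpha / 2)
    p = min(max(p_hat_dp, 0.0), 1.0)   # clamp the noisy proportion to [0, 1]
    half = z * np.sqrt(p * (1 - p) / n + 2.0 / (n * epsilon) ** 2)
    return max(p - half, 0.0), min(p + half, 1.0)
```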
In this paper, we consider diffusion index forecasting with both tensor and non-tensor predictors, where the tensor structure is preserved with a Canonical Polyadic (CP) tensor factor model. When the number of non-tensor predictors is small, we study the asymptotic properties of the least-squares estimator in this tensor factor-augmented regression, allowing for factors with different strengths. We derive an analytical formula for prediction intervals that accounts for the estimation uncertainty of the latent factors. In addition, we propose a novel thresholding estimator for the high-dimensional covariance matrix that is robust to cross-sectional dependence. When the number of non-tensor predictors exceeds or diverges with the sample size, we introduce a multi-source factor-augmented sparse regression model and establish the consistency of the corresponding penalized estimator. Simulation studies validate our theoretical results, and an empirical application to US trade flows demonstrates the advantages of our approach over other popular methods in the literature.
This paper studies the high-dimensional scaling limits of online stochastic gradient descent (SGD) for single-layer networks. Building on the seminal work of Saad and Solla, which analyzed the deterministic (ballistic) scaling limits of SGD corresponding to the gradient flow of the population loss, we focus on the critical scaling regime of the step size. Below this critical scale, the effective dynamics are governed by ballistic (ODE) limits, but at the critical scale, a new correction term appears that changes the phase diagram. In this regime, near the fixed points, the corresponding diffusive (SDE) limits of the effective dynamics reduce to an Ornstein-Uhlenbeck process under certain conditions. These results highlight how the information exponent controls sample complexity and illustrate the limitations of deterministic scaling limits in capturing the stochastic fluctuations of high-dimensional learning dynamics.
To achieve the United Nations Sustainable Development Goals (SDGs), coordinated action across their interlinked indicators is required. Although most of the research on the interlinkages of the SDGs is done at the goal level, policies are usually made and implemented at the level of indicators (or targets). Our study examines the existing literature on SDG interlinkages and indicator (or target) prioritization, highlighting important drawbacks of current methodologies. To address these limitations, we propose a generic network-based model that can quantify the importance of SDG indicators and help policymakers identify indicators for maximum synergistic impact. Our model applies to any country, offering a tool for national policymakers. We illustrate its application using data from India, identifying indicators that are crucial for accelerating progress on the SDGs. While our main contribution lies in developing this network-theoretic methodology, we also provide supporting empirical evidence from the existing literature for selected key observations.
The Lasso has been widely used as a method for variable selection, valued for its simplicity and empirical performance. However, its selection stability deteriorates in the presence of correlated predictors. Several approaches have been developed to mitigate this limitation. In this paper, we provide a brief review of existing approaches, highlighting their limitations. We then propose a simple technique to improve the selection stability of the Lasso by integrating a weighting scheme into the Lasso penalty function, where the weights are defined as an increasing function of a correlation-adjusted ranking that reflects the predictive power of the predictors. Empirical evaluations on both simulated and real-world datasets demonstrate the efficacy of the proposed method. Additional numerical results show that the approach also stabilizes other regularization-based selection methods, indicating its potential as a general-purpose solution.
This work addresses the problem of efficient sampling of Markov random fields (MRFs). The sampling of Potts or Ising MRFs is most often based on Gibbs sampling and is thus computationally expensive. We consider in this work how to circumvent this bottleneck through a link with Gaussian Markov random fields (GMRFs). The latter can be sampled in several cost-effective ways, and we introduce a mapping from real-valued GMRFs to discrete-valued MRFs. The resulting new class of MRFs enjoys several theoretical properties that validate the new model. Numerical results show a drastic gain in computational efficiency: we sample at least 35x faster than Gibbs sampling while using at least 37x less energy, all the while exhibiting empirical properties close to those of classical MRFs.
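One simple illustration of a real-to-discrete mapping, assuming thresholding at zero (the paper's actual mapping may differ): sample a grid GMRF via the Cholesky factor of its precision matrix, then map the field to Ising-like spins.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
# 1-D second-difference matrix; the 2-D grid precision is its Kronecker sum
D1 = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Q = np.kron(D1, np.eye(n)) + np.kron(np.eye(n), D1) + 0.1 * np.eye(n * n)
L = np.linalg.cholesky(Q)                              # Q = L L^T
x = np.linalg.solve(L.T, rng.standard_normal(n * n))   # x ~ N(0, Q^{-1})
spins = np.where(x.reshape(n, n) > 0.0, 1, -1)         # discrete Ising-like field
```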
We first study the generalization error of models that use a fixed feature representation (frozen intermediate layers) followed by a trainable readout layer. This setting encompasses a range of architectures, from deep random-feature models to echo-state networks (ESNs) with recurrent dynamics. Working in the high-dimensional regime, we apply Random Matrix Theory to derive a closed-form expression for the asymptotic generalization error. We then apply this analysis to recurrent representations and obtain concise formulas that characterize their performance. Surprisingly, we show that a linear ESN is equivalent to ridge regression with an exponentially time-weighted (''memory'') input covariance, revealing a clear inductive bias toward recent inputs. Experiments match the predictions: ESNs win in low-sample, short-memory regimes, while ridge regression prevails with more data or long-range dependencies. Our methodology provides a general framework for analyzing overparameterized models and offers insights into the behavior of deep learning networks.
Score-based Generative Models (SGMs) have achieved impressive performance in data generation across a wide range of applications and benefit from strong theoretical guarantees. Recently, methods inspired by statistical mechanics, in particular, Hamiltonian dynamics, have introduced Critically-damped Langevin Diffusions (CLDs), which define diffusion processes on extended spaces by coupling the data with auxiliary variables. These approaches, along with their associated score-matching and sampling procedures, have been shown to outperform standard diffusion-based samplers numerically. In this paper, we analyze a generalized dynamic that extends classical CLDs by introducing an additional hyperparameter controlling the noise applied to the data coordinate, thereby better exploiting the extended space. We further derive a novel upper bound on the sampling error of CLD-based generative models in the Wasserstein metric. This additional hyperparameter influences the smoothness of sample paths, and our discretization error analysis provides practical guidance for its tuning, leading to improved sampling performance.
All Resolutions Inference (ARI) is a post hoc inference method for functional Magnetic Resonance Imaging (fMRI) data analysis that provides valid lower bounds on the proportion of truly active voxels within any, possibly data-driven, cluster. As such, it addresses the paradox of spatial specificity encountered with more classical cluster-extent thresholding methods. It allows the cluster-forming threshold to be increased in order to locate the signal with greater spatial precision without overfitting, also known as the drill-down approach. Notip and pARI are two recent permutation-based extensions of ARI designed to increase statistical power by accounting for the strong dependence structure typical of fMRI data. A recent comparison between these methods, based on large voxel clusters, concluded that pARI outperforms Notip. We revisit this conclusion by conducting a systematic comparison of the two. Our reanalysis of the same fMRI data sets from the Neurovault database demonstrates the existence of complementary performance regimes: while pARI indeed achieves higher sensitivity for large clusters, Notip provides more informative and robust results for smaller clusters. In particular, while Notip supports informative ``drill-down'' exploration into subregions of activation, pARI often yields non-informative bounds in such cases and can even underperform the baseline ARI method.
We present a suite of packages in R, Python, Julia, and C++ that efficiently solve the Sorted L-One Penalized Estimation (SLOPE) problem. The packages feature a highly efficient hybrid coordinate descent algorithm that fits generalized linear models (GLMs) and supports a variety of loss functions, including Gaussian, binomial, Poisson, and multinomial logistic regression. Our implementation is designed to be fast, memory-efficient, and flexible. The packages support a variety of data structures (dense, sparse, and out-of-memory matrices) and are designed to efficiently fit the full SLOPE path as well as handle cross-validation of SLOPE models, including the relaxed SLOPE. We present usage examples and benchmarks on both real and simulated data, showing that our packages outperform existing implementations of SLOPE in terms of speed.
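For reference, here is a compact proximal-gradient solver for least-squares SLOPE, built around the stack-based prox of the sorted-L1 norm (Bogdan et al., 2015). The packages themselves rely on a much faster hybrid coordinate descent, so this is only a sketch of the underlying optimization problem.

```python
import numpy as np

def prox_sorted_l1(y, lam):
    """Prox of the sorted-L1 penalty via stack-based pooling (FastProxSL1);
    lam must be non-negative and non-increasing."""
    sign, a = np.sign(y), np.abs(y)
    order = np.argsort(a)[::-1]
    v = a[order] - lam
    blocks = []                       # each block stores [sum, count]
    for vi in v:
        blocks.append([vi, 1])
        # merge while block averages violate the non-increasing constraint
        while (len(blocks) > 1 and
               blocks[-2][0] * blocks[-1][1] <= blocks[-1][0] * blocks[-2][1]):
            t, c = blocks.pop()
            blocks[-1][0] += t
            blocks[-1][1] += c
    out = np.concatenate([np.full(c, max(t / c, 0.0)) for t, c in blocks])
    x = np.empty_like(a)
    x[order] = out
    return sign * x

def slope_pg(X, y, lam, n_iter=500):
    """Proximal-gradient SLOPE for least squares; a reference sketch,
    not the packages' hybrid coordinate descent."""
    n, p = X.shape
    step = n / np.linalg.norm(X, 2) ** 2      # 1 / Lipschitz constant
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        beta = prox_sorted_l1(beta - step * grad, step * lam)
    return beta
```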
Concept drift and label scarcity are two critical challenges limiting the robustness of predictive models in dynamic industrial environments. Existing drift detection methods often assume global shifts and rely on dense supervision, making them ill-suited for regression tasks with local drifts and limited labels. This paper proposes an adaptive sampling framework that combines residual-based exploration and exploitation with EWMA monitoring to efficiently detect local concept drift under labeling budget constraints. Empirical results on synthetic benchmarks and a case study on an electricity market demonstrate superior performance in label efficiency and drift detection accuracy.
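A minimal sketch of the EWMA monitoring component on a stream of labeled residuals; the smoothing constant, control-limit width, and warm-up length are illustrative choices rather than the paper's tuned values.

```python
import numpy as np

def ewma_alarms(residuals, lam=0.2, L=3.0, warmup=50):
    """EWMA chart on prediction residuals; alarm times indicate drift."""
    residuals = np.asarray(residuals, dtype=float)
    mu, sigma = residuals[:warmup].mean(), residuals[:warmup].std()
    z, alarms = mu, []
    for t, r in enumerate(residuals):
        z = lam * r + (1 - lam) * z
        # time-varying EWMA control-limit width
        width = L * sigma * np.sqrt(
            lam / (2 - lam) * (1 - (1 - lam) ** (2 * (t + 1))))
        if abs(z - mu) > width:
            alarms.append(t)
    return alarms
```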
This paper investigates global and local laws for sample covariance matrices with general growth rates of dimensions. The sample size $N$ and population dimension $M$ are only required to have the same order on the logarithmic scale, so that their ratio $M/N$ can approach zero, a constant, or infinity. These results are then used to determine the convergence rate of spiked eigenvalue estimates.
High-throughput sequencing has transformed microbiome research, but it also produces inherently compositional data that challenge standard statistical and machine learning methods. In this work, we propose a multinomial classification framework for compositional microbiome data based on penalized log-ratio regression and pairwise separability screening. The method quantifies the discriminative ability of each OTU through the area under the receiver operating characteristic curve ($AUC$) for all pairwise log-ratios and aggregates these values into a global separability index $S_k$, yielding interpretable rankings of taxa together with confidence intervals. We illustrate the approach by reanalyzing the Baxter colorectal adenoma dataset and comparing our results with Greenacre's ordination-based analysis using Correspondence Analysis and Canonical Correspondence Analysis. Our models consistently recover a core subset of taxa previously identified as discriminant, thereby corroborating Greenacre's main findings, while also revealing additional OTUs that become important once demographic covariates are taken into account. In particular, adjustment for age, gender, and diabetes medication improves the precision of the separation index and highlights new, potentially relevant taxa, suggesting that part of the original signal may have been influenced by confounding. Overall, the integration of log-ratio modeling, covariate adjustment, and uncertainty estimation provides a robust and interpretable framework for OTU selection in compositional microbiome data. The proposed method complements existing ordination-based approaches by adding a probabilistic and inferential perspective, strengthening the identification of biologically meaningful microbial signatures.
In this article, we develop fully Bayesian, copula-based, spatial-statistical models for large, noisy, incomplete, and non-Gaussian spatial data. Our approach includes novel constructions of copulas that accommodate a spatial-random-effects structure, enabling low-rank representations and computationally efficient Bayesian inference. The spatial copula is used in a latent process model of the Bayesian hierarchical spatial-statistical model, and, conditional on the latent copula-based spatial process, the data model handles measurement errors and missing data. Our simulation studies show that a fully Bayesian approach delivers accurate and fast inference for both parameter estimation and spatial-process prediction, outperforming several benchmark methods, including fixed rank kriging (FRK). The new class of copula-based models is used to map atmospheric methane in the Bowen Basin, Queensland, Australia, from Sentinel 5P satellite data.
The synthetic control method estimates a causal effect by comparing the outcomes of a treated unit to a weighted average of control units that closely matches the pre-treatment outcomes of the treated unit. This method presumes that the relationship between the potential outcomes of the treated and control units remains consistent before and after treatment. However, the estimator may become unreliable when these relationships shift or when control units are highly correlated. To address these challenges, we introduce the Distributionally Robust Synthetic Control (DRoSC) method, which accommodates potential shifts in relationships and high correlations among control units. The DRoSC method targets a new causal estimand defined as the optimizer of a worst-case optimization problem over all synthetic weights consistent with the pre-treatment period. When the identification conditions for the classical synthetic control method hold, the DRoSC method targets the same causal effect as the synthetic control. When these conditions are violated, we show that the new causal estimand is a conservative proxy for the non-identifiable causal effect. We further show that the limiting distribution of the DRoSC estimator is non-normal and propose a novel inferential approach to characterize this non-normal limiting distribution. We demonstrate its finite-sample performance through numerical studies and an analysis of the economic impact of terrorism in the Basque Country.
In random matrix theory, the spectral distribution of the covariance matrix has been well studied under the large-dimensional asymptotic regime, where the dimensionality and the sample size tend to infinity at the same rate. However, most existing theories are built upon the assumption of independent and identically distributed samples, which may be violated in practice; for example, by observations of continuous-time processes at discrete time points, namely high-frequency data. In this paper, we extend the classical spectral analysis of the covariance matrix in large-dimensional random matrix theory to the spot volatility matrix using high-frequency data. We establish the first-order limiting spectral distribution and obtain a second-order result, that is, a central limit theorem for linear spectral statistics. Moreover, we apply the results to design feasible tests for the spot volatility matrix, including identity and sphericity tests. Simulation studies justify the finite-sample performance of the test statistics and verify our established theory.
In this paper we first introduce the setting of filtering on Stiefel manifolds. Then, assuming the underlying system process is constant, we prove the convergence of the extended Kalman filter with Stiefel manifold-valued observations. This corresponds to the case where one has measurement errors that need to be filtered. Finally, simulations are presented for a few selected Stiefel manifolds and the speed of convergence is studied.
A generalisation of the extended Kalman filter to Stiefel manifold-valued measurements is presented. We provide simulations on the 2-sphere and the space of orthogonal 4-by-2 matrices which show that the extended Kalman filter improves significantly on relying on raw measurements alone.
Kernel discrepancies are a powerful tool for analyzing worst-case errors in quasi-Monte Carlo (QMC) methods. Building on recent advances in optimizing such discrepancy measures, we extend the subset selection problem to the setting of kernel discrepancies, selecting an $m$-element subset from a large population of size $n \gg m$. We introduce a novel subset selection algorithm, applicable to general kernel discrepancies, that efficiently generates low-discrepancy samples both from the uniform distribution on the unit hypercube, the traditional setting of classical QMC, and from more general distributions $F$ with known density functions by employing the kernel Stein discrepancy. We also explore the relationship between the classical $L_2$ star discrepancy and its $L_\infty$ counterpart.
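To make the subset selection problem concrete, the sketch below greedily grows an m-element subset minimizing the squared MMD to the full population under a Gaussian kernel; treat it as a naive baseline rather than the proposed algorithm.

```python
import numpy as np

def gaussian_gram(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def greedy_subset(P, m, gamma=1.0):
    """Greedy m-subset of the population P minimizing the squared MMD
    between the subset and P's empirical distribution."""
    K = gaussian_gram(P, P, gamma)
    embed = K.mean(axis=1)        # kernel mean embedding at each point
    chosen = []
    for _ in range(m):
        best_i, best_val = -1, np.inf
        for i in range(len(P)):
            if i in chosen:
                continue
            idx = chosen + [i]
            # MMD^2(S, P) up to a constant: mean of K over S x S minus
            # twice the mean embedding averaged over S
            val = K[np.ix_(idx, idx)].mean() - 2.0 * embed[idx].mean()
            if val < best_val:
                best_i, best_val = i, val
        chosen.append(best_i)
    return P[chosen]
```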
Classical probabilistic graphical models face fundamental challenges in modern data environments, which are characterized by high dimensionality, source heterogeneity, and stringent data-sharing constraints. In this work, we revisit the Ising model, a well-established member of the Markov Random Field (MRF) family, and develop a distributed framework that enables scalable and privacy-preserving representation learning from large-scale binary data with inherent low-rank structure. Our approach optimizes a non-convex surrogate loss function via bi-factored gradient descent, offering substantial computational and communication advantages over conventional convex approaches. We evaluate our algorithm on multi-institutional electronic health record (EHR) datasets from 58,248 patients across the University of Pittsburgh Medical Center (UPMC) and Mass General Brigham (MGB), demonstrating superior performance in global representation learning and downstream clinical tasks, including relationship detection, patient phenotyping, and patient clustering. These results highlight a broader potential for statistical inference in federated, high-dimensional settings while addressing the practical challenges of data complexity and multi-institutional integration.
As we exhaust methods that reduce variance without introducing bias, reducing variance in experiments often requires accepting some bias through methods like winsorization or surrogate metrics. While this bias-variance tradeoff can be optimized for individual experiments, bias may accumulate over time, raising concerns for long-term optimization. We analyze whether bias is ever acceptable when it can accumulate, and show that a bias-variance tradeoff persists in long-term settings: improving the signal-to-noise ratio remains beneficial, even if it introduces bias. This implies we should shift from thinking there is a single ``correct'', unbiased metric to thinking about how to make the best estimates and decisions when better precision can be achieved at the expense of bias. Furthermore, our model adds nuance to previous findings suggesting that a less stringent launch criterion leads to improved gains. We show that while this is beneficial when the system is far from the optimum, a more stringent launch criterion is preferable as the system matures.
We propose the use of the ``spin-opstring", derived from Stochastic Series Expansion Quantum Monte Carlo (QMC) simulations as machine learning (ML) input data. It offers a compact, memory-efficient representation of QMC simulation cells, combining the initial state with an operator string that encodes the state's evolution through imaginary time. Using supervised ML, we demonstrate the input's effectiveness in capturing both conventional and topological phase transitions, and in a regression task to predict non-local observables. We also demonstrate the capability of spin-opstring data in transfer learning by training models on one quantum system and successfully predicting on another, as well as showing that models trained on smaller system sizes generalize well to larger ones. Importantly, we illustrate a clear advantage of spin-opstring over conventional spin configurations in the accurate prediction of a quantum phase transition. Finally, we show how the inherent structure of spin-opstring provides an elegant framework for the interpretability of ML predictions. Using two state-of-the-art interpretability techniques, Layer-wise Relevance Propagation and SHapley Additive exPlanations, we show that the ML models learn and rely on physically meaningful features from the input data. Together, these findings establish the spin-opstring as a broadly-applicable and interpretable input format for ML in quantum many-body physics.
Large language models (LLMs) trained for step-by-step reasoning often become excessively verbose, raising inference cost. Standard Reinforcement Learning with Verifiable Rewards (RLVR) pipelines filter out ``easy'' problems for training efficiency, leaving the model to train primarily on harder problems that require longer reasoning chains. This skews the output length distribution upward, resulting in a \textbf{model that conflates ``thinking longer'' with ``thinking better''}. In this work, we show that retaining and modestly up-weighting moderately easy problems acts as an implicit length regularizer. Exposing the model to solvable short-chain tasks constrains its output distribution and prevents runaway verbosity. The result is \textbf{\emph{emergent brevity for free}}: the model learns to solve harder problems without inflating the output length, \textbf{ despite the absence of any explicit length penalization}. RLVR experiments using this approach on \textit{Qwen3-4B-Thinking-2507} (with a 16k token limit) achieve baseline pass@1 AIME25 accuracy while generating solutions that are, on average, nearly twice as short. The code is available at \href{this https URL}{GitHub}, with datasets and models on \href{this https URL}{Hugging Face}.
Variance-dependent regret bounds have received increasing attention in recent studies on contextual bandits. However, most of these studies focus on upper confidence bound (UCB)-based bandit algorithms, while sampling-based bandit algorithms such as Thompson sampling remain understudied. The only exception is the LinVDTS algorithm (Xu et al., 2023), which is limited to linear reward functions and whose regret bound is not optimal with respect to the model dimension. In this paper, we present FGTS-VA, a variance-aware Thompson sampling algorithm for contextual bandits with general reward functions and an optimal regret bound. At the core of our analysis is an extension of the decoupling coefficient, a technique commonly used in the analysis of Feel-good Thompson sampling (FGTS) that reflects the complexity of the model space. With the new decoupling coefficient denoted by $\mathrm{dc}$, FGTS-VA achieves a regret of $\tilde{O}(\sqrt{\mathrm{dc}\cdot\log|\mathcal{F}|\sum_{t=1}^T\sigma_t^2}+\mathrm{dc})$, where $|\mathcal{F}|$ is the size of the model space, $T$ is the total number of rounds, and $\sigma_t^2$ is the subgaussian norm of the noise (e.g., the variance when the noise is Gaussian) at round $t$. In the setting of contextual linear bandits, the regret bound of FGTS-VA matches that of UCB-based algorithms using weighted linear regression (Zhou and Gu, 2022).
Accurate quantification of pavement crack width plays a pivotal role in assessing structural integrity and guiding maintenance interventions. However, achieving precise crack width measurements presents significant challenges due to: (1) the complex, non-uniform morphology of crack boundaries, which limits the efficacy of conventional approaches, and (2) the demand for rapid measurement capabilities from arbitrary pixel locations to facilitate comprehensive pavement condition evaluation. To overcome these limitations, this study introduces a cascaded framework integrating Principal Component Analysis (PCA) and Robust PCA (RPCA) for efficient crack width extraction from digital images. The proposed methodology comprises three sequential stages: (1) initial crack segmentation using established detection algorithms to generate a binary representation, (2) determination of the primary orientation axis for quasi-parallel cracks through PCA, and (3) extraction of the Main Propagation Axis (MPA) for irregular crack geometries using RPCA. Comprehensive evaluations were conducted across three publicly available datasets, demonstrating that the proposed approach achieves superior performance in both computational efficiency and measurement accuracy compared to existing state-of-the-art techniques.
Accurately analyzing NMR and MRI diffusion experimental data relies on the theoretical expression used for signal attenuation or phase evolution. In a complex system, the encountered magnetic field is often inhomogeneous and may be represented by a linear combination of z^n gradient fields, where n is the order. Additionally, the higher the order of the nonlinear gradient field, the more sensitive the phase variances are to differences in diffusion coefficients and delay times. Hence, studying higher-order fields has both theoretical and experimental importance, but it is a challenge for traditional methods. The recently proposed phase diffusion method provides a general way to overcome this challenge. That method is used and demonstrated in detail in this paper to determine the phase evolution in a quadric field (n = 4). Three different types of phase evolution in the quadric gradient field are obtained. Moreover, a general signal attenuation expression is proposed to describe the signal attenuation for spin diffusion from the origin of the nonlinear gradient field. This expression is based on the short gradient pulse (SGP) approximation but is extended to include the finite gradient pulse width (FGPW) effect by using the mean square phase. Compared to other forms of signal attenuation, such as Gaussian and Lorentzian, this method covers a broader range of attenuation, from small to relatively large. Additionally, this attenuation is easier to understand than the Mittag-Leffler function-based attenuation. The results, particularly the phase and signal attenuation expressions obtained in this paper, potentially advance PFG diffusion research in nonlinear gradient fields in NMR and MRI.
We introduce a unified operator-theoretic framework for analyzing mixing times of finite-state ergodic Markov chains that applies to both reversible and non-reversible dynamics. The central object in our analysis is the projected transition operator $PU_{\perp 1}$, where $P$ is the transition kernel and $U_{\perp 1}$ is the orthogonal projection onto the mean-zero subspace of $\ell^{2}(\pi)$, with $\pi$ the stationary distribution. We show that explicitly computable matrix norms of $(PU_{\perp 1})^k$ give non-asymptotic mixing times and distances to stationarity, and bound autocorrelations at lag $k$. We establish, for the first time, submultiplicativity of the pointwise chi-squared divergence in the general non-reversible case. We provide, for all times $k$, bounds on $\chi^{2}(k)$ based on the spectrum of $PU_{\perp 1}$: the magnitudes of its distinct non-zero eigenvalues, the discrepancy between their algebraic and geometric multiplicities, the condition number of a similarity transform, and a constant coming from the smallest atom of the stationary distribution (all explicitly computable). Furthermore, for diagonalizable $PU_{\perp 1}$, we provide explicit constants satisfying the hypocoercivity phenomenon for discrete-time Markov chains. Our framework enables direct computation of convergence bounds for challenging non-reversible chains, including momentum-based samplers for V-shaped distributions. We provide the sharpest known bounds for the non-reversible walk on the triangle. Our results, combined with a simple regression, reveal a fundamental insight into momentum samplers: although for uniform distributions $n\log{n}$ iterations suffice for $\chi^{2}$ mixing, for V-shaped distributions they remain diffusive, as $n^{1.969}\log{n^{1.956}}$ iterations are sufficient. The framework also shows that for ergodic chains the relaxation time satisfies $\tau_{\mathrm{rel}}=\|\sum_{k=0}^{\infty}P^{k}U_{\perp 1}\|_{\ell^{2}(\pi)}$.
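For small chains these quantities are directly computable; below is a dense-matrix sketch, assuming a row-stochastic P with a unique stationary distribution.

```python
import numpy as np

def projected_norms(P, k_max=50):
    """Norms ||(P U_{perp 1})^k|| in l2(pi); the norm at time k bounds the
    distance to stationarity for a finite-state ergodic chain."""
    w, V = np.linalg.eig(P.T)
    pi = np.real(V[:, np.argmin(np.abs(w - 1.0))])
    pi = pi / pi.sum()                           # stationary distribution
    n = len(pi)
    U = np.eye(n) - np.outer(np.ones(n), pi)     # project out the pi-mean
    D, Dinv = np.diag(np.sqrt(pi)), np.diag(1.0 / np.sqrt(pi))
    M, A, norms = P @ U, np.eye(n), []
    for _ in range(k_max):
        A = A @ M
        norms.append(np.linalg.norm(D @ A @ Dinv, 2))  # l2(pi) operator norm
    return norms
```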
Probabilistic relaxations of graph cuts offer a differentiable alternative to spectral clustering, enabling end-to-end and online learning without eigendecompositions, yet prior work centered on RatioCut and lacked general guarantees and principled gradients. We present a unified probabilistic framework that covers a wide class of cuts, including Normalized Cut. Our framework provides tight analytic upper bounds on expected discrete cuts via integral representations and Gauss hypergeometric functions with closed-form forward and backward passes. Together, these results deliver a rigorous, numerically stable foundation for scalable, differentiable graph partitioning, covering a wide range of clustering and contrastive-learning objectives.
We introduce Cycle-Sync, a robust and global framework for estimating camera poses (both rotations and locations). Our core innovation is a location solver that adapts message-passing least squares (MPLS) -- originally developed for group synchronization -- to camera location estimation. We modify MPLS to emphasize cycle-consistent information, redefine cycle consistencies using estimated distances from previous iterations, and incorporate a Welsch-type robust loss. We establish the strongest known deterministic exact-recovery guarantee for camera location estimation, showing that cycle consistency alone -- without access to inter-camera distances -- suffices to achieve the lowest sample complexity currently known. To further enhance robustness, we introduce a plug-and-play outlier rejection module inspired by robust subspace recovery, and we fully integrate cycle consistency into MPLS for rotation synchronization. Our global approach avoids the need for bundle adjustment. Experiments on synthetic and real datasets show that Cycle-Sync consistently outperforms leading pose estimators, including full structure-from-motion pipelines with bundle adjustment.
Preconditioning is a key component of MCMC algorithms that improves sampling efficiency by facilitating exploration of geometrically complex target distributions through an invertible map. While linear preconditioners are often sufficient for moderately complex target distributions, recent work has explored nonlinear preconditioning with invertible neural networks as components of normalizing flows (NFs). However, empirical and theoretical studies show that overparameterized NF preconditioners can degrade sampling efficiency and fit quality. Moreover, existing NF-based approaches do not adapt their architectures to the target distribution. Related work outside of MCMC similarly finds that suitably parameterized NFs can achieve comparable or superior performance with substantially less training time or data. We propose a factorized preconditioning architecture that reduces NF complexity by combining a linear component with a conditional NF, improving adaptability to target geometry. The linear preconditioner is applied to dimensions that are approximately Gaussian, as estimated from warmup samples, while the conditional NF models more complex dimensions. Our method yields significantly better tail samples on two complex synthetic distributions and consistently better performance on a sparse logistic regression posterior across varying likelihood and prior strengths. It also achieves higher effective sample sizes on hierarchical Bayesian model posteriors with weak likelihoods and strong funnel geometries. This approach is particularly relevant for hierarchical Bayesian model analyses with limited data and could inform current theoretical and software strides in neural MCMC design.
We study the problem of learning an $n$-variable $k$-CNF formula $\Phi$ from its i.i.d. uniform random solutions, which is equivalent to learning a Boolean Markov random field (MRF) with $k$-wise hard constraints. Revisiting Valiant's algorithm (Commun. ACM '84), we show that it can exactly learn (1) $k$-CNFs with bounded clause intersection size under Lovász local lemma-type conditions, from $O(\log n)$ samples; and (2) random $k$-CNFs near the satisfiability threshold, from $\widetilde{O}(n^{\exp(-\sqrt{k})})$ samples. These results significantly improve on the previous $O(n^k)$ sample complexity. We further establish new information-theoretic lower bounds on the sample complexity of both exact and approximate learning from i.i.d. uniform random solutions.
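Valiant's elimination algorithm is short enough to state in code: retain every clause satisfied by all observed solutions and output their conjunction. The sketch enumerates width-k clauses only (smaller widths follow by ranging over the width) and is exponential in k, as expected.

```python
from itertools import combinations, product

def valiant_learn(samples, k):
    """Valiant-style elimination: keep each width-k clause satisfied by
    every observed solution; the learned formula is their conjunction.
    samples: list of tuples of bools; a clause is a tuple of
    (variable index, polarity) literals."""
    n = len(samples[0])
    surviving = []
    for vars_ in combinations(range(n), k):
        for polarity in product((True, False), repeat=k):
            clause = tuple(zip(vars_, polarity))
            # a clause survives iff every sample satisfies some literal
            if all(any(x[v] == s for v, s in clause) for x in samples):
                surviving.append(clause)
    return surviving
```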
In this thesis, we analyse generalisations of the Ornstein-Uhlenbeck (OU) semigroup and study them in both quantum and classical setups. In the first three chapters, we analyse dissipative dynamics on noncommutative/quantum spaces, in particular systems with multiparticle interactions associated to CCR algebras. We provide various models where the dissipative dynamics are constructed using noncommutative Dirichlet forms. Some of our models decay to equilibrium algebraically, and the Poincaré inequality does not hold. Using the classical representation of generators of nilpotent Lie algebras, we provide noncommutative representations of Lie algebras in terms of creation and annihilation operators and discuss the construction of the corresponding Dirichlet forms. This opens the opportunity to explore quantum stochastic processes related to Lie algebras and nilpotent Lie algebras. Additionally, these representations enable the investigation of the noncommutative analogue of hypoellipticity. In another direction, we explore the potential for introducing statistical models within a quantum framework. In this thesis, however, we present a classical statistical model, the multivariate graph superposition of OU (Gr supOU) processes, which allows for long(er) memory in the modelling of sparse graphs. We estimate these processes using the generalised method of moments and show that it yields consistent estimators. We demonstrate the asymptotic normality of the moment estimators and validate them through a simulation study.
This paper presents a discrete-event stochastic model for the redistribution of indistinguishable personal items, exemplified by socks, among multiple cohabitants sharing a communal laundry system. Drawing on concepts from ecological population dynamics, diffusion processes, and stochastic exchange theory, the model captures the probabilistic mechanisms underlying item mixing, recovery, and loss. Each cohabitant is represented as an autonomous agent whose belongings interact through iterative cycles of collective washing, sorting, and partial correction. The system's evolution is characterized by random mixing events, selective recollection, and attrition over time. Implemented using the SimPy discrete-event simulation framework, the model demonstrates that even minimal exchange probabilities can generate emergent asymmetries, quasi-equilibrium distributions, and long-term disorder. The findings illustrate how stochastic processes inherent to shared domestic systems can produce persistent imbalances, offering a quantitative perspective on an everyday social phenomenon.
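A minimal SimPy sketch of the mechanism described, with illustrative mixing and attrition rates rather than the paper's calibrated values:

```python
import random
import simpy

P_MIX, P_LOSS, CYCLES = 0.05, 0.01, 100   # illustrative rates, not fitted

def laundry(env, counts, rng):
    """One communal wash per time step: each sock may be lost, and each
    surviving sock may be returned to the wrong cohabitant."""
    while True:
        yield env.timeout(1)
        pool = [owner for owner, c in counts.items() for _ in range(c)]
        for owner in counts:
            counts[owner] = 0
        for true_owner in pool:
            if rng.random() < P_LOSS:
                continue                                    # attrition
            if rng.random() < P_MIX:
                counts[rng.choice(list(counts))] += 1       # mis-sorted
            else:
                counts[true_owner] += 1

rng = random.Random(42)
counts = {f"cohabitant_{i}": 20 for i in range(4)}
env = simpy.Environment()
env.process(laundry(env, counts, rng))
env.run(until=CYCLES)
print(counts)   # emergent asymmetry despite identical starting stocks
```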
Zeroth-order or derivative-free optimization, as exemplified by MeZO, is an attractive strategy for finetuning large language models (LLMs) because it eliminates the memory overhead of backpropagation. However, it converges slowly due to the inherent curse of dimensionality when searching for descent directions in the high-dimensional parameter space of billion-scale LLMs. We propose ConMeZO, a novel zeroth-order optimizer that accelerates convergence by adaptive directional sampling. Instead of drawing the direction uniformly at random, ConMeZO restricts the sampling to a cone centered around a momentum estimate. This concentrates the search in directions where the true gradient is more likely to lie and thus reduces the effect of high dimensions. We prove that ConMeZO achieves the same worst-case convergence rate as MeZO. Empirically, when finetuning LLMs on natural language tasks, ConMeZO is up to 2X faster than MeZO while retaining the low-memory footprint of zeroth-order methods.
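The directional sampling step can be sketched in a few lines: draw a unit vector at angle theta from the momentum estimate. The exact in-cone distribution used by ConMeZO may differ; this shows only the geometry.

```python
import numpy as np

def cone_direction(momentum, theta, rng):
    """Unit direction at angle theta (radians) from the momentum estimate."""
    d = momentum / np.linalg.norm(momentum)
    u = rng.standard_normal(momentum.shape)
    u -= (u @ d) * d                    # orthogonalize against the momentum
    u /= np.linalg.norm(u)
    return np.cos(theta) * d + np.sin(theta) * u
```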
This paper introduces a kernel discrepancy-based framework for rerandomization to enhance the precision of causal inference in controlled experiments. The kernel discrepancy quantifies the difference between the empirical covariate distributions of the treatment groups, and we demonstrate that it is the key component of an upper bound on the variance of the difference-in-means estimator, thereby establishing a theoretical rationale for its use. A suitable kernel function and the corresponding discrepancy can be chosen to accommodate simple or complex relationships between the outcome and the covariates. The proposed framework efficiently applies to any number of treatment groups, overcoming a significant limitation of existing methods. Furthermore, we develop a computationally efficient composite strategy for factorial experiments by recursively applying two- or multi-group rerandomizations. Numerical studies demonstrate that our approach significantly reduces estimator variance, with the linear kernel being optimal for linear relationships and the $\mathcal{L}_2$-discrepancy offering robust performance under model uncertainty.
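The generic rerandomization template behind this framework is acceptance-rejection on a covariate-balance criterion; below is a two-group sketch with a Gaussian-kernel MMD, where the kernel and threshold are design parameters (the paper's multi-group and composite strategies extend this).

```python
import numpy as np

def mmd2(X, g, gamma=1.0):
    """Squared MMD between covariates of groups g == 0 and g == 1."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    a, b = g == 0, g == 1
    return (K[np.ix_(a, a)].mean() + K[np.ix_(b, b)].mean()
            - 2.0 * K[np.ix_(a, b)].mean())

def rerandomize(X, threshold, rng, max_tries=10_000):
    """Redraw balanced two-group assignments until the kernel discrepancy
    falls below the threshold (assumes an even number of units)."""
    n = len(X)
    for _ in range(max_tries):
        g = rng.permutation(np.repeat([0, 1], n // 2))
        if mmd2(X, g) < threshold:
            return g
    raise RuntimeError("acceptance threshold too strict")
```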
Bipartite experiments are randomized experiments where the treatment is applied to a set of units (randomization units) different from the units of analysis, with randomization units and analysis units connected through a bipartite graph. The scale of experimentation at large online platforms necessitates both accurate inference in the presence of a large bipartite interference graph and a highly scalable implementation. In this paper, we describe new methods for inference that enable practical, scalable analysis of bipartite experiments: (1) We propose CA-ERL, a covariate-adjusted variant of the exposure-reweighted-linear (ERL) estimator [9], which empirically yields 60-90% variance reduction. (2) We introduce a randomization-based method for inference and prove the asymptotic validity of a Wald-type confidence interval under graph sparsity assumptions. (3) We present a linear-time algorithm for randomization inference of the CA-ERL estimator, which can be easily implemented in query engines like Presto or Spark. We evaluate our methods both on a real experiment at Meta that randomized treatment on Facebook Groups and analyzed user-level metrics, and on simulations with synthetic data. The real-world data show that our CA-ERL estimator reduces the confidence interval (CI) width by 60-90% (compared to ERL) in a practical setting. The simulations show that our randomization inference procedure achieves correct coverage across instances, while the ERL estimator has incorrectly small CI widths for instances with large true effect sizes and is overly conservative when the bipartite graph is dense.
We introduce a novel statistical framework for the analysis of replicated point processes that allows for the study of point pattern variability at a population level. By treating point process realizations as random measures, we adopt a functional analysis perspective and propose a form of functional Principal Component Analysis (fPCA) for point processes. The originality of our method is to base the analysis on the cumulative mass functions of the random measures, which gives a direct and interpretable analysis. Key theoretical contributions include establishing a Karhunen-Loève expansion for the random measures and a Mercer theorem for covariance measures. We establish convergence in a strong sense and introduce the concept of principal measures, which can be seen as latent processes governing the dynamics of the observed point patterns. We propose an easy-to-implement estimation strategy for the eigenelements, for which parametric rates are achieved. We fully characterize the solutions of our approach for Poisson and Hawkes processes and validate our methodology via simulations and diverse applications in seismology, single-cell biology, and neurosciences, demonstrating its versatility and effectiveness. Our method is implemented in the pppca R package.
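In its simplest form, the cumulative-mass-function construction reduces to PCA of cumulative count curves on a common grid; a plain-numpy sketch (the pppca R package implements the full estimator):

```python
import numpy as np

def pp_fpca(patterns, grid, n_comp=2):
    """fPCA of replicated point patterns via their cumulative mass
    functions N_i(t) = #{points of pattern i <= t} on a common grid."""
    F = np.array([np.searchsorted(np.sort(p), grid, side="right")
                  for p in patterns], dtype=float)
    Fc = F - F.mean(axis=0)                  # center across replications
    U, s, Vt = np.linalg.svd(Fc, full_matrices=False)
    scores = U[:, :n_comp] * s[:n_comp]      # principal scores per pattern
    return scores, Vt[:n_comp]               # rows approximate eigenfunctions
```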
This work studies the statistical implications of using features comprised of general linear combinations of covariates to partition the data in randomized decision tree and forest regression algorithms. Using random tessellation theory in stochastic geometry, we provide a theoretical analysis of a class of efficiently generated random tree and forest estimators that allow for oblique splits along such features. We call these estimators \emph{oblique Mondrian} trees and forests, as the trees are generated by first selecting a set of features from linear combinations of the covariates and then running a Mondrian process that hierarchically partitions the data along these features. Generalization error bounds and convergence rates are obtained for the flexible function class of multi-index models for dimension reduction, where the output is assumed to depend on a low-dimensional relevant feature subspace of the input domain. The results highlight how the risk of these estimators depends on the choice of features and quantify how robust the risk is with respect to error in the estimation of relevant features. The asymptotic analysis also provides conditions on the consistency rates of the estimated features along which the data is split for these estimators to obtain minimax optimal rates of convergence with respect to the dimension of the relevant feature subspace. Additionally, a lower bound on the risk of axis-aligned Mondrian trees (where features are restricted to the set of covariates) is obtained, proving that these estimators are suboptimal for general ridge functions, no matter how the distribution over the covariates used to divide the data at each tree node is weighted.
In astronomical observations, the estimation of distances from parallaxes is a challenging task due to the inherent measurement errors and the non-linear relationship between the parallax and the distance. This study leverages ideas from robust Bayesian inference to tackle these challenges, investigating a broad class of prior densities for estimating distances with a reduced bias and variance. Through theoretical analysis, simulation experiments, and the application to data from the Gaia Data Release 1 (GDR1), we demonstrate that heavy-tailed priors provide more reliable distance estimates, particularly in the presence of large fractional parallax errors. Theoretical results highlight the "curse of a single observation," where the likelihood dominates the posterior, limiting the impact of the prior. Nevertheless, heavy-tailed priors can delay the explosion of posterior risk, offering a more robust framework for distance estimation. The findings suggest that reciprocal invariant priors, with polynomial decay in their tails, such as the Half-Cauchy and Product Half-Cauchy, are particularly well-suited for this task, providing a balance between bias reduction and variance control.
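A toy version of the computation, in our notation rather than the paper's: with a Gaussian parallax likelihood centered at the reciprocal distance and a Half-Cauchy prior on the distance, the posterior can be evaluated on a grid.

    import numpy as np

    def distance_posterior(parallax, sigma, scale=1.0, r_max=50.0, n=5000):
        """Grid posterior over distance r given one noisy parallax measurement."""
        r = np.linspace(1e-3, r_max, n)
        log_like = -0.5 * ((parallax - 1.0 / r) / sigma) ** 2
        # Half-Cauchy(scale) prior: the heavy polynomial tail advocated above.
        log_prior = -np.log(1.0 + (r / scale) ** 2)
        post = np.exp(log_like + log_prior)
        post /= np.trapz(post, r)
        return r, post

    r, post = distance_posterior(parallax=0.5, sigma=0.2)   # 40% fractional error
    print("posterior mean distance:", np.trapz(r * post, r))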
Convex and penalized robust regression methods often suffer from a persistent bias induced by large outliers, limiting their effectiveness in adversarial or heavy-tailed settings. In this work, we study a smooth redescending non-convex M-estimator, specifically the Welsch estimator, and show that it can eliminate this bias whenever it is statistically identifiable. We focus on high-dimensional linear regression under adversarial contamination, where a fraction of samples may be corrupted by an adversary with full knowledge of the data and underlying model. A central technical contribution of this paper is a practical algorithm that provably finds a statistically valid solution to this non-convex problem. We show that the Welsch objective remains locally convex within a well-characterized basin of attraction, and our algorithm is guaranteed to converge into this region and recover the desired estimator. We establish three main guarantees: (a) non-asymptotic minimax-optimal deviation bounds under contamination, (b) improved unbiasedness in the presence of large outliers, and (c) asymptotic normality, yielding statistical efficiency as the sample size grows. Finally, we support our theoretical findings with comprehensive experiments on synthetic and real datasets, demonstrating the estimator's superior robustness, efficiency, and effectiveness in mitigating outlier-induced bias relative to state-of-the-art robust regression methods.
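For intuition, the Welsch loss $\rho(r) = c^2(1 - e^{-r^2/(2c^2)})$ induces exponentially decaying weights on residuals, so large outliers are effectively ignored. A basic IRLS scheme is sketched below; this is illustration only, since the paper's algorithm comes with guarantees about reaching the basin of attraction that the plain OLS warm start here does not provide.

    import numpy as np

    def welsch_irls(X, y, c=3.0, n_iter=50):
        """Iteratively reweighted least squares for the Welsch M-estimator."""
        beta = np.linalg.lstsq(X, y, rcond=None)[0]      # naive warm start (see caveat)
        for _ in range(n_iter):
            r = y - X @ beta
            w = np.exp(-(r ** 2) / (2 * c ** 2))         # redescending weights
            Xw = X * w[:, None]
            # Solve the weighted normal equations X^T W X beta = X^T W y.
            beta = np.linalg.solve(X.T @ Xw + 1e-10 * np.eye(X.shape[1]), Xw.T @ y)
        return beta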
Cluster-randomized trials (CRTs) are a well-established class of designs for evaluating community-based interventions. An essential task in planning these trials is determining the number of clusters and cluster sizes needed to achieve sufficient statistical power for detecting a clinically relevant effect size. While methods for evaluating the average treatment effect (ATE) for the entire study population are well-established, sample size methods for testing heterogeneity of treatment effects (HTEs), i.e., treatment-covariate interaction or difference in subpopulation-specific treatment effects, in CRTs have only recently been developed. For pre-specified analyses of HTEs in CRTs, effect-modifying covariates should, ideally, be accompanied by sample size or power calculations to ensure the trial has adequate power for the planned analyses. Power analysis for testing HTEs is more complex than for ATEs due to the additional design parameters that must be specified. Power and sample size formulas for testing HTEs via linear mixed effects (LME) models have been separately derived for different cluster-randomized designs, including single and multi-period parallel designs, crossover designs, and stepped-wedge designs, and for continuous and binary outcomes. This tutorial provides a consolidated reference guide for these methods and enhances their accessibility through an online R Shiny calculator. We further discuss key considerations for conducting sample size and power calculations to test pre-specified HTE hypotheses in CRTs, highlighting the importance of specifying advance estimates of intracluster correlation coefficients for both outcomes and covariates, and their implications for power. The sample size methodology and calculator functionality are demonstrated through a real CRT example.
This work presents sparse invariant coordinate selection, SICS, a new method for sparse and robust independent component analysis. SICS is based on classical invariant coordinate selection (ICS), which is presented in such a form that a LASSO-type penalty can be applied to promote sparsity. Robustness is achieved by using robust scatter matrices. In the first part of the paper, the background and building blocks (scatter matrices, measures of robustness, ICS, and independent component analysis) are carefully introduced. Then the proposed new method and its algorithm are derived and presented. This part also includes consistency and breakdown point results for a general class of sparse ICS-like methods. The performance of SICS in identifying sparse independent component loadings is investigated with multiple simulations. The method is illustrated with an example on constructing sparse causal graphs, and we also propose a graphical tool for selecting the appropriate sparsity level in SICS.
Model-based clustering integrated with variable selection is a powerful tool for uncovering latent structures within complex data. However, its effectiveness is often hindered by challenges such as identifying relevant variables that define heterogeneous subgroups and handling data that are missing not at random, a prevalent issue in fields like transcriptomics. While several notable methods have been proposed to address these problems, they typically tackle each issue in isolation, thereby limiting their flexibility and adaptability. This paper introduces a unified framework designed to address these challenges simultaneously. Our approach incorporates a data-driven penalty matrix into penalized clustering to enable more flexible variable selection, along with a mechanism that explicitly models the relationship between missingness and latent class membership. We demonstrate that, under certain regularity conditions, the proposed framework achieves both asymptotic consistency and selection consistency, even in the presence of missing data. This unified strategy significantly enhances the capability and efficiency of model-based clustering, advancing methodologies for identifying informative variables that define homogeneous subgroups in the presence of complex missing data patterns. The performance of the framework, including its computational efficiency, is evaluated through simulations and demonstrated using both synthetic and real-world transcriptomic datasets.
We propose a principled method for autoencoding with random forests. Our strategy builds on foundational results from nonparametric statistics and spectral graph theory to learn a low-dimensional embedding of the model that optimally represents relationships in the data. We provide exact and approximate solutions to the decoding problem via constrained optimization, split relabeling, and nearest neighbors regression. These methods effectively invert the compression pipeline, establishing a map from the embedding space back to the input space using splits learned by the ensemble's constituent trees. The resulting decoders are universally consistent under common regularity assumptions. The procedure works with supervised or unsupervised models, providing a window into conditional or joint distributions. We demonstrate various applications of this autoencoder, including powerful new tools for visualization, compression, clustering, and denoising. Experiments illustrate the ease and utility of our method in a wide range of settings, including tabular, image, and genomic data.
We introduce an approach to topic modelling with document-level covariates that remains tractable in the face of large text corpora. This is achieved by de-emphasizing the role of parameter estimation in an underlying probabilistic model, assuming instead that the data come from a fixed but unknown distribution whose statistical functionals are of interest. We propose combining a convex formulation of non-negative matrix factorization with standard regression techniques as a fast-to-compute and useful estimate of such a functional. Uncertainty quantification can then be achieved by layering non-parametric resampling methods on top of this scheme. This is in contrast to popular topic modelling paradigms, which posit a complex and often hard-to-fit generative model of the data. We argue that the simple, non-parametric approach advocated here is faster, more interpretable, and enjoys better inferential justification than said generative models. Finally, we demonstrate our methods with an application analysing covariate effects on the discourse of flavours attributed to Canadian beers.
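A compact rendition of the scheme with standard tools (sklearn's generic NMF stands in for the paper's convex formulation, and for speed this sketch keeps the factorization fixed across bootstrap resamples rather than refitting it each time):

    import numpy as np
    from sklearn.decomposition import NMF
    from sklearn.linear_model import LinearRegression

    def topic_covariate_effects(counts, covariates, k=10, n_boot=200, seed=0):
        """Fit NMF topics, regress topic weights on covariates, bootstrap the coefficients.

        counts     : (n_docs, n_words) non-negative document-term matrix
        covariates : (n_docs, d) document-level covariates
        """
        rng = np.random.default_rng(seed)
        W = NMF(n_components=k, init="nndsvd", max_iter=500).fit_transform(counts)
        W = W / W.sum(axis=1, keepdims=True).clip(1e-12)   # document-topic proportions
        n = counts.shape[0]
        coefs = []
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)                    # nonparametric resample
            reg = LinearRegression().fit(covariates[idx], W[idx])
            coefs.append(reg.coef_)
        coefs = np.array(coefs)                            # (n_boot, k, d)
        return coefs.mean(axis=0), np.percentile(coefs, [2.5, 97.5], axis=0)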
We consider a high-dimensional multi-outcome regression in which $q$ possibly dependent binary and continuous outcomes are regressed onto $p$ covariates. We model the observed outcome vector as a partially observed latent realization from a multivariate linear regression model. Our goal is to estimate simultaneously a sparse matrix ($B$) of latent regression coefficients (i.e., partial covariate effects) and a sparse latent residual precision matrix ($\Omega$), which induces partial correlations between the observed outcomes. To this end, we specify continuous spike-and-slab priors on all entries of $B$ and off-diagonal elements of $\Omega$ and introduce a Monte Carlo Expectation-Conditional Maximization algorithm to compute the maximum a posteriori estimate of the model parameters. Under a set of mild assumptions, we derive the posterior contraction rate for our model in the high-dimensional regime where both $p$ and $q$ diverge with the sample size $n$ and establish a sure screening property, which implies that, as $n$ increases, we can recover all truly non-zero elements of $B$ with probability tending to one. We demonstrate the excellent finite-sample properties of our proposed method, which we call mixed-mSSL, using extensive simulation studies and three applications ranging from medicine to ecology.
Modern treatment targeting methods often rely on estimating the conditional average treatment effect (CATE) using machine learning tools. While effective in identifying who benefits from treatment on the individual level, these approaches typically overlook system-level dynamics that may arise when treatments induce strain on shared capacity. We study the problem of targeting in Markovian systems, where treatment decisions must be made one at a time as units arrive, and early decisions can impact later outcomes through delayed or limited access to resources. We show that optimal policies in such settings compare CATE-like quantities to state-specific thresholds, where each threshold reflects the expected cumulative impact on the system of treating an additional individual in the given state. We propose an algorithm that augments standard CATE estimation with off-policy evaluation techniques to estimate these thresholds from observational data. Theoretical results establish consistency and convergence guarantees, and empirical studies demonstrate that our method improves long-run outcomes considerably relative to individual-level CATE targeting rules.
Mechanistic mathematical models of within-host viral dynamics are tools for understanding how a virus's biology and its interaction with the immune system shape the infectivity of a host. The biology of the process is encoded by the structure and parameters of the model, which can be inferred statistically by fitting to viral load data. The main drawback of mechanistic models is that this inference is computationally expensive because the model must be repeatedly solved. This limits the size of the datasets that can be considered or the complexity of the models fitted. In this paper we develop a much cheaper inference method for this class of models by implementing a novel approximation of the model dynamics that uses a combination of random and deterministic processes. This approximation also properly accounts for process noise early in the infection, when cell and virion numbers are small, which is important for the viral dynamics but often overlooked. Our method runs on a consumer laptop and is fast enough to facilitate a full hierarchical Bayesian treatment of the problem with sharing of information to allow for individual-level parameter differences. We apply our method to simulated datasets and a reanalysis of COVID-19 monitoring data in a National Basketball Association cohort of 163 individuals.
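A minimal illustration of the hybrid idea on the standard target-cell-limited (TIV) model, assuming a hard switch from exact stochastic simulation to deterministic integration once virion counts are large (the paper's approximation is more refined than this toy version):

    import numpy as np

    def hybrid_tiv(beta, delta, p, c, T0, V0=1, switch=1000, t_end=20.0, dt=0.01, seed=0):
        """Within-host TIV model: exact stochastic simulation while virion counts
        are small (capturing early process noise), Euler ODE steps afterwards.
        States: T (target cells), I (infected cells), V (virions)."""
        rng = np.random.default_rng(seed)
        T, I, V, t = float(T0), 0.0, float(V0), 0.0
        while t < t_end and V < switch:
            rates = np.array([beta * T * V, delta * I, p * I, c * V])
            total = rates.sum()
            if total == 0:
                return t, T, I, V                    # infection died out
            t += rng.exponential(1.0 / total)
            r = rng.choice(4, p=rates / total)
            if r == 0: T -= 1; I += 1; V -= 1        # infection of a target cell
            elif r == 1: I -= 1                      # infected-cell death
            elif r == 2: V += 1                      # virion production
            else: V -= 1                             # virion clearance
        while t < t_end:                             # deterministic phase
            dT = -beta * T * V
            dI = beta * T * V - delta * I
            dV = p * I - c * V
            T += dt * dT; I += dt * dI; V += dt * dV; t += dt
        return t, T, I, V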
Reliable evaluation of AI systems remains a fundamental challenge when ground truth labels are unavailable, particularly for systems generating natural language outputs like AI chat and agent systems. Many of these AI agents and systems focus on entity-centric tasks. In enterprise contexts, organizations deploy AI systems for entity linking, data integration, and information retrieval where verification against gold standards is often infeasible due to proprietary data constraints. Academic deployments face similar challenges when evaluating AI systems on specialized datasets with ambiguous criteria. Conventional evaluation frameworks, rooted in supervised learning paradigms, fail in such scenarios where single correct answers cannot be defined. We introduce VB-Score, a variance-bounded evaluation framework for entity-centric AI systems that operates without ground truth by jointly measuring effectiveness and robustness. Given system inputs, VB-Score enumerates plausible interpretations through constraint relaxation and Monte Carlo sampling, assigning probabilities that reflect their likelihood. It then evaluates system outputs by their expected success across interpretations, penalized by variance to assess the robustness of the system. We provide formal theoretical analysis establishing key properties, including range, monotonicity, and stability, along with concentration bounds for Monte Carlo estimation. Through case studies on AI systems with ambiguous inputs, we demonstrate that VB-Score reveals robustness differences hidden by conventional evaluation frameworks, offering a principled measurement framework for assessing AI system reliability in label-scarce domains.
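The scoring rule itself is short to state. A schematic version, based only on our reading of the description above and with hypothetical helper names:

    import random

    def vb_score(outputs, interpretations, success, lam=1.0, n_samples=1000, seed=0):
        """Expected success across sampled interpretations, penalized by variance.

        interpretations : list of (interp, prob) pairs from constraint relaxation
        success         : success(outputs, interp) -> 0/1, a task-specific check
        """
        rng = random.Random(seed)
        interps, probs = zip(*interpretations)
        draws = rng.choices(interps, weights=probs, k=n_samples)   # Monte Carlo step
        vals = [success(outputs, i) for i in draws]
        mean = sum(vals) / n_samples
        var = sum((v - mean) ** 2 for v in vals) / n_samples
        return mean - lam * var     # higher means both effective and robust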
System identification of autoregressive processes on Stiefel and Grassmann manifolds is presented and studied. We define the system parameters as elements of the orthogonal group and show that the system can be estimated by averaging over observations. We then propose an algorithm for computing these system parameters using conjugate gradient descent on Stiefel and Grassmann manifolds, respectively.
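The averaging step has a familiar closed form when the parameters live in the orthogonal group: projecting an averaged cross-covariance onto the orthogonal group is a Procrustes-type SVD. A sketch of that step under the model $X_{t+1} \approx A X_t$ (the conjugate-gradient refinement on the manifold is not shown):

    import numpy as np

    def estimate_orthogonal_transition(X):
        """Estimate A in the orthogonal group for X[t+1] ~ A X[t].

        X : (T, n, p) sequence of Stiefel-manifold observations
        """
        # Average cross-covariance between consecutive observations.
        M = sum(X[t + 1] @ X[t].T for t in range(len(X) - 1)) / (len(X) - 1)
        # Nearest orthogonal matrix (polar factor / Procrustes solution).
        U, _, Vt = np.linalg.svd(M)
        return U @ Vt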
Learning graphical conditional independence structures from nonlinear, continuous or mixed data is a central challenge in machine learning and the sciences, and many existing methods struggle to scale to thousands of samples or hundreds of variables. We introduce two basis-expansion tools for scalable causal discovery. First, the Basis Function BIC (BF-BIC) score uses truncated additive expansions to approximate nonlinear dependencies. BF-BIC is theoretically consistent under additive models and extends to post-nonlinear (PNL) models via an invertible reparameterization. It remains robust under moderate interactions and supports mixed data through a degenerate-Gaussian embedding for discrete variables. In simulations with fully nonlinear neural causal models (NCMs), BF-BIC outperforms kernel- and constraint-based methods (e.g., KCI, RFCI) in both accuracy and runtime. Second, the Basis Function Likelihood Ratio Test (BF-LRT) provides an approximate conditional independence test that is substantially faster than kernel tests while retaining competitive accuracy. Extensive simulations and a real-data application to Canadian wildfire risk show that, when integrated into hybrid searches, BF-based methods enable interpretable and scalable causal discovery. Implementations are available in Python, R, and Java.
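The scoring idea can be sketched in a few lines: expand candidate parents in a truncated basis, fit additive regressions with and without a variable, and compare BICs. This simplification uses a plain polynomial basis and omits the degenerate-Gaussian treatment of discrete variables:

    import numpy as np

    def basis_bic(y, parents, degree=4):
        """BIC of an additive polynomial-basis regression of y on its parents."""
        n = len(y)
        cols = [np.ones(n)]
        for x in parents:                                # truncated expansion per parent
            cols += [x ** d for d in range(1, degree + 1)]
        Phi = np.column_stack(cols)
        beta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        rss = float(np.sum((y - Phi @ beta) ** 2))
        return n * np.log(rss / n) + Phi.shape[1] * np.log(n)

    def bf_bic_gain(y, x, conditioning, degree=4):
        """Positive when adding x as a parent of y improves the penalized fit."""
        return basis_bic(y, conditioning, degree) - basis_bic(y, conditioning + [x], degree)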
The limit distribution of the nonparametric maximum likelihood estimator for interval-censored data with more than one observation time per unobservable observation is still unknown in general. For the so-called separated case, where the observation times are at a distance larger than a fixed $\epsilon>0$, the limit distribution was derived in [4]. For the non-separated case there is a conjectured limit distribution, given in [9], Section 5.2 of Part 2, but the findings of the present paper suggest that this conjecture may not hold. We prove consistency of a closely related nonparametric isotonic least squares estimator and sketch the proof of a result on its limit distribution. We also provide simulation results to show how the nonparametric MLE and the least squares estimator behave in comparison. Moreover, we discuss a simpler least squares estimator that can be computed in one step but is inferior to the other least squares estimator, since it does not use all the information. For the simplest model of interval censoring, the current status model, the nonparametric maximum likelihood and least squares estimators are the same. This equivalence breaks down if there is more than one observation time per unobservable observation. The computations for the simulation of the more complicated interval censoring model were performed using the iterative convex minorant algorithm and are provided in the GitHub repository [6].
We examine the extent to which sublinear-sample property testing and estimation apply to settings where samples are independently but not identically distributed. Specifically, we consider the following distributional property testing framework: Suppose there is a set of distributions over a discrete support of size $k$, $p_1, p_2,\ldots,p_T$, and we obtain $c$ independent draws from each distribution. Suppose the goal is to learn or test a property of the average distribution, $p_{avg}$. This setup models a number of important practical settings where the individual distributions correspond to heterogeneous entities -- either individuals, chronologically distinct time periods, spatially separated data sources, etc. From a learning standpoint, even with $c=1$ samples from each distribution, $\Theta(k/\varepsilon^2)$ samples are necessary and sufficient to learn $p_{avg}$ to within error $\varepsilon$ in $\ell_1$ distance. To test uniformity or identity -- distinguishing the case that $p_{avg}$ is equal to some reference distribution, versus has $\ell_1$ distance at least $\varepsilon$ from the reference distribution, we show that a linear number of samples in $k$ is necessary given $c=1$ samples from each distribution. In contrast, for $c \ge 2$, we recover the usual sublinear sample testing guarantees of the i.i.d.\ setting: we show that $O(\sqrt{k}/\varepsilon^2 + 1/\varepsilon^4)$ total samples are sufficient, matching the optimal sample complexity in the i.i.d.\ case in the regime where $\varepsilon \ge k^{-1/4}$. Additionally, we show that in the $c=2$ case, there is a constant $\rho > 0$ such that even in the linear regime with $\rho k$ samples, no tester that considers the multiset of samples (ignoring which samples were drawn from the same $p_i$) can perform uniformity testing. We also extend our techniques to the problem of testing "closeness" of two distributions.
Tracking the solution of time-varying variational inequalities is an important problem with applications in game theory, optimization, and machine learning. Existing work considers time-varying games or time-varying optimization problems. For strongly convex optimization problems or strongly monotone games, these results provide tracking guarantees under the assumption that the variation of the time-varying problem is restrained, that is, problems with a sublinear solution path. In this work, we extend existing results in two ways. First, we provide tracking bounds for (1) variational inequalities with a sublinear solution path but not necessarily monotone functions, and (2) periodic time-varying variational inequalities that do not necessarily have a sublinear solution path length. Second, we present an extensive study of the convergence behavior and trajectories of discrete dynamical systems arising from periodic time-varying VIs. We show that these systems can exhibit provably chaotic behavior or can converge to the solution. Finally, we illustrate our theoretical results with experiments.
In this paper, we consider a multi-stage dynamic assortment optimization problem with multinomial logit (MNL) choice modeling under resource knapsack constraints. Given the current resource inventory levels, the retailer makes an assortment decision at each period, and the goal of the retailer is to maximize the total profit from purchases. With the exact optimal dynamic assortment solution being computationally intractable, a practical strategy is to adopt the re-solving technique that periodically re-optimizes deterministic linear programs (LPs) arising from a fluid approximation. However, the fractional structure of MNL makes the fluid approximation in assortment optimization non-linear, which brings new technical challenges. To address this challenge, we propose a new epoch-based re-solving algorithm that effectively transforms the denominator of the objective into a constraint, so that the re-solving technique is applied to a linear program with additional slack variables amenable to practical computation and theoretical analysis. Theoretically, we prove that the regret (i.e., the gap between the re-solving policy and the optimal objective of the fluid approximation) scales logarithmically with the length of the time horizon and the resource capacities.
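Concretely, one standard way to linearize the MNL fluid approximation is the classical sales-based reformulation (the paper's epoch-based re-solving builds a related program with slack variables and inventory dynamics on top): with revenues $r_i$ and preference weights $v_i$ (no-purchase weight normalized to 1), introduce purchase probabilities $x_i$ and a no-purchase variable $x_0$, and maximize $\sum_i r_i x_i$ subject to $x_0 + \sum_i x_i = 1$, $0 \le x_i \le v_i x_0$, plus per-period resource constraints. A sketch with scipy:

    import numpy as np
    from scipy.optimize import linprog

    def mnl_fluid_lp(r, v, A, cap_rate):
        """Sales-based LP relaxation of single-period MNL assortment optimization.

        r, v     : (n,) arrays of revenues and MNL preference weights
        A        : (m, n) per-sale resource consumption
        cap_rate : (m,) resource capacity per period (capacity / horizon)
        Decision vector z = [x_0, x_1, ..., x_n] of choice probabilities.
        """
        n = len(r)
        c = -np.concatenate([[0.0], np.asarray(r, float)])    # maximize revenue
        A_eq = np.ones((1, n + 1)); b_eq = [1.0]              # probabilities sum to 1
        A_mnl = np.hstack([-np.asarray(v, float).reshape(-1, 1), np.eye(n)])
        A_res = np.hstack([np.zeros((A.shape[0], 1)), A])     # resource constraints
        res = linprog(c,
                      A_ub=np.vstack([A_mnl, A_res]),         # x_i <= v_i * x_0 rows first
                      b_ub=np.concatenate([np.zeros(n), cap_rate]),
                      A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * (n + 1))
        return res.x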
ShinyItemAnalysis (SIA) is an R package and shiny application for an interactive presentation of psychometric methods and analysis of multi-item measurements in psychology, education, and social sciences in general. In this article, we present a new feature introduced in the recent version of the package, called "SIA modules", which allows researchers and practitioners to offer new analytical methods for broader use via add-on extensions. SIA modules are designed to integrate with and build upon the SIA interactive application, enabling them to leverage the existing infrastructure for tasks such as data uploading and processing. They can access and further use a range of outputs from various analyses, including models and datasets. Because SIA modules come in R packages (or extend the existing ones), they may come bundled with their datasets, use object-oriented systems, or even compiled code. We illustrate the concepts using sample modules from the newly introduced SIAmodules package and other packages. After providing a general overview of building Shiny applications, we describe how to develop the SIA add-on modules with the support of the new SIAtools package. Finally, we discuss possibilities of future development and emphasize the importance of freely available, interactive psychometric software for dissemination of methodological innovations.
Conventional Multi-Armed Bandit (MAB) algorithms are designed for stationary environments, where the reward distributions associated with the arms do not change with time. In many applications, however, the environment is more accurately modeled as being non-stationary. In this work, piecewise stationary MAB (PS-MAB) environments are investigated, in which the reward distributions associated with a subset of the arms change at some change-points and remain stationary between change-points. Our focus is on the asymptotic analysis of PS-MABs, for which practical algorithms based on change detection have been previously proposed. Our goal is to modularize the design and analysis of such Detection Augmented Bandit (DAB) procedures. To this end, we first provide novel, improved performance lower bounds for PS-MABs. Then, we identify the requirements for stationary bandit algorithms and change detectors in a DAB procedure that are needed for the modularization. We assume that the rewards are sub-Gaussian. Under this assumption and a condition on the separation of the change-points, we show that the analysis of DAB procedures can indeed be modularized, so that the regret bounds can be obtained in a unified manner for various combinations of change detectors and bandit algorithms. Through this analysis, we develop new modular DAB procedures that are order-optimal. Finally, we showcase the practical effectiveness of our modular DAB approach in our experiments, studying its regret performance compared to other methods and investigating its detection capabilities.
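Schematically, a DAB procedure wraps a stationary bandit algorithm with change detectors and restarts on detection. A toy instance pairing UCB1 with a naive window-splitting detector (our illustration, not one of the paper's certified combinations):

    import math

    def dab_ucb(pull, n_arms, horizon, window=50, threshold=1.0):
        """Detection-augmented UCB1: restart all statistics when a change is flagged.

        pull(a, t) -> reward of arm a at time t (the environment callback).
        """
        counts = [0] * n_arms
        sums = [0.0] * n_arms
        recent = [[] for _ in range(n_arms)]
        t0, total = 0, 0.0
        for t in range(horizon):
            scores = []
            for i in range(n_arms):                  # UCB1 index since last restart
                if counts[i] == 0:
                    scores.append(float("inf"))
                else:
                    scores.append(sums[i] / counts[i]
                                  + math.sqrt(2 * math.log(t - t0 + 1) / counts[i]))
            a = max(range(n_arms), key=lambda i: scores[i])
            x = pull(a, t)
            total += x
            counts[a] += 1; sums[a] += x; recent[a].append(x)
            half = window // 2                       # naive two-window change detector
            if len(recent[a]) >= window:
                gap = abs(sum(recent[a][-window:-half]) - sum(recent[a][-half:])) / half
                if gap > threshold:                  # change detected: global restart
                    counts = [0] * n_arms
                    sums = [0.0] * n_arms
                    recent = [[] for _ in range(n_arms)]
                    t0 = t
        return total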
Bayesian optimization is highly effective for optimizing expensive-to-evaluate black-box functions, but it faces significant computational challenges due to the high computational complexity of Gaussian processes, which results in a total time complexity that is quartic with respect to the number of iterations. To address this limitation, we propose the Bayesian Optimization by Kernel regression and density-based Exploration (BOKE) algorithm. BOKE uses kernel regression for efficient function approximation, kernel density for exploration, and integrates them into the confidence bound criteria to guide the optimization process, thus reducing computational costs to quadratic. Our theoretical analysis rigorously establishes the global convergence of BOKE and ensures its robustness in noisy settings. Through extensive numerical experiments on both synthetic and real-world optimization tasks, we demonstrate that BOKE not only performs competitively compared to Gaussian process-based methods and several other baseline methods but also exhibits superior computational efficiency. These results highlight BOKE's effectiveness in resource-constrained environments, providing a practical approach for optimization problems in engineering applications.
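The acquisition logic can be summarized in a few lines: a Nadaraya-Watson estimate for exploitation, a kernel density estimate that penalizes well-explored regions, and a confidence-bound combination. This is a stylized reading of BOKE; the kernels and constants below are placeholders.

    import numpy as np

    def boke_acquire(X, y, candidates, h=0.2, beta=2.0):
        """Pick the next point by kernel-regression mean plus low-density bonus.

        X, y       : observed inputs (n, d) and values (n,)
        candidates : (m, d) points at which the acquisition is scored
        """
        def k(a, b):                                  # Gaussian kernel matrix
            d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2 * h ** 2))
        K = k(candidates, X)                          # (m, n)
        w = K / K.sum(axis=1, keepdims=True).clip(1e-12)
        mean = w @ y                                  # kernel regression estimate
        density = K.mean(axis=1)                      # kernel density at candidates
        score = mean + beta * (1.0 - density / density.max())   # favor sparse regions
        return candidates[np.argmax(score)]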
We study the problem of preconditioning in sequential prediction. Through the theoretical lens of linear dynamical systems, we show that convolving the target sequence corresponds to applying a polynomial to the hidden transition matrix. Building on this insight, we propose a universal preconditioning method that convolves the target with coefficients from orthogonal polynomials such as Chebyshev or Legendre. We prove that this approach reduces regret for two distinct prediction algorithms and yields the first sublinear and hidden-dimension-independent regret bounds (up to logarithmic factors) that hold for systems with marginally stable and asymmetric transition matrices. Finally, extensive synthetic and real-world experiments show that this simple preconditioning strategy improves the performance of a diverse range of algorithms, including recurrent neural networks, and generalizes to signals beyond linear dynamical systems.
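The preconditioning step is literally a fixed-length convolution of the targets: with monomial coefficients $c_0, \ldots, c_k$ of a degree-$k$ Chebyshev polynomial, the learner is trained to predict $\tilde y_t = \sum_i c_i\, y_{t-i}$ instead of $y_t$. A minimal sketch under that reading:

    import numpy as np
    from numpy.polynomial import chebyshev

    def chebyshev_precondition(y, degree=8):
        """Convolve a scalar target sequence with Chebyshev polynomial coefficients.

        Applying the polynomial in the lag operator corresponds to applying it
        to the hidden transition matrix of an underlying linear dynamical system.
        """
        c = chebyshev.cheb2poly([0] * degree + [1])   # monomial coeffs of T_degree
        T = len(y)
        filtered = np.zeros(T)
        for t in range(T):
            for i, ci in enumerate(c):
                if t - i >= 0:
                    filtered[t] += ci * y[t - i]
        return filtered    # the learner is trained on this preconditioned sequence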
The increasing use of generative ML foundation models for image restoration tasks such as super-resolution calls for robust and interpretable uncertainty quantification methods. We address this need by presenting a novel approach based on conformal prediction techniques to create a 'confidence mask' capable of reliably and intuitively communicating where the generated image can be trusted. Our method is adaptable to any black-box generative model, including those locked behind an opaque API, requires only easily attainable data for calibration, and is highly customizable via the choice of a local image similarity metric. We prove strong theoretical guarantees for our method that span fidelity error control (according to our local image similarity metric), reconstruction quality, and robustness in the face of data leakage. Finally, we empirically evaluate these results and establish our method's solid performance.
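The construction reduces to a calibration quantile. A schematic version with a hypothetical per-patch dissimilarity score standing in for the paper's local image similarity metric:

    import numpy as np

    def calibrate_mask_threshold(cal_scores, alpha=0.1):
        """Conformal quantile over per-image local-dissimilarity scores.

        cal_scores : (n_cal,) worst-case local dissimilarity on calibration pairs
        Returns a threshold that covers a fresh image with probability >= 1 - alpha.
        """
        n = len(cal_scores)
        q = np.ceil((n + 1) * (1 - alpha)) / n        # finite-sample correction
        return np.quantile(cal_scores, min(q, 1.0))

    def confidence_mask(patch_scores, threshold):
        """Mark as trustworthy exactly the patches whose score is below threshold."""
        return patch_scores <= threshold

Because only scores of the black-box model's outputs are needed, the same calibration applies to a generator locked behind an opaque API.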
Molecular discovery has brought great benefits to the chemical industry. Various molecule design techniques are developed to identify molecules with desirable properties. Traditional optimization methods, such as genetic algorithms, continue to achieve state-of-the-art results across multiple molecular design benchmarks. However, these techniques rely solely on random walk exploration, which hinders both the quality of the final solution and the convergence speed. To address this limitation, we propose a novel approach called Gradient Genetic Algorithm (Gradient GA), which incorporates gradient information from the objective function into genetic algorithms. Instead of random exploration, each proposed sample iteratively progresses toward an optimal solution by following the gradient direction. We achieve this by designing a differentiable objective function parameterized by a neural network and utilizing the Discrete Langevin Proposal to enable gradient guidance in discrete molecular spaces. Experimental results demonstrate that our method significantly improves both convergence speed and solution quality, outperforming cutting-edge techniques. For example, it achieves up to a 25% improvement in the top-10 score over the vanilla genetic algorithm. The code is publicly available at this https URL.
Part of the success of diffusion models stems from their ability to perform iterative refinement, i.e., repeatedly correcting outputs during generation. However, modern masked discrete diffusion lacks this capability: when a token is generated, it cannot be updated again, even when it introduces an error. Here, we address this limitation by introducing the remasking diffusion model (ReMDM) sampler, a method that can be applied to pretrained masked diffusion models in a principled way and that is derived from a discrete diffusion model with a custom remasking backward process. Most interestingly, ReMDM endows discrete diffusion with a form of inference-time compute scaling. By increasing the number of sampling steps, ReMDM generates natural language outputs that approach the quality of autoregressive models, whereas when the computation budget is limited, ReMDM better maintains quality. ReMDM also improves sample quality of masked diffusion models for discretized images, and in scientific domains such as molecule design, ReMDM facilitates diffusion guidance and pushes the Pareto frontier of controllability relative to classical masking and uniform noise diffusion. We provide the code along with a blog post on the project page: this https URL
We study the complexity of online stochastic gradient descent (SGD) for learning a two-layer neural network with $P$ neurons on isotropic Gaussian data: $f_*(\boldsymbol{x}) = \sum_{p=1}^P a_p\cdot \sigma(\langle\boldsymbol{x},\boldsymbol{v}_p^*\rangle)$, $\boldsymbol{x} \sim \mathcal{N}(0,\boldsymbol{I}_d)$, where the activation $\sigma:\mathbb{R}\to\mathbb{R}$ is an even function with information exponent $k_*>2$ (defined as the lowest degree in the Hermite expansion), $\{\boldsymbol{v}^*_p\}_{p\in[P]}\subset \mathbb{R}^d$ are orthonormal signal directions, and the non-negative second-layer coefficients satisfy $\sum_{p} a_p^2=1$. We focus on the challenging ``extensive-width'' regime $P\gg 1$ and permit diverging condition number in the second layer, covering as a special case the power-law scaling $a_p\asymp p^{-\beta}$ where $\beta\in\mathbb{R}_{\ge 0}$. We provide a precise analysis of SGD dynamics for the training of a student two-layer network to minimize the mean squared error (MSE) objective, and explicitly identify sharp transition times to recover each signal direction. In the power-law setting, we characterize scaling law exponents for the MSE loss with respect to the number of training samples and SGD steps, as well as the number of parameters in the student neural network. Our analysis shows that while the learning of individual teacher neurons exhibits abrupt transitions, the juxtaposition of $P\gg 1$ emergent learning curves at different timescales leads to a smooth scaling law in the cumulative objective.
Why and when is deep better than shallow? We answer this question in a framework that is agnostic to network implementation. We formulate a deep model as an abstract state-transition semigroup acting on a general metric space, and separate the implementation (e.g., ReLU nets, transformers, and chain-of-thought) from the abstract state transition. We prove a bias-variance decomposition in which the variance depends only on the abstract depth-$k$ network and not on the implementation (Theorem 1). We further split the bounds into output and hidden parts to tie the depth dependence of the variance to the metric entropy of the state-transition semigroup (Theorem 2). We then investigate implementation-free conditions under which the variance grows polynomially or logarithmically with depth (Section 4). Combining these with exponential or polynomial bias decay identifies four canonical bias-variance trade-off regimes (EL/EP/PL/PP) and produces explicit optimal depths $k^\ast$. Across regimes, $k^\ast>1$ typically holds, giving a rigorous form of depth supremacy. The lowest generalization error bound is achieved under the EL regime (exp-decay bias + log-growth variance), explaining why and when deep is better, especially for iterative or hierarchical concept classes such as neural ODEs, diffusion/score-matching models, and chain-of-thought reasoning.
A phenomenon known as "Neural Collapse" (NC) in deep classification tasks, in which the penultimate-layer features and the final classifiers exhibit an extremely simple geometric structure, has recently attracted considerable attention, with the expectation that it can deepen our understanding of how deep neural networks behave. The Unconstrained Feature Model (UFM) has been proposed to explain NC theoretically, and a growing body of work extends NC to tasks other than classification and leverages it for practical applications. In this study, we investigate whether a similar phenomenon arises in deep Ordinal Regression (OR) tasks, by combining the cumulative link model for OR with the UFM. We show that a phenomenon we call Ordinal Neural Collapse (ONC) indeed emerges and is characterized by the following three properties: (ONC1) all optimal features in the same class collapse to their within-class mean when regularization is applied; (ONC2) these class means align with the classifier, meaning that they collapse onto a one-dimensional subspace; (ONC3) the optimal latent variables (corresponding to logits or preactivations in classification tasks) are aligned according to the class order, and in particular, in the zero-regularization limit, a highly local and simple geometric relationship emerges between the latent variables and the threshold values. We prove these properties analytically within the UFM framework with fixed threshold values and corroborate them empirically across a variety of datasets. We also discuss how these insights can be leveraged in OR, highlighting the use of fixed thresholds.
In recent years, diffusion models trained on equilibrium molecular distributions have proven effective for sampling biomolecules. Beyond direct sampling, the score of such a model can also be used to derive the forces that act on molecular systems. However, while classical diffusion sampling usually recovers the training distribution, the corresponding energy-based interpretation of the learned score is often inconsistent with this distribution, even for low-dimensional toy systems. We trace this inconsistency to inaccuracies of the learned score at very small diffusion timesteps, where the model must capture the correct evolution of the data distribution. In this regime, diffusion models fail to satisfy the Fokker--Planck equation, which governs the evolution of the score. We interpret this deviation as one source of the observed inconsistencies and propose an energy-based diffusion model with a Fokker--Planck-derived regularization term to enforce consistency. We demonstrate our approach by sampling and simulating multiple biomolecular systems, including fast-folding proteins, and by introducing a state-of-the-art transferable Boltzmann emulator for dipeptides that supports simulation and achieves improved consistency and efficient sampling. Our code, model weights, and self-contained JAX and PyTorch notebooks are available at this https URL.
Simulation-based problems involving mixed-variable inputs frequently feature domains that are hierarchical, conditional, heterogeneous, or tree-structured. These characteristics pose challenges for data representation, modeling, and optimization. This paper reviews extensive literature on these structured input spaces and proposes a unified framework that generalizes existing approaches. In this framework, input variables may be continuous, integer, or categorical. A variable is described as meta if its value governs the presence of other decreed variables, enabling the modeling of conditional and hierarchical structures. We further introduce the concept of partially-decreed variables, whose activation depends on contextual conditions. To capture these inter-variable hierarchical relationships, we introduce design space graphs, combining principles from feature modeling and graph theory. This allows the definition of general hierarchical domains suitable for describing complex system architectures. Our framework defines hierarchical distances and kernels to enable surrogate modeling and optimization on hierarchical domains. We demonstrate its effectiveness on complex system design problems, including a neural network and a green-aircraft case study. Our methods are available in the open-source Surrogate Modeling Toolbox (SMT 2.0).
Acceleration is a celebrated cornerstone of convex optimization, enabling gradient-based algorithms to converge sublinearly in the condition number. A major open question is whether an analogous acceleration phenomenon is possible for log-concave sampling. Underdamped Langevin dynamics (ULD) has long been conjectured to be the natural candidate for acceleration, but a central challenge is that its degeneracy necessitates the development of new analysis approaches, e.g., the theory of hypocoercivity. Although recent breakthroughs established ballistic acceleration for the (continuous-time) ULD diffusion via space-time Poincaré inequalities, (discrete-time) algorithmic results remain entirely open: the discretization error of existing analysis techniques dominates any continuous-time acceleration. In this paper, we give a new coupling-based local error framework for analyzing ULD and its numerical discretizations in KL divergence. This extends the framework in Shifted Composition III from uniformly elliptic diffusions to degenerate diffusions, and shares its virtues: the framework is user-friendly, applies to sophisticated discretization schemes, and does not require contractivity. Applying this framework to the randomized midpoint discretization of ULD establishes the first ballistic acceleration result for log-concave sampling (i.e., sublinear dependence on the condition number). Along the way, we also obtain the first $d^{1/3}$ iteration complexity guarantee for sampling to constant total variation error in dimension $d$.
This paper presents a theoretical analysis of two of the most impactful interventions in modern learning from demonstration in robotics and continuous control: the practice of action-chunking (predicting sequences of actions in open-loop) and exploratory augmentation of expert demonstrations. Though recent results show that learning from demonstration, also known as imitation learning (IL), can suffer errors that compound exponentially with task horizon in continuous settings, we demonstrate that action chunking and exploratory data collection circumvent exponential compounding errors in different regimes. Our results identify control-theoretic stability as the key mechanism underlying the benefits of these interventions. On the empirical side, we validate our predictions and the role of control-theoretic stability through experimentation on popular robot learning benchmarks. On the theoretical side, we demonstrate that the control-theoretic lens provides fine-grained insights into how compounding error arises, leading to tighter statistical guarantees on imitation learning error when these interventions are applied than previous techniques based on information-theoretic considerations alone.
Generative modeling, representation learning, and classification are three core problems in machine learning (ML), yet their state-of-the-art (SoTA) solutions remain largely disjoint. In this paper, we ask: Can a unified principle address all three? Such unification could simplify ML pipelines and foster greater synergy across tasks. We introduce Latent Zoning Network (LZN) as a step toward this goal. At its core, LZN creates a shared Gaussian latent space that encodes information across all tasks. Each data type (e.g., images, text, labels) is equipped with an encoder that maps samples to disjoint latent zones, and a decoder that maps latents back to data. ML tasks are expressed as compositions of these encoders and decoders: for example, label-conditional image generation uses a label encoder and image decoder; image embedding uses an image encoder; classification uses an image encoder and label decoder. We demonstrate the promise of LZN in three increasingly complex scenarios: (1) LZN can enhance existing models (image generation): When combined with the SoTA Rectified Flow model, LZN improves FID on CIFAR10 from 2.76 to 2.59, without modifying the training objective. (2) LZN can solve tasks independently (representation learning): LZN can implement unsupervised representation learning without auxiliary loss functions, outperforming the seminal MoCo and SimCLR methods by 9.3% and 0.2%, respectively, on downstream linear classification on ImageNet. (3) LZN can solve multiple tasks simultaneously (joint generation and classification): With image and label encoders/decoders, LZN performs both tasks jointly by design, improving FID and achieving SoTA classification accuracy on CIFAR10. The code and trained models are available at this https URL. The project website is at this https URL.
Missing values are common in real-world time series, and multivariate time series forecasting with missing values (MTSF-M) has become a crucial area of research for ensuring reliable predictions. To address the challenge of missing data, current approaches have developed an imputation-then-prediction framework that uses imputation modules to fill in missing values, followed by forecasting on the imputed data. However, this framework overlooks a critical issue: there is no ground truth for the missing values, making the imputation process susceptible to errors that can degrade prediction accuracy. In this paper, we conduct a systematic empirical study and reveal that imputation without direct supervision can corrupt the underlying data distribution and actively degrade prediction accuracy. To address this, we propose a paradigm shift that moves away from imputation and directly predicts from the partially observed time series. We introduce Consistency-Regularized Information Bottleneck (CRIB), a novel framework built on the Information Bottleneck principle. CRIB combines a unified-variate attention mechanism with a consistency regularization scheme to learn robust representations that filter out noise introduced by missing values while preserving essential predictive signals. Comprehensive experiments on four real-world datasets demonstrate the effectiveness of CRIB, which predicts accurately even under high missing rates. Our code is available in this https URL.
Precipitation nowcasting, which aims to predict precipitation within the next 0 to 6 hours, is critical for disaster mitigation and real-time response planning. However, most time series forecasting benchmarks in meteorology are evaluated on variables with strong periodicity, such as temperature and humidity, which fail to reflect model capabilities in more complex and practically relevant meteorological scenarios such as precipitation nowcasting. To address this gap, we propose RainfallBench, a benchmark designed for precipitation nowcasting, a highly challenging and practically relevant task characterized by zero inflation, temporal decay, and non-stationarity. The dataset is derived from five years of meteorological observations, recorded at hourly intervals across six essential variables and collected from more than 140 Global Navigation Satellite System (GNSS) stations globally. In particular, it incorporates precipitable water vapor (PWV), a crucial indicator of rainfall that is absent from other datasets. We further design specialized evaluation protocols to assess model performance on key meteorological challenges, including multi-scale prediction, multi-resolution forecasting, and extreme rainfall events, benchmarking 17 state-of-the-art models across six major architectures on RainfallBench. Additionally, to address the zero-inflation and temporal-decay issues overlooked by existing models, we introduce the Bi-Focus Precipitation Forecaster (BFPF), a plug-and-play module that incorporates domain-specific priors to enhance rainfall time series forecasting. Statistical analysis and ablation studies validate the comprehensiveness of our dataset as well as the superiority of our methodology.
We study the problem of long-term (multi-day) mapping of a river plume using multiple autonomous underwater vehicles (AUVs), focusing on the Douro river as a representative use case. We propose an energy- and communication-efficient multi-agent reinforcement learning approach in which a central coordinator intermittently communicates with the AUVs, collecting measurements and issuing commands. Our approach integrates spatiotemporal Gaussian process regression (GPR) with a multi-head Q-network controller that regulates direction and speed for each AUV. Simulations using the Delft3D ocean model demonstrate that our method consistently outperforms both single- and multi-agent benchmarks, with scaling the number of agents improving both mean squared error (MSE) and operational endurance. In some instances, doubling the number of AUVs can more than double endurance while maintaining or improving accuracy, underscoring the benefits of multi-agent coordination. Our learned policies generalize across unseen seasonal regimes over different months and years, demonstrating promise for future data-driven long-term monitoring of dynamic plume environments.
It is known that the minimum-mean-squared-error (MMSE) denoiser under Gaussian noise can be written as a proximal operator, which suffices for asymptotic convergence of plug-and-play (PnP) methods but does not reveal the structure of the induced regularizer or give convergence rates. We show that the MMSE denoiser corresponds to a regularizer that can be written explicitly as an upper Moreau envelope of the negative log-marginal density, which in turn implies that the regularizer is 1-weakly convex. Using this property, we derive (to the best of our knowledge) the first sublinear convergence guarantee for PnP proximal gradient descent with an MMSE denoiser. We validate the theory with a one-dimensional synthetic study that recovers the implicit regularizer. We also validate the theory with imaging experiments (deblurring and computed tomography), which exhibit the predicted sublinear behavior.
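The algorithm whose rate is analyzed is plain plug-and-play proximal gradient descent: a gradient step on the data term followed by the denoiser in place of a proximal operator. A generic sketch, with a soft-threshold standing in for the MMSE denoiser of the analysis:

    def pnp_pgd(grad_f, denoise, x0, step=1.0, n_iter=200):
        """Plug-and-play proximal gradient: gradient step on the data term,
        then a denoiser in place of the proximal operator of the regularizer."""
        x = x0.copy()
        for _ in range(n_iter):
            x = denoise(x - step * grad_f(x))
        return x

    # Toy usage with quadratic data term ||x - b||^2 / 2.
    import numpy as np
    b = np.array([3.0, -0.2, 0.1])
    x_hat = pnp_pgd(grad_f=lambda x: x - b,
                    denoise=lambda x: np.sign(x) * np.maximum(np.abs(x) - 0.1, 0),
                    x0=np.zeros(3), step=0.5)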