We present a new method for the statistical process control of lattice structures using tools from Topological Data Analysis. Motivated by applications in additive manufacturing, such as aerospace components and biomedical implants, where hollow lattice geometries are critical, the proposed framework is based on monitoring the persistent homology properties of parts. Specifically, we focus on homological features of dimensions zero and one, corresponding to connected components and one-dimensional loops, to characterize and detect changes in the topology of lattice structures. A nonparametric hypothesis testing procedure and a control charting scheme are introduced to monitor these features during production. Furthermore, we conduct an extensive run-length analysis on a variety of simulated yet realistic lattice-structured parts. Our results demonstrate that persistent homology is well-suited for detecting topological anomalies in complex geometries and offers a robust, intrinsically geometric alternative to other SPC methods for mesh and point data.
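To make the monitored quantities concrete, the sketch below computes dimension-zero and dimension-one persistence diagrams for a toy point cloud sampled along the struts of a planar lattice frame, and summarizes the loops by their total persistence, a scalar one could place on a control chart. It uses the ripser package and a synthetic lattice purely for illustration; neither the library choice nor the summary statistic is taken from the paper.

```python
# Minimal sketch (not the paper's implementation): H0/H1 persistence of a sampled
# lattice frame via ripser, plus a scalar H1 summary one could monitor over parts.
import numpy as np
from ripser import ripser  # assumes the `ripser` package is installed

rng = np.random.default_rng(0)

# Toy lattice: a 4 x 4 planar grid frame sampled along its struts (3 x 3 = 9 square holes).
grid = np.linspace(0.0, 1.0, 4)
segments = []
for g in grid:
    t = rng.uniform(0.0, 1.0, size=120)
    segments.append(np.column_stack([t, np.full_like(t, g)]))   # horizontal struts
    segments.append(np.column_stack([np.full_like(t, g), t]))   # vertical struts
points = np.vstack(segments)
points += 0.004 * rng.normal(size=points.shape)                 # small manufacturing noise

# Persistence diagrams for dimension 0 (connected components) and 1 (loops).
dgms = ripser(points, maxdim=1)["dgms"]
h1 = dgms[1]
finite_h1 = h1[np.isfinite(h1[:, 1])]
lifetimes = finite_h1[:, 1] - finite_h1[:, 0]

print("prominent H1 loops (lifetime > 0.05):", int((lifetimes > 0.05).sum()))
print("total H1 persistence (candidate charting statistic):", float(lifetimes.sum()))
```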
We study the convergence rate of learning pairwise interactions in single-layer attention-style models, where tokens interact through a weight matrix and a non-linear activation function. We prove that the minimax rate is $M^{-\frac{2\beta}{2\beta+1}}$, where $M$ is the sample size; the rate depends only on the smoothness $\beta$ of the activation and is crucially independent of the token count, ambient dimension, and rank of the weight matrix. These results highlight a fundamental dimension-free statistical efficiency of attention-style nonlocal models, even when the weight matrix and activation are not separately identifiable, and provide a theoretical understanding of the attention mechanism and its training.
In Bayesian Optimization (BO), additive assumptions can mitigate the twin difficulties of modeling and searching a complex function in high dimension. However, common acquisition functions, like the Additive Lower Confidence Bound, ignore pairwise covariances between dimensions, which we call \textit{bilateral uncertainty} (BU), imposing a second layer of approximations. While theoretical results indicate that asymptotically not much is lost in doing so, little is known about the practical effects of this assumption with small budgets. In this article, we show that by exploiting conditional independence, Thompson Sampling respecting BU can be conducted efficiently. We use this fact to carry out an empirical investigation into the loss incurred by ignoring BU, finding that the additive approximation to Thompson Sampling does indeed have, on balance, worse performance than the exact method, but that this difference is of little practical significance. This buttresses the theoretical understanding and suggests that the BU-ignoring approximation is sufficient for BO in practice, even in the non-asymptotic regime.
Random geometric graphs (RGGs) offer a powerful tool for analyzing the geometric and dependence structures in real-world networks. For example, it has been observed that RGGs are a good model for protein-protein interaction networks. In RGGs, nodes are randomly distributed over an $m$-dimensional metric space, and edges connect the nodes if and only if their distance is less than some threshold. When fitting RGGs to real-world networks, the first step is to specify or estimate the dimension $m$. However, it is not clear whether the prespecified dimension is equal to the true dimension. In this paper, we investigate this problem using hypothesis testing. Under the null hypothesis, the dimension is equal to a specific value, while the alternative hypothesis asserts that the dimension is not equal to that value. We propose the first statistical test for this problem. Under the null hypothesis, the proposed test statistic converges in law to the standard normal distribution, and under the alternative hypothesis, the test statistic is unbounded in probability. We derive the asymptotic distribution by leveraging the asymptotic theory of degenerate U-statistics with a kernel function dependent on the number of nodes. This approach differs significantly from prevailing methods used in network hypothesis testing problems. Moreover, we also propose an efficient approach to compute the test statistic based on the adjacency matrix. Simulation studies show that the proposed test performs well. We also apply the proposed test to multiple real-world networks to test their dimensions.
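As a point of reference for the model being tested, the following sketch generates an RGG with nodes uniform on $[0,1]^m$ and edges between nodes closer than a threshold $r$, returning the adjacency matrix on which a test statistic would be computed. The dimension, threshold, and graph size are illustrative choices, and the proposed test statistic itself is not reproduced.

```python
# Minimal sketch of the RGG model: n nodes uniform on [0,1]^m, edge iff distance < r.
import numpy as np
from scipy.spatial.distance import pdist, squareform

def random_geometric_graph(n, m, r, rng):
    points = rng.uniform(size=(n, m))
    A = (squareform(pdist(points)) < r).astype(int)  # adjacency matrix from pairwise distances
    np.fill_diagonal(A, 0)                           # no self-loops
    return A

rng = np.random.default_rng(0)
A = random_geometric_graph(n=500, m=3, r=0.2, rng=rng)   # illustrative n, m, r
print("average degree:", A.sum(axis=1).mean())
```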
Contrastive dimension reduction (CDR) methods aim to extract signal unique to or enriched in a treatment (foreground) group relative to a control (background) group. This setting arises in many scientific domains, such as genomics, imaging, and time series analysis, where traditional dimension reduction techniques such as Principal Component Analysis (PCA) may fail to isolate the signal of interest. In this review, we provide a systematic overview of existing CDR methods. We propose a pipeline for analyzing case-control studies together with a taxonomy of CDR methods based on their assumptions, objectives, and mathematical formulations, unifying disparate approaches under a shared conceptual framework. We highlight key applications and challenges in existing CDR methods, and identify open questions and future directions. By providing a clear framework for CDR and its applications, we aim to facilitate broader adoption and motivate further developments in this emerging field.
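As one concrete instance of the CDR family reviewed here, the sketch below implements contrastive PCA in its simplest form: eigendecompose the difference between the foreground and background covariance matrices, with a user-chosen contrast strength. The synthetic data, dimensions, and the value of the contrast parameter `alpha` are illustrative assumptions, not taken from the review.

```python
# Minimal sketch of contrastive PCA, one canonical CDR method: directions with high
# foreground variance but low background variance from eigendecomposing C_fg - alpha * C_bg.
import numpy as np

def contrastive_pca(foreground, background, alpha=1.0, n_components=2):
    Xf = foreground - foreground.mean(axis=0)
    Xb = background - background.mean(axis=0)
    C_fg = Xf.T @ Xf / (len(Xf) - 1)
    C_bg = Xb.T @ Xb / (len(Xb) - 1)
    evals, evecs = np.linalg.eigh(C_fg - alpha * C_bg)   # symmetric eigendecomposition
    order = np.argsort(evals)[::-1][:n_components]       # top contrastive directions
    return evecs[:, order]

rng = np.random.default_rng(1)
background = rng.normal(size=(500, 10))                  # shared/background variation
foreground = rng.normal(size=(500, 10))
foreground[:, 0] += rng.choice([-3.0, 3.0], size=500)    # signal unique to the foreground group

V = contrastive_pca(foreground, background, alpha=2.0)
print("top contrastive direction (loads mainly on feature 0):", np.round(V[:, 0], 2))
```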
The Maximum Mean Discrepancy (MMD) is a widely used multivariate distance metric for two-sample testing. The standard MMD test statistic has an intractable null distribution, typically requiring costly resampling or permutation approaches for calibration. In this work, we leverage a martingale interpretation of the estimated squared MMD to propose the martingale MMD (mMMD), a quadratic-time statistic which has a limiting standard Gaussian distribution under the null. Moreover, we show that the test is consistent against any fixed alternative and that, for large sample sizes, mMMD offers substantial computational savings over the standard MMD test, with only a minor loss in power.
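For context, the sketch below computes the standard quadratic-time unbiased estimate of the squared MMD with a Gaussian kernel; the martingale reformulation and Gaussian calibration that define mMMD are the paper's contribution and are not reproduced, and the kernel bandwidth and data are illustrative.

```python
# Background sketch: unbiased quadratic-time estimate of MMD^2 with a Gaussian kernel.
import numpy as np

def gaussian_kernel(A, B, bandwidth):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def mmd2_unbiased(X, Y, bandwidth=1.0):
    m, n = len(X), len(Y)
    Kxx = gaussian_kernel(X, X, bandwidth)
    Kyy = gaussian_kernel(Y, Y, bandwidth)
    Kxy = gaussian_kernel(X, Y, bandwidth)
    # Drop diagonal terms so the estimator is an unbiased U-statistic.
    term_xx = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_yy = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_xx + term_yy - 2.0 * Kxy.mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(300, 2))
Y = rng.normal(0.5, 1.0, size=(300, 2))   # mean-shifted alternative
print("unbiased MMD^2 estimate:", mmd2_unbiased(X, Y))
```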
In many causal inference problems, multiple action variables share the same causal role, such as mediators, factors, network units, or genotypes, yet lack a natural ordering. To avoid ambiguity in interpretation, causal estimands should remain unchanged under relabeling, an implicit principle we refer to as permutation invariance. We formally characterize this principle, analyze its algebraic and combinatorial structure for verification, and present a class of weighted estimands that are permutation-invariant while capturing interactions of all orders. We further provide guidance on selecting weights that yield residual-free estimands, whose inclusion-exclusion sums capture the maximal effect, and extend our results to ratio effect measures.
Active subspace analysis uses the leading eigenspace of the gradient's second moment to conduct supervised dimension reduction. In this article, we extend this methodology to real-valued functionals on Hilbert space. We define an operator which coincides with the active subspace matrix when applied to a Euclidean space. We show that many of the desirable properties of active subspace analysis extend directly to the infinite-dimensional setting. We also propose a Monte Carlo procedure and discuss its convergence properties. Finally, we deploy this methodology to create visualizations and improve modeling and optimization on complex test problems.
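The finite-dimensional starting point can be sketched in a few lines: estimate the gradient second-moment matrix by Monte Carlo and take its leading eigenvectors. The toy ridge function below is an illustrative assumption, not an example from the article.

```python
# Minimal sketch of the Euclidean active subspace estimator this work generalizes:
# estimate C = E[grad f(x) grad f(x)^T] by sampling, then take its leading eigenspace.
import numpy as np

def grad_f(x):
    # Toy ridge function f(x) = sin(a^T x): its gradient always points along a,
    # so the active subspace is one-dimensional.
    a = np.array([1.0, 0.5, 0.0, 0.0])
    return np.cos(a @ x) * a

rng = np.random.default_rng(0)
d, n_samples = 4, 2000
C = np.zeros((d, d))
for x in rng.normal(size=(n_samples, d)):
    g = grad_f(x)
    C += np.outer(g, g) / n_samples

evals, evecs = np.linalg.eigh(C)
print("eigenvalues (ascending):", np.round(evals, 3))
print("leading active direction:", np.round(evecs[:, -1], 3))   # ~ proportional to a
```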
We study statistical estimation under local differential privacy (LDP) when users may hold heterogeneous privacy levels and accuracy must be guaranteed with high probability. Departing from the common in-expectation analyses, and for one-dimensional and multi-dimensional mean estimation problems, we develop finite sample upper bounds in $\ell_2$-norm that hold with probability at least $1-\beta$. We complement these results with matching minimax lower bounds, establishing the optimality (up to constants) of our guarantees in the heterogeneous LDP regime. We further study distribution learning in $\ell_\infty$-distance, designing an algorithm with high-probability guarantees under heterogeneous privacy demands. Our techniques offer principled guidance for designing mechanisms in settings with user-specific privacy levels.
We propose a new general framework for recovering low-rank structure in optimal transport using Schatten-$p$ norm regularization. Our approach extends existing methods that promote sparse and interpretable transport maps or plans, while providing a unified and principled family of convex programs that encourage low-dimensional structure. The convexity of our formulation enables direct theoretical analysis: we derive optimality conditions and prove recovery guarantees for low-rank couplings and barycentric maps in simplified settings. To efficiently solve the proposed program, we develop a mirror descent algorithm with convergence guarantees for $p \geq 1$. Experiments on synthetic and real data demonstrate the method's efficiency, scalability, and ability to recover low-rank transport structures.
Extreme value theory (EVT) is well suited to modeling extreme events, such as floods, heatwaves, or mechanical failures, as required for reliability assessment of systems across multiple domains for risk management and loss prevention. The block maxima (BM) method, a particular approach within EVT, starts by dividing the historical observations into blocks. The sample of block maxima can then be shown, under some assumptions, to converge to a known class of distributions, which can be used for analysis. The question of automatic (i.e., without explicit expert input) selection of the block size remains an open challenge. This work proposes a novel Bayesian framework, namely, multi-objective Bayesian optimization (MOBO-D*), to optimize BM blocking for accurate modeling and prediction of extremes in EVT. MOBO-D* formulates two objectives -- goodness-of-fit of the distribution of extreme events and accurate prediction of extreme events -- to construct an estimated Pareto front for optimal blocking choices. The efficacy of the proposed framework is illustrated by applying it to a real-world case study from the domain of additive manufacturing as well as a synthetic dataset. MOBO-D* outperforms a number of benchmarks and can be naturally extended to high-dimensional cases. The computational experiments show that it can be a promising approach in applications that require repeated automated block size selection, such as optimization or the analysis of many datasets at once.
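The block maxima step being optimized can be sketched as follows: split the series into equal blocks, take block maxima, and fit a generalized extreme value distribution with scipy. The synthetic series and the fixed block size are illustrative stand-ins for the quantities MOBO-D* would tune.

```python
# Minimal sketch of the block maxima (BM) step: block the series, take maxima, fit a GEV.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
series = rng.gumbel(loc=0.0, scale=1.0, size=5000)    # toy "historical observations"

def block_maxima(x, block_size):
    n_blocks = len(x) // block_size
    return x[: n_blocks * block_size].reshape(n_blocks, block_size).max(axis=1)

maxima = block_maxima(series, block_size=50)          # block size is a hypothetical fixed choice
shape, loc, scale = stats.genextreme.fit(maxima)      # GEV fit to the block maxima
print("GEV fit: shape=%.3f loc=%.3f scale=%.3f" % (shape, loc, scale))
print("estimated 100-block return level:", stats.genextreme.ppf(1 - 1 / 100, shape, loc, scale))
```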
Reconstructing evolutionary histories and estimating the rate of evolution from molecular sequence data is of central importance in evolutionary biology and infectious disease research. We introduce a flexible Bayesian phylogenetic inference framework that accommodates changing evolutionary rates over time by modeling sequence character substitution processes as inhomogeneous continuous-time Markov chains (ICTMCs) acting along the unknown phylogeny, where the rate is an unknown, positive, and integrable function of time. The integral of the rate function appears in the finite-time transition probabilities of the ICTMCs that must be efficiently computed for all branches of the phylogeny to evaluate the observed data likelihood. Circumventing the computational challenges that arise from a fully nonparametric function, we parameterize the rate function as piecewise constant with a large number of epochs, which we call the polyepoch clock model. This makes the transition probability computation relatively inexpensive while continuing to flexibly capture rate change over time. We employ a Gaussian Markov random field prior to achieve temporal smoothing of the estimated rate function. Hamiltonian Monte Carlo sampling enabled by scalable gradient evaluation under this model makes our framework computationally efficient. We assess the performance of the polyepoch clock model in recovering the true timescales and rates through simulations under two different evolutionary scenarios. We then apply the polyepoch clock model to examine the rates of West Nile virus, Dengue virus and influenza A/H3N2 evolution, and estimate the time-varying rate of SARS-CoV-2 spread in Europe in 2020.
Stochastic Gradient Descent (SGD) and its Ruppert-Polyak averaged variant (ASGD) lie at the heart of modern large-scale learning, yet their theoretical properties in high-dimensional settings remain poorly understood. In this paper, we provide rigorous statistical guarantees for constant learning-rate SGD and ASGD in high-dimensional regimes. Our key innovation is to transfer powerful tools from high-dimensional time series to online learning. Specifically, by viewing SGD as a nonlinear autoregressive process and adapting existing coupling techniques, we prove the geometric-moment contraction of high-dimensional SGD for constant learning rates, thereby establishing asymptotic stationarity of the iterates. Building on this, we derive the $q$-th moment convergence of SGD and ASGD for any $q\ge2$ in general $\ell^s$-norms, and, in particular, the $\ell^{\infty}$-norm that is frequently adopted in high-dimensional sparse or structured models. Furthermore, we provide a sharp high-probability concentration analysis, which yields probabilistic bounds for high-dimensional ASGD. Beyond closing a critical gap in SGD theory, our proposed framework offers a novel toolkit for analyzing a broad class of high-dimensional learning algorithms.
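For concreteness, the sketch below runs constant learning-rate SGD for linear regression together with its running Ruppert-Polyak average, the recursion whose stationarity and moment behavior are analyzed above; the dimension, learning rate, and noise level are illustrative choices.

```python
# Minimal sketch: constant learning-rate SGD for linear regression (an autoregressive
# recursion in the iterates) and its Ruppert-Polyak average, with l_inf errors reported.
import numpy as np

rng = np.random.default_rng(0)
d, n_steps, lr = 50, 20000, 0.01
theta_star = rng.normal(size=d) / np.sqrt(d)

theta = np.zeros(d)
theta_bar = np.zeros(d)
for t in range(1, n_steps + 1):
    x = rng.normal(size=d)
    y = x @ theta_star + 0.1 * rng.normal()
    grad = (x @ theta - y) * x              # stochastic gradient of the squared loss
    theta = theta - lr * grad               # SGD with constant step size
    theta_bar += (theta - theta_bar) / t    # running Ruppert-Polyak average (ASGD)

print("SGD  l_inf error:", np.max(np.abs(theta - theta_star)))
print("ASGD l_inf error:", np.max(np.abs(theta_bar - theta_star)))
```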
Functional logistic regression is a popular model to capture a linear relationship between a binary response and functional predictor variables. However, many methods used for parameter estimation in functional logistic regression are sensitive to outliers, which may lead to inaccurate parameter estimates and inferior classification accuracy. We propose a robust estimation procedure for functional logistic regression, in which the observations of the functional predictor are projected onto a set of finite-dimensional subspaces via robust functional principal component analysis. This dimension-reduction step reduces the outlying effects in the functional predictor. The logistic regression coefficient is estimated using an M-type estimator based on the binary response and robust principal component scores. In doing so, we provide robust estimates by minimizing the effects of outliers in the binary response and functional predictor variables. Via a series of Monte Carlo simulations and using hand radiograph data, we examine parameter estimation and classification accuracy for the response variable. We find that the robust procedure outperforms some existing robust and non-robust methods when outliers are present, while producing competitive results when outliers are absent. In addition, the proposed method is computationally more efficient than some existing robust alternatives.
We study neural network compressibility by using singular learning theory to extend the minimum description length (MDL) principle to singular models like neural networks. Through extensive experiments on the Pythia suite with quantization, factorization, and other compression techniques, we find that complexity estimates based on the local learning coefficient (LLC) are closely, and in some cases, linearly correlated with compressibility. Our results provide a path toward rigorously evaluating the limits of model compression.
We study the decoupled multi-armed bandit (MAB) problem, where the learner selects one arm for exploration and one arm for exploitation in each round. The loss of the explored arm is observed but not counted, while the loss of the exploited arm is incurred without being observed. We propose a policy within the Follow-the-Perturbed-Leader (FTPL) framework using Pareto perturbations. Our policy achieves (near-)optimal regret regardless of the environment, i.e., Best-of-Both-Worlds (BOBW): constant regret in the stochastic regime, improving upon the optimal bound for standard MABs, and minimax optimal regret in the adversarial regime. Moreover, the practicality of our policy stems from avoiding both the convex optimization step required by the previous BOBW policy, Decoupled-Tsallis-INF (Rouyer & Seldin, 2020), and the resampling step that is typically necessary in FTPL. Consequently, it achieves a substantial computational improvement, about $20$ times faster than Decoupled-Tsallis-INF, while also demonstrating better empirical performance in both regimes. Finally, we empirically show that our approach outperforms a pure exploration policy, and that naively combining a pure exploration policy with a standard exploitation policy is suboptimal.
Accurate intraday forecasts are essential for power system operations, complementing day-ahead forecasts that gradually lose relevance as new information becomes available. This paper introduces a Bayesian updating mechanism that converts fully probabilistic day-ahead forecasts into intraday forecasts without retraining or re-inference. The approach conditions the Gaussian mixture output of a conditional variational autoencoder-based forecaster on observed measurements, yielding an updated distribution for the remaining horizon that preserves its probabilistic structure. This enables consistent point, quantile, and ensemble forecasts while remaining computationally efficient and suitable for real-time applications. Experiments on household electricity consumption and photovoltaic generation datasets demonstrate that the proposed method improves forecast accuracy by up to 25% across likelihood-, sample-, quantile-, and point-based metrics. The largest gains occur in time steps with strong temporal correlation to observed data, and the use of pattern dictionary-based covariance structures further enhances performance. The results highlight a theoretically grounded framework for intraday forecasting in modern power systems.
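The core updating step can be sketched directly: condition a Gaussian mixture over the full horizon on measurements of its first few coordinates, reweighting components by the marginal likelihood of the observations and applying the usual Gaussian conditioning formulas to each component. The two-component mixture below is a toy stand-in for the forecaster's output, not the paper's model.

```python
# Minimal sketch of the updating step: condition a Gaussian mixture over a 4-step
# horizon on measurements of its first k coordinates (toy numbers only).
import numpy as np
from scipy.stats import multivariate_normal

def condition_gmm(weights, means, covs, observed, k):
    new_w, new_means, new_covs = [], [], []
    for w, mu, S in zip(weights, means, covs):
        mu_o, mu_r = mu[:k], mu[k:]
        S_oo, S_or, S_ro, S_rr = S[:k, :k], S[:k, k:], S[k:, :k], S[k:, k:]
        gain = S_ro @ np.linalg.inv(S_oo)
        new_means.append(mu_r + gain @ (observed - mu_o))   # conditional mean
        new_covs.append(S_rr - gain @ S_or)                 # conditional covariance
        new_w.append(w * multivariate_normal.pdf(observed, mu_o, S_oo))  # re-score component
    new_w = np.array(new_w)
    return new_w / new_w.sum(), new_means, new_covs

weights = [0.5, 0.5]
means = [np.full(4, 1.0), np.full(4, 2.0)]
covs = [0.2 * np.eye(4) + 0.1, 0.2 * np.eye(4) + 0.1]       # correlated horizon steps
w_post, m_post, c_post = condition_gmm(weights, means, covs, observed=np.array([1.9, 2.1]), k=2)
print("updated weights:", np.round(w_post, 3))
print("updated mean of remaining horizon (component 2):", np.round(m_post[1], 2))
```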
Meta-analysis can be formulated as combining $p$-values across studies into a joint $p$-value function, from which point estimates and confidence intervals can be derived. We extend the meta-analytic estimation framework based on combined $p$-value functions to incorporate uncertainty in heterogeneity estimation by employing a confidence distribution approach. Specifically, the confidence distribution of Edgington's method is adjusted according to the confidence distribution of the heterogeneity parameter constructed from the generalized heterogeneity statistic. Simulation results suggest that 95% confidence intervals approach nominal coverage under most scenarios involving more than three studies and heterogeneity. Under no heterogeneity or for only three studies, the confidence interval typically overcovers, but is often narrower than the Hartung-Knapp-Sidik-Jonkman interval. The point estimator exhibits small bias under model misspecification and moderate to large heterogeneity. Edgington's method provides a practical alternative to classical approaches, with adjustment for heterogeneity estimation uncertainty often improving confidence interval coverage.
We develop interacting particle algorithms for learning latent variable models with energy-based priors. To do so, we leverage recent developments in particle-based methods for solving maximum marginal likelihood estimation (MMLE) problems. Specifically, we provide a continuous-time framework for learning latent energy-based models, by defining stochastic differential equations (SDEs) that provably solve the MMLE problem. We obtain a practical algorithm as a discretisation of these SDEs and provide theoretical guarantees for the convergence of the proposed algorithm. Finally, we demonstrate the empirical effectiveness of our method on synthetic and image datasets.
In precision medicine, one of the most important problems is estimating the optimal individualized treatment rules (ITR), which typically involves recommending treatment decisions based on fully observed individual characteristics of patients to maximize overall clinical benefit. In practice, however, there may be missing covariates that are not necessarily confounders, and it remains uncertain whether these missing covariates should be included for learning optimal ITRs. In this paper, we propose a covariate-balancing doubly robust estimator for constructing optimal ITRs, which is particularly suitable for situations with additional predictive covariates. The proposed method is based on two main steps: First, the propensity scores are estimated by solving the covariate-balancing equation. Second, an objective function is minimized to estimate the outcome model, with the function defined by the asymptotic variance under the correctly specified propensity score. The method has three significant advantages: (i) It is doubly robust, ensuring consistency when either the propensity score or outcome model is correctly specified. (ii) It minimizes variance within the class of augmented inverse probability weighted estimators. (iii) When applied to partially observed covariates related to the outcome, the method may further improve estimation efficiency. We demonstrate the proposed method through extensive numerical simulations and two real-world datasets.
Nonlinear and delayed effects of covariates often render time series forecasting challenging. To this end, we propose a novel forecasting framework based on ridge regression with signature features calculated on sliding windows. These features capture complex temporal dynamics without relying on learned or hand-crafted representations. Focusing on the discrete-time setting, we establish theoretical guarantees, namely universality of approximation and stationarity of signatures. We introduce an efficient sequential algorithm for computing signatures on sliding windows. The method is evaluated on both synthetic and real electricity demand data. Results show that signature features effectively encode temporal and nonlinear dependencies, yielding accurate forecasts competitive with those based on expert knowledge.
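A minimal version of the pipeline, not the authors' implementation, is sketched below: level-one and level-two signature terms of a piecewise-linear path are computed by hand on sliding windows and fed to ridge regression. The synthetic series with a delayed nonlinear effect and the window length are illustrative assumptions.

```python
# Minimal sketch: hand-computed level-1/level-2 signature features on sliding windows,
# used as inputs to ridge regression for forecasting.
import numpy as np
from sklearn.linear_model import Ridge

def signature_level12(window):
    # window: (length, channels) array holding one sliding-window path segment.
    inc = np.diff(window, axis=0)                    # piecewise-linear increments
    s1 = inc.sum(axis=0)                             # level-1 terms: total increments
    cum = np.cumsum(inc, axis=0)
    prev = np.vstack([np.zeros((1, inc.shape[1])), cum[:-1]])
    # Level-2 iterated integrals of a piecewise-linear path:
    # S^{ij} = sum_{k<l} inc_i[k] inc_j[l] + 0.5 * sum_k inc_i[k] inc_j[k]
    s2 = prev.T @ inc + 0.5 * (inc.T @ inc)
    return np.concatenate([s1, s2.ravel()])

rng = np.random.default_rng(0)
T = 1500
time_idx = np.arange(T)
covariate = np.sin(2 * np.pi * time_idx / 50) + 0.1 * rng.normal(size=T)
target = np.roll(covariate, 5) ** 2 + 0.1 * rng.normal(size=T)   # delayed, nonlinear effect

win = 30                                                          # sliding-window length
paths = [np.column_stack([time_idx[i - win:i] / T, covariate[i - win:i]]) for i in range(win, T)]
X = np.array([signature_level12(p) for p in paths])
y = target[win:]

model = Ridge(alpha=1.0).fit(X[:-200], y[:-200])
print("held-out R^2:", round(model.score(X[-200:], y[-200:]), 3))
```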
We develop a version of variational inference for Bayesian count response regression-type models that possesses attractive attributes such as convexity and closed form updates. The convex solution aspect entails numerically stable fitting algorithms, whilst the closed form aspect makes the methodology fast and easy to implement. The essence of the approach is the use of Pólya-Gamma augmentation of a Negative Binomial likelihood, a finite-valued prior on the shape parameter and the structured mean field variational Bayes paradigm. The approach applies to general count response situations. For concreteness, we focus on generalized linear mixed models within the semiparametric regression class of models. Real-time fitting is also described.
In this paper, we refine the Berry-Esseen bounds for the multivariate normal approximation of Polyak-Ruppert averaged iterates arising from the linear stochastic approximation (LSA) algorithm with decreasing step size. We consider the normal approximation by the Gaussian distribution with covariance matrix predicted by the Polyak-Juditsky central limit theorem and establish the rate up to order $n^{-1/3}$ in convex distance, where $n$ is the number of samples used in the algorithm. We also prove a non-asymptotic validity of the multiplier bootstrap procedure for approximating the distribution of the rescaled error of the averaged LSA estimator. We establish approximation rates of order up to $1/\sqrt{n}$ for the latter distribution, which significantly improves upon the previous results obtained by Samsonov et al. (2024).
Weak convergence of joint distributions generally does not imply convergence of conditional distributions. In particular, conditional distributions need not converge when joint Gaussian distributions converge to a singular Gaussian limit. Algebraically, this is due to the fact that at singular covariance matrices, Schur complements are not continuous functions of the matrix entries. Our results lay out special conditions under which convergence of Gaussian conditional distributions nevertheless occurs, and we exemplify how this allows one to reason about conditional independence in a new class of graphical models.
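A small numerical example of the discontinuity (our own illustration, not drawn from the paper): two sequences of bivariate covariance matrices converge to the same singular limit, yet the implied conditional laws of $X$ given $Y$ have different limiting regression coefficients.

```python
# Two covariance sequences with the same singular limit [[1,0],[0,0]] but different
# limiting conditional distributions of X given Y, illustrating the discontinuity of
# Schur complements (and regression coefficients) at singular matrices.
import numpy as np

def conditional_of_x_given_y(Sigma):
    a, b, c = Sigma[0, 0], Sigma[0, 1], Sigma[1, 1]
    slope = b / c                  # conditional mean of X given Y = y is slope * y (zero means)
    schur = a - b ** 2 / c         # conditional variance = Schur complement
    return slope, schur

for t in [0.0, 1.0]:               # two different sequences indexed by t
    for n in [10, 100, 1000]:
        Sigma_n = np.array([[1.0, t / n], [t / n, 1.0 / n]])
        slope, schur = conditional_of_x_given_y(Sigma_n)
        print(f"t={t}, n={n}: slope={slope:.3f}, Schur complement={schur:.3f}")
# Both sequences converge entrywise to the singular matrix [[1,0],[0,0]], but the
# conditional mean of X given Y = y tends to 0*y along one sequence and 1*y along the other.
```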
We introduce a novel high-frequency daily panel dataset of both markets and news-based indicators -- including Geopolitical Risk, Economic Policy Uncertainty, Trade Policy Uncertainty, and Political Sentiment -- for 42 countries across both emerging and developed markets. Using this dataset, we study how sentiment dynamics shape sovereign risk, measured by Credit Default Swap (CDS) spreads, and evaluate their forecasting value relative to traditional drivers such as global monetary policy and market volatility. Our horse-race analysis of forecasting models demonstrates that incorporating news-based indicators significantly enhances predictive accuracy and enriches the analysis, with non-linear machine learning methods -- particularly Random Forests -- delivering the largest gains. Our analysis reveals that while global financial variables remain the dominant drivers of sovereign risk, geopolitical risk and economic policy uncertainty also play a meaningful role. Crucially, their effects are amplified through non-linear interactions with global financial conditions. Finally, we document pronounced regional heterogeneity, as certain asset classes and emerging markets exhibit heightened sensitivity to shocks in policy rates, global financial volatility, and geopolitical risk.
Recently, the weighted cumulative residual Tsallis entropy has been introduced in the literature as a generalization of the weighted cumulative residual entropy. We study some new properties of the weighted cumulative residual Tsallis entropy measure. Next, we propose some non-parametric estimators of this measure. Asymptotic properties of these estimators are discussed. The performance of these estimators is compared in terms of mean squared error. Non-parametric estimators for the weighted cumulative residual entropy measure are also discussed. Two uniformity tests are proposed based on estimators of these two measures, and their power is compared with that of some popular tests. The proposed tests perform reasonably well.
We propose a novel framework for reconstructing the chronology of genetic regulation using causal inference based on Pearl's theory. The approach proceeds in three main stages: causal discovery, causal inference, and chronology construction. We apply it to the ndhB and ndhD genes of the chloroplast in Arabidopsis thaliana, generating four alternative maturation timeline models per gene, each derived from a different causal discovery algorithm (HC, PC, LiNGAM, or NOTEARS). Two methodological challenges are addressed: the presence of missing data, handled via an EM algorithm that jointly imputes missing values and estimates the Bayesian network, and the selection of the $\ell_1$-regularization parameter in NOTEARS, for which we introduce a stability selection strategy. The resulting causal models consistently outperform reference chronologies in terms of both reliability and model fit. Moreover, by combining causal reasoning with domain expertise, the framework enables the formulation of testable hypotheses and the design of targeted experimental interventions grounded in theoretical predictions.
An open problem in Machine Learning is how to prevent models from exploiting spurious correlations in the data; a famous example is the background-label shortcut in the Waterbirds dataset. A common remedy is to train a model across multiple environments; in the Waterbirds dataset, this corresponds to training with randomized backgrounds. However, selecting the right environments is a challenging problem, given that these are rarely known a priori. We propose Universal Adaptive Environment Discovery (UAED), a unified framework that learns a distribution over data transformations that instantiate environments, and optimizes any robust objective averaged over this learned distribution. UAED yields adaptive variants of IRM, REx, GroupDRO, and CORAL without predefined groups or manual environment design. We provide a theoretical analysis via PAC-Bayes bounds and by showing robustness to test environment distributions under standard conditions. Empirically, UAED discovers interpretable environment distributions and improves worst-case accuracy on standard benchmarks, while remaining competitive on mean accuracy. Our results indicate that making environments adaptive is a practical route to out-of-distribution generalization.
This paper develops a statistical framework for goodness-of-fit testing of volatility functions in McKean-Vlasov stochastic differential equations, which describe large systems of interacting particles with distribution-dependent dynamics. While integrated volatility estimation in classical SDEs is now well established, formal model validation and goodness-of-fit testing for McKean-Vlasov systems remain largely unexplored, particularly in regimes with both large particle limits and high-frequency sampling. We propose a test statistic based on discrete observations of particle systems, analysed in a joint regime where both the number of particles and the sampling frequency increase. The estimators involved are proven to be consistent, and the test statistic is shown to satisfy a central limit theorem, converging in distribution to a centred Gaussian law.
We introduce a general framework for constructing generative models using one-dimensional noising processes. Beyond diffusion processes, we outline examples that demonstrate the flexibility of our approach. Motivated by this, we propose a novel framework in which the 1D processes themselves are learnable, achieved by parameterizing the noise distribution through quantile functions that adapt to the data. Our construction integrates seamlessly with standard objectives, including Flow Matching and consistency models. Learning quantile-based noise naturally captures heavy tails and compact supports when present. Numerical experiments highlight both the flexibility and the effectiveness of our method.
Recently, the vanishing-step-size limit of the Sinkhorn algorithm at finite regularization parameter $\varepsilon$ was shown to be a mirror descent in the space of probability measures. We give $L^2$ contraction criteria in two time-dependent metrics induced by the mirror Hessian, which reduce to the coercivity of certain conditional expectation operators. We then give an exact identity for the entropy production rate of the Sinkhorn flow, which was previously known only to be nonpositive. Examining this rate shows that the standard semigroup analysis of diffusion processes extends systematically to the Sinkhorn flow. We show that the flow induces a reversible Markov dynamics on the target marginal as an Onsager gradient flow. We define the Dirichlet form associated to its (nonlocal) infinitesimal generator, prove a Poincaré inequality for it, and show that the spectral gap is strictly positive along the Sinkhorn flow whenever $\varepsilon > 0$. Lastly, we show that the entropy decay is exponential if and only if a logarithmic Sobolev inequality (LSI) holds. We give for illustration two immediate practical use-cases for the Sinkhorn LSI: as a design principle for the latent space in which generative models are trained, and as a stopping heuristic for discrete-time algorithms.
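For background, the discrete-time algorithm whose small-step limit is studied here is the familiar Sinkhorn iteration at fixed regularization $\varepsilon$; the sketch below runs it on a toy one-dimensional problem with quadratic cost, where the cost, marginals, and $\varepsilon$ are illustrative choices.

```python
# Background sketch: discrete Sinkhorn iterations at fixed regularization eps,
# whose vanishing-step-size limit is the mirror-descent flow discussed above.
import numpy as np

rng = np.random.default_rng(0)
n, eps = 50, 0.1
x, y = np.sort(rng.uniform(size=n)), np.sort(rng.uniform(size=n))
C = (x[:, None] - y[None, :]) ** 2               # quadratic cost
mu = np.full(n, 1.0 / n)                         # source marginal
nu = np.full(n, 1.0 / n)                         # target marginal

K = np.exp(-C / eps)
u, v = np.ones(n), np.ones(n)
for _ in range(500):
    u = mu / (K @ v)                             # alternate projections onto the two
    v = nu / (K.T @ u)                           # marginal constraints

P = u[:, None] * K * v[None, :]                  # entropic optimal coupling
print("marginal errors:", np.abs(P.sum(1) - mu).max(), np.abs(P.sum(0) - nu).max())
```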
Compositional data -- vectors of non-negative components summing to unity -- frequently arise in scientific applications where covariates influence the relative proportions of components, yet traditional regression approaches struggle with the unit-sum constraint and zero values. This paper revisits the $\alpha$-regression framework, which uses a flexible power transformation parameterized by $\alpha$ to interpolate between raw data analysis and log-ratio methods, naturally handling zeros without imputation while allowing data-driven transformation selection. We formulate $\alpha$-regression as a non-linear least squares problem, provide efficient estimation via the Levenberg-Marquardt algorithm with explicit gradient and Hessian derivations, establish asymptotic normality of the estimators, and derive marginal effects for interpretation. The framework is extended to spatial settings through two models: the $\alpha$-spatially lagged X regression model, which incorporates spatial spillover effects via spatially lagged covariates with decomposition into direct and indirect effects, and the geographically weighted $\alpha$-regression, which allows coefficients to vary spatially for capturing local relationships. Application to Greek agricultural land-use data demonstrates that the spatial extensions substantially improve predictive performance.
We develop a unified statistical framework for softmax-gated Gaussian mixture of experts (SGMoE) that addresses three long-standing obstacles in parameter estimation and model selection: (i) non-identifiability of gating parameters up to common translations, (ii) intrinsic gate-expert interactions that induce coupled differential relations in the likelihood, and (iii) the tight numerator-denominator coupling in the softmax-induced conditional density. Our approach introduces Voronoi-type loss functions aligned with the gate-partition geometry and establishes finite-sample convergence rates for the maximum likelihood estimator (MLE). In over-specified models, we reveal a link between the MLE's convergence rate and the solvability of an associated system of polynomial equations characterizing near-nonidentifiable directions. For model selection, we adapt dendrograms of mixing measures to SGMoE, yielding a consistent, sweep-free selector of the number of experts that attains pointwise-optimal parameter rates under overfitting while avoiding multi-size training. Simulations on synthetic data corroborate the theory, accurately recovering the expert count and achieving the predicted rates for parameter estimation while closely approximating the regression function. Under model misspecification (e.g., $\epsilon$-contamination), the dendrogram selection criterion is robust, recovering the true number of mixture components, while the Akaike information criterion, the Bayesian information criterion, and the integrated completed likelihood tend to overselect as sample size grows. On a maize proteomics dataset of drought-responsive traits, our dendrogram-guided SGMoE selects two experts, exposes a clear mixing-measure hierarchy, stabilizes the likelihood early, and yields interpretable genotype-phenotype maps, outperforming standard criteria without multi-size training.
We design replicable algorithms in the context of statistical clustering under the recently introduced notion of replicability from Impagliazzo et al. [2022]. According to this definition, a clustering algorithm is replicable if, with high probability, its output induces the exact same partition of the sample space after two executions on different inputs drawn from the same distribution, when its internal randomness is shared across the executions. We propose such algorithms for the statistical $k$-medians, statistical $k$-means, and statistical $k$-centers problems by utilizing approximation routines for their combinatorial counterparts in a black-box manner. In particular, we demonstrate a replicable $O(1)$-approximation algorithm for statistical Euclidean $k$-medians ($k$-means) with $\operatorname{poly}(d)$ sample complexity. We also describe an $O(1)$-approximation algorithm with an additional $O(1)$-additive error for statistical Euclidean $k$-centers, albeit with $\exp(d)$ sample complexity. In addition, we provide experiments on synthetic distributions in 2D using the $k$-means++ implementation from sklearn as a black-box that validate our theoretical results.
We provide efficient replicable algorithms for the problem of learning large-margin halfspaces. Our results improve upon the algorithms provided by Impagliazzo, Lei, Pitassi, and Sorrell [STOC, 2022]. We design the first dimension-independent replicable algorithms for this task, which run in polynomial time, are proper, and have strictly improved sample complexity compared to the one achieved by Impagliazzo et al. [2022] with respect to all the relevant parameters. Moreover, our first algorithm has sample complexity that is optimal with respect to the accuracy parameter $\epsilon$. We also design an SGD-based replicable algorithm that, in some parameter regimes, achieves better sample and time complexity than our first algorithm. Departing from the requirement of polynomial-time algorithms, using the DP-to-Replicability reduction of Bun, Gaboardi, Hopkins, Impagliazzo, Lei, Pitassi, Sorrell, and Sivakumar [STOC, 2023], we show how to obtain a replicable algorithm for large-margin halfspaces with improved sample complexity with respect to the margin parameter $\tau$, but running time doubly exponential in $1/\tau^2$ and worse sample complexity dependence on $\epsilon$ than one of our previous algorithms. We then design an improved algorithm with better sample complexity than all three of our previous algorithms and running time exponential in $1/\tau^{2}$.
Stochastic processes with renewal properties are powerful tools for modeling systems where memory effects and long-time correlations play a significant role. In this work, we study a broad class of renewal processes where a variable's value changes according to a prescribed Probability Density Function (PDF), $p(\xi)$, after random waiting times $\theta$. This model is relevant across many fields, including classical chaos, nonlinear hydrodynamics, quantum dots, cold atom dynamics, biological motion, foraging, and finance. We derive a general analytical expression for the $n$-time correlation function by averaging over process realizations. Our analysis identifies the conditions for stationarity, aging, and long-range correlations based on the waiting time and jump distributions. Among the many consequences of our analysis, two new key results emerge. First, for Poissonian waiting times, the correlation function quickly approaches that of telegraphic noise. Second, for power-law waiting times with $\mu>2$, \emph{any $n$-time correlation function asymptotically reduces to the two-time correlation evaluated at the earliest and latest time points}. This second result reveals a universal long-time behavior where the system's full statistical structure becomes effectively two-time reducible. Furthermore, if the jump PDF $p(\xi)$ has fat tails, this convergence becomes independent of the waiting time PDF and is significantly accelerated, requiring only modest increases in either the number of realizations or the trajectory lengths. Building upon earlier work that established the universality of the two-point correlation function (i.e., a unique formal expression depending solely on the variance of $\xi$ and on the waiting-time PDF), the present study extends that universality to the full statistical description of a broad class of renewal-type stochastic processes.
Conflicting clinical trial results on omega-3 highly unsaturated fatty acids (n-3 HUFA) have prompted uncertainty about their cardioprotective effects. While the VITAL trial found no overall cardiovascular benefit from n-3 HUFA supplementation, its substantial African American (AfAm) enrollment provided a unique opportunity to explore racial differences in response to n-3 HUFA supplementation. The current observational study aimed to simulate randomized clinical trial (RCT) conditions by matching 3,766 AfAm and 15,553 non-Hispanic White (NHW) individuals from the VITAL trial utilizing propensity score matching to address the limitations related to differences in confounding variables between the two groups. Within matched groups (3,766 AfAm and 3,766 NHW), n-3 HUFA supplementation's impact on myocardial infarction (MI), stroke, and cardiovascular disease (CVD) mortality was assessed. A weighted decision tree analysis revealed belonging to the n-3 supplementation group as the most significant predictor of MI among AfAm but not NHW. Further logistic regression using the LASSO method and bootstrap estimation of standard errors indicated n-3 supplementation significantly lowered MI risk in AfAm (OR 0.17, 95% CI [0.048, 0.60]), with no such effect in NHW. This study underscores the critical need for future RCT to explore racial disparities in MI risk associated with n-3 HUFA supplementation and highlights potential causal differences between supplementation health outcomes in AfAm versus NHW populations.
Robust reinforcement learning (Robust RL) seeks to handle epistemic uncertainty in environment dynamics, but existing approaches often rely on nested min--max optimization, which is computationally expensive and yields overly conservative policies. We propose \textbf{Adaptive Rank Representation (AdaRL)}, a bi-level optimization framework that improves robustness by aligning policy complexity with the intrinsic dimension of the task. At the lower level, AdaRL performs policy optimization under fixed-rank constraints with dynamics sampled from a Wasserstein ball around a centroid model. At the upper level, it adaptively adjusts the rank to balance the bias--variance trade-off, projecting policy parameters onto a low-rank manifold. This design avoids solving adversarial worst-case dynamics while ensuring robustness without over-parameterization. Empirical results on MuJoCo continuous control benchmarks demonstrate that AdaRL not only consistently outperforms fixed-rank baselines (e.g., SAC) and state-of-the-art robust RL methods (e.g., RNAC, Parseval), but also converges toward the intrinsic rank of the underlying tasks. These results highlight that adaptive low-rank policy representations provide an efficient and principled alternative for robust RL under model uncertainty.
Diffusion-based samplers learn to sample complex, high-dimensional distributions using energies or log densities alone, without training data. Yet, they remain impractical for molecular sampling because they are often slower than molecular dynamics and miss thermodynamically relevant modes. Inspired by enhanced sampling, we encourage exploration by introducing a sequential bias along bespoke, information-rich, low-dimensional projections of atomic coordinates known as collective variables (CVs). We introduce a repulsive potential centered on the CVs from recent samples, which pushes future samples towards novel CV regions and effectively increases the temperature in the projected space. Our resulting method improves efficiency, mode discovery, enables the estimation of free energy differences, and retains independent sampling from the approximate Boltzmann distribution via reweighting by the bias. On standard peptide conformational sampling benchmarks, the method recovers diverse conformational states and accurate free energy profiles. We are the first to demonstrate reactive sampling using a diffusion-based sampler, capturing bond breaking and formation with universal interatomic potentials at near-first-principles accuracy. The approach resolves reactive energy landscapes at a fraction of the wall-clock time of standard sampling methods, advancing diffusion-based sampling towards practical use in molecular sciences.
Characterizing interactions between brain areas is a fundamental goal of systems neuroscience. While such analyses are possible when areas are recorded simultaneously, it is rare to observe all combinations of areas of interest within a single animal or recording session. How can we leverage multi-animal datasets to better understand multi-area interactions? Building on recent progress in large-scale, multi-animal models, we introduce NeuroPaint, a masked autoencoding approach for inferring the dynamics of unrecorded brain areas. By training across animals with overlapping subsets of recorded areas, NeuroPaint learns to reconstruct activity in missing areas based on shared structure across individuals. We train and evaluate our approach on synthetic data and two multi-animal, multi-area Neuropixels datasets. Our results demonstrate that models trained across animals with partial observations can successfully in-paint the dynamics of unrecorded areas, enabling multi-area analyses that transcend the limitations of any single experiment.
Mamba, a recently proposed linear-time sequence model, has attracted significant attention for its computational efficiency and strong empirical performance. However, a rigorous theoretical understanding of its underlying mechanisms remains limited. In this work, we provide a theoretical analysis of Mamba's in-context learning (ICL) capability by focusing on tasks defined by low-dimensional nonlinear target functions. Specifically, we study in-context learning of a single-index model $y \approx g_*(\langle \boldsymbol{\beta}, \boldsymbol{x} \rangle)$, which depends on only a single relevant direction $\boldsymbol{\beta}$, referred to as feature. We prove that Mamba, pretrained by gradient-based methods, can achieve efficient ICL via test-time feature learning, extracting the relevant direction directly from context examples. Consequently, we establish a test-time sample complexity that improves upon linear Transformers -- analyzed to behave like kernel methods -- and is comparable to nonlinear Transformers, which have been shown to surpass the Correlational Statistical Query (CSQ) lower bound and achieve near information-theoretically optimal rate in previous works. Our analysis reveals the crucial role of the nonlinear gating mechanism in Mamba for feature extraction, highlighting it as the fundamental driver behind Mamba's ability to achieve both computational efficiency and high performance.
We develop a class of optimal tests for a structural break occurring at an unknown date in infinite and growing-order time series regression models, such as AR($\infty$), linear regression with increasingly many covariates, and nonparametric regression. Under an auxiliary i.i.d. Gaussian error assumption, we derive an average power optimal test, establishing a growing-dimensional analog of the exponential tests of Andrews and Ploberger (1994) to handle identification failure under the null hypothesis of no break. Relaxing the i.i.d. Gaussian assumption to a more general dependence structure, we establish a functional central limit theorem for the underlying stochastic processes, which features an extra high-order serial dependence term due to the growing dimension. We robustify our test both against this term and finite sample bias and illustrate its excellent performance and practical relevance in a Monte Carlo study and a real data empirical example.
Progress in AI is hindered by the lack of a programming language with all the requisite features. Libraries like PyTorch and TensorFlow provide automatic differentiation and efficient GPU implementation, but are additions to Python, which was never intended for AI. Their lack of support for automated reasoning and knowledge acquisition has led to a long and costly series of hacky attempts to tack them on. On the other hand, AI languages like LISP and Prolog lack scalability and support for learning. This paper proposes tensor logic, a language that solves these problems by unifying neural and symbolic AI at a fundamental level. The sole construct in tensor logic is the tensor equation, based on the observation that logical rules and Einstein summation are essentially the same operation, and all else can be reduced to them. I show how to elegantly implement key forms of neural, symbolic and statistical AI in tensor logic, including transformers, formal reasoning, kernel machines and graphical models. Most importantly, tensor logic makes new directions possible, such as sound reasoning in embedding space. This combines the scalability and learnability of neural networks with the reliability and transparency of symbolic reasoning, and is potentially a basis for the wider adoption of AI.
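The observation that logical rules and Einstein summation coincide can be illustrated in a few lines (this is our own toy example, not the tensor logic language itself): a Datalog-style rule such as grandparent(X, Z) <- parent(X, Y), parent(Y, Z) is an einsum over the shared variable Y, followed by a threshold back to Boolean values.

```python
# Toy illustration: a logical join rule expressed as an Einstein summation.
import numpy as np

people = ["ann", "bob", "cai", "dee"]
parent = np.zeros((4, 4))
parent[0, 1] = 1.0   # ann is a parent of bob
parent[1, 2] = 1.0   # bob is a parent of cai
parent[2, 3] = 1.0   # cai is a parent of dee

# Sum over the shared variable y, then clamp to {0, 1} to recover Boolean semantics.
grandparent = np.einsum("xy,yz->xz", parent, parent) > 0

for x in range(4):
    for z in range(4):
        if grandparent[x, z]:
            print(f"grandparent({people[x]}, {people[z]})")
```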
Wind power producers can benefit from forming coalitions to participate cooperatively in electricity markets. To support such collaboration, various profit allocation rules rooted in cooperative game theory have been proposed. However, existing approaches overlook the lack of coherence among producers regarding forecast information, which may lead to ambiguity in offering and allocations. In this paper, we introduce a ``reconcile-then-optimize'' framework for cooperative market offerings. This framework first aligns the individual forecasts into a coherent joint forecast before determining market offers. With such forecasts, we formulate and solve a two-stage stochastic programming problem to derive both the aggregate offer and the corresponding scenario-based dual values for each trading hour. Based on these dual values, we construct a profit allocation rule that is budget-balanced and stable. Finally, we validate the proposed method through empirical case studies, demonstrating its practical effectiveness and theoretical soundness.
We introduce Cautious Weight Decay (CWD), a one-line, optimizer-agnostic modification that applies weight decay only to parameter coordinates whose signs align with the optimizer update. Unlike standard decoupled decay, which implicitly optimizes a regularized or constrained objective, CWD preserves the original loss and admits a bilevel interpretation: it induces sliding-mode behavior upon reaching the stationary manifold, allowing it to search for locally Pareto-optimal stationary points of the unmodified objective. In practice, CWD is a drop-in change for optimizers such as AdamW, Lion, and Muon, requiring no new hyperparameters or additional tuning. For language model pre-training and ImageNet classification, CWD consistently improves final loss and accuracy at million- to billion-parameter scales.
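Read literally from the description above, the masked decay rule can be sketched as follows; the surrounding SGD-style update and the exact sign convention are our illustrative assumptions rather than the paper's pseudocode.

```python
# Minimal sketch of a masked weight-decay step: decay is applied only on coordinates
# where the parameter's sign agrees with the optimizer update (illustrative convention).
import numpy as np

def cautious_weight_decay_step(param, update, lr=0.1, weight_decay=0.01):
    # `update` stands in for whatever direction the base optimizer would apply.
    mask = np.sign(param) == np.sign(update)          # coordinates with aligned signs
    return param - lr * update - lr * weight_decay * param * mask

rng = np.random.default_rng(0)
param = rng.normal(size=5)
update = rng.normal(size=5)
print("param :", np.round(param, 3))
print("update:", np.round(update, 3))
print("after masked-decay step:", np.round(cautious_weight_decay_step(param, update), 3))
```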
Time series anomaly detection plays a crucial role in a wide range of real-world applications. Given that time series data can exhibit different patterns at different sampling granularities, multi-scale modeling has proven beneficial for uncovering latent anomaly patterns that may not be apparent at a single scale. However, existing methods often model multi-scale information independently or rely on simple feature fusion strategies, neglecting the dynamic changes in cross-scale associations that occur during anomalies. Moreover, most approaches perform multi-scale modeling based on fixed sliding windows, which limits their ability to capture comprehensive contextual information. In this work, we propose CrossAD, a novel framework for time series Anomaly Detection that takes Cross-scale associations and Cross-window modeling into account. We propose a cross-scale reconstruction that reconstructs fine-grained series from coarser series, explicitly capturing cross-scale associations. Furthermore, we design a query library and incorporate global multi-scale context to overcome the limitations imposed by fixed window sizes. Extensive experiments conducted on multiple real-world datasets using nine evaluation metrics validate the effectiveness of CrossAD, demonstrating state-of-the-art performance in anomaly detection.
Causal discovery aims to learn causal relationships between variables from targeted data, making it a fundamental task in machine learning. However, causal discovery algorithms often rely on unverifiable causal assumptions, which are usually difficult to satisfy in real-world data, thereby limiting the broad application of causal discovery in practical scenarios. Inspired by these considerations, this work extensively benchmarks the empirical performance of various mainstream causal discovery algorithms, which assume i.i.d. data, under eight model assumption violations. Our experimental results show that differentiable causal discovery methods exhibit robustness under the metrics of Structural Hamming Distance and Structural Intervention Distance of the inferred graphs in commonly used challenging scenarios, except for scale variation. We also provide the theoretical explanations for the performance of differentiable causal discovery methods. Finally, our work aims to comprehensively benchmark the performance of recent differentiable causal discovery methods under model assumption violations, and provide the standard for reasonable evaluation of causal discovery, as well as to further promote its application in real-world scenarios.
This paper explores the topological signatures of ReLU neural network activation patterns. We consider feedforward neural networks with ReLU activation functions and analyze the polytope decomposition of the feature space induced by the network. Mainly, we investigate the Fiedler partition of the dual graph and show that it appears to correlate with the decision boundary in the case of binary classification. Additionally, in a regression task, we compute the homology of the cellular decomposition and observe similar patterns in the behavior of the training loss and the polyhedral cell count as the model is trained.
We consider the problem of constructing probabilistic predictions that lead to accurate decisions when employed by downstream users to inform actions. For a single decision maker, designing an optimal predictor is equivalent to minimizing a proper loss function corresponding to the negative utility of that individual. For multiple decision makers, our problem can be viewed as a variant of omniprediction in which the goal is to design a single predictor that simultaneously minimizes multiple losses. Existing algorithms for achieving omniprediction broadly fall into two categories: 1) boosting methods that optimize other auxiliary targets such as multicalibration and obtain omniprediction as a corollary, and 2) adversarial two-player game based approaches that estimate and respond to the ``worst-case" loss in an online fashion. We give lower bounds demonstrating that multicalibration is a strictly more difficult problem than omniprediction and thus the former approach must incur suboptimal sample complexity. For the latter approach, we discuss how these ideas can be used to obtain a sample-efficient algorithm through an online-to-batch conversion. This conversion has the downside of returning a complex, randomized predictor. We improve on this method by designing a more direct, unrandomized algorithm that exploits structural elements of the set of proper losses.
We present CuMPerLay, a novel differentiable vectorization layer that enables the integration of Cubical Multiparameter Persistence (CMP) into deep learning pipelines. While CMP presents a natural and powerful way to work with images topologically, its use is hindered by the complexity of multifiltration structures as well as the vectorization of CMP. In the face of these challenges, we introduce a new algorithm for vectorizing MP homologies of cubical complexes. Our CuMPerLay decomposes the CMP into a combination of individual, learnable single-parameter persistences, where the bifiltration functions are jointly learned. Thanks to its differentiability, its robust topological feature vectors can be seamlessly used within state-of-the-art architectures such as Swin Transformers. We establish theoretical guarantees for the stability of our vectorization under generalized Wasserstein metrics. Our experiments on benchmark medical imaging and computer vision datasets show the benefit of CuMPerLay for classification and segmentation performance, particularly in limited-data scenarios. Overall, CuMPerLay offers a promising direction for integrating global structural information into deep networks for structured image analysis.
A mathematical model is a function taking certain arguments and returning a theoretical prediction of a feature of a physical system. The arguments to the mathematical model can be split into two groups: (a) controllable variables of the system; and (b) calibration parameters, that is, unknown characteristics of the physical system that cannot be controlled or directly measured. Of interest is the estimation of the calibration parameters using physical observations. Since the mathematical model will be an inexact representation of the physical system, the aim is to estimate values for the calibration parameters that make the mathematical model ``close" to the physical system. Closeness is defined as the squared $L^2$ norm of the difference between the mathematical model and the physical system. Different Bayesian and general Bayesian methods are introduced, developed and compared for this task.
Scholars frequently use covariate balance tests to test the validity of natural experiments and related designs. Unfortunately, when measured covariates are unrelated to potential outcomes, balance is uninformative about key identification conditions. We show that balance tests can then lead to erroneous conclusions. To build stronger tests, researchers should identify covariates that are jointly predictive of potential outcomes; formally measure and report covariate prognosis; and prioritize the most individually informative variables in tests. Building on prior research on ``prognostic scores,'' we develop bootstrap balance tests that upweight covariates associated with the outcome. We adapt this approach for regression-discontinuity designs and use simulations to compare weighting methods based on linear regression and more flexible methods, including machine learning. The results show how prognosis weighting can avoid both false negatives and false positives. To illustrate key points, we study empirical examples from a sample of published studies, including an important debate over close elections.
We study the identifiability of parameters and falsifiability of predictions under the process of model expansion in a Bayesian setting. Identifiability is represented by the closeness of the posterior to the prior distribution and falsifiability by the power of posterior predictive tests against alternatives. To study these two concepts formally, we develop information-theoretic proxies, which we term the identifiability and falsifiability mutual information. We argue that these are useful indicators, with lower values indicating a risk of poor parameter inference and underpowered model checks, respectively. Our main result establishes that a sufficiently complex expansion of a base statistical model forces a trade-off between these two mutual information quantities -- at least one of the two must decrease relative to the base model. We illustrate our result in three worked examples and extract implications for model expansion in practice. In particular, we show as an implication of our result that the negative impacts of model expansion can be limited by offsetting complexity in the likelihood with sufficiently constraining prior distributions.
Over the last decade, hidden Markov models (HMMs) have become increasingly popular in statistical ecology, where they constitute natural tools for studying animal behavior based on complex sensor data. Corresponding analyses sometimes explicitly focus on - and in any case need to take into account - periodic variation, for example by quantifying the activity distribution over the daily cycle or seasonal variation such as migratory behavior. For HMMs including periodic components, we establish important mathematical properties that allow for comprehensive statistical inference related to periodic variation, thereby also providing guidance for model building and model checking. Specifically, we derive the periodically varying unconditional state distribution as well as the time-varying and overall state dwell-time distributions - all of which are of key interest when the inferential focus lies on the dynamics of the state process. We use the associated novel inference and model-checking tools to investigate changes in the diel activity patterns of fruit flies in response to changing light conditions.
Knowing whether vaccine protection wanes over time is important for health policy and drug development. However, quantifying waning effects is difficult. A simple contrast of vaccine efficacy at two different times compares different populations of individuals: those who were uninfected at the first time versus those who remain uninfected until the second time. Thus, the contrast of vaccine efficacy at early and late times cannot be interpreted as a causal effect. We propose to quantify vaccine waning using the challenge effect, which is a contrast of outcomes under controlled exposures to the infectious agent following vaccination. We identify sharp bounds on the challenge effect under non-parametric assumptions that are broadly applicable in vaccine trials using routinely collected data. We demonstrate that the challenge effect can differ substantially from the conventional vaccine efficacy due to depletion of susceptible individuals from the risk set over time. Finally, we apply the methods to derive bounds on the waning of the BNT162b2 COVID-19 vaccine using data from a placebo-controlled randomized trial. Our estimates of the challenge effect suggest that protection wanes beyond two months after administration of the second vaccine dose.
We consider the problem of constructing exact goodness-of-fit tests for discrete exponential family models. This classical problem remains practically unsolved for many types of structured or sparse data, as it rests on a computationally difficult core task: to produce a reliable sample from lattice points in a high-dimensional polytope. We translate the problem into a Markov decision process and demonstrate a reinforcement learning approach for learning `good moves' for sampling. We illustrate the approach on data sets and models for which traditional MCMC samplers converge too slowly due to problem size, sparsity structure, and the requirement to use prohibitively expensive non-linear algebra computations in the process. The differentiating factor is the use of scalable tools from \emph{linear} algebra in the context of theoretical guarantees provided by \emph{non-linear} algebra. Our algorithm is based on an actor-critic sampling scheme, with provable convergence. The discovered moves can be used to efficiently obtain an exchangeable sample, significantly cutting computational times for statistical testing.
The Intrinsic Dimension (ID) is a key concept in unsupervised learning and feature selection, as it is a lower bound to the number of variables which are necessary to describe a system. However, in almost any real-world dataset the ID depends on the scale at which the data are analysed. Quite typically, at small scales the ID is very large, as the data are affected by measurement errors. At large scales the ID can also be erroneously large, due to the curvature and the topology of the manifold containing the data. In this work, we introduce an automatic protocol to select the sweet spot, namely the correct range of scales in which the ID is meaningful and useful. This protocol is based on imposing that, for distances smaller than the correct scale, the density of the data is constant. In the presented framework, estimating the density requires knowing the ID; therefore, this condition is imposed self-consistently. We illustrate the usefulness and robustness of this procedure by benchmarks on artificial and real-world datasets.
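As a concrete building block (not the paper's selection protocol), a TwoNN-style maximum-likelihood ID estimate can be computed from the ratio of second- to first-nearest-neighbour distances; the synthetic embedding and noise level below are assumptions used only to show the scale sensitivity the abstract describes.

```python
# Sketch of a TwoNN-style ID estimate; the sweet-spot selection protocol is not reproduced.
import numpy as np
from scipy.spatial import cKDTree

def two_nn_id(X):
    """Maximum-likelihood ID from ratios of second to first nearest-neighbour distances."""
    d, _ = cKDTree(X).query(X, k=3)        # d[:, 0] is the distance of a point to itself
    mu = d[:, 2] / d[:, 1]
    return len(X) / np.sum(np.log(mu))

rng = np.random.default_rng(0)
latent = rng.normal(size=(2000, 3))        # 3-dimensional structure...
X = latent @ rng.normal(size=(3, 10))      # ...linearly embedded in 10 dimensions
print("clean data :", round(two_nn_id(X), 2))                                    # close to 3
print("noisy data :", round(two_nn_id(X + 0.05 * rng.normal(size=X.shape)), 2))  # inflated ID
```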
Numerical simulations are widely used to predict the behavior of physical systems, with Bayesian approaches being particularly well suited for this purpose. However, experimental observations are necessary to calibrate certain simulator parameters for the prediction. In this work, we use a multi-output simulator to predict all its outputs, including those that have never been experimentally observed. This situation is referred to as the transposition context. To accurately quantify the discrepancy between model outputs and real data in this context, conventional methods cannot be applied, and the Bayesian calibration must be augmented by incorporating a joint model error across all outputs. To achieve this, the proposed method considers additional numerical input parameters within a hierarchical Bayesian model, which includes hyperparameters for the prior distribution of the calibration variables. This approach is applied to a computer code with three outputs that models the Taylor cylinder impact test, with a small number of observations. Each output is treated as the observed variable in turn, yielding three different transposition situations. The proposed method is compared with other approaches that embed model errors, to demonstrate the significance of the hierarchical formulation.
We introduce the \textit{almost goodness-of-fit} test, a procedure to assess whether a (parametric) model provides a good representation of the probability distribution generating the observed sample. Specifically, given a distribution function $F$ and a parametric family $\mathcal{G}=\{ G(\boldsymbol{\theta}) : \boldsymbol{\theta} \in \Theta\}$, we consider the testing problem \[ H_0: \| F - G(\boldsymbol{\theta}_F) \|_p \geq \epsilon \quad \text{vs} \quad H_1: \| F - G(\boldsymbol{\theta}_F) \|_p < \epsilon, \] where $\epsilon>0$ is a margin of error and $G(\boldsymbol{\theta}_F)$ denotes a representative of $F$ within the parametric class. The approximate model is determined via an M-estimator of the parameters. The objective is the approximate validation of a distribution, or of an entire parametric family, up to a pre-specified threshold value. The methodology also quantifies the percentage improvement of the proposed model relative to a non-informative (constant) benchmark. The test statistic is the $\mathrm{L}^p$-distance between the empirical distribution function and that of the estimated model. We present two consistent, easy-to-implement, and flexible bootstrap schemes to carry out the test. The performance of the proposal is illustrated through simulation studies and real-data applications.
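A minimal sketch of the test statistic alone, assuming a Gaussian family fitted by maximum likelihood as the M-estimator and $p=2$; the bootstrap schemes that calibrate the decision rule are omitted.

```python
# Sketch: L2 distance between the empirical CDF and a fitted parametric CDF.
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

def l2_edf_distance(x, cdf, n_grid=2000):
    grid = np.linspace(x.min() - 1.0, x.max() + 1.0, n_grid)
    edf = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    return np.sqrt(trapezoid((edf - cdf(grid)) ** 2, grid))

rng = np.random.default_rng(0)
x = rng.standard_t(df=5, size=500)            # data that is only approximately Gaussian
mu, sigma = x.mean(), x.std()                 # Gaussian MLE plays the role of the M-estimator
stat = l2_edf_distance(x, lambda t: stats.norm.cdf(t, mu, sigma))
eps = 0.05                                    # pre-specified margin of error
print(f"L2 distance = {stat:.4f}; H0 states that the distance is at least eps = {eps}")
```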
In this paper, we characterize the extremal dependence of $d$ asymptotically dependent variables by a class of random vectors on the $(d-1)$-dimensional hyperplane perpendicular to the diagonal vector $\mathbf1=(1,\ldots,1)$. This translates analyses of multivariate extremes to analyses on a linear vector space, opening up possibilities for applying existing statistical techniques that are based on linear operations. As an example, we demonstrate how lower-dimensional approximations of the tail dependence can be obtained through principal component analysis. Additionally, we show that the widely used Hüsler-Reiss family is characterized by a Gaussian family residing on the hyperplane.
We propose a new method based on sparse optimal discriminant clustering (SODC) that incorporates into the scoring matrix a penalty term based on convex clustering. The added penalty term is expected to improve the accuracy of cluster identification by pulling points within the same cluster closer together and pushing points from different clusters further apart. When the estimation results are visualized, the clustering structure can thus be depicted more clearly. Moreover, we develop a novel algorithm to derive the update formula for this scoring matrix using a majorizing function. The scoring matrix is updated using the alternating direction method of multipliers (ADMM), which is often employed to estimate the parameters of the objective function in convex clustering. In the proposed method, as in the conventional SODC, the scoring matrix is subject to an orthogonality constraint. It is therefore necessary to satisfy the orthogonality constraint on the scoring matrix while maintaining the clustering structure. Using a majorizing function, we address the challenge of enforcing both the orthogonality constraint and the clustering structure within the scoring matrix. We present numerical simulations and a real-data application to assess the performance of the proposed method.
Updating $\textit{a priori}$ information given some observed data is the core tenet of Bayesian inference. Bayesian transfer learning extends this idea by incorporating information from a related dataset to improve the inference on the observed target dataset, which may have been collected under slightly different settings. Incorporating related information can be useful when the target dataset is scarce, for example. There exist various Bayesian transfer learning methods that decide how to incorporate the related data in different ways. Unfortunately, there is no principled approach for comparing Bayesian transfer methods in real data settings. Additionally, some Bayesian transfer learning methods, such as the so-called power prior approaches, rely on conjugacy or costly specialised techniques. In this paper, we find that an effective approach for comparing Bayesian transfer learning methods is to apply leave-one-out cross validation on the target dataset. Further, we introduce a new framework, $\textit{transfer sequential Monte Carlo}$, that efficiently implements power prior methods in an automated fashion. We demonstrate the performance of our proposed methods in two comprehensive simulation studies.
Contrastive learning -- a modern approach to extract useful representations from unlabeled data by training models to distinguish similar samples from dissimilar ones -- has driven significant progress in foundation models. In this work, we develop a new theoretical framework for analyzing data augmentation-based contrastive learning, with a focus on SimCLR as a representative example. Our approach is based on the concept of \emph{approximate sufficient statistics}, which we extend beyond its original definition in \cite{oko2025statistical} for contrastive language-image pretraining (CLIP) using KL-divergence. We generalize it to equivalent forms and general f-divergences, and show that minimizing SimCLR and other contrastive losses yields encoders that are approximately sufficient. Furthermore, we demonstrate that these near-sufficient encoders can be effectively adapted to downstream regression and classification tasks, with performance depending on their sufficiency and the error induced by data augmentation in contrastive learning. Concrete examples in linear regression and topic classification are provided to illustrate the broad applicability of our results.
In this paper, we derive power guarantees for sequential tests of a bounded mean under general alternatives. We focus on testing procedures based on nonnegative supermartingales, which are anytime valid, and consider alternatives which coincide asymptotically with the null (e.g. a vanishing mean) while still allowing rejection in finite time. Introducing variance constraints, we show that the alternative can be broadened while keeping power guarantees for certain second-order testing procedures. We also compare different test procedures in the multidimensional setting using characteristics of the rejection times. Finally, we extend our analysis to other functionals as well as to testing and comparing forecasters. Our results are illustrated with numerical simulations, including bounded mean testing and the comparison of forecasters.
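For context, the simplest instance of such a procedure is a betting-style nonnegative martingale test for the mean of $[0,1]$-valued observations, rejecting once the wealth exceeds $1/\alpha$ (Ville's inequality); the fixed betting fraction and simulated data below are assumptions, and the variance-adaptive second-order procedures analysed in the paper are not reproduced.

```python
# Sketch of an anytime-valid test of H0: E[X] = m0 for observations in [0, 1].
import numpy as np

def betting_test(xs, m0=0.5, alpha=0.05, lam=0.5):
    """Return the first time the wealth process exceeds 1/alpha, or None."""
    wealth = 1.0
    for t, x in enumerate(xs, start=1):
        wealth *= 1.0 + lam * (x - m0)       # positive since |lam * (x - m0)| < 1 on [0, 1]
        if wealth >= 1.0 / alpha:
            return t                         # anytime-valid rejection by Ville's inequality
    return None

rng = np.random.default_rng(0)
print("null (mean 0.5):", betting_test(rng.uniform(0, 1, 5000)))    # rarely rejects
print("alt  (mean 0.6):", betting_test(rng.beta(3, 2, 5000)))       # rejects in finite time
```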
Google Trends reports how frequently specific queries are searched on Google over time. It is widely used in research and industry to gain early insights into public interest. However, its data generation mechanism introduces missing values, sampling variability, noise, and trends. These issues arise from privacy thresholds mapping low search volumes to zeros, daily sampling variations causing discrepancies across historical downloads, and algorithm updates altering volume magnitudes over time. Data quality has recently deteriorated, with more zeros and noise, even for previously stable queries. We propose a comprehensive statistical methodology to preprocess Google Trends search information using hierarchical clustering, smoothing splines, and detrending. We validate our approach by forecasting U.S. influenza hospitalizations up to three weeks ahead with several statistical and machine learning models. Compared to omitting exogenous variables, our results show that preprocessed signals enhance forecast accuracy, while raw Google Trends data often degrades performance in statistical models.
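A rough sketch of the smoothing and detrending steps on a simulated weekly series (the hierarchical clustering of related queries is omitted, and the zero-inflation rate and spline smoothing parameter are assumptions, not values from the paper):

```python
# Sketch: treat privacy-threshold zeros as missing, fit a smoothing spline, remove a linear trend.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
t = np.arange(260)                                             # five years of weekly points
signal = 20 + 0.05 * t + 10 * np.sin(2 * np.pi * t / 52)       # trend plus yearly seasonality
raw = np.clip(signal + rng.normal(0, 4, size=t.size), 0, None)
raw[rng.random(t.size) < 0.10] = 0.0                           # zeros from the privacy threshold

obs = np.where(raw == 0, np.nan, raw)                          # zeros handled as missing values
mask = ~np.isnan(obs)
spline = UnivariateSpline(t[mask], obs[mask], s=16 * mask.sum())   # smoothing spline fit
smooth = spline(t)                                             # also imputes the gaps
detrended = smooth - np.polyval(np.polyfit(t, smooth, 1), t)   # remove the linear trend
season = np.sin(2 * np.pi * t / 52)
print("correlation with the true seasonal component:",
      round(np.corrcoef(detrended, season)[0, 1], 2))
```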
Motivated by small bandwidth asymptotics for kernel-based semiparametric estimators in econometrics, this paper establishes Gaussian approximation results for high-dimensional fixed-order $U$-statistics whose kernels depend on the sample size. Our results allow for a situation where the dominant component of the Hoeffding decomposition is absent or unknown, including cases with known degrees of degeneracy as special forms. The obtained error bounds for Gaussian approximations are sharp enough to almost recover the weakest bandwidth condition of small bandwidth asymptotics in the fixed-dimensional setting when applied to a canonical semiparametric estimation problem. We also present an application to adaptive goodness-of-fit testing and to simultaneous inference on high-dimensional density-weighted average derivatives, along with discussions of several potential applications.
Model-based clustering integrated with variable selection is a powerful tool for uncovering latent structures within complex data. However, its effectiveness is often hindered by challenges such as identifying relevant variables that define heterogeneous subgroups and handling data that are missing not at random, a prevalent issue in fields like transcriptomics. While several notable methods have been proposed to address these problems, they typically tackle each issue in isolation, thereby limiting their flexibility and adaptability. This paper introduces a unified framework designed to address these challenges simultaneously. Our approach incorporates a data-driven penalty matrix into penalized clustering to enable more flexible variable selection, along with a mechanism that explicitly models the relationship between missingness and latent class membership. We demonstrate that, under certain regularity conditions, the proposed framework achieves both asymptotic consistency and selection consistency, even in the presence of missing data. This unified strategy significantly enhances the capability and efficiency of model-based clustering, advancing methodologies for identifying informative variables that define homogeneous subgroups in the presence of complex missing data patterns. The performance of the framework, including its computational efficiency, is evaluated through simulations and demonstrated using both synthetic and real-world transcriptomic datasets.
Building sustainable food systems that are resilient to climate change will require improved agricultural management and policy. One common practice that is well-known to benefit crop yields is crop rotation, yet there remains limited understanding of how the benefits of crop rotation vary for different crop sequences and for different weather conditions. To address these gaps, we leverage crop type maps, satellite data, and causal machine learning to study how precrop effects on subsequent yields vary with cropping sequence choice and weather. Complementing and going beyond what is known from randomized field trials, we find that (i) for those farmers who do rotate, the most common precrop choices tend to be among the most beneficial, (ii) the effects of switching from a simple rotation (which alternates between two crops) to a more diverse rotation were typically small and sometimes even negative, (iii) precrop effects tended to be greater under rainier conditions, (iv) precrop effects were greater under warmer conditions for soybean yields but not for other crops, and (v) legume precrops conferred smaller benefits under warmer conditions. Our results and the methods we use can enable farmers and policy makers to identify which rotations will be most effective at improving crop yields in a changing climate.
When two variables depend on the same or similar underlying network, their shared network dependence structure can lead to spurious associations. While statistical associations between two variables sampled from interconnected subjects are a common inferential goal across various fields, little research has focused on how to disentangle shared dependence for valid statistical inference. We revisit two different approaches from distinct fields that may address shared network dependence: the pre-whitening approach, commonly used in time series analysis to remove the shared temporal dependence, and the network autocorrelation model, widely used in network analysis often to examine or account for autocorrelation of the outcome variable. We demonstrate how each approach implicitly entails assumptions about how a variable of interest propagates among nodes via network ties given the network structure. We further propose adaptations of existing pre-whitening methods to the network setting by explicitly reflecting underlying assumptions about "level of interaction" that induce network dependence, while accounting for its unique complexities. Our simulation studies demonstrate the effectiveness of the two approaches in reducing spurious associations due to shared network dependence when their respective assumptions hold. However, the results also show the sensitivity to assumption violations, underscoring the importance of correctly specifying the shared dependence structure based on available network information and prior knowledge about the interactions driving dependence.
This paper advances the general theory of continuous sparse regularisation on measures with the Beurling-LASSO (BLASSO). This TV-regularised convex program on the space of measures makes it possible to recover a sparse measure from a noisy observation through a measurement operator. While previous works have uncovered the central role played by this operator and its associated kernel in obtaining estimation error bounds, such bounds require a technical local positive curvature (LPC) assumption to be verified on a case-by-case basis. In practice, only a few LPC kernels are known for which this condition has been proved. In this paper, we prove that the ``sinc-4'' kernel, used for signal recovery and mixture problems, does satisfy the LPC assumption. Furthermore, we introduce the kernel switch analysis, which allows a known LPC kernel to be leveraged as a pivot kernel to prove error bounds. Together, these results provide easy-to-check conditions to obtain error bounds for a large family of translation-invariant model kernels. Moreover, we show that known BLASSO guarantees can be made adaptive to the noise level, improving on previous results where this level is fixed by parameters depending on the model kernel. We illustrate the usefulness of our results in the case of mixture model estimation, using band-limiting smoothing and sketching techniques to reduce the computational burden of BLASSO.
The principle of allocating an equal number of patients to each arm of a randomized controlled trial remains widely believed to be optimal for maximising statistical power. However, this long-held belief only holds true if the treatment groups have equal outcome variances, a condition that is often not met or simply not assessed in practice. This paper reasserts the fact that a departure from a 1:1 ratio can maintain or improve statistical power while increasing the benefits to participants. The benefit is particularly evident for binary and time-to-event endpoints, where variances are determined by the assumed success or event rates. To illustrate this, we present two case studies: a small-scale metastatic melanoma trial with a binary endpoint and a larger trial evaluating virtual reality for pain reduction with a continuous endpoint. Our simulations compare equal randomisation, preplanned fixed unequal randomisation, and response-adaptive randomisation targeting Neyman allocation. Results show that unequal allocation can increase the proportion of patients receiving the superior treatment without reducing power, with modest power gains observed in both binary and continuous settings, highlighting the practical relevance of optimised allocation strategies across trial types and sizes.
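The point about binary endpoints is easy to illustrate: Neyman allocation assigns patients in proportion to the outcome standard deviations $\sqrt{p(1-p)}$. The sketch below compares power under 1:1 and Neyman allocation with a normal approximation; the response rates and total sample size are illustrative assumptions, not the paper's case-study values.

```python
# Sketch: power of a two-proportion z-test under equal versus Neyman allocation.
import numpy as np
from scipy import stats

def power_two_prop(p1, p2, n1, n2, alpha=0.05):
    se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z = stats.norm.isf(alpha / 2)
    delta = abs(p1 - p2) / se
    return stats.norm.sf(z - delta) + stats.norm.cdf(-z - delta)

p_ctrl, p_trt, N = 0.15, 0.40, 120
s_ctrl, s_trt = np.sqrt(p_ctrl * (1 - p_ctrl)), np.sqrt(p_trt * (1 - p_trt))
n_trt = round(N * s_trt / (s_trt + s_ctrl))            # Neyman: allocate proportionally to the SDs
print("equal 1:1 power  :", round(power_two_prop(p_ctrl, p_trt, N // 2, N // 2), 3))
print(f"Neyman {N - n_trt}:{n_trt} power:",
      round(power_two_prop(p_ctrl, p_trt, N - n_trt, n_trt), 3))
```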
We study the Gaussian sequence model, i.e. $X \sim N(\mathbf{\theta}, I_\infty)$, where $\mathbf{\theta} \in \Gamma$ and the parameter set $\Gamma \subset \ell_2$ is assumed to be convex and compact. We show that the goodness-of-fit testing sample complexity is lower bounded by the square root of the estimation complexity whenever $\Gamma$ is orthosymmetric. This lower bound is tight when $\Gamma$ is also quadratically convex (as shown by [Donoho et al. 1990, Neykov 2023]). We also completely characterize the likelihood-free hypothesis testing (LFHT) complexity for $\ell_p$-bodies, discovering new types of tradeoff between the numbers of simulation and observation samples, compared to the case of ellipsoids ($p = 2$) studied in [Gerber and Polyanskiy 2024].
Stepped-wedge cluster-randomized trials (SW-CRTs) are widely used in healthcare and implementation science, providing an ethical advantage by ensuring all clusters eventually receive the intervention. The staggered rollout of treatment introduces complexities in defining and estimating treatment effect estimands, particularly under informative cluster sizes. Traditional model-based methods, including generalized estimating equations (GEE) and linear mixed models (LMM), produce estimates that depend on implicit weighting schemes and parametric assumptions, leading to bias for different types of estimands in the presence of informative cluster sizes. While recent methods have attempted to provide robust estimation in SW-CRTs, they either impose restrictive modeling assumptions or lack a general framework for consistently estimating multiple estimands under informative cluster sizes. In this article, we propose a model-robust standardization framework for SW-CRTs that generalizes existing methods from parallel-arm CRTs. We define causal estimands including horizontal-individual, horizontal-cluster, vertical-individual, and vertical-cluster average treatment effects under a super-population framework and introduce an augmented standardization estimator that standardizes parametric and semiparametric working models while maintaining robustness to informative cluster size under arbitrary misspecification. We evaluate the finite-sample properties of our proposed estimators through extensive simulation studies, assessing their performance under various SW-CRT designs. Finally, we illustrate the practical application of model-robust estimation through a reanalysis of two real-world SW-CRTs.
Sliced Optimal Transport (SOT) is a rapidly developing branch of optimal transport (OT) that exploits the tractability of one-dimensional OT problems. By combining tools from OT, integral geometry, and computational statistics, SOT enables fast and scalable computation of distances, barycenters, and kernels for probability measures, while retaining rich geometric structure. This paper provides a comprehensive review of SOT, covering its mathematical foundations, methodological advances, computational methods, and applications. We discuss key concepts of OT and one-dimensional OT, the role of tools from integral geometry such as the Radon transform in projecting measures, and statistical techniques for estimating sliced distances. The paper further explores recent methodological advances, including non-linear projections, improved Monte Carlo approximations, statistical estimation techniques for one-dimensional optimal transport, weighted slicing techniques, and transportation plan estimation methods. Variational problems, such as minimum sliced Wasserstein estimation, barycenters, gradient flows, kernel constructions, and embeddings are examined alongside extensions to unbalanced, partial, multi-marginal, and Gromov-Wasserstein settings. Applications span machine learning, statistics, computer graphics, and computer vision, highlighting SOT's versatility as a practical computational tool. This work will be of interest to researchers and practitioners in machine learning, data science, and computational disciplines seeking efficient alternatives to classical OT.
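As a reminder of the core computation the review builds on, a Monte Carlo estimate of the sliced Wasserstein-2 distance between two equal-size empirical measures only requires random projections and sorting; the number of projections and the toy Gaussian samples below are assumptions for illustration.

```python
# Sketch: Monte Carlo sliced Wasserstein-2 distance between empirical measures.
import numpy as np

def sliced_wasserstein2(X, Y, n_proj=200, seed=0):
    rng = np.random.default_rng(seed)
    thetas = rng.normal(size=(n_proj, X.shape[1]))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)    # directions on the unit sphere
    sq = 0.0
    for theta in thetas:
        x_proj, y_proj = np.sort(X @ theta), np.sort(Y @ theta)
        sq += np.mean((x_proj - y_proj) ** 2)                  # 1-D W2^2 via the quantile coupling
    return np.sqrt(sq / n_proj)

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(500, 5))
Y = rng.normal(0.5, 1.0, size=(500, 5))
print(round(sliced_wasserstein2(X, Y), 3))
```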
This paper introduces SpaPool, a novel pooling method that combines the strengths of both dense and sparse techniques for a graph neural network. SpaPool groups vertices into an adaptive number of clusters, leveraging the benefits of both dense and sparse approaches. It aims to maintain the structural integrity of the graph while reducing its size efficiently. Experimental results on several datasets demonstrate that SpaPool achieves competitive performance compared to existing pooling techniques and excels particularly on small-scale graphs. This makes SpaPool a promising method for applications requiring efficient and effective graph processing.
Methods for split conformal prediction leverage calibration samples to transform any prediction rule into a set-prediction rule that complies with a target coverage probability. Existing methods provide remarkably strong performance guarantees with minimal computational costs. However, they require calibration samples composed of labeled examples distinct from those used for training. This requirement can be highly inconvenient, as it prevents the use of all labeled examples for training and may require acquiring additional labels solely for calibration. This paper presents an effective methodology for split conformal prediction with unsupervised calibration for classification tasks. In the proposed approach, set-prediction rules are obtained using unsupervised calibration samples together with the supervised training samples previously used to learn the classification rule. Theoretical and experimental results show that the presented methods can achieve performance comparable to that with supervised calibration, at the expense of a moderate degradation in performance guarantees and computational efficiency.
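For context, the supervised-calibration baseline that the paper relaxes looks as follows in a classification task; this is a sketch with an assumed logistic-regression model and the common score of one minus the predicted probability of the true label, and the unsupervised-calibration method itself is not reproduced here.

```python
# Sketch of standard split conformal classification with a supervised calibration set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=10, n_informative=5,
                           n_classes=3, random_state=0)
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_te, y_cal, y_te = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
alpha = 0.1
cal_scores = 1.0 - clf.predict_proba(X_cal)[np.arange(len(y_cal)), y_cal]
level = np.ceil((len(y_cal) + 1) * (1 - alpha)) / len(y_cal)    # finite-sample correction
q = np.quantile(cal_scores, level)

sets = clf.predict_proba(X_te) >= 1.0 - q                       # label kept if its score <= q
coverage = sets[np.arange(len(y_te)), y_te].mean()
print(f"coverage {coverage:.3f} (target {1 - alpha}), mean set size {sets.sum(axis=1).mean():.2f}")
```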
We consider a problem of covariance estimation from a sample of i.i.d. high-dimensional random vectors. To avoid the curse of dimensionality, we impose an additional assumption on the structure of the covariance matrix $\Sigma$. To be more precise, we study the case when $\Sigma$ can be approximated by a sum of double Kronecker products of smaller matrices in a tensor train (TT) format. Our setup naturally extends the widely known Kronecker sum and CANDECOMP/PARAFAC models but admits richer interaction across modes. We suggest an iterative polynomial-time algorithm based on TT-SVD and higher-order orthogonal iteration (HOOI) adapted to a Tucker-2 hybrid structure. We derive non-asymptotic dimension-free bounds on the accuracy of covariance estimation, taking into account the hidden Kronecker product and tensor train structures. The efficiency of our approach is illustrated with numerical experiments.
Concerns about the misuse and misinterpretation of p-values and statistical significance have motivated alternatives for quantifying evidence. We define a generalized form of Jeffreys's approximate objective Bayes factor (eJAB), a one-line calculation that is a function of the p-value, sample size, and parameter dimension. We establish conditions under which eJAB is model-selection consistent and verify them for ten statistical tests. We assess finite-sample accuracy by comparing eJAB with Markov chain Monte Carlo computed Bayes factors in 12 simulation studies. We then apply eJAB to 71,126 results from this http URL (CTG) and find that the proportion of findings with $\text{p-value} \le \alpha$ yet $eJAB_{01}>1$ (favoring the null) closely tracks the significance level $\alpha$, suggesting that such contradictions point to type I errors. We catalog 4,088 such candidate type I errors and provide details for 131 with reported $\text{p-value} \le 0.01$. We also identify 487 instances of the Jeffreys-Lindley paradox. Finally, we estimate that 75% (6%) of clinical trial plans from CTG set $\alpha \ge 0.05$ as the target evidence threshold, and that 35.5% (0.22%) of results significant at $\alpha =0.05$ correspond to evidence that is no stronger than anecdotal under eJAB.
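To convey the flavour of such a one-line calculation (this is a classical BIC-style stand-in, not the paper's eJAB formula), one can map a p-value back to a chi-square statistic $q$ with $d$ degrees of freedom and form $BF_{01} \approx n^{d/2} e^{-q/2}$, which depends only on the p-value, sample size, and parameter dimension and already exhibits the Jeffreys-Lindley behaviour the paper counts.

```python
# Hedged illustration: a BIC-style approximate Bayes factor from (p-value, n, d).
import numpy as np
from scipy import stats

def approx_bf01(p_value, n, d=1):
    q = stats.chi2.isf(p_value, df=d)          # test statistic recovered from the p-value
    return n ** (d / 2) * np.exp(-q / 2)       # BF01 > 1 favours the null

print(round(approx_bf01(0.01, n=200), 2))        # moderate n: BF01 < 1, leans against H0
print(round(approx_bf01(0.01, n=1_000_000), 1))  # huge n: BF01 >> 1, favours H0 despite p = 0.01
```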
The tidyverse is a popular meta-package comprising several core R packages to aid in various data science tasks, including data import, manipulation and visualisation. Although the functionalities offered by the tidyverse can generally be replicated using other packages, its widespread adoption in both teaching and practice indicates there are factors contributing to its preference, despite some debate over its usage. This suggests that particular aspects, such as interface design, may play a significant role in its selection. Examining the interface design can potentially reveal aspects that aid the design process for developers. While the tidyverse has been lauded for adopting a user-centered design, arguably some elements of the design focus on the work domain instead of the end-user. We examine the tidyverse interface design through the lens of human-computer interaction, with an emphasis on data visualisation and data wrangling, to identify factors that might serve as a model for developers designing their own packages. We recommend that developers adopt an iterative design that is informed by user feedback, analysis and complete coverage of the work domain, and ensure perceptual visibility of system constraints and relationships.
Multi-type Markov point processes offer a flexible framework for modelling complex multi-type point patterns where it is pertinent to capture both interactions between points and large-scale trends depending on observed covariates. However, estimation of interaction and covariate effects may be seriously biased in the presence of unobserved spatial confounders. In this paper we introduce a new class of semi-parametric Markov point processes that adjusts for spatial confounding through a non-parametric factor that accommodates effects of latent spatial variables common to all types of points. We introduce a conditional pseudo-likelihood for parameter estimation and show that the resulting estimator has desirable asymptotic properties. Our methodology has great potential, not least in studies of industry agglomeration, and we apply it to study spatial patterns of locations of two types of banks in France.
We develop an extension of posterior sampling for reinforcement learning (PSRL) that is suited for a continuing agent-environment interface and integrates naturally into agent designs that scale to complex environments. The approach, continuing PSRL, maintains a statistically plausible model of the environment and follows a policy that maximizes expected $\gamma$-discounted return in that model. At each time, with probability $1-\gamma$, the model is replaced by a sample from the posterior distribution over environments. For a choice of discount factor that suitably depends on the horizon $T$, we establish an $\tilde{O}(\tau S \sqrt{A T})$ bound on the Bayesian regret, where $S$ is the number of environment states, $A$ is the number of actions, and $\tau$ denotes the reward averaging time, which is a bound on the duration required to accurately estimate the average reward of any policy. Our work is the first to formalize and rigorously analyze the resampling approach with randomized exploration.
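A tabular sketch conveys the resampling scheme: at each step, with probability $1-\gamma$, redraw an environment from the posterior and recompute the $\gamma$-discounted greedy policy. The Dirichlet/Gaussian posteriors, toy random MDP, and value-iteration settings below are illustrative assumptions; the regret analysis is, of course, not reproduced.

```python
# Sketch of continuing PSRL on a small random MDP with conjugate-style posteriors.
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma, T = 5, 3, 0.95, 5000
P_true = rng.dirichlet(np.ones(S), size=(S, A))          # true transition kernel, shape (S, A, S)
R_true = rng.uniform(0, 1, size=(S, A))                  # true mean rewards

def greedy_policy(P, R, sweeps=200):
    """gamma-discounted value iteration; returns the greedy policy of the sampled model."""
    V = np.zeros(S)
    for _ in range(sweeps):
        Q = R + gamma * P @ V
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

trans_counts = np.ones((S, A, S))                        # Dirichlet(1, ..., 1) prior
r_sum, r_n = np.zeros((S, A)), np.ones((S, A))           # crude Gaussian posterior for rewards
policy, s = greedy_policy(np.full((S, A, S), 1 / S), r_sum / r_n), 0
for t in range(T):
    if rng.random() < 1 - gamma:                         # resample the environment model
        P_hat = np.array([[rng.dirichlet(trans_counts[i, a]) for a in range(A)] for i in range(S)])
        R_hat = r_sum / r_n + rng.normal(size=(S, A)) / np.sqrt(r_n)
        policy = greedy_policy(P_hat, R_hat)
    a = policy[s]
    s_next = rng.choice(S, p=P_true[s, a])
    trans_counts[s, a, s_next] += 1
    r_sum[s, a] += R_true[s, a] + 0.1 * rng.normal()
    r_n[s, a] += 1
    s = s_next
print("final greedy policy:", policy)
```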
The PAC-Bayesian framework has significantly advanced the understanding of statistical learning, particularly for majority voting methods. Despite its successes, its application to multi-view learning -- a setting with multiple complementary data representations -- remains underexplored. In this work, we extend PAC-Bayesian theory to multi-view learning, introducing novel generalization bounds based on Rényi divergence. These bounds provide an alternative to traditional Kullback-Leibler divergence-based counterparts, leveraging the flexibility of Rényi divergence. Furthermore, we propose first- and second-order oracle PAC-Bayesian bounds and extend the C-bound to multi-view settings. To bridge theory and practice, we design efficient self-bounding optimization algorithms that align with our theoretical results.
Graph Neural Networks (GNNs) have demonstrated strong performance in graph representation learning across various real-world applications. However, they often produce biased predictions caused by sensitive attributes, such as religion or gender, an issue that has been largely overlooked in existing methods. Recently, numerous studies have focused on reducing biases in GNNs. However, these approaches often rely on training with partial data (e.g., using either node features or graph structure alone), which can enhance fairness but frequently compromises model utility due to the limited utilization of available graph information. To address this tradeoff, we propose an effective strategy to balance fairness and utility in knowledge distillation. Specifically, we introduce FairDTD, a novel fair representation learning framework built on Dual-Teacher Distillation, leveraging a causal graph model to guide and optimize the design of the distillation process. FairDTD employs two fairness-oriented teacher models, a feature teacher and a structure teacher, to facilitate dual distillation, with the student model learning fairness knowledge from the teachers while also leveraging full data to mitigate utility loss. To enhance information transfer, we incorporate graph-level distillation to provide an indirect supplement of graph information during training, as well as a node-specific temperature module to improve the comprehensive transfer of fair knowledge. Experiments on diverse benchmark datasets demonstrate that FairDTD achieves optimal fairness while preserving high model utility, showcasing its effectiveness in fair representation learning for GNNs.
We study the identification of causal effects in the presence of different types of constraints (e.g., logical constraints) in addition to the causal graph. These constraints impose restrictions on the models (parameterizations) induced by the causal graph, reducing the set of models considered by the identifiability problem. We formalize the notion of constrained identifiability, which takes a set of constraints as another input to the classical definition of identifiability. We then introduce a framework for testing constrained identifiability by employing tractable Arithmetic Circuits (ACs), which enables us to accommodate constraints systematically. We show that this AC-based approach is at least as complete as existing algorithms (e.g., do-calculus) for testing classical identifiability, which only assumes the constraint of strict positivity. We use examples to demonstrate the effectiveness of this AC-based approach by showing that unidentifiable causal effects may become identifiable under different types of constraints.
Many data clustering applications must handle objects that cannot be represented as vectors. In this context, the bag-of-vectors representation describes complex objects through discrete distributions, for which the Wasserstein distance provides a well-conditioned dissimilarity measure. Kernel methods extend this by embedding distance information into feature spaces that facilitate analysis. However, an unsupervised framework that combines kernels with Wasserstein distances for clustering distributional data is still lacking. We address this gap by introducing a computationally tractable framework that integrates Wasserstein metrics with kernel methods for clustering. The framework can accommodate both vectorial and distributional data, enabling applications in various domains. It comprises three components: (i) an efficient approximation of pairwise Wasserstein distances using multiple reference distributions; (ii) shifted positive definite kernel functions based on Wasserstein distances, combined with kernel principal component analysis for feature mapping; and (iii) scalable, distance-agnostic validity indices for clustering evaluation and kernel parameter optimization. Experiments on power distribution graphs and real-world time series demonstrate the effectiveness and efficiency of the proposed framework.
Complex-valued neural networks (CVNNs) are particularly suitable for handling phase-sensitive signals, including electrocardiography (ECG), radar/sonar, and wireless in-phase/quadrature (I/Q) streams. Nevertheless, their \emph{interpretability} and \emph{probability calibration} remain insufficiently investigated. In this work, we present a Newton--Puiseux framework that examines the \emph{local decision geometry} of a trained CVNN by (i) fitting a small, kink-aware polynomial surrogate to the \emph{logit difference} in the vicinity of uncertain inputs, and (ii) factorizing this surrogate using Newton--Puiseux expansions to derive analytic branch descriptors, including exponents, multiplicities, and orientations. These descriptors provide phase-aligned directions that induce class flips in the original network and allow for a straightforward, \emph{multiplicity-guided} temperature adjustment for improved calibration. We outline assumptions and diagnostic measures under which the surrogate proves informative and characterize potential failure modes arising from piecewise-holomorphic activations (e.g., modReLU). Our phase-aware analysis identifies sensitive directions and improves the Expected Calibration Error in two case studies beyond a controlled $\mathbb{C}^2$ synthetic benchmark -- namely, the MIT--BIH arrhythmia (ECG) dataset and RadioML 2016.10a (wireless modulation) -- when compared to uncalibrated softmax and standard post-hoc baselines. We also present confidence intervals, non-parametric tests, and quantify sensitivity to inaccuracies in estimating branch multiplicity. Crucially, this method requires no modifications to the architecture and applies to any CVNN with complex logits transformed to real moduli.
Chaotic systems are intrinsically sensitive to small errors, challenging efforts to construct predictive data-driven models of real-world dynamical systems such as fluid flows or neuronal activity. Prior efforts comprise either specialized models trained separately on individual time series, or foundation models trained on vast time series databases with little underlying dynamical structure. Motivated by dynamical systems theory, we present Panda, Patched Attention for Nonlinear Dynamics. We train Panda on a novel synthetic, extensible dataset of $2 \times 10^4$ chaotic dynamical systems that we discover using an evolutionary algorithm. Trained purely on simulated data, Panda exhibits emergent properties: zero-shot forecasting of unseen chaotic systems preserving both short-term accuracy and long-term statistics. Despite having been trained only on low-dimensional ordinary differential equations, Panda spontaneously develops the ability to predict partial differential equations without retraining. We also demonstrate a neural scaling law for differential equations, underscoring the potential of pretrained models for probing abstract mathematical domains like nonlinear dynamics.
Linear TD($\lambda$) is one of the most fundamental reinforcement learning algorithms for policy evaluation. Previously, convergence rates were typically established under the assumption of linearly independent features, which does not hold in many practical scenarios. This paper instead establishes the first $L^2$ convergence rates for linear TD($\lambda$) operating with arbitrary features, without making any algorithmic modification or additional assumptions. Our results apply to both the discounted and average-reward settings. To address the potential non-uniqueness of solutions resulting from arbitrary features, we develop a novel stochastic approximation result featuring convergence rates to the solution set instead of a single point.
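For reference, plain discounted linear TD($\lambda$) with accumulating eligibility traces takes only a few lines; the toy chain, step size, and the deliberately redundant (linearly dependent) features below are assumptions used only to mimic the regime the paper analyses.

```python
# Sketch: linear TD(lambda) with accumulating traces and linearly dependent features.
import numpy as np

rng = np.random.default_rng(0)
n_states, gamma, lam, alpha = 5, 0.9, 0.8, 0.05
Phi = np.hstack([np.eye(n_states), np.eye(n_states)[:, :2]])   # redundant feature columns
w = np.zeros(Phi.shape[1])
z = np.zeros(Phi.shape[1])

s = 2
for _ in range(50_000):
    s_next = min(max(s + rng.choice([-1, 1]), 0), n_states - 1)   # uniform random walk policy
    r = 1.0 if s_next == n_states - 1 else 0.0
    delta = r + gamma * Phi[s_next] @ w - Phi[s] @ w               # TD error
    z = gamma * lam * z + Phi[s]                                   # eligibility trace update
    w += alpha * delta * z
    s = s_next
print("estimated state values:", np.round(Phi @ w, 3))
```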
We study decision-making problems where data comprises points from a collection of binary polytopes, capturing aggregate information stemming from various combinatorial selection environments. We propose a nonparametric approach for counterfactual inference in this setting based on a representative agent model, where the available data is viewed as arising from maximizing separable concave utility functions over the respective binary polytopes. Our first contribution is to precisely characterize the selection probabilities representable under this model and show that verifying the consistency of any given aggregated selection dataset reduces to solving a polynomial-sized linear program. Building on this characterization, we develop a nonparametric method for counterfactual prediction. When data is inconsistent with the model, finding a best-fitting approximation for prediction reduces to solving a compact mixed-integer convex program. Numerical experiments based on synthetic data demonstrate the method's flexibility, predictive accuracy, and strong representational power even under model misspecification.
Riemannian representation learning typically relies on an encoder to estimate densities on chosen manifolds. This involves optimizing numerically brittle objectives, potentially harming model training and quality. To completely circumvent this issue, we introduce the Riemannian generative decoder, a unifying approach for finding manifold-valued latents on any Riemannian manifold. Latents are learned with a Riemannian optimizer while jointly training a decoder network. By discarding the encoder, we vastly simplify the manifold constraint compared to current approaches which often only handle few specific manifolds. We validate our approach on three case studies -- a synthetic branching diffusion process, human migrations inferred from mitochondrial DNA, and cells undergoing a cell division cycle -- each showing that learned representations respect the prescribed geometry and capture intrinsic non-Euclidean structure. Our method requires only a decoder, is compatible with existing architectures, and yields interpretable latent spaces aligned with data geometry. Code available on this https URL.
We consider off-policy evaluation (OPE) in contextual bandits with a finite action space. Inverse Propensity Score (IPS) weighting is a widely used method for OPE due to its unbiasedness, but it suffers from high variance when the action space is large or when some parts of the context-action space are underexplored. Recently introduced Marginalized IPS (MIPS) estimators mitigate this issue by leveraging action embeddings. However, these embeddings do not minimize the mean squared error (MSE) of the estimators and do not consider context information. To address these limitations, we introduce Context-Action Embedding Learning for MIPS, or CAEL-MIPS, which learns context-action embeddings from offline data to minimize the MSE of the MIPS estimator. Building on a theoretical analysis of the bias and variance of MIPS, we present an MSE-minimizing objective for CAEL-MIPS. In empirical studies on a synthetic dataset and a real-world dataset, we demonstrate that our estimator outperforms baselines in terms of MSE.
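A toy sketch contrasting vanilla IPS with a marginalized-IPS-style estimator that importance-weights a given low-cardinality action embedding instead of the action itself; the context-free policies, deterministic embedding map, and embedding-determined rewards are illustrative assumptions, and the embedding-learning objective of CAEL-MIPS is not reproduced.

```python
# Sketch: IPS versus a marginalized-IPS-style estimator over action embeddings.
import numpy as np

rng = np.random.default_rng(0)
n, n_actions, n_emb = 5000, 50, 5
emb = rng.integers(n_emb, size=n_actions)          # deterministic action -> embedding map
pi_b = rng.dirichlet(np.ones(n_actions))           # logging policy (context-free toy)
pi_e = rng.dirichlet(np.ones(n_actions))           # target policy
q = rng.uniform(size=n_emb)                        # expected reward depends on the embedding only

a = rng.choice(n_actions, size=n, p=pi_b)
r = rng.binomial(1, q[emb[a]])

ips = np.mean(pi_e[a] / pi_b[a] * r)               # vanilla IPS: noisy with 50 actions

p_e_emb = np.bincount(emb, weights=pi_e, minlength=n_emb)   # embedding marginal under pi_e
p_b_emb = np.bincount(emb, weights=pi_b, minlength=n_emb)   # embedding marginal under pi_b
mips = np.mean(p_e_emb[emb[a]] / p_b_emb[emb[a]] * r)

truth = np.sum(pi_e * q[emb])
print(f"true value {truth:.3f} | IPS {ips:.3f} | MIPS-style {mips:.3f}")
```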
This paper establishes central limit theorems for Polyak-Ruppert averaged Q-learning under asynchronous updates. We prove a non-asymptotic central limit theorem, where the convergence rate in Wasserstein distance explicitly reflects the dependence on the number of iterations, state-action space size, the discount factor, and the quality of exploration. In addition, we derive a functional central limit theorem, showing that the partial-sum process converges weakly to a Brownian motion.