This note corrects a technical error in Guardiola (2020, Journal of Statistical Distributions and Applications), presents updated derivations, and offers an extended discussion of the properties of the spherical Dirichlet distribution. Data mining and gene expression analysis are at the forefront of modern data analysis, and here we introduce a novel probability distribution applicable in these fields. The proposed Spherical-Dirichlet Distribution is designed to fit vectors located on the positive orthant of the hypersphere, as is often the case for data in these fields, thereby avoiding unnecessary probability mass. Basic properties of the proposed distribution, including its normalizing constant and moments, are derived. Relationships with other distributions are also explored. Estimators based on classical inferential statistics, such as the method of moments and maximum likelihood, are obtained. Two applications are developed: the first uses simulated data, and the second uses a real text mining example. Both are fitted with the proposed Spherical-Dirichlet Distribution, and the results are discussed.
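As an illustration of the kind of data the Spherical-Dirichlet Distribution targets, the following minimal Python sketch generates unit vectors on the positive orthant of the hypersphere by taking square roots of Dirichlet draws; this is a generic construction for producing such data, not the estimation procedure of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def positive_orthant_sphere_samples(alpha, n):
    """Draw unit vectors on the positive orthant of the hypersphere.

    Square roots of Dirichlet(alpha) draws lie on the unit sphere because
    the squared coordinates sum to one; this is one simple way to generate
    data of the kind the Spherical-Dirichlet distribution is meant to fit.
    """
    x = rng.dirichlet(alpha, size=n)       # rows sum to 1, all coordinates >= 0
    return np.sqrt(x)                      # rows now have unit Euclidean norm

samples = positive_orthant_sphere_samples(alpha=[2.0, 3.0, 5.0], n=1000)
print(np.allclose(np.sum(samples**2, axis=1), 1.0))  # True: points lie on the sphere
```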
Many modern products exhibit high reliability, often resulting in long times to failure. Consequently, conducting experiments under normal operating conditions may require an impractically long duration to obtain sufficient failure data for reliable statistical inference. As an alternative, accelerated life tests (ALTs) are employed to induce earlier failures and thereby reduce testing time. In step-stress experiments, a stress factor that accelerates product degradation is identified and systematically increased to provoke early failures. The stress level is increased at predetermined time points and held constant between them. Failure data observed under increased levels of stress are statistically analyzed, and the results are then extrapolated to normal operating conditions. Classical estimation methods for such analyses rely on the maximum likelihood estimator (MLE), which is known to be very efficient but lacks robustness in the presence of outlying data. In this work, Minimum Density Power Divergence Estimators (MDPDEs) are proposed as a robust alternative, offering an appealing compromise between efficiency and robustness. The MDPDE based on mixed distributions is developed, and its theoretical properties, including the expression for the asymptotic distribution of the model parameters, are derived under exponential lifetime assumptions. The good performance of the proposed method is evaluated through simulation studies, and its applicability is demonstrated using real data.
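The following is a minimal sketch of a minimum density power divergence fit for a single exponential lifetime sample under contamination, contrasted with the MLE; it ignores the step-stress and mixed-distribution structure of the paper, and the tuning parameter 0.5 is an arbitrary illustrative choice.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=200)          # clean lifetimes, true rate 0.5
x[:10] = 50.0                                     # a few outlying observations

def dpd_objective(lam, data, alpha):
    """Density power divergence objective for an Exponential(rate=lam) model.

    H_n(lam) = int f^(1+alpha) - (1 + 1/alpha) * mean(f(x)^alpha), where
    int f^(1+alpha) = lam**alpha / (1 + alpha) for the exponential density.
    """
    f = lam * np.exp(-lam * data)
    return lam**alpha / (1 + alpha) - (1 + 1 / alpha) * np.mean(f**alpha)

mle = 1.0 / x.mean()                               # alpha -> 0 recovers the MLE
mdpde = minimize_scalar(dpd_objective, bounds=(1e-4, 10.0), args=(x, 0.5),
                        method="bounded").x
print(mle, mdpde)                                  # the MDPDE is less distorted by the outliers
```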
This paper focuses on Geodesic Principal Component Analysis (GPCA) on a collection of probability distributions using the Otto-Wasserstein geometry. The goal is to identify geodesic curves in the space of probability measures that best capture the modes of variation of the underlying dataset. We first address the case of a collection of Gaussian distributions, and show how to lift the computations in the space of invertible linear maps. For the more general setting of absolutely continuous probability measures, we leverage a novel approach to parameterizing geodesics in Wasserstein space with neural networks. Finally, we compare to classical tangent PCA through various examples and provide illustrations on real-world datasets.
In an ecological context, panel data arise when time series measurements are made on a collection of ecological processes. Each process may correspond to a spatial location for field data, or to an experimental ecosystem in a designed experiment. Statistical models for ecological panel data should capture the high levels of nonlinearity, stochasticity, and measurement uncertainty inherent in ecological systems. Furthermore, the system dynamics may depend on unobservable variables. This study applies iterated particle filtering techniques to explore new possibilities for likelihood-based statistical analysis of these complex systems. We analyze data from a mesocosm experiment in which two species of the freshwater planktonic crustacean genus, {\it Daphnia}, coexist with an alga and a fungal parasite. Time series data were collected on replicated mesocosms under six treatment conditions. Iterated filtering enables maximization of the likelihood for scientifically motivated nonlinear partially observed Markov process models, providing access to standard likelihood-based methods for parameter estimation, confidence intervals, hypothesis testing, model selection and diagnostics. This toolbox allows scientists to propose and evaluate scientifically motivated stochastic dynamic models for panel data, constrained only by the requirement to write code to simulate from the model and to specify a measurement distribution describing how the system state is observed.
Exponential Random Graph Models (ERGMs) are inferential models for analysing statistical networks. Recent developments in ERGMs use a hierarchical Bayesian setup to jointly model a group of networks, known as multiple-network Exponential Random Graph Models (MN-ERGMs). MN-ERGMs have been successfully applied to real-world resting-state fMRI data from the Cam-CAN project to infer brain connectivity in aging. However, the conventional Bayesian ERGM estimation approach is computationally intensive and lacks scalability due to the intractable ERGM likelihood. We address this key limitation by using neural posterior estimation (NPE), which trains a neural network-based conditional density estimator to infer the posterior. We propose an Amortised Hierarchical Sequential Neural Posterior Estimation (AHS-NPE) method and various ERGM-specific adjustment schemes to target the Bayesian hierarchical structure of MN-ERGMs. Our proposed method contributes to the ERGM literature as a highly scalable solution: we use AHS-NPE to reproduce the fitting results on the Cam-CAN data application and further scale it up to a larger sample size. More importantly, AHS-NPE contributes to the general NPE literature as a new hierarchical NPE approach that preserves amortisation and sequential refinement, and it can be applied to a variety of study fields.
System reliability assessment (SRA) is a challenging task due to limited experimental data and the complex nature of system structures. Despite a long history dating back to \cite{buehler1957confidence}, exact methods have only been applied to SRA for simple systems. High-order asymptotic methods, such as the Cornish-Fisher expansion, have become popular for balancing computational efficiency with improved accuracy when data are limited, but frequently encounter the "bend-back" problem in high-reliability scenarios and require complex analytical computations. To overcome these limitations, we propose a novel method for SRA by modifying the double bootstrap framework, termed the double bootstrap percentile with transformed resamples. In particular, we design a nested resampling process for log-location-scale lifetime models, eliminating the computational burden caused by the iterative resampling process involved in the conventional double bootstrap. We prove that the proposed method maintains the high-order convergence property, thus providing a highly accurate yet computationally efficient confidence limit for system reliability. Moreover, the proposed procedure is straightforward to implement, involving only a simple resampling operation and efficient moment estimation steps. Numerical studies further demonstrate that our approach outperforms state-of-the-art SRA methods and, at the same time, is much less susceptible to the bend-back issue.
Computer simulations serve as powerful tools for scientists and engineers to gain insights into complex systems. Less costly than physical experiments, computer experiments sometimes involve a large number of trials. Conventional design optimization and model fitting methods for computer experiments are inefficient for large-scale problems. In this paper, we propose new methods to optimize good lattice point sets, using less computation to construct designs with enhanced space-filling properties such as high separation distance, low discrepancy, and high separation distance on projections. These designs show promising performance in uncertainty quantification as well as physics-informed neural networks. We also propose a new type of space-filling design called regularly repeated lattice-based Latin hypercube designs, which contain many local space-filling Latin hypercube designs as subdesigns. Such designs facilitate rapid fitting of multiple local Gaussian process models in a moving-window type of modeling approach and are thus useful for large-scale emulation problems.
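For readers unfamiliar with good lattice point sets, here is a minimal sketch of the basic (unoptimized) rank-1 construction; the generator vector is the classical Fibonacci choice in two dimensions, not a design produced by the paper's optimization methods.

```python
import numpy as np
from scipy.spatial.distance import pdist

def good_lattice_points(n, h):
    """Rank-1 good lattice point design: row i is (i * h mod n) / n."""
    i = np.arange(n).reshape(-1, 1)
    return (i * np.asarray(h)) % n / n

# A classical generator vector (Fibonacci lattice in 2D); the paper optimizes such choices.
design = good_lattice_points(n=55, h=[1, 34])
print(design.shape, pdist(design).min())   # separation distance of the design
```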
Motivated by real-world settings where data collection and policy deployment -- whether for a single agent or across multiple agents -- are costly, we study the problem of on-policy single-agent reinforcement learning (RL) and federated RL (FRL) with a focus on minimizing burn-in costs (the sample sizes needed to reach near-optimal regret) and policy switching or communication costs. In parallel finite-horizon episodic Markov Decision Processes (MDPs) with $S$ states and $A$ actions, existing methods either require superlinear burn-in costs in $S$ and $A$ or fail to achieve logarithmic switching or communication costs. We propose two novel model-free RL algorithms -- Q-EarlySettled-LowCost and FedQ-EarlySettled-LowCost -- that are the first in the literature to simultaneously achieve: (i) the best near-optimal regret among all known model-free RL or FRL algorithms, (ii) low burn-in cost that scales linearly with $S$ and $A$, and (iii) logarithmic policy switching cost for single-agent RL or communication cost for FRL. Additionally, we establish gap-dependent theoretical guarantees for both regret and switching/communication costs, improving or matching the best-known gap-dependent bounds.
Accurately identifying the extremal dependence structure in multivariate heavy-tailed data is a fundamental yet challenging task, particularly in financial applications. Following a recently proposed bootstrap-based testing procedure, we apply the methodology to absolute log returns of U.S. S&P 500 and Chinese A-share stocks over a time period well before the U.S. election in 2024. The procedure reveals more isolated clustering of dependent assets in the U.S. economy compared with China which exhibits different characteristics and a more interconnected pattern of extremal dependence. Cross-market analysis identifies strong extremal linkages in sectors such as materials, consumer staples and consumer discretionary, highlighting the effectiveness of the testing procedure for large-scale empirical applications.
The Cox regression model and its Bayesian extensions are widely used in survival analysis. However, standard Bayesian approaches require modeling of the baseline hazard, and their full conditional distributions lack closed-form expressions. Therefore, the Metropolis-Hastings sampling algorithm is typically employed, whose efficiency is highly sensitive to the choice of proposal distribution. To address these issues, we propose the GS4Cox, an efficient Gibbs sampling algorithm for the Cox regression model based on four key components: (i) general Bayesian framework, (ii) composite partial likelihood, (iii) P\'olya-Gamma augmentation scheme, and (iv) finite corrections. Our experiments on both synthetic and actual datasets demonstrate that the GS4Cox algorithm outperforms existing sampling methods in terms of convergence speed and sampling efficiency.
The expressiveness of flow-based models combined with stochastic variational inference (SVI) has, in recent years, expanded the application of optimization-based Bayesian inference to include problems with complex data relationships. However, until now, SVI using flow-based models has been limited to problems of fixed dimension. We introduce CoSMIC normalizing flows (COntextually-Specified Masking for Identity-mapped Components), an extension to neural autoregressive conditional normalizing flow architectures that enables a single amortized variational density to be used for inference over a transdimensional target distribution. We propose a combined stochastic variational transdimensional inference (VTI) approach to training CoSMIC flows using techniques from Bayesian optimization and Monte Carlo gradient estimation. Numerical experiments demonstrate the performance of VTI on challenging problems that scale to high-cardinality model spaces.
Gaussian Process (GP) regression is a popular and sample-efficient approach for many engineering applications where observations are expensive to acquire, and is also a central ingredient of Bayesian optimization (BO), a prevalent method for the optimization of black-box functions. However, when all or some input variables are categorical, building a predictive and computationally efficient GP remains challenging. Starting from the naive target encoding idea, where the original categorical values are replaced with the mean of the target variable for that category, we propose a generalization based on distributional encoding (DE), which makes use of all samples of the target variable for a category. To handle this type of encoding inside the GP, we build upon recent results on characteristic kernels for probability distributions based on the maximum mean discrepancy and the Wasserstein distance. We also discuss several extensions for classification, multi-task learning, and the incorporation of auxiliary information. Our approach is validated empirically, and we demonstrate state-of-the-art predictive performance on a variety of synthetic and real-world datasets. DE is naturally complementary to recent advances in BO over discrete and mixed spaces.
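A minimal sketch of the distributional-encoding idea, assuming an MMD-based kernel between per-category samples of the target variable; the exact kernel, its hyperparameters, and the surrounding GP machinery in the paper may differ.

```python
import numpy as np

def mmd2(a, b, gamma=1.0):
    """Squared maximum mean discrepancy between two 1-D samples (RBF kernel)."""
    k = lambda u, v: np.exp(-gamma * (u[:, None] - v[None, :])**2)
    return k(a, a).mean() + k(b, b).mean() - 2 * k(a, b).mean()

def category_kernel(samples_by_cat, lengthscale=1.0):
    """Kernel on categories built from the target-sample distributions of each category."""
    cats = list(samples_by_cat)
    K = np.zeros((len(cats), len(cats)))
    for i, ci in enumerate(cats):
        for j, cj in enumerate(cats):
            K[i, j] = np.exp(-mmd2(samples_by_cat[ci], samples_by_cat[cj]) / lengthscale)
    return cats, K

rng = np.random.default_rng(2)
samples = {"A": rng.normal(0.0, 1.0, 50),
           "B": rng.normal(0.2, 1.0, 50),
           "C": rng.normal(3.0, 0.5, 50)}
print(category_kernel(samples)[1].round(2))   # categories A and B encode as more similar than C
```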
Distributed lag non-linear models (DLNM) have gained popularity for modeling nonlinear lagged relationships between exposures and outcomes. When applied to spatially referenced data, these models must account for spatial dependence, a challenge that has yet to be thoroughly explored within the penalized DLNM framework. This gap is mainly due to the complex model structure and high computational demands, particularly when dealing with large spatio-temporal datasets. To address this, we propose a novel Bayesian DLNM-Laplacian-P-splines (DLNM-LPS) approach that incorporates spatial dependence using conditional autoregressive (CAR) priors, a method commonly applied in disease mapping. Our approach offers a flexible framework for capturing nonlinear associations while accounting for spatial dependence. It uses the Laplace approximation to approximate the conditional posterior distribution of the regression parameters, eliminating the need for Markov chain Monte Carlo (MCMC) sampling, often used in Bayesian inference, thus improving computational efficiency. The methodology is evaluated through simulation studies and applied to analyze the relationship between temperature and mortality in London.
In recent years, a variety of novel measures of dependence have been introduced that are capable of characterizing diverse types of directed dependence, i.e., diverse ways in which a number of predictor variables $\mathbf{X} = (X_1, \dots, X_p)$, $p \in \mathbb{N}$, may affect a response variable $Y$. This includes perfect dependence of $Y$ on $\mathbf{X}$ and independence between $\mathbf{X}$ and $Y$, but also less well-known concepts such as zero-explainability, stochastic comparability and complete separation. Certain such measures admit a representation in terms of the Markov product $(Y,Y')$, with $Y'$ being a conditionally independent copy of $Y$ given $\mathbf{X}$. This dimension reduction principle allows these measures to be estimated via the powerful nearest neighbor based estimation principle introduced in [4]. To achieve a deeper insight into the dimension reduction principle, this paper aims at translating the extreme variants of directed dependence, typically formulated in terms of the random vector $(\mathbf{X},Y)$, into the Markov product $(Y,Y')$.
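The dimension reduction principle can be illustrated with a small sketch: pairing each $Y_i$ with the $Y$-value of the nearest neighbor of $X_i$ gives an empirical surrogate for the Markov product $(Y,Y')$. The nearest-neighbor estimators referenced above build on such pairs; the code below only constructs them.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def markov_product_pairs(X, Y):
    """Pair each Y_i with Y_{N(i)}, where N(i) is the nearest neighbour of X_i among the other points.

    For large n, (Y_i, Y_{N(i)}) approximates a draw from the Markov product (Y, Y'),
    which is the dimension-reduction step described above.
    """
    nn = NearestNeighbors(n_neighbors=2).fit(X)
    _, idx = nn.kneighbors(X)                    # idx[:, 0] is the point itself
    return np.column_stack([Y, Y[idx[:, 1]]])

rng = np.random.default_rng(8)
X = rng.normal(size=(2000, 3))
Y = X[:, 0] ** 2 + 0.1 * rng.normal(size=2000)
pairs = markov_product_pairs(X, Y)
print(np.corrcoef(pairs.T)[0, 1])                # strong dependence of Y on X shows up in (Y, Y')
```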
In observational studies, propensity score methods are central for estimating causal effects while adjusting for confounders. Among them, the doubly robust (DR) estimator has gained considerable attention because it provides consistent estimates when either the propensity score model or the outcome model is correctly specified. Like other propensity score approaches, the DR estimator typically involves two-step estimation: first, estimating the propensity score and outcome models, and then estimating the causal effects using the estimated values. However, this sequential procedure does not naturally align with the Bayesian framework, which centers on updating prior beliefs solely through the likelihood. In this manuscript, we propose novel Bayesian DR estimation via posterior coupling, which incorporates propensity score information via moment conditions directly into the posterior distribution. This design avoids the feedback problem and enables a fully Bayesian interpretation of DR estimation without requiring two-step estimation. We detail the theoretical properties of the proposed method and demonstrate its advantages over existing Bayesian approaches through comprehensive simulation studies and real data applications.
Motivated by applications in deep learning, where the global Lipschitz continuity condition is often not satisfied, we examine the problem of sampling from distributions with super-linearly growing log-gradients. We propose a novel tamed Langevin dynamics-based algorithm, called kTULA, to solve the aforementioned sampling problem, and provide a theoretical guarantee for its performance. More precisely, we establish a non-asymptotic convergence bound in Kullback-Leibler (KL) divergence with the best-known rate of convergence equal to $2-\overline{\epsilon}$, $\overline{\epsilon}>0$, which significantly improves relevant results in existing literature. This enables us to obtain an improved non-asymptotic error bound in Wasserstein-2 distance, which can be used to further derive a non-asymptotic guarantee for kTULA to solve the associated optimization problems. To illustrate the applicability of kTULA, we apply the proposed algorithm to the problem of sampling from a high-dimensional double-well potential distribution and to an optimization problem involving a neural network. We show that our main results can be used to provide theoretical guarantees for the performance of kTULA.
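A minimal sketch of a generic tamed unadjusted Langevin step applied to a double-well potential, whose gradient grows super-linearly; the taming factor used here is a standard illustrative choice and is not claimed to be the exact kTULA update.

```python
import numpy as np

rng = np.random.default_rng(3)

def grad_U(x):
    """Gradient of the double-well potential U(x) = (|x|^2 - 1)^2 / 4 (super-linear growth)."""
    return (np.sum(x**2) - 1.0) * x

def tamed_langevin(x0, step=1e-2, n_iter=20_000):
    """Tamed unadjusted Langevin iteration: the taming factor controls exploding gradients."""
    x = np.array(x0, dtype=float)
    for _ in range(n_iter):
        g = grad_U(x)
        tamed = g / (1.0 + step * np.linalg.norm(g))   # taming keeps the drift bounded per step
        x = x - step * tamed + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
    return x

print(tamed_langevin(np.zeros(2)))   # an approximate sample from the density proportional to exp(-U)
```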
Estimating causal effects of joint interventions on multiple variables is crucial in many domains, but obtaining data from such simultaneous interventions can be challenging. Our study explores how to learn joint interventional effects using only observational data and single-variable interventions. We present an identifiability result for this problem, showing that for a class of nonlinear additive outcome mechanisms, joint effects can be inferred without access to joint interventional data. We propose a practical estimator that decomposes the causal effect into confounded and unconfounded contributions for each intervention variable. Experiments on synthetic data demonstrate that our method achieves performance comparable to models trained directly on joint interventional data, outperforming a purely observational estimator.
Regression models were evaluated to estimate stand-level growing stock volume (GSV), quadratic mean diameter (QMD), basal area (BA), and stem density (N) in the Brixen im Thale forest district of Austria. Field measurements for GSV, QMD, and BA were collected on 146 inventory plots using a handheld mobile personal laser scanning system. Predictor variables were derived from airborne laser scanning (ALS)-derived normalized digital surface and terrain models. The objective was to generate stand-level estimates and associated uncertainty for GSV, QMD, BA, and N across 824 stands. A unit-level small area estimation framework was used to generate stand-level posterior predictive distributions by aggregating predictions from finer spatial scales. Both univariate and multivariate models, with and without spatially varying intercepts, were considered. Predictive performance was assessed via spatially blocked cross-validation, focusing on bias, accuracy, and precision. Despite exploratory analysis suggesting advantages of complex multivariate spatial models, simpler univariate spatial -- and in some cases, non-spatial -- models exhibited comparable predictive performance.
We consider two division models for structured cell populations, where cells can grow, age and divide. These models have been introduced in the literature under the names of `mitosis' and `adder' models. In recent years, there has been increasing interest in Biology in understanding whether cells divide equally or not, as this can be related to important mechanisms in cellular aging or recovery. We are therefore interested in testing the null hypothesis $H_0$ that the division of a mother cell results in two daughters of equal size or age, against the alternative hypothesis $H_1$ that the division is asymmetric and ruled by a kernel that is absolutely continuous with respect to the Lebesgue measure. The sample consists of i.i.d. observations of cell sizes and ages drawn from the population, and the division is not directly observed. The hypotheses of the test are reformulated as hypotheses on the stationary size and age distributions of the models, which we assume are also the distributions of the observations. We propose a goodness-of-fit test that we study numerically on simulated data before applying it to real data.
For nonparametric inference about a function, multiscale testing procedures resolve the need for bandwidth selection and achieve asymptotically optimal detection performance against a broad range of alternatives. However, critical values strongly depend on the noise distribution, and we argue that existing methods are either statistically infeasible or asymptotically sub-optimal. To address this methodological challenge, we show how to develop a feasible multiscale test via weak convergence arguments, by replacing the additive multiscale penalty with a multiplicative weighting. This new theoretical foundation preserves the optimal detection properties of multiscale tests and extends their applicability to nonstationary nonlinear time series via a tailored bootstrap scheme. Inference for signal discovery, goodness-of-fit testing of regression functions, and multiple changepoint detection is studied in detail, and we apply the new methodology to analyze the April 2025 power blackout on the Iberian peninsula. Our methodology is enabled by a novel functional central limit theorem in H\"older spaces with critical modulus of continuity, where Donsker's theorem fails to hold due to lack of tightness. Probabilistically, we discover a novel form of thresholded weak convergence that holds only in the upper support of the distribution.
In many imaging applications it is important to assess how well the edges of the original object, $f$, are resolved in an image, $f^\text{rec}$, reconstructed from the measured data, $g$. In this paper we consider the case of image reconstruction in 2D X-ray Computed Tomography (CT). Let $f$ be a function describing the object being scanned, and $g=Rf + \eta$ be the Radon transform data in $\mathbb{R}^2$ corrupted by noise, $\eta$, and sampled with step size $\sim\epsilon$. Conventional microlocal analysis provides conditions for edge detectability based on the scanner geometry in the case of continuous, noiseless data (when $\eta = 0$), but does not account for noise and finite sampling step size. We develop a novel technique called \emph{Statistical Microlocal Analysis} (SMA), which uses a statistical hypothesis testing framework to determine if an image edge (singularity) of $f$ is detectable from $f^\text{rec}$, and we quantify edge detectability using the statistical power of the test. Our approach is based on the theory we developed in \cite{AKW2024_1}, which provides a characterization of $f^\text{rec}$ in local $O(\epsilon)$-size neighborhoods when $\eta \neq 0$. We derive a statistical test for the presence and direction of an edge microlocally given the magnitude of $\eta$ and data sampling step size. Using the properties of the null distribution of the test, we quantify the uncertainty of the edge magnitude and direction. We validate our theory using simulations, which show strong agreement between our predictions and experimental observations. Our work is not only of practical value, but of theoretical value as well. SMA is a natural extension of classical microlocal analysis theory which accounts for practical measurement imperfections, such as noise and finite step size, at the highest possible resolution compatible with the data.
Factor models are essential tools for analyzing high-dimensional data, particularly in economics and finance. However, standard methods for determining the number of factors often overestimate the true number when data exhibit heavy-tailed randomness, misinterpreting noise-induced outliers as genuine factors. This paper addresses this challenge within the framework of Elliptical Factor Models (EFM), which accommodate both heavy tails and potential non-linear dependencies common in real-world data. We demonstrate theoretically and empirically that heavy-tailed noise generates spurious eigenvalues that mimic true factor signals. To distinguish these, we propose a novel methodology based on a fluctuation magnification algorithm. We show that under magnifying perturbations, the eigenvalues associated with real factors exhibit significantly less fluctuation (stabilizing asymptotically) compared to spurious eigenvalues arising from heavy-tailed effects. This differential behavior allows the true factors to be distinguished from the spurious ones. We develop a formal testing procedure based on this principle and apply it to the problem of accurately selecting the number of common factors in heavy-tailed EFMs. Simulation studies and real data analysis confirm the effectiveness of our approach compared to existing methods, particularly in scenarios with pronounced heavy-tailedness.
Inferring cause-effect relationships from observational data has gained significant attention in recent years, but most methods are limited to scalar random variables. In many important domains, including neuroscience, psychology, social science, and industrial manufacturing, the causal units of interest are groups of variables rather than individual scalar measurements. Motivated by these applications, we extend nonlinear additive noise models to handle random vectors, establishing a two-step approach for causal graph learning: First, infer the causal order among random vectors. Second, perform model selection to identify the best graph consistent with this order. We introduce effective and novel solutions for both steps in the vector case, demonstrating strong performance in simulations. Finally, we apply our method to real-world assembly line data with partial knowledge of causal ordering among variable groups.
This paper investigates causal effect identification in latent variable Linear Non-Gaussian Acyclic Models (lvLiNGAM) using higher-order cumulants, addressing two prominent setups that are challenging in the presence of latent confounding: (1) a single proxy variable that may causally influence the treatment and (2) underspecified instrumental variable cases where fewer instruments exist than treatments. We prove that causal effects are identifiable with a single proxy or instrument and provide corresponding estimation methods. Experimental results demonstrate the accuracy and robustness of our approaches compared to existing methods, advancing the theoretical and practical understanding of causal inference in linear systems with latent confounders.
A configuration of the NCAR WRF-Hydro model was sought using well-established data models to guide the initial hydrologic model setup, as well as seasonal streamflow post-processing by neural networks. Discharge was simulated on an eastern Canadian river network at two-km resolution. The river network was taken from a digital elevation model that was made to conform to observed catchment boundaries. Perturbations of a subset of model parameters were examined with reference to streamflow from 25 gauged catchments during the 2019 warm season. A data model defines the similarity of modelled streamflow to observations, and improvements were found in about half the individual catchments. With reference to 183 gauged catchments (1990-2022), further improvements were obtained at monthly and annual scales by neural network post-processing that targets all catchments at once as well as individual catchments. This seasonal calibration was applied to uncoupled WRF-Hydro simulations for the 1990-2100 warming period. Historical and future forcings were provided, respectively, by a European Centre for Medium-Range Weather Forecasts reanalysis (ERA5) and by a WRF atmospheric model downscaling of a set of Coupled Model Intercomparison Project (CMIP) models, where the latter were also seasonally calibrated. Eastern Canadian freshwater discharge peaks at about 10$^5$ m$^3$ s$^{-1}$, and as previous studies have shown, there is a trend toward increasing low flows during the cold season and an earlier peak discharge in spring. By design, neural networks yield more precise estimates by compensating for different hydrologic process representations.
Randomized controlled experiments (``A/B testing'') are fundamental for assessing interventions in dynamic technology-driven environments, such as recommendation systems, online marketplaces, and digital health interventions. In these systems, interventions typically impact not only the current state of the system, but also future states; therefore, accurate estimation of the global average treatment effect (or GATE) from experiments requires accounting for the dynamic temporal behavior of the system. To address this, recent literature has analyzed a range of estimators applied to Bernoulli randomized experiments in stationary environments, ranging from the standard difference-in-means (DM) estimator to methods building on reinforcement learning techniques, such as off-policy evaluation and the recently proposed difference-in-Q's (DQ) estimator. However, all these estimators exhibit high bias and variance when the environment is nonstationary. This paper addresses the challenge of estimation under nonstationarity. We show that a simple extension of the DM estimator using differences in truncated outcome trajectories yields favorable bias and variance in nonstationary Markovian settings. Our theoretical analysis establishes this result by first showing that the truncated estimator is in fact estimating an appropriate policy gradient that can be expressed as a difference in Q-values; thus we refer to our estimator as the truncated DQ estimator (by analogy to the DQ estimator). We then show that the corresponding policy gradient is a first-order approximation to the GATE. Combining these insights yields our bias and variance bounds. We validate our results through synthetic and realistic simulations, including hospital and ride-sharing settings, and show that a well-calibrated truncated DQ estimator achieves low bias and variance even in nonstationary environments.
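A minimal sketch of the truncated difference-in-means idea described above, assuming per-unit outcome trajectories collected after a Bernoulli assignment; the truncation horizon and the simulated data are purely illustrative, not the authors' experimental setup.

```python
import numpy as np

def truncated_dq_estimate(outcomes, treated, k):
    """Difference-in-means on k-step truncated outcome trajectories.

    outcomes: (n_units, T) per-step outcomes collected after assignment;
    treated:  boolean (n_units,) Bernoulli treatment indicators;
    k:        truncation horizon.
    """
    truncated = outcomes[:, :k].sum(axis=1)
    return truncated[treated].mean() - truncated[~treated].mean()

rng = np.random.default_rng(4)
w = rng.binomial(1, 0.5, size=500).astype(bool)        # Bernoulli randomization
y = rng.normal(size=(500, 100)) + 0.1 * w[:, None]     # treatment shifts each step's outcome by 0.1
print(truncated_dq_estimate(y, w, k=20))               # close to the per-window effect of 2.0
```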
When an experimenter has the option of running an adaptive trial, is it admissible to ignore this option and run a non-adaptive trial instead? We provide a negative answer to this question in the best-arm identification problem, where the experimenter aims to allocate measurement efforts judiciously to confidently deploy the most effective treatment arm. We find that, whenever there are at least three treatment arms, there exist simple adaptive designs that universally and strictly dominate non-adaptive completely randomized trials. This dominance is characterized by a notion called efficiency exponent, which quantifies a design's statistical efficiency when the experimental sample is large. Our analysis focuses on the class of batched arm elimination designs, which progressively eliminate underperforming arms at pre-specified batch intervals. We characterize simple sufficient conditions under which these designs universally and strictly dominate completely randomized trials. These results resolve the second open problem posed in Qin [2022].
Money laundering poses a significant challenge as it is estimated to account for 2%-5% of the global GDP. This has compelled regulators to impose stringent controls on financial institutions. One prominent laundering method for evading these controls, called smurfing, involves breaking up large transactions into smaller amounts. Given the complexity of smurfing schemes, which involve multiple transactions distributed among diverse parties, network analytics has become an important anti-money laundering tool. However, recent advances have focused predominantly on black-box network embedding methods, which has hindered their adoption in businesses. In this paper, we introduce GARG-AML, a novel graph-based method that quantifies smurfing risk through a single interpretable metric derived from the structure of the second-order transaction network of each individual node in the network. Unlike traditional methods, GARG-AML strikes an effective balance among computational efficiency, detection power and transparency, which enables its integration into existing AML workflows. To enhance its capabilities, we combine the GARG-AML score calculation with different tree-based methods and also incorporate the scores of the node's neighbours. An experimental evaluation on large-scale synthetic and open-source networks demonstrates that GARG-AML outperforms the current state-of-the-art smurfing detection methods. By leveraging only the adjacency matrix of the second-order neighbourhood and basic network features, this work highlights the potential of fundamental network properties for advancing fraud detection.
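A minimal sketch of extracting a node's second-order transaction neighbourhood and one simple structural feature from it; the actual GARG-AML score is a different, more carefully designed quantity, so this is only an illustration of the kind of input it works on.

```python
import numpy as np

def second_order_neighbourhood(A, node):
    """Adjacency matrix restricted to a node's second-order neighbourhood."""
    reach = (np.eye(len(A)) + A + A @ A)[node] > 0   # the node, its neighbours, and their neighbours
    idx = np.flatnonzero(reach)
    return A[np.ix_(idx, idx)], idx

# Toy transaction graph: node 0 fans out to nodes 1-3, which all pay into node 4 (a smurfing-like motif).
A = np.zeros((6, 6), dtype=int)
A[0, [1, 2, 3]] = 1
A[[1, 2, 3], 4] = 1
A[4, 5] = 1

sub, idx = second_order_neighbourhood(A, 0)
density = sub.sum() / (len(idx) * (len(idx) - 1))    # one simple structural feature of the neighbourhood
print(idx, round(density, 2))
```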
A core motivation of science is to evaluate which scientific model best explains observed data. Bayesian model comparison provides a principled statistical approach to comparing scientific models and has found widespread application within cosmology and astrophysics. Calculating the Bayesian evidence is computationally challenging, especially as we continue to explore increasingly complex models. The Savage-Dickey density ratio (SDDR) provides a method to calculate the Bayes factor (evidence ratio) between two nested models using only posterior samples from the super model. The SDDR requires the calculation of a normalised marginal distribution over the extra parameters of the super model, which has typically been performed using classical density estimators, such as histograms. Classical density estimators, however, can struggle to scale to high-dimensional settings. We introduce a neural SDDR approach using normalizing flows that can scale to settings where the super model contains a large number of extra parameters. We demonstrate the effectiveness of this neural SDDR methodology applied to both toy and realistic cosmological examples. For a field-level inference setting, we show that Bayes factors computed for a Bayesian hierarchical model (BHM) and simulation-based inference (SBI) approach are consistent, providing further validation that SBI extracts as much cosmological information from the field as the BHM approach. The SDDR estimator with normalizing flows is implemented in the open-source harmonic Python package.
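A minimal sketch of the Savage-Dickey density ratio with a classical density estimator (a Gaussian KDE standing in for the histogram approach mentioned above); the neural version replaces this density estimate with a normalizing flow fitted to the posterior samples of the extra parameters.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

# Toy nested comparison: the super model has one extra parameter theta; the nested model fixes theta = 0.
rng = np.random.default_rng(5)
posterior_theta = rng.normal(0.3, 0.2, 10_000)   # stand-in for posterior samples of theta under the super model
prior = norm(0.0, 1.0)                           # prior on the extra parameter

# Savage-Dickey: Bayes factor (nested / super) = posterior density at theta = 0 / prior density at theta = 0.
post_density_at_0 = gaussian_kde(posterior_theta)(0.0)[0]
bayes_factor = post_density_at_0 / prior.pdf(0.0)
print(bayes_factor)   # a normalizing flow would replace gaussian_kde in high-dimensional settings
```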
We study the well-known grokking phenomenon in neural networks (NNs) using a 3-layer MLP trained on a 1k-sample subset of MNIST, with and without weight decay, and discover a novel third phase -- \emph{anti-grokking} -- that occurs very late in training and resembles, but is distinct from, the familiar \emph{pre-grokking} phase: test accuracy collapses while training accuracy stays perfect. This late-stage collapse is also distinct from the grokking phase and is not detected by other proposed grokking progress measures. Leveraging Heavy-Tailed Self-Regularization (HTSR) through the open-source WeightWatcher tool, we show that the HTSR layer quality metric $\alpha$ alone delineates all three phases, whereas the best competing metrics detect only the first two. The anti-grokking phase is revealed only by training for $10^7$ iterations and is invariably heralded by $\alpha < 2$ and the appearance of \emph{correlation traps} -- outlier singular values in the randomized layer weight matrices that make the layer weight matrix atypical and signal overfitting of the training set. Such traps are verified by visual inspection of the layer-wise empirical spectral densities and by Kolmogorov--Smirnov tests on the randomized spectra. Comparative metrics, including activation sparsity, absolute weight entropy, circuit complexity, and $l^2$ weight norms, track pre-grokking and grokking but fail to distinguish grokking from anti-grokking. This discovery provides a way to measure overfitting and generalization collapse without direct access to the test data. These results strengthen the claim that HTSR provides a universal layer-convergence target at $\alpha \approx 2$ and underscore the value of the HTSR $\alpha$ metric as a measure of generalization.
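As a rough illustration of the kind of spectral diagnostic involved, the sketch below computes a Hill-type tail exponent from the eigenvalue spectrum of a weight matrix; WeightWatcher's HTSR $\alpha$ fit is more careful (for instance, it searches for the tail cutoff), so this is only a stand-in.

```python
import numpy as np

def layer_alpha(W, k=50):
    """Hill-type estimate of the power-law tail exponent of a layer's eigenvalue spectrum."""
    eigs = np.linalg.eigvalsh(W.T @ W / W.shape[0])   # empirical spectral density of the layer correlation matrix
    tail = np.sort(eigs)[-k:]                         # k largest eigenvalues
    return 1.0 + k / np.sum(np.log(tail / tail[0]))

rng = np.random.default_rng(7)
W = rng.standard_normal((784, 300)) / np.sqrt(784)    # a random (untrained) layer, for illustration only
print(layer_alpha(W))                                 # values below 2 would flag the regime described above
```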
We provide evidence that orthogonalizing gradients during training improves model calibration without sacrificing accuracy. On CIFAR-10 with 10% labeled data, $\perp$Grad matches SGD in accuracy but yields consistently improved calibration metrics such as lower test loss, reduced softmax overconfidence, and higher predictive entropy. These benefits persist under input corruption (CIFAR-10C) and extended training, where $\perp$Grad models degrade more gracefully than SGD-trained counterparts. $\perp$Grad is optimizer-agnostic, incurs minimal overhead, and works well with post-hoc calibration techniques like temperature scaling. Theoretically, we prove convergence of a simplified version of $\perp$Grad under mild assumptions and characterize its stationary points in positive homogeneous networks: $\perp$Grad converges to solutions where further loss reduction requires confidence scaling rather than decision boundary improvement.
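A minimal sketch of per-layer gradient orthogonalization in PyTorch, assuming the projection removes the component of the gradient parallel to the weights; the paper's $\perp$Grad may differ in which tensors are projected and in how the projection is combined with the optimizer.

```python
import torch

def orthogonalize_gradients(model):
    """Replace each weight matrix's gradient by its component orthogonal to the weights."""
    for p in model.parameters():
        if p.grad is None or p.dim() < 2:              # leave biases and unused parameters alone
            continue
        w = p.data.flatten()
        g = p.grad.flatten().clone()
        g -= (g @ w) / (w @ w + 1e-12) * w             # remove the radial (norm-scaling) component
        p.grad.copy_(g.view_as(p.grad))

model = torch.nn.Linear(10, 2)
loss = model(torch.randn(4, 10)).pow(2).mean()
loss.backward()
orthogonalize_gradients(model)                          # then call optimizer.step() as usual
```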
Computational inverse problems for biomedical simulators suffer from limited data and relatively high parameter dimensionality. This often requires sensitivity analysis, where parameters of the model are ranked based on their influence on the specific quantities of interest. This is especially important for simulators used to build medical digital twins, as the amount of data is typically limited. For expensive models, such as blood flow models, emulation is employed to expedite the simulation time. Parameter ranking and fixing using sensitivity analysis are often heuristic, though, and vary with the specific application or simulator used. The present study provides an innovative solution to this problem by leveraging polynomial chaos expansions (PCEs) for both multioutput global sensitivity analysis and formal parameter identifiability. For the former, we use dimension reduction to efficiently quantify time-series sensitivity of a one-dimensional pulmonary hemodynamics model. We consider both Windkessel and structured tree boundary conditions. We then use PCEs to construct profile-likelihood confidence intervals to formally assess parameter identifiability, and show how changes in experimental design improve identifiability. Our work presents a novel approach to determining parameter identifiability and leverages a common emulation strategy for enabling profile-likelihood analysis in problems governed by partial differential equations.
Unsupervised machine learning is widely used to mine large, unlabeled datasets to make data-driven discoveries in critical domains such as climate science, biomedicine, astronomy, chemistry, and more. However, despite its widespread utilization, there is a lack of standardization in unsupervised learning workflows for making reliable and reproducible scientific discoveries. In this paper, we present a structured workflow for using unsupervised learning techniques in science. We highlight and discuss best practices starting with formulating validatable scientific questions, conducting robust data preparation and exploration, using a range of modeling techniques, performing rigorous validation by evaluating the stability and generalizability of unsupervised learning conclusions, and promoting effective communication and documentation of results to ensure reproducible scientific discoveries. To illustrate our proposed workflow, we present a case study from astronomy, seeking to refine globular clusters of Milky Way stars based upon their chemical composition. Our case study highlights the importance of validation and illustrates how the benefits of a carefully-designed workflow for unsupervised learning can advance scientific discovery.
Existing studies of innovation emphasize the power of social structures to shape innovation capacity. Emerging machine learning approaches, however, enable us to model innovators' personal perspectives and interpersonal innovation opportunities as a function of their prior trajectories of experience. We theorize and then quantify subjective perspectives and innovation opportunities based on innovator positions within the geometric space of concepts inscribed by dynamic language representations. Using data on millions of scientists, inventors, writers, entrepreneurs, and Wikipedia contributors across the creative domains of science, technology, film, entrepreneurship, and Wikipedia, here we show that measured subjective perspectives anticipate what ideas individuals and groups creatively attend to and successfully combine in future. When perspective and background diversity are decomposed as the angular difference between collaborators' perspectives on their creation and between their experiences, the former consistently anticipates creative achievement while the latter portends its opposite, across all cases and time periods examined. We analyze a natural experiment and simulate creative collaborations between AI (large language model) agents designed with various perspective and background diversity, which are consistent with our observational findings. We explore mechanisms underlying these findings and identify how successful collaborators leverage common language to weave together diverse experience obtained through trajectories of prior work that converge to provoke one another and innovate. We explore the importance of these findings for team assembly and research policy.
Given the continuous increase in dataset sizes and the complexity of forecasting models, the trade-off between forecast accuracy and computational cost is emerging as an extremely relevant topic, especially in the context of ensemble learning for time series forecasting. To assess this trade-off, we evaluated ten base models and eight ensemble configurations across two large-scale retail datasets (M5 and VN1), considering both point and probabilistic accuracy under varying retraining frequencies. We showed that ensembles consistently improve forecasting performance, particularly in probabilistic settings. However, these gains come at a substantial computational cost, especially for larger, accuracy-driven ensembles. We found that reducing retraining frequency significantly lowers costs, with minimal impact on accuracy, particularly for point forecasts. Moreover, efficiency-driven ensembles offer a strong balance, achieving competitive accuracy with considerably lower costs compared to accuracy-optimized combinations. Most importantly, small ensembles of two or three models are often sufficient to achieve near-optimal results. These findings provide practical guidelines for deploying scalable and cost-efficient forecasting systems, supporting the broader goals of sustainable AI in forecasting. Overall, this work shows that careful ensemble design and retraining strategy selection can yield accurate, robust, and cost-effective forecasts suitable for real-world applications.
Reinforcement learning (RL) has demonstrated remarkable success in enhancing model capabilities, including instruction-following, preference learning, and reasoning. Yet despite its empirical successes, the mechanisms by which RL improves reasoning abilities remain poorly understood. We present a systematic study of Reinforcement Learning with Verifiable Rewards (RLVR), showing that its primary benefit comes from optimizing the selection of existing reasoning patterns. Through extensive experiments, we demonstrate that RLVR-trained models preferentially adopt high-success-rate reasoning patterns while mostly maintaining stable performance on individual patterns. We further develop theoretical analyses on the convergence and training dynamics of RLVR based on a simplified question-reason-answer model. We study the gradient flow and show that RLVR can indeed find the solution that selects the reason pattern with the highest success rate. Besides, our theoretical results reveal two distinct regimes regarding the convergence of RLVR training: (1) rapid convergence for models with relatively strong initial reasoning capabilities versus (2) slower optimization dynamics for weaker models. Furthermore, we show that the slower optimization for weaker models can be mitigated by applying the supervised fine-tuning (SFT) before RLVR, when using a feasibly high-quality SFT dataset. We validate the theoretical findings through extensive experiments. This work advances our theoretical understanding of RL's role in LLM fine-tuning and offers insights for further enhancing reasoning capabilities.
Rank-based statistical metrics, such as the invariant statistical loss (ISL), have recently emerged as robust and practically effective tools for training implicit generative models. In this work, we introduce dual-ISL, a novel likelihood-free objective for training implicit generative models that interchanges the roles of the target and model distributions in the ISL framework, yielding a convex optimization problem in the space of model densities. We prove that the resulting rank-based discrepancy $d_K$ is (i) continuous under weak convergence and with respect to the $L^1$ norm, and (ii) convex in its first argument, properties not shared by classical divergences such as the KL or Wasserstein distances. Building on this, we develop a theoretical framework that interprets $d_K$ as an $L^2$-projection of the density ratio $q = p/\tilde p$ onto a Bernstein polynomial basis, from which we derive exact bounds on the truncation error, precise convergence rates, and a closed-form expression for the truncated density approximation. We further extend our analysis to the multivariate setting via random one-dimensional projections, defining a sliced dual-ISL divergence that retains both convexity and continuity. We empirically show that these theoretical advantages translate into practical ones. Specifically, across several benchmarks dual-ISL converges more rapidly, delivers markedly smoother and more stable training, and more effectively prevents mode collapse than classical ISL and other leading implicit generative methods, while also providing an explicit density approximation.
We study stochastic linear bandits with heavy-tailed rewards, where the rewards have a finite $(1+\epsilon)$-absolute central moment bounded by $\upsilon$ for some $\epsilon \in (0,1]$. We improve both upper and lower bounds on the minimax regret compared to prior work. When $\upsilon = \mathcal{O}(1)$, the best previously known regret upper bound is $\tilde{\mathcal{O}}(d T^{\frac{1}{1+\epsilon}})$. While a lower bound with the same scaling has been given, it relies on a construction using $\upsilon = \mathcal{O}(d)$, and adapting the construction to the bounded-moment regime with $\upsilon = \mathcal{O}(1)$ yields only an $\Omega(d^{\frac{\epsilon}{1+\epsilon}} T^{\frac{1}{1+\epsilon}})$ lower bound. This matches the known rate for multi-armed bandits and is generally loose for linear bandits, in particular being $\sqrt{d}$ below the optimal rate in the finite-variance case ($\epsilon = 1$). We propose a new elimination-based algorithm guided by experimental design, which achieves regret $\tilde{\mathcal{O}}(d^{\frac{1+3\epsilon}{2(1+\epsilon)}} T^{\frac{1}{1+\epsilon}})$, thus improving the dependence on $d$ for all $\epsilon \in (0,1)$ and recovering a known optimal result for $\epsilon = 1$. We also establish a lower bound of $\Omega(d^{\frac{2\epsilon}{1+\epsilon}} T^{\frac{1}{1+\epsilon}})$, which strictly improves upon the multi-armed bandit rate and highlights the hardness of heavy-tailed linear bandit problems. For finite action sets, we derive similarly improved upper and lower bounds for regret. Finally, we provide action set dependent regret upper bounds showing that for some geometries, such as $l_p$-norm balls for $p \le 1 + \epsilon$, we can further reduce the dependence on $d$, and we can handle infinite-dimensional settings via the kernel trick, in particular establishing new regret bounds for the Mat\'ern kernel that are the first to be sublinear for all $\epsilon \in (0, 1]$.
Distributionally robust optimization offers a compelling framework for model fitting in machine learning, as it systematically accounts for data uncertainty. Focusing on Wasserstein distributionally robust optimization, we investigate the regularized problem where entropic smoothing yields a sampling-based approximation of the original objective. We establish the convergence of the approximate gradient over a compact set, leading to the concentration of the regularized problem critical points onto the original problem critical set as regularization diminishes and the number of approximation samples increases. Finally, we deduce convergence guarantees for a projected stochastic gradient method. Our analysis covers a general machine learning situation with an unbounded sample space and mixed continuous-discrete data.
In this paper, we propose $\textbf{C}$oncept $\textbf{REA}$soning $\textbf{M}$odels (CREAM), a novel family of Concept Bottleneck Models (CBMs) that: (i) explicitly encodes concept-concept (${\texttt{C-C}}$) and concept-task (${\texttt{C$\rightarrow$Y}}$) relationships to enforce a desired model reasoning; and (ii) uses a regularized side-channel to achieve competitive task performance while keeping concept importance high. Specifically, CREAM architecturally embeds (bi)directed concept-concept and concept-to-task relationships specified by a human expert, while severing undesired information flows (e.g., to handle mutually exclusive concepts). Moreover, CREAM integrates a black-box side-channel that is regularized to encourage task predictions to be grounded in the relevant concepts, thereby utilizing the side-channel only when necessary to enhance performance. Our experiments show that: (i) CREAM mainly relies on concepts while achieving task performance on par with black-box models; and (ii) the embedded ${\texttt{C-C}}$ and ${\texttt{C$\rightarrow$Y}}$ relationships ease model interventions and mitigate concept leakage.
Neural networks (NNs) have achieved tremendous success over the past decade, yet they are still extremely difficult to interpret. In contrast, linear models are less expressive but offer inherent interpretability. Linear coefficients are interpretable as the marginal effect of a feature on the prediction, assuming all other features are kept fixed. To combine the benefits of both approaches, we introduce NIMO (Nonlinear Interpretable MOdel). The key idea is to define a model where the NN is designed to learn nonlinear corrections to the linear model predictions, while also maintaining the original interpretability of the linear coefficients. Relevantly, we develop an optimization algorithm based on profile likelihood that elegantly allows for optimizing over the NN parameters while updating the linear coefficients analytically. By relying on adaptive ridge regression we can easily incorporate sparsity constraints as well. We show empirically that we can recover the underlying linear coefficients while significantly improving the predictive accuracy. Compared to other hybrid interpretable approaches, our model is the only one that actually maintains the same interpretability of linear coefficients as in linear models. We also achieve higher performance on various regression and classification settings.
State space models are emerging as a dominant model class for sequence problems with many relying on the HiPPO framework to initialize their dynamics. However, HiPPO fundamentally assumes data to be noise-free; an assumption often violated in practice. We extend the HiPPO theory with measurement noise and derive an uncertainty-aware initialization for state space model dynamics. In our analysis, we interpret HiPPO as a linear stochastic control problem where the data enters as a noise-free control signal. We then reformulate the problem so that the data become noisy outputs of a latent system and arrive at an alternative dynamics initialization that infers the posterior of this latent system from the data without increasing runtime. Our experiments show that our initialization improves the resistance of state-space models to noise both at training and inference time. Find our implementation at https://cs.cit.tum.de/daml/unhippo.
Semi-implicit variational inference (SIVI) is a powerful framework for approximating complex posterior distributions, but training with the Kullback-Leibler (KL) divergence can be challenging due to high variance and bias in high-dimensional settings. While current state-of-the-art semi-implicit variational inference methods, particularly Kernel Semi-Implicit Variational Inference (KSIVI), have been shown to work in high dimensions, training remains moderately expensive. In this work, we propose a kernelized KL divergence estimator that stabilizes training through nonparametric smoothing. To further reduce the bias, we introduce an importance sampling correction. We provide a theoretical connection to the amortized version of the Stein variational gradient descent, which estimates the score gradient via Stein's identity, showing that both methods minimize the same objective, but our semi-implicit approach achieves lower gradient variance. In addition, our method's bias in function space is benign, leading to more stable and efficient optimization. Empirical results demonstrate that our method outperforms or matches state-of-the-art SIVI methods in both performance and training efficiency.
Synthetic data inherits the differential privacy guarantees of the model used to generate it. Additionally, synthetic data may benefit from privacy amplification when the generative model is kept hidden. While empirical studies suggest this phenomenon, a rigorous theoretical understanding is still lacking. In this paper, we investigate this question through the well-understood framework of linear regression. First, we establish negative results showing that if an adversary controls the seed of the generative model, a single synthetic data point can leak as much information as releasing the model itself. Conversely, we show that when synthetic data is generated from random inputs, releasing a limited number of synthetic data points amplifies privacy beyond the model's inherent guarantees. We believe our findings in linear regression can serve as a foundation for deriving more general bounds in the future.
Gravitational-wave astronomy has entered a regime where it can extract information about the population properties of the observed binary black holes. The steep increase in the number of detections will offer deeper insights, but it will also significantly raise the computational cost of testing multiple models. To address this challenge, we propose a procedure that first performs a non-parametric (data-driven) reconstruction of the underlying distribution, and then remaps these results onto a posterior for the parameters of a parametric (informed) model. The computational cost is primarily absorbed by the initial non-parametric step, while the remapping procedure is both significantly easier to perform and computationally cheaper. In addition to yielding the posterior distribution of the model parameters, this method also provides a measure of the model's goodness-of-fit, opening the way for new quantitative comparisons across models.
Large-scale neural language models (LMs) exhibit remarkable performance in in-context learning: the ability to learn and reason about the input context on the fly without parameter updates. This work studies in-context counterfactual reasoning in language models, that is, predicting the consequences of changes under hypothetical scenarios. We focus on a well-defined synthetic setup: a linear regression task that requires noise abduction, where accurate prediction is based on inferring and copying the contextual noise from factual observations. We show that language models are capable of counterfactual reasoning in this controlled setup and show that counterfactual reasoning for a broad class of functions can be reduced to a transformation of in-context observations; we find that self-attention, model depth, and data diversity in pre-training drive performance in Transformers. More interestingly, our findings extend beyond regression tasks and show that Transformers can perform noise abduction on sequential data, providing preliminary evidence on the potential for counterfactual story generation. Our code is available at https://github.com/moXmiller/counterfactual-reasoning.git .
Modern large language models are capable of in-context learning, the ability to perform new tasks at inference time using only a handful of input-output examples in the prompt, without any fine-tuning or parameter updates. We develop a universal approximation theory to better understand how transformers enable in-context learning. For any class of functions (each representing a distinct task), we demonstrate how to construct a transformer that, without any further weight updates, can perform reliable prediction given only a few in-context examples. In contrast to much of the recent literature that frames transformers as algorithm approximators -- i.e., constructing transformers to emulate the iterations of optimization algorithms as a means to approximate solutions of learning problems -- our work adopts a fundamentally different approach rooted in universal function approximation. This alternative approach offers approximation guarantees that are not constrained by the effectiveness of the optimization algorithms being approximated, thereby extending far beyond convex problems and linear function classes. Our construction sheds light on how transformers can simultaneously learn general-purpose representations and adapt dynamically to in-context examples.
Recent research has focused on designing neural samplers that amortize the process of sampling from unnormalized densities. However, despite significant advancements, they still fall short of the state-of-the-art MCMC approach, Parallel Tempering (PT), when it comes to the efficiency of target evaluations. On the other hand, unlike a well-trained neural sampler, PT yields only dependent samples and needs to be rerun -- at considerable computational cost -- whenever new samples are required. To address these weaknesses, we propose the Progressive Tempering Sampler with Diffusion (PTSD), which trains diffusion models sequentially across temperatures, leveraging the advantages of PT to improve the training of neural samplers. We also introduce a novel method to combine high-temperature diffusion models to generate approximate lower-temperature samples, which are minimally refined using MCMC and used to train the next diffusion model. PTSD enables efficient reuse of sample information across temperature levels while generating well-mixed, uncorrelated samples. Our method significantly improves target evaluation efficiency, outperforming diffusion-based neural samplers.
Test-time scaling paradigms have significantly advanced the capabilities of large language models (LLMs) on complex tasks. Despite their empirical success, theoretical understanding of the sample efficiency of various test-time strategies -- such as self-consistency, best-of-$n$, and self-correction -- remains limited. In this work, we first establish a separation result between two repeated sampling strategies: self-consistency requires $\Theta(1/\Delta^2)$ samples to produce the correct answer, while best-of-$n$ only needs $\Theta(1/\Delta)$, where $\Delta < 1$ denotes the probability gap between the correct and second most likely answers. Next, we present an expressiveness result for the self-correction approach with verifier feedback: it enables Transformers to simulate online learning over a pool of experts at test time. Therefore, a single Transformer architecture can provably solve multiple tasks without prior knowledge of the specific task associated with a user query, extending the representation theory of Transformers from single-task to multi-task settings. Finally, we empirically validate our theoretical results, demonstrating the practical effectiveness of self-correction methods.
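A small simulation sketch of the separation between the two repeated-sampling strategies: majority voting (self-consistency) versus an oracle verifier that succeeds whenever the correct answer appears among the $n$ samples (best-of-$n$). The answer probabilities are arbitrary illustrative values, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
probs = np.array([0.35, 0.30, 0.20, 0.15])   # correct answer first; gap Delta = 0.05 to the runner-up

def success_rates(n, trials=20_000):
    draws = rng.choice(len(probs), size=(trials, n), p=probs)
    counts = np.stack([(draws == k).sum(axis=1) for k in range(len(probs))], axis=1)
    self_consistency = (counts.argmax(axis=1) == 0).mean()  # majority vote returns the correct answer
    best_of_n = (counts[:, 0] > 0).mean()                    # an oracle verifier finds it if sampled at all
    return self_consistency, best_of_n

for n in (1, 16, 64, 256):
    print(n, success_rates(n))   # best-of-n saturates with far fewer samples than self-consistency
```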