New articles on Statistics


[1] 2410.14754

On the Sparsity of the Strong Lottery Ticket Hypothesis

Considerable research efforts have recently been made to show that a random neural network $N$ contains subnetworks capable of accurately approximating any given neural network that is sufficiently smaller than $N$, without any training. This line of research, known as the Strong Lottery Ticket Hypothesis (SLTH), was originally motivated by the weaker Lottery Ticket Hypothesis, which states that a sufficiently large random neural network $N$ contains \emph{sparse} subnetworks that can be trained efficiently to achieve performance comparable to that of training the entire network $N$. Despite its original motivation, results on the SLTH have so far not provided any guarantee on the size of subnetworks. This limitation is due to the nature of the main technical tool leveraged by these results, the Random Subset Sum (RSS) Problem. Informally, the RSS Problem asks how large a random i.i.d. sample $\Omega$ should be so that we are able to approximate any number in $[-1,1]$, up to an error of $\epsilon$, as the sum of a suitable subset of $\Omega$. We provide the first proof of the SLTH in classical settings, such as dense and equivariant networks, with guarantees on the sparsity of the subnetworks. Central to our results is the proof of an essentially tight bound on the Random Fixed-Size Subset Sum Problem (RFSS), a variant of the RSS Problem in which we only ask for subsets of a given size, which is of independent interest.
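
As a rough numerical companion to the RSS Problem described above (an illustration only, not the paper's RFSS analysis), the following Python sketch draws an i.i.d. uniform sample on $[-1,1]$ and checks by brute-force enumeration how well random targets in $[-1,1]$ are matched by subset sums; the sample size, the uniform sampling distribution, and the number of targets are arbitrary choices.

    # Brute-force illustration of the Random Subset Sum question: how well can
    # targets in [-1, 1] be matched by subset sums of an i.i.d. sample?
    # Sample size, sampling distribution and number of targets are arbitrary.
    import itertools
    import random

    random.seed(0)
    n = 14                                   # small enough to enumerate all 2^n subsets
    omega = [random.uniform(-1, 1) for _ in range(n)]

    # Every achievable subset sum, including the empty subset (sum 0).
    subset_sums = [sum(c) for r in range(n + 1)
                   for c in itertools.combinations(omega, r)]

    targets = [random.uniform(-1, 1) for _ in range(200)]
    worst_error = max(min(abs(s - z) for s in subset_sums) for z in targets)
    print(f"n = {n}, worst approximation error over targets: {worst_error:.4f}")

Increasing n (at exponential enumeration cost in this naive check) makes the worst-case error shrink quickly, which is the qualitative behaviour that RSS- and RFSS-type bounds quantify.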


[2] 2410.14759

Universal approximation results for neural networks with non-polynomial activation function over non-compact domains

In this paper, we generalize the universal approximation property of single-hidden-layer feed-forward neural networks beyond the classical formulation over compact domains. More precisely, by assuming that the activation function is non-polynomial, we derive universal approximation results for neural networks within function spaces over non-compact subsets of a Euclidean space, e.g., weighted spaces, $L^p$-spaces, and (weighted) Sobolev spaces over unbounded domains, where the latter includes the approximation of the (weak) derivatives. Furthermore, we provide some dimension-independent rates for approximating a function with sufficiently regular and integrable Fourier transform by neural networks with non-polynomial activation function.
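
As a hedged numerical aside (not the paper's construction or its approximation rates), a single-hidden-layer network with a non-polynomial activation can be fit by least squares on a random hidden layer; the Python sketch below does this for a toy target on the real line, with a Gaussian reference measure standing in for a weighted space. The width, target function, and sampling measure are all illustrative assumptions.

    # Single hidden layer with a non-polynomial activation (tanh): random hidden
    # weights plus a least-squares readout, fit and evaluated on an unbounded
    # domain under a Gaussian weight.  All choices below are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    width = 200
    target = lambda x: np.exp(-x**2) * np.sin(3 * x)

    x_train = rng.normal(size=2000)           # draws from a Gaussian reference measure
    W = rng.normal(size=width)                # random hidden-layer weights
    b = rng.normal(size=width)                # random hidden-layer biases
    H = np.tanh(np.outer(x_train, W) + b)     # hidden-layer features
    coef, *_ = np.linalg.lstsq(H, target(x_train), rcond=None)

    x_test = rng.normal(size=5000)
    pred = np.tanh(np.outer(x_test, W) + b) @ coef
    print("weighted L2 error:", np.sqrt(np.mean((pred - target(x_test))**2)))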


[3] 2410.14783

High-Dimensional Tensor Discriminant Analysis with Incomplete Tensors

Tensor classification has gained prominence across various fields, yet the challenge of handling partially observed tensor data in real-world applications remains largely unaddressed. This paper introduces a novel approach to tensor classification with incomplete data, framed within the tensor high-dimensional linear discriminant analysis. Specifically, we consider a high-dimensional tensor predictor with missing observations under the Missing Completely at Random (MCR) assumption and employ the Tensor Gaussian Mixture Model to capture the relationship between the tensor predictor and class label. We propose the Tensor LDA-MD algorithm, which manages high-dimensional tensor predictors with missing entries by leveraging the low-rank structure of the discriminant tensor. A key feature of our approach is a novel covariance estimation method under the tensor-based MCR model, supported by theoretical results that allow for correlated entries under mild conditions. Our work establishes the convergence rate of the estimation error of the discriminant tensor with incomplete data and minimax optimal bounds for the misclassification rate, addressing key gaps in the literature. Additionally, we derive large deviation results for the generalized mode-wise (separable) sample covariance matrix and its inverse, which are crucial tools in our analysis and hold independent interest. Our method demonstrates excellent performance in simulations and real data analysis, even with significant proportions of missing data. This research advances high-dimensional LDA and tensor learning, providing practical tools for applications with incomplete data and a solid theoretical foundation for classification accuracy in complex settings.


[4] 2410.14787

Privacy for Free in the Over-Parameterized Regime

Differentially private gradient descent (DP-GD) is a popular algorithm to train deep learning models with provable guarantees on the privacy of the training data. In the last decade, the problem of understanding its performance cost with respect to standard GD has received remarkable attention from the research community, which formally derived upper bounds on the excess population risk $R_{P}$ in different learning settings. However, existing bounds typically degrade with over-parameterization, i.e., as the number of parameters $p$ gets larger than the number of training samples $n$ -- a regime which is ubiquitous in current deep-learning practice. As a result, the lack of theoretical insights leaves practitioners without clear guidance, leading some to reduce the effective number of trainable parameters to improve performance, while others use larger models to achieve better results through scale. In this work, we show that in the popular random features model with quadratic loss, for any sufficiently large $p$, privacy can be obtained for free, i.e., $\left|R_{P} \right| = o(1)$, not only when the privacy parameter $\varepsilon$ has constant order, but also in the strongly private setting $\varepsilon = o(1)$. This challenges the common wisdom that over-parameterization inherently hinders performance in private learning.
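
For readers unfamiliar with the algorithm being analysed, a generic DP-GD update clips per-example gradients and adds Gaussian noise before averaging; the Python sketch below shows one such step for a linear model with quadratic loss. The clipping norm, noise multiplier, and learning rate are placeholder assumptions, not the paper's random features setting.

    # Generic differentially private gradient descent step: clip each per-example
    # gradient and perturb the sum with Gaussian noise.  Model, clip norm and
    # noise scale are placeholders, not the paper's setup.
    import numpy as np

    def dp_gd_step(theta, X, y, lr=0.1, clip=1.0, noise_multiplier=1.0, rng=None):
        rng = rng if rng is not None else np.random.default_rng()
        n = len(y)
        residuals = X @ theta - y
        grads = residuals[:, None] * X                        # per-example gradients, shape (n, p)
        norms = np.linalg.norm(grads, axis=1, keepdims=True)
        grads = grads * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
        noisy_sum = grads.sum(axis=0) + rng.normal(scale=noise_multiplier * clip,
                                                   size=theta.shape)
        return theta - lr * noisy_sum / n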


[5] 2410.14789

Differentially Private Covariate Balancing Causal Inference

Differential privacy is the leading mathematical framework for privacy protection, providing a probabilistic guarantee that safeguards individuals' private information when publishing statistics from a dataset. This guarantee is achieved by applying a randomized algorithm to the original data, which introduces unique challenges in data analysis by distorting inherent patterns. In particular, causal inference using observational data in privacy-sensitive contexts is challenging because it requires covariate balance between treatment groups, yet checking the true covariates is prohibited to prevent leakage of sensitive information. In this article, we present a differentially private two-stage covariate balancing weighting estimator to infer causal effects from observational data. Our algorithm produces both point and interval estimators with statistical guarantees, such as consistency and rate optimality, under a given privacy budget.


[6] 2410.14843

Predictive variational inference: Learn the predictively optimal posterior distribution

Vanilla variational inference finds an optimal approximation to the Bayesian posterior distribution, but even the exact Bayesian posterior is often not meaningful under model misspecification. We propose predictive variational inference (PVI): a general inference framework that seeks and samples from an optimal posterior density such that the resulting posterior predictive distribution is as close to the true data generating process as possible, where this closeness is measured by multiple scoring rules. By optimizing this objective, predictive variational inference is generally not the same as, nor even an attempt to approximate, the Bayesian posterior, even asymptotically. Rather, we interpret it as an implicit hierarchical expansion. Further, the learned posterior uncertainty detects heterogeneity of parameters among the population, enabling automatic model diagnosis. This framework applies to both likelihood-exact and likelihood-free models. We demonstrate its application in real data examples.


[7] 2410.14857

A New One Parameter Unit Distribution: Median Based Unit Rayleigh (MBUR): Parametric Quantile Regression Model

Parametric quantile regression is illustrated for a new one-parameter unit Rayleigh distribution, the Median Based Unit Rayleigh (MBUR) distribution. The estimation process using the re-parameterized maximum likelihood function is highlighted with a real dataset example. Inference and goodness of fit are also explored.


[8] 2410.14866

Fast and Optimal Changepoint Detection and Localization using Bonferroni Triplets

The paper considers the problem of detecting and localizing changepoints in a sequence of independent observations. We propose to evaluate a local test statistic on a triplet of time points, for each such triplet in a particular collection. This collection is sparse enough so that the results of the local tests can simply be combined with a weighted Bonferroni correction. This results in a simple and fast method, {\sl Lean Bonferroni Changepoint detection} (LBD), that provides finite sample guarantees for the existence of changepoints as well as simultaneous confidence intervals for their locations. LBD is free of tuning parameters, and we show that LBD allows optimal inference for the detection of changepoints. To this end, we provide a lower bound for the critical constant that measures the difficulty of the changepoint detection problem, and we show that LBD attains this critical constant. We illustrate LBD for a number of distributional settings, namely when the observations are homoscedastic normal with known or unknown variance, for observations from a natural exponential family, and in a nonparametric setting where we assume only exchangeability for segments without a changepoint.
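
As a conceptual sketch only (the abstract does not spell out LBD's exact local statistic, so the mean-shift z-statistic, the triplet collection, and the weights below are placeholder assumptions), a triplet-based scan of this kind can be written as follows, with local p-values combined by a weighted Bonferroni correction.

    # Triplet-based changepoint scan with a weighted Bonferroni correction.
    # The local statistic tests for a mean shift at m within the window (l, r],
    # assuming unit variance; x is a NumPy array of observations.
    import numpy as np
    from scipy.stats import norm

    def local_stat(x, l, m, r):
        left, right = x[l:m], x[m:r]
        scale = np.sqrt(1.0 / len(left) + 1.0 / len(right))
        return abs(left.mean() - right.mean()) / scale

    def triplet_scan(x, triplets, weights, alpha=0.05):
        """triplets: list of (l, m, r); weights: nonnegative, summing to at most 1."""
        detections = []
        for (l, m, r), w in zip(triplets, weights):
            p_value = 2 * norm.sf(local_stat(x, l, m, r))
            if p_value <= alpha * w:                 # weighted Bonferroni threshold
                detections.append((l, m, r, p_value))
        return detections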


[9] 2410.14985

Stochastic Loss Reserving: Dependence and Estimation

Nowadays insurers have to account for potentially complex dependence between risks. In the field of loss reserving, there are many parametric and non-parametric models attempting to capture dependence between business lines. One common approach has been to use additive background risk models (ABRMs), which provide rich and interpretable dependence structures via a common shock model. Unfortunately, ABRMs are often restrictive. Models that capture the necessary features may have parameters that are impractical to estimate, for example models without a closed-form likelihood function due to the lack of a probability density function (e.g., some Tweedie and stable distributions). We apply to loss reserving a modification of the continuous generalised method of moments (CGMM) of [Carrasco and Florens, 2000], which delivers estimators comparable to the MLE. We examine models such as the one proposed by [Avanzi et al., 2016] and a related but novel one derived from the stable family of distributions. Our CGMM method of estimation provides conventional non-Bayesian estimates in cases where MLEs are impractical.


[10] 2410.15022

Statistical Inference for Feature Selection after Optimal Transport-based Domain Adaptation

Feature Selection (FS) under domain adaptation (DA) is a critical task in machine learning, especially when dealing with limited target data. However, existing methods lack the capability to guarantee the reliability of FS under DA. In this paper, we introduce a novel statistical method to test the reliability of FS under DA, named SFS-DA (statistical FS-DA). The key strength of SFS-DA lies in its ability to control the false positive rate (FPR) below a pre-specified level $\alpha$ (e.g., 0.05) while maximizing the true positive rate. Compared to the literature on statistical FS, SFS-DA presents a unique challenge in addressing the effect of DA to ensure the validity of the inference on FS results. We overcome this challenge by leveraging the Selective Inference (SI) framework. Specifically, by carefully examining the FS process under DA, whose operations can be characterized by linear and quadratic inequalities, we prove that achieving FPR control in SFS-DA is indeed possible. Furthermore, we enhance the true detection rate by introducing a more strategic approach. Experiments conducted on both synthetic and real-world datasets robustly support our theoretical results, showcasing the superior performance of the proposed SFS-DA method.


[11] 2410.15049

Modeling Time-Varying Effects of Mobile Health Interventions Using Longitudinal Functional Data from HeartSteps Micro-Randomized Trial

To optimize mobile health interventions and advance domain knowledge on intervention design, it is critical to understand how the intervention effect varies over time and with contextual information. This study aims to assess how a push notification suggesting physical activity influences individuals' step counts using data from the HeartSteps micro-randomized trial (MRT). The statistical challenges include the time-varying treatments and longitudinal functional step count measurements. We propose the first semiparametric causal excursion effect model with varying coefficients to model the time-varying effects within a decision point and across decision points in an MRT. The proposed model incorporates double time indices to accommodate the longitudinal functional outcome, enabling the assessment of time-varying effect moderation by contextual variables. We propose a two-stage causal effect estimator that is robust against a misspecified high-dimensional outcome regression nuisance model. We establish asymptotic theory and conduct simulation studies to validate the proposed estimator. Our analysis provides new insights into individuals' change in response profiles (such as how soon a response occurs) due to the activity suggestions, how such changes differ by the type of suggestions received, and how such changes depend on other contextual information such as being recently sedentary and the day being a weekday.


[12] 2410.15057

Asymptotic Time-Uniform Inference for Parameters in Averaged Stochastic Approximation

We study time-uniform statistical inference for parameters in stochastic approximation (SA), which encompasses a wide range of applications in optimization and machine learning. To that end, we analyze the almost-sure convergence rates of the averaged iterates to a scaled sum of Gaussians in both linear and nonlinear SA problems. We then construct three types of asymptotic confidence sequences that are valid uniformly across all times with coverage guarantees, in the asymptotic sense that the starting time is sufficiently large. These coverage guarantees remain valid if the unknown covariance matrix is replaced by its plug-in estimator, and we conduct experiments to validate our methodology.


[13] 2410.15102

Bayesian-based Propensity Score Subclassification Estimator

Subclassification estimators are one of the methods used to estimate causal effects of interest using the propensity score. This method is more stable compared to other weighting methods, such as inverse probability weighting estimators, in terms of the variance of the estimators. In subclassification estimators, the number of strata is traditionally set at five, and this number is not typically chosen based on data information. Even when the number of strata is selected, the uncertainty from the selection process is often not properly accounted for. In this study, we propose a novel Bayesian-based subclassification estimator that can assess the uncertainty in the number of strata, rather than selecting a single optimal number, using a Bayesian paradigm. To achieve this, we apply a general Bayesian procedure that does not rely on a likelihood function. This procedure allows us to avoid making strong assumptions about the outcome model, maintaining the same flexibility as traditional causal inference methods. With the proposed Bayesian procedure, it is expected that uncertainties from the design phase can be appropriately reflected in the analysis phase, which is sometimes overlooked in non-Bayesian contexts.


[14] 2410.15133

Controllable RANSAC-based Anomaly Detection via Hypothesis Testing

Detecting the presence of anomalies in regression models is a crucial task in machine learning, as anomalies can significantly impact the accuracy and reliability of predictions. Random Sample Consensus (RANSAC) is one of the most popular robust regression methods for addressing this challenge. However, this method lacks the capability to guarantee the reliability of the anomaly detection (AD) results. In this paper, we propose a novel statistical method for testing the AD results obtained by RANSAC, named CTRL-RANSAC (controllable RANSAC). The key strength of the proposed method lies in its ability to control the probability of misidentifying anomalies below a pre-specified level $\alpha$ (e.g., $\alpha = 0.05$). By examining the selection strategy of RANSAC and leveraging the Selective Inference (SI) framework, we prove that achieving controllable RANSAC is indeed feasible. Furthermore, we introduce a more strategic and computationally efficient approach to enhance the true detection rate and overall performance of the CTRL-RANSAC. Experiments conducted on synthetic and real-world datasets robustly support our theoretical results, showcasing the superior performance of the proposed method.
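
Since the selective inference argument conditions on RANSAC's selection events, it may help to recall the base procedure itself; the Python sketch below is a plain RANSAC loop for linear regression, with the iteration count and residual threshold as illustrative choices (it is the vanilla algorithm, not the CTRL-RANSAC test).

    # Plain RANSAC for linear regression: repeatedly fit on a minimal random
    # sample, keep the fit with the largest inlier set, and flag the rest as
    # anomalies.  Threshold and iteration count are illustrative.
    import numpy as np

    def ransac_linear(X, y, n_iters=200, threshold=1.0, rng=None):
        rng = rng if rng is not None else np.random.default_rng(0)
        min_samples = X.shape[1]                     # one point per coefficient
        best_inliers, best_coef = np.array([], dtype=int), None
        for _ in range(n_iters):
            idx = rng.choice(len(y), size=min_samples, replace=False)
            coef, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
            inliers = np.flatnonzero(np.abs(X @ coef - y) < threshold)
            if len(inliers) > len(best_inliers):
                best_inliers, best_coef = inliers, coef
        anomalies = np.setdiff1d(np.arange(len(y)), best_inliers)
        return best_coef, anomalies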


[15] 2410.15166

Joint Probability Estimation of Many Binary Outcomes via Localized Adversarial Lasso

In this work we consider estimating the probability of many (possibly dependent) binary outcomes, which is at the core of many applications, e.g., multi-level treatments in causal inference, demand for bundles of products, etc. Without further conditions, the probability distribution of an $M$-dimensional binary vector is characterized by a number of coefficients that is exponential in $M$, which can lead to a high-dimensional problem even without the presence of covariates. Understanding the (in)dependence structure allows us to substantially improve the estimation, as it allows for an effective factorization of the probability distribution. In order to estimate the probability distribution of an $M$-dimensional binary vector, we leverage a Bahadur representation that connects the sparsity of its coefficients with independence across the components. We propose to use regularized and adversarial regularized estimators to obtain an estimator that is adaptive with respect to the dependence structure, which allows rates of convergence to depend on this intrinsic (lower) dimension. These estimators are needed to handle several challenges within this setting, including estimating nuisance parameters, estimating covariates, and nonseparable moment conditions. Our main results consider the presence of (low-dimensional) covariates, for which we propose a locally penalized estimator. We provide pointwise rates of convergence addressing several issues in the theoretical analyses as we strive for a computationally tractable formulation. We apply our results to the estimation of causal effects with multiple binary treatments and show how our estimators can improve the finite sample performance when compared with non-adaptive estimators that try to estimate all the probabilities directly. We also provide simulations that are consistent with our theoretical findings.


[16] 2410.15180

HACSurv: A Hierarchical Copula-based Approach for Survival Analysis with Dependent Competing Risks

In survival analysis, subjects often face competing risks; for example, individuals with cancer may also suffer from heart disease or other illnesses, which can jointly influence the prognosis of risks and censoring. Traditional survival analysis methods often treat competing risks as independent and fail to accommodate the dependencies between different conditions. In this paper, we introduce HACSurv, a survival analysis method that learns hierarchical Archimedean copula structures and cause-specific survival functions from data with competing risks. HACSurv employs a flexible dependency structure using hierarchical Archimedean copulas to represent the relationships between competing risks and censoring. By capturing the dependencies between risks and censoring, HACSurv achieves better survival predictions and offers insights into risk interactions. Experiments on synthetic datasets demonstrate that our method can accurately identify the complex dependency structure and precisely predict survival distributions, whereas the compared methods exhibit significant deviations between their predictions and the true distributions. Experiments on multiple real-world datasets also demonstrate that our method achieves better survival prediction compared to previous state-of-the-art methods.


[17] 2410.15187

Polyspectral Mean Estimation of General Nonlinear Processes

Higher-order spectra (or polyspectra), defined as the Fourier Transform of a stationary process's autocumulants, are useful in the analysis of nonlinear and non-Gaussian processes. Polyspectral means are weighted averages over Fourier frequencies of the polyspectra, and estimators can be constructed from analogous weighted averages of the higher-order periodogram (a statistic computed from the data sample's discrete Fourier Transform). We first derive the asymptotic distribution of a class of polyspectral mean estimators, obtaining an exact expression for the limit distribution that depends both on the given weighting function and on higher-order spectra. Second, we use bispectral means to define a new test of the linear process hypothesis. Simulations document the finite sample properties of the asymptotic results. Two applications illustrate our results' utility: we test the linear process hypothesis for a sunspot time series, and for Gross Domestic Product we conduct a clustering exercise based on bispectral means with different weight functions.


[18] 2410.15307

Volatility estimation from a view point of entropy

In the present paper, we first revisit the volatility estimation approach proposed by N. Kunitomo and S. Sato, and second, we show that the volatility estimator proposed by P. Malliavin and M.E. Mancino can be understood in a unified way by the approach. Third, we introduce an alternative estimator that might overcome the inconsistency caused by the microstructure noise of the initial observation.


[19] 2410.15320

Amortized Probabilistic Conditioning for Optimization, Simulation and Inference

Amortized meta-learning methods based on pre-training have propelled fields like natural language processing and vision. Transformer-based neural processes and their variants are leading models for probabilistic meta-learning with a tractable objective. Often trained on synthetic data, these models implicitly capture essential latent information in the data-generation process. However, existing methods do not allow users to flexibly inject (condition on) and extract (predict) this probabilistic latent information at runtime, which is key to many tasks. We introduce the Amortized Conditioning Engine (ACE), a new transformer-based meta-learning model that explicitly represents latent variables of interest. ACE affords conditioning on both observed data and interpretable latent variables, the inclusion of priors at runtime, and outputs predictive distributions for discrete and continuous data and latents. We show ACE's modeling flexibility and performance in diverse tasks such as image completion and classification, Bayesian optimization, and simulation-based inference.


[20] 2410.15336

Diffusion-PINN Sampler

Recent success of diffusion models has inspired a surge of interest in developing sampling techniques using reverse diffusion processes. However, accurately estimating the drift term in the reverse stochastic differential equation (SDE) solely from the unnormalized target density poses significant challenges, hindering existing methods from achieving state-of-the-art performance. In this paper, we introduce the Diffusion-PINN Sampler (DPS), a novel diffusion-based sampling algorithm that estimates the drift term by solving the governing partial differential equation of the log-density of the underlying SDE marginals via physics-informed neural networks (PINN). We prove that the error of log-density approximation can be controlled by the PINN residual loss, enabling us to establish convergence guarantees of DPS. Experiments on a variety of sampling tasks demonstrate the effectiveness of our approach, particularly in accurately identifying mixing proportions when the target contains isolated components.


[21] 2410.15361

A Novel Characterization of the Population Area Under the Risk Coverage Curve (AURC) and Rates of Finite Sample Estimators

The selective classifier (SC) has garnered increasing interest in areas such as medical diagnostics, autonomous driving, and the justice system. The Area Under the Risk-Coverage Curve (AURC) has emerged as the foremost evaluation metric for assessing the performance of SC systems. In this work, we introduce a more straightforward representation of the population AURC, interpretable as a weighted risk function, and propose a Monte Carlo plug-in estimator applicable to finite sample scenarios. We demonstrate that our estimator is consistent and offers a low-bias estimation of the actual weights, with a tightly bounded mean squared error (MSE). We empirically show the effectiveness of this estimator on a comprehensive benchmark across multiple datasets, model architectures, and Confidence Score Functions (CSFs).
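
For orientation, a common empirical risk-coverage computation (not necessarily the paper's proposed weighted-risk plug-in estimator) sorts examples by confidence, tracks the cumulative risk at each coverage level, and averages; the losses and confidence scores below are illustrative.

    # Empirical AURC-style score: sort by confidence, accumulate risk at each
    # coverage level, and average over coverage levels.  This is a standard
    # empirical version, not necessarily the paper's estimator.
    import numpy as np

    def empirical_aurc(confidences, losses):
        order = np.argsort(-np.asarray(confidences))          # most confident first
        sorted_losses = np.asarray(losses, dtype=float)[order]
        coverage_risks = np.cumsum(sorted_losses) / np.arange(1, len(sorted_losses) + 1)
        return coverage_risks.mean()

    # Toy example: 0/1 losses with confidence scores from some classifier (CSF).
    print(empirical_aurc([0.9, 0.8, 0.4, 0.95], [0, 1, 1, 0]))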


[22] 2410.15381

High-dimensional prediction for count response via sparse exponential weights

Count data is prevalent in various fields like ecology, medical research, and genomics. In high-dimensional settings, where the number of features exceeds the sample size, feature selection becomes essential. While frequentist methods like the Lasso have advanced in handling high-dimensional count data, Bayesian approaches remain under-explored, with no theoretical results on prediction performance. This paper introduces a novel probabilistic machine learning framework for high-dimensional count data prediction. We propose a pseudo-Bayesian method that integrates a scaled Student prior to promote sparsity and uses an exponential weight aggregation procedure. A key contribution is a novel risk measure tailored to count data prediction, with theoretical guarantees for prediction risk using PAC-Bayesian bounds. Our results include non-asymptotic oracle inequalities, demonstrating rate-optimal prediction error without prior knowledge of sparsity. We implement this approach efficiently using a Langevin Monte Carlo method. Simulations and a real data application highlight the strong performance of our method compared to the Lasso in various settings.
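
The sampler family mentioned at the end is standard; as a hedged sketch, an unadjusted Langevin step adds a gradient move on the log-density plus Gaussian noise. The toy Gaussian target and step size below are placeholders, not the paper's pseudo-posterior for count data.

    # Unadjusted Langevin algorithm: theta <- theta + h * grad log pi(theta)
    #                                        + sqrt(2h) * N(0, I).
    # The target here is a toy standard Gaussian, not the paper's pseudo-posterior.
    import numpy as np

    def langevin_samples(grad_log_density, theta0, step=1e-2, n_steps=5000, rng=None):
        rng = rng if rng is not None else np.random.default_rng(0)
        theta = np.array(theta0, dtype=float)
        draws = np.empty((n_steps, theta.size))
        for k in range(n_steps):
            theta = theta + step * grad_log_density(theta) \
                    + np.sqrt(2 * step) * rng.normal(size=theta.shape)
            draws[k] = theta
        return draws

    samples = langevin_samples(lambda t: -t, theta0=np.zeros(3))   # grad log of N(0, I)
    print(samples.mean(axis=0), samples.var(axis=0))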


[23] 2410.15383

Probabilities for asymmetric p-outside values

In 2017-2020, Jordanova and co-authors investigated probabilities for p-outside values and determined them in many particular cases. They showed that these probabilities are closely related to the concept of heavy tails. Tukey's boxplots are very popular and useful in practice. Analogously to the chi-square criterion, comparing the relative frequencies with which observations fall into the different parts of the boxplot against the corresponding probabilities that an observation from a fixed probability distribution falls into the same parts helps practitioners identify an accurate probability distribution for the observed random variable. This opens the door to distribution-sensitive estimators, which in many cases are more accurate, especially in small-sample investigations. All these methods, however, suffer from the disadvantage that they use the interquartile range in a symmetric way. The concept of outside values should take into account the form of the distribution. Therefore, here, we allow for more asymmetry in the analysis of the tails of distributions. We suggest new theoretical and empirical boxplots and characteristics of the tails of distributions, namely theoretical asymmetric p-outside values functions. We partially investigate some of their properties and give some examples. It turns out that they do not depend on the center and the scaling factor of the distribution. Therefore, they are very appropriate for comparing the tails of distributions and, later on, for estimating the parameters which govern the tail behaviour of the cumulative distribution function.


[24] 2410.15421

A New Framework for Bayesian Function Registration

Function registration, also referred to as alignment, has been one of the fundamental problems in the field of functional data analysis. Classical registration methods such as the Fisher-Rao alignment focus on estimating an optimal time warping function between functions. In recent studies, models on time warping have attracted more attention, and such a model can be used as a prior term to combine with the classical method (as a likelihood term) in a Bayesian framework. The Bayesian approaches have shown improvement over the classical methods. However, their prior model on time warping is often based on a nonlinear approximation, which may introduce inaccuracy and inefficiency. To overcome these problems, we propose a new Bayesian approach by adopting a prior which provides a linear representation, so that various stochastic processes (Gaussian or non-Gaussian) can be effectively utilized on time warping. No linearization approximation is needed in the time warping computation, and the posterior can be obtained via a conventional Markov Chain Monte Carlo approach. We thoroughly investigate the impact of the prior on the performance of functional registration with multiple simulation examples, which demonstrate the superiority of the new framework over the previous methods. We finally apply the new method to a real dataset and obtain a desirable alignment result.


[25] 2410.15477

Randomization Inference for Before-and-After Studies with Multiple Units: An Application to a Criminal Procedure Reform in Uruguay

We study the immediate impact of a new code of criminal procedure on crime. In November 2017, Uruguay switched from an inquisitorial system (where a single judge leads the investigation and decides the appropriate punishment for a particular crime) to an adversarial system (where the investigation is now led by prosecutors and the judge plays an overseeing role). To analyze the short-term effects of this reform, we develop a randomization-based approach for before-and-after studies with multiple units. Our framework avoids parametric time series assumptions and eliminates extrapolation by basing statistical inferences on finite-sample methods that rely only on the time periods closest to the time of the policy intervention. A key identification assumption underlying our method is that there would have been no time trends in the absence of the intervention, which is most plausible in a small window around the time of the reform. We also discuss several falsification methods to assess the plausibility of this assumption. Using our proposed inferential approach, we find statistically significant short-term causal effects of the crime reform. Our unbiased estimate shows an average increase of approximately 25 police reports per day in the week following the implementation of the new adversarial system in Montevideo, representing an 8 percent increase compared to the previous week under the old system.


[26] 2410.15530

Simultaneous Inference in Multiple Matrix-Variate Graphs for High-Dimensional Neural Recordings

As large-scale neural recordings become common, many neuroscientific investigations are focused on identifying functional connectivity from spatio-temporal measurements in two or more brain areas across multiple sessions. Spatial-temporal data in neural recordings can be represented as matrix-variate data, with time as the first dimension and space as the second. In this paper, we exploit the multiple matrix-variate Gaussian Graphical model to encode the common underlying spatial functional connectivity across multiple sessions of neural recordings. By effectively integrating information across multiple graphs, we develop a novel inferential framework that allows simultaneous testing to detect meaningful connectivity for a target edge subset of arbitrary size. Our test statistics are based on a group penalized regression approach and a high-dimensional Gaussian approximation technique. The validity of simultaneous testing is demonstrated theoretically under mild assumptions on sample size and non-stationary autoregressive temporal dependence. Our test is nearly optimal in achieving the testable region boundary. Additionally, our method involves only convex optimization and parametric bootstrap, making it computationally attractive. We demonstrate the efficacy of the new method through both simulations and an experimental study involving multiple local field potential (LFP) recordings in the Prefrontal Cortex (PFC) and visual area V4 during a memory-guided saccade task.


[27] 2410.15560

Ablation Studies for Novel Treatment Effect Estimation Models

Ablation studies are essential for understanding the contribution of individual components within complex models, yet their application in nonparametric treatment effect estimation remains limited. This paper emphasizes the importance of ablation studies by examining the Bayesian Causal Forest (BCF) model, particularly the inclusion of the estimated propensity score $\hat{\pi}(x_i)$ intended to mitigate regularization-induced confounding (RIC). Through a partial ablation study utilizing five synthetic data-generating processes with varying baseline and propensity score complexities, we demonstrate that excluding $\hat{\pi}(x_i)$ does not diminish the model's performance in estimating average and conditional average treatment effects or in uncertainty quantification. Moreover, omitting $\hat{\pi}(x_i)$ reduces computational time by approximately 21\%. These findings suggest that the BCF model's inherent flexibility suffices in adjusting for confounding without explicitly incorporating the propensity score. The study advocates for the routine use of ablation studies in treatment effect estimation to ensure model components are essential and to prevent unnecessary complexity.


[28] 2410.15571

changepointGA: An R package for Fast Changepoint Detection via Genetic Algorithm

Genetic algorithms (GAs) are stochastic search techniques designed to address combinatorial optimization problems by mimicking the principles of natural selection and evolution. GAs have proven effective in both single and multiple changepoint analyses within time series data, where each chromosome encodes the hyperparameters, number, and locations of changepoints, along with the associated model parameters. Starting with a population of potential changepoint configurations, GAs utilize genetic operators -- selection, crossover, and mutation -- to evolve toward solutions with enhanced fitness. This paper presents the R package changepointGA, which encodes changepoint chromosomes in an integer format. Furthermore, changepointGA facilitates the dynamic and simultaneous estimation of changepoint model hyperparameters, changepoint configurations, and model parameters, leading to more robust and accurate analyses. Several simulation studies and real-world applications are discussed to illustrate the package's capabilities.


[29] 2410.15596

Assessing mediation in cross-sectional stepped wedge cluster randomized trials

Mediation analysis has been comprehensively studied for independent data, but relatively little work has been done for correlated data, especially for the increasingly adopted stepped wedge cluster randomized trials (SW-CRTs). Motivated by challenges in understanding the underlying effect mechanisms in pragmatic and implementation science clinical trials, we develop new methods for mediation analysis in SW-CRTs. Specifically, based on linear and generalized linear mixed models, we demonstrate how to estimate the natural indirect effect and mediation proportion in typical SW-CRTs with four data types, including both continuous and binary mediators and outcomes. Furthermore, to address the emerging challenges of exposure-time treatment effect heterogeneity, we derive the mediation expressions in SW-CRTs when the total effect varies as a function of the exposure time. The cluster jackknife approach is considered for inference across all data types and treatment effect structures. We conduct extensive simulations to evaluate the finite-sample performance of the proposed mediation estimators and demonstrate the proposed approach in a real data example. A user-friendly R package mediateSWCRT has been developed to facilitate the practical implementation of the estimators.


[30] 2410.15705

Variable screening for covariate dependent extreme value index estimation

One of the main topics of extreme value analysis is to estimate the extreme value index, an important parameter that controls the tail behavior of the distribution. In many cases, estimating the extreme value index of the target variable associated with covariates is useful. Although the estimation of the covariate-dependent extreme value index has been developed by numerous researchers, no results have been presented regarding covariate selection. This paper proposes a sure independence screening method for covariate-dependent extreme value index estimation. For the screening, the marginal utility between the target variable and each covariate is calculated using the conditional Pickands estimator. A single-index model that uses the covariates selected by screening is further provided to estimate the extreme value index after screening. Monte Carlo simulations confirmed the finite sample performance of the proposed method. In addition, a real-data application is presented.


[31] 2410.15711

Quantiles and Quantile Regression on Riemannian Manifolds: a measure-transportation-based approach

Increased attention has been given recently to the statistical analysis of variables with values on nonlinear manifolds. A natural but nontrivial problem in that context is the definition of quantile concepts. We propose a solution for compact Riemannian manifolds without boundaries; typical examples are polyspheres, hyperspheres, and toroidal manifolds equipped with their Riemannian metrics. Our concept of quantile function comes along with a concept of distribution function and, in the empirical case, ranks and signs. The absence of a canonical ordering is offset by resorting to the data-driven ordering induced by optimal transports. Theoretical properties, such as the uniform convergence of the empirical distribution and conditional (and unconditional) quantile functions and distribution-freeness of ranks and signs, are established. Statistical inference applications, from goodness-of-fit to distribution-free rank-based testing, are numerous. Of particular importance is the case of quantile regression with directional or toroidal multiple output, which is given special attention in this paper. Extensive simulations are carried out to illustrate these novel concepts.


[32] 2410.15713

Nonparametric method of structural break detection in stochastic time series regression model

We propose a nonparametric algorithm to detect structural breaks in the conditional mean and/or variance of a time series. Our method does not assume any specific parametric form for the dependence structure of the regressor, the time series model, or the distribution of the model noise. This flexibility allows our algorithm to be applicable to a wide range of time series structures commonly encountered in financial econometrics. The effectiveness of the proposed algorithm is validated through an extensive simulation study and a real data application in detecting structural breaks in the mean and volatility of Bitcoin returns. The algorithm's ability to identify structural breaks in the data highlights its practical utility in econometric analysis and financial modeling.


[33] 2410.15719

Robust evaluation of vaccine effects based on estimation of vaccine efficacy curve

Background: The Cox model and its extensions assuming proportional hazards are widely used to estimate vaccine efficacy (VE). In the typical situation where VE wanes over time, the VE estimates are not only sensitive to study duration and timing of vaccine delivery in relation to disease seasonality but also biased in the presence of sample attrition. Furthermore, estimates of vaccine impact such as the number of cases averted (NCA) are sensitive to background disease incidence and timing of vaccine delivery. Comparison of the estimates between trials with different features can be misleading. Methods: We propose estimating VE as a function of time in the Cox model framework, using the area under the VE curve as a summary measure of VE, and an extension of the method to estimate vaccine impact. We use simulations and re-analysis of an RTS,S/AS01 malaria vaccine trial dataset to demonstrate their properties and applications. Results: Simulations under scenarios with different trial durations, magnitudes of sample attrition and timings of vaccine delivery, all assuming vaccine protection wanes over time, demonstrated the problems of conventional methods assuming proportional hazards, the robustness and unbiasedness of the proposed methods, and the comparability of the proposed estimates of vaccine efficacy and impact across trials with different features. Furthermore, the proposed NCA estimators are informative in determining the optimal vaccine delivery strategy in regions with highly seasonal disease transmission. Conclusions: The proposed method, based on estimation of the vaccine efficacy trajectory, provides a robust, unbiased, and flexible approach to evaluating vaccine effects.


[34] 2410.15721

Learning signals defined on graphs with optimal transport and Gaussian process regression

In computational physics, machine learning has now emerged as a powerful complementary tool to explore efficiently candidate designs in engineering studies. Outputs in such supervised problems are signals defined on meshes, and a natural question is the extension of general scalar output regression models to such complex outputs. Changes between input geometries in terms of both size and adjacency structure in particular make this transition non-trivial. In this work, we propose an innovative strategy for Gaussian process regression where inputs are large and sparse graphs with continuous node attributes and outputs are signals defined on the nodes of the associated inputs. The methodology relies on the combination of regularized optimal transport, dimension reduction techniques, and the use of Gaussian processes indexed by graphs. In addition to enabling signal prediction, the main point of our proposal is to provide confidence intervals on node values, which is crucial for uncertainty quantification and active learning. Numerical experiments highlight the efficiency of the method to solve real problems in fluid dynamics and solid mechanics.


[35] 2410.15729

Two-stage Learning-to-Defer for Multi-Task Learning

The Learning-to-Defer approach has been explored for classification and, more recently, regression tasks separately. Many contemporary learning tasks, however, involve both classification and regression components. In this paper, we introduce a Learning-to-Defer approach for multi-task learning that encompasses both classification and regression tasks. Our two-stage approach utilizes a rejector that defers decisions to the most accurate agent among a pre-trained joint classifier-regressor model and one or more external experts. We show that our surrogate loss is both $(\mathcal{H}, \mathcal{F}, \mathcal{R})$-consistent and Bayes-consistent, ensuring an effective approximation of the optimal solution. Additionally, we derive learning bounds that demonstrate the benefits of employing multiple confident experts alongside a rich model in a two-stage learning framework. Empirical experiments conducted on electronic health record analysis tasks underscore the performance enhancements achieved through our method.


[36] 2410.15874

A measure of departure from symmetry via the Fisher-Rao distance for contingency tables

A measure of asymmetry is a quantification method that allows for the comparison of categorical evaluations before and after treatment effects or among different target populations, irrespective of sample size. We focus on square contingency tables that summarize survey results between two time points or cohorts, represented by the same categorical variables. We propose a measure to evaluate the degree of departure from a symmetry model using cosine similarity. This proposal is based on the Fisher-Rao distance, allowing asymmetry to be interpreted as a geodesic distance between two distributions. Various measures of asymmetry have been proposed, but visualizing the relationship of these quantification methods on a two-dimensional plane demonstrates that the proposed measure provides the geometrically simplest and most natural quantification. Moreover, the visualized figure indicates that the proposed method for measuring departures from symmetry is less affected by a small number of cells with extreme asymmetry. A simulation study shows that for square contingency tables with an underlying asymmetry model, our method can directly extract and quantify only the asymmetric structure of the model, and can more sensitively detect departures from symmetry than divergence-type measures.


[37] 2410.15888

Conditional Dependence via U-Statistics Pruning

The problem of measuring conditional dependence between two random phenomena arises when a third one (a confounder) has a potential influence on the amount of information shared by the original pair. A typical issue in this challenging problem is the inversion of ill-conditioned autocorrelation matrices. This paper presents a novel measure of conditional dependence based on the use of incomplete unbiased statistics of degree two, which allows independence to be re-interpreted as uncorrelatedness on a finite-dimensional feature space. This formulation makes it possible to prune data according to the observations of the confounder itself, thus avoiding matrix inversions altogether. Moreover, the proposed approach is articulated as an extension of the Hilbert-Schmidt independence criterion, which becomes expressible through kernels that operate on 4-tuples of data.


[38] 2410.15931

Towards more realistic climate model outputs: A multivariate bias correction based on zero-inflated vine copulas

Climate model large ensembles are an essential research tool for analysing and quantifying natural climate variability and providing robust information for rare extreme events. The models' simulated representations of reality are susceptible to bias due to incomplete understanding of physical processes. This paper aims to correct the bias of five climate variables from the CRCM5 Large Ensemble over Central Europe at a 3-hourly temporal resolution. At this high temporal resolution, two variables, precipitation and radiation, exhibit a high share of zero inflation. We propose a novel bias-correction method, VBC (Vine copula bias correction), that models and transfers multivariate dependence structures for zero-inflated margins in the data from its error-prone model domain to a reference domain. VBC estimates the model and reference distribution using vine copulas and corrects the model distribution via (inverse) Rosenblatt transformation. To deal with the variables' zero-inflated nature, we develop a new vine density decomposition that accommodates such variables and employs an adequately randomized version of the Rosenblatt transform. This novel approach allows for more accurate modelling of multivariate zero-inflated climate data. Compared with state-of-the-art correction methods, VBC is generally the best-performing correction and the most accurate method for correcting zero-inflated events.
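
The transfer step VBC performs can be pictured on a much simpler object: for a bivariate Gaussian copula the Rosenblatt transform and its inverse have closed forms, shown in the Python sketch below; this toy copula and the correlation values are illustrative stand-ins for the paper's zero-inflated vine construction.

    # Rosenblatt transform and inverse for a bivariate Gaussian copula: push
    # model-domain data to independent uniforms, then pull back under the
    # reference-domain dependence.  A toy stand-in for the paper's vines.
    import numpy as np
    from scipy.stats import norm

    def rosenblatt(u1, u2, rho):
        z1, z2 = norm.ppf(u1), norm.ppf(u2)
        return u1, norm.cdf((z2 - rho * z1) / np.sqrt(1 - rho**2))

    def inverse_rosenblatt(v1, v2, rho):
        z1 = norm.ppf(v1)
        z2 = rho * z1 + np.sqrt(1 - rho**2) * norm.ppf(v2)
        return v1, norm.cdf(z2)

    u1, u2 = 0.7, 0.9
    v = rosenblatt(u1, u2, rho=0.8)          # dependence in the model domain
    print(inverse_rosenblatt(*v, rho=0.3))   # re-imposed reference-domain dependence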


[39] 2410.15955

The mutual arrangement of Wright-Fisher diffusion path measures and its impact on parameter estimation

The Wright-Fisher diffusion is a fundamentally important model of evolution encompassing genetic drift, mutation, and natural selection. Suppose you want to infer the parameters associated with these processes from an observed sample path. Then to write down the likelihood one first needs to know the mutual arrangement of two path measures under different parametrizations; that is, whether they are absolutely continuous, equivalent, singular, and so on. In this paper we give a complete answer to this question by finding the separating times for the diffusion - the stopping time before which one measure is absolutely continuous with respect to the other and after which the pair is mutually singular. In one dimension this extends a classical result of Dawson on the local equivalence between neutral and non-neutral Wright-Fisher diffusion measures. Along the way we also develop new zero-one type laws for the diffusion on its approach to, and emergence from, the boundary. As an application we derive an explicit expression for the joint maximum likelihood estimator of the mutation and selection parameters and show that its convergence properties are closely related to the separating time.


[40] 2410.15968

A Causal Transformation Model for Time-to-Event Data Affected by Unobserved Confounding: Revisiting the Illinois Reemployment Bonus Experiment

Motivated by studies investigating causal effects in survival analysis, we propose a transformation model to quantify the impact of a binary treatment on a time-to-event outcome. The approach is based on a flexible linear transformation structural model that links a monotone function of the time-to-event with the propensity for treatment through a bivariate Gaussian distribution. The model equations are specified as functions of additive predictors, allowing the impacts of observed confounders to be accounted for flexibly. Furthermore, the effect of the instrumental variable may be regularized through a ridge penalty, while interactions between the treatment and modifier variables can be incorporated into the model to acknowledge potential variations in treatment effects across different subgroups. The baseline survival function is estimated in a flexible manner using monotonic P-splines, while unobserved confounding is captured through the dependence parameter of the bivariate Gaussian. Parameter estimation is achieved via a computationally efficient and stable penalized maximum likelihood estimation approach and intervals constructed using the related inferential results. We revisit a dataset from the Illinois Reemployment Bonus Experiment to estimate the causal effect of a cash bonus on unemployment duration, unveiling new insights. The modeling framework is incorporated into the R package GJRM, enabling researchers and practitioners to fit the proposed causal survival model and obtain easy-to-interpret numerical and visual summaries.


[41] 2410.16004

Nonparametric Bayesian networks are typically faithful in the total variation metric

We show that for a given DAG $G$, among all observational distributions of Bayesian networks over $G$ with arbitrary outcome spaces, the faithful distributions are `typical': they constitute a dense, open set with respect to the total variation metric. As a consequence, the set of faithful distributions is non-empty, and the unfaithful distributions are nowhere dense. We extend this result to the space of Bayesian networks, where the properties hold for Bayesian networks instead of distributions of Bayesian networks. As special cases, we show that these results also hold for the faithful parameters of the subclasses of linear Gaussian and discrete Bayesian networks, giving a topological analogue of the measure-zero results of Spirtes et al. (1993) and Meek (1995). Finally, we extend our topological results and the measure-zero results of Spirtes et al. and Meek to Bayesian networks with latent variables.


[42] 2410.16071

Stochastic Exploration of Real Varieties via Variety Distributions

Nonlinear systems of polynomial equations arise naturally in many applied settings, for example loglinear models on contingency tables and Gaussian graphical models. The solution sets to these systems over the reals are often positive dimensional spaces that in general may be very complicated yet have very nice local behavior almost everywhere. Standard methods in real algebraic geometry for describing positive dimensional real solution sets include cylindrical algebraic decomposition and numerical cell decomposition, both of which can be costly to compute in many practical applications. In this work we communicate recent progress towards a Monte Carlo framework for exploring such real solution sets. After describing how to construct probability distributions whose mass focuses on a variety of interest, we describe how Hamiltonian Monte Carlo methods can be used to sample points near the variety that may then be moved to the variety using endgames. We conclude by showcasing trial experiments using practical implementations of the method in the Bayesian engine Stan.


[43] 2410.16073

On the Geometry of Regularization in Adversarial Training: High-Dimensional Asymptotics and Generalization Bounds

Regularization, whether explicit in terms of a penalty in the loss or implicit in the choice of algorithm, is a cornerstone of modern machine learning. Indeed, controlling the complexity of the model class is particularly important when data is scarce, noisy or contaminated, as it translates a statistical belief on the underlying structure of the data. This work investigates the question of how to choose the regularization norm $\lVert \cdot \rVert$ in the context of high-dimensional adversarial training for binary classification. To this end, we first derive an exact asymptotic description of the robust, regularized empirical risk minimizer for various types of adversarial attacks and regularization norms (including non-$\ell_p$ norms). We complement this analysis with a uniform convergence analysis, deriving bounds on the Rademacher Complexity for this class of problems. Leveraging our theoretical results, we quantitatively characterize the relationship between perturbation size and the optimal choice of $\lVert \cdot \rVert$, confirming the intuition that, in the data scarce regime, the type of regularization becomes increasingly important for adversarial training as perturbations grow in size.


[44] 2410.16076

Improving the (approximate) sequential probability ratio test by avoiding overshoot

The sequential probability ratio test (SPRT) by Wald (1945) is a cornerstone of sequential analysis. Based on desired type I and type II error levels $\alpha, \beta \in (0,1)$, it stops when the likelihood ratio statistic crosses certain upper and lower thresholds, guaranteeing optimality of the expected sample size. However, these thresholds are not available in closed form, and the test is often applied with the approximate thresholds $(1-\beta)/\alpha$ and $\beta/(1-\alpha)$ (approximate SPRT). When $\beta > 0$, this guarantees neither type I nor type II error control at $\alpha$ and $\beta$, nor optimality. When $\beta=0$ (power-one SPRT), it guarantees type I error control at $\alpha$ that is in general conservative, and thus not optimal. The looseness in both cases is caused by overshoot: the test statistic overshoots the thresholds at the stopping time. One standard way to address this is to calculate the right thresholds numerically, but many papers and software packages do not do this. In this paper, we describe a different way to improve the approximate SPRT: we change the test statistic to avoid overshoot. Our technique uniformly improves power-one SPRTs $(\beta=0)$ for simple nulls and alternatives, or for one-sided nulls and alternatives in exponential families. When $\beta > 0$, our techniques provide valid type I error guarantees and lead to type II error similar to Wald's, but often need fewer samples. These improved sequential tests can also be used for deriving tighter parametric confidence sequences, and can be extended to nontrivial settings like sampling without replacement and conformal martingales.
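
To make the overshoot issue concrete, the Python sketch below runs Wald's approximate SPRT with the thresholds $(1-\beta)/\alpha$ and $\beta/(1-\alpha)$ quoted above, for a simple Gaussian null and alternative (an illustrative choice); the amount by which the log-likelihood ratio exceeds the boundary at the stopping time is the overshoot the paper's modified statistic is designed to avoid.

    # Approximate SPRT with Wald's thresholds A = (1 - beta)/alpha and
    # B = beta/(1 - alpha), for N(mu0, 1) vs N(mu1, 1) (illustrative choice).
    # The returned slack is the overshoot past log(A) or log(B) at stopping.
    import math
    import random

    def approximate_sprt(observations, alpha=0.05, beta=0.10, mu0=0.0, mu1=1.0):
        log_A = math.log((1 - beta) / alpha)         # upper (reject H0) boundary
        log_B = math.log(beta / (1 - alpha))         # lower (accept H0) boundary
        llr, t = 0.0, 0
        for t, x in enumerate(observations, start=1):
            llr += (mu1 - mu0) * x - 0.5 * (mu1**2 - mu0**2)   # Gaussian LLR increment
            if llr >= log_A:
                return "reject H0", t, llr - log_A
            if llr <= log_B:
                return "accept H0", t, log_B - llr
        return "no decision", t, 0.0

    random.seed(1)
    print(approximate_sprt(random.gauss(1.0, 1.0) for _ in range(10_000)))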


[45] 2410.16096

Dynamic Time Warping-based imputation of long gaps in human mobility trajectories

Individual mobility trajectories are difficult to measure and often incur long periods of missingness. Aggregation of this mobility data without accounting for the missingness leads to erroneous results, underestimating travel behavior. This paper proposes Dynamic Time Warping-Based Multiple Imputation (DTWBMI) as a method of filling long gaps in human mobility trajectories in order to use the available data to the fullest extent. This method reduces spatiotemporal trajectories to time series of particular travel behavior, then selects candidates for multiple imputation on the basis of the dynamic time warping distance between the potential donor series and the series preceding and following the gap in the recipient series and finally imputes values multiple times. A simulation study designed to establish optimal parameters for DTWBMI provides two versions of the method. These two methods are applied to a real-world dataset of individual mobility trajectories with simulated missingness and compared against other methods of handling missingness. Linear interpolation outperforms DTWBMI and other methods when gaps are short and data are limited. DTWBMI outperforms other methods when gaps become longer and when more data are available.
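
The matching step at the heart of DTWBMI is a dynamic time warping distance between candidate donor series and the segments bordering a gap; the textbook recursion below illustrates that step only (the toy donor series, and matching against the pre-gap segment alone, are illustrative simplifications of the full multiple-imputation pipeline).

    # Textbook dynamic time warping distance, used here only to rank candidate
    # donor series against the segment preceding a gap.  Donors and the
    # prefix-only matching are illustrative simplifications.
    import numpy as np

    def dtw_distance(a, b):
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(a[i - 1] - b[j - 1])
                cost[i, j] = d + min(cost[i - 1, j],       # insertion
                                     cost[i, j - 1],       # deletion
                                     cost[i - 1, j - 1])   # match
        return cost[n, m]

    recipient_prefix = np.array([0.0, 1.0, 2.0, 3.0, 2.0])
    donors = {"donor_1": np.array([0.0, 1.0, 2.0, 2.0, 2.0]),
              "donor_2": np.array([5.0, 5.0, 4.0, 4.0, 3.0])}
    print(sorted(donors, key=lambda k: dtw_distance(recipient_prefix, donors[k])))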


[46] 2410.16106

Statistical Inference for Temporal Difference Learning with Linear Function Approximation

Statistical inference with finite-sample validity for the value function of a given policy in Markov decision processes (MDPs) is crucial for ensuring the reliability of reinforcement learning. Temporal Difference (TD) learning, arguably the most widely used algorithm for policy evaluation, serves as a natural framework for this purpose. In this paper, we study the consistency properties of TD learning with Polyak-Ruppert averaging and linear function approximation, and obtain three significant improvements over existing results. First, we derive a novel sharp high-dimensional probability convergence guarantee that depends explicitly on the asymptotic variance and holds under weak conditions. We further establish refined high-dimensional Berry-Esseen bounds over the class of convex sets that guarantee faster rates than those in the literature. Finally, we propose a plug-in estimator for the asymptotic covariance matrix, designed for efficient online computation. These results enable the construction of confidence regions and simultaneous confidence intervals for the linear parameters of the value function, with guaranteed finite-sample coverage. We demonstrate the applicability of our theoretical findings through numerical experiments.
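For concreteness, a minimal sketch (illustrative toy problem, not the paper's estimator) of TD(0) with linear function approximation and Polyak-Ruppert averaging, the iterate whose fluctuations the inference results above describe:

    import numpy as np

    def td0_polyak_ruppert(transitions, phi, dim, step=0.05, gamma=0.95):
        # TD(0) with linear function approximation; returns the Polyak-Ruppert
        # averaged iterate theta_bar, the quantity targeted by the asymptotic theory.
        theta = np.zeros(dim)
        theta_bar = np.zeros(dim)
        for t, (s, r, s_next) in enumerate(transitions, start=1):
            td_error = r + gamma * phi(s_next) @ theta - phi(s) @ theta
            theta = theta + step * td_error * phi(s)
            theta_bar += (theta - theta_bar) / t  # running average of the iterates
        return theta_bar

    # toy 3-state problem with one-hot features and synthetic transitions
    phi = lambda s: np.eye(3)[s]
    rng = np.random.default_rng(1)
    traj = [(int(rng.integers(3)), float(rng.normal()), int(rng.integers(3))) for _ in range(5000)]
    print(td0_polyak_ruppert(traj, phi, dim=3))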


[47] 2410.16187

Interpretable Prediction Rule Ensembles in the Presence of Missing Data

Prediction Rule Ensembles (PREs) are robust and interpretable statistical learning techniques with potential for predictive analytics, yet their efficacy in the presence of missing data is untested. This study uses multiple imputation to fill in missing values, but uses a data stacking approach instead of a traditional model pooling approach to combine the results. We perform a simulation study to compare imputation methods under realistic conditions, focusing on sample sizes of $N=200$ and $N=400$ across 1,000 replications. Evaluated techniques include multiple imputation by chained equations with predictive mean matching (MICE PMM), MICE with Random Forest (MICE RF), Random Forest imputation with the ranger algorithm (missRanger), and imputation using extreme gradient boosting (MIXGBoost), with results compared to listwise deletion. Because stacking multiple imputed datasets can overly complicate models, we additionally explore different coarsening levels to simplify and enhance the interpretability and performance of PRE models. Our findings highlight a trade-off between predictive performance and model complexity in selecting imputation methods. While MIXGBoost and MICE PMM yield high rule recovery rates, they also increase false positives in rule selection. In contrast, MICE RF and missRanger promote rule sparsity. MIXGBoost achieved the greatest MSE reduction, followed by MICE PMM, MICE RF, and missRanger. Avoiding overly coarse rounding of variables helps to reduce model size with only marginal loss in performance. Listwise deletion has an adverse impact on model validity. Our results emphasize the importance of choosing suitable imputation techniques based on research goals and of advancing methods for handling missing data in statistical learning.


[48] 2410.16201

Theoretical Limitations of Ensembles in the Age of Overparameterization

Classic tree-based ensembles generalize better than any single decision tree. In contrast, recent empirical studies find that modern ensembles of (overparameterized) neural networks may not provide any inherent generalization advantage over single but larger neural networks. This paper clarifies how modern overparameterized ensembles differ from their classic underparameterized counterparts, using ensembles of random feature (RF) regressors as a basis for developing theory. In contrast to the underparameterized regime, where ensembling typically induces regularization and increases generalization, we prove that infinite ensembles of overparameterized RF regressors become pointwise equivalent to (single) infinite-width RF regressors. This equivalence, which is exact for ridgeless models and approximate for small ridge penalties, implies that overparameterized ensembles and single large models exhibit nearly identical generalization. As a consequence, we can characterize the predictive variance amongst ensemble members, and demonstrate that it quantifies the expected effects of increasing capacity rather than capturing any conventional notion of uncertainty. Our results challenge common assumptions about the advantages of ensembles in overparameterized settings, prompting a reconsideration of how well intuitions from underparameterized ensembles transfer to deep ensembles and the overparameterized regime.
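To make the object of study concrete, here is a small numpy sketch (illustrative data, widths, and seeds, not the paper's experiments) comparing an ensemble of ridgeless random-feature regressors with a single, wider one:

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, p = 50, 5, 400                          # overparameterized: width p >> sample size n
    X = rng.normal(size=(n, d))
    y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)
    X_test = rng.normal(size=(200, d))

    def rf_predict(width, seed):
        # minimum-norm (ridgeless) least squares on random ReLU features
        W = np.random.default_rng(seed).normal(size=(d, width)) / np.sqrt(d)
        features = lambda Z: np.maximum(Z @ W, 0.0)
        coef = np.linalg.pinv(features(X)) @ y    # min-norm interpolant of the training data
        return features(X_test) @ coef

    ensemble = np.mean([rf_predict(p, seed) for seed in range(20)], axis=0)
    single_wide = rf_predict(20 * p, seed=123)
    # as width and ensemble size grow, the two predictors become pointwise close
    print(np.mean((ensemble - single_wide) ** 2))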


[49] 2410.14681

The impact of the spatial resolution of wind data on multi-decadal wind power forecasts in Germany

Accurate multi-decadal wind power predictions are crucial for sustainable energy transitions but are challenged by the coarse spatial resolution of global climate models (GCMs). This study examines the impact of spatial resolution on wind power forecasts by analyzing historical wind speed outputs from ten CMIP6 GCMs in Germany, using ERA5 reanalysis as a reference. Results show that the choice of GCM is the primary influence on wind speed output, with higher resolution models partly, but not consistently, improving predictions. While high-resolution models better capture extreme wind speeds, they do not systematically improve the prediction of the whole wind speed distribution. The data set MPI-ESM1-2-HR (MPI-HR) was found to represent the wind speed distribution particularly faithfully, while the MIROC6 (JAP) data set showed substantial underestimation for the German region compared to ERA5. These findings underscore the complexity of wind speed modeling for power predictions and emphasize the need for careful GCM selection and appropriate downscaling and bias correction methods.


[50] 2410.14730

On the Relation Between Linear Diffusion and Power Iteration

Recently, diffusion models have gained popularity due to their impressive generative abilities. These models learn the implicit distribution given by the training dataset, and sample new data by transforming random noise through the reverse process, which can be thought of as gradual denoising. In this work, we examine the generation process as a ``correlation machine'', where random noise is repeatedly enhanced in correlation with the implicitly given distribution. To this end, we explore the linear case, where the optimal denoiser in the MSE sense is known to be the PCA projection. This enables us to connect the theory of diffusion models to the spiked covariance model, where the dependence of the denoiser on the noise level and the amount of training data can be expressed analytically, in the rank-1 case. In a series of numerical experiments, we extend this result to general low rank data, and show that low frequencies emerge earlier in the generation process, where the denoising basis vectors are more aligned to the true data with a rate depending on their eigenvalues. This model allows us to show that the linear diffusion model converges in mean to the leading eigenvector of the underlying data, similarly to the prevalent power iteration method. Finally, we empirically demonstrate the applicability of our findings beyond the linear case, in the Jacobians of a deep, non-linear denoiser, used in general image generation tasks.
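For reference, the power iteration that the linear diffusion is compared to, run here on a toy spiked-covariance sample (a generic sketch, not the paper's experiments):

    import numpy as np

    def power_iteration(A, n_iter=200, seed=0):
        # repeatedly apply A and renormalize; converges to the leading eigenvector
        # (up to sign) whenever the top eigenvalue is strictly separated
        v = np.random.default_rng(seed).normal(size=A.shape[0])
        for _ in range(n_iter):
            v = A @ v
            v /= np.linalg.norm(v)
        return v

    # rank-1 spiked data: the sample covariance has a leading eigenvector near u
    rng = np.random.default_rng(1)
    u = np.array([1.0, 0.0, 0.0])
    X = rng.normal(size=(500, 3)) + 3.0 * rng.normal(size=(500, 1)) * u
    C = np.cov(X, rowvar=False)
    print(power_iteration(C))  # approximately +/- u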


[51] 2410.14741

CAKD: A Correlation-Aware Knowledge Distillation Framework Based on Decoupling Kullback-Leibler Divergence

In knowledge distillation, a primary focus has been on transforming and balancing multiple distillation components. In this work, we emphasize the importance of thoroughly examining each distillation component, as we observe that not all elements are equally crucial. From this perspective, we decouple the Kullback-Leibler (KL) divergence into three unique elements: Binary Classification Divergence (BCD), Strong Correlation Divergence (SCD), and Weak Correlation Divergence (WCD). Each of these elements presents varying degrees of influence. Leveraging these insights, we present the Correlation-Aware Knowledge Distillation (CAKD) framework. CAKD is designed to prioritize the facets of the distillation components that have the most substantial influence on predictions, thereby optimizing knowledge transfer from teacher to student models. Our experiments demonstrate that adjusting the effect of each element enhances the effectiveness of knowledge transfer. Furthermore, evidence shows that our novel CAKD framework consistently outperforms the baseline across diverse models and datasets. Our work further highlights the importance and effectiveness of closely examining the impact of different parts of the distillation process.


[52] 2410.14761

Constrained Recurrent Bayesian Forecasting for Crack Propagation

Predictive maintenance of railway infrastructure, especially railroads, is essential to ensure safety. However, accurate prediction of crack evolution represents a major challenge due to the complex interactions between intrinsic and external factors, as well as measurement uncertainties. Effective modeling requires a multidimensional approach and a comprehensive understanding of these dynamics and uncertainties. Motivated by an industrial use case based on collected real data containing measured crack lengths, this paper introduces a robust Bayesian multi-horizon approach for predicting the temporal evolution of crack lengths on rails. This model captures the intricate interplay between various factors influencing crack growth. Additionally, the Bayesian approach quantifies both epistemic and aleatoric uncertainties, providing a confidence interval around predictions. To enhance the model's reliability for railroad maintenance, specific constraints are incorporated. These constraints limit non-physical crack propagation behavior and prioritize safety. The findings reveal a trade-off between prediction accuracy and constraint compliance, highlighting the nuanced decision-making process in model training. This study offers insights into advanced predictive modeling for dynamic temporal forecasting, particularly in railway maintenance, with potential applications in other domains.


[53] 2410.14802

Implicit Regularization of Sharpness-Aware Minimization for Scale-Invariant Problems

Sharpness-aware minimization (SAM) improves generalization of various deep learning tasks. Motivated by popular architectures such as LoRA, we explore the implicit regularization of SAM for scale-invariant problems involving two groups of variables. Instead of focusing on commonly used sharpness, this work introduces a concept termed balancedness, defined as the difference between the squared norms of the two variables. This allows us to depict richer global behaviors of SAM. In particular, our theoretical and empirical findings reveal that i) SAM promotes balancedness; and ii) the regularization on balancedness is data-responsive -- outliers have stronger impact. The latter coincides with empirical observations that SAM outperforms SGD in the presence of outliers. Leveraging the implicit regularization, we develop a resource-efficient SAM variant, balancedness-aware regularization (BAR), tailored for scale-invariant problems such as finetuning language models with LoRA. BAR saves 95% of the computational overhead of SAM, with enhanced test performance across various tasks on RoBERTa, GPT2, and OPT-1.3B.


[54] 2410.14812

Isolated Causal Effects of Natural Language

As language technologies become widespread, it is important to understand how variations in language affect reader perceptions -- formalized as the isolated causal effect of some focal language-encoded intervention on an external outcome. A core challenge of estimating isolated effects is the need to approximate all non-focal language outside of the intervention. In this paper, we introduce a formal estimation framework for isolated causal effects and explore how different approximations of non-focal language impact effect estimates. Drawing on the principle of omitted variable bias, we present metrics for evaluating the quality of isolated effect estimation and non-focal language approximation along the axes of fidelity and overlap. In experiments on semi-synthetic and real-world data, we validate the ability of our framework to recover ground truth isolated effects, and we demonstrate the utility of our proposed metrics as measures of quality for both isolated effect estimates and non-focal language approximations.


[55] 2410.14838

Rank Suggestion in Non-negative Matrix Factorization: Residual Sensitivity to Initial Conditions (RSIC)

Determining the appropriate rank in Non-negative Matrix Factorization (NMF) is a critical challenge that often requires extensive parameter tuning and domain-specific knowledge. Traditional methods for rank determination focus on identifying a single optimal rank, which may not capture the complex structure inherent in real-world datasets. In this study, we introduce a novel approach called Residual Sensitivity to Initial Conditions (RSIC) that suggests potentially multiple ranks of interest by analyzing the sensitivity of the relative residuals (e.g., relative reconstruction error) to different initializations. By computing the Mean Coordinatewise Interquartile Range (MCI) of the residuals across multiple random initializations, our method identifies regions where the NMF solutions are less sensitive to initial conditions and potentially more meaningful. We evaluate RSIC on a diverse set of datasets, including single-cell gene expression data, image data, and text data, and compare it against existing state-of-the-art rank determination methods. Our experiments demonstrate that RSIC effectively identifies relevant ranks consistent with the underlying structure of the data, outperforming traditional methods in scenarios where they are computationally infeasible or less accurate. This approach provides a more scalable and generalizable solution for rank determination in NMF that does not rely on domain-specific knowledge or assumptions.
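The core computation can be sketched as follows; this simplified illustration uses a single interquartile range of relative residuals rather than the paper's coordinatewise MCI, and the data and candidate ranks are synthetic.

    import numpy as np
    from sklearn.decomposition import NMF

    def residual_spread(X, rank, n_inits=10):
        # interquartile range of relative reconstruction errors across random
        # initializations; small spread indicates ranks that are insensitive to
        # the starting point and hence candidate ranks of interest
        errors = []
        for seed in range(n_inits):
            model = NMF(n_components=rank, init="random", random_state=seed, max_iter=500)
            W = model.fit_transform(X)
            H = model.components_
            errors.append(np.linalg.norm(X - W @ H) / np.linalg.norm(X))
        q75, q25 = np.percentile(errors, [75, 25])
        return q75 - q25

    X = np.abs(np.random.default_rng(0).normal(size=(100, 40)))
    for rank in (2, 5, 10):
        print(rank, residual_spread(X, rank))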


[56] 2410.14871

Learning the Effect of Persuasion via Difference-In-Differences

The persuasion rate is a key parameter for measuring the causal effect of a directional message on influencing the recipient's behavior. Its identification analysis has largely relied on the availability of credible instruments, but the requirement is not always satisfied in observational studies. Therefore, we develop a framework for identifying, estimating, and conducting inference for the average persuasion rates on the treated using a difference-in-differences approach. The average treatment effect on the treated is a standard parameter with difference-in-differences, but it underestimates the persuasion rate in our setting. Our estimation and inference methods include regression-based approaches and semiparametrically efficient estimators. Beginning with the canonical two-period case, we extend the framework to staggered treatment settings, where we show how to conduct rich analyses like the event-study design. We revisit previous studies of the British election and the Chinese curriculum reform to illustrate the usefulness of our methodology.


[57] 2410.14885

Beyond Discretization: Learning the Optimal Solution Path

Many applications require minimizing a family of optimization problems indexed by some hyperparameter $\lambda \in \Lambda$ to obtain an entire solution path. Traditional approaches proceed by discretizing $\Lambda$ and solving a series of optimization problems. We propose an alternative approach that parameterizes the solution path with a set of basis functions and solves a \emph{single} stochastic optimization problem to learn the entire solution path. Our method offers substantial complexity improvements over discretization. When using constant-step size SGD, the uniform error of our learned solution path relative to the true path exhibits linear convergence to a constant related to the expressiveness of the basis. When the true solution path lies in the span of the basis, this constant is zero. We also prove stronger results for special cases common in machine learning: When $\lambda \in [-1, 1]$ and the solution path is $\nu$-times differentiable, constant step-size SGD learns a path with $\epsilon$ uniform error after at most $O(\epsilon^{\frac{1}{1-\nu}} \log(1/\epsilon))$ iterations, and when the solution path is analytic, it only requires $O\left(\log^2(1/\epsilon)\log\log(1/\epsilon)\right)$. By comparison, the best-known discretization schemes in these settings require at least $O(\epsilon^{-1/2})$ discretization points (and even more gradient calls). Finally, we propose an adaptive variant of our method that sequentially adds basis functions and demonstrates strong numerical performance through experiments.
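A minimal sketch of the idea for a ridge-regression path, assuming a polynomial basis and $\lambda \in [0,1]$; every choice here (basis, step size, data) is illustrative rather than the paper's algorithm.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, k = 200, 5, 4                               # k basis functions
    X = rng.normal(size=(n, d))
    y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

    phi = lambda lam: np.array([lam**j for j in range(k)])  # polynomial basis in lambda
    C = np.zeros((d, k))                               # solution path: theta(lam) = C @ phi(lam)

    step = 0.05
    for _ in range(20_000):
        lam = rng.uniform(0.0, 1.0)                    # stochasticity comes from sampling lambda
        theta = C @ phi(lam)
        grad_theta = X.T @ (X @ theta - y) / n + lam * theta   # gradient of the ridge objective
        C -= step * np.outer(grad_theta, phi(lam))     # chain rule through theta(lam) = C @ phi(lam)

    # compare the learned path with the exact ridge solution at one value of lambda
    lam = 0.3
    exact = np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)
    print(np.linalg.norm(C @ phi(lam) - exact))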


[58] 2410.14949

Straightness of Rectified Flow: A Theoretical Insight into Wasserstein Convergence

Diffusion models have emerged as a powerful tool for image generation and denoising. Typically, generative models learn a trajectory between the starting noise distribution and the target data distribution. Recently Liu et al. (2023b) designed a novel alternative generative model Rectified Flow (RF), which aims to learn straight flow trajectories from noise to data using a sequence of convex optimization problems with close ties to optimal transport. If the trajectory is curved, one must use many Euler discretization steps or novel strategies, such as exponential integrators, to achieve a satisfactory generation quality. In contrast, RF has been shown to theoretically straighten the trajectory through successive rectifications, reducing the number of function evaluations (NFEs) while sampling. It has also been shown empirically that RF may improve the straightness in two rectifications if one can solve the underlying optimization problem within a sufficiently small error. In this paper, we make two key theoretical contributions: 1) we provide the first theoretical analysis of the Wasserstein distance between the sampling distribution of RF and the target distribution. Our error rate is characterized by the number of discretization steps and a new formulation of straightness stronger than that in the original work. 2) In line with the previous empirical findings, we show that, for a rectified flow from a Gaussian to a mixture of two Gaussians, two rectifications are sufficient to achieve a straight flow. Additionally, we also present empirical results on both simulated and real datasets to validate our theoretical findings.


[59] 2410.15001

Faster Inference Time for GNNs using coarsening

Graph Neural Networks (GNNs) have shown remarkable success in various graph-based tasks, including node classification, node regression, graph classification, and graph regression. However, their scalability remains a significant challenge, particularly when dealing with large-scale graphs. To tackle this challenge, coarsening-based methods are used to reduce the graph into a smaller one, resulting in faster computation. However, no previous research has tackled the computational cost during inference. This motivated us to ask whether the training-time benefits of coarsening-based approaches can be extended to inference time. This paper presents a novel approach to improve the scalability of GNNs through subgraph-based techniques. We reduce the computational burden during the training and inference phases by using the coarsening algorithm to partition large graphs into smaller, manageable subgraphs. Previously, graph-level tasks had not been explored using this approach. We propose a novel approach for using the coarsening algorithm for graph-level tasks such as graph classification and graph regression. We conduct extensive experiments on multiple benchmark datasets to evaluate the performance of our approach. The results demonstrate that our subgraph-based GNN method achieves competitive results in node classification, node regression, graph classification, and graph regression tasks compared to traditional GNN models. Furthermore, our approach significantly reduces the inference time, enabling the practical application of GNNs to large-scale graphs.


[60] 2410.15018

Latency correction in sparse neuronal spike trains with overlapping global events

Background: In Kreuz et al., J Neurosci Methods 381, 109703 (2022) two methods were proposed that perform latency correction, i.e., optimize the spike time alignment of sparse neuronal spike trains with well defined global spiking events. The first one, based on direct shifts, is fast but uses only partial latency information, while the other one makes use of the full information but relies on the computationally costly simulated annealing. Both methods reach their limits and can become unreliable when successive global events are not sufficiently separated or even overlap. New Method: Here we propose an iterative scheme that combines the advantages of the two original methods by using in each step as much of the latency information as possible and by employing a very fast extrapolation direct shift method instead of the much slower simulated annealing. Results: We illustrate the effectiveness and the improved performance, measured in terms of the relative shift error, of the new iterative scheme not only on simulated data with known ground truths but also on single-unit recordings from two medial superior olive neurons of a gerbil. Comparison with Existing Method(s): The iterative scheme outperforms the existing approaches on both the simulated and the experimental data. Due to its low computational demands, and in contrast to simulated annealing, it can also be applied to very large datasets. Conclusions: The new method generalizes and improves on the original method both in terms of accuracy and speed. Importantly, it is the only method that allows one to disentangle global events with overlap.


[61] 2410.15117

Accelerating k-Means Clustering with Cover Trees

The k-means clustering algorithm is a popular method that partitions data into k clusters. There are many improvements to accelerate the standard algorithm. Most current research employs upper and lower bounds on point-to-cluster distances and the triangle inequality to reduce the number of distance computations, with only arrays as underlying data structures. These approaches cannot exploit that nearby points are likely assigned to the same cluster. We propose a new k-means algorithm based on the cover tree index, which has relatively low overhead and performs well over a wider parameter range than previous approaches based on the k-d tree. By combining this with upper and lower bounds, as in state-of-the-art approaches, we obtain a hybrid algorithm that combines the benefits of tree aggregation and bounds-based filtering.
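A full cover tree is beyond a short sketch, but the bounds-based filtering referenced above reduces to a simple triangle-inequality test; the assignment step below is an illustrative simplification, not the paper's algorithm.

    import numpy as np

    def assign_with_pruning(X, centers):
        # nearest-center assignment; a center c_j is skipped when
        # d(c_best, c_j) >= 2 * d(x, c_best), since the triangle inequality then
        # gives d(x, c_j) >= d(c_best, c_j) - d(x, c_best) >= d(x, c_best)
        center_dists = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
        labels = np.empty(len(X), dtype=int)
        pruned = 0
        for i, x in enumerate(X):
            best, d_best = 0, np.linalg.norm(x - centers[0])
            for j in range(1, len(centers)):
                if center_dists[best, j] >= 2.0 * d_best:
                    pruned += 1
                    continue                       # skipped without a point-to-center distance
                d = np.linalg.norm(x - centers[j])
                if d < d_best:
                    best, d_best = j, d
            labels[i] = best
        return labels, pruned

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 2)) + 4.0 * rng.integers(0, 5, size=(1000, 1))
    centers = X[rng.choice(len(X), size=5, replace=False)]
    labels, pruned = assign_with_pruning(X, centers)
    print(labels[:10], "pruned distance computations:", pruned)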


[62] 2410.15233

A Semidefinite Relaxation Approach for Fair Graph Clustering

Fair graph clustering is crucial for ensuring equitable representation and treatment of diverse communities in network analysis. Traditional methods often ignore disparities among social, economic, and demographic groups, perpetuating biased outcomes and reinforcing inequalities. This study introduces fair graph clustering within the framework of the disparate impact doctrine, treating it as a joint optimization problem integrating clustering quality and fairness constraints. Given the NP-hard nature of this problem, we employ a semidefinite relaxation approach to approximate the underlying optimization problem. For graphs up to medium size, we utilize a singular value decomposition-based algorithm, while for larger graphs, we propose a novel algorithm based on the alternating direction method of multipliers. Unlike existing methods, our formulation allows for tuning the trade-off between clustering quality and fairness. Experimental results on graphs generated from the standard stochastic block model demonstrate the superiority of our approach in achieving an optimal accuracy-fairness trade-off compared to state-of-the-art methods.


[63] 2410.15239

Conditional Prediction ROC Bands for Graph Classification

Graph classification in medical imaging and drug discovery requires accuracy and robust uncertainty quantification. To address this need, we introduce Conditional Prediction ROC (CP-ROC) bands, offering uncertainty quantification for ROC curves and robustness to distributional shifts in test data. Although developed for Tensorized Graph Neural Networks (TGNNs), CP-ROC is adaptable to general Graph Neural Networks (GNNs) and other machine learning models. We establish statistically guaranteed coverage for CP-ROC under a local exchangeability condition. This addresses uncertainty challenges for ROC curves in non-i.i.d. settings, ensuring reliability when test graph distributions differ from training data. Empirically, to establish local exchangeability for TGNNs, we introduce a data-driven approach to construct local calibration sets for graphs. Comprehensive evaluations show that CP-ROC significantly improves prediction reliability across diverse tasks. This method enhances uncertainty quantification efficiency and reliability for ROC curves, proving valuable for real-world applications with non-i.i.d. data.


[64] 2410.15241

Conditional Uncertainty Quantification for Tensorized Topological Neural Networks

Graph Neural Networks (GNNs) have become the de facto standard for analyzing graph-structured data, leveraging message-passing techniques to capture both structural and node feature information. However, recent studies have raised concerns about the statistical reliability of uncertainty estimates produced by GNNs. This paper addresses this crucial challenge by introducing a novel technique for quantifying uncertainty in non-exchangeable graph-structured data, while simultaneously reducing the size of label prediction sets in graph classification tasks. We propose Conformalized Tensor-based Topological Neural Networks (CF-T2NN), a new approach for rigorous prediction inference over graphs. CF-T2NN employs tensor decomposition and topological knowledge learning to navigate and interpret the inherent uncertainty in decision-making processes. This method enables a more nuanced understanding and handling of prediction uncertainties, enhancing the reliability and interpretability of neural network outcomes. Our empirical validation, conducted across 10 real-world datasets, demonstrates the superiority of CF-T2NN over a wide array of state-of-the-art methods on various graph benchmarks. This work not only enhances the GNN framework with robust uncertainty quantification capabilities but also sets a new standard for reliability and precision in graph-structured data analysis.


[65] 2410.15244

Extensions on low-complexity DCT approximations for larger blocklengths based on minimal angle similarity

The discrete cosine transform (DCT) is a central tool for image and video coding because it can be related to the Karhunen-Lo\`eve transform (KLT), which is the optimal transform in terms of retained transform coefficients and data decorrelation. In this paper, we introduce 16-, 32-, and 64-point low-complexity DCT approximations by minimizing individually the angle between the rows of the exact DCT matrix and the matrix induced by the approximate transforms. According to some classical figures of merit, the proposed transforms outperformed the approximations for the DCT already known in the literature. Fast algorithms were also developed for the low-complexity transforms, striking a good balance between performance and computational cost. Practical applications in image encoding showed the relevance of the transforms in this context. In fact, the experiments showed that the proposed transforms had better results than the known approximations in the literature for blocklengths of 16, 32, and 64.
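The figure of merit behind the construction, the angle between corresponding rows of the exact DCT matrix and an approximation, can be computed as below; the candidate matrix here is just the sign pattern of the DCT, an illustrative stand-in rather than one of the proposed transforms.

    import numpy as np
    from scipy.fft import dct

    N = 16
    D = dct(np.eye(N), type=2, norm="ortho", axis=0)  # rows are the exact DCT-II basis vectors
    T = np.sign(D)                                    # illustrative low-complexity candidate (signs only)

    def row_angles(A, B):
        # angle (in radians) between corresponding rows of two matrices
        cos = np.sum(A * B, axis=1) / (np.linalg.norm(A, axis=1) * np.linalg.norm(B, axis=1))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    print(row_angles(D, T))  # per-row angles that the minimal-angle criterion seeks to reduce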


[66] 2410.15259

Optimizing adaptive sampling via Policy Ranking

Efficient sampling in biomolecular simulations is critical for accurately capturing the complex dynamical behaviors of biological systems. Adaptive sampling techniques aim to improve efficiency by focusing computational resources on the most relevant regions of phase space. In this work, we present a framework for identifying the optimal sampling policy through metric-driven ranking. Our approach systematically evaluates the policy ensemble and ranks the policies based on their ability to explore the conformational space effectively. Through a series of biomolecular simulation case studies, we demonstrate that choosing a different adaptive sampling policy at each round significantly outperforms single-policy sampling, leading to faster convergence and improved sampling performance. This approach takes an ensemble of adaptive sampling policies and identifies the optimal policy for the next round based on current data. Beyond presenting this ensemble view of adaptive sampling, we also propose two sampling algorithms that approximate this ranking framework on the fly. The modularity of this framework allows incorporation of any adaptive sampling policy, making it versatile and suitable as a comprehensive adaptive sampling scheme.


[67] 2410.15280

Neural Normalized Compression Distance and the Disconnect Between Compression and Classification

It is generally well understood that predictive classification and compression are intrinsically related concepts in information theory. Indeed, many deep learning methods are explained as learning a kind of compression, and that better compression leads to better performance. We interrogate this hypothesis via the Normalized Compression Distance (NCD), which explicitly relies on compression as the means of measuring similarity between sequences and thus enables nearest-neighbor classification. By turning popular large language models (LLMs) into lossless compressors, we develop a Neural NCD and compare LLMs to classic general-purpose algorithms like gzip. In doing so, we find that classification accuracy is not predictable by compression rate alone, among other empirical aberrations not predicted by current understanding. Our results imply that our intuition on what it means for a neural network to ``compress'' and what is needed for effective classification are not yet well understood.
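For reference, the Normalized Compression Distance underlying the nearest-neighbor classifier, sketched here with zlib in place of an LLM-based compressor (the strings are illustrative):

    import zlib

    def ncd(x: bytes, y: bytes, level: int = 9) -> float:
        # NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
        # where C(.) is the compressed length under a chosen compressor
        cx = len(zlib.compress(x, level))
        cy = len(zlib.compress(y, level))
        cxy = len(zlib.compress(x + y, level))
        return (cxy - min(cx, cy)) / max(cx, cy)

    a = b"the cat sat on the mat " * 20
    b = b"the cat sat on the hat " * 20
    c = b"completely unrelated bytes 0123456789 " * 20
    print(ncd(a, b), ncd(a, c))  # the similar pair gets the smaller distance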


[68] 2410.15310

On Cold Posteriors of Probabilistic Neural Networks: Understanding the Cold Posterior Effect and A New Way to Learn Cold Posteriors with Tight Generalization Guarantees

Bayesian inference provides a principled probabilistic framework for quantifying uncertainty by updating beliefs based on prior knowledge and observed data through Bayes' theorem. In Bayesian deep learning, neural network weights are treated as random variables with prior distributions, allowing for a probabilistic interpretation and quantification of predictive uncertainty. However, Bayesian methods lack theoretical generalization guarantees for unseen data. PAC-Bayesian analysis addresses this limitation by offering a frequentist framework to derive generalization bounds for randomized predictors, thereby certifying the reliability of Bayesian methods in machine learning. Temperature $T$, or inverse-temperature $\lambda = \frac{1}{T}$, originally from statistical mechanics in physics, naturally arises in various areas of statistical inference, including Bayesian inference and PAC-Bayesian analysis. In Bayesian inference, when $T < 1$ (``cold'' posteriors), the likelihood is up-weighted, resulting in a sharper posterior distribution. Conversely, when $T > 1$ (``warm'' posteriors), the likelihood is down-weighted, leading to a more diffuse posterior distribution. By balancing the influence of observed data and prior regularization, temperature adjustments can address issues of underfitting or overfitting in Bayesian models, bringing improved predictive performance.
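For reference, one standard convention consistent with the description above is the likelihood-tempered posterior $$\pi_{T}(\theta \mid \mathcal{D}) \;\propto\; p(\mathcal{D} \mid \theta)^{1/T}\, p(\theta),$$ so that $T<1$ (equivalently $\lambda = 1/T > 1$) up-weights the likelihood and sharpens the posterior, while $T>1$ down-weights it and yields a more diffuse posterior.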


[69] 2410.15319

Causality for Large Language Models

Recent breakthroughs in artificial intelligence have driven a paradigm shift, where large language models (LLMs) with billions or trillions of parameters are trained on vast datasets, achieving unprecedented success across a series of language tasks. However, despite these successes, LLMs still rely on probabilistic modeling, which often captures spurious correlations rooted in linguistic patterns and social stereotypes, rather than the true causal relationships between entities and events. This limitation renders LLMs vulnerable to issues such as demographic biases, social stereotypes, and LLM hallucinations. These challenges highlight the urgent need to integrate causality into LLMs, moving beyond correlation-driven paradigms to build more reliable and ethically aligned AI systems. While many existing surveys and studies focus on utilizing prompt engineering to activate LLMs for causal knowledge or developing benchmarks to assess their causal reasoning abilities, most of these efforts rely on human intervention to activate pre-trained models. How to embed causality into the training process of LLMs and build more general and intelligent models remains unexplored. Recent research highlights that LLMs function as causal parrots, capable of reciting causal knowledge without truly understanding or applying it. These prompt-based methods remain limited by their reliance on human intervention. This survey aims to address this gap by exploring how causality can enhance LLMs at every stage of their lifecycle, from token embedding learning and foundation model training to fine-tuning, alignment, inference, and evaluation, paving the way for more interpretable, reliable, and causally informed models. Additionally, we further outline six promising future directions to advance LLM development, enhance their causal reasoning capabilities, and address the current limitations these models face.


[70] 2410.15368

Tighter Performance Theory of FedExProx

We revisit FedExProx - a recently proposed distributed optimization method designed to enhance convergence properties of parallel proximal algorithms via extrapolation. In the process, we uncover a surprising flaw: its known theoretical guarantees on quadratic optimization tasks are no better than those offered by the vanilla Gradient Descent (GD) method. Motivated by this observation, we develop a novel analysis framework, establishing a tighter linear convergence rate for non-strongly convex quadratic problems. By incorporating both computation and communication costs, we demonstrate that FedExProx can indeed provably outperform GD, in stark contrast to the original analysis. Furthermore, we consider partial participation scenarios and analyze two adaptive extrapolation strategies - based on gradient diversity and Polyak stepsizes - again significantly outperforming previous results. Moving beyond quadratics, we extend the applicability of our analysis to general functions satisfying the Polyak-Lojasiewicz condition, outperforming the previous strongly convex analysis while operating under weaker assumptions. Backed by empirical results, our findings point to a new and stronger potential of FedExProx, paving the way for further exploration of the benefits of extrapolation in federated learning.


[71] 2410.15433

Discriminating image representations with principal distortions

Image representations (artificial or biological) are often compared in terms of their global geometry; however, representations with similar global structure can have strikingly different local geometries. Here, we propose a framework for comparing a set of image representations in terms of their local geometries. We quantify the local geometry of a representation using the Fisher information matrix, a standard statistical tool for characterizing the sensitivity to local stimulus distortions, and use this as a substrate for a metric on the local geometry in the vicinity of a base image. This metric may then be used to optimally differentiate a set of models, by finding a pair of "principal distortions" that maximize the variance of the models under this metric. We use this framework to compare a set of simple models of the early visual system, identifying a novel set of image distortions that allow immediate comparison of the models by visual inspection. In a second example, we apply our method to a set of deep neural network models and reveal differences in the local geometry that arise due to architecture and training types. These examples highlight how our framework can be used to probe for informative differences in local sensitivities between complex computational models, and suggest how it could be used to compare model representations with human perception.


[72] 2410.15473

Bayesian data fusion for distributed learning

One of the main challenges of federated learning (FL) is handling non-independent and identically distributed (non-IID) client data, which may occur in practice due to unbalanced datasets and use of different data sources across clients. Knowledge sharing and model personalization are key strategies for addressing this issue. Clustered federated learning is a class of FL methods that groups clients that observe similarly distributed data into clusters, such that every client is typically associated with one data distribution and participates in training a model for that distribution along with their cluster peers. In this paper, we present a unified Bayesian framework for clustered FL which associates clients with clusters. Then we propose several practical algorithms to handle the otherwise growing data associations in a way that trades off performance and computational complexity. This work provides insights on client-cluster associations and enables client knowledge sharing in new ways. The proposed framework circumvents the need for unique client-cluster associations, which is seen to increase the performance of the resulting models in a variety of experiments.


[73] 2410.15491

Structural Causality-based Generalizable Concept Discovery Models

The rising need for explainable deep neural network architectures has utilized semantic concepts as explainable units. Several approaches utilizing disentangled representation learning estimate the generative factors and utilize them as concepts for explaining DNNs. However, even though the generative factors for a dataset remain fixed, concepts are not fixed entities and vary based on downstream tasks. In this paper, we propose a disentanglement mechanism utilizing a variational autoencoder (VAE) for learning mutually independent generative factors for a given dataset and subsequently learning task-specific concepts using a structural causal model (SCM). Our method assumes generative factors and concepts to form a bipartite graph, with directed causal edges from generative factors to concepts. Experiments are conducted on datasets with known generative factors: D-sprites and Shapes3D. On specific downstream tasks, our proposed method successfully learns task-specific concepts which are explained well by the causal edges from the generative factors. Lastly, separate from current causal concept discovery methods, our methodology is generalizable to an arbitrary number of concepts and flexible to any downstream tasks.


[74] 2410.15543

Distributed Thompson sampling under constrained communication

In Bayesian optimization, a black-box function is maximized via the use of a surrogate model. We apply distributed Thompson sampling, using a Gaussian process as a surrogate model, to approach the multi-agent Bayesian optimization problem. In our distributed Thompson sampling implementation, each agent receives sampled points from neighbors, where the communication network is encoded in a graph; each agent utilizes a Gaussian process to model the objective function. We demonstrate a theoretical bound on Bayesian Simple Regret, where the bound depends on the size of the largest complete subgraph of the communication graph. Unlike in batch Bayesian optimization, this bound is applicable in cases where the communication graph amongst agents is constrained. When compared to sequential Thompson sampling, our bound guarantees faster convergence with respect to time as long as there is a fully connected subgraph of at least two agents. We confirm the efficacy of our algorithm with numerical simulations on traditional optimization test functions, illustrating the significance of graph connectivity on improving regret convergence.
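For concreteness, a single-agent sketch of the Thompson sampling step with a Gaussian process surrogate; the distributed variant additionally shares sampled points along the communication graph, and the objective, kernel, and candidate grid below are illustrative.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def thompson_step(X_obs, y_obs, candidates, seed=0):
        # fit the GP surrogate to the data observed so far, draw one posterior
        # sample over the candidate points, and propose its argmax as the next query
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-6)
        gp.fit(X_obs, y_obs)
        sample = gp.sample_y(candidates, n_samples=1, random_state=seed).ravel()
        return candidates[np.argmax(sample)]

    objective = lambda x: np.sin(5 * x) * x           # illustrative black-box function
    rng = np.random.default_rng(0)
    X_obs = rng.uniform(0, 1, size=(5, 1))
    y_obs = objective(X_obs).ravel()
    candidates = np.linspace(0, 1, 200).reshape(-1, 1)
    print(thompson_step(X_obs, y_obs, candidates))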


[75] 2410.15555

Bayesian Concept Bottleneck Models with LLM Priors

Concept Bottleneck Models (CBMs) have been proposed as a compromise between white-box and black-box models, aiming to achieve interpretability without sacrificing accuracy. The standard training procedure for CBMs is to predefine a candidate set of human-interpretable concepts, extract their values from the training data, and identify a sparse subset as inputs to a transparent prediction model. However, such approaches are often hampered by the tradeoff between enumerating a sufficiently large set of concepts to include those that are truly relevant versus controlling the cost of obtaining concept extractions. This work investigates a novel approach that sidesteps these challenges: BC-LLM iteratively searches over a potentially infinite set of concepts within a Bayesian framework, in which Large Language Models (LLMs) serve as both a concept extraction mechanism and prior. BC-LLM is broadly applicable and multi-modal. Despite imperfections in LLMs, we prove that BC-LLM can provide rigorous statistical inference and uncertainty quantification. In experiments, it outperforms comparator methods including black-box models, converges more rapidly towards relevant concepts and away from spuriously correlated ones, and is more robust to out-of-distribution samples.


[76] 2410.15557

How to Find the Exact Pareto Front for Multi-Objective MDPs?

Multi-objective Markov Decision Processes (MDPs) are receiving increasing attention, as real-world decision-making problems often involve conflicting objectives that cannot be addressed by a single-objective MDP. The Pareto front identifies the set of policies that cannot be dominated, providing a foundation for finding optimal solutions that can efficiently adapt to various preferences. However, finding the Pareto front is a highly challenging problem. Most existing methods either (i) rely on traversing the continuous preference space, which is impractical and results in approximations that are difficult to evaluate against the true Pareto front, or (ii) focus solely on deterministic Pareto optimal policies, from which there are no known techniques to characterize the full Pareto front. Moreover, the structure of the Pareto front itself remains unclear even in the context of dynamic programming. This work addresses the challenge of efficiently discovering the Pareto front. By investigating the geometric structure of the Pareto front in MO-MDP, we uncover a key property: the Pareto front is on the boundary of a convex polytope whose vertices all correspond to deterministic policies, and neighboring vertices of the Pareto front differ by only one state-action pair of the deterministic policy, almost surely. This insight transforms the global comparison across all policies into a localized search among deterministic policies that differ by only one state-action pair, drastically reducing the complexity of searching for the exact Pareto front. We develop an efficient algorithm that identifies the vertices of the Pareto front by solving a single-objective MDP only once and then traversing the edges of the Pareto front, making it more efficient than existing methods. Our empirical studies demonstrate the effectiveness of our theoretical strategy in discovering the Pareto front.


[77] 2410.15564

Reward Maximization for Pure Exploration: Minimax Optimal Good Arm Identification for Nonparametric Multi-Armed Bandits

In multi-armed bandits, the tasks of reward maximization and pure exploration are often at odds with each other. The former focuses on exploiting arms with the highest means, while the latter may require constant exploration across all arms. In this work, we focus on good arm identification (GAI), a practical bandit inference objective that aims to label arms with means above a threshold as quickly as possible. We show that GAI can be efficiently solved by combining a reward-maximizing sampling algorithm with a novel nonparametric anytime-valid sequential test for labeling arm means. We first establish that our sequential test maintains error control under highly nonparametric assumptions and asymptotically achieves the minimax optimal e-power, a notion of power for anytime-valid tests. Next, by pairing regret-minimizing sampling schemes with our sequential test, we provide an approach that achieves minimax optimal stopping times for labeling arms with means above a threshold, under an error probability constraint. Our empirical results validate our approach beyond the minimax setting, reducing the expected number of samples for all stopping times by at least 50% across both synthetic and real-world settings.


[78] 2410.15634

Distributionally Robust Instrumental Variables Estimation

Instrumental variables (IV) estimation is a fundamental method in econometrics and statistics for estimating causal effects in the presence of unobserved confounding. However, challenges such as untestable model assumptions and poor finite sample properties have undermined its reliability in practice. Viewing common issues in IV estimation as distributional uncertainties, we propose DRIVE, a distributionally robust framework of the classical IV estimation method. When the ambiguity set is based on a Wasserstein distance, DRIVE minimizes a square root ridge regularized variant of the two stage least squares (TSLS) objective. We develop a novel asymptotic theory for this regularized regression estimator based on the square root ridge, showing that it achieves consistency without requiring the regularization parameter to vanish. This result follows from a fundamental property of the square root ridge, which we call ``delayed shrinkage''. This novel property, which also holds for a class of generalized method of moments (GMM) estimators, ensures that the estimator is robust to distributional uncertainties that persist in large samples. We further derive the asymptotic distribution of Wasserstein DRIVE and propose data-driven procedures to select the regularization parameter based on theoretical results. Simulation studies confirm the superior finite sample performance of Wasserstein DRIVE. Thanks to its regularization and robustness properties, Wasserstein DRIVE could be preferable in practice, particularly when the practitioner is uncertain about model assumptions or distributional shifts in data.


[79] 2410.15648

Linking Model Intervention to Causal Interpretation in Model Explanation

Intervention intuition is often used in model explanation where the intervention effect of a feature on the outcome is quantified by the difference of a model prediction when the feature value is changed from the current value to the baseline value. Such a model intervention effect of a feature is inherently associational. In this paper, we study the conditions under which an intuitive model intervention effect has a causal interpretation, i.e., when it indicates whether a feature is a direct cause of the outcome. This work links the model intervention effect to the causal interpretation of a model. Such an interpretation capability is important since it indicates whether a machine learning model is trustworthy to domain experts. The conditions also reveal the limitations of using a model intervention effect for causal interpretation in an environment with unobserved features. Experiments on semi-synthetic datasets have been conducted to validate the theorems and show the potential of using the model intervention effect for model interpretation.


[80] 2410.15652

A note on the sparse Hanson-Wright inequality

We obtain Hanson-Wright inequalities for the quadratic form of a random vector with independent sparse random variables. Specifically, we consider cases where the components of the random vector are sparse $\alpha$-sub-exponential random variables with $\alpha>0$. Our proof relies on a novel combinatorial approach to estimate the moments of the random quadratic form.
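For context, the classical Hanson-Wright inequality for a random vector $x$ with independent, mean-zero, $K$-sub-Gaussian coordinates states that for all $t \ge 0$, $$\mathbb{P}\left(\left|x^{\top} A x - \mathbb{E}\, x^{\top} A x\right| > t\right) \;\le\; 2\exp\left(-c\,\min\left(\frac{t^{2}}{K^{4}\|A\|_{F}^{2}},\; \frac{t}{K^{2}\|A\|}\right)\right);$$ the note above derives analogous bounds when the coordinates are instead sparse $\alpha$-sub-exponential variables.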


[81] 2410.15655

Accounting for Missing Covariates in Heterogeneous Treatment Estimation

Many applications of causal inference require using treatment effects estimated on a study population to make decisions in a separate target population. We consider the challenging setting where there are covariates that are observed in the target population that were not seen in the original study. Our goal is to estimate the tightest possible bounds on heterogeneous treatment effects conditioned on such newly observed covariates. We introduce a novel partial identification strategy based on ideas from ecological inference; the main idea is that estimates of conditional treatment effects for the full covariate set must marginalize correctly when restricted to only the covariates observed in both populations. Furthermore, we introduce a bias-corrected estimator for these bounds and prove that it enjoys fast convergence rates and statistical guarantees (e.g., asymptotic normality). Experimental results on both real and synthetic data demonstrate that our framework can produce bounds that are much tighter than would otherwise be possible.


[82] 2410.15688

MIK: Modified Isolation Kernel for Biological Sequence Visualization, Classification, and Clustering

The t-Distributed Stochastic Neighbor Embedding (t-SNE) has emerged as a popular dimensionality reduction technique for visualizing high-dimensional data. It computes pairwise similarities between data points by default using an RBF kernel and random initialization (in low-dimensional space), which successfully captures the overall structure but may struggle to preserve the local structure efficiently. This research proposes a novel approach called the Modified Isolation Kernel (MIK) as an alternative to the Gaussian kernel, which is built upon the concept of the Isolation Kernel. MIK uses adaptive density estimation to capture local structures more accurately and integrates robustness measures. It also assigns higher similarity values to nearby points and lower values to distant points. A comparative study using the standard Gaussian kernel, the isolation kernel, and several initialization techniques, including random, PCA, and random walk initializations, is used to assess the proposed approach (MIK). Additionally, we compare the computational efficiency of all $3$ kernels with $3$ different initialization methods. Our experimental results demonstrate several advantages of the proposed kernel (MIK) and initialization method selection. It exhibits improved preservation of the local and global structure and enables better visualization of clusters and subclusters in the embedded space. These findings contribute to advancing dimensionality reduction techniques and provide researchers and practitioners with an effective tool for data exploration, visualization, and analysis in various domains.


[83] 2410.15706

Estimating Individual Dose-Response Curves under Unobserved Confounders from Observational Data

Estimating an individual's potential response to continuously varied treatments is crucial for addressing causal questions across diverse domains, from healthcare to social sciences. However, existing methods are limited either to estimating causal effects of binary treatments, or to scenarios where all confounding variables are measurable. In this work, we present ContiVAE, a novel framework for estimating causal effects of continuous treatments, measured by individual dose-response curves, considering the presence of unobserved confounders using observational data. Leveraging a variational auto-encoder with a Tilted Gaussian prior distribution, ContiVAE models the hidden confounders as latent variables, and is able to predict the potential outcome of any treatment level for each individual while effectively capturing the heterogeneity among individuals. Experiments on semi-synthetic datasets show that ContiVAE outperforms existing methods by up to 62%, demonstrating its robustness and flexibility. Application on a real-world dataset illustrates its practical utility.


[84] 2410.15734

A Kernelization-Based Approach to Nonparametric Binary Choice Models

We propose a new estimator for nonparametric binary choice models that does not impose a parametric structure on either the systematic function of covariates or the distribution of the error term. A key advantage of our approach is its computational efficiency. For instance, even when assuming a normal error distribution as in probit models, commonly used sieves for approximating an unknown function of covariates can lead to a large-dimensional optimization problem when the number of covariates is moderate. Our approach, motivated by kernel methods in machine learning, views certain reproducing kernel Hilbert spaces as special sieve spaces, coupled with spectral cut-off regularization for dimension reduction. We establish the consistency of the proposed estimator for both the systematic function of covariates and the distribution function of the error term, and asymptotic normality of the plug-in estimator for weighted average partial derivatives. Simulation studies show that, compared to parametric estimation methods, the proposed method effectively improves finite sample performance in cases of misspecification, and has a rather mild efficiency loss if the model is correctly specified. Using administrative data on the grant decisions of US asylum applications to immigration courts, along with nine case-day variables on weather and pollution, we re-examine the effect of outdoor temperature on court judges' "mood", and thus, their grant decisions.


[85] 2410.15761

Learning-to-Defer for Extractive Question Answering

Pre-trained language models have profoundly impacted the field of extractive question-answering, leveraging large-scale textual corpora to enhance contextual language understanding. Despite their success, these models struggle in complex scenarios that demand nuanced interpretation or inferential reasoning beyond immediate textual cues. Furthermore, their size poses deployment challenges on resource-constrained devices. Addressing these limitations, we introduce an adapted two-stage Learning-to-Defer mechanism that enhances decision-making by enabling selective deference to human experts or larger models without retraining language models in the context of question-answering. This approach not only maintains computational efficiency but also significantly improves model reliability and accuracy in ambiguous contexts. We establish the theoretical soundness of our methodology by proving Bayes and $(\mathcal{H}, \mathcal{R})$--consistency of our surrogate loss function, guaranteeing the optimality of the final solution. Empirical evaluations on the SQuADv2 dataset illustrate performance gains from integrating human expertise and leveraging larger models. Our results further demonstrate that deferring a minimal number of queries allows the smaller model to achieve performance comparable to their larger counterparts while preserving computing efficiency, thus broadening the applicability of pre-trained language models in diverse operational environments.


[86] 2410.15762

Solving Sparse \& High-Dimensional-Output Regression via Compression

Multi-Output Regression (MOR) has been widely used in scientific data analysis for decision-making. Unlike traditional regression models, MOR aims to simultaneously predict multiple real-valued outputs given an input. However, the increasing dimensionality of the outputs poses significant challenges regarding interpretability and computational scalability for modern MOR applications. As a first step to address these challenges, this paper proposes a Sparse \& High-dimensional-Output REgression (SHORE) model by incorporating additional sparsity requirements to improve output interpretability, and then designs a computationally efficient two-stage optimization framework capable of solving SHORE with provable accuracy via compression on outputs. Theoretically, we show that the proposed framework is computationally scalable while maintaining the same order of training loss and prediction loss before and after compression under arbitrary or relatively weak sample set conditions. Empirically, numerical results further validate the theoretical findings, showcasing the efficiency and accuracy of the proposed framework.


[87] 2410.15777

High-Fidelity Transfer of Functional Priors for Wide Bayesian Neural Networks by Learning Activations

Function-space priors in Bayesian neural networks (BNNs) provide a more intuitive approach to embedding beliefs directly into the model's output, thereby enhancing regularization, uncertainty quantification, and risk-aware decision-making. However, imposing function-space priors on BNNs is challenging. We address this task through optimization techniques that explore how trainable activations can accommodate complex priors and match intricate target function distributions. We discuss critical learning challenges, including identifiability, loss construction, and symmetries that arise in this context. Furthermore, we enable evidence maximization to facilitate model selection by conditioning the functional priors on additional hyperparameters. Our empirical findings demonstrate that even BNNs with a single wide hidden layer, when equipped with these adaptive trainable activations and conditioning strategies, can effectively achieve high-fidelity function-space priors, providing a robust and flexible framework for enhancing Bayesian neural network performance.
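
The sketch below illustrates the basic idea of fitting a trainable activation so that the prior covariance of a wide one-hidden-layer BNN over a grid matches a target GP kernel; the tanh-bump parametrization, the Monte Carlo covariance estimate with common random weights, and the Nelder-Mead search are illustrative assumptions rather than the paper's optimization scheme.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(9)

    # Target functional prior: a GP with an RBF kernel on a small input grid.
    x = np.linspace(-3, 3, 25)
    K_target = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)

    # One-hidden-layer BNN with standard normal weight priors and a trainable
    # activation parametrized as a linear combination of fixed tanh "bumps".
    m = 1000                                           # hidden width (wide regime)
    W, b = rng.normal(size=m), rng.normal(size=m)      # fixed prior samples (common randomness)
    centers = np.linspace(-4, 4, 8)

    def prior_covariance(theta):
        """Covariance of f(x) = sum_j v_j * act(W_j x + b_j) / sqrt(m), v_j ~ N(0,1)."""
        pre = W[:, None] * x[None, :] + b[:, None]           # m x len(x) pre-activations
        act = np.tanh(pre[..., None] - centers).dot(theta)   # trainable activation
        return act.T @ act / m

    def loss(theta):
        return np.sum((prior_covariance(theta) - K_target) ** 2)

    theta0 = np.zeros(len(centers)); theta0[len(centers) // 2] = 1.0
    res = minimize(loss, theta0, method="Nelder-Mead", options={"maxiter": 2000})
    print("kernel mismatch before/after:", loss(theta0), "->", res.fun)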


[88] 2410.15800

On the VC dimension of deep group convolutional neural networks

We study the generalization capabilities of Group Convolutional Neural Networks (GCNNs) with ReLU activation function by deriving upper and lower bounds for their Vapnik-Chervonenkis (VC) dimension. Specifically, we analyze how factors such as the number of layers, weights, and input dimension affect the VC dimension. We further compare the derived bounds to those known for other types of neural networks. Our findings extend previous results on the VC dimension of continuous GCNNs with two layers, thereby providing new insights into the generalization properties of GCNNs, particularly regarding the dependence on the input resolution of the data.


[89] 2410.15824

Long time behavior of semi-Markov modulated perpetuity and some related processes

Examples of stochastic processes whose state space representations involve functions of an integral-type structure $$I_{t}^{(a,b)}:=\int_{0}^{t}b(Y_{s})e^{-\int_{s}^{t}a(Y_{r})dr}ds, \quad t\ge 0$$ are studied under an ergodic semi-Markovian environment described by an $S$-valued jump-type process $Y:=(Y_{s}:s\in\mathbb{R}^{+})$ with limiting distribution $\pi\in\mathcal{P}(S)$. Under different assumptions on the sign of $E_{\pi}a(\cdot):=\sum_{j\in S}\pi_{j}a(j)$ and on the tail properties of the sojourn times of $Y$, we obtain different long-time limit results for $I^{(a,b)}:=(I^{(a,b)}_{t}:t\ge 0)$. In all cases, mixture-type laws emerge, which are naturally represented through an affine stochastic recurrence equation (SRE) $X\stackrel{d}{=}AX+B$, $X\perp\!\!\!\perp (A, B)$. Examples include explicit long-time representations of the pitchfork bifurcation and of regime-switching diffusions under semi-Markov modulated environments.
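
For intuition, the following Monte Carlo sketch simulates $I_t^{(a,b)}$ for a two-state semi-Markov environment with gamma sojourn times in the contractive case $E_{\pi}a(\cdot)>0$, using the equivalent ODE form $dI_t = (b(Y_t) - a(Y_t)I_t)\,dt$, $I_0=0$; the state space, coefficients, and sojourn laws are illustrative choices, and the affine SRE representation of the limit is not derived here.

    import numpy as np

    rng = np.random.default_rng(3)

    # Two-state environment Y with non-exponential (gamma) sojourn times, and
    # coefficients a(.) > 0 so that E_pi[a] > 0 (contractive case).
    a = {0: 0.5, 1: 1.5}
    b = {0: 1.0, 1: -0.5}

    def simulate_I(T=50.0, dt=0.01):
        """Euler scheme for dI/dt = b(Y_t) - a(Y_t) * I_t, I_0 = 0."""
        t, I, state = 0.0, 0.0, 0
        next_jump = rng.gamma(shape=2.0, scale=1.0)      # sojourn in the initial state
        while t < T:
            if t >= next_jump:
                state = 1 - state                        # alternate between the two states
                shape = 2.0 if state == 0 else 0.7       # different sojourn laws per state
                next_jump = t + rng.gamma(shape=shape, scale=1.0)
            I += (b[state] - a[state] * I) * dt
            t += dt
        return I

    samples = np.array([simulate_I() for _ in range(300)])
    print("empirical mean / std of I_T:", samples.mean(), samples.std())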


[90] 2410.15910

Diverse Policies Recovering via Pointwise Mutual Information Weighted Imitation Learning

Recovering a spectrum of diverse policies from a set of expert trajectories is an important research topic in imitation learning. After determining a latent style for a trajectory, previous diverse-policy recovery methods usually employ a vanilla behavioral cloning objective conditioned on the latent style, treating each state-action pair in the trajectory with equal importance. Based on the observation that, in many scenarios, behavioral styles are often associated with only a subset of state-action pairs, this paper presents a new principled method for diverse policy recovery. In particular, after inferring or assigning a latent style for a trajectory, we enhance vanilla behavioral cloning by incorporating a weighting mechanism based on pointwise mutual information. This weighting reflects how much each state-action pair contributes to learning the style, allowing our method to focus on the state-action pairs most representative of that style. We provide theoretical justifications for the new objective, and extensive empirical evaluations confirm the effectiveness of our method in recovering diverse policies from expert data.
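
A small discrete sketch of the weighting idea: estimate the pointwise mutual information between a latent style and each state-action pair from counts, then use the (clipped) PMI as a per-sample weight for behavioral cloning; the toy data, smoothing, and clipping are illustrative, not the paper's estimator or objective.

    import numpy as np

    rng = np.random.default_rng(4)

    # Toy expert data: trajectories with a latent style z in {0,1}; only a subset
    # of state-action pairs actually reveals the style.
    n, n_states, n_actions = 5000, 6, 3
    z = rng.integers(0, 2, size=n)
    s = rng.integers(0, n_states, size=n)
    # In "style-revealing" states (s < 2) the action equals z; elsewhere it is random.
    a = np.where(s < 2, z, rng.integers(0, n_actions, size=n))

    # Estimate pmi(s,a; z) = log p(s,a | z) - log p(s,a) from counts.
    joint = np.zeros((2, n_states, n_actions))
    for zi, si, ai in zip(z, s, a):
        joint[zi, si, ai] += 1
    p_sa_given_z = (joint + 1e-6) / (joint.sum(axis=(1, 2), keepdims=True) + 1e-6)
    p_sa = (joint.sum(0) + 1e-6) / (joint.sum() + 1e-6)
    pmi = np.log(p_sa_given_z) - np.log(p_sa)[None]

    # Weight each behavioral-cloning sample by its clipped PMI: style-revealing
    # pairs get large weights, style-irrelevant pairs get weights near zero.
    w = np.clip(pmi[z, s, a], 0.0, None)
    print("mean weight, style-revealing vs. other states:",
          w[s < 2].mean(), w[s >= 2].mean())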


[91] 2410.16012

Massimo: Public Queue Monitoring and Management using Mass-Spring Model

An efficient system for queue control and regulation in public spaces is important for avoiding congestion and improving customer satisfaction. This article offers a detailed road map, based on a merger of intelligent technologies, for creating efficient queueing systems in public places. Through the use of computer vision, machine learning algorithms, and deep learning, our system provides accurate information about whether a place is crowded and about the measures that need to be taken.


[92] 2410.16052

Near-Optimal Algorithm for Non-Stationary Kernelized Bandits

This paper studies the non-stationary kernelized bandit (KB) problem, also called time-varying Bayesian optimization, in which one seeks to minimize regret under an unknown reward function that varies over time. In particular, we focus on a near-optimal algorithm whose regret upper bound matches the regret lower bound. Toward this goal, we establish the first algorithm-independent regret lower bound for non-stationary KB with squared exponential and Mat\'ern kernels, which reveals that an existing optimization-based KB algorithm, with a slight modification, is near-optimal. However, this existing algorithm suffers from feasibility issues due to its huge computational cost. We therefore propose a novel near-optimal algorithm, restarting phased elimination with random permutation (R-PERP), which bypasses this computational cost. A key technical point is a simple permutation procedure over query candidates, which enables us to derive a novel, tighter confidence bound tailored to non-stationary problems.
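
The sketch below is a heavily simplified, finite-armed caricature of restarting phased elimination: each block forgets past data, each phase visits the active arms in a fresh random permutation, and clearly suboptimal arms are eliminated. The kernelized confidence bounds that define R-PERP are replaced by a fixed elimination width, so this conveys only the control flow.

    import numpy as np

    rng = np.random.default_rng(5)

    K, T, reps, width = 10, 2000, 5, 0.25
    base = np.sin(np.arange(K))

    def mean_reward(arm, t):
        """Slowly drifting mean rewards, unknown to the learner."""
        return base[arm] + 0.3 * np.sin(2 * np.pi * t / T)

    t = 0
    while t < T:
        # Restart: begin a new block with all arms active and no history.
        active = list(range(K))
        est, cnt = np.zeros(K), np.zeros(K)
        while len(active) > 1 and t < T:
            # One phase: visit the active arms in a fresh random permutation,
            # pulling each a fixed number of times.
            for arm in rng.permutation(active):
                for _ in range(reps):
                    if t >= T:
                        break
                    r = mean_reward(arm, t) + 0.1 * rng.normal()
                    cnt[arm] += 1
                    est[arm] += (r - est[arm]) / cnt[arm]
                    t += 1
            # Eliminate arms whose estimate falls clearly below the current best.
            best = max(est[i] for i in active)
            active = [i for i in active if est[i] >= best - width]
    print("arms surviving the final block:", active)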


[93] 2410.16058

Shorter Is Different: Characterizing the Dynamics of Short-Form Video Platforms

Short-form video platforms have grown tremendously and have recently become one of the leading forms of social media. Although their expanding popularity has attracted increasing research attention, there has been little understanding of whether and how they deviate from traditional long-form video-sharing platforms such as YouTube and Bilibili. To address this, we conduct a large-scale data-driven analysis of Kuaishou, one of the largest short-form video platforms in China. Based on 248 million videos uploaded to the platform across all categories, we identify notable differences from long-form video platforms through a comparison with Bilibili, a leading long-form video platform in China. We find that videos on Kuaishou are shorter by multiples, with distinctive categorical distributions in which life-related rather than interest-based videos are over-represented. Users interact with videos less per view, yet top videos acquire users' collective attention even more effectively. More importantly, ordinary content creators have a higher probability of producing hit videos. Our results shed light on the uniqueness of short-form video platforms and pave the way for future research on and design of a better short-form video ecosystem.


[94] 2410.16100

ExDBN: Exact learning of Dynamic Bayesian Networks

Causal learning from data has received much attention in recent years. One way of capturing causal relationships is by utilizing Bayesian networks. There, one recovers a weighted directed acyclic graph in which random variables are represented by vertices and the weight associated with each edge represents the strength of the causal relationship between them. This concept is extended to capture dynamic effects by introducing a dependency on past data, modeled via a structural equation model, which the present contribution utilizes to formulate a score-based learning approach. A mixed-integer quadratic program is formulated and an algorithmic solution proposed, in which the pre-generation of exponentially many acyclicity constraints is avoided by utilizing the so-called branch-and-cut ("lazy constraint") method. Comparing the novel approach to the state of the art, we show that it produces excellent results when applied to small and medium-sized synthetic instances of up to 25 time series. Lastly, two interesting applications in bioscience and finance, to which the method is directly applied, further underscore the opportunities in developing highly accurate, globally convergent solvers that can handle modest instances.
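
To illustrate the lazy-constraint ingredient in isolation (independent of any particular MIP solver), the snippet below takes a candidate adjacency matrix, searches for a directed cycle, and reports the corresponding "at most |C|-1 of these edges" cut that a branch-and-cut callback would add; the solver integration itself is not shown.

    import numpy as np
    import networkx as nx

    def find_violated_cycle(adj):
        """Given a candidate 0/1 adjacency matrix, return one directed cycle (as a
        list of edges) whose acyclicity cut should be added lazily, or None if the
        candidate graph is already acyclic."""
        G = nx.DiGraph(np.asarray(adj))
        try:
            cycle = nx.find_cycle(G, orientation="original")
        except nx.NetworkXNoCycle:
            return None
        return [(u, v) for u, v, _ in cycle]

    # Example: a candidate solution containing the cycle 0 -> 1 -> 2 -> 0.
    cand = np.array([[0, 1, 0],
                     [0, 0, 1],
                     [1, 0, 0]])
    edges = find_violated_cycle(cand)
    print("add lazy cut: sum of x[e] over", edges, "<=", len(edges) - 1)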


[95] 2410.16103

LDAdam: Adaptive Optimization from Low-Dimensional Gradient Statistics

We introduce LDAdam, a memory-efficient optimizer for training large models that performs adaptive optimization steps within lower-dimensional subspaces while consistently exploring the full parameter space during training. This strategy keeps the optimizer's memory footprint to a fraction of the model size. LDAdam relies on a new projection-aware update rule for the optimizer states that allows for transitioning between subspaces, i.e., estimation of the statistics of the projected gradients. To mitigate the errors due to low-rank projection, LDAdam integrates a new generalized error feedback mechanism, which explicitly accounts for both gradient and optimizer state compression. We prove the convergence of LDAdam under standard assumptions, and show that LDAdam allows for accurate and efficient fine-tuning and pre-training of language models.
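
As a rough sketch of the general recipe (low-rank projection of the gradient, Adam moments kept in the subspace, and an error-feedback buffer for what the projection discards), the toy optimizer below naively re-projects at every step and does not implement LDAdam's projection-aware transition of the optimizer states between subspaces; all hyperparameters are illustrative.

    import numpy as np

    class LowDimAdamSketch:
        """Toy low-dimensional Adam with error feedback: moments live in an
        r-dimensional subspace of the gradient space, and the projection residual
        is added back to the next gradient. Illustrative only."""

        def __init__(self, shape, rank=4, lr=1e-2, b1=0.9, b2=0.999, eps=1e-8):
            self.rank, self.lr, self.b1, self.b2, self.eps = rank, lr, b1, b2, eps
            self.m = np.zeros((shape[0], rank))   # first moment, subspace coordinates
            self.v = np.zeros((shape[0], rank))   # second moment, subspace coordinates
            self.err = np.zeros(shape)            # error-feedback buffer
            self.t = 0

        def step(self, W, grad):
            self.t += 1
            g = grad + self.err                           # re-inject the past residual
            _, _, Vt = np.linalg.svd(g, full_matrices=False)
            P = Vt[: self.rank].T                         # current rank-r projection
            g_low = g @ P                                 # projected gradient
            self.err = g - g_low @ P.T                    # what the projection lost
            self.m = self.b1 * self.m + (1 - self.b1) * g_low
            self.v = self.b2 * self.v + (1 - self.b2) * g_low ** 2
            m_hat = self.m / (1 - self.b1 ** self.t)
            v_hat = self.v / (1 - self.b2 ** self.t)
            return W - self.lr * ((m_hat / (np.sqrt(v_hat) + self.eps)) @ P.T)

    # Quick check on a linear least-squares problem.
    rng = np.random.default_rng(6)
    A, B = rng.normal(size=(50, 20)), rng.normal(size=(50, 30))
    W = np.zeros((20, 30))
    opt = LowDimAdamSketch(W.shape)
    for _ in range(500):
        grad = A.T @ (A @ W - B) / len(A)
        W = opt.step(W, grad)
    print("final loss:", 0.5 * np.mean((A @ W - B) ** 2))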


[96] 2410.16124

MNIST-Nd: a set of naturalistic datasets to benchmark clustering across dimensions

Driven by advances in recording technology, large-scale high-dimensional datasets have emerged across many scientific disciplines. Especially in biology, clustering is often used to gain insights into the structure of such datasets, for instance to understand the organization of different cell types. However, clustering is known to scale poorly to high dimensions, even though the exact impact of dimensionality is unclear as current benchmark datasets are mostly two-dimensional. Here we propose MNIST-Nd, a set of synthetic datasets that share a key property of real-world datasets, namely that individual samples are noisy and clusters do not perfectly separate. MNIST-Nd is obtained by training mixture variational autoencoders with 2 to 64 latent dimensions on MNIST, resulting in six datasets with comparable structure but varying dimensionality. It thus offers the chance to disentangle the impact of dimensionality on clustering. Preliminary benchmarks of common clustering algorithms on MNIST-Nd suggest that Leiden is the most robust as dimensionality grows.
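
The benchmark protocol can be illustrated with a generic stand-in: sweep the embedding dimension, cluster, and score against ground truth. The Gaussian-blob data and k-means below are placeholders for the MNIST-Nd datasets and the algorithms (including Leiden) compared in the paper, so the resulting trend should not be read as reproducing the paper's findings.

    import numpy as np
    from sklearn.datasets import make_blobs
    from sklearn.cluster import KMeans
    from sklearn.metrics import adjusted_rand_score

    # Same cluster structure embedded in spaces of increasing dimension, with
    # enough noise that clusters do not perfectly separate.
    for dim in (2, 4, 8, 16, 32, 64):
        X, y = make_blobs(n_samples=2000, centers=10, n_features=dim,
                          cluster_std=3.0, random_state=0)
        labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X)
        print(f"dim={dim:3d}  ARI={adjusted_rand_score(y, labels):.3f}")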


[97] 2410.16138

Theoretical Insights into Line Graph Transformation on Graph Learning

Line graph transformation has been widely studied in graph theory, where each node in a line graph corresponds to an edge in the original graph. This has inspired a series of graph neural networks (GNNs) applied to transformed line graphs, which have proven effective in various graph representation learning tasks. However, there is limited theoretical study on how line graph transformation affects the expressivity of GNN models. In this study, we focus on two types of graphs known to be challenging for the Weisfeiler-Leman (WL) test: Cai-F\"urer-Immerman (CFI) graphs and strongly regular graphs, and show that applying the line graph transformation helps exclude these challenging graph properties, thus potentially assisting the WL test in distinguishing these graphs. We empirically validate our findings by conducting a series of experiments that compare the accuracy and efficiency of graph isomorphism tests and GNNs on both line-transformed and original graphs across these graph structure types.
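
The experimental protocol can be sketched with networkx: build a pair of non-isomorphic strongly regular graphs with identical parameters (the 4x4 rook's graph and the Shrikhande graph) and compare WL hashes before and after the line graph transformation. The hash-based 1-WL check is only a simple proxy for the isomorphism tests and GNNs used in the paper.

    import itertools
    import networkx as nx

    def shrikhande_graph():
        """Shrikhande graph: Cayley graph on Z4 x Z4 with connection set
        {+-(1,0), +-(0,1), +-(1,1)}; strongly regular with parameters (16, 6, 2, 2)."""
        S = {(1, 0), (3, 0), (0, 1), (0, 3), (1, 1), (3, 3)}
        G = nx.Graph()
        for (a, b), (c, d) in itertools.combinations(
                itertools.product(range(4), repeat=2), 2):
            if ((c - a) % 4, (d - b) % 4) in S:
                G.add_edge((a, b), (c, d))
        return G

    # The 4x4 rook's graph shares the parameters (16, 6, 2, 2) but is not
    # isomorphic to the Shrikhande graph.
    rook = nx.cartesian_product(nx.complete_graph(4), nx.complete_graph(4))
    shri = shrikhande_graph()

    wl = nx.weisfeiler_lehman_graph_hash
    print("same WL hash, original graphs: ", wl(rook) == wl(shri))
    print("same WL hash, line graphs:     ",
          wl(nx.line_graph(rook)) == wl(nx.line_graph(shri)))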


[98] 2410.16195

A Trust-Region Method for Graphical Stein Variational Inference

Stein variational inference (SVI) is a sample-based approximate Bayesian inference technique that generates a sample set by jointly optimizing the samples' locations to minimize an information-theoretic measure of discrepancy with the target probability distribution. SVI thus provides a fast and significantly more sample-efficient approach to Bayesian inference than traditional (random-sampling-based) alternatives. However, the optimization techniques employed in existing SVI methods struggle to address problems in which the target distribution is high-dimensional, poorly conditioned, or non-convex, which severely limits the range of their practical applicability. In this paper, we propose a novel trust-region optimization approach for SVI that successfully addresses each of these challenges. Our method builds upon prior work in SVI by leveraging conditional independences in the target distribution (to achieve high-dimensional scaling) and second-order information (to address poor conditioning), while additionally providing an effective adaptive step control procedure, which is essential for ensuring convergence on challenging non-convex optimization problems. Experimental results show that our method achieves superior numerical performance, both in convergence rate and sample accuracy, and scales better to high-dimensional distributions than previous SVI techniques.
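
For context, the sketch below implements the plain first-order Stein variational update with an RBF kernel and the median bandwidth heuristic on a poorly conditioned Gaussian target; it is the kind of baseline the paper improves upon, not the proposed trust-region scheme.

    import numpy as np

    rng = np.random.default_rng(7)

    # Target: a zero-mean 2-D Gaussian with a poorly conditioned covariance.
    cov = np.array([[3.0, 2.4], [2.4, 2.5]])
    prec = np.linalg.inv(cov)

    def grad_log_p(X):
        """Score function of N(0, cov), evaluated row-wise."""
        return -X @ prec

    def svgd_step(X, step=0.1):
        """One plain SVGD update with an RBF kernel k(x, y) = exp(-||x - y||^2 / h)."""
        diff = X[:, None, :] - X[None, :, :]          # pairwise differences x_i - x_j
        sq = (diff ** 2).sum(-1)
        h = np.median(sq) / np.log(len(X) + 1) + 1e-12
        K = np.exp(-sq / h)
        # Repulsive term: sum_j grad_{x_j} k(x_j, x_i) = (2/h) sum_j k_ij (x_i - x_j).
        repulse = (2.0 / h) * (K[..., None] * diff).sum(axis=1)
        phi = (K @ grad_log_p(X) + repulse) / len(X)
        return X + step * phi

    X = rng.normal(size=(200, 2))                     # initial particles
    for _ in range(500):
        X = svgd_step(X)
    print("particle covariance:\n", np.cov(X.T))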


[99] 2410.16214

Asymmetries in Financial Spillovers

This paper analyzes nonlinearities in the international transmission of financial shocks originating in the US. To do so, we develop a flexible nonlinear multi-country model. Our framework is capable of producing asymmetries in the responses to financial shocks with respect to shock size and sign, and over time. We show that international reactions to US-based financial shocks are asymmetric along these dimensions. In particular, we find that adverse shocks trigger stronger declines in output, inflation, and stock markets than benign shocks. Further, we investigate time variation in the estimated dynamic effects and characterize the responsiveness of three major central banks to financial shocks.


[100] 2410.16247

Implicit Regularization for Tubal Tensor Factorizations via Gradient Descent

We provide a rigorous analysis of implicit regularization in an overparametrized tensor factorization problem beyond the lazy training regime. For matrix factorization problems, this phenomenon has been studied in a number of works. A particular challenge has been to design universal initialization strategies which provably lead to implicit regularization in gradient-descent methods. At the same time, it has been argued by Cohen et al. (2016) that more general classes of neural networks can be captured by considering tensor factorizations. However, in the tensor case, implicit regularization has only been rigorously established for gradient flow or in the lazy training regime. In this paper, we prove the first tensor result of its kind for gradient descent rather than gradient flow. We focus on the tubal tensor product and the associated notion of low tubal rank, encouraged by the relevance of this model for image data. We establish that gradient descent in an overparametrized tensor factorization model with a small random initialization exhibits an implicit bias towards solutions of low tubal rank. Our theoretical findings are illustrated in an extensive set of numerical simulations showcasing the dynamics predicted by our theory as well as the crucial role of using a small random initialization.
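
A small numerical sketch in the spirit of this setting: define the t-product via FFTs along the third mode, fit a low-tubal-rank target with a heavily overparametrized factorization started from a small random initialization, and inspect the facewise singular values of a learned factor. The dimensions, step size, and initialization scale are illustrative choices, and the exact conditions of the paper's theory are not reproduced.

    import numpy as np

    rng = np.random.default_rng(8)

    def t_prod(A, B):
        """Tubal (t-)product: facewise matrix products in the Fourier domain along mode 3."""
        Ah, Bh = np.fft.fft(A, axis=2), np.fft.fft(B, axis=2)
        return np.real(np.fft.ifft(np.einsum("ijk,jlk->ilk", Ah, Bh), axis=2))

    def t_transpose(A):
        """Transpose for the t-product: transpose each frontal slice, reverse slices 2..n3."""
        At = np.transpose(A, (1, 0, 2))
        return np.concatenate([At[:, :, :1], At[:, :, :0:-1]], axis=2)

    # Ground truth of low tubal rank, normalized; fit with an overparametrized
    # factorization U * V from a small random initialization.
    n, n3, r_true, r_over = 12, 5, 2, 12
    T = t_prod(rng.normal(size=(n, r_true, n3)), rng.normal(size=(r_true, n, n3)))
    T /= np.linalg.norm(T)

    U = 1e-3 * rng.normal(size=(n, r_over, n3))
    V = 1e-3 * rng.normal(size=(r_over, n, n3))
    lr = 0.2
    for _ in range(10000):
        R = t_prod(U, V) - T
        U, V = U - lr * t_prod(R, t_transpose(V)), V - lr * t_prod(t_transpose(U), R)

    # Despite r_over >> r_true, the facewise singular values of the learned factor
    # beyond the first r_true remain tiny.
    Uh = np.fft.fft(U, axis=2)
    sv = np.linalg.svd(Uh[:, :, 0], compute_uv=False)
    print("fit error:", np.linalg.norm(t_prod(U, V) - T))
    print("singular values of one Fourier slice of U:", np.round(sv[:5], 4))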