New articles on Electrical Engineering and Systems Science


[1] 2603.13321

BrainWhisperer: Leveraging Large-Scale ASR Models for Neural Speech Decoding

Decoding continuous speech from intracortical recordings is a central challenge for brain-computer interfaces (BCIs), with transformative potential for individuals with conditions that impair their ability to speak. While recent microelectrode array (MEA) decoders achieve impressive accuracy, their performance is fundamentally limited by the small size of existing datasets; they remain brittle to session-to-session variability; and their ability to generalize across participants remains unexplored. We introduce BrainWhisperer, a neural speech decoder that integrates high-resolution MEA recordings with a large pretrained automatic speech recognition (ASR) model. Building on interpretability findings showing that Whisper's encoder learns phoneme-selective representations with localized attention, we train a customized version of Whisper, modified to process neural features, using a hybrid objective that combines CTC loss on phonemes--predicted from the third encoder layer--and cross-entropy loss on word tokens. We introduce domain-informed modifications including windowed self-attention to capture articulatory continuity, hierarchical month/day-specific low-rank projections to address non-stationarity, and subject-specific embedders enabling cross-subject training. Evaluated on a publicly available MEA dataset (Card et al.), BrainWhisperer matches or outperforms prior state-of-the-art decoders. Critically, cross-dataset training improves performance even on individual datasets without fine-tuning, demonstrating unprecedented generalization. The model supports dual decoding paths: a high-accuracy phoneme-based path with external language model rescoring, and a fast direct text generation path enabling sub-100ms inference with minimal hardware requirements.
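
The hybrid objective pairs a CTC loss on phonemes with a cross-entropy loss on word tokens. As a minimal sketch (not the authors' code), the NumPy snippet below implements the CTC forward algorithm in the probability domain plus a word-level cross-entropy, combined with a hypothetical weight `lam`; production systems work in the log domain at far larger scale.

```python
import numpy as np

def log_softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def ctc_nll(log_probs, target, blank=0):
    """Negative log-likelihood of a label sequence under the CTC forward algorithm.
    log_probs: (T, V) per-frame log-probabilities; target: labels without blanks."""
    T, _ = log_probs.shape
    ext = [blank]
    for t in target:                      # interleave blanks: b, l1, b, l2, b, ...
        ext += [t, blank]
    S = len(ext)
    probs = np.exp(log_probs)
    alpha = np.zeros((T, S))
    alpha[0, 0] = probs[0, ext[0]]
    if S > 1:
        alpha[0, 1] = probs[0, ext[1]]
    for t in range(1, T):
        for s in range(S):
            a = alpha[t - 1, s]
            if s > 0:
                a += alpha[t - 1, s - 1]
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                a += alpha[t - 1, s - 2]  # skip the blank between distinct labels
            alpha[t, s] = a * probs[t, ext[s]]
    p = alpha[T - 1, S - 1] + (alpha[T - 1, S - 2] if S > 1 else 0.0)
    return -np.log(p)

def word_ce(logits, targets):
    lp = log_softmax(logits)
    return -lp[np.arange(len(targets)), targets].mean()

def hybrid_loss(phoneme_logits, phonemes, word_logits, words, lam=0.5):
    # weighted sum of phoneme-level CTC and word-level cross-entropy
    return lam * ctc_nll(log_softmax(phoneme_logits), phonemes) \
         + (1.0 - lam) * word_ce(word_logits, words)
```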


[2] 2603.13328

Self-Supervised Multi-Stage Domain Unlearning for White-Matter Lesion Segmentation

Inter-scanner variability of magnetic resonance imaging has an adverse impact on the diagnostic and prognostic quality of the scans and necessitates the development of models robust to the domain shift inflicted by unseen scanner data. A review of recent advances in domain adaptation showed that the efficacy of strategies involving modifications or constraints on the latent space appears to be contingent upon the level and/or depth of supervision during model training. In this paper, we therefore propose an unsupervised domain adaptation technique based on self-supervised multi-stage unlearning (SSMSU). Building upon the state-of-the-art segmentation framework nnU-Net, we employ deep supervision at deep encoder stages using domain classifier unlearning, applied sequentially across the deep stages to suppress domain-related latent features. Following the self-configuring approach of nnU-Net, the auxiliary feedback loop implements a self-supervised backpropagation schedule for the unlearning process, since continuous unlearning was found to have a detrimental effect on the main segmentation task. Experiments were carried out on four public datasets for benchmarking white-matter lesion segmentation methods. Five benchmark models and/or strategies, covering passive to active unsupervised domain adaptation, were tested. In comparison, SSMSU demonstrated the advantage of unlearning by enhancing lesion sensitivity and limiting false detections, which resulted in higher overall segmentation quality in terms of segmentation overlap and relative lesion volume error. The proposed model takes only the FLAIR modality as input, which simplifies preprocessing pipelines and eliminates the need for inter-modality registration and harmonization, steps that can introduce errors and variability. Source code is available on this https URL.
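
Domain-classifier unlearning is commonly realized with a gradient reversal layer; the abstract does not spell out the exact mechanism, so the sketch below shows only the generic idea in NumPy, with the scale `lam` as a hypothetical knob. The layer is the identity in the forward pass and flips (and scales) the gradient in the backward pass, so that minimizing the domain-classifier loss pushes the shared encoder toward domain-invariant features.

```python
import numpy as np

def grad_reverse_forward(x):
    # identity: features reach the domain classifier unchanged
    return x

def grad_reverse_backward(grad_from_domain_head, lam=1.0):
    # sign-flipped, scaled gradient flows back into the encoder,
    # turning domain-loss minimization into domain confusion
    return -lam * grad_from_domain_head
```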


[3] 2603.13422

Projection Guided Personalized Federated Learning for Low Dose CT Denoising

Low-dose CT (LDCT) reduces radiation exposure but introduces protocol-dependent noise and artifacts that vary across institutions. While federated learning enables collaborative training without centralizing patient data, existing methods personalize in image space, making it difficult to separate scanner noise from patient anatomy. We propose ProFed (Projection Guided Personalized Federated Learning), a framework that complements the image space approach by performing dual-level personalization in the projection space, where noise originates during CT measurements before reconstruction combines protocol and anatomy effects. ProFed introduces: (i) anatomy-aware and protocol-aware networks that personalize CT reconstruction to patient and scanner-specific features, (ii) multi-constraint projection losses that enforce consistency with CT measurements, and (iii) uncertainty-guided selective aggregation that weights clients by prediction confidence. Extensive experiments on the Mayo Clinic 2016 dataset demonstrate that ProFed achieves 42.56 dB PSNR with CNN backbones and 44.83 dB with Transformers, outperforming 11 federated learning baselines, including the physics-informed SCAN-PhysFed by +1.42 dB.
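
The uncertainty-guided selective aggregation in (iii) can be sketched as confidence-weighted parameter averaging. The softmax-style weighting and the `temperature` parameter below are illustrative assumptions, not the paper's exact aggregation rule.

```python
import numpy as np

def uncertainty_weighted_aggregate(client_params, client_uncertainty, temperature=1.0):
    """Average client parameter vectors with weights that decay with uncertainty."""
    u = np.asarray(client_uncertainty, dtype=float)
    w = np.exp(-u / temperature)
    w = w / w.sum()                              # confidence weights, sum to 1
    stacked = np.stack([np.asarray(p, float) for p in client_params])
    return w, (w[:, None] * stacked).sum(axis=0)
```

A client with low predictive uncertainty thus contributes more to the global model than an uncertain one.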


[4] 2603.13439

Bayesian Uncertainty-Aware MRI Reconstruction

We propose a novel framework for joint magnetic resonance image reconstruction and uncertainty quantification using under-sampled k-space measurements. The problem is formulated as a Bayesian linear inverse problem, where prior distributions are assigned to the unknown model parameters. Specifically, we assume the target image is sparse in its spatial gradient and impose a total variation prior model. A Markov chain Monte Carlo (MCMC) method, based on a split-and-augmented Gibbs sampler, is then used to sample from the resulting joint posterior distribution of the unknown parameters. Experiments conducted using single- and multi-coil datasets demonstrate the superior performance of the proposed framework over optimisation-based compressed sensing algorithms. Additionally, our framework effectively quantifies uncertainty, showing strong correlation with error maps computed from reconstructed and ground-truth images.
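
The total variation prior penalizes the l1 norm of the spatial gradient, favoring piecewise-smooth images. A minimal anisotropic-TV sketch in NumPy (the paper's sampler additionally embeds this prior inside a split-and-augmented Gibbs scheme, omitted here):

```python
import numpy as np

def total_variation(img):
    """Anisotropic TV: sum of absolute forward differences along each axis."""
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

def log_tv_prior(img, beta=1.0):
    # log p(x) = -beta * TV(x) + const
    return -beta * total_variation(img)
```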


[5] 2603.13447

MGMAR: Metal-Guided Metal Artifact Reduction for X-ray Computed Tomography

In X-ray computed tomography (CT), metal artifact reduction (MAR) remains a major challenge because metallic implants violate standard CT forward-model assumptions, producing severe streaking and shadowing artifacts that degrade diagnostic quality. We propose MGMAR, a metal-guided MAR method that explicitly leverages metal-related information throughout the reconstruction pipeline. MGMAR first generates a high-quality prior image by training a conditioned implicit neural representation (INR) using metal-unaffected projections, and then incorporates this prior into a normalized MAR (NMAR) framework for projection completion. To improve robustness under severe metal corruption, we pretrain the encoder-conditioned INR on paired metal-corrupted and artifact-free CT images, thereby embedding data-driven prior knowledge into the INR parameter space. This prior-embedded initialization reduces sensitivity to random initialization and accelerates convergence during measurement-specific refinement. The encoder takes a metal-corrupted reconstruction together with a recursively constructed metal artifact image, enabling the latent field to capture metal-dependent global artifact patterns. After projection completion using the INR prior, we further suppress residual artifacts using a metal-conditioned correction network, where the metal mask modulates intermediate features via adaptive instance normalization to target metal-dependent secondary artifacts while preserving anatomical structures. Experiments on the public AAPM-MAR benchmark demonstrate that MGMAR achieves state-of-the-art performance, attaining an average final score of 0.89 on 29 clinical test cases.
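
Projection completion in NMAR-style pipelines ultimately replaces metal-affected sinogram bins with interpolated values. The sketch below shows only that interpolation core for one projection row; NMAR additionally normalizes the sinogram by a prior sinogram before interpolating, which is omitted here.

```python
import numpy as np

def inpaint_projection_row(row, metal_mask):
    """Replace metal-affected detector bins by linear interpolation between
    the nearest unaffected bins of the same projection row."""
    idx = np.arange(row.size)
    good = ~metal_mask
    out = row.astype(float).copy()
    out[metal_mask] = np.interp(idx[metal_mask], idx[good], row[good])
    return out
```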


[6] 2603.13466

Open World MRI Reconstruction with Bias-Calibrated Adaptation

Real-world MRI reconstruction systems face the open-world challenge: test data from unseen imaging centers, anatomical structures, or acquisition protocols can differ drastically from training data, causing severe performance degradation. Existing methods struggle with this challenge. To address this, we propose BiasRecon, a bias-calibrated adaptation framework grounded in the minimal intervention principle: preserve what transfers, calibrate what does not. Concretely, BiasRecon formulates open-world adaptation as an alternating optimization framework that jointly optimizes three components: (1) frequency-guided prior calibration that introduces layer-wise calibration variables to selectively modulate frequency-specific features of the pre-trained score network via self-supervised k-space signals, (2) score-based denoising that leverages the calibrated generative prior for high-fidelity image reconstruction, and (3) adaptive regularization that employs Stein's Unbiased Risk Estimator to dynamically balance the prior-measurement trade-off, matching test-time noise characteristics without requiring ground truth. By intervening minimally and precisely through this alternating scheme, BiasRecon achieves robust adaptation with fewer than 100 tunable parameters. Extensive experiments across four datasets demonstrate state-of-the-art performance on open-world reconstruction tasks.


[7] 2603.13488

Understanding the strengths and weaknesses of SSL models for audio deepfake model attribution

Audio deepfake model attribution aims to mitigate the misuse of synthetic speech by identifying the source model responsible for generating a given audio sample, enabling accountability and informing vendors. The task is challenging: although self-supervised learning (SSL)-derived acoustic features have demonstrated state-of-the-art attribution capabilities, the underlying factors driving their success and the limits of their discriminative power remain unclear. In this paper, we systematically investigate how SSL-derived features capture architectural signatures in audio deepfakes. By controlling multiple dimensions of the audio generation process, we reveal how subtle perturbations in model checkpoints, text prompts, vocoders, or speaker identity influence attribution. Our results provide new insights into the robustness, biases, and limitations of SSL-based deepfake attribution, highlighting both its strengths and vulnerabilities in realistic scenarios.


[8] 2603.13509

Verification and Forward Invariance of Control Barrier Functions for Differential-Algebraic Systems

Differential-algebraic equations (DAEs) arise in power networks, chemical processes, and multibody systems, where algebraic constraints encode physical conservation laws. The safety of such systems is critical, yet safe control is challenging because algebraic constraints restrict allowable state trajectories. Control barrier functions (CBFs) provide computationally efficient safety filters for ordinary differential equation (ODE) systems. However, existing CBF methods are not directly applicable to DAEs due to potential conflicts between the CBF condition and the constraint manifold. This paper introduces DAE-aware CBFs that incorporate the differential-algebraic structure through projected vector fields. We derive conditions that ensure forward invariance of safe sets while preserving algebraic constraints and extend the framework to higher-index DAEs. A systematic verification framework is developed, establishing necessary and sufficient conditions for geometric correctness and feasibility of DAE-aware CBFs. For polynomial systems, sum-of-squares certificates are provided, while for nonpolynomial and neural network candidates, satisfiability modulo theories (SMT) solvers are used for falsification. The approach is validated on wind turbine and flexible-link manipulator systems.
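
The projected vector field underlying a DAE-aware CBF can be sketched with an orthogonal projector onto the tangent space of the constraint manifold. The specific projector and the linear class-K choice alpha*h below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def tangent_projector(G):
    """Orthogonal projector onto the tangent space {v : G v = 0} of the
    constraint manifold g(x) = 0, where G is the constraint Jacobian (m, n)."""
    return np.eye(G.shape[1]) - G.T @ np.linalg.solve(G @ G.T, G)

def cbf_condition(grad_h, f_proj, h, alpha=1.0):
    # DAE-aware CBF condition: h_dot along the projected field >= -alpha * h
    return grad_h @ f_proj >= -alpha * h
```

By construction the projected field satisfies G v = 0, so trajectories that follow it stay on the constraint manifold while the barrier condition is checked against it rather than the raw dynamics.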


[9] 2603.13518

VoXtream2: Full-stream TTS with dynamic speaking rate control

Full-stream text-to-speech (TTS) for interactive systems must start speaking with minimal delay while remaining controllable as text arrives incrementally. We present VoXtream2, a zero-shot full-stream TTS model with dynamic speaking-rate control that can be updated mid-utterance on the fly. VoXtream2 combines a distribution matching mechanism over duration states with classifier-free guidance across conditioning signals to improve controllability and synthesis quality. Prompt-text masking enables textless audio prompting, removing the need for prompt transcription. Across standard zero-shot benchmarks and a dedicated speaking-rate test set, VoXtream2 achieves competitive objective and subjective results against public baselines despite a smaller model and less training data. In full-stream mode, it runs 4 times faster than real time with 74 ms first-packet latency on a consumer GPU.


[10] 2603.13560

Task-Oriented Wireless Transmission of 3D Point Clouds: Geometric Versus Semantic Robustness

Wireless transmission of high-dimensional 3D point clouds (PCs) is increasingly required in industrial collaborative robotics systems. Conventional compression methods prioritize geometric fidelity, although many practical applications ultimately depend on reliable task-level inference rather than exact coordinate reconstruction. In this paper, we propose an end-to-end semantic communication framework for wireless 3D PC transmission and conduct a systematic study of the relationship between geometric reconstruction fidelity and semantic robustness under channel impairments. The proposed architecture jointly supports geometric recovery and object classification from a shared transmitted representation, enabling direct comparison between coordinate-level and task-level sensitivity to noise. Experimental evaluation on a real industrial dataset reveals a pronounced asymmetry: semantic inference remains stable across a broad signal-to-noise ratio (SNR) range even when geometric reconstruction quality degrades significantly. These results demonstrate that reliable task execution does not require high-fidelity geometric recovery and provide design insights for task-oriented wireless perception systems in bandwidth- and power-constrained industrial environments.


[11] 2603.13580

Extended Target Sensing in MIMO-OFDM ISAC Systems: Modeling, Optimization and Estimation

This paper develops a comprehensive target modeling, beamforming optimization, and parameter estimation framework for extended-target sensing in wideband MIMO-OFDM integrated sensing and communication systems. We propose a parametric scattering model (PSM) that decouples target geometry from electromagnetic scattering characteristics, requiring only six nonlinear geometric parameters and linear radar cross-section (RCS) coefficients. Based on this compact structure, we derive a hybrid Bayesian Cramér-Rao bound (CRB) for joint estimation of azimuth, elevation, and range-related parameters. To handle inherent range ambiguities due to OFDM signaling, we analyze the range ambiguity function and introduce range sidelobe suppression constraints around the true range. Based on these constraints, we formulate an ambiguity-aware transmit beamforming design that minimizes a weighted geometric CRB subject to per-user signal-to-interference-plus-noise ratio (SINR) requirements and a total power budget. As benchmarks, we extend two other common models to the same wideband MIMO-OFDM scenario. We also derive maximum a posteriori estimators and a computational complexity analysis for all three models. Simulation results demonstrate that the proposed PSM-based approach achieves improved target localization with significantly reduced runtime for beamforming optimization and parameter estimation, while consistently satisfying communication SINR requirements.


[12] 2603.13597

DQ-Ladder: A Deep Reinforcement Learning-based Bitrate Ladder for Adaptive Video Streaming

Adaptive streaming of segmented video over HTTP typically relies on a predefined set of bitrate-resolution pairs, known as a bitrate ladder. However, fixed ladders often overlook variations in content and decoding complexities, leading to suboptimal trade-offs between encoding time, decoding efficiency, and video quality. This article introduces DQ-Ladder, a deep reinforcement learning (DRL)-based scheme for constructing time- and quality-aware bitrate ladders for adaptive video streaming applications. DQ-Ladder employs predicted decoding time, quality scores, and bitrate levels per segment as inputs to a Deep Q-Network (DQN) agent, guided by a weighted reward function of decoding time, video quality, and resolution smoothness. We leverage machine learning models to predict decoding time, bitrate level, and objective quality metrics (VMAF, XPSNR), eliminating the need for exhaustive encoding or quality metric computation. We evaluate DQ-Ladder using the Versatile Video Coding (VVC) toolchain (VVenC/VVdeC) on 750 video sequences across six Apple HLS-compliant resolutions and 41 quantization parameters. Experimental results against four baselines show that DQ-Ladder achieves BD-rate reductions of at least 10.3% for XPSNR compared to the HLS ladder, while reducing decoding time by 22%. DQ-Ladder shows significantly lower sensitivity to prediction errors than competing methods, remaining robust even with up to 20% noise.


[13] 2603.13602

Expressivity of Programmable-Metasurface-Based Physical Neural Networks: Encoding Non-Linearity, Structural Non-Linearity, and Depth

Wave-based signal processing conventionally encodes input data into the input wavefront, making it challenging to implement non-linear operations. Programmable wave systems enable an alternative approach: encoding the input data into the scattering properties of tunable components. With such structural input encoding, two potentially non-linear mappings are involved: first, from the input data to the tunable components' scattering characteristics, and, second, from these scattering characteristics to the output wavefront. In this paper, we systematically examine the expressivity of a wave-based physical neural network (WPNN) with structural input encoding. Our analysis is based on a physics-consistent multiport-network model of a compact D-band rich-scattering cavity parametrized by a 100-element programmable metasurface. We separately control encoding non-linearity, structural non-linearity, and network depth in order to examine their interplay, considering a controlled scalar regression task. With phase encoding and strong inter-element mutual coupling (MC), both aforementioned mappings are strongly non-linear and the WPNN performs very well even with a single layer. We further observe that additional layers can partially compensate for weak inter-element MC. In addition, we demonstrate that WPNN depth can improve expressivity even when it is not associated with an increase in trainable weights. Altogether, our results provide a physics-consistent picture of how encoding choice, MC strength, and depth jointly govern the expressive power of programmable-metasurface-based WPNNs, informing design choices for future experimental implementations of WPNNs.


[14] 2603.13608

A Lyapunov Characterization of Robust D-Stability with Application to Decentralized Integral Control of LTI Systems

The concept of matrix D-stability plays an important role in applications, ranging from economic and biological system models to decentralized control. Here we provide necessary and sufficient Lyapunov-type conditions for the robust (block) D-stability property. We leverage this characterization as part of a novel Lyapunov analysis of decentralized integral control for MIMO LTI systems, providing sufficient conditions guaranteeing stability under low-gain and under arbitrary connection and disconnection of individual control loops.


[15] 2603.13666

Unsupervised Adaptation from FDG to PSMA PET/CT for 3D Lesion Detection under Label Shift

In this work, we propose an unsupervised domain adaptation (UDA) framework for 3D volumetric lesion detection that adapts a detector trained on labeled FDG PET/CT to unlabeled PSMA PET/CT. Beyond covariate shift, cross-tracer adaptation also exhibits label shift in both lesion size composition and the number of lesions per subject. We introduce self-training with two mechanisms that explicitly model and compensate for this label shift. First, we adaptively adjust the detection anchor shapes by re-estimating target domain box scales from selected pseudo labels and updating anchors with an exponential moving average. This increases positive anchor coverage for small PSMA lesions and stabilizes box regression. Second, instead of a fixed confidence threshold for pseudo-label selection, we allocate per-size-bin quotas according to the estimated target domain histogram over lesion volumes. The self-training alternates between supervised learning with prior-guided pseudo labeling on PSMA and supervised learning on labeled FDG. On AutoPET 2024, adapting from 501 labeled FDG studies to 369 $^{18}$F-PSMA studies, the proposed method improves both AP and FROC over the source-only baseline and conventional self-training without label-shift mitigation, indicating that modeling target lesion prevalence and size composition is an effective path to robust cross-tracer detection.
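
The two mechanisms can be sketched as an EMA anchor update and per-bin quota selection. The momentum value and the quota format below are illustrative assumptions rather than the paper's exact hyperparameters.

```python
import numpy as np

def update_anchors(anchors, pseudo_box_scales, momentum=0.9):
    """EMA update of anchor shapes toward target-domain pseudo-label statistics."""
    return momentum * np.asarray(anchors, float) \
        + (1 - momentum) * np.asarray(pseudo_box_scales, float).mean(axis=0)

def select_by_quota(scores, size_bins, quotas):
    """Keep the top-scoring pseudo labels per size bin, up to each bin's quota,
    instead of applying one global confidence threshold."""
    keep = []
    for b, q in quotas.items():
        idx = np.where(size_bins == b)[0]
        keep.extend(idx[np.argsort(scores[idx])[::-1][:q]].tolist())
    return sorted(keep)
```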


[16] 2603.13678

Peak-Load Pricing and Investment Cost Recovery with Duration-Limited Storage

Energy storage shifts energy from off-peak periods to on-peak periods. Unlike conventional generation, storage is duration-limited: the stored energy capacity constrains the duration over which it can supply power. To understand how these constraints affect optimal pricing and investment decisions, we extend the classic two-period peak-load pricing model to include duration-limited storage. By adopting assumptions typical of solar-dominated systems, we link on- and off-peak prices to storage investment costs, round-trip efficiency, and the duration of the peak period. The bulk of the scarcity premium from on-peak prices is associated with the fixed costs of storage as opposed to variable costs stemming from round-trip efficiency losses. Unlike conventional generators, the binding duration constraints lead storage to recover energy capacity costs on a per-peak-event basis instead of amortizing these costs over total peak hours. A numerical example illustrates the implications for equilibrium prices and capacity investment.


[17] 2603.13699

D-Compress: Detail-Preserving LiDAR Range Image Compression for Real-Time Streaming on Resource-Constrained Robots

Efficient 3D LiDAR point cloud compression (LPCC) and streaming are critical for edge server-assisted robotic systems, enabling real-time communication with compact data representations. A widely adopted approach represents LiDAR point clouds as range images, enabling the direct use of mature image and video compression codecs. However, because these codecs are designed with human visual perception in mind, they often compromise geometric details, which downgrades the performance of downstream robotic tasks such as mapping and object detection. Furthermore, rate-distortion optimization (RDO)-based rate control remains largely underexplored for range image compression (RIC) under dynamic bandwidth conditions. To address these limitations, we propose D-Compress, a new detail-preserving and fast RIC framework tailored for real-time streaming. D-Compress integrates both intra- and inter-frame prediction with an adaptive discrete wavelet transform approach for precise residual compression. Additionally, we introduce a new RDO-based rate control algorithm for RIC through new rate-distortion modeling. Extensive evaluations on various datasets demonstrate the superiority of D-Compress, which outperforms state-of-the-art (SOTA) compression methods in both geometric accuracy and downstream task performance, particularly at compression ratios exceeding 100x, while maintaining real-time execution on resource-constrained hardware. Moreover, evaluations under dynamic bandwidth conditions validate the robustness of its rate control mechanism.


[18] 2603.13731

Online Model Predictive Control for Trajectory and Beamforming Optimization in UAV-Enabled URLLC

This paper investigates joint trajectory and active beamforming design for unmanned aerial vehicle (UAV)-enabled ultra-reliable low-latency communication (URLLC) systems under finite blocklength (FBL) transmission. Unlike conventional Shannon-capacity formulations, the FBL regime introduces a signal-to-interference-plus-noise ratio (SINR)-dependent dispersion penalty that increases the sensitivity of reliability to mobility-induced channel variations. To address this challenge, we develop a propulsion-aware model predictive control (MPC) framework that performs receding-horizon joint trajectory and multi-user beamforming optimization while enforcing FBL-based rate constraints. The resulting long-horizon nonconvex problem is decomposed into beamforming and trajectory subproblems using alternating optimization. A concave surrogate is constructed for the Shannon-capacity term, while convex approximations are derived for the dispersion term and the nonlinear propulsion power model, yielding tractable convex subproblems solved iteratively. Compared with an offline MPC baseline, where the predictive problem is solved once over the entire mission horizon without feedback updates, and a conventional offline trajectory-beamforming optimization, the proposed closed-loop framework achieves disturbance-resilient mission completion under UAV position disturbances. Simulation results show that, compared with maximum ratio transmission (MRT) and equal-power allocation, the proposed interference-aware design significantly improves URLLC reliability under stringent minimum rate constraints. The results also quantify the impact of antenna scaling, transmit power, and transmission time on FBL performance, providing insights for reliability-centric UAV-enabled wireless networks in 5G and beyond.


[19] 2603.13780

Integrated Spoofing-Robust Automatic Speaker Verification via a Three-Class Formulation and LLR

Spoofing-robust automatic speaker verification (SASV) aims to integrate automatic speaker verification (ASV) and countermeasure (CM) systems. A popular solution is fusion of independent ASV and CM scores. To better model SASV, some frameworks integrate ASV and CM within a single network. However, these solutions are typically bi-encoder based, offer limited interpretability, and cannot be readily adapted to new evaluation parameters without retraining. Motivated by these limitations, we propose a unified end-to-end framework via a three-class formulation that enables log-likelihood ratio (LLR) inference from class logits for a more interpretable decision pipeline. Experiments show comparable performance to existing methods on ASVSpoof5 and better results on SpoofCeleb. Visualization and analysis also show that the three-class reformulation provides greater interpretability.
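
With three class logits, an LLR for "accept" versus the union of the two reject classes follows directly from the class posteriors. The class ordering and the flat-prior assumption below are illustrative, not the paper's specification.

```python
import numpy as np

def logsumexp(z):
    m = z.max()
    return m + np.log(np.exp(z - m).sum())

def sasv_llr(logits):
    """LLR of 'target bona fide speaker' vs. the union of the reject classes.
    Assumed class order: [target bona fide, non-target bona fide, spoof]."""
    logits = np.asarray(logits, float)
    log_post = logits - logsumexp(logits)         # class log-posteriors (flat prior)
    return log_post[0] - logsumexp(log_post[1:])  # accept vs. reject union
```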


[20] 2603.13828

Non-trivial consensus on directed signed matrix-weighted networks with compound measurement noises and time-varying topologies

This paper studies non-trivial consensus--a relatively novel and unexplored convergence behavior--on directed signed matrix-weighted networks subject to both additive and multiplicative measurement noises under time-varying topologies. Building upon grounded matrix-weighted Laplacian properties, a stochastic dynamic model is established that simultaneously captures inter-dimensional cooperative and antagonistic interactions, compound measurement noises and time-varying network structures. Based on stochastic differential equations theory, protocols that guarantee mean square and almost sure non-trivial consensus are proposed. Specifically, for any predetermined non-trivial consensus state, all agents are proven to converge toward this non-zero value in the mean-square and almost-sure senses. The design of the control gain function in our protocols highlights a balanced consideration of the cumulative effect over time, the asymptotic decay property and the finite energy corresponding to measurement noises. Notably, the conditions on time-varying topologies in our protocols only require boundedness of elements in edge weight matrices, which facilitates the practical use of time-varying topologies in matrix-weighted network consensus algorithms. Furthermore, the proposed protocols operate under milder connectivity conditions and no requirements on structural (un)balance properties. The work in this paper demonstrates that groups with both cooperative and antagonistic inter-dimensional interactions can achieve consensus even in the presence of compound measurement noises and time-varying topologies, challenging the conventional belief that consensus is attainable only in fully cooperative settings.


[21] 2603.13861

Active Beyond-Diagonal Reconfigurable Intelligent Surfaces: Modeling, Architecture Design, and Optimization

Beyond-diagonal reconfigurable intelligent surfaces (BD-RISs) are an emerging RIS 2.0 technology for future wireless communication. However, BD-RISs are primarily passive without active amplification, suffering from severe multiplicative path loss. To address this limitation, in this work we investigate the active BD-RIS, including its modeling, architecture design, and optimization. We first analyze the active BD-RIS using multiport network theory with scattering parameters and derive a physical and electromagnetic compliant active BD-RIS aided communication model. We also design two new active BD-RIS architectures, namely fully- and group-connected active BD-RISs. Based on the proposed model and architecture, we investigate the active BD-RIS aided single-input single-output system and derive the closed-form optimal solution and scaling law of the signal-to-noise ratio. We further investigate the active BD-RIS aided multiple-input multiple-output system and propose an iterative algorithm based on quadratically constrained quadratic programming to maximize the spectral efficiency. Numerical results are provided and show that the active BD-RIS can achieve higher spectral efficiency than the active/passive diagonal RIS and passive BD-RIS. For example, to achieve the same spectral efficiency, the number of elements required by active BD-RIS is less than half of that required by active diagonal RIS, showing the advantages of active BD-RIS.


[22] 2603.13862

Fully distributed consensus control for stochastic multi-agent systems under undirected and directed topologies

This work aims to address the design of fully distributed control protocols for stochastic consensus and, for the first time, establishes the existence and uniqueness of solutions for the path-dependent and highly nonlinear closed-loop systems under both undirected and directed topologies, bridging a critical gap in the literature. For the case of directed graphs, a unified fully distributed control protocol is designed to guarantee mean square and almost sure consensus of stochastic multi-agent systems. Moreover, an enhanced fully distributed protocol with additional tunable parameters, designed for undirected graphs, is proposed, which guarantees stochastic consensus while achieving superior convergence speed. Additionally, our work provides explicit exponential estimates for the corresponding convergence rates of stochastic consensus, elucidating the relationship between the exponential convergence rate and the system parameters. Simulations validate the theoretical results.
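
The noise-free backbone of such consensus protocols is the Laplacian update x(k+1) = x(k) - eps * L x(k). The stochastic setting studied in the paper adds measurement noise and a decaying gain, which this sketch omits; the 3-agent path graph and step size are illustrative choices.

```python
import numpy as np

# Graph Laplacian of the undirected path 1-2-3
L = np.array([[ 1.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])

def consensus_step(x, eps=0.25):
    # x_i += eps * sum_j a_ij (x_j - x_i); eps must be < 2 / lambda_max(L)
    return x - eps * (L @ x)

x = np.array([0.0, 1.0, 2.0])
for _ in range(200):
    x = consensus_step(x)
# states converge to the initial average (1.0), which the update preserves
```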


[23] 2603.13866

Airy Beam Engineering in Near-field Communications: A Tractable Closed-Form Analysis in the Terahertz Band

Terahertz (THz) communication can offer terabit-per-second rates in future wireless systems, thanks to ultra-wide bandwidths, but requires large antenna arrays. As antenna apertures expand and systems enter near-field scenarios, the conventional binary classification of communication links as either Line-of-Sight (LoS) or Non-Line-of-Sight (NLoS) becomes insufficient. Instead, quasi-LoS scenarios, where the LoS path is partially obstructed, are increasingly prevalent, posing significant challenges for traditional LoS focusing and steering beams. The Airy beam serves as a promising alternative, utilizing its non-diffracting and curved trajectory properties to mitigate such blockages. However, while existing electromagnetics literature primarily explores their physical patterns without practical generation schemes, recent communication-oriented designs predominantly rely on learning-based frameworks lacking interpretable closed-form solutions. To address this issue, this paper investigates a closed-form Airy beam design to efficiently synthesize Airy beam phase profiles based on the positions of the transceivers and obstacles. Specifically, rigorous analytical derivations of the electric field and trajectory are presented to establish a deterministic closed-form design for ULA Airy beamforming. Leveraging 3D wavefront separability, this framework is extended to uniform planar arrays (UPAs) with two operation modes: the hybrid focusing-Airy mode and the dual Airy mode. Simulation results verify the effectiveness of our derived trajectory equations and demonstrate that the proposed closed-form design significantly outperforms conventional beamforming schemes in quasi-LoS scenarios. Furthermore, the proposed method achieves performance comparable to exhaustive numerical searches with low computational complexity and enhanced physical interpretability.


[24] 2603.13871

Evaluating Pretrained General-Purpose Audio Representations for Music Genre Classification

This study investigates the use of self-supervised learning embeddings, particularly BYOL-A, in conjunction with a deep neural network classifier for Music Genre Classification. Our experiments demonstrate that BYOL-A embeddings outperform other pre-trained models, such as PANNs and VGGish, achieving an accuracy of 81.5% on the GTZAN dataset and 64.3% on FMA-Small. The proposed DNN classifier improved performance by 10-16% over linear classifiers. We explore the effects of contrastive and triplet losses and of multitask training with optimized loss weights, which achieves the highest accuracy. To address cross-dataset challenges, we combined GTZAN and FMA-Small into a unified 18-class label space for joint training, resulting in slight performance drops on GTZAN but comparable results on FMA-Small. The scripts developed in this work are publicly available.


[25] 2603.13883

Fully Distributed Adaptive Consensus Approach for Economic Dispatch Problem

This research presents a novel approach to solving the economic load dispatch (ELD) problem in smart grid systems by leveraging a multi-agent distributed consensus strategy. The core idea revolves around achieving agreement among generators on their incremental cost values, thereby enabling an optimal allocation of power generation. To enhance convergence and robustness, the study introduces an adaptive coupling weight mechanism within a fully decentralized consensus framework, carefully designed with appropriate initial settings for incremental costs. The proposed distributed control protocol is versatile: it functions effectively in both constrained and unconstrained generator capacity scenarios. Importantly, the methodology ensures that total power generation continuously matches dynamic load demands throughout the dispatch process, maintaining system-wide balance. To accommodate fluctuating and time-varying load profiles, a dummy node is incorporated into the network architecture, acting as a flexible proxy for real-time demand changes. The resilience of the method is further evaluated under communication disruptions, specifically by analyzing generator link failures through a switching network topology. Stability of the system is rigorously established using a Lyapunov-based analysis, assuming an undirected and connected communication graph among agents. To validate the practical efficacy of the proposed technique, comprehensive simulations are conducted on the IEEE 30-bus test system within the MATLAB environment, confirming its accuracy, adaptability, and computational efficiency in realistic smart grid conditions.
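
The core mechanism can be illustrated with a toy sketch (not the paper's exact protocol or gains): generators with quadratic costs $C_i(P) = a_i P^2 + b_i P$ run consensus on their incremental costs $\lambda_i = 2 a_i P_i + b_i$ over an undirected connected graph, plus a shared feedback term that drives total generation toward the demand, playing the role the dummy node serves in the paper. All numbers below are illustrative.

```python
import numpy as np

# Three generators with quadratic costs C_i(P) = a_i P^2 + b_i P,
# so each incremental cost is lambda_i = 2 a_i P_i + b_i.
a = np.array([0.010, 0.012, 0.008])     # cost curvature coefficients
b = np.array([2.0, 1.8, 2.2])           # linear cost coefficients
demand = 600.0                          # total load to be served

A = np.array([[0, 1, 1],                # undirected ring communication graph
              [1, 0, 1],
              [1, 1, 0]], float)
L = np.diag(A.sum(1)) - A               # graph Laplacian

lam = b + 2*a*100.0                     # incremental costs at an initial dispatch
eps, gamma = 0.2, 1e-4                  # consensus gain and balance-feedback gain
for _ in range(5000):
    P = (lam - b)/(2*a)                 # dispatch implied by current lambdas
    # consensus on lambda plus a common term restoring supply-demand balance
    lam = lam - eps*(L @ lam) + gamma*(demand - P.sum())

P = (lam - b)/(2*a)
# at convergence, all lambdas agree and total generation matches demand
```

At the fixed point the lambdas are equal (the classical equal-incremental-cost condition) and the balance term vanishes, so the dispatch sums exactly to the demand.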


[26] 2603.13922

On the Impact of Operating Points on Small-Signal Stability: Decentralized Stability Sets via Scaled Relative Graphs

This paper presents a decentralized frequency-domain framework to characterize the influence of the operating point on the small-signal stability of converter-dominated power systems. The approach builds on Scaled Relative Graph (SRG) analysis, extended here to address Linear Parameter-Varying (LPV) systems. By exploiting the affine dependence of converter admittances on their steady-state operating points, the centralized small-signal stability assessment of the grid is decomposed into decentralized, frequency-wise geometric tests. Each converter can independently evaluate its feasible stability region, expressed as a set of linear inequalities in its parameter space. The framework provides closed-form geometric characterizations applicable to both grid-following (GFL) and grid-forming (GFM) converters, and validation results confirm its effectiveness.


[27] 2603.13929

Antenna Placement Design for Interference Exploitation in Pinching-Antenna Systems

Pinching-antenna systems (PASs) have been proposed as a flexible antenna technology to fulfill the stringent requirements of high data rates and large-scale equipment deployment in future wireless networks. The principle of a PAS is to map a signal onto dielectric waveguides for transmission; by adjusting the positions of pinching antennas (PAs) along the waveguides, the gain of line-of-sight links can be enhanced and the large-scale path loss reduced. Symbol-level precoding (SLP) is a nonlinear precoding technique that converts multi-user interference into constructive interference via symbol-level beamforming design. In this paper, we study the combination of SLP and PAS, leveraging the advantages of PAS to further enhance the ability of SLP to convert interference constructively. The transmit power minimization problem is formulated and solved for a multi-waveguide, multi-PA system by jointly designing the beamforming vectors and the PAs' positions under the SLP principle. An alternating optimization (AO) framework is applied to decouple the beamforming vectors and the position coefficients of the PAs. For given beamforming vectors, a new objective function is formulated with respect to the positions of the PAs. Exploiting the structure of this objective function, the optimization over the PAs' position coefficients can be decomposed into multiple independent subproblems, each corresponding to one PA's position coefficient, and a projected gradient descent (PGD)-based method, constrained by the feasible movable region of each PA, is then developed to obtain suboptimal position coefficients. The performance improvements achieved by the combination of PAS and SLP, as well as the effectiveness of the proposed algorithm, are verified through simulation results.
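
The per-antenna subproblem structure admits a very compact sketch of projected gradient descent on a box-constrained scalar position variable. The objective below is an illustrative stand-in, not the paper's SLP transmit-power expression; `pgd` and its parameters are hypothetical names.

```python
import numpy as np

def pgd(grad, x0, lo, hi, step=0.1, iters=200):
    """Projected gradient descent on a scalar variable constrained to [lo, hi],
    mirroring one PA's position subproblem within its feasible segment."""
    x = x0
    for _ in range(iters):
        x = np.clip(x - step*grad(x), lo, hi)   # gradient step, then project
    return x

# toy objective f(x) = (x - 1.7)^2 whose unconstrained minimizer lies
# outside the feasible region [0, 1], so the projection becomes active
x_star = pgd(lambda x: 2*(x - 1.7), x0=0.2, lo=0.0, hi=1.0)
```

Because the unconstrained minimizer 1.7 lies outside the feasible interval, the iterates settle on the boundary point 1.0, the constrained optimum.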


[28] 2603.13959

Safety in Admittance Control using Reference Trajectory Shaping

This paper presents a switched model reference admittance control framework to achieve safe and compliant human-robot collaboration through reference trajectory shaping. The proposed method generates variable admittance parameters according to task compliance and task-space safety requirements. Additionally, a disturbance bound is incorporated to enhance robustness against disturbances. Safety guarantees are explicitly established by integrating invariance control, ensuring that the reference trajectory remains within the admissible region. Stability of the switched system is analyzed using a common quadratic Lyapunov function, which confirms asymptotic convergence of the tracking error. The effectiveness of the approach is demonstrated through simulations on a two-link manipulator, and comparisons with existing methods are also presented. Furthermore, real-time implementation on a single-link manipulator validates the practical feasibility of the controller, highlighting its ability to achieve both compliance and safety in physical interaction scenarios.


[29] 2603.13967

EchoLVFM: One-Step Video Generation via Latent Flow Matching for Echocardiogram Synthesis

Echocardiography is widely used for assessing cardiac function, where clinically meaningful parameters such as left-ventricular ejection fraction (EF) play a central role in diagnosis and management. Generative models capable of synthesising realistic echocardiogram videos with explicit control over such parameters are valuable for data augmentation, counterfactual analysis, and specialist training. However, existing approaches typically rely on computationally expensive multi-step sampling and aggressive temporal normalisation, limiting efficiency and applicability to heterogeneous real-world data. We introduce EchoLVFM, a one-step latent video flow-matching framework for controllable echocardiogram generation. Operating in the latent space, EchoLVFM synthesises temporally coherent videos in a single inference step, achieving a $\mathbf{\sim 50\times}$ improvement in sampling efficiency compared to multi-step flow baselines while maintaining visual fidelity. The model supports global conditioning on clinical variables, demonstrated through precise control of EF, and enables reconstruction and counterfactual generation from partially observed sequences. A masked conditioning strategy further removes fixed-length constraints, allowing shorter sequences to be retained rather than discarded. We evaluate EchoLVFM on the CAMUS dataset under challenging single-frame conditioning. Quantitative and qualitative results demonstrate competitive video quality, strong EF adherence, and 57.9% discrimination accuracy by expert clinicians, which is close to chance. These findings indicate that efficient, one-step flow matching can enable practical, controllable echocardiogram video synthesis without sacrificing fidelity. Code available at: this https URL


[30] 2603.13975

Discrete-time linear quadratic stochastic control with equality-constrained inputs: Application to energy demand response

We investigate the discrete-time stochastic linear quadratic control problem for a population of cooperative agents under a hard equality constraint on total control inputs, motivated by demand response in renewable energy systems. We establish the optimal solution that respects hard equality constraints for systems with additive noise in the dynamics. The optimal control law is derived using dynamic programming and Karush-Kuhn-Tucker (KKT) conditions, and the resulting control solution depends on a discrete-time Riccati-like recursive equation. Application examples of coordinating the charging of a network of residential batteries to absorb excess solar power generation are demonstrated, and the proposed control is shown to achieve exact power tracking while considering individual State-of-Charge (SoC) objectives.
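
The role of the KKT conditions can be seen in a one-step toy version of the allocation problem (a sketch under stated assumptions, not the paper's full dynamic solution): $N$ batteries with deviation cost $q(x_i+u_i)^2$ and effort cost $R u_i^2$ share the hard coupling constraint $\sum_i u_i = U$. Stationarity with a scalar multiplier $\nu$ gives a closed form in which every agent's input is its unconstrained minimizer shifted by a common amount so that the totals match exactly; `kkt_allocation` and the numbers below are illustrative.

```python
import numpy as np

def kkt_allocation(x, U, q=1.0, R=0.5):
    """One-step KKT solution of min sum_i q*(x_i+u_i)^2 + R*u_i^2
    subject to the hard equality constraint sum_i u_i = U."""
    N = len(x)
    # stationarity: 2*R*u_i + 2*q*(x_i + u_i) + nu = 0, combined with sum u_i = U
    nu = -2*(q*x.sum() + U*(R + q))/N        # Lagrange multiplier
    u = -(q*x + nu/2)/(R + q)
    return u

x = np.array([0.2, -0.5, 0.9, 0.1])          # SoC deviations (illustrative)
u = kkt_allocation(x, U=1.5)                 # total power to absorb: 1.5
```

By construction `u.sum()` equals `U` exactly: the constraint is enforced through the multiplier, not by post-hoc clipping, which is what makes the paper's "exact power tracking" possible.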


[31] 2603.13981

NLOS-Aided Joint OTA Synchronization and Off-Grid Imaging for Distributed MIMO Systems

Distributed multiple-input multiple-output (MIMO) architectures enable large-scale integrated sensing and communication (ISAC) by providing high spatial resolution and robustness through spatial diversity. However, practical phase-coherent sensing is challenged by phase synchronization errors and modeling mismatch caused by grid discretization. Existing over-the-air (OTA) synchronization methods typically treat synchronization and sensing tasks separately, which may lead to inaccurate phase alignment when multipath components are used for imaging. In this paper, we propose a non-line-of-sight (NLOS)-aided joint OTA synchronization and off-grid imaging framework for distributed MIMO ISAC systems. First, a line-of-sight (LOS)-assisted coarse synchronization is performed to establish initial phase coherence across distributed links. Subsequently, an iterative refinement stage exploits reconstructed NLOS components obtained from imaging results. By modeling off-grid effects via a first-order Taylor expansion, we transform measurements with nonlinear off-grid offsets into an augmented linear model with jointly sparse reflectivity and offset variables. The imaging problem is reformulated as a structured sparse recovery task and solved using a tailored off-grid approximate message passing (OG-AMP) algorithm. The imaging and synchronization modules are coupled within a closed-loop alternating optimization framework, where improved imaging enables more accurate phase refinement, and vice versa. Numerical results show that the proposed framework achieves accurate synchronization and imaging under phase errors. Compared with conventional approaches, it shows superior robustness and accuracy.


[32] 2603.14003

The Taxonomies, Training, and Applications of Event Stream Modelling for Electronic Health Records

The widespread adoption of electronic health records (EHRs) enables the acquisition of heterogeneous clinical data, spanning lab tests, vital signs, medications, and procedures, which offer transformative potential for artificial intelligence in healthcare. Although traditional modelling approaches have typically relied on multivariate time series, they often struggle to accommodate the inherent sparsity and irregularity of real-world clinical workflows. Consequently, research has shifted toward event stream representation, which treats patient records as continuous sequences, thereby preserving the precise temporal structure of the patient journey. However, the existing literature remains fragmented, characterised by inconsistent definitions, disparate modelling architectures, and varying training protocols. To address these gaps, this review establishes a unified definition of EHR event streams and introduces a novel taxonomy that categorises models based on their handling of event time, type, and value. We systematically review training strategies, ranging from supervised learning to self-supervised methods, and provide a comprehensive discussion of applications across clinical scenarios. Finally, we identify critical open challenges and future directions, with the aim of clarifying the current landscape and guiding the development of next-generation healthcare models.


[33] 2603.14017

A Multi-Objective Learning Approach for Adaptive Waveform Selection in Integrated Sensing and Communications Systems

Integrated Sensing and Communications (ISAC) has emerged as a key enabler for sixth generation (6G) wireless systems by jointly supporting data transmission and environmental awareness within a unified framework. However, communication and sensing functionalities impose inherently conflicting performance requirements, particularly in heterogeneous networks where users may demand sensing-only, communication-only, or joint services. Selecting a waveform that satisfies diverse service demands therefore becomes a challenging multi-objective decision problem. In this paper, a multi-objective learning approach for adaptive waveform selection in ISAC systems is proposed. A simulation-driven evaluation framework is developed to assess multiple waveform candidates across communication, sensing, and joint performance metrics. Instead of enforcing scalar utility aggregation, waveform performance is represented in a multi-dimensional objective space where Pareto-optimal candidates are identified for each scenario. A dataset is generated by varying user demand distributions and channel conditions, and multi-label targets are constructed based on Pareto dominance. Machine learning models are trained to learn the mapping between network conditions and Pareto-optimal waveform sets, enabling fast waveform selection under dynamic network states. Simulation results demonstrate that the proposed framework effectively adapts waveform selection to heterogeneous service requirements while preserving sensing-communication trade-offs, providing a forward-looking perspective for 6G and beyond ISAC deployments.
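
The multi-label target construction reduces to a standard Pareto-dominance test over the per-waveform scores. A minimal sketch, with hypothetical two-objective scores (communication rate and sensing accuracy, both to be maximized):

```python
import numpy as np

def pareto_optimal(scores):
    """Boolean mask of Pareto-optimal rows; all objectives are maximized.
    scores: (n_candidates, n_objectives) array."""
    n = scores.shape[0]
    mask = np.ones(n, bool)
    for i in range(n):
        for j in range(n):
            # j dominates i if it is >= in every objective and > in at least one
            if i != j and np.all(scores[j] >= scores[i]) and np.any(scores[j] > scores[i]):
                mask[i] = False
                break
    return mask

# Hypothetical per-waveform scores: [communication rate, sensing accuracy]
scores = np.array([[0.9, 0.2],   # comms-heavy waveform
                   [0.5, 0.5],   # balanced waveform
                   [0.2, 0.9],   # sensing-heavy waveform
                   [0.4, 0.4]])  # dominated by the balanced one
labels = pareto_optimal(scores)  # multi-label training target
```

The dominated candidate gets a negative label while all three non-dominated waveforms stay positive, which is exactly the set-valued target the classifier is trained to reproduce.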


[34] 2603.14018

LLM-Guided Safe Reinforcement Learning for Energy System Topology Reconfiguration

The increasing penetration of renewable generation and the growing variability of electrified demand introduce substantial operational uncertainty to modern power systems. Topology reconfiguration is widely recognized as an effective and economical means to enhance grid resilience. Due to the coexistence of AC power-flow constraints and discrete switching decisions, topology reconfiguration in large-scale systems leads to a highly nonlinear and nonconvex optimization problem, making traditional methods computationally prohibitive. Consequently, several studies have explored reinforcement learning-based approaches to improve scalability and operational efficiency. However, their practical implementation is challenged by the high-dimensional combinatorial action space and the need to ensure safety during learning-based decision-making. To address these challenges, this paper presents a safe and intelligent topology control framework that integrates Large Language Models (LLMs) with a Safety Soft Actor-Critic (Safety-SAC) architecture. Operational voltage and thermal limits are reformulated into smooth safety-cost signals, enabling risk-aware policy optimization within a constrained Markov decision process. A knowledge-based Safety-LLM module is further introduced to refine unsafe or suboptimal transitions through domain knowledge and state-informed reasoning, thus guiding the learning agent toward safer and more effective switching actions. Experiments on the IEEE 36-bus and 118-bus Grid2Op benchmarks show that the proposed method consistently achieves higher reward, longer survival time, and lower safety cost than SAC, ACE, and their safety-enhanced variants. These results demonstrate the potential of combining LLM-based reasoning with safe reinforcement learning to achieve scalable and reliable grid topology control.


[35] 2603.14032

Beyond Two-stage Diffusion TTS: Joint Structure and Content Refinement via Jump Diffusion

Diffusion- and flow-matching-based TTS faces a tension between discrete temporal structure and continuous spectral modeling. Two-stage models diffuse on fixed alignments, often collapsing to mean prosody; single-stage models avoid explicit durations but suffer alignment instability. We propose a jump-diffusion framework where discrete jumps model temporal structure and continuous diffusion refines spectral content within one process. Even in its one-shot degenerate form, our framework achieves 3.37% WER vs. 4.38% for Grad-TTS with improved UTMOSv2 on LJSpeech. The full iterative UDD variant further enables adaptive prosody, autonomously inserting natural pauses in out-of-distribution slow speech rather than stretching uniformly. Audio samples are available at this https URL.


[36] 2603.14060

Energy-Aware Integrated Proactive Maintenance Planning and Production Scheduling

Demand-side energy management, such as the real-time pricing (RTP) program, offers manufacturers opportunities to reduce energy costs by shifting production to low-price hours. However, this strategy is challenging to implement when machine degradation is considered, as degraded machines have decreased processing capacity and increased energy consumption. Proactive maintenance (PM) can restore machine health but requires production downtime, creating a challenging trade-off: scheduling maintenance during low-price periods sacrifices energy savings opportunities, while deferring maintenance leads to capacity losses and higher energy consumption. To address this challenge, we propose a hierarchical bi-level control framework that jointly optimizes PM planning and runtime production scheduling while accounting for machine degradation. A higher-level optimization, with the lower-level model predictive control (MPC) embedded as a sub-problem, determines PM plans that minimize total operational costs under day-ahead RTP. At runtime, the lower-level MPC executes closed-loop production scheduling to minimize energy costs under realized RTP while meeting delivery targets. Simulation results from a lithium-ion battery pack assembly line case study demonstrate that the framework strategically shifts PM away from bottlenecks and high-price hours, meeting daily production targets while reducing energy costs.


[37] 2603.14197

On Globally Optimal Stochastic Policy Gradient Methods for Domain Randomized LQR Synthesis

Domain randomization is a simple, effective, and flexible scheme for obtaining robust feedback policies aimed at reducing the sim-to-real gap due to model mismatch. While domain randomization methods have yielded impressive demonstrations in the robotics-learning literature, general and theoretically motivated principles for designing optimization schemes that effectively leverage the randomization are largely unexplored. We address this gap by considering a stochastic policy gradient descent method for the domain randomized linear-quadratic regulator synthesis problem, a situation simple enough to provide theoretical guarantees. In particular, we demonstrate that stochastic gradients obtained by repeatedly sampling new systems at each gradient step converge to global optima under appropriate hyperparameter choices, and yield better controllers with lower variability compared to approaches that do not resample. Sampling is often a quick and cheap operation, so computing policy gradients with newly sampled systems at each iteration is preferable to evaluating gradients on a fixed set of systems.
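
A scalar toy version of the resampling scheme (illustrative only; the paper's analysis, gradient estimator, and hyperparameters differ): the system $x_{t+1} = a x_t + b u_t$ with randomized $a$, a static gain $u = -kx$, and a fresh sample of $a$ drawn before every gradient step.

```python
import numpy as np

rng = np.random.default_rng(0)
q, r, b = 1.0, 0.1, 1.0                   # LQR weights and input gain (illustrative)

def cost(k, a, T=50, x0=1.0):
    """Finite-horizon LQR cost of the static gain u = -k x on x+ = a x + b u."""
    x, J = x0, 0.0
    for _ in range(T):
        u = -k*x
        J += q*x**2 + r*u**2
        x = a*x + b*u
    return J

k, lr, eps = 0.5, 1e-3, 1e-4
for _ in range(2000):
    a = rng.normal(1.0, 0.1)              # resample a new system every step
    g = (cost(k + eps, a) - cost(k - eps, a))/(2*eps)   # central-difference gradient
    k -= lr*g                             # stochastic policy gradient step
```

Each iteration sees a different plant, so the iterates descend the *expected* cost over the randomization distribution rather than overfitting to a fixed sampled batch, which is the contrast the abstract draws with non-resampling approaches.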


[38] 2603.14263

Geometry-Aware Set-Membership Multilateration: Directional Bounds and Anchor Selection

In this paper, we study anchor selection for range-based localization under unknown-but-bounded measurement errors. We start from the convex localization set $\mathcal{X}=\mathcal{X}_d\cap\mathcal{H}$ recently introduced by Calafiore et al., where $\mathcal{X}_d$ is a polyhedron obtained from pairwise differences of squared-range equations between the unknown location $x$ and the anchors, and $\mathcal{H}$ is the intersection of upper-range hyperspheres. Our first goal is offline design: we derive geometry-only E- and D-type scores from the centered scatter matrix $S(A)=AQ_mA^\top$, where $A$ collects the anchor coordinates and $Q_m=I_m-\frac{1}{m}\mathbf{1}\mathbf{1}^\top$ is the centering projector, showing that $\lambda_{\min}(S(A))$ controls worst-direction and diameter surrogates for the polyhedral certificate $\mathcal{X}_d$, while $\det S(A)$ controls principal-axis volume surrogates. Our second goal is online uncertainty assessment for a selected subset of anchors: exploiting the special structure $\mathcal{X}=\mathcal{X}_d\cap\mathcal{H}$, we derive a simplex-aggregated enclosing ball and an exact support-function formula for $\mathcal{H}$, which lead to finite hybrid bounds for the actual localization set $\mathcal{X}$, even when the polyhedral certificate deteriorates. Numerical experiments are performed in two dimensions, showing that geometry-based subset selection is close to an oracle combinatorial search, that the D-score slightly dominates the E-score for the area-oriented metric considered here, and that the new $\mathcal{H}$-aware certificates track the realized size of the selected localization set closely.


[39] 2603.14266

Modeling, Optimization and Electromagnetic Validation of Stacked Intelligent Metasurfaces by Using a Multiport Network Model

Stacked intelligent metasurfaces (SIMs) extend the concept of reconfigurable intelligent surfaces by cascading multiple programmable layers, enabling advanced electromagnetic wave transformations for communication and sensing applications. However, most existing optimization frameworks rely on simplified channel abstractions that may overlook key electromagnetic effects such as multiport coupling, circuit losses, and non-ideal hardware behavior. In this paper, we develop a modeling and optimization framework for SIMs based on a multiport network representation using scattering parameters. The proposed formulation captures realistic circuit characteristics and mutual interactions among SIM ports while remaining amenable to optimization. The resulting models are validated through electromagnetic simulations, enabling a systematic comparison between idealized and practical SIM configurations. Numerical results for communication and sensing scenarios confirm that the proposed framework provides accurate performance predictions and enables the effective design of SIM configurations under realistic electromagnetic conditions.


[40] 2603.14275

Controllable Accent Normalization via Discrete Diffusion

Existing accent normalization methods do not typically offer control over accent strength, yet many applications, such as language learning and dubbing, require tunable accent retention. We propose DLM-AN, a controllable accent normalization system built on masked discrete diffusion over self-supervised speech tokens. A Common Token Predictor identifies source tokens that likely encode native pronunciation; these tokens are selectively reused to initialize the reverse diffusion process. This provides a simple yet effective mechanism for controlling accent strength: reusing more tokens preserves more of the original accent. DLM-AN further incorporates a flow-matching Duration Ratio Predictor that automatically adjusts the total duration to better match the native rhythm. Experiments on multi-accent English data show that DLM-AN achieves the lowest word error rate among all compared systems while delivering competitive accent reduction and smooth, interpretable accent strength control.


[41] 2603.14298

Topological Conditions for Echo Chamber Formation under the FJ model: A Cluster Consensus-based Approach

The Friedkin-Johnsen (FJ) model is a popular opinion dynamics model that explains the disagreement that can occur even among closely interacting individuals. Cluster consensus is a special type of disagreement, where agents in a network split into subgroups such that those within a subgroup agree and those in different subgroups disagree. In large-scale social networks, users often distribute into echo chambers (i.e., groups of users with aligned views) while discussing contested issues such as electoral politics, social norms, etc. Additionally, they are exposed only to opinions and news sources that align with their existing beliefs. Hence, the interaction network plays a key role in the formation of an echo chamber. Since cluster consensus can represent echo chambers in a social network, we examine the conditions for cluster consensus under the FJ model with the objective of determining the properties of the interaction network that lead to echo chamber formation. We present topology-based necessary and sufficient conditions for cluster consensus under the FJ model, regardless of the edge weights in the network and the stubbornness values (which are difficult-to-estimate parameters in a social network). A major advantage of the proposed results is that they are applicable to arbitrary digraphs. Moreover, using the proposed conditions, we explain the emergence of bow-tie structures which are often observed in real-world echo chambers. Finally, we also develop a computationally feasible methodology to verify the proposed conditions for cluster consensus.
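
A minimal sketch of the FJ dynamics $x(k+1) = \Lambda W x(k) + (I-\Lambda)\,x(0)$ on a hand-built two-community digraph (illustrative; the paper's conditions cover arbitrary digraphs): agents 0 and 3 are fully stubborn sources ($\lambda_i = 0$), the remaining agents only listen inside their own community ($\lambda_i = 1$), so cluster consensus follows from the topology alone, independent of the particular edge weights.

```python
import numpy as np

n = 6
W = np.zeros((n, n))                 # row-stochastic influence digraph
W[1, 0] = W[2, 0] = 1.0              # community A follows source agent 0
W[4, 3] = W[5, 3] = 1.0              # community B follows source agent 3
lam = np.array([0., 1., 1., 0., 1., 1.])   # susceptibilities (0 = fully stubborn)
Lam = np.diag(lam)

x0 = np.array([1.0, 0.2, -0.3, -1.0, 0.5, 0.0])   # initial opinions
x = x0.copy()
for _ in range(100):
    x = Lam @ (W @ x) + (np.eye(n) - Lam) @ x0    # FJ opinion update
# opinions settle into two clusters anchored at the stubborn sources
```

The steady state is [1, 1, 1, -1, -1, -1]: within each community everyone agrees with its source and the two communities disagree, i.e., a two-cluster echo chamber induced purely by the interaction topology.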


[42] 2603.14317

AI/ML for mobile networks: Current status in Rel. 19 and challenges ahead

The transformative power of artificial intelligence (AI) and machine learning (ML) is recognized as a key enabler for sixth generation (6G) mobile networks by both academia and industry. Research on AI/ML in mobile networks has been ongoing for years, and the 3rd generation partnership project (3GPP) launched standardization efforts to integrate AI into mobile networks. However, a comprehensive review of the current status and challenges of the standardization of AI/ML for mobile networks is still missing. To this end, we provide a comprehensive review of the standardization efforts by 3GPP on AI/ML for mobile networks. This includes an overview of the general AI/ML framework, representative use cases (i.e., CSI feedback, beam management, and positioning), and the corresponding evaluation metrics. We emphasize the key research challenges in dataset preparation, generalization evaluation, and baseline AI/ML model selection. Using CSI feedback as a case study, we demonstrate that, when dataset 2 serves as the test set, the pre-training-fine-tuning paradigm (i.e., pre-training on dataset 1 and fine-tuning on dataset 2) outperforms training on dataset 2 alone. Moreover, we observe the largest performance gains from fine-tuning in Transformer-based models, showing their strong generalization potential at large floating-point operation (FLOP) counts. Finally, we outline future research directions for the application of AI/ML in mobile networks.


[43] 2603.14351

Clutter-Resilient ISAC for Low-Altitude Wireless Networks: A 5G Base Station-Compatible Protocol, Waveform, and Prototype

Integrated sensing and communications (ISAC) has been envisioned as a promising solution to support emerging services in low-altitude wireless networks (LAWNs), where upgrading 5G ground base stations (GBSs) toward new active sensing systems with wide coverage, low cost, high accuracy, and favorable spectrum compatibility is strongly desired. However, such an evolution faces several critical challenges, particularly in the detection and tracking of weak and slow unmanned aerial vehicles (UAVs). These challenges include ISAC waveform design, clutter cancellation resilient to high clutter-to-noise ratios (CNRs), and efficient Doppler separation between UAVs and clutter. To that end, we summarize potential solutions and propose a comprehensive framework for implementing a 5G-Advanced (5G-A) GBS. Outfield experiments demonstrate that the developed 5G-A GBS can effectively track weak and slow targets at distances exceeding 1 kilometer, while incurring only a 1.2% downlink rate loss relative to a commercial 5G-A GBS.


[44] 2603.14383

Geometric Framework for Robust Order Detection in Delay-Coordinates Dynamic Mode Decomposition

Delay-coordinates dynamic mode decomposition (DC-DMD) is widely used to extract coherent spatiotemporal modes from high-dimensional time series. A central challenge is distinguishing dynamically meaningful modes from spurious modes induced by noise and order overestimation. We show that model order detection and mode selection in DC-DMD are fundamentally problems of subspace geometry. Specifically, true modes are characterized by concentration within a low-dimensional signal subspace, whereas spurious modes necessarily retain non-negligible components outside any moderate overestimate of that subspace. This geometric distinction yields a perturbation-robust definition of true and spurious modes and leads to two complementary, fully data-driven selection criteria. The first is derived directly from the geometric distinction and uses a data-driven proxy of the signal subspace to compute a residual score. The second arises from a new operator-theoretic analysis of delay embedding. Using a block-companion formulation, we show that all modes exhibit a Kronecker-Vandermonde (KV) structure induced by the delay coordinates, and true modes are distinguished by the degree to which they conform to it. Importantly, we also show that this deviation is governed precisely by the geometric residual. In addition, our analysis provides a principled explanation for the empirical behavior of magnitude- and norm-based heuristics, clarifying when and why they fail under delay coordinates. Extensive numerical experiments confirm the theoretical predictions and demonstrate that the proposed geometric and structure-based methods achieve robust and accurate order detection and mode selection, consistently outperforming existing baselines across noise levels, spectral separations, damping regimes, and embedding lengths.
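
A minimal delay-coordinates DMD sketch on a noise-free synthetic two-tone signal (a toy setting, not the paper's detection pipeline): the delay embedding lifts the scalar series into a Hankel matrix whose reduced DMD eigenvalues recover the two oscillation frequencies; overestimating the rank beyond the signal's true order is what introduces the spurious modes the paper's geometric criteria are designed to reject.

```python
import numpy as np

dt = 0.05
t = np.arange(0, 20, dt)
y = np.sin(2*np.pi*1.0*t) + 0.5*np.sin(2*np.pi*2.3*t)   # two-tone test signal

d = 10                                   # number of delays (embedding order)
H = np.stack([y[i:len(y) - d + i] for i in range(d)])   # Hankel (delay) matrix
X, Y = H[:, :-1], H[:, 1:]               # snapshot pairs shifted by one step

U, s, Vh = np.linalg.svd(X, full_matrices=False)
r = 4                                    # true order: 2 tones x 2 conjugate modes
Atil = U[:, :r].T @ Y @ Vh[:r].T @ np.diag(1/s[:r])     # reduced DMD operator
eig = np.linalg.eigvals(Atil)
freqs = np.abs(np.angle(eig))/(2*np.pi*dt)               # Hz from eigenvalue angles
```

With the rank fixed at the true order 4, the four eigenvalues form two conjugate pairs on the unit circle whose angles give back 1.0 Hz and 2.3 Hz; choosing r larger than 4 would admit near-zero singular directions and produce exactly the kind of spurious modes the residual score is meant to flag.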


[45] 2603.14384

Low-Data Predictive Maintenance of Railway Station Doors and Elevators Using Bayesian Proxy Flow Modeling

This paper proposes a low-data predictive maintenance framework for automatic doors and elevators in a railway station building. The method is intended for assets without direct condition monitoring, where only aggregate passenger traffic information and expert knowledge about movement patterns are available. Passenger flows are modeled on a reduced station graph using a Bayesian formulation with uncertain totals and routing shares. The inferred flows are converted into approximate operating-cycle loads for doors and elevators through simple stochastic proxy relations. These loads are combined with uncertain age- and cycle-based maintenance thresholds to estimate the probability that predefined maintenance conditions have been reached. A cost-aware scheduling model is then used to align maintenance activities while accounting for service costs, disruption, delay penalties, and grouping opportunities within each asset class. The framework is illustrated on a simulated case study reflecting a real station layout. The results show that proxy operational data can support maintenance scheduling with low incremental implementation cost and can improve alignment relative to a calendar-based policy.


[46] 2603.14386

Data-Enabled Policy and Value Iteration for Continuous-Time Linear Quadratic Output Feedback Control

This paper proposes efficient policy iteration and value iteration algorithms for the continuous-time linear quadratic regulator problem with unmeasurable states and unknown system dynamics, from the perspective of direct data-driven control. Specifically, by re-examining the data characteristics of input-output filtered vectors and introducing QR decomposition, an improved substitute state construction method is presented that further eliminates redundant information, ensures a full row rank data matrix, and enables a complete parameterized representation of the feedback controller. Furthermore, the original problem is transformed into an equivalent linear quadratic regulator problem defined on the substitute state with a known input matrix, and the stabilizability and detectability of the transformed system are verified. Consequently, model-free policy iteration and value iteration algorithms are designed that fully exploit the full row rank substitute state data matrix. The proposed algorithms offer distinct advantages: they avoid the need for prior knowledge of the system order or the calculation of signal derivatives and integrals; the iterative equations can be solved directly without relying on the traditional least-squares paradigm, guaranteeing feasibility in both single-output and multi-output settings; and they demonstrate superior numerical stability, reduced data demand, and higher computational efficiency. Moreover, heuristic results on trajectory generation for continuous-time systems are discussed, circumventing potential failure modes of existing approaches.
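The model-free iterations described above are data-driven counterparts of classical model-based policy iteration for continuous-time LQR (Kleinman's method). A minimal model-based sketch, with an invented two-state plant (not from the paper):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

# Toy unstable second-order plant (illustrative values only).
A = np.array([[0.0, 1.0], [1.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

K = np.array([[3.0, 3.0]])  # any initial stabilizing gain

for _ in range(20):
    Ac = A - B @ K
    # Policy evaluation: Lyapunov equation  Ac^T P + P Ac = -(Q + K^T R K)
    P = solve_continuous_lyapunov(Ac.T, -(Q + K.T @ R @ K))
    # Policy improvement:  K <- R^{-1} B^T P
    K = np.linalg.solve(R, B.T @ P)

# Reference: the optimal gain from the algebraic Riccati equation.
P_star = solve_continuous_are(A, B, Q, R)
```

The data-driven algorithms in the paper replace the model-dependent Lyapunov step with equations built from measured input-output data on the substitute state.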


[47] 2603.14388

Context-Aware Adaptive Shared Control for Magnetically-Driven Bimanual Dexterous Micromanipulation

Magnetically actuated robots provide a promising untethered platform for navigation in confined environments, enabling biological studies and targeted micro-delivery. However, dexterous manipulation in complex structures remains challenging. While single-arm magnetic actuation suffices for simple transport, steering through tortuous or bifurcating channels demands coordinated control of multiple magnetic sources to generate the torques required for precise rotation and directional guidance. Bimanual teleoperation enables such dexterous steering but imposes high cognitive demands, as operators must handle the nonlinear dynamics of magnetic actuation while coordinating two robotic manipulators. To address these limitations, we propose Bi-CAST, a context-aware adaptive shared control framework for bimanual magnetic micromanipulation. A multimodal network fuses spatio-temporal visual features, spatial risk metrics, and historical states to continuously adjust the control authority of each manipulator in real time. In parallel, a bidirectional haptic interface integrates force-based intent recognition with risk-aware guidance, enabling force feedback to provide a continuous channel for dynamic human-machine authority negotiation. We validate the framework through user studies with eight participants performing three navigation tasks of increasing complexity in a vascular phantom. Compared with fixed authority and discrete switching baselines, Bi-CAST achieves up to 76.6% reduction in collisions, 25.9% improvement in trajectory smoothness, and 44.4% lower NASA-TLX workload, while delivering the fastest task completion times.


[48] 2603.14396

DexterousMag: A Reconfigurable Electromagnetic Actuation System for Miniature Helical Robot

Despite the promise of magnetically actuated miniature helical robots for minimally invasive interventions, state-of-the-art electromagnetic actuation systems are often space-inefficient and geometrically fixed. These constraints hinder clinical translation and, moreover, prevent task-adaptive trade-offs among workspace coverage, energy distribution, and field/gradient capability. We present DexterousMag, a robot-arm-assisted three-coil electromagnetic actuation system that enables continuous geometric reconfiguration of a compact coil group, thereby redistributing magnetic-field and gradient capability for task-adaptive operation. The reconfiguration is realized by a parallel mechanism that exposes a single geometric DOF of the coil group, conveniently parameterized by the polar angle. Using an FEM-based modeling pipeline, we precompute actuation and gradient libraries and quantify the resulting trade-offs under current limits: configurations that favor depth reach expand the feasible region but reduce peak field/gradient, whereas configurations that favor near-surface capability concentrate stronger fields/gradients and support lifting. We validate these trade-offs on representative tasks (deep translation, planar tracking, and 3D lifting) and further demonstrate a proof-of-concept online geometry scheduling scheme for combined tasks, benchmarked against fixed-geometry settings. Overall, DexterousMag establishes continuous geometric reconfiguration as an operational mechanism for enlarging the practical envelope of miniature helical robot actuation while improving energy efficiency and safety.


[49] 2603.14403

Robust Safety Filters for Lipschitz-Bounded Adaptive Closed-Loop Systems with Structured Uncertainties

Adaptive control provides closed-loop stability and reference tracking for uncertain dynamical systems through online parameter adaptation. These properties alone, however, do not ensure safety in the sense of forward invariance of state constraints, particularly during transient phases of adaptation. Control barrier function (CBF)-based safety filters have been proposed to address this limitation, but existing approaches often rely on conservative constraint tightening or static safety margins within quadratic program formulations. This paper proposes a reference-based adaptive safety framework for systems with structured parametric uncertainty that explicitly accounts for transient plant-reference mismatch. Safety is enforced at the reference level using a barrier-function-based filter, while adaptive control drives the plant to track the safety-certified reference. By exploiting Lipschitz bounds on the closed-loop error dynamics, a robust CBF condition is derived and reformulated as a convex second-order cone program (SOCP). The resulting approach reduces conservatism while preserving formal guarantees of forward invariance, stability, and tracking.
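As a stripped-down illustration of barrier-based filtering (a single linear constraint with a closed-form minimum-norm correction, rather than the paper's robust SOCP; the system, barrier, and numbers are invented):

```python
import numpy as np

def cbf_filter(u_nom, a, b):
    """Closed-form single-constraint CBF filter:
    min ||u - u_nom||^2  s.t.  a @ u + b >= 0."""
    viol = a @ u_nom + b
    if viol >= 0:
        return u_nom          # nominal input is already safe
    return u_nom - viol * a / (a @ a)  # minimum-norm projection onto the constraint

# Example: single integrator x_dot = u, safe set h(x) = 1 - x >= 0.
# CBF condition: h_dot + alpha*h >= 0  ->  -u + alpha*(1 - x) >= 0.
alpha, x = 1.0, 0.9
u_nom = np.array([2.0])            # nominal controller pushes toward the boundary
a = np.array([-1.0])
b = alpha * (1.0 - x)
u_safe = cbf_filter(u_nom, a, b)   # clipped so the barrier condition holds
```

The paper's SOCP generalizes this by robustifying the constraint against Lipschitz-bounded plant-reference mismatch instead of assuming exact state knowledge.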


[50] 2603.14408

DRCC-LPVMPC: Robust Data-Driven Control for Autonomous Driving and Obstacle Avoidance

Safety in obstacle avoidance is critical for autonomous driving. While model predictive control (MPC) is widely used, simplified prediction models such as linearized or single-track vehicle models introduce discrepancies between predicted and actual behavior that can compromise safety. This paper proposes a distributionally robust chance-constrained linear parameter-varying MPC (DRCC-LPVMPC) framework that explicitly accounts for such discrepancies. The single-track vehicle dynamics are represented in a quasi-linear parameter-varying (quasi-LPV) form, with model mismatches treated as additive uncertainties of unknown distribution. By constructing chance constraints from finite sampled data and employing a Wasserstein ambiguity set, the proposed method avoids restrictive assumptions on boundedness or Gaussian distributions. The resulting DRCC problem is reformulated as tractable convex constraints and solved in real time using a quadratic programming solver. Recursive feasibility of the approach is formally established. Simulation and real-world experiments demonstrate that DRCC-LPVMPC maintains safer obstacle clearance and more reliable tracking than conventional nonlinear MPC and LPVMPC controllers under significant uncertainties.


[51] 2603.14411

A Comprehensive Survey of Redundancy Systems with a Focus on Triple Modular Redundancy (TMR)

Despite its maturity, the field of fault-tolerant redundancy suffers from significant terminological fragmentation, where functionally equivalent methods are frequently described under disparate names across academic and industrial domains. This survey addresses this ambiguity by providing a structured and comprehensive analysis of redundancy techniques, with a primary focus on Triple Modular Redundancy (TMR). A unified taxonomy is established to classify redundancy strategies into Spatial, Temporal, and Mixed categories, alongside the introduction of a novel five-class framework for voter architectures. Key findings synthesize practical tradeoffs, contrasting high-reliability spatial TMR for safety-critical applications against resource-efficient temporal methods for constrained systems. Furthermore, the shift toward Mixed and Adaptive TMR (e.g., Approximate Triple Modular Redundancy (ATMR), X-Rel) for dynamic and error-tolerant applications, such as Artificial Intelligence (AI) acceleration, is explored. This work identifies critical research gaps, including the threat of Multi-Bit Upsets (MBUs) in sub-28nm technologies, the scarcity of public-domain data on proprietary high-integrity systems, and the absence of high-level toolchains for dynamic reconfiguration. Finally, suggestions are offered for future research directions, emphasizing the need for terminological standardization, MBU-resilient design methodologies, and the development of open-source tools for adaptive fault tolerance.
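The spatial-TMR principle reduces to bitwise majority voting across three redundant modules; a minimal sketch (invented bit patterns, for illustration):

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority voter: each output bit takes the value held
    by at least two of the three redundant modules."""
    return (a & b) | (a & c) | (b & c)

# A fault confined to one module (here, two upset bits) is masked:
golden = 0b1011_0010
faulty = golden ^ 0b0100_0001  # single-module multi-bit upset
voted = tmr_vote(golden, faulty, golden)
```

Note this masks any fault confined to one module, but not the multi-bit upsets spanning two replicas that the survey flags as a research gap.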


[52] 2603.14437

Near-Field Channel Estimation for mmWave/THz Communications with Extremely Large-Scale UPAs

Extremely large antenna arrays (ELAAs) are widely adopted in mmWave/THz communications to compensate for the severe path loss, wherein channel estimation remains a significant challenge, since the Rayleigh distance of ELAAs stretches to tens or even hundreds of meters and the near-field channel model must be considered. Existing polar-domain based methods and block-sparse based methods were originally devised for near-field channel estimation with Uniform Linear Arrays (ULAs). The polar-domain based method can be applied to Uniform Planar Arrays (UPAs), but its performance there is mediocre because it ignores the specific sparsity structure of UPA near-field channels. Meanwhile, the block-sparse based method cannot be extended directly to the UPA scenario. To address these issues, we first reformulate the original UPA near-field channel as an outer product of two ULA near-field channels and construct a modified two-dimensional DFT (2D-DFT) dictionary for it. With the proposed dictionary, we further prove that the UPA near-field channel admits a 2D block-sparse structure. Leveraging this specific sparse structure, we solve the channel estimation problem with the 2D Pattern-Coupled Sparse Bayesian Learning (2D-PCSBL) algorithm. Simulation results show that the proposed approach outperforms existing methods while maintaining comparable computational complexity.


[53] 2603.14442

Predicting power grid frequency dynamics with invertible Koopman-based architectures

The system frequency is a critical measure of power system stability, and understanding and modeling it are key to ensuring reliable power system operations. Koopman-based autoencoders are effective at approximating complex nonlinear data patterns, with potential applications in the frequency dynamics of power systems. However, their non-invertibility can result in a distorted latent representation, leading to significant prediction errors. Invertible neural networks (INNs) in combination with the Koopman operator framework provide a promising approach to address these limitations. In this study, we analyze different INN architectures and train them on simulation datasets. We further apply extensions to the networks to address inherent limitations of INNs and evaluate their impact. We find that coupling-layer INNs achieve the best performance when used in isolation. In addition, we demonstrate that hybrid approaches can improve performance when combined with suitable INNs, but reduce generalization when paired with disadvantageous architectures. Overall, our results provide a clearer overview of how architectural choices influence INN performance, offering guidance for selecting and designing INNs for modeling power system frequency dynamics.


[54] 2603.14450

Surgi-HDTMR: Closing the Sensorimotor Loop in Bimanual Microsurgery via Haptics, Digital Twin, and Mixed Reality

Robotic microsurgery demands precise bimanual control, intuitive interaction, and informative force feedback. However, most training platforms for robotic microsurgery lack immersive 3D interaction and high-fidelity haptics. Here, we present Surgi-HDTMR, a mixed-reality (MR) and digital-twin (DT) training system that couples bimanual haptic teleoperation with a benchtop microsurgical robotic platform, and 3D-printed phantoms. A metrically co-registered, time-synchronized DT aligns in-situ MR guidance with the physical workspace and drives a depth-adaptive haptic model that renders contact, puncture, and tissue-retraction forces. In a within-subjects study of simulated cortical navigation and tumor resection, Surgi-HDTMR shortened task time, reduced harmful contacts and collisions, and improved perceptual accuracy relative to non-haptic and non-adaptive baselines. These results suggest that tightly coupling MR overlays with a synchronized DT, together with depth-adaptive haptics, can accelerate skill acquisition and improve safety in robot-assisted microsurgery, pointing toward next-generation surgical training.


[55] 2603.14461

CATFA-Net: A Trans-Convolutional Approach for Accurate Medical Image Segmentation

Convolutional blocks have played a crucial role in advancing medical image segmentation by excelling in dense prediction tasks. However, their inability to effectively capture long-range dependencies has limited their performance. Transformer-based architectures, leveraging attention mechanisms, address this limitation by modeling global context and creating expressive feature representations. Recent research has explored this potential by introducing hybrid frameworks that combine transformer encoders with convolutional decoders. Despite their advantages, these approaches face challenges such as limited inductive bias, high computational cost, and reduced robustness to data variability. To overcome these issues, this study introduces CATFA-Net, a novel and efficient segmentation framework designed to produce high-quality segmentation masks while reducing computational costs and increasing inference speed. CATFA-Net employs a hierarchical hybrid encoder architecture with a lightweight convolutional decoder backbone. Its transformer-based encoder uses a new Context Addition Attention mechanism that captures inter-image dependencies without the quadratic complexity of standard attention mechanisms. Features from the transformer branch are fused with those from the convolutional branch through a proposed Cross-Channel Attention mechanism, which helps retain spatial and channel information during downsampling. Additionally, a Spatial Fusion Attention mechanism in the decoder refines features while reducing background noise ambiguity. Extensive evaluations on five publicly available datasets show that CATFA-Net outperforms existing methods in accuracy and efficiency. The framework sets new state-of-the-art Dice scores on GLaS (94.48%) and ISIC 2018 (91.55%). Robustness tests and external validation further demonstrate its strong ability to generalize in binary segmentation tasks.


[56] 2603.14509

Bayesian and Classical Feature Ranking for Interpretable BLDC Fault Diagnosis

This paper compares Bayesian and classical feature ranking methods for interpretable fault diagnosis of brushless DC (BLDC) motors. Two Bayesian approaches, spike-and-slab and ARD logistic ranking, are evaluated against three classical baselines on a public BLDC benchmark in binary and multiclass settings using current-based, rotational-speed-based, and combined feature sets. The strongest overall results are obtained for the combined representation. In binary classification, ReliefF achieves the highest balanced accuracy of 0.923, while ARD logistic and spike-and-slab remain very close at 0.919 and 0.920 with much smaller subsets ($k=5$). In multiclass classification, ARD logistic performs best for the combined variant with balanced accuracy 0.914, followed closely by LASSO (0.913) and spike-and-slab (0.912). The results show that Bayesian ranking is particularly competitive for current-only and combined descriptors, while ReliefF remains especially effective for speed-based ranking. Because the benchmark consists of short segmented observations from a limited number of experimental conditions, the findings are interpreted primarily as benchmark-specific evidence rather than strong claims of fault generalization.


[57] 2603.14516

Consensus in Plug-and-Play Heterogeneous Dynamical Networks: A Passivity Compensation Approach

This paper investigates output consensus in heterogeneous dynamical networks within a plug-and-play framework. The networks are interconnected through nonlinear diffusive couplings and operate in the presence of measurement and communication noise. Focusing on systems that are input feedforward passive (IFP), we propose a passivity-compensation approach that exploits the surplus passivity of coupling links to locally offset shortages of passivity at the nodes. This mechanism enables subnetworks to be interconnected without requiring global reanalysis, thereby preserving modularity. Specifically, we derive locally verifiable interface conditions, expressed in terms of passivity indices and coupling gains, to guarantee that consensus properties of individual subnetworks are preserved when forming larger networks.


[58] 2603.14542

A Decoupling-based Approach for Signature Estimation of Wideband XL MIMO-FMCW Radars

Modern radars employing wideband signals and extremely large (XL) multiple-input multiple-output (MIMO) arrays can significantly improve range and angular resolution. However, when large bandwidth and array aperture are used simultaneously, the spatial delay across the array becomes comparable to the radar range resolution, leading to the spatial wideband effect (SWE). The SWE introduces several distortions including range migration (range squint), beam squint, and range-angle coupling (RAC), which spread the target response in the range-angle domain and may cause physically separated targets to overlap and mask each other. In this work, we propose a decoupling-based target detection and parameter estimation framework for MIMO frequency modulated continuous wave (FMCW) radar. The proposed method reformulates the joint range-angle estimation problem as a decoupled sequential frequency estimation problem, where the two-dimensional (2D) estimation is carried out through successive one-dimensional (1D) super-resolution estimations. Specifically, we employ orthogonal matching pursuit (OMP) to perform sparse recovery-based range and angle estimation with high resolution. The proposed decoupling strategy is further extended to spatial wideband XL-MIMO FMCW radar systems, enabling reliable detection and separation of targets even when their responses overlap due to severe RAC. Simulation results demonstrate that the proposed approach accurately detects multiple targets and successfully resolves overlapping target responses in the presence of SWE, outperforming conventional Fourier transform and clustering-based methods.
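A generic OMP sketch on an invented 1D partial-DFT dictionary (a stand-in for one of the paper's sequential 1D frequency estimations; dimensions, support, and amplitudes are made up):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily add the dictionary atom most
    correlated with the residual, then re-fit coefficients by least squares."""
    r, support = y.astype(complex), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.conj().T @ r))))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1], dtype=complex)
    x[support] = x_s
    return x

# Recover a 3-sparse spectrum from 32 samples of a 64-atom partial-DFT dictionary.
n, m, k = 64, 32, 3
A = np.exp(-2j * np.pi * np.outer(np.arange(m), np.arange(n)) / n) / np.sqrt(m)
x_true = np.zeros(n, dtype=complex)
x_true[[5, 20, 47]] = [1.0, -0.7, 0.5j]
y = A @ x_true
x_hat = omp(A, y, k)
```

The paper's decoupling strategy runs this kind of 1D super-resolution estimation twice (range, then angle) instead of one joint 2D search.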


[59] 2603.14606

Collective Grid: Privacy-Preserved Multi-Operator Energy Sharing Optimization via Federated Energy Prediction

Electricity consumption in mobile networks is increasing with the continued 5G expansion, rising data traffic, and more complex infrastructures. However, energy management is often handled independently by each mobile network operator (MNO), leading to limited coordination and missed opportunities for collective efficiency gains. To address this gap, we propose a privacy-preserving framework for automated energy infrastructure sharing among co-located MNOs. Our framework consists of three modules: (i) a federated learning-based privacy-preserving site energy consumption forecasting module, (ii) an orchestration module in which a mixed-integer linear program is solved to schedule energy purchases from the grid, utilization of renewable sources, and shared battery charging or discharging, based on real-time prices, forecasts, and battery state, and (iii) an energy source selection module which handles the selection of cost-effective power sources and storage actions based on predicted demand across MNOs for the next control window. Using data from operational networks, our experiments confirm that the proposed solution substantially reduces operational costs and outperforms non-sharing baselines, with gains that increase as network density rises in 5G-and-beyond deployments.


[60] 2603.14622

Progress-Based Fault Detection and Health-Aware Task Allocation for Heterogeneous Multi-Robot Systems

We present a progress-based fault detection module and its integration with dynamic task allocation for heterogeneous robot teams. The detector monitors a normalized task-completion signal with a lightweight Kalman filter (KF) and a normalized innovation squared (NIS) test, augmented with a low-rate stall gate, an uncertainty gate, and debounce logic. Health estimates influence the allocator via health-weighted costs and health-dependent masks; reallocation is event-triggered and regularized with an $\ell_1$ assignment-change penalty to limit reassignment churn while preserving feasibility through slack variables. The detector has constant per-robot update cost, and the allocation remains a convex quadratic program (QP). Experiments on a common team-task setup evaluate measurement-noise increases, velocity-slip biases, communication dropouts, and task abandonment. The results show timely detection in the noise and bias cases, maintained task completion with limited reassignment, and the expected observability delays under communication dropouts.
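The detector's core (a constant-velocity KF on the progress signal plus an NIS test) can be sketched as follows; the noise parameters, threshold, and fault scenario are illustrative, and the paper's stall gate, uncertainty gate, and debounce logic are omitted:

```python
import numpy as np

dt, q, r = 0.1, 1e-4, 1e-2
F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity model for task progress
Hm = np.array([[1.0, 0.0]])
Qn = q * np.eye(2)

x = np.zeros(2)
P = np.eye(2)
chi2_thresh = 6.63  # ~99th percentile of chi-square with 1 dof

def kf_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Qn
    # Innovation and normalized innovation squared (NIS)
    nu = z - (Hm @ x).item()
    S = (Hm @ P @ Hm.T).item() + r
    nis = nu * nu / S
    # Update
    K = (P @ Hm.T / S).ravel()
    x = x + K * nu
    P = (np.eye(2) - np.outer(K, Hm)) @ P
    return x, P, nis

# Nominal progress ramps up; the task is abandoned at step 50 (progress resets).
faults = []
for t in range(100):
    z = 0.01 * t if t < 50 else 0.0
    x, P, nis = kf_step(x, P, z)
    faults.append(nis > chi2_thresh)
```

The flagged steps would then feed the health-weighted costs and masks in the allocation QP.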


[61] 2603.14644

LUMINA: A Multi-Vendor Mammography Benchmark with Energy Harmonization Protocol

Publicly available full-field digital mammography (FFDM) datasets remain limited in size, clinical labels, and vendor diversity, which hinders the training of robust models. We present LUMINA, a curated, multi-vendor FFDM dataset that explicitly encodes acquisition energy and vendor metadata to expose clinically relevant appearance shifts that current benchmarks overlook. This innovative resource comprises 1824 images from 468 patients (960 benign, 864 malignant) with pathology-confirmed outcomes, BI-RADS assessments, and breast-density annotations. LUMINA spans six acquisition systems and both high- and low-energy styles, exposing vendor- and energy-driven appearance shifts. To reduce cross-vendor/energy drift while preserving lesion morphology, we introduce a foreground-only, pixel-space alignment ("energy harmonization") that aligns each image to a low-energy reference style, leaving the zero-valued background unchanged. By benchmarking modern CNN and transformer baselines on three clinically meaningful tasks -- diagnosis (benign vs. malignant), BI-RADS risk grouping, and density -- we unify single-vs-two-view evaluation and show that two-view models consistently outperform single-view; in our benchmark, EfficientNet-B0 attains AUC 93.54% for diagnosis, and Swin-T yields the best macro-AUC 89.43% for density. Harmonization improves AUC/ACC across backbones and yields more focal Grad-CAM localization around suspicious regions. As a richly annotated resource, LUMINA provides (a) a vendor-diverse, energy-labeled benchmark and (b) a model-agnostic harmonization protocol that together catalyze reliable, deployable mammography AI.


[62] 2603.14655

Two-Stage Heterogeneous Graph Neural Network for RIS-Aided Physical-Layer Security

This paper investigates physical-layer security (PLS) enabled by graph neural networks (GNNs). We propose a two-stage heterogeneous GNN (HGNN) to maximize the secrecy energy efficiency (SEE) of a reconfigurable intelligent surface (RIS)-assisted multi-input-single-output (MISO) system that serves multiple legitimate users (LUs) and eavesdroppers (Eves). The first stage formulates the system as a bipartite graph involving three types of nodes (RIS reflecting elements, LUs, and Eves), with the goal of generating the RIS phase shift matrix. The second stage models the system as a fully connected graph with two types of nodes (LUs and Eves), aiming to produce beamforming and artificial noise (AN) vectors. Both stages adopt an HGNN integrated with a multi-head attention mechanism, and the second stage incorporates two output methods: beam-direct and model-based approaches. The two-stage HGNN is trained in an unsupervised manner and designed to scale with the number of RIS reflecting elements, LUs, and Eves. Numerical results demonstrate that the proposed two-stage HGNN outperforms state-of-the-art GNNs in RIS-aided PLS scenarios. Compared with convex optimization algorithms, it reduces the average running time by three orders of magnitude with a performance loss of less than $4\%$. Additionally, the scalability of the two-stage HGNN is validated through extensive simulations.


[63] 2603.14829

A Spatio-Temporal-Frequency Transformer Framework for Near-Field Target Recognition

A target recognition framework relying on near-field integrated sensing and communication (ISAC) systems is proposed. By exploiting the distance-dependent spatial signatures provided by the near-field spherical wavefront, high-accuracy sensing is realized in a bandwidth-efficient manner. A spatio-temporal-frequency (STF) transformer framework is introduced for target recognition using electromagnetic features found in the wireless channel response. In particular, a lightweight spatial encoder is employed to extract features from the antenna array for each frame and subcarrier. These features are then fused by a time-frequency transformer head with positional embeddings to model temporal dynamics and cross-subcarrier correlations. Simulation results demonstrate that strong target recognition performance can be achieved even with limited bandwidth resources.


[64] 2603.14875

Flag-Preamble-Based Delay-Doppler Channel Estimation for Next-Evolution Waveforms

Accurate delay-Doppler channel estimation is critical for next-evolution waveforms (NEWs) to enable reliable signal detection. This paper proposes a robust channel estimation scheme that pairs Flag preamble sequences, optimized via an adaptive accelerated parallel majorization-minimization (AP-MM) algorithm, with a dedicated estimation procedure. To enable efficient, low-complexity parameter extraction and overcome the robustness issues of conventional greedy estimation, we introduce two key enhancements: a candidate selection strategy to mitigate spurious sidelobe peaks, and a global least squares (LS) refinement stage to eliminate error propagation caused by sidelobe masking effects. Numerical results demonstrate that the proposed scheme significantly outperforms existing algorithms, achieving the desired estimation accuracy.


[65] 2603.14877

SoulX-Duplug: Plug-and-Play Streaming State Prediction Module for Realtime Full-Duplex Speech Conversation

Recent advances in spoken dialogue systems have brought increased attention to human-like full-duplex voice interactions. However, our comprehensive review of this field reveals several challenges, including the difficulty in obtaining training data, catastrophic forgetting, and limited scalability. In this work, we propose SoulX-Duplug, a plug-and-play streaming state prediction module for full-duplex spoken dialogue systems. By jointly performing streaming ASR, SoulX-Duplug explicitly leverages textual information to identify user intent, effectively serving as a semantic VAD. To promote fair evaluation, we introduce SoulX-Duplug-Eval, extending widely used benchmarks with improved bilingual coverage. Experimental results show that SoulX-Duplug enables low-latency streaming dialogue state control, and the system built upon it outperforms existing full-duplex models in overall turn management and latency performance. We have open-sourced SoulX-Duplug and SoulX-Duplug-Eval.


[66] 2603.14889

Modeling and Benchmarking Spoken Dialogue Rewards with Modality and Colloquialness

The rapid evolution of end-to-end spoken dialogue systems demands transcending mere textual semantics to incorporate paralinguistic nuances and the spontaneous nature of human conversation. However, current methods struggle with two critical gaps: the modality gap, involving prosody and emotion, and the colloquialness gap, distinguishing written scripts from natural speech. To address these challenges, we introduce SDiaReward, an end-to-end multi-turn reward model trained on SDiaReward-Dataset, a novel collection of episode-level preference pairs explicitly targeting these gaps. It operates directly on full multi-turn speech episodes and is optimized with pairwise preference supervision, enabling joint assessment of modality and colloquialness in a single evaluator. We further establish ESDR-Bench, a stratified benchmark for robust episode-level evaluation. Experiments demonstrate that SDiaReward achieves state-of-the-art pairwise preference accuracy, significantly outperforming general-purpose audio LLMs. Further analysis suggests that SDiaReward captures relative conversational expressiveness beyond superficial synthesis cues, improving generalization across domains and recording conditions. Code, data, and demos are available at this https URL.


[67] 2603.14910

Transformers As Generalizable Optimal Controllers

We study whether optimal state-feedback laws for a family of heterogeneous Multiple-Input, Multiple-Output (MIMO) Linear Time-Invariant (LTI) systems can be captured by a single learned controller. We train one transformer policy on LQR-generated trajectories from systems with different state and input dimensions, using a shared representation with standardization, padding, dimension encoding, and masked loss. The policy maps recent state history to control actions without requiring plant matrices at inference time. Across a broad set of systems, it achieves empirically small sub-optimality relative to Linear Quadratic Regulator (LQR), remains stabilizing under moderate parameter perturbations, and benefits from lightweight fine-tuning on unseen systems. These results support transformer policies as practical approximators of near-optimal feedback laws over structured linear-system families.


[68] 2603.14912

Integrated Channel Sounding and Communication: Requirements, Architecture, Challenges, and Key Technologies

Channel models are essential for the design, evaluation, and optimization of wireless communication systems. The emerging space-air-ground-sea integrated network (SAGSIN), characterized by diverse service applications and extended-spectrum operations, places even greater demands on highly accurate channel models. However, conventional channel sounding is limited by generalized measurement campaigns, inadequate cross-band consistency, and insufficient real-time adaptability, making it unable to meet the needs of SAGSIN for scenario-specific and high-precision channel modeling. To address this challenge, we propose a novel technological framework, termed integrated channel sounding and communication (ICSC). By deeply integrating sounding and communication, the ICSC enables efficient and real-time acquisition of dynamic channel characteristics during communication processes, supporting fine-grained site- and scenario-specific measurements. Furthermore, leveraging artificial intelligence techniques, ICSC can identify channel conditions and adapt waveform parameters in real-time according to scenario variations, which in turn enhances communication performance. This article first introduces the fundamental principles of the ICSC framework, elaborates on its core concepts and key advantages, and demonstrates its feasibility through the development of an integrated verification system (IVS). Subsequently, the potential applications and opportunities of the ICSC are analyzed in depth, followed by a discussion of its future development directions and remaining challenges.


[69] 2603.14917

Spectrogram features for audio and speech analysis

Spectrogram-based representations have grown to dominate the feature space for deep learning audio analysis systems, and are often adopted for speech analysis as well. Initially, the primary motivator for spectrogram-based representations was their ability to present sound as a two-dimensional signal in the time-frequency plane, which not only provides an interpretable physical basis for analysing sound, but also unlocks the use of a wide range of machine learning techniques, such as convolutional neural networks, which had been developed for image processing. A spectrogram is a matrix characterised by the resolution and span of its two dimensions, as well as by the representation and scaling of each element. Many possibilities for these three characteristics have been explored by researchers across numerous application areas, with different settings showing affinity for various tasks. This paper reviews the use of spectrogram-based representations and surveys the state-of-the-art to question how front-end feature representation choice allies with back-end classifier architecture for different tasks.
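The three characteristics named above (dimension resolution/span, element representation, scaling) can be made concrete in a few lines; the window length, hop size, and dB floor below are arbitrary choices, not settings from the survey.

```python
import numpy as np

def log_spectrogram(x, n_fft=256, hop=128):
    """Log-magnitude spectrogram: Hann-windowed frames, FFT per frame,
    magnitudes stacked over time, then compressed to decibels."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win
              for i in range(0, len(x) - n_fft + 1, hop)]
    S = np.abs(np.fft.rfft(np.stack(frames), axis=1))  # shape (time, freq)
    return 20 * np.log10(S + 1e-10)

# A 440 Hz tone sampled at 8 kHz concentrates energy near one frequency bin.
fs = 8000
t = np.arange(fs) / fs
S = log_spectrogram(np.sin(2 * np.pi * 440 * t))
peak_bin = int(np.argmax(S.mean(axis=0)))   # ~ 440 / (fs / n_fft)
```

Changing `n_fft` trades frequency resolution against time resolution, and replacing the linear frequency axis or dB scaling yields the mel and log-mel variants common in speech front-ends.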


[70] 2603.14940

Intelligent Control of Differential Drive Robots Subject to Unmodeled Dynamics with EKF-based State Estimation

Reliable control and state estimation of differential drive robots (DDR) operating in dynamic and uncertain environments remains a challenge, particularly when system dynamics are partially unknown and sensor measurements are prone to degradation. This work introduces a unified control and state estimation framework that combines a Lyapunov-based nonlinear controller and Adaptive Neural Networks (ANN) with Extended Kalman Filter (EKF)-based multi-sensor fusion. The proposed controller leverages the universal approximation property of neural networks to model unknown nonlinearities in real time. An online adaptation scheme updates the weights of the radial basis function (RBF), the architecture chosen for the ANN. The learned dynamics are integrated into a feedback linearization (FBL) control law, for which theoretical guarantees of closed-loop stability and asymptotic convergence in a trajectory-tracking task are established through a Lyapunov-like stability analysis. To ensure robust state estimation, the EKF fuses inertial measurement unit (IMU) data with odometry from a monocular camera, 2D-LiDAR, and wheel encoders. The fused state estimate drives the intelligent controller, ensuring consistent performance even under drift, wheel slip, sensor noise, and sensor failure. Gazebo simulations and real-world experiments are conducted on a DDR, demonstrating the effectiveness of the approach: velocity tracking improves, with reductions in linear and angular velocity errors of up to $53.91\%$ and $29.0\%$, respectively, compared to the baseline FBL.


[71] 2603.14942

A System-Theoretic Approach to Hawkes Process Identification with Guaranteed Positivity and Stability

The Hawkes process models self-exciting event streams, requiring a strictly non-negative and stable stochastic intensity. Standard identification methods enforce these properties using non-negative causal bases, yielding conservative parameter constraints and severely ill-conditioned least-squares Gram matrices at higher model orders. To overcome this, we introduce a system-theoretic identification framework utilizing the sign-indefinite orthonormal Laguerre basis, which guarantees a well-conditioned asymptotic Gram matrix independent of model order. We formulate a constrained least-squares problem enforcing the necessary and sufficient conditions for positivity and stability. By constructing the empirical Gram matrix via a Lyapunov equation and representing the constraints through a sum-of-squares trace equivalence, the proposed estimator is efficiently computed via semidefinite programming.
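The well-conditioning claim can be illustrated numerically: for an orthonormal Laguerre basis, the Gramian obtained from a discrete Lyapunov equation is the identity at any model order. The realization below is one standard state-space form of the basis, used here only as a sketch; it is not necessarily the construction in the paper.

```python
import numpy as np

def laguerre_ss(n, a):
    """One standard state-space realization (A, B) of the orthonormal
    discrete Laguerre basis of order n with pole a (|a| < 1)."""
    A = np.zeros((n, n))
    B = np.zeros((n, 1))
    for i in range(n):
        A[i, i] = a
        B[i, 0] = np.sqrt(1 - a**2) * (-a)**i
        for j in range(i):
            A[i, j] = (1 - a**2) * (-a)**(i - j - 1)
    return A, B

def dlyap(A, Q, iters=200):
    """Solve the discrete Lyapunov equation W = A W A' + Q by
    fixed-point iteration (valid because A is Schur stable)."""
    W = Q.copy()
    for _ in range(iters):
        W = A @ W @ A.T + Q
    return W

A, B = laguerre_ss(6, 0.5)
W = dlyap(A, B @ B.T)   # Gramian equals the identity, independent of order
```

An identity Gramian means the asymptotic least-squares normal equations stay well conditioned as the model order grows, in contrast to non-negative causal bases.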


[72] 2603.14943

RF-Fencing: A Novel RIS-Based Service for Proactive Covert Communications

Programmable wireless environments (PWEs), empowered by reconfigurable intelligent surfaces (RISes), have emerged as a transformative paradigm for next-generation networks, enabling deterministic control over electromagnetic (EM) propagation to enhance both performance and security. In this work, we introduce RF-Fencing, a novel RIS-enabled PWE service that enforces spatially selective control over wireless transmissions, simultaneously suppressing unwanted signal exposure while sustaining robust connectivity for legitimate users. To realize this vision, we develop SHIELD, a lightweight and scalable algorithm that orchestrates multiple RIS units by multiplexing precompiled codebook entries with real-time, low-complexity optimization. Through extensive evaluations across diverse frequencies, RIS configurations, and deployment scenarios, SHIELD demonstrates both far-field directional control and near-field quiet-zone creation, thereby enhancing network security. Our findings reveal that SHIELD effectively balances proactive covert communication with service delivery by dynamically managing multiple signal suppression and delivery areas, while enabling the realization of EM quiet zones with minimal impact on surrounding regions, ultimately establishing RF-Fencing as a practical RIS-based foundation for privacy-preserving and adaptive wireless environments in future 6G networks.


[73] 2603.14959

Cyclic Delay-Doppler Shift: A Simple Transmit Diversity Technique for Ultra-Reliable Communications in Doubly Selective Channels

Affine frequency division multiplexing (AFDM) and orthogonal time frequency space (OTFS) are two promising advanced waveforms proposed for reliable communications in high-mobility scenarios. In this paper, we introduce a simple transmit diversity technique, termed cyclic delay-Doppler shift (CDDS), for these two advanced waveforms to achieve ultra-reliable communications in doubly selective channels (DSCs). Two simple CDDS schemes, named modulation-domain CDDS (MD-CDDS) and time-domain CDDS (TD-CDDS), are proposed, which perform CDDS at the transmitter before and after the modulation, respectively. We demonstrate that both proposed CDDS schemes can be implemented efficiently and flexibly by multiplying the transmit vector with a well-designed precoding matrix, which is simply a sparse phase-compensated permutation matrix. Moreover, we theoretically and numerically prove that CDDS can provide MIMO-AFDM and MIMO-OTFS with optimal transmit diversity gain when a proper CDDS step is adopted. Compared to conventional transmit diversity techniques, the proposed CDDS scheme enjoys lower channel estimation overhead, implementation complexity, and signal processing latency, making it particularly suitable for ultra-reliable communications in high-mobility scenarios.
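The structure of such a precoder can be sketched generically: a cyclic delay is a permutation of the block, a Doppler shift is a diagonal phase ramp, and their product is a sparse phase-compensated permutation matrix. The exact phase compensation in the paper may differ; this is only the structural idea.

```python
import numpy as np

def cdds_precoder(N, delay, doppler):
    """Sketch of a cyclic delay-Doppler shift precoder for a length-N block:
    a circular shift (permutation) composed with a progressive phase ramp.
    Illustrative only; not the paper's exact construction."""
    P = np.roll(np.eye(N), delay, axis=0)                        # cyclic delay
    D = np.diag(np.exp(2j * np.pi * doppler * np.arange(N) / N)) # Doppler ramp
    return P @ D   # one nonzero per row/column: sparse phase-compensated permutation

N = 8
s = np.random.default_rng(0).standard_normal(N) + 0j
y = cdds_precoder(N, delay=2, doppler=1) @ s
```

Because the matrix has a single nonzero entry per row, applying it costs only N complex multiplications, which is the source of the low implementation complexity.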


[74] 2603.14986

Deep Filter Estimation from Inter-Frame Correlations for Monaural Speech Dereverberation

Speech dereverberation in distant-microphone scenarios remains challenging due to the high correlation between reverberation and target signals, often leading to poor generalization in real-world environments. We propose IF-CorrNet, a correlation-to-filter architecture designed for robustness against acoustic variability. Unlike conventional black-box mapping methods that directly estimate complex spectra, IF-CorrNet explicitly exploits inter-frame STFT correlations to estimate multi-frame deep filters for each time-frequency bin. By shifting the learning objective from direct mapping to filter estimation, the network effectively constrains the solution space, which simplifies the training process and mitigates overfitting to synthetic data. Experimental results on the REVERB Challenge dataset demonstrate that IF-CorrNet achieves a substantial gain in the SRMR metric on RealData, confirming its robustness in suppressing reverberation and noise in practical, non-synthetic environments.


[75] 2603.14990

Chattering Reduction for a Second-Order Actuator via Dynamic Sliding Manifolds

We analyze actuator chattering in a scalar integrator system subject to second-order actuator dynamics with an unknown time constant and first-order sliding-mode control, using both a conventional static sliding manifold and a dynamic sliding manifold. Using the harmonic balance method, we prove that the parameters of the dynamic sliding manifold can be adjusted so as to reduce the chattering amplitude compared to the static manifold. The proof of concept is illustrated with an example.


[76] 2603.15045

LLMs and Speech: Integration vs. Combination

In this work, we study how to best utilize pre-trained LLMs for automatic speech recognition. Specifically, we compare the tight integration of an acoustic model (AM) with the LLM ("speech LLM") to the traditional way of combining AM and LLM via shallow fusion. For tight integration, we provide ablations on the effect of different label units, fine-tuning strategies, LLM sizes and pre-training data, attention interfaces, encoder downsampling, text prompts, and length normalization. Additionally, we investigate joint recognition with a CTC model to mitigate hallucinations of speech LLMs and present effective optimizations for this joint recognition. For shallow fusion, we investigate the effect of fine-tuning the LLM on the transcriptions using different label units, and we compare rescoring AM hypotheses to single-pass recognition with label-wise or delayed fusion of AM and LLM scores. We train on Librispeech and Loquacious and evaluate our models on the HuggingFace ASR leaderboard.


[77] 2603.15063

Data-Driven Robust Predictive Control with Interval Matrix Uncertainty Propagation

This paper presents a new data-driven robust predictive control law for linear systems affected by unknown-but-bounded process disturbances. A sequence of input-state data is used to construct a suitable uncertainty representation based on interval matrices. Then, the effect of uncertainty along the prediction horizon is bounded through an operator leveraging matrix zonotopes. This yields a tube that is exploited within a variable-horizon optimal control problem to guarantee robust satisfaction of state and input constraints. The resulting data-driven predictive control scheme is shown to be recursively feasible and practically stable. A numerical example shows that the proposed approach compares favorably to existing methods based on zonotopic tubes and is competitive with an approach combining set-membership system identification and model-based predictive control.
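Tube constructions of this kind rest on propagating a set of states through the dynamics; a minimal zonotope sketch (center plus generators) is shown below. The toy system, set sizes, and function names are illustrative, not the paper's operator.

```python
import numpy as np

def zono_step(c, G, A, wc, WG):
    """One-step image of the zonotope {c + G xi : ||xi||_inf <= 1}
    under x+ = A x + w, with w in the zonotope (wc, WG): the linear map
    acts on center and generators, and the Minkowski sum with the
    disturbance set concatenates its generators."""
    return A @ c + wc, np.hstack([A @ G, WG])

A = np.array([[0.9, 0.2],
              [0.0, 0.8]])
c, G = np.zeros(2), 0.1 * np.eye(2)      # initial set
wc, WG = np.zeros(2), 0.05 * np.eye(2)   # bounded process disturbance set
for _ in range(5):                        # five tube cross-sections
    c, G = zono_step(c, G, A, wc, WG)
radius = np.sum(np.abs(G), axis=1)        # interval hull half-widths
```

Constraint tightening then amounts to requiring the nominal trajectory to keep a margin of `radius` from the state constraints at each step.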


[78] 2603.15068

Generative Semantic HARQ: Latent-Space Text Retransmission and Combining

Semantic communication conveys meaning rather than raw bits, but reliability at the semantic level remains an open challenge. We propose a semantic-level hybrid automatic repeat request (HARQ) framework for text communication, in which a Transformer-variational autoencoder (VAE) codec operates as a lightweight overlay on the conventional protocol stack. The stochastic encoder inherently generates diverse latent representations across retransmissions, providing incremental knowledge (IK) from a single model without dedicated protocol design. On the receiver side, a soft quality estimator triggers retransmissions and a quality-aware combiner merges the received latent vectors within a consistent latent space. We systematically benchmark six semantic quality metrics and four soft combining strategies under hybrid semantic distortion that mixes systematic bias with additive noise. The results suggest combining Weighted-Average or MRC-Inspired combining with self-consistency-based HARQ triggering for the best performance.


[79] 2603.15093

Beam Prediction Based on Multimodal Large Language Models

Accurate beam prediction is a key enabler for next-generation wireless communication systems. In this paper, we propose a multimodal large language model (LLM)-based beam prediction framework that effectively utilizes contextual information, provided by sensory data including RGB camera images and LiDAR point clouds. To effectively fuse heterogeneous modalities, we design specialized modality encoders together with a beam-guided attention masking mechanism and a high-frequency temporal alignment strategy, enabling robust cross-modal feature integration under dynamic environments. Furthermore, we construct a large-scale multimodal dataset for communication, named Multimodal-Wireless, which covers diverse weather and traffic conditions with high-fidelity ray-tracing labels. Extensive simulation results demonstrate that the proposed approach significantly reduces the reliance on oracle angle-of-departure knowledge and consistently outperforms state-of-the-art multimodal LLM-based beam prediction methods in terms of beam accuracy and communication performance, improving the average Top-1 accuracy to 80.8% and the average normalized gain to 89.1%.


[80] 2603.15105

Dual-Domain Sparse Adaptive Filtering: Exploiting Error Memory for Improved Performance

Many signal processing applications such as acoustic echo cancellation and wireless channel estimation require identifying systems where only a small fraction of coefficients are actually active, i.e., sparse systems. Zero-attracting adaptive filters tackle this by adding a penalty that pulls inactive coefficients toward zero, speeding up convergence. However, these algorithms determine which coefficients to penalize based solely on their current size. This creates a problem during early adaptation since active coefficients that should eventually grow large start out small, making them look identical to truly inactive coefficients. The algorithm ends up applying strong penalties to the very coefficients it needs to develop, slowing down the initial convergence. This paper provides a solution to this problem by introducing a dual-domain approach that looks at coefficients from two perspectives simultaneously. Beyond just tracking coefficient magnitude, we introduce an error-memory vector that monitors how persistently each coefficient contributes to the adaptation error over time. If a coefficient keeps showing up in the error signal, it is probably active even if it is still small. By combining both views, the proposed dual-domain sparse adaptive filter (DD-SAF) can identify active coefficients early and eliminate penalties accordingly. Moreover, a complete theoretical analysis is derived. The analysis shows that DD-SAF maintains the same stability properties as standard least-mean-square (LMS) while achieving provably better steady-state performance than existing methods. Simulations demonstrate that the DD-SAF converges to steady state faster and/or converges to a lower mean-square deviation (MSD) than the standard LMS and the reweighted zero-attracting LMS (RZA-LMS) algorithms for sparse system identification settings.
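The dual-domain idea can be sketched schematically: a zero-attracting LMS whose per-tap penalty is gated by an error-memory trace. The step sizes, gating rule, and memory update below are illustrative stand-ins, not the authors' exact DD-SAF recursions.

```python
import numpy as np

def sparse_lms(x, d, L, mu=0.01, rho=1e-4, beta=0.9, zero_attract=True):
    """Sparse LMS sketch: standard LMS plus a sign-based zero-attraction
    term whose per-tap strength is reduced when an error-memory trace
    indicates the tap keeps contributing to the error (schematic of the
    dual-domain idea, not the paper's exact update)."""
    w = np.zeros(L)
    m = np.zeros(L)                                 # error-memory trace
    for n in range(L, len(x)):
        u = x[n - L:n][::-1]                        # regressor
        e = d[n] - w @ u
        m = beta * m + (1 - beta) * np.abs(e * u)   # persistence of contribution
        if zero_attract:
            gate = 1.0 / (1.0 + m / (m.mean() + 1e-12))  # weaker pull on active taps
            w += mu * e * u - rho * gate * np.sign(w)
        else:
            w += mu * e * u                         # plain LMS
    return w

# Identify a sparse 16-tap system from noiseless input-output data.
rng = np.random.default_rng(1)
L = 16
h = np.zeros(L)
h[3], h[10] = 1.0, -0.5                             # two active taps
x = rng.standard_normal(4000)
d = np.array([h @ x[n - L:n][::-1] if n >= L else 0.0 for n in range(len(x))])
w = sparse_lms(x, d, L)
```

With `zero_attract=False` the same loop reduces to standard LMS, the stability baseline the paper compares against.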


[81] 2603.15120

How Attention Shapes Emotion: A Comparative Study of Attention Mechanisms for Speech Emotion Recognition

Speech Emotion Recognition (SER) plays a key role in advancing human-computer interaction. Attention mechanisms have become the dominant approach for modeling emotional speech due to their ability to capture long-range dependencies and emphasize salient information. However, standard self-attention suffers from quadratic computational and memory complexity, limiting its scalability. In this work, we present a systematic benchmark of optimized attention mechanisms for SER, including RetNet, LightNet, GSA, FoX, and KDA. Experiments on both MSP-Podcast benchmark versions show that while standard self-attention achieves the strongest recognition performance across test sets, efficient attention variants dramatically improve scalability, reducing inference latency and memory usage by up to an order of magnitude. These results highlight a critical trade-off between accuracy and efficiency, providing practical insights for designing scalable SER systems.


[82] 2603.15143

Clinical Priors Guided Lung Disease Detection in 3D CT Scans

Accurate classification of lung diseases from chest CT scans plays an important role in computer-aided diagnosis systems. However, medical imaging datasets often suffer from severe class imbalance, which may significantly degrade the performance of deep learning models, especially for minority disease categories. To address this issue, we propose a gender-aware two-stage lung disease classification framework. The proposed approach explicitly incorporates gender information into the disease recognition pipeline. In the first stage, a gender classifier is trained to predict the patient's gender from CT scans. In the second stage, the input CT image is routed to a corresponding gender-specific disease classifier to perform final disease prediction. This design enables the model to better capture gender-related imaging characteristics and alleviate the influence of imbalanced data distribution. Experimental results demonstrate that the proposed method improves the recognition performance for minority disease categories, particularly squamous cell carcinoma, while maintaining competitive performance on other classes.


[83] 2603.15160

Multi-Scale Control of Large Agent Populations: From Density Dynamics to Individual Actuation

We review a body of recent work by the author and collaborators on controlling the spatial organisation of large agent populations across multiple scales. A central theme is the systematic bridging of microscopic agent-level dynamics and macroscopic density descriptions, enabling control design at the most natural level of abstraction and subsequent translation across scales. We show how this multi-scale perspective provides a unified approach to both \emph{direct control}, where every agent is actuated, and \emph{indirect control}, where few leaders or herders steer a larger uncontrolled population. The review covers continuification-based control with robustness under limited sensing and decentralised implementation via distributed density estimation; leader--follower density regulation with dual-feedback stability guarantees and bio-inspired plasticity; optimal-transport methods for coverage control and macro-to-micro discretisation; nonreciprocal field theory for collective decision-making; mean-field control barrier functions for population-level safety; and hierarchical reinforcement learning for settings where closed-form solutions are intractable. Together, these results demonstrate the breadth and versatility of a multi-scale control framework that integrates analytical methods, learning, and physics-inspired approaches for large agent populations.


[84] 2603.15180

Iterative Learning Control-Informed Reinforcement Learning for Batch Process Control

A significant limitation of Deep Reinforcement Learning (DRL) is the stochastic uncertainty in actions generated during exploration-exploitation, which poses substantial safety risks during both training and deployment. In industrial process control, the lack of formal stability and convergence guarantees further inhibits adoption of DRL methods by practitioners. Conversely, Iterative Learning Control (ILC) represents a well-established autonomous control methodology for repetitive systems, particularly in batch process optimization. ILC achieves desired control performance through iterative refinement of control laws, either between consecutive batches or within individual batches, to compensate for both repetitive and non-repetitive disturbances. This study introduces an Iterative Learning Control-Informed Reinforcement Learning (IL-CIRL) framework for training DRL controllers in dual-layer batch-to-batch and within-batch control architectures for batch processes. The proposed method incorporates Kalman filter-based state estimation within the iterative learning structure to guide DRL agents toward control policies that satisfy operational constraints and ensure stability guarantees. This approach enables the systematic design of DRL controllers for batch processes operating under multiple disturbance conditions.
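The batch-to-batch refinement that ILC contributes can be sketched with the classic P-type update; the first-order plant, learning gain, and horizon below are illustrative, and the paper's dual-layer architecture adds a within-batch DRL layer and Kalman filter-based state estimation on top of this loop.

```python
import numpy as np

def simulate(u, a=0.3, b=1.0):
    """One batch of a first-order plant x+ = a x + b u, output y(t+1) = x(t+1)."""
    y = np.zeros(len(u) + 1)
    x = 0.0
    for t, ut in enumerate(u):
        x = a * x + b * ut
        y[t + 1] = x
    return y

# P-type ILC: u_{k+1}(t) = u_k(t) + gain * e_k(t+1), repeated over batches.
N = 50
r = np.sin(np.linspace(0, 2 * np.pi, N + 1))   # reference trajectory
u = np.zeros(N)                                 # input of the first batch
for _ in range(30):                             # 30 batches
    e = r - simulate(u)                         # tracking error of this batch
    u = u + 1.0 * e[1:]                         # gain chosen as 1/b
err = np.max(np.abs((r - simulate(u))[1:]))     # shrinks geometrically per batch
```

For this plant the batch-to-batch error map is a contraction, so the tracking error vanishes as batches repeat, which is the convergence guarantee the RL layer is guided toward.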


[85] 2603.15234

RIS-Aided RSMA Improves the Latency vs. Energy Trade-off in the Finite Block Length MIMO Downlink

We simultaneously minimize the latency and improve energy efficiency (EE) of the multi-user multiple-input multiple-output (MU-MIMO) rate splitting multiple access (RSMA) downlink, aided by a reconfigurable intelligent surface (RIS). Our results show that RSMA improves the EE and may reduce the delay to 13\% of that of spatial division multiple access (SDMA). Moreover, RIS and RSMA support each other synergistically, while an RIS operating without RSMA provides limited benefits in terms of latency and cannot effectively mitigate interference. Furthermore, increasing the RIS size amplifies the gains of RSMA more significantly than those of SDMA, without altering the fundamental EE-latency trade-offs. Results also show that latency increases with more stringent reliability requirements, and RSMA yields more significant gains under such conditions, making it eminently suitable for energy-efficient ultra-reliable low-latency communication (URLLC) scenarios.


[86] 2603.15278

Encirclement Guaranteed Finite-Time Capture against Unknown Evader Strategies

We consider a pursuit-evasion scenario involving a group of pursuers and a single evader in a two-dimensional unbounded environment. The pursuers aim to capture the evader in finite time while ensuring the evader remains enclosed within the convex hull of their positions until capture, without knowledge of the evader's heading angle. Prior works have addressed the problem of encirclement and capture separately in different contexts. In this paper, we present a class of strategies for the pursuers that guarantee capture in finite time while maintaining encirclement, irrespective of the evader's strategy. Furthermore, we derive an upper bound on the time to capture. Numerical results highlight the effectiveness of the proposed framework against a range of evader strategies.


[87] 2603.15285

Fast Volume Alignment by Frequency-Marched Newton

We develop a fast and accurate method for 3D alignment, recovering the rotation and translation that best align a reference volume with a noisy observation. Classical matched filtering evaluates cross-correlation over a large discretized transformation space; we show that high-precision alignment can be achieved far more efficiently by treating pose estimation as a continuous optimization problem. Our starting point is a band-limited Wigner-$D$ expansion of the rotational correlation, which enables rapid evaluation and efficient closed-form gradients and Hessians. Combined with analytical control of the complexity of trigonometric-polynomial landscapes, this makes second-order optimization practical in a setting where it is often avoided due to nonconvexity and noise sensitivity. We show that Newton-type refinement is stable and effective when initialized at low angular bandwidth: a coarse low-resolution $\mathrm{SO}(3)$ search provides robust candidates, which are then refined by iterative frequency marching and Newton steps, with translations updated via FFT in an alternating scheme. We provide a deterministic convergence guarantee showing that, under verifiable spectral-decay and gap conditions, the frequency-marching scheme returns a near-optimal solution whose suboptimality is controlled by the Newton tolerance. On synthetic rotation-estimation benchmarks, the method attains sub-degree accuracy while substantially reducing runtime relative to exhaustive $\mathrm{SO}(3)$ search. Integrated into the subtomogram-averaging pipeline of RELION5, it matches the baseline reconstruction quality, reaching local resolution at the Nyquist limit, while reducing pose-refinement time by more than an order of magnitude.
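The FFT-based translation update can be sketched in 1-D: a cyclic shift appears as a linear phase in the frequency domain, so a single FFT cross-correlation recovers it. This is illustrative only, not the RELION5 integration or the 3-D rotation search.

```python
import numpy as np

def fft_shift_estimate(a, b):
    """Estimate the integer cyclic shift s such that b ~= roll(a, s),
    via one FFT cross-correlation: the peak of ifft(fft(b) * conj(fft(a)))
    sits at lag s (the translation step of an alternating
    rotation/translation scheme, sketched in 1-D)."""
    c = np.fft.ifft(np.fft.fft(b) * np.conj(np.fft.fft(a)))
    return int(np.argmax(np.abs(c)))

rng = np.random.default_rng(7)
a = rng.standard_normal(128)
b = np.roll(a, 17) + 0.05 * rng.standard_normal(128)
s = fft_shift_estimate(a, b)
```

In the volume-alignment setting the same trick runs over a 3-D FFT, alternating with Newton steps on the rotational correlation landscape.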


[88] 2603.15286

ReLU Barrier Functions for Nonlinear Systems with Constrained Control: A Union of Invariant Sets Approach

Certifying safety for nonlinear systems with polytopic input constraints is challenging because CBF synthesis must ensure control admissibility under saturation. We propose an approximation--verification pipeline that performs convex barrier synthesis on piecewise-affine (PWA) surrogates and certifies safety for the original nonlinear system via facet-wise verification. To reduce conservatism while preserving tractability, we use a two-slope Leaky ReLU surrogate for the extended class-$\mathcal{K}$ function $\alpha(\cdot)$ and combine multiple certificates using a Union of Invariant Sets (UIS). Counterexamples are handled through local uncertainty updates. Simulations on pendulum and cart-pole systems with input saturation show larger certified invariant sets than linear-$\alpha$ designs with tractable computation time.


[89] 2603.15288

Neural Network-Based Time-Frequency-Bin-Wise Linear Combination of Beamformers for Underdetermined Target Source Extraction

Extracting a target source from underdetermined mixtures is challenging for beamforming approaches. Recently proposed time-frequency-bin-wise switching (TFS) and linear combination (TFLC) strategies mitigate this by combining multiple beamformers in each time-frequency (TF) bin and choosing combination weights that minimize the output power. However, making this decision independently for each TF bin can weaken temporal-spectral coherence, causing discontinuities and consequently degrading extraction performance. In this paper, we propose a novel neural network-based time-frequency-bin-wise linear combination (NN-TFLC) framework that constructs minimum power distortionless response (MPDR) beamformers without explicit noise covariance estimation. The network encodes the mixture and beamformer outputs, and predicts temporally and spectrally coherent linear combination weights via a cross-attention mechanism. On dual-microphone mixtures with multiple interferers, NN-TFLC-MPDR consistently outperforms TFS/TFLC-MPDR and achieves competitive performance with TFS/TFLC built on the minimum variance distortionless response (MVDR) beamformers that require noise priors.
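For reference, the MPDR weights that such combinations build on have the classic closed form $w = R^{-1} d / (d^H R^{-1} d)$, computed from the mixture covariance without a separate noise estimate. The two-microphone covariance and steering vector below are toy assumptions for illustration.

```python
import numpy as np

def mpdr_weights(R, d):
    """MPDR beamformer: minimize output power w^H R w subject to the
    distortionless constraint w^H d = 1."""
    Rinv_d = np.linalg.solve(R, d)
    return Rinv_d / (d.conj() @ Rinv_d)

# Two-microphone toy example with a hypothetical steering vector.
rng = np.random.default_rng(0)
X = rng.standard_normal((2, 1000)) + 1j * rng.standard_normal((2, 1000))
R = (X @ X.conj().T) / X.shape[1]        # sample covariance of the mixture
d = np.array([1.0, np.exp(-1j * 0.7)])   # assumed target steering vector
w = mpdr_weights(R, d)
```

Because `R` is the mixture covariance rather than a noise-only covariance, MPDR needs no explicit noise estimation, which is the property the proposed network exploits.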


[90] 2603.15310

On the CRLB for Blind Receiver I/Q Imbalance Estimation in OFDM Systems: Efficient Computation and Closed-Form Bounds

Modern mobile communication receivers are often implemented with a direct-conversion architecture, which features a number of advantages over competing designs. A notable limitation of direct-conversion architectures, however, is their sensitivity to amplitude and phase mismatches between the in-phase and quadrature signal paths. Such in-phase and quadrature-phase (I/Q) imbalances introduce undesired image components in the baseband signal, degrading link performance -- most notably by increasing the bit-error ratio. Considerable research effort has therefore been devoted to digital techniques for estimating and mitigating these impairments. Existing approaches generally fall into two categories: data-aided methods that exploit known pilots, preambles, or training sequences, and blind techniques that operate without such prior information. For data-aided estimation, Cramér-Rao lower bounds (CRLBs) have been established in the literature. In contrast, the derivation of a CRLB for the blind I/Q-imbalance estimation case is considerably more challenging, since the received data is random and typically non-Gaussian in the frequency domain. This work extends our earlier conference contribution, which introduced a CRLB derivation for the blind estimation of frequency-independent (FID) receiver I/Q imbalance using central limit theorem (CLT) arguments. The extensions include a computationally efficient method for calculating the bound, reducing complexity from cubic in the number of samples to linear in the fast-Fourier transform (FFT) size, along with a simplified closed-form approximation. This approximation provides new insights into the allocation-dependent performance of existing estimation methods, motivating a pre-estimation filtering modification that drastically improves their estimation performance in certain scenarios.
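The frequency-independent receiver impairment being estimated admits a compact baseband model, $y = g_1 x + g_2 x^*$, under one common convention (parameterizations vary across the literature); the conjugate term is the image component, and the ratio $|g_1/g_2|^2$ is the image rejection ratio.

```python
import numpy as np

def rx_iq_imbalance(x, g, phi):
    """Frequency-independent receiver I/Q imbalance (one common convention):
    y = g1*x + g2*conj(x), with amplitude mismatch g and phase mismatch phi.
    g = 1, phi = 0 recovers the ideal receiver (g1 = 1, g2 = 0)."""
    g1 = 0.5 * (1 + g * np.exp(-1j * phi))
    g2 = 0.5 * (1 - g * np.exp(1j * phi))
    return g1 * x + g2 * np.conj(x), g1, g2

# A complex tone at bin 32 acquires an image at bin -32 (= 224 of 256);
# the power ratio between the two bins is the image rejection ratio.
n = np.arange(256)
x = np.exp(2j * np.pi * 32 * n / 256)
y, g1, g2 = rx_iq_imbalance(x, g=1.05, phi=np.deg2rad(3.0))
Y = np.fft.fft(y)
```

Blind estimators infer `g1`, `g2` from statistics of `y` alone, which is why the frequency-domain distribution of the data enters the CRLB derivation.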


[91] 2603.15311

Near-field Boundary Distance in mmWave and THz Communications with Misaligned Antenna Arrays

Wireless communications in the millimeter wave (mmWave) and terahertz (THz) spectrum allow harnessing large frequency bands, thus achieving ultra-high data rates. However, the inherently short wavelengths of mmWave and THz signals lead to an extended radiative near-field region, where certain canonical far-field assumptions fail. Most prior works aiming to characterize this radiative near-field region either do not consider antenna arrays on both communicating nodes or, if they do, assume perfect alignment between the arrays. However, such assumptions break down in many realistic deployments, where both sides must employ large-scale mmWave/THz antenna arrays to maintain the desired communication range, while perfect antenna alignment cannot be guaranteed, particularly under node mobility. In this work, a generalized mathematical framework is presented to characterize the radiative near-field distance in directional mmWave and THz communication systems under various realistic array rotations and misalignments. With the use of the developed framework, compact closed-form expressions are derived for the near-field boundary distance in a wide range of antenna configurations, including array-to-array and array-to-point setups, considering both linear and planar arrays. Our numerical study reveals that the presence of antenna misalignment may significantly adjust the boundaries of the near-field region in mmWave and THz communication systems.


[92] 2603.15394

Matched Filter-Based Molecule Source Localization in Advection-Diffusion-Driven Pipe Networks with Known Topology

Synthetic molecular communication (MC) has emerged as a powerful framework for modeling, analyzing, and designing communication systems where information is encoded into properties of molecules. Among the envisioned applications of MC is the localization of molecule sources in pipe networks (PNs) like the human cardiovascular system (CVS), sewage networks (SNs), and industrial plants. While existing algorithms mostly focus on simplified scenarios, in this paper, we propose the first framework for source localization in complex PNs with known topology, by leveraging the mixture of inverse Gaussians for hemodynamic transport (MIGHT) model as a closed-form representation for advection-diffusion-driven MC in PNs. We propose a matched filter (MF)-based approach to identify molecule sources under realistic conditions such as unknown release times, random numbers of released molecules, sensor noise, and limited sensor sampling rate. We apply the algorithm to localize a source of viral markers in a real-world SN and show that the proposed scheme outperforms randomly guessing sources even at low signal-to-noise ratios (SNRs) at the sensor and achieves error-free localization under favorable conditions, i.e., high SNRs and sampling rates. Furthermore, by identifying clusters of frequently confused sources, reliable cluster-level localization is possible at substantially lower SNRs and sampling rates.
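A matched filter detector of the kind described reduces, in its simplest form, to correlating the sensor signal against a known template and picking the best lag. The pulse shape, noise level, and release time below are illustrative, not the MIGHT model's channel responses.

```python
import numpy as np

def matched_filter_detect(r, template):
    """Matched filter: correlate the received signal with the known
    (normalized) template and return the lag with maximum response."""
    t = template / np.linalg.norm(template)
    score = np.correlate(r, t, mode="valid")
    return int(np.argmax(score)), score

rng = np.random.default_rng(3)
template = np.exp(-0.5 * ((np.arange(40) - 12.0) / 4.0) ** 2)  # toy pulse shape
r = 0.2 * rng.standard_normal(500)                             # sensor noise
true_t0 = 230
r[true_t0:true_t0 + 40] += template                            # hidden release
t_hat, _ = matched_filter_detect(r, template)
```

In the network setting, one such template per candidate source (from the channel model) is correlated against the sensor signal, and the source whose template scores highest is declared; unknown release times are handled exactly by this maximization over lags.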


[93] 2603.15399

Spatial Characterization of Sub-Synchronous Oscillations Using Black-Box IBR Models

Power systems with high penetration of inverter-based resources (IBRs) are prone to sub-synchronous oscillations (SSO). The opaqueness of vendor-specific IBR models limits the ability to predict the severity and the spread of SSO. This paper demonstrates that black-box IBR models estimated through frequency-domain identification techniques, along with a dynamic network model, can replicate the actual oscillatory behavior. The estimated IBR models are validated against actual IBR models in a closed-loop multi-IBR test system through modal analysis, by comparing closed-loop eigenvalues and participation factors. Furthermore, using output-observable right eigenvectors, spatial heatmaps are developed to visualize the spread and severity of dominant SSO modes. The case studies on the 11-bus and 39-bus test systems confirm that even with the estimated IBR models, the regions susceptible to SSO can be identified in IBR-dominated power systems.
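The modal quantities being compared follow directly from the eigendecomposition of the closed-loop state matrix; the matrix below is a toy stand-in for an identified model, shown only to make the computation concrete.

```python
import numpy as np

# Toy state matrix standing in for the identified closed-loop model;
# the complex pair near -0.5 +/- 5j represents an oscillatory mode.
A = np.array([[-0.5,  5.0,  0.0],
              [-5.0, -0.5,  0.1],
              [ 0.0,  0.2, -2.0]])
lam, V = np.linalg.eig(A)     # modes (eigenvalues) and right eigenvectors
W = np.linalg.inv(V)          # rows are the left eigenvectors
P = V * W.T                   # participation factor p[k, i] = V[k, i] * W[i, k]
# |P[k, i]| measures how much state k participates in mode i;
# each column of P sums to 1 by construction.
```

Heatmapping `|P|` (or the output-observable right eigenvectors) over the network geography is what localizes the spread and severity of a dominant SSO mode.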


[94] 2603.15516

spINAch: A Diachronic Corpus of French Broadcast Speech Controlled for Speakers' Age and Gender

We present spINAch, a large diachronic corpus of French speech from radio and television archives, balanced by speakers' gender and age (20-95 years old) and spanning 60 years from 1955 to 2015. The dataset includes over 320 hours of recordings from more than two thousand speakers. The methodology for building the corpus is described, focusing on the acoustic quality of the collected samples. The data were automatically transcribed and phonetically aligned to allow studies at a phonemic level. More than 3 million oral vowels have been analyzed to estimate their fundamental frequency and formants. The corpus, available to the community for research purposes, is valuable for describing the evolution of Parisian French through the representation of gender and age. The presented analyses also demonstrate that the diachronic nature of the corpus allows the observation of various phonetic phenomena, such as the evolution of voice pitch over time (which does not differ by gender in our data) and the neutralization of the /a/-/ɑ/ opposition in Parisian French during this period.


[95] 2603.15588

Switching-Reference Voltage Control for Distribution Systems with AI-Training Data Centers

Large-scale AI training workloads in modern data centers exhibit rapid and periodic power fluctuations, which may induce significant voltage deviations in power distribution systems. Existing voltage regulation methods, such as droop control, are primarily designed for slowly varying loads and may therefore be ineffective in mitigating these fast fluctuations. In addition, repeated control actions can incur substantial cost. To address this challenge, this paper proposes a decentralized switching-reference voltage control framework that exploits the structured behavior of AI training workloads. We establish conditions for voltage convergence and characterize an effective reference design that aligns with the two dominant operating levels of the AI training workload. The switching rule for voltage references is implemented solely using local voltage measurements, enabling simple local implementation while significantly reducing control effort. Simulation studies demonstrate that the proposed method substantially reduces both voltage deviations and reactive control effort, while remaining compatible with internal data center control strategies without requiring extensive coordination.
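A toy single-bus sketch (not the paper's model; the linearized sensitivities, droop gain, references, and threshold below are all hypothetical) illustrates the core idea: switch between two precomputed voltage references keyed to the workload's two power levels, using only a local voltage measurement.

```python
def bus_voltage(p_load, v_ref, k=10.0, xp=0.05, xq=0.04):
    # Linearized single-bus model: v = 1 - xp*p + xq*q, with droop q = k*(v_ref - v)
    return (1.0 - xp * p_load + xq * k * v_ref) / (1.0 + xq * k)

# AI-training load alternating between two power levels (hypothetical values)
loads = [0.2, 1.0] * 10
refs = {"low": 1.025, "high": 1.125}      # designed so v ≈ 1.0 at each level

dev_fixed, dev_switch, v_ref = [], [], refs["low"]
for p in loads:
    dev_fixed.append(abs(bus_voltage(p, 1.0) - 1.0))   # fixed reference
    v_meas = bus_voltage(p, v_ref)         # local voltage measurement only
    v_ref = refs["high"] if v_meas < 0.99 else refs["low"]
    dev_switch.append(abs(bus_voltage(p, v_ref) - 1.0))

print(max(dev_switch) < max(dev_fixed))    # switching tracks the load levels
```

The two references are precomputed to cancel the voltage drop at each of the two dominant operating levels, so the switching rule needs no coordination with the data center's internal controls.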


[96] 2603.11600

Hybrid Energy-Aware Reward Shaping: A Unified Lightweight Physics-Guided Methodology for Policy Optimization

Deep reinforcement learning excels in continuous control but often requires extensive exploration, while physics-based models demand complete equations and suffer cubic complexity. This study proposes Hybrid Energy-Aware Reward Shaping (H-EARS), unifying potential-based reward shaping with energy-aware action regularization. H-EARS constrains action magnitude while balancing task-specific and energy-based potentials via functional decomposition, achieving linear complexity O(n) by capturing dominant energy components without full dynamics. We establish a theoretical foundation including: (1) functional independence for separate task/energy optimization; (2) energy-based convergence acceleration; (3) convergence guarantees under function approximation; and (4) approximate potential error bounds. Lyapunov stability connections are analyzed as heuristic guides. Experiments across baselines show improved convergence, stability, and energy efficiency. Vehicle simulations validate applicability in safety-critical domains under extreme conditions. Results confirm that integrating lightweight physics priors enhances model-free RL without complete system models, enabling transfer from lab research to industrial applications.
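The potential-based part of such shaping follows the classic form r'_t = r_t + γΦ(s_{t+1}) − Φ(s_t); a minimal sketch below verifies the telescoping identity that underpins policy invariance (the toy trajectory and the energy-like potential are hypothetical, not the paper's H-EARS potentials).

```python
import numpy as np

def shaped_rewards(rewards, states, phi, gamma):
    # Potential-based shaping: r'_t = r_t + gamma*phi(s_{t+1}) - phi(s_t)
    return [r + gamma * phi(s2) - phi(s1)
            for r, s1, s2 in zip(rewards, states[:-1], states[1:])]

def discounted_return(rs, gamma):
    return sum(r * gamma**t for t, r in enumerate(rs))

gamma = 0.9
states = [3.0, 2.0, 1.0, 0.0]           # toy 1-D trajectory
rewards = [-1.0, -1.0, 10.0]
phi = lambda s: -abs(s)                 # hypothetical energy-like potential

G = discounted_return(rewards, gamma)
G_shaped = discounted_return(shaped_rewards(rewards, states, phi, gamma), gamma)
# telescoping: G' = G + gamma^T * phi(s_T) - phi(s_0), independent of the path
print(np.isclose(G_shaped, G + gamma**3 * phi(states[-1]) - phi(states[0])))  # → True
```

Because the shaping terms telescope, the shaped return differs from the original only by a policy-independent constant, which is why potentials can accelerate convergence without changing the optimal policy.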


[97] 2603.13343

AI-Driven Predictive Maintenance with Real-Time Contextual Data Fusion for Connected Vehicles: A Multi-Dataset Evaluation

Most vehicle predictive maintenance systems rely exclusively on internal diagnostic signals and are validated on deterministic synthetic data, limiting the credibility of reported metrics. This paper presents a simulation-validated proof-of-concept framework for V2X-augmented predictive maintenance, integrating on-board sensor streams with external contextual signals -- road quality, weather, traffic density, and driver behaviour -- acquired via V2X communication and third-party APIs, with inference at the vehicle edge. Field validation on instrumented vehicles is identified as the required next step. Three experiments address common shortcomings of prior work. A feature group ablation study shows that V2X contextual features contribute a 2.6-point F1 gain, with full context removal reducing macro F1 from 0.855 to 0.807. On the AI4I 2020 real-world industrial failure dataset (10,000 samples, five failure modes), LightGBM achieves AUC-ROC of 0.973 under 5-fold stratified CV with SMOTE confined to training folds. A noise sensitivity analysis shows macro F1 remains above 0.88 under low noise and degrades to 0.74 under very high noise. SHAP analysis confirms that V2X and engineered interaction features rank among the top 15 predictors. Edge inference is estimated to reduce latency from 3.5s to under 1.0s versus cloud-only processing.


[98] 2603.13362

Patient-Level Multimodal Question Answering from Multi-Site Auscultation Recordings

Auscultation is a vital diagnostic tool, yet its utility is often limited by subjective interpretation. While general-purpose Audio-Language Models (ALMs) excel in general domains, they struggle with the nuances of physiological signals. We propose a framework that aligns multi-site auscultation recordings directly with a frozen Large Language Model (LLM) embedding space via gated cross-attention. By leveraging the LLM's latent world knowledge, our approach moves beyond isolated classification toward holistic, patient-level assessment. On the CaReSound benchmark, our model achieves a state-of-the-art 0.865 F1-macro and 0.952 BERTScore. We demonstrate that lightweight, domain-specific encoders rival large-scale ALMs and that multi-site aggregation provides spatial redundancy that mitigates temporal truncation. This alignment of medical acoustics with text foundations offers a scalable path for bridging signal processing and clinical assessment.


[99] 2603.13405

Anchor Forcing: Anchor Memory and Tri-Region RoPE for Interactive Streaming Video Diffusion

Interactive long video generation requires prompt switching to introduce new subjects or events, while maintaining perceptual fidelity and coherent motion over extended horizons. Recent distilled streaming video diffusion models reuse a rolling KV cache for long-range generation, enabling prompt-switch interaction through re-cache at each switch. However, existing streaming methods still exhibit progressive quality degradation and weakened motion dynamics. We identify two failure modes specific to interactive streaming generation: (i) at each prompt switch, current cache maintenance cannot simultaneously retain KV-based semantic context and recent latent cues, resulting in weak boundary conditioning and reduced perceptual quality; and (ii) during distillation, unbounded time indexing induces a positional distribution shift from the pretrained backbone's bounded RoPE regime, weakening pretrained motion priors and long-horizon motion retention. To address these issues, we propose \textbf{Anchor Forcing}, a cache-centric framework with two designs. First, an anchor-guided re-cache mechanism stores KV states in anchor caches and warm-starts re-cache from these anchors at each prompt switch, reducing post-switch evidence loss and stabilizing perceptual quality. Second, a tri-region RoPE with region-specific reference origins, together with RoPE re-alignment distillation, reconciles unbounded streaming indices with the pretrained RoPE regime to better retain motion priors. Experiments on long videos show that our method improves perceptual quality and motion metrics over prior streaming baselines in interactive settings. Project page: this https URL


[100] 2603.13437

Vision-Language Based Expert Reporting for Painting Authentication and Defect Detection

Authenticity and condition assessment are central to conservation decision-making, yet interpretation and reporting of thermographic output remain largely bespoke and expert-dependent, complicating comparison across collections and limiting systematic integration into conservation documentation. Pulsed Active Infrared Thermography (AIRT) is sensitive to subsurface features such as material heterogeneity, voids, and past interventions; however, its broader adoption is constrained by artifact misinterpretation, inter-laboratory variability, and the absence of standardized, explainable reporting frameworks. Although multi-modal thermographic processing techniques are established, their integration with structured natural-language interpretation has not been explored in cultural heritage. A fully automated thermography-vision-language model (VLM) framework is presented. It combines multi-modal AIRT analysis with modality-aware textual reporting, without human intervention during inference. Thermal sequences are processed using Principal Component Thermography (PCT), Thermographic Signal Reconstruction (TSR), and Pulsed Phase Thermography (PPT), and the resulting anomaly masks are fused into a consensus segmentation that emphasizes regions supported by multiple thermal indicators while mitigating boundary artifacts. The fused evidence is provided to a VLM, which generates structured reports describing anomaly locations, thermal behavior, and plausible physical interpretations, while explicitly acknowledging uncertainty and diagnostic limitations. Evaluation on two marquetry panels demonstrates consistent anomaly detection and stable structured interpretations, indicating reproducibility and generalizability across samples.


[101] 2603.13502

Safety-guaranteed and Goal-oriented Semantic Sensing, Communication, and Control for Robotics

Wirelessly-connected robotic systems empower robots with real-time intelligence by leveraging remote computing resources for decision-making. However, the data exchange between robots and base stations often overwhelms communication links, introducing latency that undermines real-time response. To tackle this, goal-oriented semantic communication (GSC) has been introduced into wirelessly-connected robotic systems to extract and transmit only goal-relevant semantic representations, enhancing communication efficiency and task effectiveness. However, existing GSC approaches focused primarily on optimizing effectiveness metrics while overlooking safety requirements, which should be treated as the top priority in real-world robotic systems. To bridge this gap, we propose safety-guaranteed and goal-oriented semantic communication for wirelessly-connected robotic systems, aiming to maximize robotic task effectiveness subject to practical operational safety requirements. We first summarize the general safety requirements and effectiveness metrics across typical robotic tasks, including robot arm grasping, unmanned aerial vehicle (UAV)-assisted tasks, and multi-robot exploration. We then systematically analyze the unique safety and effectiveness challenges faced by wirelessly-connected robotic systems in sensing, communication, and control. Based on these, we further present potential safety-guaranteed and goal-oriented sensing, communication, and control solutions. Finally, a UAV target tracking case study validates that our proposed GSC solutions can significantly improve the safety rate and tracking success rate by more than 2 times and 4.5 times, respectively.


[102] 2603.13529

Hybrid topology control: a dynamic leader-based distributed edge-addition and deletion mechanism

Coordinated operations of multi-robot systems (MRS) require agents to maintain communication connections to accomplish team objectives. However, maintaining the connections imposes costs in terms of restricted robot mobility, resulting in suboptimal team performance. In this work, we consider a realistic MRS framework in which agents are subject to unknown dynamical disturbances and experience communication delays. Most existing works on connectivity maintenance use consensus-based frameworks for graph reconfiguration, where decision-making time scales with the number of nodes and requires multiple rounds of communication, making them ineffective under communication delays. To address this, we propose a novel leader-based decision-making algorithm that uses a central node for efficient real-time reconfiguration, reducing decision-making time to depend on the graph diameter rather than the number of nodes and requiring only one round of information transfer through the network. We propose a novel method for estimating robot locations within the MRS that actively accounts for unknown disturbances and the communication delays. Using these position estimates, the central node selects a set of edges to delete while allowing the formation of new edges, aiming to keep the diameter of the new graph within a threshold. We provide numerous simulation results to showcase the efficacy of the proposed method.


[103] 2603.13559

Robust Automatic Differentiation of Square-Root Kalman Filters via Gramian Differentials

Square-root Kalman filters propagate state covariances in Cholesky-factor form for numerical stability, and are a natural target for gradient-based parameter learning in state-space models. Their core operation, triangularization of a matrix $M \in \mathbb{R}^{n \times m}$, is computed via a QR decomposition in practice, but naively differentiating through it causes two problems: the semi-orthogonal factor is non-unique when $m > n$, yielding undefined gradients; and the standard Jacobian formula involves inverses, which diverges when $M$ is rank-deficient. Both are resolved by the observation that all filter outputs relevant to learning depend on the input matrix only through the Gramian $MM^\top$, so the composite loss is smooth in $M$ even where the triangularization is not. We derive a closed-form chain-rule directly from the differential of this Gramian identity, prove it exact for the Kalman log-marginal likelihood and filtered moments, and extend it to rank-deficient inputs via a two-component decomposition: a column-space term based on the Moore--Penrose pseudoinverse, and a null-space correction for perturbations outside the column space of $M$.
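The central identity is easy to check numerically: for a wide M, the triangular factor from a QR decomposition of M^T matches, up to column signs, the Cholesky factor of the Gramian MM^T, so any loss routed through the Gramian sees a smooth function of M. A small NumPy check (independent of the paper's derivation):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 5
M = rng.standard_normal((n, m))        # wide matrix, m > n

# Triangularization via QR of M^T:  M^T = Q R  =>  M M^T = R^T R
Q, R = np.linalg.qr(M.T)               # R is n x n upper-triangular
L_qr = R.T
# Fix column signs so the diagonal is positive (Q is non-unique for m > n)
L_qr = L_qr * np.sign(np.diag(L_qr))

# The same lower-triangular factor, obtained from the Gramian directly
L_chol = np.linalg.cholesky(M @ M.T)
print(np.allclose(L_qr, L_chol))       # → True
```

Since the Cholesky factor with positive diagonal is unique, the sign ambiguity of the QR route disappears once outputs are viewed as functions of MM^T, which is the observation the paper's Gramian-differential chain rule builds on.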


[104] 2603.13716

Multi-Agent SAC Enabled Beamforming Design for Joint Secret Key Generation and Data Transmission

Physical layer key generation (PLKG) has emerged as a promising solution for achieving highly secured and low-latency key distribution, offering information-theoretic security that is inherently resilient to quantum attacks. However, simultaneously ensuring a high data transmission rate and a high secret key generation rate under eavesdropping attacks remains a major challenge. In time-division duplex (TDD) systems with multiple antennas, we derive closed-form expressions for both rates by modeling the legitimate channel as a time-correlated autoregressive (AR) process. This formulation leads to a highly nonconvex and time-coupled optimization problem, rendering traditional optimization methods ineffective. To address this issue, we propose a multi-agent soft actor-critic (SAC) framework equipped with a long short-term memory (LSTM) adversary prediction module to cope with the partial observability of the eavesdropper's mode. Simulation results demonstrate that the proposed approach achieves superior performance compared with other benchmark algorithms, while effectively balancing the trade-off between secret key generation rate and data transmission rate. The results also confirm the robustness of the proposed framework against intelligent eavesdropping and partial observation uncertainty.


[105] 2603.13734

Ransomware and Artificial Intelligence: A Comprehensive Systematic Review of Reviews

This study provides a comprehensive synthesis of Artificial Intelligence (AI), especially Machine Learning (ML) and Deep Learning (DL), in ransomware defense. Using a "review of reviews" methodology based on PRISMA, this paper gathers insights on how AI is transforming ransomware detection, prevention, and mitigation strategies during the past five years (2020-2024). The findings highlight the effectiveness of hybrid models that combine multiple analysis techniques such as code inspection (static analysis) and behavior monitoring during execution (dynamic analysis). The study also explores anomaly detection and early warning mechanisms before encryption to address the increasing complexity of ransomware. In addition, it examines key challenges in ransomware defense, including techniques designed to deceive AI-driven detection systems and the lack of strong and diverse datasets. The results highlight the role of AI in early detection and real-time response systems, improving scalability and resilience. Using a systematic review-of-reviews approach, this study consolidates insights from multiple review articles, identifies effective AI models, and bridges theory with practice to support collaboration among academia, industry, and policymakers. Future research directions and practical recommendations for cybersecurity practitioners are also discussed. Finally, this paper proposes a roadmap for advancing AI-driven countermeasures to protect critical systems and infrastructures against evolving ransomware threats.


[106] 2603.13877

Scribe Verification in Chinese manuscripts using Siamese, Triplet, and Vision Transformer Neural Networks

The paper examines deep learning models for scribe verification in Chinese manuscripts, i.e., automatically determining whether two manuscript fragments were written by the same scribe using deep metric learning methods. Two datasets were used: the Tsinghua Bamboo Slips Dataset and a selected subset of the Multi-Attribute Chinese Calligraphy Dataset, focusing on the calligraphers with a large number of samples. Siamese and Triplet neural network architectures are implemented, including convolutional and Transformer-based models. The experimental results show that the MobileNetV3+ Custom Siamese model trained with contrastive loss achieves either the best or the second-best overall accuracy and area under the Receiver Operating Characteristic Curve on both datasets.
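For reference, a minimal sketch of the standard contrastive objective used to train such Siamese models (the Hadsell-style pairwise loss; the embeddings and margin below are hypothetical, not taken from the paper):

```python
import numpy as np

def contrastive_loss(e1, e2, same_scribe, margin=1.0):
    """Pairwise contrastive loss: pull same-scribe embeddings together,
    push different-scribe embeddings apart beyond the margin."""
    d = np.linalg.norm(e1 - e2)
    if same_scribe:
        return d**2
    return max(0.0, margin - d)**2

a = np.array([0.1, 0.9])   # fragment embedding, scribe X
b = np.array([0.2, 0.8])   # another fragment, scribe X (close pair)
c = np.array([0.9, 0.1])   # fragment embedding, scribe Y (far pair)

print(round(contrastive_loss(a, b, True), 3))   # small: same-scribe pair
print(contrastive_loss(a, c, False))            # 0.0: already beyond margin
```

At verification time, the same distance d is thresholded to decide whether two fragments share a scribe.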


[107] 2603.13952

LLM-Guided Reinforcement Learning for Audio-Visual Speech Enhancement

In existing Audio-Visual Speech Enhancement (AVSE) methods, objectives such as Scale-Invariant Signal-to-Noise Ratio (SI-SNR) and Mean Squared Error (MSE) are widely used; however, they often correlate poorly with perceptual quality and provide limited interpretability for optimization. This work proposes a reinforcement learning-based AVSE framework with a Large Language Model (LLM)-based interpretable reward model. An audio LLM generates natural language descriptions of enhanced speech, which are converted by a sentiment analysis model into a 1-5 rating score serving as the PPO reward for fine-tuning a pretrained AVSE model. Compared with scalar metrics, LLM-generated feedback is semantically rich and explicitly describes improvements in speech quality. Experiments on the 4th COG-MHEAR AVSE Challenge (AVSEC-4) dataset show that the proposed method outperforms a supervised baseline and a DNSMOS-based RL baseline in PESQ, STOI, neural quality metrics, and subjective listening tests.


[108] 2603.13969

Leveraging a Statistical Shape Model for Efficient Generation of Annotated Training Data: A Case Study on Liver Landmarks Segmentation

Anatomical landmark segmentation serves as a critical initial step for robust multimodal registration during computer-assisted interventions. Current approaches predominantly rely on deep learning, which often necessitates the extensive manual generation of annotated datasets. In this paper, we present a novel strategy for creating large annotated datasets using a statistical shape model (SSM) based on a mean shape that is manually labeled only once. We demonstrate the method's efficacy through its application to deep-learning-based anatomical landmark segmentation, specifically targeting the detection of the anterior ridge and the falciform ligament in 3D liver shapes. A specialized deep learning network was trained with 8,800 annotated liver shapes generated by the SSM. The network's performance was evaluated on 500 unseen synthetic SSM shapes, yielding a mean Intersection over Union of 91.4% (87.4% for the anterior ridge and 87.6% for the falciform ligament). Subsequently, the network was applied to clinical patient liver shapes, with qualitative evaluation indicating promising results and highlighting the generalizability of the proposed approach. Our findings suggest that the SSM-based data generation approach alleviates the labor-intensive process of manual labeling while enabling the creation of large annotated training datasets for machine learning. Although our study focuses on liver anatomy, the proposed methodology holds potential for a broad range of applications where annotated training datasets play a pivotal role in developing accurate deep-learning models.


[109] 2603.14033

What Counts as Real? Speech Restoration and Voice Quality Conversion Pose New Challenges to Deepfake Detection

Audio anti-spoofing systems are typically formulated as binary classifiers distinguishing bona fide from spoofed speech. This assumption fails under layered generative processing, where benign transformations introduce distributional shifts that are misclassified as spoofing. We show that phonation-modifying voice conversion and speech restoration are treated as out-of-distribution despite preserving speaker authenticity. Using a multi-class setup separating bona fide, converted, spoofed, and converted-spoofed speech, we analyse model behaviour through self-supervised learning (SSL) embeddings and acoustic correlates. The benign transformations induce a drift in the SSL space, compressing bona fide and spoofed speech and reducing classifier separability. Reformulating anti-spoofing as a multi-class problem improves robustness to benign shifts while preserving spoof detection, suggesting binary systems model the distribution of raw speech rather than authenticity itself.


[110] 2603.14042

Block-QAOA-Aware Detection with Parameter Transfer for Large-Scale MIMO

Large-scale MIMO detection remains challenging because exact or near-maximum-likelihood search is difficult to scale, while available quantum resources are insufficient for directly solving full-size detection instances by QAOA. This paper therefore proposes a Block-QAOA-Aware MIMO Detector (BQA-MD), whose primary purpose is to reorganize the detection chain so that it becomes compatible with limited-qubit local quantum subproblems. Specifically, BQA-MD combines block-QAOA-aware preprocessing in the QR domain, a standards-consistent blockwise 5G NR Gray-HUBO interface, an MMSE-induced dynamic regularized blockwise objective, and K-best candidate propagation. Within this framework, fixed-size block construction gives every local subproblem a uniform circuit width and parameter dimension, which in turn enables parameter-transfer QAOA as a practical realization strategy for structurally matched local subproblems. Experiments are conducted on a 16x16 Rayleigh MIMO system with 16QAM using classical simulation of the quantum subroutine. The results show that the regularized blockwise detector improves upon its unregularized counterpart, validating the adopted blockwise objective and the block-QAOA-aware design rationale. They also show that the parameter-transfer QAOA detector nearly matches the regularized blockwise exhaustive reference and clearly outperforms direct-training QAOA in BER, thereby supporting parameter reuse as the preferred QAOA realization strategy within the proposed framework. In the tested setting, MMSE remains slightly better in the low-SNR region, whereas the parameter-transfer QAOA detector becomes highly competitive from the medium-SNR regime onward.


[111] 2603.14047

Distributional Uncertainty and Adaptive Decision-Making in System

Complex engineered systems require coordinated design choices across heterogeneous components under multiple conflicting objectives and uncertain specifications. Monotone co-design provides a compositional framework for such problems by modeling each subsystem as a design problem: a feasible relation between provided functionalities and required resources in partially ordered sets. Existing uncertain co-design models rely on interval bounds, which support worst-case reasoning but cannot represent probabilistic risk or multi-stage adaptive decisions. We develop a distributional extension of co-design that models uncertain design outcomes as distributions over design problems and supports adaptive decision processes through Markov-kernel re-parameterizations. Using quasi-measurable and quasi-universal spaces, we show that the standard co-design interconnection operations remain compositional under this richer notion of uncertainty. We further introduce queries and observations that extract probabilistic design trade-offs, including feasibility probabilities, confidence bounds, and distributions of minimal required resources. A task-driven unmanned aerial vehicle case study illustrates how the framework captures risk-sensitive and information-dependent design choices that interval-based models cannot express.


[112] 2603.14049

Schrödinger Bridge Over A Compact Connected Lie Group

This work studies the Schrödinger bridge problem for the kinematic equation on a compact connected Lie group. The objective is to steer a controlled diffusion between given initial and terminal densities supported over the Lie group while minimizing the control effort. We develop a coordinate-free formulation of this stochastic optimal control problem that respects the underlying geometric structure of the Lie group, thereby avoiding limitations associated with local parameterizations or embeddings in Euclidean spaces. We establish the existence and uniqueness of solution to the corresponding Schrödinger system. Our results are constructive in that they derive a geometric controller that optimally interpolates probability densities supported over the Lie group. To illustrate the results, we provide numerical examples on $\mathsf{SO}(2)$ and $\mathsf{SO}(3)$.


[113] 2603.14056

Amortizing Trajectory Diffusion with Keyed Drift Fields

Diffusion-based trajectory planners can synthesize rich, multimodal action sequences for offline reinforcement learning, but their iterative denoising incurs substantial inference-time cost, making closed-loop planning slow under tight compute budgets. We study the problem of achieving diffusion-like trajectory planning behavior with one-step inference, while retaining the ability to sample diverse candidate plans and condition on the current state in a receding-horizon control loop. Our key observation is that conditional trajectory generation fails under naïve distribution-matching objectives when the similarity measure used to align generated trajectories with the dataset is dominated by unconstrained future dimensions. In practice, this causes attraction toward average trajectories, collapses action diversity, and yields near-static behavior. Our key insight is that conditional generative planning requires a conditioning-aware notion of neighborhood: trajectory updates should be computed using distances in a compact key space that reflects the condition, while still applying updates in the full trajectory space. Building on this, we introduce Keyed Drifting Policies (KDP), a one-step trajectory generator trained with a drift-field objective that attracts generated trajectories toward condition-matched dataset windows and repels them from nearby generated samples, using a stop-gradient drifted target to amortize iterative refinement into training. At inference, the resulting policy produces a full trajectory window in a single forward pass. Across standard RL benchmarks and real-time hardware deployments, KDP achieves strong performance with one-step inference and substantially lower planning latency than diffusion sampling. Project website, code and videos: this https URL


[114] 2603.14106

Chaos-Free Networks are Stable Recurrent Neural Networks

Gated Recurrent Neural Networks (RNNs) are widely used for nonlinear system identification due to their high accuracy, although they often exhibit complex, chaotic dynamics that are difficult to analyze. This paper investigates the system-theoretic properties of the Chaos-Free Network (CFN), an architecture originally proposed to eliminate the chaotic behavior found in standard gated RNNs. First, we formally prove that the CFN satisfies Input-to-State Stability (ISS) by design. However, we demonstrate that ensuring Incremental ISS (delta-ISS) still requires specific parametric constraints on the CFN architecture. Then, to address this, we introduce the Decoupled-Gate Network (DGN), a novel structural variant of the CFN that removes internal state connections in the gating mechanisms. Finally, we prove that the DGN unconditionally satisfies the delta-ISS property, providing an incrementally stable architecture for identifying nonlinear dynamical systems without requiring complex network training modifications. Numerical results confirm that the DGN maintains the modeling capabilities of standard architectures while adhering to these rigorous stability guarantees.
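For context, one commonly cited form of the CFN update (a sketch of Laurent and von Brecht's cell from memory, not this paper's exact parameterization or its DGN variant) composes a forget gate with a tanh contraction, so with zero input the state provably relaxes toward the origin rather than exhibiting chaos:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cfn_step(h, x, p):
    # CFN-style update:  h_t = f ⊙ tanh(h_{t-1}) + i ⊙ tanh(W x_t)
    f = sigmoid(p["Uf"] @ h + p["Vf"] @ x + p["bf"])
    i = sigmoid(p["Ui"] @ h + p["Vi"] @ x + p["bi"])
    return f * np.tanh(h) + i * np.tanh(p["W"] @ x)

rng = np.random.default_rng(2)
d = 4
p = {k: 0.5 * rng.standard_normal((d, d)) for k in ("Uf", "Vf", "Ui", "Vi", "W")}
p["bf"] = p["bi"] = np.zeros(d)

h, norms = rng.standard_normal(d), []
for _ in range(50):
    h = cfn_step(h, np.zeros(d), p)    # zero input: state relaxes to 0
    norms.append(np.linalg.norm(h))
print(norms[-1] < 1e-3)
```

Since |f| < 1 and |tanh(h)| <= |h| componentwise, each zero-input step is a contraction, which is the structural property behind the ISS-style guarantees the paper formalizes.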


[115] 2603.14124

Experimental Evaluation of Security Attacks on Self-Driving Car Platforms

Deep learning-based perception pipelines in autonomous ground vehicles are vulnerable to both adversarial manipulation and network-layer disruption. We present a systematic, on-hardware experimental evaluation of five attack classes: FGSM, PGD, man-in-the-middle (MitM), denial-of-service (DoS), and phantom attacks on low-cost autonomous vehicle platforms (JetRacer and Yahboom). Using a standardized 13-second experimental protocol and comprehensive automated logging, we systematically characterize three dimensions of attack behavior: (i) control deviation, (ii) computational cost, and (iii) runtime responsiveness. Our analysis reveals that distinct attack classes produce consistent and separable "fingerprints" across these dimensions: perception attacks (MitM output manipulation and phantom projection) generate high steering deviation signatures with nominal computational overhead, PGD produces combined steering perturbation and computational load signatures across multiple dimensions, and DoS exhibits frame rate and latency degradation signatures with minimal control-plane perturbation. We demonstrate that our fingerprinting framework generalizes across both digital attacks (adversarial perturbations, network manipulation) and environmental attacks (projected false features), providing a foundation for attack-aware monitoring systems and targeted, signature-based defense mechanisms.


[116] 2603.14310

A Convergence-Guaranteed Algorithm for Stochastic Optimal Control Problems

Stochastic Optimal Control Problems (SOCPs) play a major role in sequential decision-making challenges. Various iterative algorithms, developed under the framework of the stochastic maximum principle, sequentially find the optimal control decision. However, they are based on adjoint sensitivity analysis, which necessitates simulation of an adjoint process, typically a backward stochastic differential equation (SDE) that must simultaneously be adapted to a forward filtration and satisfy a terminal condition; this substantially increases complexity and exacerbates the curse of dimensionality. We instead develop a stochastic maximum principle based on Malliavin calculus, which enables us to devise an iterative algorithm without the need for an adjoint process. Our algorithm, however, requires the Malliavin derivative, which can be efficiently computed using a forward simulator. Empirical comparisons against standard iterative algorithms demonstrate that our approach alleviates the dimensionality bottleneck while delivering competitive performance on the considered SOCPs.


[117] 2603.14328

CodecMOS-Accent: A MOS Benchmark of Resynthesized and TTS Speech from Neural Codecs Across English Accents

We present the CodecMOS-Accent dataset, a mean opinion score (MOS) benchmark designed to evaluate neural audio codec (NAC) models and the large language model (LLM)-based text-to-speech (TTS) models trained upon them, especially on non-standard speech such as accented speech. The dataset comprises 4,000 codec resynthesis and TTS samples from 24 systems, featuring 32 speakers spanning ten accents. A large-scale subjective test was conducted to collect 19,600 annotations from 25 listeners across three dimensions: naturalness, speaker similarity, and accent similarity. This dataset not only represents an up-to-date study of recent speech synthesis system performance but also reveals insights including a tight relationship between speaker and accent similarity, the predictive power of objective metrics, and a perceptual bias when listeners share the same accent with the speaker. This dataset is expected to foster research on more human-centric evaluation for NAC and accented TTS.


[118] 2603.14358

A Unified Pulse-Shaped OFDM Framework for Chirp-Domain Waveforms: Continuous-Time Modeling and Practical I/O Analysis

In this paper, a unified framework for chirp-domain waveforms, including orthogonal chirp division multiplexing (OCDM) and affine frequency division multiplexing (AFDM), is developed. Based on their continuous-time representations, we show that these waveforms fall within the conventional Weyl-Heisenberg (WH) framework for multicarrier (MC) waveforms, where the root chirp corresponds directly to the prototype pulse in the WH framework. Since the chirp is a constant-envelope signal and is transparent to subcarrier orthogonality, these waveforms can be further interpreted as pulse-shaped (PS) orthogonal frequency division multiplexing (OFDM). Within the developed PS-OFDM framework, the power spectral density of chirp-domain waveforms is derived analytically. We then discuss existing practical implementations of chirp-domain waveforms, which rely on sub-Nyquist discrete-time samples and therefore exhibit frequency aliasing. The resulting aliased waveform is analyzed, and the orthogonality among the embedded aliased chirps is discussed. It is shown that the aliased chirps are conditionally orthogonal, whereas the implemented approximate aliased chirps can maintain mutual orthogonality when an appropriate sample-wise pulse-shaping filter is applied. We further derive an exact input-output relation for the implemented chirp-domain waveform over a delay-Doppler (DD) channel, showing that the effective channel observed at a practical receiver does not, in general, admit a DD spreading-function model commonly assumed in the literature. The implementation complexity is also investigated and compared with that of orthogonal delay-Doppler division multiplexing (ODDM), the DD-domain MC waveform defined within the evolved WH framework. Finally, simulation results are provided to verify the analysis.


[119] 2603.14374

A Systematic Comparison and Evaluation of Building Ontologies for Deploying Data-Driven Analytics in Smart Buildings

Ontologies play a critical role in data exchange, information integration, and knowledge sharing across diverse smart building applications. Yet, semantic differences between the prevailing building ontologies hamper their purpose of bringing data interoperability and restrict the ability to reuse building ontologies in real-world applications. In this paper, we propose and adopt a framework to conduct a systematic comparison and evaluation of four popular building ontologies (Brick Schema, RealEstateCore, Project Haystack and Google's Digital Buildings) from both axiomatic design and assertions in a use case, namely the Terminological Box (TBox) evaluation and the Assertion Box (ABox) evaluation. In the TBox evaluation, we use the SQuaRE-based Ontology Quality Evaluation (OQuaRE) Framework and conclude that Project Haystack and Brick Schema are more compact with respect to the ontology axiomatic design. In the ABox evaluation, we apply an empirical study with sample building data that suggests that Brick Schema and RealEstateCore have greater completeness and expressiveness in capturing the main concepts and relations within the building domain. The results implicitly indicate that there is no universal building ontology for integrating Linked Building Data (LBD). We also discuss ontology compatibility and investigate building ontology design patterns (ODPs) to support ontology matching, alignment, and harmonisation.


[120] 2603.14391

4D reconstruction of alumina laser melt pools at 25 kHz via operando X-ray multi-projection imaging

Advancing additive manufacturing, e.g., laser powder-bed fusion (LPBF), requires resolving rapid processes such as melt-pool dynamics and keyhole evolution in 4D (3D + time). Operando X-ray tomography is a state-of-the-art approach for 4D characterization, but its temporal resolution is fundamentally constrained by the sample rotation speed, limiting achievable 4D imaging rates and preventing the resolution of these fast phenomena. Here we present rotation-enabled X-ray Multi-Projection Imaging (rotation-XMPI), which captures three angularly resolved projections per time step and thereby decouples temporal resolution from the sample rotation speed. Combined with a self-supervised deep-learning reconstruction framework for multi-angle inputs, rotation-XMPI enables high-fidelity 4D imaging at unprecedented speed. We demonstrate the approach in an operando alumina laser-remelting experiment at MAX IV using three beamlets combined with 25 Hz sample rotation. Rotation-XMPI resolves melt-pool morphology and keyhole evolution; in contrast, conventional and limited-angle tomography remain rotation-limited, and motion blur prevents resolving these dynamics. Overall, rotation-XMPI delivers a 250-fold increase relative to state-of-the-art melt-pool imaging, effectively achieving 25,000 reconstructed volumes per second. This method establishes a practical route to scalable ultrafast 4D imaging for additive manufacturing and other materials processes.


[121] 2603.14514

High-Probability Bounds for SGD under the Polyak-Lojasiewicz Condition with Markovian Noise

We present the first uniform-in-time high-probability bound for SGD under the PL condition, where the gradient noise contains both Markovian and martingale difference components. This significantly broadens the scope of finite-time guarantees, as the PL condition arises in many machine learning and deep learning models while Markovian noise naturally arises in decentralized optimization and online system identification problems. We further allow the magnitude of noise to grow with the function value, enabling the analysis of many practical sampling strategies. In addition to the high-probability guarantee, we establish a matching $1/k$ decay rate for the expected suboptimality. Our proof technique relies on the Poisson equation to handle the Markovian noise and a probabilistic induction argument to address the lack of almost-sure bounds on the objective. Finally, we demonstrate the applicability of our framework by analyzing three practical optimization problems: token-based decentralized linear regression, supervised learning with subsampling for privacy amplification, and online system identification.
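For reference, the PL condition and the resulting rate can be stated in their standard form (notation ours; the paper's contribution is the uniform-in-time high-probability analogue of this rate under Markovian noise):

```latex
% Polyak-Lojasiewicz condition with parameter \mu > 0:
% the gradient norm lower-bounds the suboptimality, without convexity.
\[
  \tfrac{1}{2}\,\lVert \nabla f(x) \rVert^2 \;\ge\; \mu\,\bigl(f(x) - f^\star\bigr)
  \qquad \text{for all } x .
\]
% With step sizes \alpha_k = \Theta(1/k), the classical in-expectation
% analysis under L-smoothness and bounded noise variance yields
\[
  \mathbb{E}\bigl[f(x_k) - f^\star\bigr] \;=\; \mathcal{O}(1/k),
\]
% matching the expected-suboptimality decay established in the abstract.
```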


[122] 2603.14610

Make it SING: Analyzing Semantic Invariants in Classifiers

All classifiers, including state-of-the-art vision models, possess invariants, partially rooted in the geometry of their linear mappings. These invariants, which reside in the null-space of the classifier, induce equivalent sets of inputs that map to identical outputs. The semantic content of these invariants remains vague, as existing approaches struggle to provide human-interpretable information. To address this gap, we present Semantic Interpretation of the Null-space Geometry (SING), a method that constructs equivalent images, with respect to the network, and assigns semantic interpretations to the available variations. We use a mapping from network features to multi-modal vision language models. This allows us to obtain natural language descriptions and visual examples of the induced semantic shifts. SING can be applied to a single image, uncovering local invariants, or to sets of images, allowing a breadth of statistical analysis at the class and model levels. For example, our method reveals that ResNet50 leaks relevant semantic attributes to the null space, whereas DinoViT, a ViT pretrained with self-supervised DINO, is superior in maintaining class semantics across the invariant space.


[123] 2603.14616

Functional Safety Analysis for Infrastructure-Enabled Depot Autonomy System

This paper presents the functional safety analysis for an Infrastructure-Enabled Depot Autonomy (IX-DA) system. The IX-DA system automates the marshalling of delivery vehicles within a controlled depot environment, navigating connected autonomous vehicles (CAVs) between drop-off zones, service stations (washing, calibration, charging, loading), and pick-up zones without human intervention. We describe the system architecture comprising three principal subsystems -- the connected autonomous vehicle, the infrastructure sensing and compute layer, and the human operator interface -- and derive their functional requirements. Using ISO 26262-compliant Hazard Analysis and Risk Assessment (HARA) methodology, we identify eight hazardous events, evaluate them across different operating scenarios, and assign Automotive Safety Integrity Levels (ASILs) ranging from Quality Management (QM) to ASIL C. Six safety goals are derived and allocated to vehicle and infrastructure subsystems. The analysis demonstrates that high-speed uncontrolled operation imposes the most demanding safety requirements (ASIL C), while controlled low-speed operation reduces most goals to QM, offering a practical pathway for phased deployment.


[124] 2603.14625

EcoFair-CH-MARL: Scalable Constrained Hierarchical Multi-Agent RL with Real-Time Emission Budgets and Fairness Guarantees

Global decarbonisation targets and tightening market pressures demand maritime logistics solutions that are simultaneously efficient, sustainable, and equitable. We introduce EcoFair-CH-MARL, a constrained hierarchical multi-agent reinforcement learning framework that unifies three innovations: (i) a primal-dual budget layer that provably bounds cumulative emissions under stochastic weather and demand; (ii) a fairness-aware reward transformer with dynamically scheduled penalties that enforces max-min cost equity across heterogeneous fleets; and (iii) a two-tier policy architecture that decouples strategic routing from real-time vessel control, enabling linear scaling in agent count. New theoretical results establish $O(\sqrt{T})$ regret for both constraint violations and fairness loss. Experiments on a high-fidelity maritime digital twin (16 ports, 50 vessels) driven by automatic identification system traces, plus an energy-grid case study, show up to 15% lower emissions, 12% higher throughput, and a 45% fair-cost improvement over state-of-the-art hierarchical and constrained MARL baselines. In addition, EcoFair-CH-MARL achieves stronger equity (lower Gini and higher min-max welfare) than fairness-specific MARL baselines (e.g., SOTO, FEN), and its modular design is compatible with both policy- and value-based learners. EcoFair-CH-MARL therefore advances the feasibility of large-scale, regulation-compliant, and socially responsible multi-agent coordination in safety-critical domains.
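The primal-dual budget layer can be illustrated with a standard projected dual-ascent sketch on the emission constraint; the budget value, step size, and emission model below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

# Sketch of a primal-dual emission-budget layer: the dual variable
# (a shadow price of carbon) rises when realised emissions exceed the
# per-step budget and is projected back onto [0, inf) otherwise.
# Budget, step size, and the stochastic emission stream are
# illustrative assumptions.

rng = np.random.default_rng(0)
budget_per_step = 1.0
eta = 0.1            # dual step size
lam = 0.0            # dual variable

lams = []
for t in range(200):
    emissions = 1.2 + 0.3 * rng.standard_normal()  # stochastic, above budget
    # the agents would maximise reward - lam * emissions here (omitted)
    lam = max(0.0, lam + eta * (emissions - budget_per_step))
    lams.append(lam)

# with mean emissions above budget, the carbon price settles positive,
# which is what forces the learned policies back toward the budget
print(lam > 0.0)
```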


[125] 2603.14636

Nudging Hidden States: Training-Free Model Steering for Chain-of-Thought Reasoning in Large Audio-Language Models

Chain-of-thought (CoT) prompting has been extended to large audio-language models (LALMs) to elicit reasoning, yet enhancing its effectiveness without training remains challenging. We study inference-time model steering as a training-free approach to improve LALM reasoning. We introduce three strategies using diverse information sources and evaluate them across four LALMs and four benchmarks. Results show general accuracy gains up to 4.4% over CoT prompting. Notably, we identify a cross-modal transfer where steering vectors derived from few text samples effectively guide speech-based reasoning, demonstrating high data efficiency. We also examine hyperparameter sensitivity to understand the robustness of these approaches. Our findings position model steering as a practical direction for strengthening LALM reasoning.


[126] 2603.14762

Online Learning for Supervisory Switching Control

We study supervisory switching control for partially-observed linear dynamical systems. The objective is to identify and deploy the best controller for the unknown system by periodically selecting among a collection of $N$ candidate controllers, some of which may destabilize the underlying system. While classical estimator-based supervisory control guarantees asymptotic stability, it lacks quantitative finite-time performance bounds. Conversely, current non-asymptotic methods in both online learning and system identification require restrictive assumptions that are incompatible in a control setting, such as system stability, which preclude testing potentially unstable controllers. To bridge this gap, we propose a novel, non-asymptotic analysis of supervisory control that adapts multi-armed bandit algorithms to address these control-theoretic challenges. Our data-driven algorithm evaluates candidate controllers via scoring criteria that leverage system observability to isolate the effects of historical states, enabling both detection of destabilizing controllers and accurate system identification. We present two algorithmic variants with dimension-free, finite-time guarantees, where each identifies the most suitable controller in $\mathcal{O}(N \log N)$ steps, while simultaneously achieving finite $L_2$-gain with respect to system disturbances.


[127] 2603.14852

Surgical Robot, Path Planning, Joint Space, Riemannian Manifolds

Robotic surgery for minimally invasive surgery can reduce the surgeon's workload by autonomously guiding robotic forceps. Movement of the robot is restricted around a fixed insertion port. The robot often encounters angle limitations during operation. Also, the surface of the abdominal cavity is non-concave, making it computationally expensive to find the desired path. In this work, to solve these problems, we propose a method for path planning in joint space by transforming the position into a Riemannian manifold. An edge cost function is defined to search for a desired path in the joint space and reduce the range of motion of the joints. We found that the organ is mostly non-concave, making it easy to find the optimal path using the gradient descent method. Experimental results demonstrated that the proposed method reduces the range of joint angle movement compared to calculations in position space.


[128] 2603.14868

Free Final Time Adaptive Mesh Covariance Steering via Sequential Convex Programming

In this paper we develop a sequential convex programming (SCP) framework for free-final-time covariance steering of nonlinear stochastic differential equations (SDEs) subject to both additive and multiplicative diffusion. We cast the free-final-time objective through a time-normalization and introduce per-interval time-dilation variables that induce an adaptive discretization mesh, enabling the simultaneous optimization of the control policy and the temporal grid. A central difficulty is that, under multiplicative noise, accurate covariance propagation within SCP requires retaining the first-order diffusion linearization and its coupling with time dilation. We therefore derive the exact local linear stochastic model (preserving the multiplicative structure) and introduce a tractable discretization that maintains the associated diffusion terms, after which each SCP subproblem is solved via conic/semidefinite covariance-steering relaxations with terminal moment constraints and state/control chance constraints. Numerical experiments on a nonlinear double-integrator with drag and velocity-dependent diffusion validate free-final-time minimization through adaptive time allocation and improved covariance accuracy relative to frozen-diffusion linearizations.


[129] 2603.15184

CATFormer: When Continual Learning Meets Spiking Transformers With Dynamic Thresholds

Although deep neural networks perform extremely well in controlled environments, they fail in real-world scenarios where data isn't available all at once and the model must adapt to a new data distribution that may or may not follow the initial distribution. Previously acquired knowledge is lost during subsequent updates based on new data, a phenomenon commonly known as catastrophic forgetting. In contrast, the brain can learn without such catastrophic forgetting, irrespective of the number of tasks it encounters. Existing spiking neural networks (SNNs) for class-incremental learning (CIL) suffer a sharp performance drop as tasks accumulate. Here we introduce CATFormer (Context Adaptive Threshold Transformer), a scalable framework that overcomes this limitation. We observe that the key to preventing forgetting in SNNs lies not only in synaptic plasticity but also in modulating neuronal excitability. At the core of CATFormer is the Dynamic Threshold Leaky Integrate-and-Fire (DTLIF) neuron model, which leverages context-adaptive thresholds as the primary mechanism for knowledge retention. This is paired with a Gated Dynamic Head Selection (G-DHS) mechanism for task-agnostic inference. Extensive evaluation on both static (CIFAR-10/100/Tiny-ImageNet) and neuromorphic (CIFAR10-DVS/SHD) datasets reveals that CATFormer outperforms existing rehearsal-free CIL algorithms across various task splits, establishing it as an ideal architecture for energy-efficient, true class-incremental learning.


[130] 2603.15248

Mechanistic Foundations of Goal-Directed Control

Mechanistic interpretability has transformed the analysis of transformer circuits by decomposing model behavior into competing algorithms, identifying phase transitions during training, and deriving closed-form predictions for when and why strategies shift. However, this program has remained largely confined to sequence-prediction architectures, leaving embodied control systems without comparable mechanistic accounts. Here we extend this framework to sensorimotor-cognitive development, using infant motor learning as a model system. We show that foundational inductive biases give rise to causal control circuits, with learned gating mechanisms converging toward theoretically motivated uncertainty thresholds. The resulting dynamics reveal a clean phase transition in the arbitration gate whose commitment behavior is well described by a closed-form exponential moving-average surrogate. We identify context window k as the critical parameter governing circuit formation: below a minimum threshold (k$\leq$4) the arbitration mechanism cannot form; above it (k$\geq$8), gate confidence scales asymptotically as log k. A two-dimensional phase diagram further reveals task-demand-dependent route arbitration consistent with the prediction that prospective execution becomes advantageous only when prediction error remains within the task tolerance window. Together, these results provide a mechanistic account of how reactive and prospective control strategies emerge and compete during learning. More broadly, this work sharpens mechanistic accounts of cognitive development and provides principled guidance for the design of interpretable embodied agents.


[131] 2603.15346

A superposition approach for the ISS Lyapunov-Krasovskii theorem with pointwise dissipation

We show that the existence of a Lyapunov-Krasovskii functional (LKF) with pointwise dissipation (i.e. dissipation in terms of the current solution norm) suffices for input-to-state stability, provided that uniform global stability can also be ensured using the same LKF. To this end, we develop a stability theory, in which the behavior of solutions is not assessed through the classical norm but rather through a specific LKF, which may provide significantly tighter estimates. We discuss the advantages of our approach by means of an example.


[132] 2603.15352

NV-Bench: Benchmark of Nonverbal Vocalization Synthesis for Expressive Text-to-Speech Generation

While recent text-to-speech (TTS) systems increasingly integrate nonverbal vocalizations (NVs), their evaluations lack standardized metrics and reliable ground-truth references. To bridge this gap, we propose NV-Bench, the first benchmark grounded in a functional taxonomy that treats NVs as communicative acts rather than acoustic artifacts. NV-Bench comprises 1,651 multi-lingual, in-the-wild utterances with paired human reference audio, balanced across 14 NV categories. We introduce a dual-dimensional evaluation protocol: (1) Instruction Alignment, utilizing the proposed paralinguistic character error rate (PCER) to assess controllability, (2) Acoustic Fidelity, measuring the distributional gap to real recordings to assess acoustic realism. We evaluate diverse TTS models and develop two baselines. Experimental results demonstrate a strong correlation between our objective metrics and human perception, establishing NV-Bench as a standardized evaluation framework.


[133] 2603.15360

Mitigating Renewable-Induced Risks for Green and Conventional Ammonia Producers through Coordinated Production and Futures Trading

Renewable power-to-ammonia (ReP2A), which uses hydrogen produced from renewable electricity as feedstock, is a promising pathway for decarbonizing the energy, transportation, and chemical sectors. However, variability in renewable generation causes fluctuations in hydrogen supply and ammonia production, leading to revenue instability for both ReP2A producers and conventional fossil-based gray ammonia (GA) producers in the market. Existing studies mainly rely on engineering measures, such as production scheduling, to manage this risk, but their effectiveness is constrained by physical system limits. To address this challenge, this paper proposes a financial instrument termed "renewable ammonia futures" and integrates it with production decisions to hedge ammonia output risk. Production and trading models are developed for both ReP2A and GA producers, with conditional value-at-risk (CVaR) used to represent risk preferences under uncertainty. A game-theoretic framework is established in which the two producers interact in coupled ammonia spot and futures markets, and a Nash bargaining mechanism coordinates their production and trading strategies. Case studies based on a real-world system show that introducing renewable ammonia futures increases the CVaR utilities of ReP2A and GA producers by 5.103% and 10.14%, respectively, improving profit stability under renewable uncertainty. Sensitivity analysis further confirms the effectiveness of the mechanism under different levels of renewable variability and capacity configurations.
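For reference, CVaR admits the standard Rockafellar-Uryasev variational form (notation ours), which is what typically makes embedding risk preferences in such production/trading optimizations tractable:

```latex
% Conditional value-at-risk at level \alpha of a loss X:
\[
  \mathrm{CVaR}_\alpha(X)
  \;=\;
  \min_{z \in \mathbb{R}} \Bigl\{\, z + \tfrac{1}{1-\alpha}\,
  \mathbb{E}\bigl[(X - z)_+\bigr] \Bigr\},
\]
% i.e. the expected loss in the worst (1-\alpha) tail; the inner
% expression is jointly convex in z and the decision variables that
% shape the distribution of X.
```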


[134] 2603.15393

Unimodal self-oscillations and their sign-symmetry for discrete-time relay feedback systems with dead zone

This paper characterizes self-oscillations in discrete-time linear time-invariant (LTI) relay feedback systems with nonnegative dead zone. Specifically, we aim to establish existence criteria for unimodal self-oscillations, defined as periodic solutions where the output exhibits a single-peaked period. Assuming that the linear part of the system is stable, with a strictly monotonically decreasing impulse response on its infinite support, we propose a novel analytical framework based on the theory of total positivity to address this problem. We demonstrate that unimodal self-oscillations subject to mild variation-based constraints exist only if the number of positive and negative values of the system's loop gain coincides within a given strictly positive period, i.e., the self-oscillation is sign-symmetric. Building upon these findings, we derive conditions for the existence of such self-oscillations, establish tight bounds on their periods, and address the question of their uniqueness.


[135] 2603.15440

Music Genre Classification: A Comparative Analysis of Classical Machine Learning and Deep Learning Approaches

Automatic music genre classification is a long-standing challenge in Music Information Retrieval (MIR); work on non-Western music traditions remains scarce. Nepali music encompasses culturally rich and acoustically diverse genres--from the call-and-response duets of Lok Dohori to the rhythmic poetry of Deuda and the distinctive melodies of Tamang Selo--that have not been addressed by existing classification systems. In this paper, we construct a novel dataset of approximately 8,000 labeled 30-second audio clips spanning eight Nepali music genres and conduct a systematic comparison of nine classification models across two paradigms. Five classical machine learning classifiers (Logistic Regression, SVM, KNN, Random Forest, and XGBoost) are trained on 51 hand-crafted audio features extracted via Librosa, while four deep learning architectures (CNN, RNN, parallel CNN-RNN, and sequential CNN followed by RNN) operate on Mel spectrograms of dimension 640 x 128. Our experiments reveal that the sequential Convolutional Recurrent Neural Network (CRNN)--in which convolutional layers feed into an LSTM--achieves the highest accuracy of 84%, substantially outperforming both the best classical models (Logistic Regression and XGBoost, both at 71%) and all other deep architectures. We provide per-class precision, recall, F1-score, confusion matrices, and ROC analysis for every model, and offer a culturally grounded interpretation of misclassification patterns that reflects genuine overlaps in Nepal's musical traditions.


[136] 2603.15468

DMD Prediction of MIMO Channel Using Tucker Decomposition

Accurate channel state information (CSI) prediction is crucial for next-generation multiple-input multiple-output (MIMO) communication systems. Classical prediction methods often become inefficient for high-dimensional and rapidly time-varying channels. To improve prediction efficiency, it is essential to exploit the inherent low-rank tensor structure of the MIMO channel. Motivated by this observation, we propose a dynamic mode decomposition (DMD)-based prediction framework operating on the low-dimensional core tensors obtained via a Tucker decomposition. The proposed method predicts reduced-order channel cores, significantly lowering computational complexity. Simulation results demonstrate that the proposed approach preserves the dominant channel dynamics and achieves high prediction accuracy.
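The DMD step can be sketched on a generic reduced-order state sequence standing in for the vectorized Tucker cores; the dimensions and synthetic linear dynamics below are illustrative, not the paper's channel model:

```python
import numpy as np

# Exact DMD on a low-dimensional state sequence (a stand-in for the
# reduced channel cores obtained via Tucker decomposition).  The
# dimension r and the stable synthetic dynamics are illustrative.

rng = np.random.default_rng(1)
r = 4                                            # reduced (core) dimension
A_true = 0.95 * np.linalg.qr(rng.standard_normal((r, r)))[0]

# simulate T snapshots of the reduced state
T = 60
x = rng.standard_normal(r)
snaps = [x]
for _ in range(T - 1):
    x = A_true @ x
    snaps.append(x)
S = np.stack(snaps, axis=1)                      # r x T snapshot matrix

# DMD: least-squares operator mapping S[:, :-1] onto S[:, 1:]
X, Y = S[:, :-1], S[:, 1:]
A_dmd = Y @ np.linalg.pinv(X)

# one-step prediction of the next reduced state
x_pred = A_dmd @ S[:, -1]
x_next = A_true @ S[:, -1]
print(np.allclose(x_pred, x_next, atol=1e-5))
```

Because the operator is fit in the r-dimensional core space rather than on the full antenna-by-subcarrier channel tensor, the per-step prediction cost stays small, which is the complexity advantage the abstract describes.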


[137] 2603.15471

On the Derivation of Tightly-Coupled LiDAR-Inertial Odometry with VoxelMap

This note presents a concise mathematical formulation of tightly-coupled LiDAR-Inertial Odometry within an iterated error-state Kalman filter framework using a VoxelMap representation. Rather than proposing a new algorithm, it provides a clear and self-contained derivation that unifies the geometric modeling and probabilistic state estimation through consistent notation and explicit formulations. The document is intended to serve both as a technical reference and as an accessible entry point for a foundational understanding of the system architecture and estimation principles.


[138] 2603.15475

Seeing Beyond: Extrapolative Domain Adaptive Panoramic Segmentation

Cross-domain panoramic semantic segmentation has attracted growing interest as it enables comprehensive 360° scene understanding for real-world applications. However, it remains particularly challenging due to severe geometric Field of View (FoV) distortions and inconsistent open-set semantics across domains. In this work, we formulate an open-set domain adaptation setting, and propose Extrapolative Domain Adaptive Panoramic Segmentation (EDA-PSeg) framework that trains on local perspective views and tests on full 360° panoramic images, explicitly tackling both geometric FoV shifts across domains and semantic uncertainty arising from previously unseen classes. To this end, we propose the Euler-Margin Attention (EMA), which introduces an angular margin to enhance viewpoint-invariant semantic representation, while performing amplitude and phase modulation to improve generalization toward unseen classes. Additionally, we design the Graph Matching Adapter (GMA), which builds high-order graph relations to align shared semantics across FoV shifts while effectively separating novel categories through structural adaptation. Extensive experiments on four benchmark datasets under camera-shift, weather-condition, and open-set scenarios demonstrate that EDA-PSeg achieves state-of-the-art performance, robust generalization to diverse viewing geometries, and resilience under varying environmental conditions. The code is available at this https URL.


[139] 2603.15566

Lore: Repurposing Git Commit Messages as a Structured Knowledge Protocol for AI Coding Agents

As AI coding agents become both primary producers and consumers of source code, the software industry faces an accelerating loss of institutional knowledge. Each commit captures a code diff but discards the reasoning behind it - the constraints, rejected alternatives, and forward-looking context that shaped the decision. I term this discarded reasoning the Decision Shadow. This paper proposes Lore, a lightweight protocol that restructures commit messages - using native git trailers - into self-contained decision records carrying constraints, rejected alternatives, agent directives, and verification metadata. Lore requires no infrastructure beyond git, is queryable via a standalone CLI tool, and is discoverable by any agent capable of running shell commands. The paper formalizes the protocol, compares it against five competing approaches, stress-tests it against its strongest objections, and outlines an empirical validation path.


[140] 2603.15586

Computational Concept of the Psyche

This article presents an overview of approaches to modeling the human psyche in the context of constructing an artificial one. Based on this overview, a concept of cognitive architecture is proposed, in which the psyche is viewed as the operating system of a living or artificial subject, comprising a space of states, including the state of needs that determine the meaning of a subject's being in relation to stimuli from the external world, and intelligence as a decision-making system regarding actions in this world to satisfy these needs. Based on this concept, a computational formalization is proposed for creating artificial general intelligence systems for an agent through experiential learning in a state space that includes agent's needs, taking into account their biological or existential significance for the intelligent agent, along with agent's sensations and actions. Thus, the problem of constructing artificial general intelligence is formalized as a system for making optimal decisions in the space of specific agent needs under conditions of uncertainty, maximizing success in achieving goals, minimizing existential risks, and maximizing energy efficiency. A minimal experimental implementation of the model is presented.


[141] 2603.15597

AC-Foley: Reference-Audio-Guided Video-to-Audio Synthesis with Acoustic Transfer

Existing video-to-audio (V2A) generation methods predominantly rely on text prompts alongside visual information to synthesize audio. However, two critical bottlenecks persist: semantic granularity gaps in training data, such as conflating acoustically distinct sounds under coarse labels, and textual ambiguity in describing micro-acoustic features. These bottlenecks make it difficult to perform fine-grained sound synthesis using text-controlled modes. To address these limitations, we propose AC-Foley, an audio-conditioned V2A model that directly leverages reference audio to achieve precise and fine-grained control over generated sounds. This approach enables fine-grained sound synthesis, timbre transfer, zero-shot sound generation, and improved audio quality. By directly conditioning on audio signals, our approach bypasses the semantic ambiguities of text descriptions while enabling precise manipulation of acoustic attributes. Empirically, AC-Foley achieves state-of-the-art performance for Foley generation when conditioned on reference audio, while remaining competitive with state-of-the-art video-to-audio methods even without audio conditioning.


[142] 2603.15606

Saddle Point Evasion via Curvature-Regularized Gradient Dynamics

Nonconvex optimization underlies many modern machine learning and control tasks, where saddle points pose the dominant obstacle to reliable convergence in high-dimensional settings. Escaping these saddle points deterministically and at a controllable rate remains an open challenge: gradient descent is blind to curvature, stochastic perturbation methods lack deterministic guarantees, and Newton-type approaches suffer from Hessian singularity. We present Curvature-Regularized Gradient Dynamics (CRGD), which augments the objective with a smooth penalty on the most negative Hessian eigenvalue, yielding an augmented cost that serves as an optimization Lyapunov function with user-selectable convergence rates to second-order stationary points. Numerical experiments on a nonconvex matrix factorization example confirm that CRGD escapes saddle points across all tested configurations, with escape time that decreases with the eigenvalue gap, in contrast to gradient descent, whose escape time grows inversely with the gap.
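The core idea above, augmenting the objective with a smooth penalty on the most negative Hessian eigenvalue, can be illustrated with a minimal NumPy sketch. This is a toy under stated assumptions, not the paper's formulation: the test function, step size `eta`, penalty weight `beta`, and the finite-difference gradient of the penalty are all illustrative choices.

```python
import numpy as np

def crgd_step(grad, hess, x, eta=0.05, beta=1.0, eps=1e-4):
    """One curvature-regularized gradient step (illustrative sketch).

    The objective is augmented with softplus(-lambda_min(H(x))), a smooth
    penalty on the most negative Hessian eigenvalue; its gradient is
    estimated here by central finite differences for simplicity.
    """
    def penalty(z):
        lam_min = np.linalg.eigvalsh(hess(z))[0]   # smallest eigenvalue
        return np.log1p(np.exp(-lam_min))          # softplus(-lambda_min)

    g = grad(x).astype(float)
    gp = np.zeros_like(g)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        gp[i] = (penalty(x + e) - penalty(x - e)) / (2 * eps)
    return x - eta * (g + beta * gp)

# f(x, y) = x^3/3 - x + y^2/2 has a saddle at (-1, 0) and a minimum at (1, 0).
grad = lambda z: np.array([z[0] ** 2 - 1.0, z[1]])
hess = lambda z: np.array([[2.0 * z[0], 0.0], [0.0, 1.0]])

x = np.array([-1.0, 0.0])      # start exactly at the saddle point
for _ in range(300):
    x = crgd_step(grad, hess, x)
# plain gradient descent never leaves the saddle (the gradient is zero there),
# while the curvature penalty pushes the iterate toward the minimum at (1, 0)
```

Because the penalty gradient is nonzero wherever the smallest eigenvalue varies with position, the iterate drifts off the saddle deterministically, which is the qualitative behavior the paper's Lyapunov analysis formalizes with controllable rates.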


[143] 2210.03412

The Trajectory PHD Filter for Coexisting Point and Extended Target Tracking

This paper develops a general trajectory probability hypothesis density (TPHD) filter, which uses a general density for target-generated measurements and is able to estimate trajectories of coexisting point and extended targets. First, we provide a derivation of this general TPHD filter based on finding the best Poisson posterior approximation by minimizing the Kullback-Leibler divergence, without using probability generating functionals. Second, we adopt an efficient implementation of this filter, in which Gaussian densities are used for point targets and Gamma Gaussian Inverse Wishart densities for extended targets. The L-scan approximation is also proposed as a simplified version to mitigate the high computational cost. Simulation and experimental results show that the proposed filter is able to classify targets correctly and obtain accurate trajectory estimates.


[144] 2307.04880

Inertia-Constrained Generation Scheduling: Sample Selection, Learning-Embedded Optimization Modeling, and Computational Enhancement

Day-ahead generation scheduling is typically conducted by solving the security-constrained unit commitment (SCUC) problem. However, with the fast growth of inverter-based resources, grid inertia has been dramatically reduced, compromising the system's dynamic stability. Traditional SCUC (T-SCUC), without any inertia requirements, may no longer be effective for renewables-dominated grids. To address this, we propose the active linearized sparse neural network-embedded SCUC (ALSNN-SCUC) model, utilizing machine learning (ML) to incorporate system dynamic performance. A multi-output deep neural network (DNN) model is trained offline on strategically selected data samples to accurately predict frequency stability metrics: locational RoCoF and frequency nadir. Structured sparsity and active ReLU linearization are implemented to prune redundant DNN neurons, significantly reducing its size while ensuring prediction accuracy even at high sparsity levels. By embedding this ML-based frequency stability predictor into SCUC as constraints, the proposed ALSNN-SCUC model minimizes its computational complexity while ensuring frequency stability following a G-1 contingency. Case studies show that the proposed ALSNN-SCUC can enforce pre-specified frequency requirements without being overly conservative, outperforming five benchmark models including T-SCUC, two physics-based SCUC models, and two ML-based SCUC models. The proposed sparsification and active linearization strategies reduce the DNN-SCUC computing time by over 95% for both IEEE 24-bus and 118-bus systems, demonstrating the effectiveness and scalability of the proposed ALSNN-SCUC model.


[145] 2309.02650

Machine Learning-assisted Dynamics-Constrained Day-Ahead Energy Scheduling

The rapid expansion of inverter-based resources, such as wind and solar power plants, will significantly diminish the presence of conventional synchronous generators in future power grids with rich renewable energy sources. This transition introduces increased complexity and reduces dynamic stability in system operation and control, with low inertia being a widely recognized challenge. However, the literature has not thoroughly explored the grid dynamic performance associated with energy scheduling solutions that traditionally only consider grid steady-state constraints. This paper bridges the gap by enforcing grid dynamic constraints when conducting optimal energy scheduling; in particular, it explores locational post-contingency rate of change of frequency (RoCoF) requirements to accommodate substantial inertia reductions. This paper introduces a machine learning-assisted RoCoF-constrained unit commitment (ML-RCUC) model designed to ensure RoCoF stability after the most severe generator outage while maintaining operational efficiency. A graph-informed neural network (GINN)-based RoCoF predictor is first trained on a high-fidelity simulation dataset to track the highest locational RoCoF, which is then reformulated as mixed-integer linear programming constraints that are integrated into the unit commitment model. Case studies, which solve the ML-RCUC optimization problem and validate its solutions with time-domain simulations, demonstrate that the proposed method can ensure locational RoCoF stability with minimum conservativeness.


[146] 2310.17180

A Forward Reachability Perspective on Control Barrier Functions and Discount Factors in Reachability Analysis

Control invariant sets are crucial for various methods that aim to design safe control policies for systems whose state constraints must be satisfied over an indefinite time horizon. In this article, we explore the connections among reachability, control invariance, and Control Barrier Functions (CBFs). Unlike prior formulations based on backward reachability concepts, we establish a strong link between these three concepts by examining the inevitable Forward Reachable Tube (FRT), which is the set of states such that every trajectory reaching the FRT must have passed through a given initial set of states. First, our findings show that the inevitable FRT is a robust control invariant set if it has a continuously differentiable boundary. If the boundary is not differentiable, the FRT may lose invariance. We also show that any robust control invariant set including the initial set is a superset of the FRT if the boundary of the invariant set is differentiable. Next, we formulate a differential game between the control and disturbance, where the inevitable FRT is characterized by the zero-superlevel set of the value function. By incorporating a discount factor in the cost function of the game, the barrier constraint of the CBF naturally arises in the Hamilton-Jacobi (HJ) equation and determines the optimal policy. The resulting FRT value function serves as a CBF-like function, and conversely, any valid CBF is also a forward reachability value function. We further prove that any $C^1$ supersolution of the HJ equation for the FRT value functions is a valid CBF and characterizes a robust control invariant set that outer-approximates the FRT. Building on this property, we finally devise a novel method that learns neural control barrier functions representing a control invariant superset of the FRT of a given initial set.


[147] 2402.08027

On the Stability of Undesirable Equilibria in the Quadratic Program Framework for Safety-Critical Control

Control Lyapunov functions (CLFs) and Control Barrier Functions (CBFs) have been used to develop provably safe controllers by means of quadratic programs (QPs). This framework guarantees safety in the form of trajectory invariance with respect to a given set, but it can introduce undesirable equilibrium points to the closed loop system, which can be asymptotically stable. In this work, we present a detailed study of the formation and stability of equilibrium points with the CLF-CBF-QP framework with multiple CBFs. In particular, we prove that undesirable equilibrium points occur for most systems, and their stability is dependent on the CLF and CBF geometrical properties. We introduce the concept of CLF-CBF compatibility for a system, regarding a CLF-CBF pair inducing no stable equilibrium points other than the CLF global minimum on the corresponding closed-loop dynamics. Sufficient conditions for CLF-CBF compatibility for LTI and drift-less full-rank systems with quadratic CLF and CBFs are derived, and we propose a novel control strategy to induce smooth changes in the CLF geometry at certain regions of the state space in order to satisfy the CLF-CBF compatibility conditions, aiming to achieve safety with respect to multiple safety objectives and quasi-global convergence of the trajectories towards the CLF minimum. Numerical simulations illustrate the applicability of the proposed method.


[148] 2404.09876

Conservative Bias Linear Power Flow Approximations: Application to Unit Commitment

Accurate modeling of power flow behavior is essential for a wide range of power system applications, yet the nonlinear and nonconvex structure of the underlying equations often limits their direct use in large-scale optimization problems. As a result, linear models are frequently adopted to improve computational tractability, though these simplifications can introduce excessive approximation error or lead to constraint violations. This paper presents a linear approximation framework, referred to as Conservative Bias Linear Approximations (CBLA), that systematically incorporates conservativeness into the approximation process. Rather than solely minimizing local linearization error, CBLA constructs linear constraints that bound the nonlinear functions of interest over a defined operating region while reducing the overall approximation bias. The proposed approach maintains the simplicity of linear formulations and allows the approximation to be shaped through user-defined loss functions tailored to specific system quantities. Numerical studies demonstrate that CBLA provides more reliable and accurate approximations than conventional linearization techniques, and its integration into a unit commitment formulation improves feasibility and reduces operating costs.
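CBLA shapes its linear bounds through user-defined loss functions over an operating region; a much simpler stand-in conveys the core idea of building conservativeness into a linear fit: fit a least-squares line, then shift its intercept so it over-estimates the nonlinear function at every sample. The function name and the quadratic test function are illustrative, not taken from the paper.

```python
import numpy as np

def conservative_linear_fit(x, y):
    """Fit y ~ a*x + b by least squares, then shift b upward so the line
    over-estimates y at every sample point (a conservative approximation).
    """
    A = np.column_stack([x, np.ones_like(x)])
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    b += np.max(y - (a * x + b))   # enforce a*x + b >= y at all samples
    return a, b

# Conservative linear over-estimator of f(x) = x^2 on [-1, 1]:
# by symmetry the slope is ~0 and the shifted line is y = 1.
xs = np.linspace(-1.0, 1.0, 201)
a, b = conservative_linear_fit(xs, xs ** 2)
```

The shift trades average accuracy for a one-sided guarantee on the sampled region, which is the same trade-off CBLA manages, except that CBLA optimizes the bias directly rather than correcting it after the fit.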


[149] 2410.15832

Nonlinear Bayesian Filtering with Natural Gradient Gaussian Approximation

Practical Bayes filters often assume the state distribution of each time step to be Gaussian for computational tractability, resulting in the so-called Gaussian filters. When facing nonlinear systems, Gaussian filters such as the extended Kalman filter (EKF) or unscented Kalman filter (UKF) typically rely on certain linearization techniques, which can introduce large estimation errors. To address this issue, this paper reconstructs the prediction and update steps of Gaussian filtering as solutions to two distinct optimization problems, whose optimality conditions are found to have analytical forms from Stein's lemma. It is observed that the stationary point for the prediction step requires calculating the first two moments of the prior distribution, which is equivalent to that step in existing moment-matching filters. In the update step, instead of linearizing the model to approximate the stationary points, we propose an iterative approach to directly minimize the update step's objective and thereby avoid linearization errors. To perform steepest descent on the Gaussian manifold, we derive its natural gradient, which leverages the Fisher information matrix to adjust the gradient direction, accounting for the curvature of the parameter space. Combining this update step with moment matching in the prediction step, we introduce a new iterative filter for nonlinear systems called the \textit{N}atural Gr\textit{a}dient Gaussia\textit{n} Appr\textit{o}ximation filter, or NANO filter for short. We prove that the NANO filter locally converges to the optimal Gaussian approximation at each time step. Furthermore, the estimation error is proven to be exponentially bounded for nearly linear measurement equations and low noise levels by constructing a supermartingale-like property across consecutive time steps.


[150] 2502.02687

NDKF: A Neural-Enhanced Distributed Kalman Filter for Nonlinear Multi-Sensor Estimation

We propose a Neural-Enhanced Distributed Kalman Filter (NDKF) for multi-sensor state estimation in nonlinear systems. Unlike traditional Kalman filters that rely on explicit analytical models and assume centralized fusion, NDKF leverages neural networks to replace analytical process and measurement models with learned mappings while each node performs local prediction and update steps and exchanges only compact posterior summaries with its neighbors. This distributed design reduces communication overhead and avoids a central fusion bottleneck. We provide sufficient mean-square stability conditions under bounded Jacobians and well-conditioned innovations, together with practically checkable proxies such as Jacobian norm control and innovation monitoring. We also discuss consistency under learned-model mismatch, including covariance inflation and covariance-intersection fusion when cross-correlations are uncertain. Simulations on a 2D nonlinear system with four partially observing nodes show that NDKF outperforms a distributed EKF baseline under model mismatch and yields improved estimation accuracy with modest communication requirements.


[151] 2502.02756

Adaptive Voxel-Weighted Loss Using L1 Norms in Deep Neural Networks for Detection and Segmentation of Prostate Cancer Lesions in PET/CT Images

Accurate automated detection of recurrent prostate cancer in PSMA PET/CT scans is challenging due to heterogeneous lesion size, activity, anatomical location, and intra- and inter-class imbalances. Conventional deep learning loss functions often produce suboptimal optimization, as gradients are dominated by easy background voxels or extreme outliers. To address this, we propose L1-weighted Dice Focal Loss (L1DFL), which harmonizes gradient magnitudes across voxels using L1 norms to adaptively weight samples based on classification difficulty, resulting in well-calibrated predictions with a bimodal separation between correct and incorrect predictions. We trained three 3D convolutional networks (Attention U-Net, SegResNet, U-Net) and a transformer-based UNETR model on 380 PSMA PET/CT scans. PET and CT volumes were concatenated as input to the models. We also fine-tuned the SAM-Med3D foundation model with the different loss functions and evaluated their performance. Across architectures, L1DFL consistently outperformed Dice Loss (DL) and Dice Focal Loss (DFL), achieving at least a 4% improvement in Dice Similarity Coefficient. F1 scores were higher by 6% and 26% compared to DL and DFL, respectively. While DFL produced more false positives and DL struggled with larger lesions, L1DFL achieved balanced detection, minimizing false detections while maintaining high true positive rates. The gradient harmonization mechanism ensured robustness across varying lesion sizes, volumes, and spread. The code is publicly available at: this https URL.
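The L1-based adaptive weighting admits a compact reading: for binary targets the voxel-wise L1 distance |p - y| equals 1 - p_t (the probability assigned to the true class), so weighting cross-entropy by |p - y|^gamma yields a focal-style modulation keyed to classification difficulty. The sketch below shows only that weighting mechanism; the paper's full L1DFL also includes a Dice term, and the function name and gamma default are illustrative.

```python
import numpy as np

def l1_weighted_ce(p, y, gamma=2.0, eps=1e-7):
    """Cross-entropy reweighted per voxel by the L1 error |p - y|^gamma.

    Hard voxels (large |p - y|) receive large weights while easy background
    voxels are down-weighted -- a sketch of difficulty-adaptive weighting,
    not the paper's complete Dice Focal formulation.
    """
    p = np.clip(p, eps, 1.0 - eps)
    w = np.abs(p - y) ** gamma                       # difficulty-based weight
    ce = -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))
    return float(np.mean(w * ce))
```

A confidently correct voxel contributes almost nothing, while a confidently wrong one dominates the mean, which is how the loss keeps easy background from swamping the gradient.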


[152] 2503.17543

Echo-E$^3$Net: Efficient Endocardial Spatio-Temporal Network for Ejection Fraction Estimation

Objective: To develop a robust and computationally efficient deep learning model for automated left ventricular ejection fraction (LVEF) estimation from echocardiography videos that is suitable for real-time point-of-care ultrasound (POCUS) deployment. Methods: We propose Echo-E$^3$Net, an endocardial spatio-temporal network that explicitly incorporates cardiac anatomy into LVEF prediction. The model comprises a dual-phase Endocardial Border Detector (E$^2$CBD) that uses phase-specific cross attention to localize end-diastolic and end-systolic endocardial landmarks and to learn phase-aware landmark embeddings, and an Endocardial Feature Aggregator (E$^2$FA) that fuses these embeddings with global statistical descriptors of deep feature maps to refine EF regression. Training is guided by a multi-component loss inspired by Simpson's biplane method that jointly supervises EF and landmark geometry. We evaluate Echo-E$^3$Net on the EchoNet-Dynamic dataset using RMSE and R$^2$ while reporting parameter count and GFLOPs to characterize efficiency. Results: On EchoNet-Dynamic, Echo-E$^3$Net achieves an RMSE of 5.20 and an R$^2$ score of 0.82 while using only 1.55M parameters and 8.05 GFLOPs. The model operates without external pre-training, heavy data augmentation, or test-time ensembling, supporting practical real-time deployment. Conclusion: By combining phase-aware endocardial landmark modeling with lightweight spatio-temporal feature aggregation, Echo-E$^3$Net improves the efficiency and robustness of automated LVEF estimation and is well-suited for scalable clinical use in POCUS settings. Code is available at this https URL


[153] 2504.02057

Path planning with moving obstacles using stochastic optimal control

Navigating a collision-free and optimal trajectory for a robot is a challenging task, particularly in environments with moving obstacles such as humans. We formulate this problem as a stochastic optimal control problem. Since solving the full problem is computationally demanding, we introduce a tractable approximation whose Bellman equation can be solved efficiently. The resulting value function is then incorporated as a terminal penalty in an online rollout framework. We construct a trade-off curve between safety and performance to identify an appropriate weighting between them, and compare the performance with other methods. Simulation results show that the proposed rollout approach can be tuned to reach the target in nearly the same expected time as receding horizon $A^\star$ while maintaining a larger expected minimum distance to the moving obstacle. The results also show that the proposed method outperforms the considered CBF-based methods when a larger obstacle clearance is desired, while achieving comparable performance otherwise.
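The rollout construction above, solving a tractable approximation offline and then using its value function as a terminal penalty online, can be illustrated with a deterministic grid toy. This is a hedged simplification: there are no moving obstacles or stochastic dynamics here, and the grid size, cost values, and wall layout are invented for illustration.

```python
import numpy as np

def value_iteration(cost, goal, n_sweeps=100):
    """Tabular value iteration for shortest-cost paths on a 4-connected grid.
    cost[i, j] is the price of entering cell (i, j)."""
    H, W = cost.shape
    V = np.full((H, W), np.inf)
    V[goal] = 0.0
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for _ in range(n_sweeps):
        for i in range(H):
            for j in range(W):
                if (i, j) == goal:
                    continue
                for di, dj in moves:
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        V[i, j] = min(V[i, j], cost[ni, nj] + V[ni, nj])
    return V

def rollout(start, goal, cost, V, max_steps=50):
    """Greedy one-step lookahead with V as the terminal penalty."""
    H, W = cost.shape
    path, s = [start], start
    for _ in range(max_steps):
        if s == goal:
            break
        i, j = s
        nbrs = [(i + di, j + dj) for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]
                if 0 <= i + di < H and 0 <= j + dj < W]
        s = min(nbrs, key=lambda t: cost[t] + V[t])   # stage cost + value
        path.append(s)
    return path

cost = np.ones((5, 5))
cost[0:4, 2] = 1e6                  # a wall with a gap at the bottom row
goal = (0, 4)
V = value_iteration(cost, goal)
path = rollout((0, 0), goal, cost, V)
```

In the paper the value function comes from an approximate stochastic problem rather than the exact one, so the online one-step (or multi-step) lookahead corrects for the approximation while keeping per-step computation cheap.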


[154] 2504.15453

Barrier-Riccati Synthesis for Nonlinear Safe Control with Expanded Region of Attraction

We present a Riccati-based framework for safety-critical nonlinear control that integrates the barrier states (BaS) methodology with the State-Dependent Riccati Equation (SDRE) approach. The BaS formulation embeds safety constraints into the system dynamics via auxiliary states, enabling safety to be treated as a control objective. To overcome the limited region of attraction in linear BaS controllers, we extend the framework to nonlinear systems using SDRE synthesis applied to the barrier-augmented dynamics and derive a matrix inequality condition that certifies forward invariance of a large region of attraction and guarantees asymptotic safe stabilization. The resulting controller is computed online via pointwise Riccati solutions. We validate the method on an unstable constrained system and cluttered quadrotor navigation tasks, demonstrating improved constraint handling, scalability, and robustness near safety boundaries. This framework offers a principled and computationally tractable solution for synthesizing nonlinear safe feedback in safety-critical environments.


[155] 2504.18951

Quadratic Programming Approach to Flight Envelope Protection Using Control Barrier Functions

Ensuring the safe operation of aerospace systems within their prescribed flight envelope is a fundamental requirement for modern flight control systems. Flight envelope protection (FEP) prevents violations of aerodynamic, structural, and performance constraints, mitigating risks such as stall, excessive loads, and loss of control. Conventional FEP approaches, such as reference clipping via saturation functions and model-based command filtering, impose constraints at the reference input level but often fail to account for closed-loop system dynamics, potentially leading to constraint violations during transients. This paper introduces a new approach to flight envelope protection by employing a quadratic-programming-based safety filter using control barrier functions to dynamically enforce flight envelope constraints while preserving control performance. Unlike traditional reference filtering methods, the proposed control barrier function-based safety filter actively ensures forward invariance of the safe flight envelope set while seamlessly integrating with existing control architectures. The framework is implemented in a nonlinear missile flight control system and evaluated in a simulated environment. The results demonstrate its ability to prevent constraint violations while minimizing conservatism, offering a robust alternative to existing flight envelope protection methodologies.


[156] 2505.10492

Multi-contrast laser endoscopy for in vivo gastrointestinal imaging

White light endoscopy is the clinical gold standard for detecting diseases in the gastrointestinal tract. Most applications involve identifying visual abnormalities in tissue color, texture, and shape. Unfortunately, the contrast of these features is often subtle, causing many clinically relevant cases to go undetected. To overcome this challenge, we introduce Multi-contrast Laser Endoscopy (MLE): a platform for widefield clinical imaging with rapidly tunable spectral, coherent, and directional illumination. We demonstrate three capabilities of MLE: enhancing tissue chromophore contrast with multispectral diffuse reflectance, quantifying blood flow using laser speckle contrast imaging, and characterizing mucosal topography using photometric stereo. We validate MLE with benchtop models, then demonstrate MLE in vivo during clinical colonoscopies. MLE images from 31 polyps demonstrate an approximate three-fold improvement in contrast and a five-fold improvement in color difference compared to white light and narrow band imaging. With the ability to reveal multiple complementary types of tissue contrast while seamlessly integrating into the clinical environment, MLE shows promise as an investigative tool to improve gastrointestinal imaging.


[157] 2505.21384

Label-free super-resolution color flow imaging using ultrasound phase microscopy

Ultrasound vascular imaging is limited by acoustic diffraction, restricting visualization of the microvessels essential for understanding organ function and disease. Label-free super-resolution methods exploiting endogenous red blood cells have faced challenges in acquisition time and complexity. Here we introduce ultrasound phase microscopy (UPM), a label-free technique that achieves sub-wavelength-resolution flow imaging by exploiting phase differences between consecutively beamformed frames with mismatched apodizations, without requiring localization or tracking. Validated in vivo across multiple species, organs, and ultrasound platforms, UPM attains spatial resolutions better than 5 µm, up to a tenfold improvement over conventional color flow imaging, while accelerating data acquisition by nearly two orders of magnitude compared to ultrasound localization microscopy. UPM enables rapid, high-resolution vascular imaging and offers a practical approach to label-free super-resolution vascular imaging.


[158] 2506.02841

Enhancing Sample Efficiency in Multi-Agent RL with Uncertainty Quantification and Selective Exploration

Multi-agent reinforcement learning (MARL) methods have achieved state-of-the-art results on a range of multi-agent tasks. Yet, MARL algorithms typically require significantly more environment interactions than their single-agent counterparts to converge, a problem exacerbated by the difficulty in exploring over a large joint action space and the high variance intrinsic to MARL environments. To tackle these issues, we propose a novel algorithm that combines a decomposed centralized critic with decentralized ensemble learning, incorporating several key contributions. The main component in our scheme is a selective exploration method that leverages ensemble kurtosis. We extend the global decomposed critic with a diversity-regularized ensemble of individual critics and utilize its excess kurtosis to guide exploration toward high-uncertainty states and actions. To improve sample efficiency, we train the centralized critic with a novel truncated variation of the TD($\lambda$) algorithm, enabling efficient off-policy learning with reduced variance. On the actor side, our suggested algorithm adapts the mixed samples approach to MARL, mixing on-policy and off-policy loss functions for training the actors. This approach balances between stability and efficiency and outperforms purely off-policy learning. The evaluation shows our method outperforms state-of-the-art baselines on standard MARL benchmarks, including a variety of SMAC II maps.
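The kurtosis-guided exploration idea, using heavy-tailed disagreement within a critic ensemble as an uncertainty signal, can be sketched with NumPy. The threshold and the toy Q-values below are illustrative assumptions; the paper's method operates on a diversity-regularized critic ensemble inside a full MARL training pipeline.

```python
import numpy as np

def excess_kurtosis(q):
    """Excess kurtosis of each action's Q-estimates across the ensemble axis."""
    mu = q.mean(axis=0)
    var = q.var(axis=0)
    return ((q - mu) ** 4).mean(axis=0) / var ** 2 - 3.0

def select_action(q_ensemble, threshold=1.0):
    """Exploit the ensemble mean unless some action's estimates are
    heavy-tailed (high excess kurtosis), in which case explore that action."""
    kurt = excess_kurtosis(q_ensemble)
    if kurt.max() > threshold:
        return int(np.argmax(kurt))                    # high-uncertainty action
    return int(np.argmax(q_ensemble.mean(axis=0)))     # greedy action

# 8 ensemble members x 2 actions: action 0 is confidently best on average,
# but action 1's estimates contain a heavy-tailed outlier.
q = np.column_stack([
    np.linspace(0.9, 1.1, 8),                  # tight agreement (low kurtosis)
    np.array([0, 0, 0, 0, 0, 0, 0, 5.0]),      # one outlier (high kurtosis)
])
```

Unlike variance alone, excess kurtosis distinguishes broad but consistent disagreement from the outlier-driven disagreement that signals a genuinely under-explored state-action pair.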


[159] 2506.06387

Model-based Implicit Neural Representation for sub-wavelength Radio Localization

The increasing deployment of large antenna arrays at base stations has significantly improved the spatial resolution and localization accuracy of radio-localization methods. However, traditional signal processing techniques struggle in complex radio environments, particularly in scenarios dominated by non line of sight (NLoS) propagation paths, resulting in degraded localization accuracy. Recent developments in machine learning have facilitated the development of machine learning-assisted localization techniques, enhancing localization accuracy in complex radio environments. However, these methods often involve substantial computational complexity during both the training and inference phases. This work extends the well-established fingerprinting-based localization framework by simultaneously reducing its memory requirements and improving its accuracy. Specifically, a model-based neural network is used to learn the location-to-channel mapping, and then serves as a generative neural channel model. This generative model augments the fingerprinting comparison dictionary while reducing the memory requirements. The proposed method outperforms fingerprinting baselines by achieving sub-wavelength localization accuracy, even in complex static NLoS environments. Remarkably, it offers an improvement by several orders of magnitude in localization accuracy, while simultaneously reducing memory requirements by an order of magnitude compared to classical fingerprinting methods.


[160] 2506.12639

Channel Estimation for Downlink Communications Based on Dynamic Metasurface Antennas

Dynamic metasurface antennas (DMAs) are emerging as a promising technology to enable energy-efficient, large array-based multi-antenna systems. This paper presents a simple channel estimation scheme for the downlink of a multiple-input single-output orthogonal frequency division multiplexing (MISO-OFDM) communication system exploiting DMAs. The proposed scheme extracts separate estimates of the wireless channel and the unknown waveguide propagation vector using a simple iterative algorithm based on the parallel factor (PARAFAC) decomposition. Obtaining decoupled estimates of the wireless channel and inner waveguide vector enables isolating and compensating for the latter's effect when designing the DMA beamformer, regardless of the wireless channel state, which evolves much faster due to its shorter coherence time and bandwidth. Additionally, our solution operates in a data-aided manner, delivering estimates of useful data symbols jointly with channel estimates, without requiring sequential pilot and data stages. To the best of our knowledge, this is the first work to explore this channel estimation approach. Numerical results corroborate the notable performance of the proposed scheme.


[161] 2507.12237

Constructed Realities? Technical and Contextual Anomalies in a High-Profile Image

This study offers a forensic assessment of a widely circulated photograph featuring Andrew Mountbatten-Windsor, Virginia Giuffre, and Ghislaine Maxwell, an image that has played a pivotal role in public discourse and legal narratives. Numerous inconsistencies emerge across multiple published versions, including irregularities in lighting, posture, and physical interaction, which are more compatible with digital compositing than with an unmanipulated original. The analysis includes a 3D reconstruction of the scene geometry and a search of reference images indexed to the identified camera model. Because no original print is available, and because no verifiable chain of custody exists for the original, definitive conclusions remain unattainable. Even so, the technical and contextual anomalies indicate that the photograph may have been deliberately constructed, particularly since at least one source image unrelated to the case was identified. In the absence of further evidence, it remains an unresolved yet symbolically charged artifact within a complex story of abuse, memory, and contested truth.


[162] 2508.16055

Clutter Suppression in ISAC Systems with Compound Reconfigurable Antenna Arrays

Integrated sensing and communication (ISAC) systems often suffer severe performance degradation due to strong clutter echoes, and spatial-only beamforming is often inadequate for realistic array sizes. This paper addresses clutter suppression in ISAC by leveraging compound reconfigurable antenna (CRA) arrays, which simultaneously enable dynamic adjustment of both radiation patterns and polarization states, thus substantially expanding the degrees of freedom available in the electromagnetic (EM) domain. We develop a unified compound channel model that integrates virtual angular-domain responses, spatial propagation, and polarization rotation/depolarization. Leveraging statistical information about target and clutter covariances, we formulate a joint EM-domain and baseband-domain optimization aimed at maximizing the radar signal-to-clutter-plus-noise ratio (SCNR). The formulation also enforces multiuser downlink signal-to-interference-plus-noise ratio constraints, a total transmit-power budget, and finite-codebook EM-mode selection. The resulting nonconvex mixed-integer problem is tackled by an alternating algorithm that combines fractional programming and majorization-minimization with second-order cone programming-based updates and a penalty relaxation for mode selection. Extensive simulations in QuaDRiGa-based channel environments validate the effectiveness of the proposed CRA array design, demonstrating up to 11 dB SCNR improvements over conventional beamforming methods relying solely on baseband-domain optimization and confirming the substantial benefits of fully exploiting EM-domain reconfigurability for clutter-rich ISAC scenarios.


[163] 2509.11022

Privacy-Preserving Uncertainty Disclosure for Facilitating Enhanced Energy Storage Dispatch

This paper proposes a novel privacy-preserving uncertainty disclosure framework, enabling system operators to release marginal value function bounds to reduce the conservativeness of interval forecast and mitigate excessive withholding, thereby enhancing storage dispatch and social welfare. We develop a risk-averse storage arbitrage model based on stochastic dynamic programming, explicitly accounting for uncertainty intervals in value function training. Real-time marginal value function bounds are derived using a rolling-horizon chance-constrained economic dispatch formulation. We rigorously prove that the bounds reliably cap the true opportunity cost and dynamically converge to the hindsight value. We verify that both the marginal value function and its bounds monotonically decrease with the state of charge (SoC) and increase with uncertainty, providing a theoretical basis for risk-averse strategic behaviors and SoC-dependent designs. An adjusted storage dispatch algorithm is further designed using these bounds. We validate the effectiveness of the proposed framework via an agent-based simulation on the ISO-NE test system. Under 50% renewable capacity and 35% storage capacity, the proposed bounds enhance storage response by 38.91% and reduce the optimality gap to 3.91% through improved interval predictions. Additionally, by mitigating excessive withholding, the bounds yield an average system cost reduction of 0.23% and an average storage profit increase of 13.22%. These benefits further scale with higher prediction conservativeness, storage capacity, and system uncertainty.


[164] 2509.11045

Opinion Clustering under the Friedkin-Johnsen Model: Agreement in Disagreement

The convergence of opinions in the Friedkin-Johnsen (FJ) framework is well studied, but the topological conditions leading to opinion clustering remain less explored. To bridge this gap, we examine the role of topology in the emergence of opinion clusters within the network. The key contribution of the paper lies in the introduction of the notion of topologically prominent agents, referred to as Locally Topologically Persuasive (LTP) agents. Interestingly, each LTP agent is associated with a unique set of (non-influential) agents in its vicinity. Using them, we present conditions to obtain opinion clusters in the FJ framework in any arbitrarily connected digraph. A key advantage of the proposed result is that the resulting opinion clusters are independent of the edge weights and the stubbornness of the agents. Finally, we demonstrate using simulation results that, by suitably placing LTP agents, one can design networks that achieve any desired opinion clustering.
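The clustering behavior described above can be made concrete with a minimal simulation of the standard Friedkin-Johnsen update; the six-agent network, stubbornness values, and initial opinions below are illustrative assumptions, not the paper's LTP construction. Two stubborn agents anchor opposite ends of a weakly coupled network, and opinions settle into two clusters.

```python
import numpy as np

def fj_opinions(W, stubbornness, x0, n_iter=500):
    """Iterate the standard Friedkin-Johnsen update
        x(t+1) = diag(s) W x(t) + diag(1 - s) x0,
    where s = 1 - stubbornness is each agent's susceptibility."""
    s = 1.0 - stubbornness
    x = x0.copy()
    for _ in range(n_iter):
        x = s * (W @ x) + (1.0 - s) * x0
    return x

# Toy network (illustrative): two triads joined by a weak 2 <-> 3 link;
# agents 0 and 5 are highly stubborn and anchor opposite opinion clusters.
W = np.array([
    [0.0, 0.5, 0.5, 0.0, 0.0, 0.0],
    [0.5, 0.0, 0.5, 0.0, 0.0, 0.0],
    [0.4, 0.4, 0.0, 0.2, 0.0, 0.0],
    [0.0, 0.0, 0.2, 0.0, 0.4, 0.4],
    [0.0, 0.0, 0.0, 0.5, 0.0, 0.5],
    [0.0, 0.0, 0.0, 0.5, 0.5, 0.0],
])  # row-stochastic influence weights
stub = np.array([0.9, 0.0, 0.0, 0.0, 0.0, 0.9])
x0 = np.array([1.0, 0.8, 0.6, -0.6, -0.8, -1.0])
x_inf = fj_opinions(W, stub, x0)  # settles into a positive and a negative cluster
```

Because the example is symmetric under reversing agent order and flipping opinion signs, the steady state is antisymmetric, which is a convenient sanity check on the implementation.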


[165] 2509.13360

PREDICT-GBM: A multi-center platform to advance personalized glioblastoma radiotherapy planning

Glioblastoma recurrence is largely driven by diffuse infiltration beyond radiologically visible tumor margins, yet standard radiotherapy, the mainstay of glioblastoma treatment, relies on uniform expansions that ignore patient-specific biological and anatomical factors. While computational models promise to map this invisible growth and guide personalized treatment planning, their clinical translation is hindered by the lack of standardized, large-scale benchmarking and reproducible validation workflows. To bridge this gap, we present PREDICT-GBM, a comprehensive open-source platform that integrates a curated, longitudinal, multi-center dataset of 243 patients with a standardized evaluation pipeline, supporting model development and validation. We demonstrate PREDICT-GBM's potential by training and benchmarking a novel U-Net-based recurrence prediction model against state-of-the-art biophysical and data-driven methods. Our results show that both biophysical and deep-learning approaches significantly outperform standard-of-care protocols in predicting future recurrence sites while maintaining iso-volumetric treatment constraints. Notably, our U-Net model achieved a superior coverage of enhancing recurrence (79.37 +/- 2.08 %), markedly surpassing the standard-of-care (paired Wilcoxon signed-rank test, p = 0.0000057). Furthermore, the biophysical model GliODIL reached 78.91 +/- 2.08 % (p = 0.00045), validating the platform's ability to compare diverse modeling paradigms. By providing the first rigorous, reproducible ecosystem for model training and validation, PREDICT-GBM eliminates a major bottleneck for personalized, computationally guided radiotherapy. This work establishes a new standard for developing computationally guided, personalized radiotherapy, with the platform, models, and data openly available at this http URL


[166] 2509.13989

Do You Hear What I Mean? Quantifying the Instruction-Perception Gap in Instruction-Guided Expressive Text-To-Speech Systems

Instruction-guided text-to-speech (ITTS) enables users to control speech generation through natural language prompts, offering a more intuitive interface than traditional TTS. However, the alignment between user style instructions and listener perception remains largely unexplored. This work first presents a perceptual analysis of ITTS controllability across two expressive dimensions (adverbs of degree and graded emotion intensity) and collects human ratings on speaker age and word-level emphasis attributes. To comprehensively reveal the instruction-perception gap, we present the Expressive VOice Control (E-VOC) corpus, a data collection with large-scale human evaluations. Furthermore, we reveal that (1) gpt-4o-mini-tts is the most reliable ITTS model, showing strong alignment between instructions and generated utterances across acoustic dimensions; (2) the five analyzed ITTS systems tend to generate adult voices even when the instructions ask for child or elderly voices; and (3) fine-grained control remains a major challenge, indicating that most ITTS systems have substantial room for improvement in interpreting slightly different attribute instructions.


[167] 2509.19001

HD-PPT: Hierarchical Decoding of Content- and Prompt-Preference Tokens for Instruction-based TTS

Large Language Model (LLM)-based Text-to-Speech (TTS) models have already reached a high degree of naturalness. However, precise control over TTS inference remains challenging. Although instruction-based Text-to-Speech (Instruct-TTS) models have been proposed, these models still lack fine-grained control due to the modality gap between single-level text instructions and multilevel speech tokens. To address this limitation, we propose HD-PPT, a framework that transforms speech synthesis into a structured, hierarchical task. To enable fine-grained control, we introduce a novel speech codec to extract distinct prompt-preference and content-preference tokens from the complex speech tokens, supervised by automatic speech recognition (ASR) and cross-lingual audio-text pre-training (CLAP) objectives. To bridge the modality gap of these tokens, we propose a hierarchical decoding strategy, where the LLM generates tokens in a structured order: first semantic, then fine-grained style, and finally complete acoustic representation. Extensive experiments demonstrate that this hierarchical paradigm significantly improves instruction adherence and achieves state-of-the-art naturalness, validating our approach for precise and controllable speech synthesis. Audio samples are available at this https URL.


[168] 2509.20396

Data-Efficient ASR Personalization for Non-Normative Speech Using an Uncertainty-Based Phoneme Difficulty Score for Guided Sampling

ASR systems struggle with non-normative speech due to high acoustic variability and data scarcity. We propose a data-efficient method using phoneme-level uncertainty to guide fine-tuning for personalization. Instead of computationally expensive ensembles, we leverage Variational Low-Rank Adaptation (VI LoRA) to estimate epistemic uncertainty in foundation models. These estimates form a composite Phoneme Difficulty Score (PhDScore) that drives a targeted oversampling strategy. Evaluated on English and German datasets, including a longitudinal analysis against two clinical reports taken one year apart, we demonstrate that: (1) VI LoRA-based uncertainty aligns better with expert clinical assessments than standard entropy; (2) PhDScore captures stable, persistent articulatory difficulties; and (3) uncertainty-guided sampling significantly improves ASR accuracy for impaired speech.
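The uncertainty-guided oversampling idea can be illustrated with a minimal sketch: draw training utterances with probability proportional to an aggregate difficulty score. The toy corpus, scores, and aggregation (a plain mean) are hypothetical stand-ins for the paper's PhDScore pipeline, not its actual implementation.

```python
import numpy as np

def difficulty_guided_sampling(utterance_ids, phoneme_scores, n_draws, rng):
    """Oversample utterances in proportion to the mean difficulty
    score of the phonemes they contain (a simple stand-in for a
    composite per-phoneme difficulty score)."""
    weights = np.array([np.mean(s) for s in phoneme_scores])
    probs = weights / weights.sum()
    return rng.choice(utterance_ids, size=n_draws, p=probs, replace=True)

# Toy corpus (illustrative): utterance 2 contains the hardest phonemes,
# so it should dominate the resampled training mix.
ids = np.array([0, 1, 2])
scores = [[0.1, 0.2], [0.2, 0.3], [0.9, 1.0]]
rng = np.random.default_rng(0)
sample = difficulty_guided_sampling(ids, scores, n_draws=1000, rng=rng)
```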


[169] 2509.20397

Variational Low-Rank Adaptation for Personalized Impaired Speech Recognition

Speech impairments resulting from congenital disorders, such as cerebral palsy, Down syndrome, or Apert syndrome, as well as acquired brain injuries due to stroke, traumatic accidents, or tumors, present major challenges to automatic speech recognition (ASR) systems. Despite recent advancements, state-of-the-art ASR models like Whisper still struggle with non-normative speech due to limited training data availability and high acoustic variability. Moreover, collecting and annotating non-normative speech is burdensome: speaking is effortful for many affected individuals, while laborious annotation often requires caregivers familiar with the speaker. This work introduces a novel ASR personalization method based on Bayesian Low-rank Adaptation for data-efficient fine-tuning. We validate our method on the English UA-Speech dataset and a newly collected German speech dataset, BF-Sprache, from a child with structural speech impairment. The dataset and approach are designed to reflect the challenges of low-resource settings that include individuals with speech impairments. Our method significantly improves ASR accuracy for impaired speech while maintaining data and annotation efficiency, offering a practical path toward inclusive ASR.


[170] 2509.21071

Super-resolution of 4D flow MRI through inverse problem explicit solving

Four-dimensional Flow MRI enables non-invasive, time-resolved imaging of blood flow in three spatial dimensions, offering valuable insights into complex hemodynamics. However, its clinical utility is limited by low spatial resolution and poor signal-to-noise ratio, imposed by acquisition time constraints. In this work, we propose a novel method for super-resolution and denoising of 4D Flow MRI based on the explicit solution of an inverse problem formulated in the complex domain. Using clinically available magnitude and phase images, we reconstruct synthetic complex-valued spatial signals. This enables us to model resolution degradation as a physically meaningful truncation of high-frequency components in k-space, and to recover high-resolution velocity fields through a fast, non-iterative 3D Fourier-based solver. The proposed approach enhances spatial resolution and reduces noise without the need for large training datasets or iterative optimization, and is validated on synthetic datasets generated from CFD simulations as well as on a 4D Flow MRI of a physical phantom.
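The core modeling idea — resolution degradation as truncation of high-frequency k-space components — can be sketched in one dimension: a band-limited signal whose truncated spectrum still covers its bandwidth is recovered exactly by zero-filled inverse FFT. This is a toy illustration of the k-space picture only, not the paper's complex-domain inverse solver.

```python
import numpy as np

def truncate_kspace(signal_hr, keep):
    """Model resolution loss as keeping only the `keep` lowest spatial
    frequencies of the signal (centered k-space truncation)."""
    k = np.fft.fftshift(np.fft.fft(signal_hr))
    n = len(k)
    lo = (n - keep) // 2
    return k[lo:lo + keep]

def zero_fill_recover(k_trunc, n_out):
    """Recover an n_out-sample signal by zero-filling the truncated
    k-space back to full size and inverting the FFT."""
    k_full = np.zeros(n_out, dtype=complex)
    lo = (n_out - len(k_trunc)) // 2
    k_full[lo:lo + len(k_trunc)] = k_trunc
    return np.fft.ifft(np.fft.ifftshift(k_full)).real

n = 128
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
velocity = np.sin(x) + 0.3 * np.sin(3 * x)   # smooth toy "velocity profile"
k_lr = truncate_kspace(velocity, keep=16)    # keep 16 of 128 coefficients
recovered = zero_fill_recover(k_lr, n)
err = np.max(np.abs(recovered - velocity))   # exact: bandwidth fits in 16 bins
```

Recovery is exact here only because the toy signal is band-limited within the retained window; real 4D flow data is not, which is why the paper poses a regularized inverse problem rather than plain zero-filling.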


[171] 2509.21425

Quaternionic Pole Placement via Companion Forms and the Ackermann Formula

We present an extension of state-feedback pole placement for quaternionic systems, based on companion forms and the Ackermann formula. For controllable single-input quaternionic LTI models, we define a companion polynomial that annihilates its companion matrix, characterize spectra via right-eigenvalue similarity classes, and prove coefficient-matching design in controllable coordinates. We then derive a coordinate-free Ackermann gain expression valid for real target polynomials, and state its scope and limitations. Short examples demonstrate correctness, practical use, and numerical simplicity.
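For orientation, the classical real-valued Ackermann formula that the paper generalizes to quaternionic systems can be sketched as follows; the double-integrator plant and pole choices are illustrative assumptions, not an example from the paper.

```python
import numpy as np

def ackermann_gain(A, b, desired_poles):
    """Classical real-valued Ackermann formula:
        K = [0 ... 0 1] C^{-1} p_d(A),
    where C = [b, Ab, ..., A^{n-1} b] is the controllability matrix
    and p_d is the desired monic characteristic polynomial."""
    n = A.shape[0]
    C = np.hstack([np.linalg.matrix_power(A, i) @ b for i in range(n)])
    coeffs = np.poly(desired_poles)       # [1, a_{n-1}, ..., a_0]
    pA = np.zeros_like(A)
    for c in coeffs:                      # Horner evaluation of p_d(A)
        pA = pA @ A + c * np.eye(n)
    e_last = np.zeros((1, n))
    e_last[0, -1] = 1.0
    return e_last @ np.linalg.solve(C, pA)

# Double integrator, poles placed at -2 and -3 (gives K = [6, 5]).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
b = np.array([[0.0], [1.0]])
K = ackermann_gain(A, b, [-2.0, -3.0])
closed_loop_poles = np.linalg.eigvals(A - b @ K)
```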


[172] 2510.01541

A Scalable Design Approach to Resilient Architectures for Interconnected Cyber-Physical Systems: Safety Guarantees under Multiple Attacks

Complex, interconnected cyber-physical systems (CPS) are increasingly prevalent in domains such as power systems. Cyber-resilient architectures have been proposed to recover compromised cyber components of CPS. Recent works have studied tuning the recovery times of such architectures to guarantee safety in single-system settings. Extending these designs to interconnected CPS is more challenging, since solutions must account for attacks on multiple subsystems that can occur in any order and potentially infinite possible temporal overlap. This paper aims to address the aforementioned challenge by developing a scalable framework to assign resilient architectures and to inform the tuning of their recovery times. Our approach introduces a scalar index that quantifies the impact of each subsystem on safety under compromised input. These indices aggregate linearly across subsystems, enabling scalable analysis under arbitrary attack orderings and temporal overlaps. We establish a linear inequality relating each subsystem's index and recovery time that guarantees safety and guides resilient architecture assignment. We also propose a segmentation-based approach to strengthen the previously derived conditions. We then present algorithms to compute the proposed indices and to find a cost-optimal architecture assignment with a safety guarantee. We validate the framework through a case study on temperature regulation in interconnected rooms under different attack scenarios.


[173] 2510.06846

Decentralized CBF-based Safety Filters for Collision Avoidance of Cooperative Missile Systems with Input Constraints

This paper presents a decentralized safety filter for collision avoidance in multi-agent aerospace interception scenarios. The approach leverages robust control barrier functions (RCBFs) to guarantee forward invariance of safety sets under bounded inputs and high-relative-degree dynamics. Each effector executes its nominal cooperative guidance command, while a local quadratic program (QP) modifies the input only when necessary. Event-triggered activation based on range and zero-effort miss (ZEM) criteria ensures scalability by restricting active constraints to relevant neighbors. To resolve feasibility issues from simultaneous constraints, a slack-variable relaxation scheme is introduced that prioritizes critical agents in a Pareto-optimal manner. Simulation results in many-on-many interception scenarios demonstrate that the proposed framework maintains collision-free operation with minimal deviation from nominal guidance, providing a computationally efficient and scalable solution for safety-critical multi-agent aerospace systems.
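The QP-based safety-filter idea can be illustrated with a minimal single-integrator example, which is much simpler than the paper's RCBF formulation with bounded inputs and high relative degree: with one affine CBF constraint, the filtering QP admits a closed-form projection, so no solver is needed.

```python
import numpy as np

def cbf_safety_filter(x, u_nom, obstacle, radius, alpha=1.0):
    """Minimal CBF safety filter for single-integrator dynamics x' = u.
    Barrier h(x) = ||x - obstacle||^2 - radius^2; the QP
        min ||u - u_nom||^2  s.t.  grad_h(x) @ u >= -alpha * h(x)
    with a single affine constraint has the closed form below."""
    h = np.sum((x - obstacle) ** 2) - radius ** 2
    grad_h = 2.0 * (x - obstacle)
    slack = grad_h @ u_nom + alpha * h     # constraint residual at u_nom
    if slack >= 0.0:
        return u_nom                       # nominal input already safe
    # Active-constraint KKT solution: project onto the constraint boundary.
    return u_nom - slack * grad_h / (grad_h @ grad_h)

x = np.array([2.0, 0.0])                   # agent position (illustrative)
u_nom = np.array([-1.0, 0.0])              # nominal command heads at the obstacle
u_safe = cbf_safety_filter(x, u_nom, obstacle=np.zeros(2), radius=1.0)
```

The filter is minimally invasive: it returns the nominal command unchanged whenever the barrier condition already holds, mirroring the event-triggered philosophy of the abstract.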


[174] 2510.07900

Topology optimization of nonlinear forced response curves via reduction on spectral submanifolds

Forced response curves (FRCs) of nonlinear systems can exhibit complex behaviors, including hardening/softening behavior and bifurcations. Although topology optimization holds great potential for tuning these nonlinear dynamic responses, its use in high-dimensional systems is limited by the high cost of repeated response and sensitivity analyses. To address this challenge, we employ the spectral submanifolds (SSMs) reduction theory, which reformulates the periodic response as the equilibria of an associated reduced-order model (ROM). This enables efficient and analytic evaluation of both response amplitudes and their sensitivities. Based on the SSM-based ROM, we formulate optimization problems that optimize the peak amplitude, the hardening/softening behavior, and the distance between two saddle-node bifurcations for an FRC. The proposed method is applied to the design of nonlinear MEMS devices, achieving targeted performance optimization. This framework provides a practical and efficient strategy for incorporating nonlinear dynamic effects into the topology optimization of structures.


[175] 2510.08586

Dynamic Stress Detection: A Study of Temporal Progression Modelling of Stress in Speech

Detecting psychological stress from speech is critical in high-pressure settings. While prior work has leveraged acoustic features for stress detection, most treat stress as a static label. In this work, we model stress as a temporally evolving phenomenon influenced by historical emotional state. We propose a dynamic labelling strategy that derives fine-grained stress annotations from emotional labels and introduce cross-attention-based sequential models, a Unidirectional LSTM and a Transformer Encoder, to capture temporal stress progression. Our approach achieves notable accuracy gains on MuSE (+5%) and StressID (+18%) over existing baselines, and generalises well to a custom real-world dataset. These results highlight the value of modelling stress as a dynamic construct in speech.


[176] 2510.10442

Risk-Budgeted Control Framework for Balanced Performance and Safety in Autonomous Vehicles

This paper presents a hybrid control framework with a risk-budgeted monitor for safety-certified autonomous driving. A sliding-window monitor tracks insufficient barrier residuals and triggers switching from a relaxed control barrier function (R-CBF) to a more conservative conditional value-at-risk CBF (CVaR-CBF) when the safety margin deteriorates. Two real-time triggers are considered: feasibility-triggered (FT), which activates CVaR-CBF when the R-CBF problem is reported infeasible, and quality-triggered (QT), which switches when the residual falls below a prescribed safety margin. The framework is evaluated with model predictive control (MPC) under vehicle localization noise and obstacle position uncertainty across multiple AV-pedestrian interaction scenarios with 1,500 Monte Carlo runs. In the most challenging case with 5 m pedestrian detection uncertainty, the proposed method achieves a 94--96% collision-free success rate over 300 trials while maintaining the lowest mean cross-track error (CTE = 3.2--3.6 m), indicating faster trajectory recovery after obstacle avoidance and a favorable balance between safety and performance.
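For readers unfamiliar with the risk measure behind the CVaR-CBF, a minimal empirical CVaR computation looks like this; the loss samples and confidence levels are illustrative, not the paper's formulation.

```python
import numpy as np

def empirical_cvar(losses, beta=0.9):
    """CVaR_beta: the expected loss in the worst (1 - beta) tail.
    VaR is the beta-quantile; CVaR averages losses at or above it."""
    losses = np.sort(np.asarray(losses, dtype=float))
    var = np.quantile(losses, beta)
    return losses[losses >= var].mean()

# Ten equally likely loss samples: the mean is 5.5, but the worst-10%
# tail averages 10, so CVaR is far more conservative than the mean.
losses = np.arange(1.0, 11.0)
cvar_90 = empirical_cvar(losses, beta=0.9)
```

Constraining CVaR rather than the expected value is what makes the switched CBF mode conservative: it budgets against tail events instead of average behavior.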


[177] 2510.12366

Beyond-Diagonal RIS Architecture Design and Optimization under Physics-Consistent Models

Reconfigurable intelligent surface (RIS) is a promising technology for future wireless communication systems. Conventional RIS is constrained to a diagonal scattering matrix, which limits its flexibility. Recently, beyond-diagonal RIS (BD-RIS) has been proposed as a more general RIS architecture class that allows inter-element connections and shows great potential for performance improvement. Despite extensive progress on BD-RIS, most existing studies rely on simplified channel models that ignore practical electromagnetic (EM) effects such as mutual coupling and impedance mismatching. To address this gap, this paper investigates the architecture design and optimization of BD-RIS under the general physics-consistent model derived with multiport network theory in recent literature. Building on a compact reformulation of this model, we show that band-connected RIS achieves the same channel-shaping capability as fully-connected RIS, which extends existing results obtained for conventional channel models. We then develop optimization methods under the general physics-consistent model; specifically, we derive closed-form solutions for single-input single-output (SISO) systems, propose a globally optimal semidefinite relaxation (SDR)-based algorithm for single-stream multi-input multi-output (MIMO) systems, and design an efficient alternating direction method of multipliers (ADMM)-based algorithm for multiuser MIMO systems. Using the proposed algorithms, we conduct comprehensive simulations to evaluate the impact of various EM effects and approximations. The results indicate that the commonly adopted unilateral approximation provides sufficient accuracy in RIS-aided systems and can therefore be readily adopted to simplify the channel model, whereas mutual coupling among RIS elements should be properly taken into account in channel modeling.


[178] 2510.13000

Identifying Best Candidates for Busbar Splitting

Rising electricity demand and the growing integration of renewables are intensifying congestion in transmission grids. Grid topology optimization through busbar splitting (BuS) and optimal transmission switching can alleviate grid congestion and reduce the generation costs in a power system. However, BuS optimization requires a large number of binary variables, and analyzing all the substations for potential new topological actions is computationally intractable, particularly in large grids. To tackle this issue, we propose a set of metrics to identify and rank promising candidates for BuS, focusing on finding buses where topology optimization can reduce generation costs. To assess the effect of BuS on the identified buses, we use a combined mixed-integer convex-quadratic BuS model to compute the optimal topology and test it with the non-linear non-convex AC optimal power flow (OPF) simulation to show its AC feasibility. By testing and validating the proposed metrics on test cases of different sizes, we show that they are able to identify busbars that reduce the total generation costs when their topology is optimized. Thus, the metrics enable effective selection of busbars for BuS, with no need to test every busbar in the grid, one at a time.


[179] 2510.14075

DiffOPF: Diffusion Solver for Optimal Power Flow

The optimal power flow (OPF) is a multi-valued, non-convex mapping from loads to dispatch setpoints. The variability of system parameters (e.g., admittances, topology) further contributes to the multiplicity of dispatch setpoints for a given load. Existing deep learning OPF solvers are single-valued and thus fail to capture the variability of system parameters unless fully represented in the feature space, which is prohibitive. To solve this problem, we introduce a diffusion-based OPF solver, termed DiffOPF, that treats OPF as a conditional sampling problem. The solver learns the joint distribution of loads and dispatch setpoints from operational history, and returns the marginal dispatch distributions conditioned on loads. Unlike single-valued solvers, DiffOPF enables sampling statistically credible warm starts with favorable cost and constraint satisfaction trade-offs. We explore the sample complexity of DiffOPF to ensure the OPF solution within a prescribed distance from the optimization-based solution, and verify this experimentally on power system benchmarks.


[180] 2510.14806

Joint Channel and CFO Estimation From Beam-Swept Synchronization Signal Under Strong Inter-Cell Interference

Complete awareness of the wireless environment, crucial for future intelligent networks, requires sensing all transmitted signals, not just the strongest. A fundamental barrier is estimating the target signal when it is buried under strong co-channel interference from other transmitters, a failure of which renders the signal unusable. This work proposes a maximum likelihood (ML)-based cross-preamble estimation framework that exploits carrier frequency offset (CFO) constancy across beam-swept synchronization signals (SS), coherently aggregating information across multiple observations to reinforce the desired signal against overwhelming interference. Cramér-Rao lower bound (CRLB) analysis and simulation demonstrate reliable estimation even when the signal is over a thousand times weaker than the interference. A low-altitude radio-map case study further verifies the framework's practical effectiveness.


[181] 2510.26708

Pareto-Optimal Sampling and Resource Allocation for Timely Communication in Shared-Spectrum Low-Altitude Networks

Guaranteeing stringent data freshness for low-altitude unmanned aerial vehicles (UAVs) in shared spectrum forces a critical trade-off between two operational costs: the UAV's own energy consumption and the occupation of terrestrial channel resources. The core challenge is to satisfy the aerial data freshness while finding a Pareto-optimal balance between these costs. Leveraging predictive channel models and predictive UAV trajectories, we formulate a bi-objective Pareto optimization problem over a long-term planning horizon to jointly optimize the sampling timing for aerial traffic and the power and spectrum allocation for fair coexistence. However, the problem's non-convex, mixed-integer nature renders classical methods incapable of fully characterizing the complete Pareto frontier. Notably, we show monotonicity properties of the frontier, building on which we transform the bi-objective problem into several single-objective problems. We then propose a new graph-based algorithm and prove that it can find the complete set of Pareto optima with low complexity, linear in the horizon and near-quadratic in the resource block (RB) budget. Numerical comparisons show that our approach meets the stringent timeliness requirement and achieves a six-fold reduction in RB utilization or a 6 dB energy saving compared to benchmarks.


[182] 2511.01403

Risk Aware Safe Control with Multi-Modal Sensing for Dynamic Obstacle Avoidance

Safe control in dynamic traffic environments remains a major challenge for autonomous vehicles (AVs), as ego vehicle and obstacle states are inherently affected by sensing noise and estimation uncertainty. However, existing studies have not sufficiently addressed how uncertain multi-modal sensing information can be systematically incorporated into tail-risk-aware safety-critical control. To address this gap, this paper proposes a risk-aware safe control framework that integrates probabilistic state estimation with a conditional value-at-risk (CVaR) control barrier function (CBF) safety filter. Obstacle detections from cameras, LiDAR, and vehicle-to-everything (V2X) communication are combined using a Wasserstein barycenter (WB) to obtain a probabilistic state estimate. A model predictive controller generates the nominal control, which is then filtered through a CVaR-CBF quadratic program to enforce risk-aware safety constraints. The approach is evaluated through numerical studies and further validated on a full-scale AV. Results demonstrate improved safety and robustness over a baseline MPC-CBF design, with an average improvement of 12.7% in success rate across the evaluated scenarios.
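The Wasserstein-barycenter fusion step has a particularly simple closed form in the one-dimensional Gaussian case, which the sketch below uses; the sensor means, spreads, and weights are made-up numbers, and the paper's multi-modal formulation is more general than this scalar toy.

```python
import numpy as np

def gaussian_w2_barycenter(means, stds, weights):
    """Wasserstein-2 barycenter of one-dimensional Gaussians: it is
    again Gaussian, with the weighted average of the means and the
    weighted average of the standard deviations."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return weights @ np.asarray(means), weights @ np.asarray(stds)

# Fuse obstacle-position estimates from camera, LiDAR, and V2X
# (positions in meters along the lane; illustrative numbers only).
means = [12.0, 12.4, 11.8]
stds = [1.0, 0.3, 0.8]     # LiDAR is trusted most
w = [0.2, 0.6, 0.2]
mu, sigma = gaussian_w2_barycenter(means, stds, w)
```

Unlike a precision-weighted product of densities, the W2 barycenter averages the spreads rather than shrinking them, so the fused estimate stays honest about residual uncertainty — the quantity the CVaR-CBF constraint then budgets against.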


[183] 2511.07553

Frequency-Aware Sparse Optimization for Diagnosing Grid Instabilities and Collapses

This paper aims to proactively diagnose and manage frequency instability risks from a steady-state perspective, without the need for derivative-dependent transient modeling. Specifically, we jointly address two questions. (Q1) Survivability: following a disturbance and the subsequent primary frequency response, can the system settle into a healthy steady state (feasible with an acceptable frequency deviation $\Delta f$)? (Q2) Dominant Vulnerability: if found unstable, what critical vulnerabilities create instability and/or full collapse? To address these questions, we first augment steady-state power flow states to include frequency-dependent governor relationships (i.e., governor power flow). Afterwards, we propose a frequency-aware sparse optimization that finds the minimal set of bus locations with measurable compensations (corrective actions) to enforce power balance and maintain frequency within predefined/acceptable bounds. We evaluate our method on standard transmission systems to empirically validate its ability to localize dominant sources of vulnerabilities. For a large 1354-bus system, our method detects compensations to only four buses under N-1 generation outage (3424.8 MW) while enforcing a maximum allowable steady-state frequency drop of 0.06 Hz (otherwise, frequency drops by nearly 0.08 Hz). We further validate the scalability of our method, requiring less than four minutes to obtain sparse solutions for the 1354-bus system.


[184] 2511.08852

DRL-Based Beam Positioning for LEO Satellite Constellations with Weighted Least Squares

This paper investigates a lightweight deep reinforcement learning (DRL)-assisted weighting framework for CSI-free multi-satellite positioning in LEO constellations, where each visible satellite provides one serving beam (one pilot response) per epoch. A discrete-action Deep Q-Network (DQN) learns satellite weights directly from received pilot measurements and geometric features, while an augmented weighted least squares (WLS) estimator provides physics-consistent localization and jointly estimates the receiver clock bias. The proposed hybrid design targets an accuracy-runtime trade-off rather than absolute supervised optimality. In a representative 2-D setting with 10 visible satellites, the proposed approach achieves sub-meter accuracy (0.395m RMSE) with low computational overhead, supporting practical deployment for resource-constrained LEO payloads.
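The WLS stage can be sketched as a standard Gauss-Newton range-based position fix. In the sketch the per-satellite weights are fixed to ones, whereas in the paper a DQN would supply them; the 2-D anchor geometry, noise-free ranges, and omitted clock-bias state are simplifying assumptions.

```python
import numpy as np

def wls_position_fix(anchors, ranges, weights, x0, n_iter=20):
    """Gauss-Newton weighted least squares for 2-D positioning from
    ranges: minimize sum_i w_i * (||x - a_i|| - r_i)^2."""
    x = np.asarray(x0, dtype=float)
    W = np.diag(weights)
    for _ in range(n_iter):
        d = np.linalg.norm(anchors - x, axis=1)   # predicted ranges
        J = (x - anchors) / d[:, None]            # Jacobian of ranges w.r.t. x
        r = np.asarray(ranges) - d                # residuals
        x = x + np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
    return x

# Four "satellite" anchors projected to 2-D, true receiver at (3, 4).
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
truth = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - truth, axis=1)  # noise-free for the sketch
weights = np.ones(4)                              # a learned policy would set these
est = wls_position_fix(anchors, ranges, weights, x0=[5.0, 5.0])
```

With noisy ranges, down-weighting unreliable satellites is exactly where a learned weighting policy can pay off relative to uniform weights.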


[185] 2511.09995

Time-Layer Adaptive Alignment for Speaker Similarity in Flow-Matching Based Zero-Shot TTS

Flow-Matching (FM)-based zero-shot text-to-speech (TTS) systems exhibit high-quality speech synthesis and robust generalization capabilities. However, the speaker representation ability of such systems remains underexplored, primarily due to the lack of explicit speaker-specific supervision in the FM framework. To this end, we conduct an empirical analysis of speaker information distribution and reveal its non-uniform allocation across time steps and network layers, underscoring the need for adaptive speaker alignment. Accordingly, we propose Time-Layer Adaptive Speaker Alignment (TLA-SA), a strategy that enhances speaker consistency by jointly leveraging temporal and hierarchical variations. Experimental results show that TLA-SA substantially improves speaker similarity over baseline systems on both research- and industrial-scale datasets and generalizes well across diverse model architectures, including decoder-only language model (LM)-based and LM-free TTS systems. A demo is provided.


[186] 2512.02138

Scalable Distributed Nonlinear Control Under Flatness-Preserving Coupling

We study distributed control for a network of nonlinear, differentially flat subsystems subject to dynamic coupling. Although differential flatness simplifies planning and control for isolated subsystems, the presence of coupling can destroy this property for the overall joint system. Focusing on subsystems in pure-feedback form, we identify a class of compatible lower-triangular dynamic couplings that preserve flatness and guarantee that the flat outputs of the subsystems remain the flat outputs of the coupled system. Further, we show that the joint flatness diffeomorphism can be constructed from those of the individual subsystems and, crucially, its sparsity structure reflects that of the coupling. Exploiting this structure, we synthesize a distributed tracking controller that computes control actions from local information only, thereby ensuring scalability. We validate our proposed framework on a simulated example of planar quadrotors dynamically coupled via aerodynamic downwash, and show that the distributed controller achieves accurate trajectory tracking.


[187] 2512.15441

Semi-Blind Joint Channel and Symbol Estimation for Beyond Diagonal Reconfigurable Surfaces

The beyond-diagonal reconfigurable intelligent surface (BD-RIS) is a recent architecture in which scattering elements are interconnected to enhance the degrees of freedom for wave control, yielding performance gains over traditional single-connected RISs. For BD-RIS, channel estimation, which is well studied for conventional RIS, becomes more challenging due to complex connections and a larger number of coefficients. Previous works relied on pilot-assisted estimation followed by data decoding. This paper introduces a semi-blind tensor-based approach to joint channel and symbol estimation that eliminates the need for training sequences by directly leveraging data symbols. A practical scenario with time-varying user terminal-RIS channels under mobility is considered. By reformulating the received signal from a tensor-decomposition perspective, we develop two semi-blind receivers: a two-stage method that transforms the fourth-order PARATUCK model into a third-order PARAFAC model, and a single-stage iterative process based on the fourth-order TUCKER decomposition. Identifiability conditions for reliable joint recovery are derived, and numerical results demonstrate the performance advantages and trade-offs of the proposed schemes over existing solutions.


[188] 2512.20970

Universal Transient Stability Analysis: A Large Language Model-Enabled Dynamics Prediction Framework

Existing dynamics prediction frameworks for transient stability analysis (TSA) fail to achieve multi-scenario "universality"--the inherent ability of a single, pre-trained architecture to generalize across diverse operating conditions, unseen faults, and heterogeneous systems. To address this gap, this paper proposes TSA-LLM, a large language model (LLM)-based universal framework that models multivariate transient dynamics prediction as a univariate generative task with three key innovations: First, a novel data processing pipeline featuring channel independence decomposition to resolve dimensional heterogeneity, sample-wise normalization to eliminate separate stable or unstable pipelines, and temporal patching for efficient long-sequence modeling; Second, a parameter-efficient freeze-and-finetune strategy that augments the LLM's architecture with dedicated input embedding and output projection layers while freezing core transformer blocks to preserve generic feature extraction capabilities; Third, a two-stage fine-tuning scheme that combines teacher forcing, which feeds the model ground-truth data during initial training, with scheduled sampling, which gradually shifts to leveraging model-generated predictions, to mitigate cumulative errors in long-horizon iterative prediction. Comprehensive testing demonstrates the framework's universality, as TSA-LLM trained solely on the New England 39-bus system achieves zero-shot generalization to mixed stability conditions and unseen faults, and matches expert performance on the larger Iceland 189-bus system with only 5% fine-tuning data. This multi-scenario versatility validates a universal framework that eliminates scenario-specific retraining and achieves scalability via large-scale parameters and cross-scenario training data.


[189] 2512.22479

FARIS: Fluid-Active-RIS

In this paper, we introduce a new wireless paradigm termed fluid-active reconfigurable intelligent surface (FARIS) that combines fluid-based port repositioning with per-element active amplification to enhance the performance of 6G networks. To realistically characterize the hardware operation, we first develop a circuit-level abstraction of the FARIS architecture and establish a practical power consumption model that captures both the logical control/switching power of candidate ports and the direct current (DC) bias power required for active reflection. Based on this model, we establish the FARIS signal model and formulate a corresponding ergodic-rate maximization problem that jointly optimizes the active amplification-reflection vector and the discrete selection of fluid-active elements under practical hardware constraints. The problem is addressed via an alternating optimization (AO) framework, which progressively improves the rate. Complexity and convergence analyses that follow furnish deeper insight into the algorithmic operation and performance enhancement. Numerical results confirm that the proposed FARIS with the AO framework consistently outperforms conventional baselines, delivering higher rates across diverse environments, often even when using fewer active elements or a smaller physical aperture.


[190] 2512.24815

Efficient Joint Resource Allocation for Wireless Powered ISAC with Target Localization

Wireless powered integrated sensing and communication (ISAC) faces a fundamental tradeoff between energy supply, communication throughput, and sensing accuracy. This paper investigates a wireless powered ISAC system with target localization requirements, where users harvest energy from wireless power transfer (WPT) and then conduct ISAC transmissions in a time-division manner. In addition to energy supply, the WPT signal also contributes to target sensing, and the localization accuracy is characterized by Cramér-Rao bound (CRB) constraints. Under this setting, we formulate a max-min throughput maximization problem by jointly allocating the WPT duration, ISAC transmission time allocation, and transmit power. Due to the nonconvexity of the resulting problem, a suitable reformulation is developed by exploiting variable substitutions and the monotonicity of logarithmic functions, based on which an efficient successive convex approximation (SCA)-based iterative algorithm is proposed. Simulation results demonstrate convergence and significant performance gains over benchmark schemes, highlighting the importance of coordinated time-power optimization in balancing sensing accuracy and communication performance in wireless powered ISAC systems.


[191] 2601.00538

Parametrized Sharing for Multi-Agent Hybrid DRL for Multiple Multi-Functional RISs-Aided Downlink NOMA Networks

The multi-functional reconfigurable intelligent surface (MF-RIS) is conceived to improve communication efficiency, thanks to the extended signal coverage of its active-RIS capability and the self-sustainability afforded by energy harvesting (EH). We investigate the architecture of multi-MF-RISs to assist non-orthogonal multiple access (NOMA) downlink networks. We formulate an energy efficiency (EE) maximization problem by optimizing power allocation, transmit beamforming and MF-RIS configurations of amplitudes, phase-shifts and EH ratios, as well as the position of MF-RISs, while satisfying constraints of available power, user rate requirements, and self-sustainability. We design a parametrized sharing scheme for multi-agent hybrid deep reinforcement learning (PMHRL), where multi-agent proximal policy optimization (PPO) and a deep Q-network (DQN) handle the continuous and discrete variables, respectively. Simulation results demonstrate that the proposed PMHRL scheme achieves the highest EE compared to other benchmarks, including cases without parametrized sharing and pure-PPO and pure-DQN baselines. Moreover, the proposed multi-MF-RIS-aided downlink NOMA achieves the highest EE compared to scenarios with no EH/amplification, traditional RISs, and deployment without RISs/MF-RISs under different multiple access schemes.


[192] 2601.05276

Channel-Selected Stratified Nested Cross-Validation for Clinically Relevant EEG-Based Parkinson's Disease Detection

The early detection of Parkinson's disease remains a critical challenge in clinical neuroscience, with electroencephalography offering a noninvasive and scalable pathway toward population-level screening. While machine learning has shown promise in this domain, many reported results suffer from methodological flaws, most notably patient-level data leakage, inflating performance estimates and limiting clinical translation. To address these modeling pitfalls, we propose a unified evaluation framework grounded in nested cross-validation and incorporating three complementary safeguards: (i) patient-level stratification to eliminate subject overlap and ensure unbiased generalization, (ii) multi-layered windowing to harmonize heterogeneous EEG recordings while preserving temporal dynamics, and (iii) inner-loop channel selection to enable principled feature reduction without information leakage. Applied across three independent datasets with a heterogeneous number of channels, a convolutional neural network trained under this framework achieved 80.6% accuracy and demonstrated state-of-the-art performance under held-out population-block testing, comparable to other methods in the literature. This performance underscores the necessity of nested cross-validation as a safeguard against bias and as a principled means of selecting the most relevant information for patient-level decisions, providing a reproducible foundation that can extend to other biomedical signal analysis domains.


[193] 2601.10980

Uni-Fi: Integrated Multi-Task Wi-Fi Sensing

Wi-Fi sensing technology enables non-intrusive, continuous monitoring of user locations and activities, which supports diverse smart home applications. Since different sensing tasks exhibit contextual relationships, their integration can enhance individual module performance. However, integrating sensing tasks across different studies faces challenges due to the absence of: 1) a unified architecture that captures the fundamental nature shared across diverse sensing tasks, and 2) an extensible pipeline that accommodates future sensing methodologies. This paper presents UNI-FI, an extensible framework for multi-task Wi-Fi sensing integration. This paper makes the following contributions: 1) we propose a unified theoretical framework that reveals fundamental differences between single-task and multi-task sensing; 2) we develop a scalable sensing pipeline that automatically generates a multi-task sensing solver, enabling seamless integration of multiple sensing models. Experimental results show that UNI-FI achieves robust performance across tasks, with a median localization error of approximately 0.54 m, 98.34% accuracy for activity classification, and 98.57% accuracy for presence detection.


[194] 2602.03987

On transferring safety certificates across dynamical systems

Control barrier functions (CBFs) provide a powerful tool for enforcing safety constraints in control systems, but their direct application to complex, high-dimensional dynamics is often challenging. In many settings, safety certificates are more naturally designed for simplified or alternative system models that do not exactly match the dynamics of interest. This paper addresses the problem of transferring safety guarantees between dynamical systems with mismatched dynamics. We propose a transferred control barrier function (tCBF) framework that enables safety constraints defined on one system to be systematically enforced on another system using a simulation function and an explicit margin term. The resulting transferred barrier accounts for model mismatch and induces a safety condition that can be enforced on the target system via a quadratic-program-based safety filter. The proposed approach is general and does not require the two systems to share the same state dimension or dynamics. We demonstrate the effectiveness of the framework on a quadrotor navigation task with the transferred barrier ensuring collision avoidance for the target system, while remaining minimally invasive to a nominal controller. These results highlight the potential of transferred control barrier functions as a general mechanism for enforcing safety across heterogeneous dynamical systems.
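The quadratic-program safety filter mentioned above has a closed form when a single (transferred) barrier constraint is active. A sketch under that assumption (the function names and the half-space form `a . u >= b` are illustrative, not the paper's notation; `a` plays the role of Lg h and `b` collects the drift, class-K, and margin terms):

```python
def cbf_safety_filter(u_nom, a, b):
    """Minimally invasive safety filter: solve
        min ||u - u_nom||^2   s.t.   a . u >= b,
    the single-constraint QP arising from a barrier condition
    Lf h + Lg h u + alpha(h) - margin >= 0. The closed form is the
    Euclidean projection of u_nom onto the half-space {u : a . u >= b}."""
    dot = sum(ai * ui for ai, ui in zip(a, u_nom))
    if dot >= b:                       # nominal control already safe
        return list(u_nom)
    scale = (b - dot) / sum(ai * ai for ai in a)
    return [ui + scale * ai for ai, ui in zip(a, u_nom)]
```

For a single integrator with h(x) = x and a nominal controller pushing toward the boundary, the filter passes the nominal input through when it already satisfies the barrier condition and otherwise returns the nearest input that does, which is the "minimally invasive" behavior the abstract describes.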


[195] 2602.07803

SoulX-Singer: Towards High-Quality Zero-Shot Singing Voice Synthesis

While recent years have witnessed rapid progress in speech synthesis, open-source singing voice synthesis (SVS) systems still face significant barriers to industrial deployment, particularly in terms of robustness and zero-shot generalization. In this report, we introduce SoulX-Singer, a high-quality open-source SVS system designed with practical deployment considerations in mind. SoulX-Singer supports controllable singing generation conditioned on either symbolic musical scores (MIDI) or melodic representations, enabling flexible and expressive control in real-world production workflows. Trained on more than 42,000 hours of vocal data, the system supports Mandarin Chinese, English, and Cantonese and consistently achieves state-of-the-art synthesis quality across languages under diverse musical conditions. Furthermore, to enable reliable evaluation of zero-shot SVS performance in practical scenarios, we construct SoulX-Singer-Eval, a dedicated benchmark with strict training-test disentanglement, facilitating systematic assessment in zero-shot settings.


[196] 2602.09050

SAS-Net: Cross-Domain Image Registration as Inverse Rendering via Structure-Appearance Factorization

Cross-domain image registration requires aligning images acquired under heterogeneous imaging physics, where the classical brightness constancy assumption is fundamentally violated. We formulate this problem through an image formation model I = R(s, a) + epsilon, where each observation is generated by a rendering function R acting on domain-invariant scene structure s and domain-specific appearance statistics a. Registration then reduces to an inverse rendering problem: given observations from two domains, recover the shared structure and re-render it under the target appearance to obtain the registered output. We instantiate this framework as SAS-Net (Scene-Appearance Separation Network), where instance normalization implements the structure-appearance decomposition and Adaptive Instance Normalization (AdaIN) realizes the differentiable forward renderer. A scene consistency loss enforces geometric correspondence in the factorized latent space. Experiments on EuroSAT-Reg-256 (satellite remote sensing) and FIRE-Reg-256 (retinal fundus) demonstrate state-of-the-art performance across heterogeneous imaging domains. SAS-Net (3.35M parameters) achieves 89 FPS on an RTX 5090 GPU. Code: this https URL.
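The structure-appearance split via instance normalization followed by AdaIN re-rendering can be illustrated on a single feature channel. A minimal sketch (plain Python on 1-D lists; SAS-Net itself applies this to learned multi-channel feature maps):

```python
from statistics import fmean, pstdev

def adain(content, style, eps=1e-8):
    """Adaptive Instance Normalization on one feature channel:
    strip the content channel's own statistics (instance normalization,
    i.e. the structure/appearance factorization), then re-apply the
    style channel's mean and standard deviation (the differentiable
    'forward renderer' that transfers target appearance)."""
    c_mu, c_sigma = fmean(content), pstdev(content)
    s_mu, s_sigma = fmean(style), pstdev(style)
    return [s_sigma * (x - c_mu) / (c_sigma + eps) + s_mu for x in content]
```

The output preserves the content channel's relative structure while matching the style channel's first- and second-order statistics, which is exactly the "recover shared structure, re-render under target appearance" step.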


[197] 2602.11547

H.265/HEVC Video Steganalysis Based on CU Block Structure Gradients and IPM Mapping

Existing H.265/HEVC video steganalysis research mainly focuses on detecting steganography based on motion vectors, intra prediction modes, and transform coefficients. However, there is currently no effective steganalysis method capable of detecting steganography based on Coding Unit (CU) block structure. To address this issue, we propose, for the first time, an H.265/HEVC video steganalysis algorithm based on CU block structure gradients and intra prediction mode mapping. The proposed method first constructs a new gradient map to explicitly describe changes in CU block structure, and combines it with a block-level mapping representation of IPM. It can jointly model the structural perturbations introduced by steganography based on CU block structure. Then, we design a novel steganalysis network called GradIPMFormer, whose core innovation is an integrated architecture that combines convolutional local embedding with Transformer-based token modeling to jointly capture local CU boundary perturbations and long-range cross-CU structural dependencies, thereby effectively enhancing the capability to perceive CU block structure embedding. Experimental results show that under different quantization parameters and resolution settings, the proposed method consistently achieves superior detection performance across multiple steganography methods based on CU block structure. This study provides a new CU block structure steganalysis paradigm for H.265/HEVC and has significant research value for covert communication security detection.


[198] 2603.01071

AI-enhanced Direct SLAM: A Principled Approach to Unsupervised Learning in Bayesian Inference

In this paper, we propose an artificial intelligence (AI)-enhanced hybrid simultaneous localization and mapping (SLAM) method that performs Bayesian inference directly on raw radio-frequency (RF) signals while learning an environment model in an unsupervised manner. The approach combines a physically interpretable signal model for line-of-sight (LOS) components with an AI model that captures multipath component statistics. Building on this formulation, we develop a particle-based sum-product algorithm (SPA) on a factor graph that jointly estimates the mobile terminal (MT) state, visibility, multipath parameters, and noise variances, and integrate it into a variational framework that maximizes the evidence lower bound (ELBO) to learn the neural network (NN) parametrization directly from measurements. We further present a highly efficient GPU-based implementation that enables parallel likelihood evaluation across particles and base stations (BSs). Simulation results in multipath environments demonstrate that the proposed method learns the generative, environment-dependent signal model in an unsupervised manner while accurately localizing the MT and effectively exploiting the learned map in obstructed-line-of-sight (OLOS) scenarios.


[199] 2603.02252

Whisper-RIR-Mega: A Paired Clean-Reverberant Speech Benchmark for ASR Robustness to Room Acoustics

We introduce Whisper-RIR-Mega, a benchmark dataset of paired clean and reverberant speech for evaluating automatic speech recognition (ASR) robustness to room acoustics. Each sample pairs a clean LibriSpeech utterance with the same utterance convolved with a real room impulse response from the RIR-Mega corpus, with stratified splits by reverberation time (RT60) and direct-to-reverberant ratio (DRR). We evaluate five Whisper models (tiny through large-v3) on 1600 test samples and report word error rate (WER) and character error rate (CER) under clean and reverberant conditions. Reverberation consistently degrades performance across all model sizes; the reverb penalty in WER ranges from 2.31 to 15.50 percentage points depending on the model. Whisper-large-v3 shows the smallest penalty; Whisper-tiny shows the largest. We release the dataset, evaluation code, and baseline results to support reproducible research on robust ASR.
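The dataset construction and metric above are both simple to state: pair each clean utterance with its convolution against a measured RIR, then score transcripts by word error rate. A self-contained sketch of both operations (illustrative only; the benchmark itself uses real audio and Whisper transcripts):

```python
def convolve(x, h):
    """Discrete convolution: a reverberant utterance is the clean
    signal x filtered by a room impulse response h."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def wer(ref, hyp):
    """Word error rate: word-level Levenshtein distance divided by the
    number of reference words (single-row dynamic program)."""
    r, h = ref.split(), hyp.split()
    d = list(range(len(h) + 1))
    for i, rw in enumerate(r, 1):
        prev, d[0] = d[0], i
        for j, hw in enumerate(h, 1):
            prev, d[j] = d[j], min(d[j] + 1,           # deletion
                                   d[j - 1] + 1,        # insertion
                                   prev + (rw != hw))   # substitution / match
    return d[-1] / len(r)
```

The "reverb penalty" reported in the abstract is then simply wer(ref, hyp_reverberant) minus wer(ref, hyp_clean), averaged over the test split.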


[200] 2603.09472

Vector-field guided constraint-following control for path following of uncertain mechanical systems

This note proposes a general control approach, called vector-field guided constraint-following control, to solve the dynamics control problem of geometric path-following for a class of uncertain mechanical systems. More specifically, it operates at the dynamics level and can handle both fully-actuated and underactuated mechanical systems, heterogeneous (possibly fast) time-varying uncertainties with unknown bounds, and geometric desired paths that may be self-intersecting. Simulations are conducted to demonstrate the effectiveness of the approach.


[201] 2603.10723

MOS-Bias: From Hidden Gender Bias to Gender-Aware Speech Quality Assessment

The Mean Opinion Score (MOS) serves as the standard metric for speech quality assessment, yet biases in human annotations remain underexplored. We conduct the first systematic analysis of gender bias in MOS, revealing that male listeners consistently assign higher scores than female listeners--a gap that is most pronounced in low-quality speech and gradually diminishes as quality improves. This quality-dependent structure proves difficult to eliminate through simple calibration. We further demonstrate that automated MOS models trained on aggregated labels exhibit predictions skewed toward male standards of perception. To address this, we propose a gender-aware model that learns gender-specific scoring patterns via binary group embeddings, thereby improving overall and gender-specific prediction accuracy. This study establishes that gender bias in MOS constitutes a systematic, learnable pattern demanding attention in equitable speech evaluation.


[202] 2603.12220

Conformalized Data-Driven Reachability Analysis with PAC Guarantees

Data-driven reachability analysis computes over-approximations of reachable sets directly from noisy data. Existing deterministic methods require either known noise bounds or system-specific structural parameters such as Lipschitz constants. We propose Conformalized Data-Driven Reachability (CDDR), a framework that provides Probably Approximately Correct (PAC) coverage guarantees through the Learn Then Test (LTT) calibration procedure, requiring only that calibration and test trajectories be independently and identically distributed. CDDR is developed for three settings: linear time-invariant (LTI) systems with unknown process noise distributions, LTI systems with bounded measurement noise, and general nonlinear systems including non-Lipschitz dynamics. Experiments on a 5-dimensional LTI system under Gaussian and heavy-tailed Student-t noise and on a 2-dimensional non-Lipschitz system with fractional damping demonstrate that CDDR achieves valid coverage where deterministic methods do not provide formal guarantees. Under anisotropic noise, a normalized score function reduces the reachable set volume while preserving the PAC guarantee.
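At the core of such calibration sits the conformal quantile with its finite-sample correction. A sketch of that step (illustrative; the paper's LTT procedure is a more general multiple-testing calibration than this plain split-conformal rule):

```python
import math

def conformal_threshold(cal_scores, alpha):
    """Split-conformal calibration: given nonconformity scores from an
    i.i.d. calibration set, return the threshold t such that a fresh
    exchangeable test score satisfies P(score <= t) >= 1 - alpha,
    distribution-free. Inflating a data-driven reachable set until its
    nonconformity score stays below t yields PAC-style coverage."""
    n = len(cal_scores)
    k = math.ceil((n + 1) * (1 - alpha))   # finite-sample corrected rank
    if k > n:
        return float("inf")                # alpha too small for this n
    return sorted(cal_scores)[k - 1]
```

The (n+1) factor is the finite-sample correction: without it, empirical coverage on small calibration sets falls below the nominal 1 - alpha level.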


[203] 2603.12340

Optimizing Task Completion Time Updates Using POMDPs

Managing announced task completion times is a fundamental control problem in project management. While extensive research exists on estimating task durations and task scheduling, the problem of when and how to update completion times communicated to stakeholders remains understudied. Organizations must balance announcement accuracy against the costs of frequent timeline updates, which can erode stakeholder trust and trigger costly replanning. Despite the prevalence of this problem, current approaches rely on static predictions or ad-hoc policies that fail to account for the sequential nature of announcement management. In this paper, we formulate the task announcement problem as a Partially Observable Markov Decision Process (POMDP) where the control policy must decide when to update announced completion times based on noisy observations of true task completion. Since most state variables (current time and previous announcements) are fully observable, we leverage the Mixed Observability MDP (MOMDP) framework to enable more efficient policy optimization. Our reward structure captures the dual costs of announcement errors and update frequency, enabling synthesis of optimal announcement control policies. Using off-the-shelf solvers, we generate policies that act as feedback controllers, adaptively managing announcements based on belief state evolution. Simulation results demonstrate significant improvements in both accuracy and announcement stability compared to baseline strategies, achieving up to 75% reduction in unnecessary updates while maintaining or improving prediction accuracy.
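The belief-state evolution that the announcement policy conditions on is a discrete Bayes filter over the partially observed progress state. A toy sketch (the two-state model and the observation probabilities in the test are hypothetical, not from the paper):

```python
def belief_update(belief, transition, likelihood, obs):
    """One step of the discrete Bayes filter behind a POMDP policy:
    predict the hidden state forward through the transition model,
    then weight by the observation likelihood and renormalize."""
    states = list(belief)
    predicted = {s2: sum(belief[s1] * transition[s1][s2] for s1 in states)
                 for s2 in states}
    posterior = {s: likelihood[s][obs] * predicted[s] for s in states}
    z = sum(posterior.values())
    return {s: p / z for s, p in posterior.items()}
```

A policy computed by an off-the-shelf POMDP/MOMDP solver then maps the updated belief (together with the fully observed time and previous announcement) to "keep" or "update the announced completion time".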


[204] 2603.12442

Room Impulse Response Completion Using Signal-Prediction Diffusion Models Conditioned on Simulated Early Reflections

Room impulse responses (RIRs) are fundamental to audio data augmentation, acoustic signal processing, and immersive audio rendering. While geometric simulators such as the image source method (ISM) can efficiently generate early reflections, they lack the realism of measured RIRs due to missing acoustic wave effects. We propose a diffusion-based RIR completion method using signal-prediction conditioned on ISM-simulated direct-path and early reflections. Unlike state-of-the-art methods, our approach imposes no fixed duration constraint on the input early reflections. We further incorporate classifier-free guidance to steer generation toward a target distribution learned from physically realistic RIRs simulated with the Treble SDK. Objective evaluation demonstrates that the proposed method outperforms a state-of-the-art baseline in early RIR completion and energy decay curve reconstruction.


[205] 2603.12728

Dual-Chirp AFDM for Joint Delay-Doppler Estimation with Rydberg Atomic Quantum Receivers

In this paper, we propose a joint delay-Doppler estimation framework for Rydberg atomic quantum receivers (RAQRs) leveraging affine frequency division multiplexing (AFDM), as a future enabler of hyper-integrated sensing and communication (ISAC) in 6G and beyond. The proposed approach preserves the extreme sensitivity of RAQRs while offering a pioneering solution to the joint estimation of delay-Doppler parameters of mobile targets, which, to the best of our knowledge, has yet to be addressed in the literature due to the inherent coupling of time-frequency parameters in the optical readout of RAQRs. To overcome this ambiguity, we propose a dual-chirp AFDM framework in which the use of distinct chirp parameters converts the otherwise ambiguous estimation problem into a full-rank system, enabling unique delay-Doppler parameter extraction from RAQRs. Numerical simulations verify that the proposed dual-chirp AFDM achieves superior delay-Doppler estimation performance compared to the classical single-chirp AFDM over RAQRs.


[206] 2603.12891

Exploiting Near-Field Dynamics with Movable Antennas to Enhance Discrete Transmissive RIS

The design of low-complexity transceivers is crucial for the deployment of next-generation wireless systems. In this work, we combine two emerging concepts, movable antennas (MA) and transmissive reconfigurable intelligent surfaces (TRIS), which have recently attracted significant attention for enhancing wireless communication performance. In particular, we propose a compact base station (BS) architecture that integrates a single MA with a TRIS operating in their near-field region. We address the joint optimization of the MA location and the quantized TRIS phase configuration. Due to the non-convex coupling between spatial positioning and discrete phase constraints, an alternating optimization (AO) framework is developed, where the MA position is updated via gradient ascent (GA) and the TRIS phases are optimized through quantized phase alignment. Simulation results demonstrate that the proposed architecture significantly outperforms conventional BS designs equipped with fixed fully-active antenna arrays under the same channel model and transmit power constraint. Moreover, MA repositioning effectively mitigates the performance degradation caused by discrete TRIS phase quantization in near-field propagation environments. This reveals a favorable trade-off between hardware complexity and spatial signal processing, where the spatial adaptability of the MA can compensate for low-resolution TRIS phase control.


[207] 2303.06324

Comprehensive Deadlock Prevention for GPU Collective Communication

Distributed deep neural network training necessitates efficient GPU collective communications, which are inherently susceptible to deadlocks: deadlocks arise easily in distributed deep learning applications when multiple collectives circularly wait for each other. Such deadlocks pose a significant challenge to the correctness and efficiency of distributed deep learning, and no general, effective solutions are currently available. Only in specific scenarios can ad-hoc methods, which make an application invoke collectives in a consistent order across GPUs, prevent circular collective dependencies and deadlocks. This paper presents DFCCL, a novel GPU collective communication library that provides a comprehensive approach to GPU collective deadlock prevention while maintaining high performance. DFCCL achieves preemption for GPU collectives at the bottom library level, effectively preventing deadlocks even if applications introduce circular collective dependencies. DFCCL ensures high performance through its execution and scheduling methods for collectives. Experiments show that DFCCL effectively prevents GPU collective deadlocks in various situations. Moreover, extensive evaluations demonstrate that DFCCL delivers performance comparable to or superior to NCCL, the state-of-the-art collective communication library highly optimized for NVIDIA GPUs.
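The ad-hoc fix the abstract mentions, invoking collectives in a consistent order across GPUs, is an instance of the classic total-order rule for deadlock avoidance. A toy sketch of that rule (illustrative only; DFCCL's contribution is precisely to remove this burden from applications by preempting collectives inside the library):

```python
def issue_order(pending, key=lambda c: c):
    """Deadlock avoidance by total ordering: every rank sorts its
    pending collectives by a globally agreed key before issuing them.
    If all ranks issue in the same total order, no circular wait
    between collectives can form."""
    return sorted(pending, key=key)
```

Two ranks that enqueue the same collectives in different local orders still issue them identically, which is what breaks the circular-wait condition.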


[208] 2310.02641

Deformation-Invariant Neural Network and Its Applications in Distorted Image Restoration and Analysis

Images degraded by geometric distortions pose a significant challenge to imaging and computer vision tasks such as object recognition. Deep learning-based imaging models usually fail to give accurate performance for geometrically distorted images. In this paper, we propose the deformation-invariant neural network (DINN), a framework to address the problem of imaging tasks for geometrically distorted images. The DINN outputs consistent latent features for images that are geometrically distorted but represent the same underlying object or scene. The idea of DINN is to incorporate a simple component, called the quasiconformal transformer network (QCTN), into other existing deep networks for imaging tasks. The QCTN is a deep neural network that outputs a quasiconformal map, which can be used to transform a geometrically distorted image into an improved version that is closer to the distribution of natural or good images. It first outputs a Beltrami coefficient, which measures the quasiconformality of the output deformation map. By controlling the Beltrami coefficient, the local geometric distortion under the quasiconformal mapping can be controlled. The QCTN is lightweight and simple, which can be readily integrated into other existing deep neural networks to enhance their performance. Leveraging our framework, we have developed an image classification network that achieves accurate classification of distorted images. Our proposed framework has been applied to restore geometrically distorted images by atmospheric turbulence and water turbulence. DINN outperforms existing GAN-based restoration methods under these scenarios, demonstrating the effectiveness of the proposed framework. Additionally, we apply our proposed framework to the 1-1 verification of human face images under atmospheric turbulence and achieve satisfactory performance, further demonstrating the efficacy of our approach.


[209] 2401.12783

A Review of Deep Learning Methods for Photoplethysmography Data

Background: Photoplethysmography (PPG) is a non-invasive optical sensing technique widely used to capture hemodynamic information and is extensively deployed in both clinical monitoring systems and wearable devices. In recent years, the integration of deep learning has substantially advanced PPG signal analysis and broadened its applications across both healthcare and non-healthcare domains. Methods: We conducted a comprehensive review of studies applying deep learning to PPG data published between January 1, 2017 and December 31, 2025, retrieved from Google Scholar, PubMed, and Dimensions. The included studies were analyzed from three key perspectives: tasks, models, and data. Results: A total of 460 papers were included that applied deep learning techniques to PPG signal analysis. These studies span a wide range of application domains, including traditional physiological monitoring tasks such as cardiovascular assessment, as well as emerging applications such as sleep analysis, cross-modality signal reconstruction, and biometric identification. Conclusions: Deep learning has significantly advanced PPG signal analysis by enabling more effective extraction of physiological information. Compared with traditional machine learning approaches based on handcrafted features, deep learning methods generally achieve improved performance and provide greater flexibility in model development. Nevertheless, several challenges remain, including the limited availability of large-scale high-quality datasets, insufficient validation in real-world environments, and concerns regarding model interpretability, scalability, and computational efficiency. Addressing these challenges and exploring emerging research directions will be essential for further advancing deep learning-based PPG analysis.


[210] 2404.12598

Continuous-time Risk-sensitive Reinforcement Learning via Quadratic Variation Penalty

This paper studies continuous-time risk-sensitive reinforcement learning (RL) under the entropy-regularized, exploratory diffusion process formulation with the exponential-form objective. The risk-sensitive objective arises either as the agent's risk attitude or as a distributionally robust approach against model uncertainty. Owing to the martingale perspective in Jia and Zhou (J Mach Learn Res 24(161): 1--61, 2023), the risk-sensitive RL problem is shown to be equivalent to ensuring the martingale property of a process involving both the value function and the q-function, augmented by an additional penalty term: the quadratic variation of the value process, capturing the variability of the value-to-go along the trajectory. This characterization allows for the straightforward adaptation of existing RL algorithms developed for non-risk-sensitive scenarios to incorporate risk sensitivity by adding the realized variance of the value process. Additionally, I highlight that the conventional policy gradient representation is inadequate for risk-sensitive problems due to the nonlinear nature of quadratic variation; however, q-learning offers a solution and extends to infinite-horizon settings. Finally, I prove the convergence of the proposed algorithm for Merton's investment problem and quantify the impact of the temperature parameter on the behavior of the learning procedure. I also conduct simulation experiments to demonstrate how risk-sensitive RL improves finite-sample performance in the linear-quadratic control problem.


[211] 2407.00104

MultiTask Learning AI system to assist BCC diagnosis with dual explanation

Basal cell carcinoma (BCC) accounts for about 75% of skin cancers. The adoption of teledermatology protocols in Spanish public hospitals has increased dermatologists' workload, motivating the development of AI tools for lesion prioritization. However, limited transparency in current systems hinders clinical acceptance. This study proposes an AI system for BCC detection from dermoscopic images that integrates dermatologist diagnostic criteria based on specific dermoscopic patterns. We analyzed 1559 dermoscopic images from 60 primary care centers annotated by four dermatologists for seven BCC patterns. An Expectation-Maximization consensus algorithm was used to build a unified standard reference. A multitask learning model based on MobileNet-V2 was developed to classify lesions and identify clinically relevant patterns, supported by Grad-CAM visual explanations. The system achieved 90% accuracy in BCC classification (precision 0.90, recall 0.89). Clinically relevant BCC patterns were correctly detected in 99% of positive cases, and the pigment network exclusion criterion was satisfied in 95% of non-BCC cases. Grad-CAM maps showed strong spatial agreement with dermatologist-defined regions. The proposed system combines accurate BCC detection with transparent pattern-based explanations, helping bridge the gap between AI performance and clinical trust in teledermatology.


[212] 2408.01180

Nested Music Transformer: Sequentially Decoding Compound Tokens in Symbolic Music and Audio Generation

Representing symbolic music with compound tokens, where each token consists of several sub-tokens, each representing a distinct musical feature or attribute, offers the advantage of reducing sequence length. While previous research has validated the efficacy of compound tokens in music sequence modeling, predicting all sub-tokens simultaneously can lead to suboptimal results as it may not fully capture the interdependencies between them. We introduce the Nested Music Transformer (NMT), an architecture tailored for decoding compound tokens autoregressively, similar to processing flattened tokens, but with low memory usage. The NMT consists of two transformers: the main decoder that models a sequence of compound tokens and the sub-decoder for modeling the sub-tokens of each compound token. The experimental results showed that applying the NMT to compound tokens improves perplexity across various symbolic music datasets as well as on discrete audio tokens from the MAESTRO dataset.


[213] 2411.07463

MSEG-VCUQ: Multimodal SEGmentation with Enhanced Vision Foundation Models, Convolutional Neural Networks, and Uncertainty Quantification for High-Speed Video Phase Detection Data

High-speed video (HSV) phase detection (PD) segmentation is crucial for monitoring vapor, liquid, and microlayer phases in industrial processes. While CNN-based models like U-Net have shown success in simplified shadowgraphy-based two-phase flow (TPF) analysis, their application to complex HSV PD tasks remains unexplored, and vision foundation models (VFMs) have yet to address the complexities of either shadowgraphy-based or PD TPF video segmentation. Existing uncertainty quantification (UQ) methods lack pixel-level reliability for critical metrics like contact line density and dry area fraction, and the absence of large-scale, multimodal experimental datasets tailored to PD segmentation further impedes progress. To address these gaps, we propose MSEG-VCUQ. This hybrid framework integrates U-Net CNNs with the transformer-based Segment Anything Model (SAM) to achieve enhanced segmentation accuracy and cross-modality generalization. Our approach incorporates systematic UQ for robust error assessment and introduces the first open-source multimodal HSV PD datasets. Empirical results demonstrate that MSEG-VCUQ outperforms baseline CNNs and VFMs, enabling scalable and reliable PD segmentation for real-world boiling dynamics.


[214] 2502.03285

Deep Learning-based Event Data Coding: A Joint Spatiotemporal and Polarity Solution

Neuromorphic vision sensors, commonly referred to as event cameras, generate a massive number of pixel-level events, composed of spatiotemporal and polarity information, thus demanding highly efficient coding solutions. Existing solutions focus on lossless coding of event data, assuming that no distortion is acceptable for the target use cases, mostly computer vision tasks such as classification and recognition. One promising coding approach exploits the similarity between event data and point clouds, both being sets of 3D points, making it possible to use current point cloud coding solutions to code event data, typically with a two-point-cloud representation, one for each event polarity. This paper proposes a novel lossy Deep Learning-based Joint Event data Coding (DL-JEC) solution, which adopts for the first time a single-point-cloud representation, where the event polarity plays the role of a point cloud attribute, enabling the correlation between the geometry/spatiotemporal and polarity event information to be exploited. Moreover, this paper also proposes novel adaptive voxel binarization strategies for DL-JEC, optimized for either quality-oriented or computer vision task-oriented purposes, which maximize the performance for the task at hand. DL-JEC achieves significant compression performance gains when compared with relevant conventional and DL-based state-of-the-art event data coding solutions, notably the MPEG G-PCC and JPEG Pleno PCC standards. Furthermore, it is shown that it is possible to use lossy event data coding, at a significantly reduced rate relative to lossless coding, without compromising the target computer vision task performance, notably event classification, thus changing the current event data coding paradigm.


[215] 2502.12984

On Erlang mixture approximations for differential equations with distributed time delays

In this paper, we propose a general approach for approximate simulation and analysis of delay differential equations (DDEs) with distributed time delays based on methods for ordinary differential equations (ODEs). The key innovation is that we 1) propose an Erlang mixture approximation of the kernel in the DDEs and 2) use the linear chain trick to transform the resulting approximate DDEs to ODEs. Furthermore, we prove that the approximation converges for continuous and bounded kernels and for specific choices of the coefficients if the number of terms increases sufficiently fast. We show that the approximate ODEs can be used to assess the stability of the steady states of the original DDEs and that the solution to the ODEs converges if the kernel is also exponentially bounded. Additionally, we propose an approach based on bisection and least-squares estimation for determining optimal parameter values in the approximation. Finally, we present numerical examples that demonstrate the accuracy and convergence rate obtained with the optimal parameters and the efficacy of the proposed approach for bifurcation analysis and Monte Carlo simulation. The numerical examples involve a modified logistic equation, chemotherapy-induced myelosuppression, and a point reactor kinetics model of a molten salt nuclear fission reactor.
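
The linear chain trick at the heart of the approach can be sketched for a single Erlang kernel (an illustrative example only; the paper's Erlang mixtures and optimized parameters are not reproduced, and the logistic setup and all constants below are assumed):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative linear chain trick: a logistic equation with a distributed
# delay whose kernel is a single Erlang(m, b) density (mean delay m/b).
# All constants are assumed for illustration; the paper uses Erlang
# *mixtures* with optimized coefficients.
r, K = 1.5, 1.0          # growth rate and carrying capacity (assumed)
m, b = 8, 4.0            # Erlang shape and rate -> mean delay m/b = 2.0

def rhs(t, y):
    x, z = y[0], y[1:]
    # chain variables: z[0]' = b*(x - z[0]),  z[k]' = b*(z[k-1] - z[k])
    dz = b * (np.concatenate(([x], z[:-1])) - z)
    # z[-1] approximates the distributed-delay term  integral g(tau) x(t - tau) dtau
    dx = r * x * (1.0 - z[-1] / K)
    return np.concatenate(([dx], dz))

y0 = np.concatenate(([0.1], np.full(m, 0.1)))  # constant history x = 0.1
sol = solve_ivp(rhs, (0.0, 40.0), y0, rtol=1e-8, atol=1e-10)
print(sol.y[0, -1])      # population at t = 40
```

The final chain variable stands in for the distributed-delay integral, so the DDE is simulated with a standard ODE solver.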


[216] 2503.16777

Physics-Informed Deep B-Spline Networks

Physics-informed machine learning offers a promising framework for solving complex partial differential equations (PDEs) by integrating observational data with governing physical laws. However, learning PDEs with varying parameters and changing initial conditions and boundary conditions (ICBCs) with theoretical guarantees remains an open challenge. In this paper, we propose physics-informed deep B-spline networks, a novel technique that approximates a family of PDEs with different parameters and ICBCs by learning B-spline control points through neural networks. The proposed B-spline representation reduces the learning task from predicting solution values over the entire domain to learning a compact set of control points, enforces strict compliance with initial and Dirichlet boundary conditions by construction, and enables analytical computation of derivatives for incorporating PDE residual losses. While existing approximation and generalization theories are not applicable in this setting - where solutions of parametrized PDE families are represented via B-spline bases - we fill this gap by showing that B-spline networks are universal approximators for such families under mild conditions. We also derive generalization error bounds for physics-informed learning in both elliptic and parabolic PDE settings, establishing new theoretical guarantees. Finally, we demonstrate in experiments that the proposed technique has improved efficiency-accuracy tradeoffs compared to existing techniques in a dynamical system problem with discontinuous ICBCs and can handle nonhomogeneous ICBCs and non-rectangular domains.
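
As a minimal sketch of the representation idea (1D only; the neural network that maps PDE parameters and ICBCs to control points is omitted, and all sizes below are assumed), a clamped B-spline with pinned boundary control points satisfies homogeneous Dirichlet conditions by construction and yields analytic derivatives:

```python
import numpy as np
from scipy.interpolate import BSpline

# Assumed toy setup: a 1D "solution" on [0, 1] parameterized by B-spline
# control points c; in the paper these come from a neural network.
k = 3                                   # cubic B-splines
n = 10                                  # number of control points
# clamped (open-uniform) knot vector: endpoints interpolate c[0] and c[-1]
t = np.concatenate([np.zeros(k), np.linspace(0.0, 1.0, n - k + 1), np.ones(k)])
c = np.random.default_rng(0).standard_normal(n)
c[0], c[-1] = 0.0, 0.0                  # Dirichlet BCs enforced by construction

u = BSpline(t, c, k)
du = u.derivative()                     # analytic derivative for residual losses
x = np.linspace(0.0, 1.0, 5)
print(u(0.0), u(1.0))                   # both ~0: boundary conditions hold exactly
```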


[217] 2503.19200

Optimal Modified Feedback Strategies in LQ Games under Control Imperfections

Game-theoretic approaches and Nash equilibrium have been widely applied across various engineering domains. However, practical challenges such as disturbances, delays, and actuator limitations can hinder the precise execution of Nash equilibrium strategies. This work investigates the impact of such implementation imperfections on game trajectories and players' costs in the context of a two-player finite-horizon linear quadratic (LQ) nonzero-sum game. Specifically, we analyze how small deviations by one player, measured or estimated at each stage, affect the state trajectory and the other player's cost. To mitigate these effects, we construct a compensation law for the influenced player by augmenting the nominal game with the measurable deviation dynamics. The resulting policy is shown to be optimal within a causal affine policy class, and, for sufficiently small deviations, it locally outperforms the uncompensated equilibrium-derived feedback. Rigorous analysis and proofs are provided, and the effectiveness of the proposed approach is demonstrated through a representative numerical example.


[218] 2503.22218

ABC-GS: Alignment-Based Controllable Style Transfer for 3D Gaussian Splatting

3D scene stylization approaches based on Neural Radiance Fields (NeRF) achieve promising results by optimizing with Nearest Neighbor Feature Matching (NNFM) loss. However, NNFM loss does not consider global style information. In addition, the implicit representation of NeRF limits their fine-grained control over the resulting scenes. In this paper, we introduce ABC-GS, a novel framework based on 3D Gaussian Splatting to achieve high-quality 3D style transfer. To this end, a controllable matching stage is designed to achieve precise alignment between scene content and style features through segmentation masks. Moreover, a style transfer loss function based on feature alignment is proposed to ensure that the outcomes of style transfer accurately reflect the global style of the reference image. Furthermore, the original geometric information of the scene is preserved with the depth loss and Gaussian regularization terms. Extensive experiments show that our ABC-GS provides controllability of style transfer and achieves stylization results that are more faithfully aligned with the global style of the chosen artistic reference. Our homepage is available at this https URL.


[219] 2505.09986

High Quality Underwater Image Compression with Adaptive Color Correction

With the increasing exploration and exploitation of the underwater world, underwater images have become a critical medium for human interaction with marine environments, driving extensive research into their efficient transmission and storage. However, contemporary underwater image compression algorithms fail to adequately address the impact of water refraction and scattering on light waves, which not only elevates training complexity but also results in suboptimal compression performance. To tackle this limitation, we propose High Quality Underwater Image Compression (HQUIC), a novel framework designed to handle the unique illumination conditions and color shifts inherent in underwater images, thereby achieving superior compression performance. HQUIC first incorporates an Adaptive Lighting and Tone Correction (ALTC) module to adaptively predict the attenuation coefficients and global light information of images, effectively alleviating issues stemming from variations in illumination and tone across underwater images. Secondly, it dynamically weights multi-scale frequency components, prioritizing information critical to distortion quality while discarding redundant details. Furthermore, we introduce a tone adjustment loss to enable the model to better balance discrepancies among different color channels. Comprehensive evaluations on diverse underwater datasets validate that HQUIC outperforms state-of-the-art compression methods, demonstrating its effectiveness.


[220] 2506.04779

MMSU: A Massive Multi-task Spoken Language Understanding and Reasoning Benchmark

Speech inherently contains rich acoustic information that extends far beyond the textual language. In real-world spoken language understanding, effective interpretation often requires integrating semantic meaning (e.g., content), paralinguistic features (e.g., emotions, speed, pitch) and phonological characteristics (e.g., prosody, intonation, rhythm), which are embedded in speech. While recent multimodal Speech Large Language Models (SpeechLLMs) have demonstrated remarkable capabilities in processing audio information, their ability to perform fine-grained perception and complex reasoning in natural speech remains largely unexplored. To address this gap, we introduce MMSU, a comprehensive benchmark designed specifically for understanding and reasoning in spoken language. MMSU comprises 5,000 meticulously curated audio-question-answer triplets across 47 distinct tasks. To ground our benchmark in linguistic theory, we systematically incorporate a wide range of linguistic phenomena, including phonetics, prosody, rhetoric, syntactics, semantics, and paralinguistics. Through a rigorous evaluation of 14 advanced SpeechLLMs, we identify substantial room for improvement in existing models, highlighting meaningful directions for future optimization. MMSU establishes a new standard for comprehensive assessment of spoken language understanding, providing valuable insights for developing more sophisticated human-AI speech interaction systems. MMSU benchmark is available at this https URL. Evaluation Code is available at this https URL.


[221] 2506.07323

Speech Recognition on TV Series with Video-guided Post-ASR Correction

Automatic Speech Recognition (ASR) has achieved remarkable success with deep learning, driving advancements in conversational artificial intelligence, media transcription, and assistive technologies. However, ASR systems still struggle in complex environments such as TV series, where multiple speakers, overlapping speech, domain-specific terminology, and long-range contextual dependencies pose significant challenges to transcription accuracy. Existing approaches fail to explicitly leverage the rich temporal and contextual information available in the video. To address this limitation, we propose a Video-Guided Post-ASR Correction (VPC) framework that uses a Video-Large Multimodal Model (VLMM) to capture video context and refine ASR outputs. Evaluations on a TV-series benchmark show that our method consistently improves transcription accuracy in complex multimedia environments.


[222] 2506.24092

WaRA: Wavelet Low Rank Adaptation

Adapting large pretrained vision models to medical image classification is often limited by memory, computation, and task-specific specializations. Parameter-efficient fine-tuning (PEFT) methods like LoRA reduce this cost by learning low-rank updates, but operating directly in feature space can struggle to capture the localized, multi-scale features common in medical imaging. We propose WaRA, a wavelet-structured adaptation module that performs low-rank adaptation in a wavelet domain. WaRA reshapes patch tokens into a spatial grid, applies a fixed discrete wavelet transform, updates subband coefficients using a shared low-rank adapter, and reconstructs the additive update through an inverse wavelet transform. This design provides a compact trainable interface while biasing the update toward both coarse structure and fine detail. For extremely low-resource settings, we introduce Tiny-WaRA, which further reduces trainable parameters by learning only a small set of coefficients in a fixed basis derived from the pretrained weights through a truncated SVD. Experiments on medical image classification across four modalities and datasets demonstrate that WaRA consistently improves performance over strong PEFT baselines, while retaining a favorable efficiency profile. Our code is publicly available at~\href{this https URL}{\textcolor{magenta}{GitHub}}.


[223] 2507.16495

Spiking neurons as predictive controllers of linear systems

Neurons communicate with downstream systems via sparse and incredibly brief electrical pulses, or spikes. Using these events, they control various targets such as neuromuscular units, neurosecretory systems, and other neurons in connected circuits. This gave rise to the idea of spiking neurons as controllers, in which spikes are the control signal. Using instantaneous events directly as the control inputs, also called `impulse control', is challenging as it does not scale well to larger networks and has low analytical tractability. Therefore, current spiking control usually relies on filtering the spike signal to approximate analog control. This ultimately means spiking neural networks (SNNs) have to output a continuous control signal, necessitating continuous energy input into downstream systems. Here, we circumvent the need for rate-based representations, providing a scalable method for task-specific spiking control with sparse neural activity. In doing so, we take inspiration from both optimal control and neuroscience theory, and define a spiking rule where spikes are only emitted if they bring a dynamical system closer to a target. From this principle, we derive the required connectivity for an SNN, and show that it can successfully control linear systems. We show that for physically constrained systems, predictive control is required, and the control signal ends up exploiting the passive dynamics of the downstream system to reach a target. Finally, we show that the control method scales to both high-dimensional networks and systems. Importantly, in all cases, we maintain a closed-form mathematical derivation of the network connectivity, the network dynamics and the control objective. This work advances the understanding of SNNs as biologically-inspired controllers, providing insight into how real neurons could exert control, and enabling applications in neuromorphic hardware design.


[224] 2509.26234

Machine Learning Detection of Lithium Plating in Lithium-ion Cells: A Gaussian Process Approach

Lithium plating during fast charging is a critical degradation mechanism that accelerates capacity fade and can trigger catastrophic safety failures. Recent work has shown that plating onset can manifest in incremental-capacity analysis as an additional high-voltage feature above 4.0 V, often appearing as a secondary peak or shoulder distinct from the main intercalation peak complex; however, conventional methods for computing dQ/dV rely on finite differencing with filtering, which amplifies sensor noise and introduces bias in feature location. In this paper, we propose a Gaussian Process (GP) framework for lithium plating detection by directly modeling the charge-voltage relationship Q(V) as a stochastic process with calibrated uncertainty. Leveraging the property that derivatives of GPs remain GPs, we infer dQ/dV analytically and probabilistically from the posterior, enabling robust detection without ad hoc smoothing. The framework provides three key benefits: (i) noise-aware inference with hyperparameters learned from data, (ii) closed-form derivatives with credible intervals for uncertainty quantification, and (iii) scalability to online variants suitable for embedded BMS. Experimental validation on Li-ion coin cells across a range of C-rates (0.2C-1C) and temperatures (0-40$^\circ$C) demonstrates that the GP-based method reliably resolves distinct high-voltage secondary peak features under low-temperature, high-rate charging, while correctly reporting no features in non-plating cases. The concurrence of GP-identified differential features, reduced charge throughput, capacity fade measured via reference performance tests, and post-mortem microscopy confirmation supports the interpretation of these signatures as plating-related, establishing a practical pathway for real-time lithium plating detection.
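
The key property, that the derivative of a GP is again a GP, can be illustrated on synthetic data (a hedged sketch: the kernel hyperparameters are fixed by hand here rather than learned from data, and the Q(V) curve below is synthetic rather than a measured charge curve):

```python
import numpy as np

# Sketch of analytic dQ/dV from a GP fit to charge-voltage samples Q(V),
# using an RBF kernel. Hyperparameters (sig, ell, noise) are assumed fixed;
# the paper learns them by maximizing the marginal likelihood.
def rbf(a, b, sig=1.0, ell=0.05):
    d = a[:, None] - b[None, :]
    return sig**2 * np.exp(-0.5 * (d / ell) ** 2)

def drbf(a, b, sig=1.0, ell=0.05):
    # derivative of the RBF kernel with respect to its first argument
    d = a[:, None] - b[None, :]
    return -(d / ell**2) * rbf(a, b, sig, ell)

rng = np.random.default_rng(0)
V = np.linspace(3.0, 4.2, 120)                  # charge voltages (V)
Q = 2.0 * (V - 3.0) + 0.3 * np.sin(6.0 * V)     # synthetic Q(V) (Ah)
y = Q + 0.005 * rng.standard_normal(V.size)     # noisy measurements

K = rbf(V, V) + 0.005**2 * np.eye(V.size)       # kernel matrix + noise
alpha = np.linalg.solve(K, y)

Vs = np.linspace(3.05, 4.15, 200)
dQdV = drbf(Vs, V) @ alpha                      # posterior mean of dQ/dV
dq_true = 2.0 + 1.8 * np.cos(6.0 * Vs)          # exact derivative of synthetic Q
print(np.max(np.abs(dQdV - dq_true)))           # derivative recovery error
```

No finite differencing or ad hoc smoothing is involved: the derivative posterior follows in closed form from the fitted GP.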


[225] 2510.01144

Partial Resilient Leader-Follower Consensus in Time-Varying Graphs

This work studies resilient leader-follower consensus with a bounded number of adversaries. Existing approaches typically require robustness conditions of the entire network to guarantee resilient consensus. However, the behavior of such systems when these conditions are not fully met remains unexplored. To address this gap, we introduce the notion of partial leader-follower consensus, in which a subset of non-adversarial followers successfully tracks the leader's reference state despite insufficient robustness. We propose a novel distributed algorithm - the Bootstrap Percolation and Mean Subsequence Reduced (BP-MSR) algorithm - and establish sufficient conditions for individual followers to achieve consensus via the BP-MSR algorithm in arbitrary time-varying graphs. We validate our findings through simulations, demonstrating that our method guarantees partial leader-follower consensus, even when standard resilient consensus algorithms fail.


[226] 2510.01485

Pose Estimation of a Thruster-Driven Bioinspired Multi-Link Robot

This work demonstrates simultaneous pose (position and orientation) and shape estimation for a free-floating, bioinspired multi-link robot with unactuated joints, link-mounted thrusters for control, and a single gyroscope per link, resulting in an underactuated, minimally sensed platform. Because the inter-link joint angles are constrained, translation and rotation of the multi-link system requires cyclic, reciprocating actuation of the thrusters, referred to as a gait. Through a proof-of-concept hardware experiment and offline analysis, we show that the robot's shape can be reliably estimated using an Unscented Kalman Filter augmented with Gaussian process residual models to compensate for non-zero-mean, non-Gaussian noise, while the pose exhibits drift expected from gyroscope integration in the absence of absolute position measurements. Experimental results demonstrate that a Gaussian process model trained on a multi-gait dataset (forward, backward, left, right, and turning) performs comparably to one trained exclusively on forward-gait data, revealing an overlap in the gait input space, which can be exploited to reduce per-gait training data requirements while enhancing the filter's generalizability across multiple gaits. Lastly, we introduce a heuristic derived from the observability Gramian to correlate joint angle estimate quality with gait periodicity and thruster inputs, highlighting how control affects estimation quality.


[227] 2510.03423

Efficient Input-Constrained Impulsive Optimal Control of Linear Systems with Application to Spacecraft Relative Motion

This work presents a novel algorithm for impulsive optimal control of linear time-varying systems with the inclusion of input magnitude constraints. Impulsive optimal control problems, where the optimal input solution is a sum of delta functions, are typically formulated as an optimization over a normed function space subject to integral equality constraints and can be efficiently solved for linear time-varying systems in their dual formulation. In this dual setting, the problem takes the form of a semi-infinite program which is readily solvable in online scenarios for constructing maneuver plans. This work augments the approach with the inclusion of magnitude constraints on the input over time windows of interest, which is shown to preserve the impulsive nature of the optimal solution and enable efficient solution procedures via semi-infinite programming. The resulting algorithm is demonstrated on the highly relevant problem of relative motion control of spacecraft in Low Earth Orbit (LEO).


[228] 2510.03481

Optimization-Based Robust Permissive Synthesis for Interval MDPs

We present an optimization-based framework for robust permissive synthesis for Interval Markov Decision Processes (IMDPs), motivated by robotic decision-making under transition uncertainty. In many robotic systems, model inaccuracies and sensing noise lead to interval-valued transition probabilities. While robust IMDP synthesis typically yields a single policy and permissive synthesis assumes exact models, we show that robust permissive synthesis under interval uncertainty can be cast as a global mixed-integer linear program (MILP) that directly encodes robust Bellman constraints. The formulation maximizes a quantitative permissiveness metric (the number of enabled state-action pairs), while guaranteeing that every compliant strategy satisfies probabilistic reachability or expected reward specifications under all admissible transition realizations. To address the exponential complexity of vertex-based uncertainty representations, we derive a dualization-based encoding that eliminates explicit vertex enumeration and scales linearly with the number of successors. Experimental evaluation on four representative robotic benchmark domains demonstrates scalability to IMDPs with hundreds of thousands of states. The proposed framework provides a practical and general foundation for uncertainty-aware, flexibility-preserving controller synthesis in robotic systems.


[229] 2510.16689

Geometric Control Theory Over Networks: Minimal Node Cardinality Disturbance Decoupling Problems

In this paper we show how to formulate and solve disturbance decoupling problems over networks while choosing a minimal number of input and output nodes. Feedback laws that isolate and eliminate the impact of disturbance nodes on specific target nodes to be protected are provided using state, output, and dynamical feedback. For that, we leverage the fact that when reformulated in terms of sets of nodes rather than subspaces, the controlled and conditional invariance properties admit a simple graphical interpretation. For state and dynamical feedback, the minimal input and output cardinality solutions can be computed exactly in polynomial time, via min-cut/max-flow algorithms.


[230] 2510.16917

SAKE: Towards Editing Auditory Attribute Knowledge of Large Audio-Language Models

Knowledge editing enables targeted updates without retraining, but prior work focuses on textual or visual facts, leaving abstract auditory perceptual knowledge underexplored. We introduce SAKE, the first benchmark for editing perceptual auditory attribute knowledge in large audio-language models (LALMs), which requires modifying acoustic generalization rather than isolated facts. We evaluate eight diverse editing methods on three LALMs across reliability, generality, locality, and portability, under single and sequential edits. Results show that most methods enforce edits reliably but struggle with auditory generalization, intra-attribute locality, and multimodal knowledge propagation, and often exhibit forgetting or degeneration in sequential editing. Additionally, fine-tuning the modality connector emerges as a more robust and balanced baseline compared with directly editing the LLM backbones. SAKE reveals key limitations of current methods and provides a foundation for developing auditory-specific LALM editing techniques.


[231] 2510.17512

AWARE: Audio Watermarking with Adversarial Resistance to Edits

Prevailing practice in learning-based audio watermarking is to pursue robustness by expanding the set of simulated distortions during training. However, such surrogates are narrow and prone to overfitting. This paper presents AWARE (Audio Watermarking with Adversarial Resistance to Edits), an alternative approach that avoids reliance on attack-simulation stacks and handcrafted differentiable distortions. Embedding is obtained through adversarial optimization in the time-frequency domain under a level-proportional perceptual budget. Detection employs a time-order-agnostic detector with a Bitwise Readout Head (BRH) that aggregates temporal evidence into one score per watermark bit, enabling reliable watermark decoding even under desynchronization and temporal cuts. Empirically, AWARE attains high audio quality and speech intelligibility (PESQ/STOI) and consistently low BER across various audio edits, often surpassing representative state-of-the-art learning-based systems.


[232] 2510.24753

Artificial Transmission Line Synthesis Tailored for Traveling-Wave Parametric Processes

Artificial transmission lines (ATLs) built with lumped-element inductors and capacitors form the backbone of broadband, nearly quantum-limited traveling-wave parametric amplifiers (TWPAs). When tailoring these transmission lines for parametric processes, nonlinear elements are added, typically nonlinear inductances in superconducting circuits, and energy and momentum conservation between interacting tones must be enforced through careful design of the ATL dispersion relation. However, a unified theoretical framework describing achievable dispersion relations is lacking. Here, I develop such a framework, borrowing from periodic structure theory and passive network synthesis. These complementary approaches divide the design space: periodic loading synthesis employs spatial modulation of frequency-independent components, while filter synthesis employs frequency-dependent responses in spatially-uniform components. The framework reveals fundamental constraints and enables the discovery of novel TWPA architectures. In particular, I design a kinetic inductance TWPA with a novel phase-matching architecture, and a backward-pumped Josephson TWPA exploiting an ambidextrous, i.e., right- and left-handed, transmission line.


[233] 2510.26204

Sequential Change Detection Under Markov Setup With Unknown Prechange And Postchange Distributions

In this work, we extend a sequential change detection algorithm, developed in 2022 for the i.i.d. case, to the Markov setup. The algorithm uses Page's CUSUM statistic, the empirical distribution as an estimate of the pre-change distribution, and a universal code as a tool for estimating the post-change distribution.
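
For reference, the underlying Page CUSUM recursion in the known-distribution i.i.d. Gaussian case looks as follows (an illustrative sketch only; the paper's empirical-distribution and universal-code estimates, and the Markov extension, are not shown):

```python
import numpy as np

# Page's CUSUM for an i.i.d. Gaussian mean shift with *known* pre- and
# post-change distributions N(mu0, sigma^2) and N(mu1, sigma^2).
# Threshold h is an assumed illustrative value.
def cusum_alarm(x, mu0=0.0, mu1=1.0, sigma=1.0, h=8.0):
    w, alarm = 0.0, None
    for n, xn in enumerate(x):
        # log-likelihood ratio of the post- vs pre-change density at xn
        llr = (mu1 - mu0) * (xn - 0.5 * (mu0 + mu1)) / sigma**2
        w = max(0.0, w + llr)          # Page's recursion
        if w > h:
            alarm = n                  # first index where the statistic crosses h
            break
    return alarm

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 200),    # pre-change samples
                    rng.normal(1.0, 1.0, 200)])   # change occurs at n = 200
print(cusum_alarm(x))  # index of the first alarm
```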


[234] 2511.03571

OneOcc: Semantic Occupancy Prediction for Legged Robots with a Single Panoramic Camera

Robust 3D semantic occupancy is crucial for legged/humanoid robots, yet most semantic scene completion (SSC) systems target wheeled platforms with forward-facing sensors. We present OneOcc, a vision-only panoramic SSC framework designed for gait-induced body jitter and 360° continuity. OneOcc combines: (i) Dual-Projection fusion (DP-ER) to exploit the annular panorama and its equirectangular unfolding, preserving 360° continuity and grid alignment; (ii) Bi-Grid Voxelization (BGV) to reason in Cartesian and cylindrical-polar spaces, reducing discretization bias and sharpening free/occupied boundaries; (iii) a lightweight decoder with Hierarchical AMoE-3D for dynamic multi-scale fusion and better long-range/occlusion reasoning; and (iv) plug-and-play Gait Displacement Compensation (GDC) learning feature-level motion correction without extra sensors. We also release two panoramic occupancy benchmarks: QuadOcc (real quadruped, first-person 360°) and Human360Occ (H3O) (CARLA human-ego 360° with RGB, Depth, semantic occupancy; standardized within-/cross-city splits). OneOcc sets a new state of the art on QuadOcc, outperforming strong vision baselines and remaining competitive with classical LiDAR baselines; on H3O it gains +3.83 mIoU (within-city) and +8.08 (cross-city). Modules are lightweight, enabling deployable full-surround perception for legged/humanoid robots. Datasets and code will be publicly available at this https URL.


[235] 2512.03216

Kaleidoscopic Scintillation Event Imaging

Scintillators are transparent materials that interact with high-energy particles and emit visible light as a result. They are used in state-of-the-art methods of measuring high-energy particles and radiation sources. Most existing methods use fast single-pixel detectors to detect and time scintillation events. Cameras provide spatial resolution but can only capture an average over many events, making it difficult to image the events associated with an individual particle. Emerging single-photon avalanche diode cameras combine speed and spatial resolution to enable capturing images of individual events. This allows us to use machine vision techniques to analyze events, enabling new types of detectors. The main challenge is the very low brightness of the events. Techniques have to work with a very limited number of photons. We propose a kaleidoscopic scintillator to increase light collection in a single-photon camera while preserving the event's spatial information. The kaleidoscopic geometry creates mirror reflections of the event at locations that are known for a given event position and are captured by the camera. We introduce theory for imaging an event in a kaleidoscopic scintillator and an algorithm to estimate the event's 3D position. We find that the kaleidoscopic scintillator design provides sufficient light collection to perform high-resolution event measurements for advanced radiation imaging techniques using a commercial CMOS single-photon camera.


[236] 2512.03886

A Modular Architecture Design for Autonomous Driving Racing in Controlled Environments

This paper presents a modular autonomous driving architecture for Formula Student Driverless competition vehicles operating in closed-circuit environments. The perception module employs YOLOv11 for real-time traffic cone detection, achieving 0.93 mAP@0.5 on the FSOCO dataset, combined with neural stereo depth estimation from a ZED 2i camera for 3D cone localization with sub-0.5 m median error at distances up to 7 m. State estimation fuses RTK-GNSS positioning and IMU measurements through an Extended Kalman Filter (EKF) based on a kinematic bicycle model, achieving centimeter-level localization accuracy with a 12 cm improvement over raw GNSS. Path planning computes the racing line via cubic spline interpolation on ordered track boundaries and assigns speed profiles constrained by curvature and vehicle dynamics. A regulated pure pursuit controller tracks the planned trajectory with a dynamic lookahead parameterized by speed error. The complete pipeline is implemented as a modular ROS 2 architecture on an NVIDIA Jetson Orin NX platform, with each subsystem deployed as independent nodes communicating through a dual-computer configuration. Experimental validation combines real-world sensor evaluation with simulation-based end-to-end testing, where realistic sensor error distributions are injected to assess system-level performance under representative conditions.
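The pure pursuit law with a speed-dependent lookahead can be sketched in a few lines. Gains and clip limits here are illustrative, and for simplicity we parameterize the lookahead by speed rather than speed error as in the paper:

```python
import math

def pure_pursuit_steer(target_xy, wheelbase, lookahead):
    """Classical pure pursuit steering angle for a target point given in
    the vehicle frame (x forward, y left):
        delta = atan(2 * L * sin(alpha) / l_d)
    where alpha is the heading error to the lookahead point."""
    tx, ty = target_xy
    alpha = math.atan2(ty, tx)
    return math.atan2(2.0 * wheelbase * math.sin(alpha), lookahead)

def dynamic_lookahead(speed, k_v=0.5, l_min=2.0, l_max=8.0):
    """Lookahead that grows with speed, clipped to a safe range
    (illustrative gains, not the paper's tuning)."""
    return min(l_max, max(l_min, k_v * speed))

# A target straight ahead yields zero steering; a target to the left
# yields a positive (left) steering angle.
print(pure_pursuit_steer((5.0, 0.0), 1.5, 5.0),
      pure_pursuit_steer((5.0, 5.0), 1.5, 5.0))
```

Increasing the lookahead at speed trades tracking tightness for stability, which is why regulated variants modulate it online rather than fixing it.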


[237] 2512.09944

Echo-CoPilot: A Multiple-Perspective Agentic Framework for Reliable Echocardiography Interpretation

Echocardiography interpretation requires integrating multi-view temporal evidence with quantitative measurements and guideline-grounded reasoning, yet existing foundation-model pipelines largely solve isolated subtasks and fail when tool outputs are noisy or values fall near clinical cutoffs. We propose Echo-CoPilot, an end-to-end agentic framework that combines a multi-perspective workflow with knowledge-graph guided measurement selection. Echo-CoPilot runs three independent ReAct-style agents, structural, pathological, and quantitative, that invoke specialized echocardiography tools to extract parameters while querying EchoKG to determine which measurements are required for the clinical question and which should be avoided. A self-contrast language model then compares the evidence-grounded perspectives, generates a discrepancy checklist, and re-queries EchoKG to apply the appropriate guideline thresholds and resolve conflicts, reducing hallucinated measurement selection and borderline flip-flops. On MIMICEchoQA, Echo-CoPilot provides higher accuracy compared to SOTA baselines and, under a stochasticity stress test, achieves higher reliability through more consistent conclusions and fewer answer changes across repeated runs. Our code is publicly available at this https URL.


[238] 2512.15562

Reducing Pilots in Channel Estimation with Predictive Foundation Models

Accurate channel state information (CSI) acquisition is essential for modern wireless systems, yet it becomes increasingly difficult under large antenna arrays, strict pilot overhead constraints, and diverse deployment environments. Existing artificial intelligence-based solutions often lack robustness and fail to generalize across scenarios. To address this limitation, this paper introduces a predictive-foundation-model-based channel estimation framework that enables accurate, low-overhead, and generalizable CSI acquisition. The proposed framework employs a predictive foundation model trained on large-scale cross-domain CSI data to extract universal channel representations and provide predictive priors with strong cross-scenario transferability. A pilot processing network based on a vision transformer architecture is further designed to capture spatial, temporal, and frequency correlations from pilot observations. An efficient fusion mechanism integrates predictive priors with real-time measurements, enabling reliable CSI reconstruction even under sparse or noisy conditions. Extensive evaluations across diverse configurations demonstrate that the proposed estimator significantly outperforms both classical and data-driven baselines in accuracy, robustness, and generalization capability.
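The core idea of fusing a predictive prior with real-time pilot measurements can be illustrated in the simplest scalar Gaussian setting. This is a hand-rolled MMSE combination for intuition, not the paper's learned transformer-based fusion:

```python
def fuse_prior_measurement(mu_prior, var_prior, y, var_noise):
    """MMSE fusion of a Gaussian prior h ~ N(mu_prior, var_prior) with a
    noisy pilot observation y = h + n, n ~ N(0, var_noise).
    Returns the posterior mean and variance of the channel coefficient."""
    gain = var_prior / (var_prior + var_noise)   # trust in the measurement
    mu_post = mu_prior + gain * (y - mu_prior)
    var_post = (1.0 - gain) * var_prior          # always below the prior variance
    return mu_post, var_post

# Equal prior and noise variances split the difference between prior and pilot.
print(fuse_prior_measurement(0.0, 1.0, 2.0, 1.0))
```

When pilots are sparse or noisy (large `var_noise`), the estimate falls back toward the prior; when pilots are clean, they dominate. A learned fusion network generalizes this weighting to high-dimensional, correlated CSI.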


[239] 2601.00557

A Language-Agnostic Hierarchical LoRA-MoE Architecture for CTC-based Multilingual ASR

Large-scale multilingual ASR (mASR) models such as Whisper achieve strong performance but incur high computational and latency costs, limiting their deployment on resource-constrained edge devices. In this study, we propose a lightweight and language-agnostic multilingual ASR system based on a CTC architecture with domain adaptation. Specifically, we introduce a Language-agnostic Hierarchical LoRA-MoE (HLoRA) framework integrated into an mHuBERT-CTC model, enabling end-to-end decoding via LID-posterior-driven LoRA routing. The hierarchical design consists of a multilingual shared LoRA for learning language-invariant acoustic representations and language-specific LoRA experts for modeling language-dependent characteristics. The proposed routing mechanism removes the need for prior language identity information or explicit language labels during inference, achieving true language-agnostic decoding. Experiments on MSR-86K and the MLC-SLM 2025 Challenge datasets demonstrate that HLoRA achieves comparable performance to two-stage inference approaches while reducing RTF by 11.7% and 8.2%, respectively, leading to improved decoding efficiency for low-resource mASR applications.
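Soft routing of LoRA experts by an LID posterior, as described above, can be sketched with plain matrices. Sizes are toy values and the layer is a bare linear map; the real system applies this inside an mHuBERT-CTC encoder:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n_lang = 8, 2, 3    # hidden size, LoRA rank, number of languages (toy)

W = rng.normal(size=(d, d))                                    # frozen backbone weight
A_sh, B_sh = rng.normal(size=(r, d)), rng.normal(size=(d, r))  # shared LoRA
A = rng.normal(size=(n_lang, r, d))                            # language-specific experts
B = rng.normal(size=(n_lang, d, r))

def hlora_forward(x, lid_posterior):
    """Hierarchical LoRA forward pass with soft, LID-posterior-driven routing:
        y = x (W + B_sh A_sh + sum_l p_l B_l A_l)^T
    No hard language label is needed; the posterior weights the experts."""
    expert_delta = sum(p * (B[l] @ A[l]) for l, p in enumerate(lid_posterior))
    W_eff = W + B_sh @ A_sh + expert_delta
    return x @ W_eff.T

x = rng.normal(size=(1, d))
y = hlora_forward(x, [0.7, 0.2, 0.1])   # uncertain LID: blend all experts
```

A one-hot posterior reduces exactly to a single language expert, so hard routing is a special case of this language-agnostic soft routing.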


[240] 2601.10453

Stable Differentiable Modal Synthesis for Learning Nonlinear Dynamics

Modal methods are a long-standing approach to physical modelling synthesis. Extensions to nonlinear problems are possible, leading to coupled nonlinear systems of ordinary differential equations. Recent work in scalar auxiliary variable techniques has enabled construction of explicit and stable numerical solvers for such systems. On the other hand, neural ordinary differential equations have been successful in modelling nonlinear systems from data. In this work, we examine how scalar auxiliary variable techniques can be combined with neural ordinary differential equations to yield a stable differentiable model capable of learning nonlinear dynamics. The proposed approach leverages the analytical solution for linear vibration of the system's modes so that physical parameters of a system remain easily accessible after the training without the need for a parameter encoder in the model architecture. Compared to our previous work that used multilayer perceptrons to parametrise nonlinear dynamics, we employ gradient networks that allow an interpretation in terms of a closed-form and non-negative potential required by scalar auxiliary variable techniques. As a proof of concept, we generate synthetic data for the nonlinear transverse vibration of a string and show that the model can be trained to reproduce the nonlinear dynamics of the system. Sound examples are presented.
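The scalar-auxiliary-variable reformulation the abstract relies on can be stated for a single mode with frequency ω and nonlinear potential V. Notation here is ours, a generic SAV sketch rather than the paper's exact formulation:

```latex
% One mode with frequency \omega, nonlinear potential V(q), and shift c:
H(q,p) = \tfrac{1}{2}p^{2} + \tfrac{1}{2}\omega^{2}q^{2} + V(q),
\qquad \psi := \sqrt{2\,(V(q)+c)}, \quad c > -\min_q V(q).

% The equivalent SAV system replaces V'(q) by g(q)\psi:
\dot q = p, \qquad
\dot p = -\omega^{2} q - g(q)\,\psi, \qquad
\dot \psi = g(q)\,\dot q, \qquad
g(q) = \frac{V'(q)}{\sqrt{2\,(V(q)+c)}},

% and conserves the quadratic energy
E = \tfrac{1}{2}p^{2} + \tfrac{1}{2}\omega^{2}q^{2} + \tfrac{1}{2}\psi^{2}.
```

Since $g(q)\,\psi = V'(q)$ the dynamics are unchanged, but the conserved energy is now quadratic in $(q,p,\psi)$, which is what admits explicit, provably stable schemes and makes non-negativity of the potential (the gradient-network parameterization of $g$) a natural requirement.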


[241] 2601.18184

VIBEVOICE-ASR Technical Report

This report presents VibeVoice-ASR, a general-purpose speech understanding framework built upon VibeVoice, designed to address the persistent challenges of context fragmentation and multi-speaker complexity in long-form audio (e.g., meetings, podcasts) that remain despite recent advancements in short-form speech recognition. Unlike traditional pipelined approaches that rely on audio chunking, VibeVoice-ASR supports single-pass processing for up to 60 minutes of audio. It unifies Automatic Speech Recognition, Speaker Diarization, and Timestamping into a single end-to-end generation task. In addition, VibeVoice-ASR supports over 50 languages, requires no explicit language setting, and natively handles code-switching within and across utterances. Furthermore, we introduce a prompt-based context injection mechanism that allows users to supply customized context, significantly improving accuracy on domain-specific terminology and polyphonic character disambiguation.


[242] 2602.09823

Covo-Audio Technical Report

In this work, we present Covo-Audio, a 7B-parameter end-to-end large audio-language model (LALM) that directly processes continuous audio inputs and generates audio outputs within a single unified architecture. Through large-scale curated pretraining and targeted post-training, Covo-Audio achieves state-of-the-art or competitive performance among models of comparable scale across a broad spectrum of tasks, including speech-text modeling, spoken dialogue, speech understanding, audio understanding, and full-duplex voice interaction. Extensive evaluations demonstrate that the pretrained foundation model exhibits strong speech-text comprehension and semantic reasoning capabilities on multiple benchmarks, outperforming representative open-source models of comparable scale. Furthermore, Covo-Audio-Chat, the dialogue-oriented variant, demonstrates strong spoken conversational abilities, including understanding, contextual reasoning, instruction following, and generating contextually appropriate and empathetic responses, validating its applicability to real-world conversational assistant scenarios. Covo-Audio-Chat-FD, the evolved full-duplex model, achieves substantially superior performance on both spoken dialogue capabilities and full-duplex interaction behaviors, demonstrating its practical robustness. To mitigate the high cost of deploying end-to-end LALMs for natural conversational systems, we propose an intelligence-speaker decoupling strategy that separates dialogue intelligence from voice rendering, enabling flexible voice customization with minimal text-to-speech (TTS) data while preserving dialogue performance. Overall, our results highlight the strong potential of 7B-scale models to integrate sophisticated audio intelligence with high-level semantic reasoning, and suggest a scalable path toward more capable and versatile LALMs.


[243] 2603.02105

Resilient Chaotic Cross-Layer Routing for Smart Grid IoT Networks

This paper presents the Distributed Adaptive Multi-Radio Cross-Layer Routing (DAMCR) protocol, designed to enhance reliability, adaptability, and energy efficiency in smart grid and industrial Internet of Things (IoT) communication networks. DAMCR integrates Chaotic Frequency-Hopping Spread Spectrum (C-FHSS) to improve physical-layer security and jamming resilience with Link-Adaptive Quality Power Control (LAQPC) to dynamically regulate transmission power based on instantaneous link quality and residual node energy. To meet heterogeneous traffic requirements, the protocol incorporates priority-aware message classification that differentiates between periodic monitoring data and time-critical fault and protection messages. The proposed framework is implemented and evaluated in MATLAB using a heterogeneous network composed of LoRa, Wi-Fi, and dual-radio nodes operating under AWGN, Rayleigh, and Rician fading environments. Extensive simulation results demonstrate that DAMCR consistently achieves a Packet Delivery Ratio (PDR) exceeding 95% across all evaluated scenarios, while maintaining end-to-end latency between 17 and 23 ms, even in the presence of controlled jamming attacks. These results confirm that the tight integration of chaos-based spectrum agility, cross-technology routing, and energy-aware cross-layer adaptation significantly improves communication reliability, latency stability, and resilience compared to conventional single-radio and static-routing protocols.
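Chaos-based frequency hopping can be illustrated with the logistic map, a standard choice for chaotic sequence generation. The abstract does not state which map C-FHSS uses, so treat this as a generic sketch:

```python
def chaotic_hop_sequence(x0, n_hops, n_channels, r=3.99):
    """Channel-hop sequence from the logistic map x_{k+1} = r x_k (1 - x_k),
    with r in the chaotic regime (~3.57 < r <= 4).  Transmitter and receiver
    regenerate the identical sequence from a shared secret seed x0, while an
    outside jammer cannot predict the next hop without the seed."""
    x, seq = x0, []
    for _ in range(n_hops):
        x = r * x * (1.0 - x)
        # Quantize the chaotic state to one of n_channels hop channels.
        seq.append(min(n_channels - 1, int(x * n_channels)))
    return seq

tx = chaotic_hop_sequence(0.412, 16, 16)
rx = chaotic_hop_sequence(0.412, 16, 16)
print(tx)  # identical at both ends of the link
```

Sensitivity to the seed is what provides jamming resilience: nearby seeds diverge exponentially, so the hop pattern is effectively unpredictable without the shared key.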


[244] 2603.06354

Frequency-Separable Hamiltonian Neural Network for Multi-Timescale Dynamics

While Hamiltonian mechanics provides a powerful inductive bias for neural networks modeling dynamical systems, Hamiltonian Neural Networks and their variants often fail to capture complex temporal dynamics spanning multiple timescales. This limitation is commonly linked to the spectral bias of deep neural networks, which favors learning low-frequency, slow-varying dynamics. Prior approaches have sought to address this issue through symplectic integration schemes that enforce energy conservation or by incorporating geometric constraints to impose structure on the configuration space. However, such methods either remain limited in their ability to fully capture multiscale dynamics or require substantial domain-specific assumptions. In this work, we exploit the observation that Hamiltonian functions admit decompositions into explicit fast and slow modes and can be reconstructed from these components. We introduce the Frequency-Separable Hamiltonian Neural Network (FS-HNN), which parameterizes the system Hamiltonian using multiple networks, each governed by Hamiltonian dynamics and trained on data sampled at distinct timescales. We further extend this framework to partial differential equations by learning state- and boundary-conditioned symplectic operators. Empirically, we show that FS-HNN improves long-horizon extrapolation performance on challenging dynamical systems and generalizes across a broad range of ODE and PDE problems.
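The fast/slow decomposition H = H_fast + H_slow can be illustrated with two uncoupled harmonic modes of well-separated frequencies, integrated with symplectic Euler. This toy stands in for the paper's learned networks; frequencies and step sizes are our illustrative choices:

```python
# Two modes with well-separated frequencies: the additive split
# H = H_fast + H_slow lets each part be modeled/trained at its own timescale.
W_FAST, W_SLOW = 50.0, 0.5

def grad_H_q(q_f, q_s):
    """dH/dq for H = sum over modes of 1/2 p^2 + 1/2 w^2 q^2."""
    return W_FAST**2 * q_f, W_SLOW**2 * q_s

def energy(q, p):
    (q_f, q_s), (p_f, p_s) = q, p
    return (0.5 * (p_f**2 + W_FAST**2 * q_f**2)
            + 0.5 * (p_s**2 + W_SLOW**2 * q_s**2))

def symplectic_euler(q, p, dt, steps):
    """Symplectic Euler: update p from q, then q from the new p.
    Stable for dt * w < 2, and keeps the energy bounded over long horizons."""
    (q_f, q_s), (p_f, p_s) = q, p
    for _ in range(steps):
        g_f, g_s = grad_H_q(q_f, q_s)
        p_f -= dt * g_f
        p_s -= dt * g_s
        q_f += dt * p_f
        q_s += dt * p_s
    return (q_f, q_s), (p_f, p_s)

q1, p1 = symplectic_euler((1.0, 1.0), (0.0, 0.0), dt=0.001, steps=2000)
```

An explicit non-symplectic integrator would let the fast mode's energy drift; the structure-preserving update is what makes long-horizon extrapolation viable, which is the property FS-HNN builds on.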


[245] 2603.09783

Lightweight 3D LiDAR-Based UAV Tracking: An Adaptive Extended Kalman Filtering Approach

Accurate relative positioning is crucial for swarm aerial robotics, enabling coordinated flight and collision avoidance. Although vision-based tracking has been extensively studied, 3D LiDAR-based methods remain underutilized despite their robustness under varying lighting conditions. Existing systems often rely on bulky, power-intensive sensors, making them impractical for small UAVs with strict payload and energy constraints. This paper presents a lightweight LiDAR-based UAV tracking system incorporating an Adaptive Extended Kalman Filter (AEKF) framework. Our approach effectively addresses the challenges posed by sparse, noisy, and nonuniform point cloud data generated by non-repetitive scanning 3D LiDARs, ensuring reliable tracking while remaining suitable for small drones with strict payload constraints. Unlike conventional filtering techniques, the proposed method dynamically adjusts the noise covariance matrices using innovation and residual statistics, thereby enhancing tracking accuracy under real-world conditions. Additionally, a recovery mechanism ensures continuity of tracking during temporary detection failures caused by scattered LiDAR returns or occlusions. Experimental validation was performed using a Livox Mid-360 LiDAR mounted on a DJI F550 UAV in real-world flight scenarios. The proposed method demonstrated robust UAV tracking performance under sparse LiDAR returns and intermittent detections, consistently outperforming both standard Kalman filtering and particle filtering approaches during aggressive maneuvers. These results confirm that the framework enables reliable relative positioning in GPS-denied environments without the need for multi-sensor arrays or external infrastructure.
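Innovation-based adaptation of the measurement-noise covariance can be sketched with a scalar filter. The paper's AEKF is a full extended filter over a 3D state; the random-walk model, window length, and positivity floor below are our illustrative choices:

```python
import random

class AdaptiveKF1D:
    """Scalar Kalman filter that re-estimates measurement noise R from a
    sliding window of innovations: R ~= E[v v] - P, floored to stay positive."""

    def __init__(self, x0, P0, Q, R, window=20):
        self.x, self.P, self.Q, self.R = x0, P0, Q, R
        self.innovations = []
        self.window = window

    def step(self, z):
        self.P += self.Q                      # predict (random-walk model)
        v = z - self.x                        # innovation
        self.innovations = (self.innovations + [v])[-self.window:]
        if len(self.innovations) == self.window:
            c_v = sum(i * i for i in self.innovations) / self.window
            self.R = max(c_v - self.P, 1e-6)  # innovation-based R estimate
        K = self.P / (self.P + self.R)        # update with the adapted R
        self.x += K * v
        self.P *= (1.0 - K)
        return self.x

# A deliberately wrong initial R = 10 adapts toward the true noise var (0.25).
random.seed(1)
kf = AdaptiveKF1D(x0=0.0, P0=1.0, Q=1e-4, R=10.0)
for _ in range(300):
    kf.step(5.0 + random.gauss(0.0, 0.5))
```

Because R tracks the empirical innovation statistics, the filter tightens its gain when returns are clean and loosens it under scattered or occluded LiDAR detections, the behavior motivating the AEKF.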


[246] 2603.11473

Slack More, Predict Better: Proximal Relaxation for Probabilistic Latent Variable Model-based Soft Sensors

Nonlinear Probabilistic Latent Variable Models (NPLVMs) are a cornerstone of soft sensor modeling due to their capacity for uncertainty delineation. However, conventional NPLVMs are trained using amortized variational inference, where neural networks parameterize the variational posterior. While facilitating model implementation, this parameterization converts the distributional optimization problem within an infinite-dimensional function space to parameter optimization within a finite-dimensional parameter space, which introduces an approximation error gap, thereby degrading soft sensor modeling accuracy. To alleviate this issue, we introduce KProxNPLVM, a novel NPLVM that instead relaxes the learning objective itself. Specifically, we first characterize the approximation error induced by the conventional approach. Based on this analysis, we adopt the Wasserstein distance as the proximal operator to relax the learning objective, yielding a new variational inference strategy derived from solving this relaxed optimization problem. On this foundation, we rigorously derive KProxNPLVM's optimization procedure and prove that the algorithm converges while sidestepping the approximation error. Finally, extensive experiments on synthetic and real-world industrial datasets are conducted to demonstrate the efficacy of the proposed KProxNPLVM.