New articles on Electrical Engineering and Systems Science


[1] 2603.20238

Joint Trajectory, RIS, and Computation Offloading Optimization via Decentralized Model-Based PPO in Urban Multi-UAV Mobile Edge Computing

Efficient computation offloading in multi-UAV edge networks becomes particularly challenging in dense urban areas, where line-of-sight (LoS) links are frequently blocked and user demand varies rapidly. Reconfigurable intelligent surfaces (RISs) can mitigate blockage by creating controllable reflected links, but realizing their potential requires tightly coupled decisions on UAV trajectories, offloading schedules, and RIS phase configurations. This joint optimization is hard to solve in practice because multiple UAVs must coordinate under limited information exchange, and purely model-free multi-agent reinforcement learning (MARL) often learns too slowly in highly dynamic environments. To address these challenges, we propose a decentralized model-based MARL framework. Each UAV optimizes mobility and offloading using observations from neighbors within several hops, and submits an RIS phase proposal that is aggregated by a lightweight RIS controller. To boost sample efficiency and stability, agents learn local dynamics models and perform short-horizon branched rollouts for proximal policy optimization (PPO) updates. Simulations show near-centralized performance with improved throughput and energy efficiency at scale.


[2] 2603.20249

Experimental Modal Analysis for engineering structures via time-delay Dynamic Mode Decomposition with Control

Experimental Modal Analysis (EMA) has been widely used to identify structural dynamic properties, including natural frequencies, damping ratios, and mode shapes, for structural integrity assessment. The Poly-reference Least Squares Complex Frequency (pLSCF) method is one of the most widely adopted approaches for EMA because of its strong ability to separate closely spaced modes and its robustness to measurement noise. However, pLSCF-based EMA is generally limited to low-dimensional cases with a small number of measurement points, as its computational cost increases rapidly for high-dimensional or continuous structural measurements, particularly with increasing model order. To overcome this limitation, this paper develops a high-dimensional EMA framework based on Dynamic Mode Decomposition with control (DMDc), a powerful data-driven technique originally developed in fluid dynamics, for modal identification under high-dimensional measurement scenarios. Specifically, the relationship between pLSCF and time-delay DMDc is clarified through the discrete state-space representation of the auto-regressive with exogenous inputs (ARX) model for linear systems. By showing that both methods describe the same physical dynamics of the structure, this study provides a physics-based rationale for applying time-delay DMDc to EMA. The capability and advantages of time-delay DMDc for modal parameter identification in both low- and high-dimensional measurements are validated through numerical simulations of a 6-DOF system and experiments on a cantilever beam using a digital camera. The results demonstrate that time-delay DMDc enables robust and reliable modal parameter identification, effectively addressing high-dimensional EMA problems that are difficult for conventional pLSCF and highlighting its potential for real-world structural dynamics applications.
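As background for readers unfamiliar with DMDc, the core regression it performs can be sketched in a few lines: given state snapshots and the inputs applied between them, the operators A and B of x_{k+1} = A x_k + B u_k are estimated by least squares. This is a generic DMDc sketch, without the time-delay embedding the paper builds on top of it; the function and variable names are illustrative, not the authors' code.

```python
import numpy as np

def dmdc(X, U):
    """Least-squares estimate of A, B in x_{k+1} = A x_k + B u_k.

    X: (n, m+1) state snapshots; U: (p, m) inputs applied between them.
    """
    X1, X2 = X[:, :-1], X[:, 1:]
    Omega = np.vstack([X1, U])            # stacked state-input data matrix
    G = X2 @ np.linalg.pinv(Omega)        # G = [A B]
    n = X.shape[0]
    return G[:, :n], G[:, n:]

# demo: recover a known 2-state system from noiseless simulated data
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
B_true = np.array([[0.0], [1.0]])
x = np.zeros((2, 51))
u = rng.standard_normal((1, 50))
for k in range(50):
    x[:, k + 1] = A_true @ x[:, k] + B_true @ u[:, k]
A_hat, B_hat = dmdc(x, u)
```

With noiseless, persistently excited data the pseudoinverse solve recovers A and B essentially exactly; the paper's contribution lies in relating a time-delay variant of this regression to the pLSCF/ARX formulation.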


[3] 2603.20258

The Deep-Match Framework for Event-Related Potential Detection in EEG

Reliable detection of event-related potentials (ERPs) at the single-trial level remains a major challenge due to the low signal-to-noise ratio of EEG recordings. In this work, we investigate whether incorporating prior knowledge about ERP templates into deep learning models can improve detection performance. We employ the Deep-Match framework for ERP detection using multi-channel EEG signals. The model is trained in two stages. First, an encoder-decoder architecture is trained to reconstruct input EEG signals, enabling the network to learn compact signal representations. In the second stage, the decoder is replaced with a detection module, and the network is fine-tuned for ERP identification. Two model variants are evaluated: a standard model with randomly initialized filters and a Deep-MF model in which input kernels are initialized using ERP templates. Model performance is assessed on a single-trial ERP detection task using leave-one-subject-out validation. The proposed Deep-MF model slightly outperforms the detector with standard kernel initialization for most held-out subjects. Despite substantial inter-subject variability, Deep-MF achieves a higher average F1-score (0.37) compared to the standard network (0.34), indicating improved robustness to cross-subject differences. The best performance obtained by Deep-MF reaches an F1-score of 0.71, exceeding the maximum score achieved by the standard model (0.59). These results demonstrate that ERP-informed kernel initialization can provide consistent improvements in subject-independent single-trial ERP detection. Overall, the findings highlight the potential of integrating domain knowledge with deep learning architectures for EEG analysis. The proposed approach represents a step toward practical wearable EEG and passive brain-computer interface systems capable of real-time monitoring of cognitive processes.


[4] 2603.20259

Polynomial Updates for the Unscented Kalman Filter

Most nonlinear filters used in spacecraft navigation are based on a linear approximation of the optimal minimum mean square error estimator. The Unscented Kalman Filter (UKF) handles nonlinear dynamics through a sigma-point transform, but the resulting state estimate remains a linear function of the measurement. This paper proposes a polynomial approximation of the optimal Bayesian update, leading to a Polynomial Unscented Kalman Filter that retains the structure of the standard UKF but enriches the measurement update with higher-order (polynomial) terms. To compute the moments required by this polynomial estimator, we employ a Conjugate Unscented Transformation (CUT), which accurately captures higher-order central moments of the state and measurement. Numerical examples, including Clohessy-Wiltshire and Circular Restricted 3-Body dynamics with non-Gaussian measurement noise, illustrate that the resulting polynomial-CUT filters improve both state estimation accuracy and covariance consistency when compared with their linear counterparts.
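For context, the sigma-point machinery that the UKF (and its polynomial extension) builds on can be illustrated with a plain unscented transform. The snippet below is a textbook sketch, not the paper's Conjugate Unscented Transformation; the scaling-parameter defaults are illustrative choices.

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1.0, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through f using 2n+1 sigma points."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)       # matrix square root
    pts = ([mean] + [mean + S[:, i] for i in range(n)]
                  + [mean - S[:, i] for i in range(n)])
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha**2 + beta)
    Y = np.array([f(p) for p in pts])             # transformed sigma points
    y_mean = wm @ Y
    D = Y - y_mean
    y_cov = (wc[:, None] * D).T @ D
    return y_mean, y_cov

# demo: for a linear map the transform is exact
m = np.array([1.0, 2.0])
P = np.array([[2.0, 0.3], [0.3, 1.0]])
H = np.array([[1.0, -1.0], [0.5, 2.0]])
y_mean, y_cov = unscented_transform(m, P, lambda x: H @ x)
```

In the standard UKF these propagated moments feed a linear measurement update; the paper's point is that the update itself can instead be made polynomial in the measurement.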


[5] 2603.20263

MiSiSUn: Minimum Simplex Semisupervised Unmixing

This paper proposes a semisupervised geometric unmixing approach called minimum simplex semisupervised unmixing (MiSiSUn). The geometry of the data is incorporated for the first time into library-based unmixing using a simplex-volume-flavored penalty based on an archetypal analysis-type linear model. Experiments were performed on two simulated datasets considering different mixing ratios and spatial structures at varying input noise levels. MiSiSUn considerably outperforms state-of-the-art semisupervised unmixing methods, with improvements ranging from 1 dB to over 3 dB across scenarios. The proposed method was also applied to a real dataset, where the visual interpretation is close to the geological map. The MiSiSUn implementation, written in PyTorch, is open-source and available at this https URL. Moreover, we provide a dedicated open-source Python package for semisupervised unmixing that includes all the methods used in the experiments for the sake of reproducibility.


[6] 2603.20332

Framework for Indoor Wireless Propagation Modeling Through Wireless Insite

Multipath, reflections, diffractions, and material interactions complicate indoor wireless propagation modelling. More than 80% of wireless data is consumed indoors; hence, planning successful deployments and maximizing network performance depend on accurate propagation modelling of indoor environments. This work explains a complete framework for indoor wireless propagation modelling via ray tracing simulation in a step-by-step manner. The ray tracing simulations are conducted with Wireless Insite, a proprietary electromagnetic propagation software package, whereas SketchUp is used at the input side for layout construction from field measurements, and MATLAB is used at the output side for portraying channel model parameters such as the power delay profile (PDP). A whole floor of the authors' department is modelled, and different transmitter-receiver locations are tested for possible use cases such as coverage hole prediction.
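As a small example of the kind of post-processing applied to a ray tracer's multipath output, the snippet below computes the mean excess delay and RMS delay spread of a power delay profile. It is a generic illustration in Python with hypothetical tap values, not part of the authors' MATLAB toolchain.

```python
import numpy as np

def delay_stats(delays_ns, powers_db):
    """Mean excess delay and RMS delay spread from a power delay profile."""
    p = 10.0 ** (np.asarray(powers_db) / 10.0)      # dB -> linear power
    p = p / p.sum()                                  # normalize tap powers
    tau = np.asarray(delays_ns, dtype=float)
    mean_delay = np.sum(p * tau)
    rms_spread = np.sqrt(np.sum(p * (tau - mean_delay) ** 2))
    return mean_delay, rms_spread

# demo with three hypothetical multipath taps
delays = np.array([0.0, 50.0, 120.0])    # ns
powers = np.array([0.0, -3.0, -10.0])    # dB
mean_tau, rms_tau = delay_stats(delays, powers)
```

The RMS delay spread extracted this way is a standard summary statistic for deciding, e.g., whether a channel is frequency-selective at a given bandwidth.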


[7] 2603.20355

CaroTo: A Tool for Fast Comprehensive Analysis of Carotid Artery Stenosis in 4D PC- and 3D BB-MRI Data

Atherosclerosis of the carotid artery increases stroke risk. Atherosclerosis assessment with MRI requires multimodal and multidimensional segmentation of the carotid artery, reproducible extraction of biomarkers, and the visualization of segmentations and biomarkers. We developed CaroTo, a tool that allows for standardized carotid atherosclerosis assessment. It combines the capabilities of MEVISFlow with specialized tools for carotid geometry and vessel wall assessment. It supports manual and automatic segmentation for 2D, 2D+time, and 3D images, facilitating precise and consistent evaluations of carotid artery stenosis.


[8] 2603.20385

Cost-Aware Neural Early Stopping for Local Constraint OSD Decoders

Local constraint ordered statistics decoding (LC-OSD) provides strong soft-decision performance for short-blocklength linear codes, but its practical cost is dominated by the number of tested error patterns (TEPs). This paper proposes a neural early stopping (NES) protocol for LC-OSD with explicit cost control through a single trade-off parameter balancing frame error risk against search effort. The proposed approach is trained with frame error rate (FER)-aligned supervision at predefined checkpoints and learns whether additional search is still likely to improve the current best candidate. At run time, stopping is decided by comparing the predicted benefit of continuing with a cost measured in TEPs. Experimental results across multiple code families show that the proposed protocol significantly reduces the average TEP count with only marginal FER degradation, using a single global model across the whole range of operating signal-to-noise ratios (SNRs).
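The one-parameter cost trade-off described above can be caricatured as an expected-benefit-versus-cost test. This is a hedged sketch of the general idea only, with invented names (`p_improve`, `lam`); the paper's NES rule uses a trained neural predictor rather than a hand-set probability.

```python
def should_stop(p_improve, gain_if_improve, teps_next_stage, lam):
    """Stop when the expected benefit of continuing the TEP search no
    longer justifies its cost; lam trades error risk against effort.

    p_improve: predicted probability that further search improves the
    current best candidate (in the paper, a neural network's output).
    """
    expected_benefit = p_improve * gain_if_improve
    return expected_benefit < lam * teps_next_stage
```

Raising `lam` makes the decoder stop earlier (fewer TEPs, slightly higher FER); lowering it approaches exhaustive search.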


[9] 2603.20387

End-to-End Multi-Task Learning for Adjustable Joint Noise Reduction and Hearing Loss Compensation

A multi-task learning framework is proposed for optimizing a single deep neural network (DNN) for joint noise reduction (NR) and hearing loss compensation (HLC). A distinct training objective is defined for each task, and the DNN predicts two time-frequency masks. During inference, the amounts of NR and HLC can be adjusted independently by exponentiating each mask before combining them. In contrast to recent approaches that rely on training an auditory-model emulator to define a differentiable training objective, we propose an auditory model that is inherently differentiable, thus allowing end-to-end optimization. The audiogram is provided as an input to the DNN, thereby enabling listener-specific personalization without the need for retraining. Results show that the proposed approach not only allows adjusting the amounts of NR and HLC individually, but also improves objective metrics compared to optimizing a single training objective. It also outperforms a cascade of two DNNs that were separately trained for NR and HLC, and shows competitive HLC performance compared to a traditional hearing-aid prescription. To the best of our knowledge, this is the first study that uses an auditory model to train a single DNN for both NR and HLC across a wide range of listener profiles.
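The mask-exponentiation mechanism for independently adjusting NR and HLC at inference can be written down directly. The sketch below assumes real-valued magnitude masks and illustrative exponent names `alpha`/`beta`; it shows the combination rule only, not the DNN that predicts the masks.

```python
import numpy as np

def combined_gain(mask_nr, mask_hlc, alpha=1.0, beta=1.0):
    """Combine NR and HLC time-frequency masks with independent exponents.

    alpha (resp. beta) = 0 disables that task, 1 applies it fully,
    and intermediate values interpolate the amount of processing.
    """
    return (mask_nr ** alpha) * (mask_hlc ** beta)

# demo with toy magnitude masks: NR attenuates, HLC amplifies
nr = np.array([0.5, 0.25])
hlc = np.array([2.0, 4.0])
gain = combined_gain(nr, hlc, alpha=0.5, beta=0.5)
```

Because the exponents act per mask, a listener can, for example, keep full compensation while dialing noise reduction down, without retraining the network.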


[10] 2603.20402

A Unified Family-optimal Solution to Covariance Intersection Problems with Semidefinite Programming

Covariance intersection (CI) methods provide a principled approach to fusing estimates with unknown cross-correlations by minimizing a worst-case measure of uncertainty that is consistent with the available information. This paper introduces a generalized CI framework, called overlapping covariance intersection (OCI), which unifies several existing CI formulations within a single optimization-based framework. This unification enables the characterization of family-optimal solutions for multiple CI variants, including standard CI and split covariance intersection (SCI), as solutions to a semidefinite program, for which efficient off-the-shelf solvers are available. When specialized to the corresponding settings, the proposed family-optimal solutions recover the state-of-the-art family-optimal solutions previously reported for CI and SCI. The resulting formulation facilitates the systematic design and real-time implementation of CI-based fusion methods in large-scale distributed estimation problems, such as cooperative localization.
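For reference, the standard CI rule that OCI generalizes fuses two estimates through a convex combination of their information matrices. Below is a minimal sketch that picks the weight by grid search over the trace criterion; the paper instead characterizes family-optimal solutions via semidefinite programming, which scales far better.

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=1000):
    """Fuse two consistent estimates with unknown cross-correlation:
    P^{-1} = w P1^{-1} + (1 - w) P2^{-1}, w chosen to minimize trace(P)."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    best = None
    for w in np.linspace(0.0, 1.0, n_grid + 1):
        P = np.linalg.inv(w * I1 + (1.0 - w) * I2)
        x = P @ (w * I1 @ x1 + (1.0 - w) * I2 @ x2)
        if best is None or np.trace(P) < best[0]:
            best = (np.trace(P), x, P)
    return best[1], best[2]

# demo: two estimates of the same state with complementary uncertainty
x1 = np.array([1.0, 2.0]); P1 = np.diag([1.0, 4.0])
x2 = np.array([1.0, 2.0]); P2 = np.diag([4.0, 1.0])
x_f, P_f = covariance_intersection(x1, P1, x2, P2)
```

The fused covariance is guaranteed consistent for any weight, which is exactly what makes CI safe when cross-correlations are unknown.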


[11] 2603.20411

Activate the Dual Cones: A Tight Reformulation of Conic ACOPF Constraints

By exploiting the observed tightness of dual rotated second-order cone (RSOC) constraints, this paper transforms the dual of a conic ACOPF relaxation into an equivalent, non-conic problem where dual constraints are implicitly enforced through eliminated dual RSOC variables. To accomplish this, we apply the RSOC-based Jabr relaxation of ACOPF, pose its dual, and then show that all dual RSOC constraints must be tight (i.e., active) at optimality. We then construct a reduced dual maximization problem with only non-negativity constraints, avoiding the explicit RSOC inequality constraints. Numerical experiments confirm that the tight formulation recovers the same dual objective values as a mature conic solver (e.g., MOSEK via PowerModels) on various PGLib benchmark test systems (ranging from 3 to 1354 buses). The proposed formulation offers performance benefits compared with its conic counterpart and allows us to define a bounding function which provides a guaranteed lower bound on system cost. While this paper focuses on demonstrating the correctness and validity of the proposed structural simplification, it lays the groundwork for future GPU-accelerated first-order optimization methods which can exploit the unconstrained nature of the proposed formulation.


[12] 2603.20434

Verifiable Error Bounds for Physics-Informed Neural KKL Observers

This paper proposes a computable state-estimation error bound for learning-based Kazantzis--Kravaris/Luenberger (KKL) observers. Recent work learns the KKL transformation map with a physics-informed neural network (PINN) and a corresponding left-inverse map with a conventional neural network. However, no computable state-estimation error bounds are currently available for this approach. We derive a state-estimation error bound that depends only on quantities that can be certified over a prescribed region using neural network verification. We further extend the result to bounded additive measurement noise and demonstrate the guarantees on nonlinear benchmark systems.


[13] 2603.20447

Performance Analysis of LEO-Terrestrial Systems in Presence of Doppler Effect

In this paper, we present a novel stochastic geometry-based approach to analyze the effect of residual Doppler shift on orthogonal frequency-division multiple access (OFDMA) systems in low earth orbit (LEO) satellite-terrestrial networks. Focusing on multiuser systems employing common Doppler compensation, we analytically formulate the coverage probability by explicitly capturing the loss of OFDMA subcarrier orthogonality caused by geometry-induced residual Doppler through inter-carrier interference. The analysis accounts for the spatial distribution of ground terminals within the serving satellite's cell and is validated through extensive Monte-Carlo simulations for both S-band and Ka-band settings. The results demonstrate the high accuracy of both the Doppler shift approximation and the derived coverage probability expression, while also highlighting the significant impact of residual Doppler shift, even after compensation, emphasizing the necessity of considering this effect in the design of future satellite networks.
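The loss of subcarrier orthogonality under a residual frequency offset can be illustrated with the classical OFDM leakage coefficients. The snippet below is a generic normalized-CFO sketch, not the paper's stochastic-geometry coverage analysis; `eps` denotes the residual Doppler as a fraction of the subcarrier spacing.

```python
import numpy as np

def ici_power_profile(eps, N):
    """Per-offset leakage powers |S(k)|^2 caused by a residual carrier
    frequency offset eps (normalized to subcarrier spacing), N subcarriers."""
    k = np.arange(N)
    x = k + eps
    den = N * np.sin(np.pi * x / N)
    with np.errstate(divide="ignore", invalid="ignore"):
        S = np.sin(np.pi * x) / den
    S = np.where(np.isclose(den, 0.0), 1.0, S)   # 0/0 limit is 1
    return S ** 2

# demo: with 10% residual CFO, some desired-subcarrier power turns into ICI
p = ici_power_profile(0.1, 64)
desired_power, ici_power = p[0], p[1:].sum()
```

By Parseval the profile sums to one, so any residual offset directly converts useful signal power into inter-carrier interference, which is the mechanism the paper's coverage expressions capture.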


[14] 2603.20462

Shift-Invariant Feature Attribution in the Application of Wireless Electrocardiograms

Assigning relevance scores to the input features of a machine learning model makes it possible to measure the contributions of the features in achieving a correct outcome, and is regarded as one of the approaches towards developing explainable models. For biomedical assignments, this is very useful for medical experts to comprehend machine-based decisions. In the analysis of electrocardiogram (ECG) signals, in particular, understanding which of the electrocardiogram samples or features contributed most to a given decision amounts to understanding the underlying cardiac phases or conditions the machine tries to explain. For the computation of relevance scores, determining the proper baseline is important. Moreover, the scores should have a distribution which is at once intuitive to interpret and easy to associate with the underlying cardiac reality. The purpose of this work is to achieve these goals. Specifically, we propose a shift-invariant baseline which has a physical significance in the analysis as well as the interpretation of electrocardiogram measurements. Moreover, we aggregate significance scores in such a way that they can be mapped to cardiac phases. We demonstrate our approach by inferring physical exertion from cardiac exertion using a residual network. We show that the ECG samples which achieved the highest relevance scores (and, therefore, contributed most to the accurate recognition of the physical exertion) are those associated with the P and T waves. Index Terms: Attribution, baseline, cardiovascular diseases, electrocardiogram, activity recognition, machine learning.


[15] 2603.20472

Flow-based Polynomial Chaos Expansion for Uncertainty Quantification in Power System Dynamic Simulation

The large-scale integration of renewable energy sources introduces significant operational uncertainty into power systems. Although Polynomial Chaos Expansion (PCE) provides an efficient tool for uncertainty quantification (UQ) in power system dynamics, its accuracy depends critically on the faithful representation of input uncertainty, an assumption that is often violated in practice due to correlated, non-Gaussian, and otherwise complex data distributions. In contrast to purely data-driven surrogates that often overlook rigorous input distribution modelling, this paper introduces flow-based PCE, a unified framework that couples expressive input modelling with efficient uncertainty propagation. Specifically, normalising flows are employed to learn an invertible transport map from a simple base distribution to the empirical joint distribution of uncertain inputs, and this map is then integrated directly into the PCE construction. In addition, the Map Smoothness Index (MSI) is introduced as a new metric to quantify the quality of the learned map, and smoother transformations are shown to yield more accurate PCE surrogates. The proposed Flow-based PCE framework is validated on benchmark dynamic models, including the IEEE 14-bus system and the Great Britain transmission system, under a range of uncertainty scenarios.
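As a minimal illustration of the PCE building block (independent of the flow-based input modelling), a one-dimensional probabilists'-Hermite expansion can be fitted by regression on samples of a standard-normal input. The names and degree below are illustrative, and the target function is a toy example, not a power-system model.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def fit_pce(xi, y, degree):
    """Regression-based PCE fit: probabilists' Hermite basis in a
    standard-normal germ xi, coefficients by least squares."""
    Psi = hermevander(xi, degree)                  # columns He_0..He_degree
    coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)
    return coef

# demo: the fit recovers an exact degree-2 expansion
rng = np.random.default_rng(1)
xi = rng.standard_normal(2000)
y = 1.0 + 2.0 * xi + 0.5 * (xi**2 - 1.0)           # 1*He0 + 2*He1 + 0.5*He2
coef = fit_pce(xi, y, degree=2)
# coef[0] is the surrogate mean; higher coefficients carry variance terms
```

The flow-based framework in the paper would first map correlated, non-Gaussian inputs to such a standard-normal germ before this expansion step applies.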


[16] 2603.20489

Realization of a Fully Connected Neural Layer Over-the-Air through Multi-hop Amplify-and-Forward Relays

We study the problem of implementing a fully-connected layer of a neural network using wireless over-the-air computing. We assume a multi-hop system with a multi-antenna transmitter and receiver, along with a number of amplify-and-forward relay devices in between. We formulate an optimization problem that optimizes the transmitter precoder, receiver combiner, and amplify-and-forward gains, subject to relay device and transmitter power constraints. We propose an alternating optimization framework that optimizes the imitation accuracy. Simulation results reveal that multi-hop relaying achieves almost perfect classification accuracy when used in a neural network.


[17] 2603.20500

A Control Architecture for Fast Frequency Regulation with Increasing Penetration of Inverter Based Resources

This paper addresses frequency regulation under operational constraints in interconnected power systems with high penetration of inverter-based renewable generation. A two-layer control architecture is proposed that combines optimized droop and Virtual Synchronous Machine (VSM) primary control with a Model Predictive Control (MPC) secondary layer operating at realistic control-room update rates. Unlike recently proposed approaches, the proposed framework integrates MPC within existing grid control structures, enabling constraint-aware coordination. A reduced-order frequency response model is systematically derived from a high-fidelity grid model using Hankel singular values, and a reduced-order Kalman-Bucy observer enables state and disturbance estimation using only measurable outputs. Validation using representative data from the Kingdom of Saudi Arabia demonstrates effective frequency regulation under realistic operating conditions.


[18] 2603.20553

Performance Guarantees for Data-Driven Sequential Decision-Making

The solutions to many sequential decision-making problems are characterized by dynamic programming and Bellman's principle of optimality. However, due to the inherent complexity of solving Bellman's equation exactly, there has been significant interest in developing various approximate dynamic programming (ADP) schemes to obtain near-optimal solutions. A fundamental question that arises is: how close are the objective values produced by ADP schemes relative to the true optimal objective values? In this paper, we develop a general framework that provides performance guarantees for ADP schemes in the form of ratio bounds. Specifically, we show that the objective value under an ADP scheme is at least a computable fraction of the optimal value. We further demonstrate the applicability of our theoretical framework through two applications: data-driven robot path planning and multi-agent sensor coverage.
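For readers new to the setting, the exact-DP baseline that ADP schemes approximate is ordinary value iteration on Bellman's equation. The snippet below is a textbook sketch on a tiny MDP, not the paper's ratio-bound machinery; tensor shapes and the toy problem are illustrative.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, iters=200):
    """P: (A, S, S) transition probabilities, R: (A, S) rewards.

    Returns the value function and the greedy policy."""
    V = np.zeros(P.shape[1])
    for _ in range(iters):
        Q = R + gamma * (P @ V)        # Bellman backup over all (a, s)
        V = Q.max(axis=0)
    return V, Q.argmax(axis=0)

# demo: two states, action 0 = stay, action 1 = switch; reward 1 for
# staying in state 1, so the optimal values are V* = [9, 10] at gamma=0.9
P = np.array([[[1.0, 0.0], [0.0, 1.0]],
              [[0.0, 1.0], [1.0, 0.0]]])
R = np.array([[0.0, 1.0],
              [0.0, 0.0]])
V, policy = value_iteration(P, R)
```

An ADP scheme replaces the exact backup with an approximation; the paper's ratio bounds then certify what fraction of this optimal value the approximate scheme retains.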


[19] 2603.20557

Sustainable Load Balancing for Wireless Networks With Renewable Energy Sources

Future wireless networks powered by renewable energy sources and storage systems (e.g., batteries) require energy-aware mechanisms to ensure stability in critical and high-demand scenarios. These include large-scale user gatherings, especially during evening hours when solar generation is unavailable, and days with poor wind conditions that limit the effectiveness of wind-based energy harvesting. Maintaining network performance under such constraints, while preserving stored energy, remains a key challenge. This work proposes an enhanced Proactive-Reactive Load Balancing algorithm that integrates energy conditions into mobility management. By leveraging standardized mobility events, the algorithm optimizes traffic distribution and energy utilization (avoiding complete drainage of stored energy), thereby preventing service degradation. Simulations show improved energy sustainability and network performance under congestion and limited solar availability.


[20] 2603.20564

Online Feedback Optimization of Energy Storage to Smooth Data Center Grid Impacts

The growing electricity demand of AI data centers introduces significant voltage variability in power networks, affecting not only their own operation but also the experience of all users sharing the network. To smooth data center impacts on power networks, we develop an online feedback optimization approach that controls distributed battery energy storage systems to mitigate voltage issues induced by data center operations. The controller adjusts the active and reactive power setpoints of distributed battery systems in response to voltage measurements, with a two-fold objective: managing voltage to minimize the magnitude of constraint violations and smoothing voltage profiles. Control performance is evaluated in a high-fidelity simulation environment that integrates a three-phase distribution feeder and a detailed battery system model, and benchmarked against a local control approach with similar objectives but without optimality guarantees and constraint enforcement. We show that the proposed controller delivers consistent voltage regulation in the long term, while the local control approach pursues the objectives more aggressively but quickly hits the storage limits.


[21] 2603.20608

Reinforcement Learning-Based Secure Near-field Directional Modulation Enhanced by Rotatable RIS

This paper investigates secure Directional Modulation (DM) design enhanced by a rotatable active Reconfigurable Intelligent Surface (RIS). In conventional RIS-assisted DM networks, the security performance gain is limited due to the multiplicative path loss introduced by the RIS reflection path. To address this challenge, a Secrecy Rate (SR) maximization problem is formulated, subject to constraints including the eavesdropper's Direction Of Arrival (DOA) estimation performance, transmit power, rotatable range, and maximum reflection amplitude of the RIS elements. To solve this non-convex optimization problem, three algorithms are proposed: a multi-stream null-space projection and leakage-based method, an enhanced leakage-based method, and an optimization scheme based on the Distributed Soft Actor-Critic with Three refinements (DSAC-T). Simulation results validate the effectiveness of the proposed algorithms. A performance trade-off is observed between the eavesdropper's DOA estimation accuracy and the achievable SR. The security enhancement provided by the RIS is more significant in systems equipped with a small number of antennas. By optimizing the orientation of the RIS, a 52.6% improvement in SR performance can be achieved.


[22] 2603.20614

Sparse stability diagrams of LSCF method via strategic pole destabilization using orthogonal matching pursuit

In various engineering fields, including mechanical, aerospace, and civil engineering, the identification of modal parameters, including natural frequencies, damping ratios, and mode shapes, is crucial for determining the vibration characteristics of engineered structures. A common method for identifying the modal parameters of structures involves experimental modal analysis using frequency response functions (FRFs) obtained from forced vibration tests. The least squares complex frequency (LSCF) domain method is a widely-used frequency-domain curve-fitting method that fits high-order polynomials to the FRFs and can extract modal parameters with high accuracy. However, increasing the polynomial order tends to result in the generation of non-physical spurious poles that need to be eliminated from the stability diagrams. To overcome this issue, we propose a method that strategically destabilizes the stable yet spurious poles of the characteristic polynomials by making their coefficients as sparse as possible via orthogonal matching pursuit (OMP). This results in sparse stability diagrams because unstable poles can be eliminated from the diagrams. In this paper, the proposed method is first applied to numerically obtained FRFs of a rectangular plate using a finite element model, and its validity is discussed. Then, the method is applied to experimentally obtained FRFs of rectangular plates with low damping and with high damping. Furthermore, to confirm its applicability to industrial applications with realistic complexity, it is also applied to the FRFs of an electric machine stator core used in electric vehicles. Based on the results, we confirm that the spurious roots can be eliminated from the stability diagrams without compromising accuracy for the cases considered.
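The OMP routine used above to sparsify the characteristic-polynomial coefficients is a standard greedy sparse solver. A minimal generic implementation (not the authors' code; the orthonormal demo dictionary is an illustrative easy case) looks like:

```python
import numpy as np

def omp(A, b, n_nonzero):
    """Orthogonal matching pursuit: greedily pick the column of A most
    correlated with the residual, then refit on the chosen support."""
    residual = b.astype(float).copy()
    support = []
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef          # orthogonalized residual
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# demo: exact recovery of a 2-sparse vector under an orthonormal dictionary
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((8, 8)))
x_true = np.zeros(8); x_true[2] = 1.5; x_true[5] = -0.7
b = Q @ x_true
x_hat = omp(Q, b, 2)
```

The refit step on the accumulated support is what distinguishes OMP from plain matching pursuit and keeps the residual orthogonal to all selected atoms.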


[23] 2603.20621

A Channel Knowledge Map-Driven Two-Stage Coordinated User Scheduling in Multi-Cell Massive MIMO Systems

This paper investigates narrowband coordinated user scheduling in multi-cell massive multiple-input multiple-output (MIMO) systems. We formulate the problem under a spectral-efficiency maximization criterion, revealing inherent challenges in computational complexity and signaling overhead. To address these, we develop a user-scheduling-oriented channel knowledge map (US-CKM) and a US-CKM-driven two-stage coordinated scheduling framework. By exploiting the mapping between location information and statistical channel state information (SCSI), the system enables rapid SCSI retrieval and persistent reuse, substantially reducing CSI acquisition overhead. Embedding statistical channel correlation into the CKM further characterizes interuser interference patterns. The framework designs an intra-cell active-user selection scheme for the first stage and an inter-cell coordinated scheduling scheme for the second, both based on US-CKM entries. The first stage identifies users with favorable channel gains and low intra-cell interference, reducing the candidate set with marginal sum-rate loss. The second stage suppresses inter-cell interference (ICI) by exploiting cross-cell channel correlations. To enhance robustness against imperfect SCSI in dynamic scattering environments, we augment the framework with a reliability-guided mechanism. Instead of uniform treatment, we evaluate entry stability using a grid reliability metric quantifying channel measurement variance at sampling locations. Low-reliability grids are identified, and their instantaneous CSI is acquired in real time to integrate with existing SCSI. This process refines channel gain and spatial correlation characteristics, ensuring robust performance under imperfect conditions.


[24] 2603.20638

OmniCodec: Low Frame Rate Universal Audio Codec with Semantic-Acoustic Disentanglement

Large Language Models (LLMs) have advanced audio generation through discrete representation learning. However, most existing neural codecs focus on speech and emphasize reconstruction fidelity, overlooking unified low frame rate modeling across diverse audio domains, including speech, music, and general sound. Moreover, high reconstruction quality does not necessarily yield semantically informative representations, limiting effectiveness in downstream generation tasks. We propose OmniCodec, a universal neural audio codec tailored for low frame rates. It adopts a hierarchical multi-codebook design with semantic-acoustic decoupling by leveraging the audio encoder of a pre-trained understanding model, along with a self-guidance strategy to improve codebook utilization and reconstruction. Experiments show that, compared with the Mimi codec at the same bitrate, OmniCodec achieves outstanding performance, delivering superior reconstruction quality while also providing more semantically informative representations that benefit downstream generation tasks. Our model and code will be open-sourced. Our demo page is available.


[25] 2603.20672

Towards Certified Sim-to-Real Transfer via Stochastic Simulation-Gap Functions

This paper introduces the notion of stochastic simulation-gap function, which formally quantifies the gap between an approximate mathematical model and a high-fidelity stochastic simulator. Since controllers designed for the mathematical model may fail in practice due to unmodeled gaps, the stochastic simulation-gap function enables the simulator to be interpreted as the nominal model with bounded state- and input-dependent disturbances. We propose a data-driven approach and establish a formal guarantee on the quantification of this gap. Leveraging the stochastic simulation-gap function, we design a controller for the mathematical model that ensures the desired specification is satisfied in the high-fidelity simulator with high confidence, taking a step toward bridging the sim-to-real gap. We demonstrate the effectiveness of the proposed method using a TurtleBot model and a pendulum system in stochastic simulators.


[26] 2603.20692

Agentic Physical-AI for Self-Aware RF Systems

Intelligent control of RF transceivers that adapt to dynamic operational conditions is essential in modern and future communication systems. We propose a multi-agent neurosymbolic AI system in which AI agents are assigned to circuit components. Each agent consists of an internal model and a corresponding control algorithm. Modeling of the IF amplifier shows promising results, and the same approach can be extended to all the components, creating a fully intelligent RF system.


[27] 2603.20700

mmWave-Diffusion: A Novel Framework for Respiration Sensing Using Observation-Anchored Conditional Diffusion Model

Millimeter-wave (mmWave) radar enables contactless respiratory sensing, yet fine-grained monitoring is often degraded by nonstationary interference from body motion. To achieve micromotion interference removal, we propose mmWave-Diffusion, an observation-anchored conditional diffusion framework that directly models the residual between radar phase observations and the respiratory ground truth, and initializes sampling within an observation-consistent neighborhood rather than from Gaussian noise, thereby aligning the generative process with the measurement physics and reducing inference overhead. The accompanying Radar Diffusion Transformer (RDT) is explicitly conditioned on phase observations, enforces strict one-to-one temporal alignment via patch-level dual positional encodings, and injects local physical priors through banded-mask multi-head cross-attention, enabling robust denoising and interference removal in just 20 reverse steps. Evaluated on 13.25 hours of synchronized radar-respiration data, mmWave-Diffusion achieves state-of-the-art waveform reconstruction and respiratory-rate estimation with strong generalization. Code repository: this https URL.


[28] 2603.20743

The Binding Effect: Analyzing How Multi-Dimensional Cues Form Gender Bias in Instruction TTS

Current bias evaluations in Instruction Text-to-Speech (ITTS) often rely on univariate testing, overlooking the compositional structure of social cues. In this work, we investigate gender bias by modeling prompts as combinations of Social Status, Career stereotypes, and Persona descriptors. Analyzing open-source ITTS models, we uncover systematic interaction effects where social dimensions modulate one another, creating complex bias patterns missed by univariate baselines. Crucially, our findings indicate that these biases extend beyond surface-level artifacts, demonstrating strong associations with the semantic priors of pre-trained text encoders and the skewed distributions inherent in training data. We further demonstrate that generic diversity prompting is insufficient to override these entrenched patterns, underscoring the need for compositional analysis to diagnose latent risks in generative speech.


[29] 2603.20762

4D Fresnel Space-Time Modulation for Near-Field ELAA: Kinematic Multiplexing and O(N log N) Precoding at Sub-THz Frequencies

Extremely Large Antenna Arrays (ELAA) operating at sub-terahertz frequencies introduce a regime where near-field Fresnel propagation and high-mobility carrier Doppler interact simultaneously, creating a four-dimensional signal space that existing schemes exploit only partially. This paper proposes \textbf{4D Fresnel Space-Time Modulation (4D-FSM)}, a unified framework encoding information jointly across angle, depth, synthetic velocity, and QAM amplitude through a structured symbol manifold $\mathcal{S}$. Synthetic velocity is introduced via Space-Time Modulation (STM): a linear phase ramp $u(\xi,t) = \exp(j[\Omega t + g_k\xi])$ induces a Doppler-equivalent shift without physical motion, creating velocity-orthogonal bubbles that resolve co-located users. We derive the joint orthogonality surface governing simultaneous user separability in depth and velocity, revealing that users separated in depth require strictly less velocity separation to remain orthogonal -- a multiplexing gain with no counterpart in OTFS or LDMA. The Discrete Fresnel Transform (DFnT) factorization $\mathbf{H} = \mathbf{F}_D \mathbf{C}(z) \mathbf{P}$ reduces precoder complexity from $\mathcal{O}(N^3)$ to $\mathcal{O}(N\log N)$, completing within \SI{500}{\nano\second} against a \SI{5.4}{\micro\second} coherence window. Monte Carlo evaluation at $f_c = \SI{140}{\giga\hertz}$, $N = 4096$ confirms $\rho \approx 0.998$ across the full velocity range, \SI{6.16}{\bit\per\second\per\hertz} spectral efficiency where all baselines collapse, and $K_{\max} = 64$ orthogonal users -- a $248\times$ sum-rate advantage over TTD at $K = 50$.
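The $\mathcal{O}(N\log N)$ claim follows from never forming the precoder matrix explicitly: each factor of $\mathbf{H} = \mathbf{F}_D \mathbf{C}(z) \mathbf{P}$ is a permutation, a diagonal scaling, or a fast transform. A minimal NumPy sketch, with an arbitrary chirp and permutation standing in for the paper's $\mathbf{C}(z)$ and $\mathbf{P}$ (both hypothetical here), checks that the factored application matches the dense product:

```python
import numpy as np

N = 256
rng = np.random.default_rng(0)
x = rng.normal(size=N) + 1j * rng.normal(size=N)   # symbol vector

# Illustrative stand-ins for the paper's factors: a diagonal chirp C(z)
# and a permutation P.
chirp = np.exp(1j * np.pi * np.arange(N) ** 2 / N)
perm = rng.permutation(N)

# Dense application: build H = F_D @ diag(chirp) @ P and multiply, O(N^2) per use.
F = np.fft.fft(np.eye(N))                 # DFT matrix F_D
H = F @ np.diag(chirp) @ np.eye(N)[perm]
y_dense = H @ x

# Factored application: permute, scale, FFT -- O(N log N), no matrix built.
y_fast = np.fft.fft(chirp * x[perm])

print(np.allclose(y_dense, y_fast))       # True
```

The same structure is why the per-update cost can fit inside a short coherence window: only vectors of length $N$ are ever touched.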


[30] 2603.20784

Enhanced Direction-Sensing Methods and Performance Analysis in Low-Altitude Wireless Network via a Rotation Antenna Array

Due to the directive property of each antenna element, the received signal power can be severely attenuated when the emitter deviates from the array boresight, leading to a severe degradation in sensing performance along the corresponding direction. Although existing rotatable-array sensing methods such as recursive rotation (RR-Root-MUSIC) can mitigate this issue by iteratively rotating and sensing, they require several mechanical rotations and repeated eigendecomposition operations, yielding high computational complexity and low time efficiency. To address this problem, a pre-rotation initialization using received power as a criterion is proposed to significantly reduce the computational complexity and improve time efficiency. Using this idea, a low-complexity enhanced direction-sensing framework with pre-rotation initialization and iterative greedy spatial-spectrum search (PRI-IGSS) is developed in three stages: (1) the normal vector of the array is rotated over a set of candidates to find the optimal direction with the maximum sensing energy, with the corresponding DOA value computed by the Root-MUSIC algorithm; (2) the array is mechanically rotated to the initially estimated direction and kept fixed; (3) an iterative greedy spatial-spectrum search or receive beamforming method, motivated by reinforcement learning, is designed with a reduced search range, and a summation of all previous sample covariance matrices with the current one is adopted to provide an increasing performance gain as the iteration proceeds. To assess the performance of the proposed method, the corresponding CRLB is derived with a simplified rotation model. Simulation results demonstrate that the proposed PRI-IGSS method performs much better than RR-Root-MUSIC and achieves the CRLB in terms of mean squared error, owing to the fact that the latter performs no sample accumulation.


[31] 2603.20823

Underwater imaging without color distortions requires RAW capture

Consumer cameras are ubiquitous in aquatic sciences because they are affordable and easy to use, generating vast collections of underwater imagery for ecosystem surveys, monitoring, mapping, and animal behavior studies. Yet when color is the variable of interest, such as in coral-bleaching research, most of these images cannot be used quantitatively if captured in JPEG format. The limitation is not due to JPEG compression itself, but to the in-camera processing that precedes it: as cameras produce these images, built-in algorithms modify colors and contrast not to ensure color accuracy but to produce visually pleasing pictures. These irreversible in-camera operations break the linear relationship between pixel values and scene radiance, making colors impossible to standardize, reproduce, or compare across cameras, locations, or time. This essay explains the scientific costs of this practice and offers pragmatic guidance to prevent irreversible data loss, beginning with the capture and archiving of minimally processed RAW images.


[32] 2603.20838

Physics-Informed Graph Neural Jump ODEs for Cascading Failure Prediction in Power Grids

Cascading failures in power grids pose severe risks to infrastructure reliability, yet real-time prediction of their progression remains an open challenge. Physics-based simulators require minutes to hours per scenario, while existing graph neural network approaches treat cascading failures as static classification tasks, ignoring temporal evolution and physical laws. This paper proposes Physics-Informed Graph Neural Jump ODEs (PI-GN-JODE), combining an edge-conditioned graph neural network encoder, a Neural ODE for continuous power redistribution, a jump process handler for discrete relay trips, and Kirchhoff-based physics regularization. The model simultaneously predicts edge and node failure probabilities, severity classification, and demand not served, while an autoregressive extension enables round-by-round temporal cascade prediction. Evaluated on the IEEE 24-bus and 118-bus systems with 20,000 scenarios each, PI-GN-JODE achieves a Precision--Recall Area Under the Curve of 0.991 for edge failure detection, 0.973 for node failure detection, and a coefficient of determination of 0.951 for demand-not-served regression on the 118-bus system, outperforming a standard graph convolutional network baseline (0.948, 0.925, and 0.912, respectively). Ablation studies reveal that the four components function synergistically, with the physics-informed loss alone contributing +9.2 points to demand-not-served regression. Performance improves when scaling to larger grids, and the architecture achieves the highest balanced accuracy (0.996) on the PowerGraph benchmark using data from a different simulation framework.


[33] 2603.20841

Karhunen-Loève Expansion for Fluid Antenna Systems: Information-Theoretic Optimal Channel Compression and Outage Analysis

Fluid antenna systems (FAS) achieve spatial diversity by dynamically switching among $N$ densely packed ports, but the resulting spatially correlated Rayleigh channels render exact outage analysis intractable. Existing block-correlation models (BCM) impose structural approximations on the channel covariance matrix that can introduce optimistic performance bias. This paper proposes a principled Karhunen-Loève (KL) expansion framework that decomposes the $N$-dimensional correlated FAS channel into independent eigenmodes and performs a controlled rank-$K$ truncation, reducing the outage analysis to a $K$-dimensional integration with $K \ll N$. Closed-form outage expressions are derived for the rank-1 and rank-2 cases, and a general Gauss-Hermite quadrature formula is provided for arbitrary $K$. On the theoretical front, it is proved via Anderson's inequality that the KL approximation \emph{always} overestimates the outage probability, providing a conservative guarantee essential for secure system design. Leveraging the Slepian--Landau--Pollak concentration theorem, it is established that only $K^* = 2\lceil W \rceil + 1$ eigenmodes are needed regardless of $N$, where $W$ is the normalized aperture. It is further shown that the KL truncation achieves the Gaussian rate-distortion bound, certifying it as the information-theoretically optimal channel compression. Extensive numerical results confirm that (i) theoretical predictions match Monte Carlo simulations, (ii) the entropy fraction converges faster than the power fraction, (iii) the KL framework uniformly outperforms BCM in approximation accuracy while avoiding the optimistic bias inherent in block-diagonal models, and (iv) the effective degrees of freedom scale with the aperture rather than the number of ports.
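The computational core of the KL approach can be sketched numerically: eigendecompose the port covariance, keep the top-$K$ modes, and simulate the truncated channel. The sketch below is illustrative only; it uses a sinc kernel as a stand-in for the Jakes correlation and arbitrary values of $N$, $W$, $K$, and the outage threshold:

```python
import numpy as np

N, W, K = 64, 1.0, 5                      # ports, normalized aperture, KL rank
pos = np.linspace(0.0, W, N)
# Stand-in spatial correlation (sinc in place of the Jakes J0 kernel).
Sigma = np.sinc(2.0 * np.abs(pos[:, None] - pos[None, :]))

# Karhunen-Loeve expansion: eigenmodes of the covariance, top K kept.
eigval, eigvec = np.linalg.eigh(Sigma)
order = np.argsort(eigval)[::-1]
lam, U = eigval[order], eigvec[:, order]
power_frac = lam[:K].sum() / lam.sum()    # channel power captured by K modes

# Monte Carlo outage P(max_n |h_n|^2 < x) for the full vs rank-K channels,
# driven by the same independent mode coefficients g.
rng = np.random.default_rng(0)
trials, x_th = 20000, 0.5
g = (rng.normal(size=(trials, N)) + 1j * rng.normal(size=(trials, N))) / np.sqrt(2)
h_full = (g * np.sqrt(np.maximum(lam, 0.0))) @ U.T
h_trunc = (g[:, :K] * np.sqrt(lam[:K])) @ U[:, :K].T
out_full = np.mean((np.abs(h_full) ** 2).max(axis=1) < x_th)
out_trunc = np.mean((np.abs(h_trunc) ** 2).max(axis=1) < x_th)
print(power_frac, out_full, out_trunc)
```

With $W = 1$ only a handful of eigenvalues are significant, so the rank-$K$ outage estimate tracks the full-rank one closely, which is the paper's point about $K \ll N$.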


[34] 2603.20846

A Gaussian Process Framework for Outage Analysis in Continuous-Aperture Fluid Antenna Systems

This paper develops a comprehensive analytical framework for the outage probability of fluid antenna system (FAS)-aided communications by modeling the antenna as a continuous aperture and approximating the Jakes (Bessel) spatial correlation with a Gaussian kernel $\rho_G(\delta) = e^{-\pi^2\delta^2}$. Three complementary analytical strategies are pursued. First, the Karhunen--Loève (KL) expansion under the Gaussian kernel is derived, yielding closed-form outage expressions for the rank-1 and rank-2 truncations and a Gauss--Hermite formula for arbitrary rank~$K$, with effective degrees of freedom $K_{\mathrm{eff}}^G \approx \pi\sqrt{2}\, W$. Second, rigorous two-sided outage bounds are established via Slepian's inequality and the Gaussian comparison theorem: by sandwiching the true correlation between equi-correlated models with $\rho_{\min}$ and $\rho_{\max}$, closed-form upper and lower bounds that avoid the optimistic bias of block-correlation models are obtained. Third, a continuous-aperture extreme value theory is developed using the Adler--Taylor expected Euler characteristic method and Piterbarg's theorem. The resulting outage expression $P_{\mathrm{out}} \approx 1 - e^{-x}(1 + \pi\sqrt{2}\, W\, x)$ depends only on the aperture~$W$ and threshold~$x$, is independent of the port count~$N$, and is identical for the Jakes and Gaussian models since both share the second spectral moment $\lambda_2 = 2\pi^2$. A Pickands-constant refinement for the deep-outage regime and a threshold-dependent effective diversity $N_{\mathrm{eff}} \approx 1 + \pi\sqrt{2}\, W\, x$ are further derived. Numerical results confirm that the Gaussian approximation incurs less than 10\% relative outage error for $W \leq 2$ and that the continuous-aperture formula converges with as few as $N \approx 10W$ ports.


[35] 2603.20862

Deep Learning-Based Multi-Satellite Massive MIMO Transmission: Centralized or Decentralized?

This paper investigates new efficient transmission architectures for multi-satellite massive multiple-input multiple-output (MIMO). We study the weighted sum-rate maximization problem in a multi-satellite system where multiple satellites transmit independent data streams to multi-antenna user terminals, thereby achieving higher throughput. We first adopt a multi-satellite weighted minimum mean square error (WMMSE) formulation under statistical channel state information (CSI), which yields closed-form updates for the precoding and receive vectors. To overcome the high complexity of optimization, we propose a learning-based WMMSE design that integrates tensor equivariance with closed-form recovery, enabling inference with near-optimal performance without iterative updates. Moreover, to reduce inter-satellite signaling overhead incurred by exchanging CSI and precoding vectors in centralized coordination, we develop a decentralized multi-satellite transmission scheme in which each satellite locally infers its precoders rather than receiving from the central satellite. The proposed decentralized scheme leverages periodically available satellite state information, such as orbital positions and satellite attitude, which is inherently accessible in satellite networks, and employs a dual-branch tensor-equivariant network to predict the precoders at each satellite locally. Numerical results demonstrate that the proposed multi-satellite transmission significantly outperforms single-satellite systems in sum rate; the decentralized scheme achieves sum-rate performance close to the centralized schemes while substantially reducing computational complexity and inter-satellite overhead; and the learning-based schemes exhibit strong robustness and scalability across different scenarios.


[36] 2603.20878

Terahertz Beamforming and Group Sparse Channel Estimation Relying on Low-Resolution ADCs in MU Hybrid MIMO systems

A unified beamforming and channel estimation framework relying on Bayesian learning is conceived. Recognizing the limitations imposed by low-resolution analog-to-digital converters (ADCs) and frequency-dependent propagation effects occurring in the Terahertz (THz) band, we formulate a dual-wideband channel model incorporating root raised cosine (RRC) pulse shaping. To address the non-linear distortions introduced by low-resolution ADCs, Bussgang decomposition is employed, leading to a tractable linearized inference process. By leveraging the shared sparsity inherent in a multi-user (MU) scenario of THz systems, we propose a Hierarchical Bayesian Group-sparse Regression (HBG-SR) based channel learning technique that exploits the group-sparse structure of THz band channels. The estimated dominant angle-of-arrival/angle-of-departure (AoA/AoD) indices are then exploited for appropriately configuring the true-time-delay (TTD) elements in the hybrid transceiver, enabling precise beam alignment across subcarriers and effective compensation of the beam-squint effect occurring in wideband THz systems. Extensive simulation results validate the efficiency of the proposed channel estimator and the TTD-aided beamforming architecture, highlighting their robustness and performance gains under practical wideband THz system constraints.


[37] 2603.21070

Koopman Meets Discrete-Time Control Barrier Functions: A Linear Model Predictive Control Framework

This paper proposes a Koopman-based linear model predictive control (LMPC) framework for safety-critical control of nonlinear discrete-time systems. Existing MPC formulations based on discrete-time control barrier functions (DCBFs) enforce safety through barrier constraints but typically result in computationally demanding nonlinear programming. To address this challenge, we construct a DCBF-augmented dynamical system and employ Koopman operator theory to lift the nonlinear dynamics into a higher-dimensional space where both the system dynamics and the barrier function admit a linear predictor representation. This enables the transformation of the nonlinear safety-constrained MPC problem into a quadratic program (QP). To improve feasibility while preserving safety, a relaxation mechanism with slack variables is introduced for the barrier constraints. The resulting approach combines the modeling capability of Koopman operators with the computational efficiency of QP. Numerical simulations on a navigation task for a robot with nonlinear dynamics demonstrate that the proposed framework achieves safe trajectory generation and efficient real-time control.
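The lifting step can be illustrated on a toy system whose Koopman-invariant dictionary is known in closed form: for $x_1^+ = a x_1$, $x_2^+ = b x_2 + c x_1^2$, the lifted state $z = (x_1, x_2, x_1^2)$ evolves exactly linearly, and an EDMD-style least-squares fit recovers the linear predictor. This is a hand-picked textbook example under assumed dynamics, not the paper's construction for general systems:

```python
import numpy as np

a, b, c = 0.9, 0.5, 0.3

def step(x):
    # Nonlinear discrete-time dynamics: x1+ = a*x1, x2+ = b*x2 + c*x1^2
    return np.array([a * x[0], b * x[1] + c * x[0] ** 2])

def lift(x):
    # Koopman-invariant dictionary for this system: (x1, x2, x1^2)
    return np.array([x[0], x[1], x[0] ** 2])

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
Z = np.array([lift(x) for x in X])           # lifted snapshots
Zp = np.array([lift(step(x)) for x in X])    # lifted successors

# EDMD-style least squares: find A with z+ = A z (exact for this dictionary).
A = np.linalg.lstsq(Z, Zp, rcond=None)[0].T

A_true = np.array([[a, 0.0, 0.0],
                   [0.0, b, c],
                   [0.0, 0.0, a * a]])
print(np.allclose(A, A_true))   # True
```

Once dynamics and barrier both live in such a lifted linear model, the safety-constrained MPC step reduces to a quadratic program, which is the efficiency argument of the paper.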


[38] 2603.21073

SqueezeComposer: Temporal Speed-up is A Simple Trick for Long-form Music Composing

Composing coherent long-form music remains a significant challenge due to the complexity of modeling long-range dependencies and the prohibitive memory and computational requirements associated with lengthy audio representations. In this work, we propose a simple yet powerful trick: we assume that AI models can understand and generate time-accelerated (speeded-up) audio at rates such as 2x, 4x, or even 8x. By first generating a high-speed version of the music, we greatly reduce the temporal length and resource requirements, making it feasible to handle long-form music that would otherwise exceed memory or computational limits. The generated audio is then restored to its original speed, recovering the full temporal structure. This temporal speed-up and slow-down strategy naturally follows the principle of hierarchical generation from abstract to detailed content, and can be conveniently applied to existing music generation models to enable long-form music generation. We instantiate this idea in SqueezeComposer, a framework that employs diffusion models for generation in the accelerated domain and refinement in the restored domain. We validate the effectiveness of this approach on two tasks: long-form music generation, which evaluates temporal-wise control (including continuation, completion, and generation from scratch), and whole-song singing accompaniment generation, which evaluates track-wise control. Experimental results demonstrate that our simple temporal speed-up trick enables efficient, scalable, and high-quality long-form music generation. Audio samples are available at this https URL.
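The trick is easy to state concretely: generate only $1/s$ of the samples in the accelerated domain, then resample back to the original speed. A toy sketch, with a sine tone standing in for generated music and linear interpolation standing in for the paper's restored-domain refinement (sample rate and factor are assumptions):

```python
import numpy as np

sr = 16000                 # sample rate (assumed)
dur, speedup = 8.0, 4      # target duration in seconds, acceleration factor

# Step 1: generate in the accelerated domain. A model would emit only
# dur/speedup seconds of audio; a 4x-speed sine tone stands in here.
n_fast = int(sr * dur / speedup)
t_fast = np.arange(n_fast) / sr
fast = np.sin(2 * np.pi * 220.0 * speedup * t_fast)

# Step 2: restore the original speed by resampling back to full length.
n_slow = int(sr * dur)
slow = np.interp(np.linspace(0.0, n_fast - 1, n_slow), np.arange(n_fast), fast)

print(n_fast, n_slow)   # 32000 128000: the generated clip is 4x shorter
```

The memory and compute savings come entirely from step 1: every intermediate representation in the generator is $s$ times shorter.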


[39] 2603.21081

Multidimensional Opinion Dynamics with Confirmation Bias: A Multi-Layer Framework

We study multidimensional opinion dynamics under confirmation bias in social networks. Each agent holds a vector of correlated opinions across multiple topic layers. Peer interaction is modeled through a static, informationally symmetric social channel, while external information enters through a dynamic, informationally asymmetric source channel. Source influence is described by nonnegative state-dependent functions of agent--source opinion mismatch, which captures confirmation bias without hard thresholds. For general Lipschitz source-influence functions, we give sufficient conditions under which the dynamics are contractive and converge to a unique steady state independent of the initial condition. For affine confirmation-bias functions, we show that the steady state can be computed through a finite sign-consistency search and identify a regime in which it admits a closed form. For broader classes of bounded nonlinear source-influence functions, we derive explicit lower and upper bounds on the fixed point. Numerical examples and a study on a real-world adolescent lifestyle network illustrate the role of multidimensional coupling and show that source-design conclusions can change qualitatively when confirmation bias is ignored.


[40] 2603.21087

Exploiting Self-Sustainable Information-Bearing RIS in Underlay CR-NOMA Networks

Information-bearing reconfigurable intelligent surfaces (IB-RIS) provide a promising solution to self-sustainable and green communications by harvesting ambient radio frequency energy while embedding information via passive reflection. This paper investigates a self-sustainable IB-RIS (SIB-RIS)-assisted non-orthogonal multiple access (NOMA) network operating in an underlay cognitive radio (CR) system. Specifically, a multi-antenna primary transmitter (PT) serves a primary user (PU) and concurrently illuminates the secondary nodes, which enables each SIB-RIS to perform simultaneous energy harvesting and backscatter-based information embedding at each RIS. Based on this model, a weighted sum spectral efficiency (WSSE) maximization problem is formulated for the secondary network by jointly optimizing the PT transmit beamforming vector, the SIB-RIS reflection coefficients, and the power-splitting ratios. To tackle the intricately-coupled non-convex problem, an efficient block coordinate descent (BCD) optimization framework is developed, which leverages fractional programming via Lagrangian dual and quadratic transforms together with a difference-of-convex programming approach. Numerical results demonstrate that the proposed SIB-RIS-assisted NOMA CR system yields substantial WSSE gains over both orthogonal multiple access (OMA)-based and active antenna schemes. Moreover, a 2-bit discrete-phase SIB-RIS implementation achieves competitive WSSE performance, confirming the practicality of the low-resolution architecture.


[41] 2603.21089

Approximate Dynamic Programming for Degradation-aware Market Participation of Battery Energy Storage Systems: Bridging Market and Degradation Timescales

We present an approximate dynamic programming framework for designing degradation-aware market participation policies for battery energy storage systems. The approach employs a tailored value function approximation that reduces the state space to state of charge and battery health, while performing dynamic programming along a pseudo-time axis encoded by state of health. This formulation enables an offline/online computation split that separates long-term degradation dynamics (months to years) from short-term market dynamics (seconds to minutes) -- a timescale mismatch that renders conventional predictive control and dynamic programming approaches computationally intractable. The main computational effort occurs offline, where the value function is approximated via coarse-grained backward induction along the health dimension. Online decisions then reduce to a real-time tractable one-step predictive control problem guided by the precomputed value function. This decoupling allows the integration of high-fidelity physics-informed degradation models without sacrificing real-time feasibility. Backtests on historical market data show that the resulting policy outperforms several benchmark strategies with optimized hyperparameters.
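The offline/online split can be sketched with a toy problem: a coarse SoC grid, a few health stages serving as the pseudo-time axis, and illustrative prices and degradation cost (all numbers below are hypothetical, and the simple tabular value function stands in for the paper's approximation):

```python
import numpy as np

# Toy numbers (illustrative only).
soc_grid = np.linspace(0.0, 1.0, 11)
n_health = 5                                       # coarse state-of-health stages
price = np.array([30.0, 50.0, 40.0, 60.0, 45.0])   # expected price per stage
actions = np.array([-0.1, 0.0, 0.1])               # delta-SoC: discharge/idle/charge
deg_cost = 20.0                                    # cost per unit of throughput

# Offline: coarse backward induction along the health dimension.
V = np.zeros((n_health + 1, soc_grid.size))
for h in range(n_health - 1, -1, -1):
    for i, soc in enumerate(soc_grid):
        best = -np.inf
        for a in actions:
            nxt = soc + a
            if nxt < -1e-9 or nxt > 1.0 + 1e-9:
                continue                           # infeasible SoC transition
            j = int(round(nxt * 10))
            reward = -a * price[h] * 100.0 - deg_cost * abs(a)
            best = max(best, reward + V[h + 1, j])
        V[h, i] = best

# Online: a one-step lookahead guided by the precomputed value function.
def act(h, i):
    soc, cands = soc_grid[i], []
    for a in actions:
        nxt = soc + a
        if -1e-9 <= nxt <= 1.0 + 1e-9:
            j = int(round(nxt * 10))
            cands.append((-a * price[h] * 100.0 - deg_cost * abs(a) + V[h + 1, j], a))
    return max(cands)[1]

print(act(3, 10))   # full battery at the high-price stage -> discharge (-0.1)
```

The expensive loop runs entirely offline; the online call is a single enumeration over actions, which is what makes the scheme real-time tractable.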


[42] 2603.21128

Physics-Infused Neural MPC of a DC-DC Boost Converter with Adaptive Transient Recovery and Enhanced Dynamic Stability

DC-DC boost converters require advanced control to ensure efficiency and stability under varying loads. Traditional model predictive control (MPC) and data-driven neural network methods face challenges such as high complexity and limited physical constraint enforcement. This paper proposes a hybrid physics-informed neural network (PINN) combined with finite control set MPC (FCS-MPC) for boost converters. The PINN embeds physical laws into neural training, providing accurate state predictions, while FCS-MPC ensures constraint satisfaction and multi-objective optimization. The method features adaptive transient recovery, explicit duty-ratio control, and enhanced dynamic stability. Experimental results on a commercial boost module demonstrate improved transient response, reduced voltage ripple, and robust operation across conduction modes. The proposed framework offers a computationally efficient, physically consistent solution for real-time control in power electronics.


[43] 2603.21133

High-Endurance UCAV Propulsion System: A 1-D CNN-Based Real-Time Fault Classification for Tactical-Grade IPMSM Drive

High-performance propulsion for mission-critical applications demands unprecedented reliability and real-time fault resilience. Conventional diagnostic methods (signal-based analysis and standard ML models) are essential for stator/rotor fault detection but suffer from high latency and poor generalization across variable speeds. This paper proposes a 1-D Convolutional Neural Network (CNN) framework for real-time fault classification in the HPDM-350 interior permanent magnet synchronous motor (IPMSM). The proposed architecture extracts discriminative features directly from high-frequency current and speed signals, enabling sub-millisecond inference on embedded controllers. Compared to state-of-the-art long short-term memory (LSTM) and classical ML approaches, the 1-D CNN achieves a superior weighted F1-score of 0.9834. Validated through high-fidelity magnetic-domain MATLAB/Simscape models, the method demonstrates robust performance across a ±2700 RPM envelope, providing a lightweight solution for mission-critical electric propulsion systems.


[44] 2603.21219

On the Robustness of AoA as an Authentication Feature Under Spoofing: Fundamental Limits from Misspecified Cramer Rao Theory

The robustness of angle of arrival (AoA) as a physical layer authentication (PLA) feature under spoofing attacks is studied, assuming a digital uniform linear array verifier. The verifier estimates the AoA assuming a legitimate user's single source model, whereas the received signal is generated by a multi antenna adversary at a different angle, leading to a model mismatch. Closed form expressions are derived for the misspecified Cramer Rao bound, the PLA decision threshold, the spoofing detection, false alarm and misdetection probabilities. Simulation results validate the theoretical findings and highlight the impact of the signal to noise ratio, array geometries, spoofing precoding and number of snapshots on authentication robustness.


[45] 2603.21266

Design and Development of Low-Cost Datalogger for Indoor and Outdoor Air Quality Monitoring

The rising demand for low-cost air quality monitors stems from increased public awareness and interest within the research community. These monitors play a pivotal role in empowering citizens and scientists to comprehend spatiotemporal variations in air quality parameters, aiding in the formulation of effective mitigation policies. The primary challenge lies in the diverse array of application scenarios these monitors encounter. The developed data logging device is exceptionally well-suited for air quality monitoring applications, offering exceptional versatility by seamlessly operating on a range of power sources, including solar energy, batteries, and direct electrical supply. The integration of a built-in battery charger enhances its applicability for deployment in regions with solar power or intermittent electricity availability. To ensure strong network connectivity, the advanced datalogger seamlessly integrates with WiFi, Bluetooth, and LoRaWAN networks. A notable feature is its adaptable MCU system, enabling users to swap the MCU based on specific connectivity, power, and computational requirements. Importantly, the system carefully identifies key parameters crucial for both indoor and outdoor air quality assessment, customizing sensor selection accordingly. Furthermore, optimization efforts have prioritized energy efficiency, enabling the system to function with minimal power consumption while maintaining data integrity. Additional I2C and UART ports facilitate the monitoring of supplementary parameters.


[46] 2603.21339

Explore the Capacity of Near Field Channel using Gaussian Beams

Channel capacity lies at the core of wireless communication, yet determining it typically requires detailed channel information between the transmitter and receiver. For near field MIMO systems, obtaining the detailed native channel is often difficult or expensive. This paper develops a scheme to approximate the near field channel in a Gaussian beam domain. Hermite Gaussian (HG) modes are used to approximate the channel between a pair of square antenna arrays in a free space line of sight (LOS) environment. We show that HG modes efficiently represent the dominant singular modes of the native channel, enabling accurate channel estimation and capacity computation in the HG beam space. An iterative algorithm is proposed to approach the maximal channel capacity by gradually expanding the beam space dimension. Simulation results demonstrate that the method converges rapidly and significantly reduces channel estimation overhead.


[47] 2603.21384

Homodyne vs. Heterodyne Architectures in Sub-THz Transceivers: A Phase Noise Perspective

This letter examines the impact of oscillator phase noise on sub-terahertz OFDM transceiver architectures, with a focus on the comparison between homodyne and heterodyne designs. Using a Hexa-X compliant phase noise model, we analytically show that heterodyne architectures reduce the total accumulated phase noise variance by distributing frequency translation across lower-frequency oscillators under realistic phase-noise scaling laws, thereby shifting the dominant impairment from inter-carrier interference to common phase error. OFDM simulations at 70 GHz and 140 GHz demonstrate that while homodyne architectures remain competitive at mmWave frequencies, heterodyne designs provide improved robustness to phase noise at higher sub-THz carriers. These results highlight transceiver architecture as a key design lever for relaxing oscillator and phase-locked loop constraints in future sub-THz wireless systems.


[48] 2603.21408

CGFormer: A Cross-Attention Based Grid-Free Transformer for Radio Map Estimation

Radio map estimation (RME), which predicts wireless signal metrics at unmeasured locations from sparse measurements, has attracted growing attention as a key enabler of intelligent wireless networks. The majority of existing RME techniques employ grid-based strategies to process sparse measurements, where the pursuit of accuracy results in significant computational inefficiency and inflexibility for off-grid prediction. In contrast, grid-free approaches directly exploit coordinate features to capture location-specific spatial dependencies, enabling signal prediction at arbitrary locations without relying on predefined grids. However, current grid-free approaches demand substantial preprocessing overhead for constructing the spatial representation, leading to high complexity and constrained adaptability. To address these limitations, this paper proposes a novel cross-attention grid-free transformer model for RME. We introduce a lightweight spatial embedding module that incorporates environmental knowledge into high-dimensional feature construction. A cross-attention transformer then models the spatial correlation between target and measurement points. The simulation results demonstrate that our proposed method reduces RMSE by up to 6%, outperforming grid-based and grid-free baselines.
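The grid-free idea, queries built from arbitrary target coordinates while keys and values come from the sparse measurements, can be sketched in a few lines of NumPy. The sinusoidal embedding and random projection weights below are illustrative placeholders, not the paper's trained modules:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16                        # embedding dimension
n_meas, n_tgt = 50, 7         # sparse measurements, arbitrary query points

def embed(coords):
    # Toy sinusoidal coordinate embedding (placeholder for a learned
    # lightweight spatial embedding module).
    freqs = 2.0 ** np.arange(d // 4)
    ang = coords[:, :, None] * freqs                 # (n, 2, d/4)
    return np.concatenate([np.sin(ang), np.cos(ang)], axis=-1).reshape(len(coords), -1)

meas_xy = rng.uniform(0.0, 100.0, size=(n_meas, 2))  # measurement locations
meas_rss = rng.normal(-80.0, 10.0, size=(n_meas, 1)) # measured signal strength
tgt_xy = rng.uniform(0.0, 100.0, size=(n_tgt, 2))    # off-grid target points

# Cross-attention: queries from target coordinates, keys from measurement
# coordinates; values here are simply the raw measurements.
Wq = rng.normal(size=(d, d)) / np.sqrt(d)
Wk = rng.normal(size=(d, d)) / np.sqrt(d)
Q = embed(tgt_xy) @ Wq
K = embed(meas_xy) @ Wk
scores = Q @ K.T / np.sqrt(d)
attn = np.exp(scores - scores.max(axis=1, keepdims=True))
attn /= attn.sum(axis=1, keepdims=True)              # softmax over measurements
pred = attn @ meas_rss                               # (n_tgt, 1) predicted RSS
```

Because queries are functions of raw coordinates, any location can be queried without rasterizing the scene onto a grid, which is the structural advantage over grid-based RME.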


[49] 2603.21428

Active-power control strategies in grid-forming power converters to improve transient stability in power systems with 100% converter-based generation

Grid-forming voltage source converters (GFM-VSCs) play a crucial role in the stability of power systems with large amounts of converter-based generation. Transient stability (angle stability under large disturbances) is a critical limiting factor in stressed power systems. Previous studies have proposed control strategies in GFM-VSCs to improve transient stability. These approaches typically rely on suitable current-limiting algorithms together with voltage/reactive-power and active-power supplementary control strategies. This paper investigates and compares the effectiveness of three active-power control strategies in GFM-VSCs to enhance transient stability in power systems with 100% converter-based generation: (i) a wide-area control strategy (TSP-WACS) using the centre of inertia (COI) frequency, (ii) a local transient damping method (TSP-TDM), and (iii) a novel local control strategy (TSP-L) proposed in this work. All strategies were implemented and assessed using short-circuit simulations on the Kundur two-area test system with 100% GFM-VSC generators, demonstrating improvements in critical clearing time (CCT). The TSP-WACS strategy achieves the best performance but requires a communication infrastructure, while the TSP-L strategy offers a simple but robust alternative that relies only on local measurements.


[50] 2603.21429

Unified Sensitivity-Based Heuristic for Optimal Line Switching and Substation Reconfiguration

Optimal transmission switching (OTS) determines which transmission lines to remove from service to minimize dispatch costs. Unlike topology design, it alters the operational status of lines already in operation. Sensitivity-based methods, an advanced class of optimization techniques, select lines whose outage yields a significant cost reduction. However, these methods overlook bus splitting, an effective congestion management strategy that our work incorporates to achieve improved economic gains. In this work, we formulate an optimal transmission reconfiguration (OTR) problem that incorporates both line switching and bus splitting. We develop a novel approach to quantify the sensitivity of the OTR objective to line switching and bus splitting, establish connections between the proposed sensitivity framework and existing heuristic metrics, prove the equivalence between bus splitting and a generalized line switching to enable unified treatment, and provide a simpler derivation of the Bus Split Distribution Factor (BSDF). Simulations on nine IEEE test systems spanning 118 to 13,659 buses demonstrate the high effectiveness of our proposed sensitivity method. They also demonstrate that incorporating bus splitting into transmission reconfiguration achieves greater cost savings than line switching alone. The results confirm the economic advantage of this comprehensive approach to transmission system operation.


[51] 2603.21433

Site-Specific Channel Modeling and Optimization of RIS-Assisted Multiuser MISO Systems

This paper presents a physics-based channel modeling and optimization framework for reconfigurable intelligent surface (RIS)-assisted downlink multi-user multiple-input single-output (MU-MISO) communication systems in site-specific environments. A hybrid ray-tracing (RT) and full-wave electromagnetic analysis approach is developed to construct a deterministic channel model that explicitly captures multipath propagation, RIS scattering behavior, and mutual coupling effects through a non-diagonal load impedance representation. Based on this model, an alternating optimization scheme jointly updates the base-station (BS) beamformer and RIS load impedances to maximize the minimum achievable rate under a total transmit power constraint and practical capacitance limits. The objective of the proposed framework is to provide a reliable initial assessment of the system-level impact of RIS deployment in realistic propagation scenarios. To evaluate this capability, the RIS is operated in a column-paired 1-bit control mode that enables exhaustive evaluation of all realizable configurations in both simulation and measurement. Performance is compared at the distribution level through achievable-rate histograms across all configurations and further examined under small user-location variations. The observed agreement between simulation and measurement demonstrates that the proposed framework reliably captures practical performance trends and provides useful guidance for the design and deployment of RIS-assisted MU-MISO systems in site-specific environments.


[52] 2603.21476

Emission reduction potential of freeway stop-and-go wave smoothing

The real-world potential of stop-and-go wave smoothing at scale remains largely unquantified. Smoothing freeway traffic waves requires creating a gap so the wave can dissipate, but the suggested gap is often too large to be practical. We propose a counterfactual wave-smoothing benchmark that reconstructs a smooth and feasible trajectory from each empirical trajectory by solving a quadratic program with fixed boundary conditions and a maximum allowable gap constraint. We estimate the emission reduction potential from trajectories using the MOVES model. Applying the framework to nine weeks of weekday peak traffic data from the I-24 MOTION testbed, featuring rich day-to-day stop-and-go wave dynamics, we find meaningful reduction potential under a 0.1-mile maximum gap: average CO2 reductions of 7.92% to 12.04% across lanes, with concurrent reductions of 14.30% to 28.91% for CO, 23.15% to 29.42% for HC, and 24.37% to 30.98% for NOx. Our analysis also quantifies the trade-off between the maximum allowable gap opening and emissions benefits.
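The core of such a trajectory-reconstruction benchmark can be illustrated in miniature. The sketch below is not the paper's quadratic program (it omits the maximum-gap constraint and the MOVES emission model, and all names are illustrative); it simply minimizes a discrete proxy for acceleration, the sum of squared second differences, with both endpoints held fixed, via plain gradient descent:

```python
def smooth_trajectory(x, n_iter=20000, step=0.05):
    """Minimize the sum of squared second differences (a discrete proxy
    for acceleration) with fixed boundary positions, by gradient descent."""
    y = list(map(float, x))
    n = len(y)
    for _ in range(n_iter):
        g = [0.0] * n
        for i in range(1, n - 1):
            a = y[i - 1] - 2.0 * y[i] + y[i + 1]   # second difference
            g[i - 1] += 2.0 * a
            g[i] -= 4.0 * a
            g[i + 1] += 2.0 * a
        for i in range(1, n - 1):                   # endpoints stay fixed
            y[i] -= step * g[i]
    return y
```

With the endpoints pinned, the minimizer is the straight-line interpolant (constant speed, zero acceleration); for a jagged position profile from 0 to 8 over 9 samples, the smoothed trajectory converges to y_i = i.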


[53] 2603.21498

Rydberg Atomic Receivers for Net-Zero 6G Wireless Communication and Sensing: Progress, Experiments, and Sustainable Prospects

Against the backdrop of the global drive to advance the green transformation of the information and communications technology (ICT) industry and leverage technological innovation to facilitate the achievement of Net-Zero carbon goals, research into Rydberg atomic receivers (RAREs) is gaining significant interest. RAREs leverage the electron transition phenomenon for signal reception, offering significant advantages over conventional radio frequency receivers in terms of miniaturized antenna design, high sensitivity, robust interference resistance, and compact form factors, which positions them as a competitive alternative for meeting zero-carbon communication demands. This article systematically elaborates on the basic principle, state-of-the-art progress, and novel experiments of RAREs in quantum wireless communication and sensing. In this first-of-its-kind work, we experimentally verify the RARE-based orthogonal frequency division multiplexing transmission and reveal the potential of deep learning design in optimizing quantum wireless systems. Finally, we delve into the prospect of integrating RARE with existing cutting-edge application scenarios, while mapping out critical pathways for developing Rydberg-based wireless systems.


[54] 2603.21510

Unregistered Spectral Image Fusion: Unmixing, Adversarial Learning, and Recoverability

This paper addresses the fusion of a pair of spatially unregistered hyperspectral image (HSI) and multispectral image (MSI) covering roughly overlapping regions. HSIs offer high spectral but low spatial resolution, while MSIs provide the opposite. The goal is to integrate their complementary information to enhance both HSI spatial resolution and MSI spectral resolution. While hyperspectral-multispectral fusion (HMF) has been widely studied, the unregistered setting remains challenging. Many existing methods focus solely on MSI super-resolution, leaving HSI unchanged. Supervised deep learning approaches were proposed for HSI super-resolution, but rely on accurate training data, which is often unavailable. Moreover, theoretical analyses largely address the co-registered case, leaving unregistered HMF poorly understood. In this work, an unsupervised framework is proposed to simultaneously super-resolve both MSI and HSI. The method integrates coupled spectral unmixing for MSI super-resolution with latent-space adversarial learning for HSI super-resolution. Theoretical guarantees on the recoverability of the super-resolution MSI and HSI are established under reasonable generative models -- providing, to our best knowledge, the first such insights for unregistered HMF. The approach is validated on semi-real and real HSI-MSI pairs across diverse conditions.


[55] 2603.21514

Evaluating Power Flow Manifold from Local Data around a Single Operating Point via Geodesics

The widespread adoption of renewable energy poses a challenge in maintaining a feasible operating point in highly variable scenarios. This paper demonstrates that, within a feasible region of a power system that meets practical stability requirements, the power flow equations define a smooth bijection between nodal voltage phasors (angle and magnitude) and nodal active/reactive power injections. Based on this theoretical foundation, this paper proposes a data-based power flow evaluation method that can infer the associated power flow manifold from a limited number of data points around a single operating point. Using techniques from differential geometry and analytic functions, we represent geodesic curves in the associated power flow manifold as analytic functions at the initial point. Then, a special algebraic structure of the power flow problem is revealed and applied to reduce the computation of all higher-order partial derivatives to that of the first-order ones. Integrating these techniques yields the proposed data-based evaluation method, suggesting that a small number of local measurements around a single operating point is sufficient to infer the entire associated power flow manifold. Numerical cases with arbitrary directional variations are tested, certifying the efficacy of the proposed method.


[56] 2603.21539

Stochastic Trajectory Influence Functions for LQR: Joint Sensitivity Through Dynamics and Noise Covariance

Model-based controllers learned from data have the biases and noise of their training trajectories, making it important to know which trajectories help or hurt closed-loop performance. Influence functions, widely used in machine learning for data attribution, approximate this effect through first-order parameter-shift surrogates, avoiding costly retraining. Applying them to stochastic LQR, however, is nontrivial because the cost depends on the learned dynamics through the Riccati equation, and the process-noise covariance is estimated from the same residuals. We develop a three-level influence hierarchy that accounts for both channels.
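The first-order surrogate at the heart of influence functions is easy to state in a toy setting far simpler than the paper's LQR/Riccati hierarchy. In this hypothetical one-parameter least-squares example, the parameter shift from deleting sample j is approximated by the inverse Hessian times that sample's loss gradient, avoiding a refit:

```python
def fit(xs, ys):
    """One-parameter least squares through the origin:
    theta = sum(x*y) / sum(x*x)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def influence(xs, ys, j):
    """First-order estimate of theta_{-j} - theta_hat for deleting
    sample j: inverse Hessian times the gradient of sample j's loss."""
    theta = fit(xs, ys)
    grad_j = 2.0 * xs[j] * (theta * xs[j] - ys[j])  # grad of (theta*x - y)^2
    hess = 2.0 * sum(x * x for x in xs)             # full-data Hessian
    return grad_j / hess
```

On twenty points lying on y = 2x with one perturbed response, the influence estimate matches the exact leave-one-out shift to within a few percent, which is the retraining-free approximation the abstract builds on.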


[57] 2603.21543

IF-CPS: Influence Functions for Cyber-Physical Systems -- A Unified Framework for Diagnosis, Curation, and Safety Attribution

Neural network controllers trained via behavior cloning are increasingly deployed in cyber-physical systems (CPS), yet practitioners lack tools to trace controller failures back to training data. Existing data attribution methods assume i.i.d. data and standard loss targets, ignoring CPS-specific properties: closed-loop dynamics, safety constraints, and temporal trajectory structure. We propose IF-CPS, a modular influence function framework with three CPS-adapted variants: safety influence (attributing constraint violations), trajectory influence (temporal discounting over trajectories), and propagated influence (tracing effects through plant dynamics). We evaluate IF-CPS on six benchmarks across diagnosis, curation, and safety attribution tasks. IF-CPS improves over standard influence functions in the majority of settings, achieving AUROC 1.00 in Pendulum (5-10% poisoning), 0.92 vs. 0.50 in HVAC (10%), and the strongest constraint-boundary correlation (Spearman ρ = 0.55 in Pendulum).


[58] 2603.21561

Digital Self-Interference Cancellation in Full-Duplex Radios: A Fundamental Limit Perspective

Digital self-interference cancellation (D-SIC) plays a crucial role in in-band full-duplex radios. Unfortunately, its fundamental limit remains unclear. In this paper, we aim to address this problem by exploring the performance limit of the parallel Hammerstein (PH) canceller for D-SIC, which is the most commonly used in practice. First, a comprehensive analysis of the power of the residual self-interference (RSI) after the PH canceller with the least squares (LS) estimator is provided, which takes into account the truncation error, reconstruction error and transmitter noise. Specifically, the analysis is greatly simplified by equivalently expanding the PH canceller via generalized Laguerre polynomials (GLP), which enjoy the desirable property of mutual orthogonality among the basis functions. As a by-product of this orthogonal expansion, we establish that the LS estimator for the weights of the GLP canceller is asymptotically unbiased if the pilot sequence is Gaussian distributed. Second, in order to minimize the reconstruction error of the PH canceller, we propose a succinct criterion for optimizing the pilot sequence, which essentially seeks a small eigenvalue spread and a large minimum eigenvalue of the Gram matrix corresponding to the pilot sequence. Specifically, the criterion is to minimize the product of the Shannon rank (an effective rank of a positive semidefinite matrix) and the minimum eigenvalue of the Gram matrix. Simulation results demonstrate that with the optimized pilot sequence of a single OFDM symbol, over 10 dB gain can be achieved compared to the conventional pilot sequence (HE-LTF) for the PH canceller, and the corresponding RSI can be as low as -87.6 dBm.


[59] 2603.21592

Battery health reporting fails independent validation across manufacturers

Battery state-of-health (SOH) reported by on-board battery management systems (BMS) is the primary metric available to electric vehicle (EV) owners and regulators, yet no study has validated its reliability across manufacturers against independent measurements. Here we show, through an epidemiological study of 1,114 EVs spanning five manufacturers and 375 days, that battery health reporting is fundamentally unreliable: real capacity differences of 12-25% exist within every model, but BMS SOH fails to track them, with correlations ranging from ρ = 0.10 (non-significant) to ρ = 0.62 only under restrictive filtering, while 384 vehicles do not expose SOH at all. A manufacturer-independent electrochemical marker achieves 74-89% degradation classification accuracy across all platforms without requiring BMS data, and a controlled laboratory validation on cells identical to those in the fleet confirms that partial-voltage-window charge measurements track reference capacity with ρ > 0.80 across all 60 voltage windows (p < 0.001). These findings reveal a structural information asymmetry with direct implications for the EU Battery Regulation's 2027 SOH transparency mandate, California's Advanced Clean Cars (ACC) II durability requirements, warranty enforcement, used-vehicle valuation, right-to-repair legislation, and second-life battery markets.


[60] 2603.21608

DiT-Flow: Speech Enhancement Robust to Multiple Distortions based on Flow Matching in Latent Space and Diffusion Transformers

Recent advances in generative models, such as diffusion and flow matching, have shown strong performance in audio tasks. However, speech enhancement (SE) models are typically trained on limited datasets and evaluated under narrow conditions, limiting real-world applicability. To address this, we propose DiT-Flow, a flow matching-based SE framework built on the latent Diffusion Transformer (DiT) backbone and trained for robustness across diverse distortions, including noise, reverberation, and compression. DiT-Flow operates on compact latent features derived from a variational auto-encoder (VAE). We validated our approach on StillSonicSet, a synthetic yet acoustically realistic dataset composed of LibriSpeech, FSD50K, FMA, and 90 Matterport3D scenes. Experiments show that DiT-Flow consistently outperforms state-of-the-art generative SE models, demonstrating the effectiveness of flow matching in multi-condition speech enhancement. Despite ongoing efforts to expand synthetic data realism, a persistent bottleneck in SE is the inevitable mismatch between training and deployment conditions. By integrating LoRA with the MoE framework, we achieve parameter-efficient, high-performance training that makes DiT-Flow robust to multiple distortions, using only 4.9% of the total parameters while achieving better performance on five unseen distortions.


[61] 2603.21632

Extreme-MIMO Field Trials in 7 GHz Band: Unlocking the Potential of New Spectrum for 6G

The frequency range around 7 GHz has emerged as a promising upper mid-band spectrum for 6th generation (6G), offering a practical balance between coverage and capacity. To fully exploit this band, however, future systems require substantially stronger beamforming and spatial multiplexing capability than today's 5G 64-port commercial deployments. This article investigates extreme multiple-input multiple-output (X-MIMO) with 256 digital ports as a practical 6G architecture for 7 GHz operation. First, through system-level simulations, we examine the throughput benefits and design trade-offs of increasing the number of base station (BS) and user equipment (UE) digital antenna ports, including comparisons between 128-port and 256-port configurations. We then present a 256-port 7 GHz BS and UE prototype and report field-trial results obtained in urban outdoor environments. The measurements demonstrate the feasibility of 8-layer downlink single-user MIMO over a 100 MHz bandwidth, achieving more than 3 Gbps for a single user under urban outdoor propagation conditions. Channel analysis based on measured data further suggests how the large digital aperture of X-MIMO supports high-order spatial multiplexing even with limited dominant angular clusters. Finally, we identify key challenges and outline research directions toward practical deployment of 7 GHz X-MIMO systems for 6G.


[62] 2603.21651

Full Timescale Hierarchical MPC-MTIP Framework for Hybrid Energy Storage Management in Low-Carbon Industrial Microgrid

Uncertainties in balancing generation and load in low-carbon industrial microgrids (IMGs) make hybrid energy storage systems (HESS) crucial for their stable and economic operation. Existing model predictive control (MPC) techniques typically enforce periodic state of charge (SOC) constraints to maintain long-term stability. However, these hard constraints compromise dispatch flexibility near the end of the prediction horizon, preventing sufficient energy release during critical peaks and leading to optimization infeasibility. This paper eliminates the periodic SOC constraints of individual storage units and proposes a novel full-timescale hierarchical MPC scheduling framework. Specifically, comprehensive physical and cost models are established for the HESS composed of flywheel, battery, compressed-air, and hydrogen-methanol energy storage. The control problem is decoupled into a hierarchical MPC architecture. Furthermore, a novel adaptive feedback mechanism based on micro trajectory inverse projection (MTIP) is embedded into the scheduling process, accurately mapping the high-frequency dynamic buffering capabilities of lower-tier storage units into the upper decision space to generate dynamic boundaries. Experiments using 14 consecutive months of second-level data from a real-world IMG validate the effectiveness of the proposed method, demonstrating its significant superiority over existing approaches. By effectively preventing limit violations and deadlocks in lower-tier storage units under extreme fluctuations, it achieves a 97.4% net load smoothing rate and a 62.2% comprehensive cycle efficiency.


[63] 2603.21709

Near-Field Wideband Channel Estimation for Extremely Large-Scale RIS-Aided Communication Systems

This paper studies wideband channel estimation for OFDM systems assisted by extremely large RIS (XL-RIS). Due to the large aperture of XL-RISs, the user equipment may operate in the near-field region, while the base station-XL-RIS link remains in the far field, leading to a cascaded channel with hybrid near-field and far-field characteristics. Moreover, wideband effects further complicate channel estimation in mmWave/THz systems. To address these challenges, we propose a frequency-independent orthogonal dictionary by augmenting the discrete Fourier transform (DFT) matrix with additional parameters, which enables an efficient representation of the wideband cascaded channel using a two-dimensional block-sparse structure. Based on this property, the considered channel estimation problem is effectively solved within a tailored compressed sensing framework. Simulation results demonstrate that the proposed method significantly outperforms conventional polar-domain channel estimation approaches in terms of estimation accuracy.


[64] 2603.21713

Simple Trajectory Smoothing for UAV Reference Path Planning Based on Decoupling, Spatial Modeling and Linear Programming

A method for trajectory smoothing for UAV reference path planning is presented. It is derived from the dynamics of a Dubins airplane model and involves a decoupling step, spatial modeling, and linear programming. The decoupling step enables algebraic control laws for flight-path angle and speed control. An optimization step is applied only for roll-angle control, involving the solution of a small linear program. Two variations are discussed; they differ in reference centerline tracking and the introduction of a path-shaping constraint. The benefit of natural dimensionality reduction for spatial modeling is discussed, and the simplicity of the overall method is highlighted. An extension to acrobatic flight is outlined, which comes at the cost of a model approximation but with the benefit of retaining the general model structure. An extension of the method to tractor path planning along 3D terrain is also discussed. The method is validated in simulations.


[65] 2603.21718

ANCHOR: Adaptive Network based on Cascaded Harmonic Offset Routing

Time series analysis plays a foundational role in a wide range of real-world applications, yet accurately modeling complex non-stationary signals remains a shared challenge across downstream tasks. Existing methods attempt to extract features directly from one-dimensional sequences, making it difficult to handle the widely observed dynamic phase drift and discrete quantization error. To address this issue, we decouple temporal evolution into macroscopic physical periods and microscopic phase perturbations, and inject frequency-domain priors derived from the Real Fast Fourier Transform (RFFT) into the underlying spatial sampling process. Based on this idea, we propose a Frequency-Guided Deformable Module (FGDM) to adaptively compensate for microscopic phase deviations. Built upon FGDM, we further develop an Adaptive Network based on Cascaded Harmonic Offset Routing (ANCHOR) as a general-purpose backbone for time-series modeling. Through orthogonal channel partitioning and a progressive residual architecture, ANCHOR efficiently decouples multi-scale harmonic features while substantially suppressing the computational redundancy of multi-branch networks. Extensive experiments demonstrate that ANCHOR achieves the best performance in most short-term forecasting sub-tasks and exhibits strong competitiveness on several specific sub-tasks in anomaly detection and time-series classification, validating its effectiveness as a universal time-series foundation backbone.
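As a toy illustration of the kind of frequency-domain prior the abstract describes injecting (not ANCHOR itself, and using a naive DFT in place of the RFFT), a series' macroscopic period can be read off the strongest nonzero bin of its real spectrum:

```python
import math

def dominant_period(x):
    """Estimate the macroscopic period (in samples) of a real series
    from the strongest nonzero bin of a naive real DFT."""
    n = len(x)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2 + 1):
        re = sum(x[t] * math.cos(2.0 * math.pi * k * t / n) for t in range(n))
        im = -sum(x[t] * math.sin(2.0 * math.pi * k * t / n) for t in range(n))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return n / best_k
```

For a pure sinusoid of period 16 observed over 64 samples, the strongest bin is k = 4 and the estimate is exactly 16 samples; in practice one would use an FFT (e.g. numpy.fft.rfft) rather than this O(n^2) loop.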


[66] 2603.21726

LSAI: A Large Small AI Model Codesign Framework for Agentic Robot Scenarios

The development of Artificial Intelligence (AI) has made agentic robots an appealing paradigm for various applications, such as search and rescue in complex environments. In this context, next-generation wireless communication technology facilitates robot cooperation for efficient environment sensing and exploration. However, traditional AI solutions cannot always provide reasonable resource-utilization decisions, which makes it challenging to achieve both accurate and low-latency search and rescue. To address this issue, we propose LSAI, a large-small AI model codesign framework that achieves highly accurate and real-time robot cooperation through deep interaction between a large AI model and a small AI model. We first propose an attention-based model aggregation method for large AI model construction, which assists agentic robots in accurately sensing physical environments. Next, we design an adaptive model splitting and update algorithm that enables the robots to perform accurate path planning for high-efficiency environment sensing with low energy consumption. Finally, we demonstrate the effectiveness of the proposed LSAI framework. The simulation results indicate that our solution improves sensing accuracy by up to 20.4% while reducing sensing-cooperation latency by an average of 17.9% compared to traditional AI solutions.


[67] 2603.21760

Cycle Inverse-Consistent TransMorph: A Balanced Deep Learning Framework for Brain MRI Registration

Deformable image registration plays a fundamental role in medical image analysis by enabling spatial alignment of anatomical structures across subjects. While recent deep learning-based approaches have significantly improved computational efficiency, many existing methods remain limited in capturing long-range anatomical correspondence and maintaining deformation consistency. In this work, we present CICTM, a cycle inverse-consistent transformer-based framework for deformable brain MRI registration. The model integrates a Swin-UNet architecture with bidirectional consistency constraints, enabling the joint estimation of forward and backward deformation fields. This design allows the framework to capture both local anatomical details and global spatial relationships while improving deformation stability. We conduct a comprehensive evaluation of the proposed framework on a large multi-center dataset consisting of 2851 T1-weighted brain MRI scans aggregated from 13 public datasets. Experimental results demonstrate that CICTM achieves consistently strong and balanced performance across multiple quantitative evaluation metrics while maintaining stable and physically plausible deformation fields. Detailed quantitative comparisons with baseline methods, including ANTs, ICNet, and VoxelMorph, are provided in the appendix. These properties make the proposed framework suitable for large-scale neuroimaging datasets where both accuracy and deformation stability are critical.


[68] 2603.21778

Cluster-Specific Predictive Modeling: A Scalable Solution for Resource-Constrained Wi-Fi Controllers

This manuscript presents a comprehensive analysis of predictive modeling optimization in managed Wi-Fi networks through the integration of clustering algorithms and model evaluation techniques. The study addresses the challenges of deploying forecasting algorithms in large-scale environments managed by a central controller constrained by memory and computational resources. Feature-based clustering, supported by Principal Component Analysis (PCA) and advanced feature engineering, is employed to group time series data based on shared characteristics, enabling the development of cluster-specific predictive models. Comparative evaluations between global models (GMs) and cluster-specific models demonstrate that cluster-specific models consistently achieve superior accuracy in terms of Mean Absolute Error (MAE) values in high-activity clusters. The trade-offs between model complexity (and accuracy) and resource utilization are analyzed, highlighting the scalability of tailored modeling approaches. The findings advocate for adaptive network management strategies that optimize resource allocation through selective model deployment, enhance predictive accuracy, and ensure scalable operations in large-scale, centrally managed Wi-Fi environments.
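The core argument, that per-cluster models beat one global model when groups of series behave differently, can be sketched with a toy 1-D k-means and centroid predictors (illustrative only; the study uses PCA-based feature clustering and full forecasting models):

```python
def kmeans_1d(xs, k=2, iters=25):
    """Tiny 1-D k-means; returns centroids and per-point assignments."""
    cents = [min(xs), max(xs)]              # spread-out initialization
    assign = [0] * len(xs)
    for _ in range(iters):
        assign = [min(range(k), key=lambda c: abs(x - cents[c])) for x in xs]
        for c in range(k):
            members = [x for x, a in zip(xs, assign) if a == c]
            if members:                      # keep centroid if cluster empties
                cents[c] = sum(members) / len(members)
    return cents, assign

def mae(actual, preds):
    """Mean Absolute Error, the accuracy metric used in the study."""
    return sum(abs(a - p) for a, p in zip(actual, preds)) / len(actual)
```

On two well-separated groups, predicting each point with its cluster centroid gives a far lower MAE than predicting with the single global mean, mirroring the GM-versus-cluster-specific comparison above.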


[69] 2603.21810

Partial Attention in Deep Reinforcement Learning for Safe Multi-Agent Control

Attention mechanisms excel at learning sequential patterns by discriminating data based on relevance and importance. This provides state-of-the-art performance in advanced generative artificial intelligence models. This paper applies this concept of an attention mechanism for multi-agent safe control. We specifically consider the design of a neural network to control autonomous vehicles in a highway merging scenario. The environment is modeled as a Decentralized Partially Observable Markov Decision Process (Dec-POMDP). Within a QMIX framework, we include partial attention for each autonomous vehicle, thus allowing each ego vehicle to focus on the most relevant neighboring vehicles. Moreover, we propose a comprehensive reward signal that considers the global objectives of the environment (e.g., safety and vehicle flow) and the individual interests of each agent. Simulations are conducted in the Simulation of Urban Mobility (SUMO). The results show better performance compared to other driving algorithms in terms of safety, driving speed, and reward.
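The per-vehicle attention step the abstract describes reduces, at its core, to scaled dot-product attention over neighbor features. A minimal sketch (illustrative names; in the paper these vectors come from learned networks inside a QMIX pipeline):

```python
import math

def attention(query, keys, values):
    """Scaled dot-product attention: score the ego query against each
    neighbour key, softmax the scores, and mix the neighbour values."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    m = max(scores)                          # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    out = [sum(w * v[j] for w, v in zip(weights, values))
           for j in range(len(values[0]))]
    return out, weights
```

A neighbor whose key aligns strongly with the ego query receives nearly all the weight, which is exactly the "focus on the most relevant neighboring vehicles" behavior; with uninformative scores the weights fall back to uniform.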


[70] 2603.21875

Disentangling Speaker Traits for Deepfake Source Verification via Chebyshev Polynomial and Riemannian Metric Learning

Speech deepfake source verification systems aim to determine whether two synthetic speech utterances originate from the same source generator, often assuming that the resulting source embeddings are independent of speaker traits. However, this assumption remains unverified. In this paper, we first investigate the impact of speaker factors on source verification. We then propose a speaker-disentangled metric learning (SDML) framework incorporating two novel loss functions. The first leverages Chebyshev polynomials to mitigate gradient instability during disentanglement optimization. The second projects source and speaker embeddings into hyperbolic space, leveraging Riemannian metric distances to reduce speaker information and learn more discriminative source features. Experimental results on the MLAAD benchmark, evaluated under four newly proposed protocols designed for source-speaker disentanglement scenarios, demonstrate the effectiveness of the SDML framework. The code, evaluation protocols and demo website are available at this https URL.
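The Riemannian metric distances mentioned here have a closed form in the Poincaré ball, the standard model for hyperbolic embeddings; whether the paper uses exactly this model is an assumption, but the distance is:

```python
import math

def poincare_distance(u, v):
    """Geodesic distance in the Poincare ball model of hyperbolic space;
    both points must lie strictly inside the unit ball."""
    dd = sum((a - b) ** 2 for a, b in zip(u, v))
    nu = sum(a * a for a in u)
    nv = sum(b * b for b in v)
    return math.acosh(1.0 + 2.0 * dd / ((1.0 - nu) * (1.0 - nv)))
```

Distances blow up near the ball's boundary, which is what makes the geometry attractive for separating embeddings: from the origin the distance to a point at radius r is 2·artanh(r), growing without bound as r approaches 1.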


[71] 2603.21888

Adaptive Federated Fine-Tuning of Self-Supervised Speech Representations

Integrating Federated Learning (FL) with self-supervised learning (SSL) enables privacy-preserving fine-tuning for speech tasks. However, federated environments exhibit significant heterogeneity: clients differ in computational capacity, causing straggler effects under unified fine-tuning, while diverse downstream tasks require different representation depths, making full-model updates inefficient. To address these challenges, we propose an adaptive federated fine-tuning framework with early exits. Lightweight prediction heads are inserted at intermediate layers of the SSL backbone, allowing clients to terminate computation based on local constraints and task requirements. We further introduce a layer-wise, depth-aware partial aggregation strategy to better utilize representations from different network depths. Experiments show that the framework reduces edge overhead, supports heterogeneous hardware, and maintains competitive performance in resource-constrained federated environments.


[72] 2603.21889

Secure Rate-Splitting and RIS Beamforming with Untrusted Energy Harvesting Receivers

We consider a reconfigurable intelligent surface (RIS)-assisted heterogeneous network comprising legitimate information-harvesting receivers (IHRs) and untrusted energy-harvesting receivers (UEHRs). A multi-antenna base station (BS) transmits confidential information to IHRs while ensuring sufficient energy transfer to UEHRs that may attempt eavesdropping. To enhance physical-layer security, we propose a secure rate-splitting multiple access (RSMA) scheme aided by a UAV-mounted RIS. The objective is to maximize fairness-based secrecy energy efficiency (SEE). Owing to the non-convexity of the formulated problem, we develop an alternating optimization framework that jointly designs the common message allocation, active precoders, and RIS phase shifts under transmit power and energy harvesting constraints, leveraging sequential convex approximation (SCA). Simulation results demonstrate the scalability of the proposed algorithm and its superior SEE performance compared to space-division multiple access (SDMA) and non-orthogonal multiple access (NOMA) benchmarks.


[73] 2603.21891

HMS-VesselNet: Hierarchical Multi-Scale Attention Network with Topology-Preserving Loss for Retinal Vessel Segmentation

Retinal vessel segmentation methods based on standard overlap losses tend to miss thin peripheral vessels because these structures occupy very few pixels and have low contrast against the background. We propose HMS-VesselNet, a hierarchical multi-scale network that processes fundus images across four parallel branches at different resolutions and combines their outputs using learned fusion weights. The training loss combines Dice, binary cross-entropy, and centerline Dice to jointly optimize area overlap and vessel continuity. Hard example mining is applied from epoch 20 onward to concentrate gradient updates on the most difficult training images. Tested on 68 images from DRIVE, STARE, and CHASE_DB1 using 5-fold cross-validation, the model achieves a mean Dice of 88.72 +/- 0.67%, Sensitivity of 90.78 +/- 1.42%, and AUC of 98.25 +/- 0.21%. In leave-one-dataset-out experiments, AUC remains above 95% on each unseen dataset. The largest improvement is in the recall of thin peripheral vessels, which are the structures most frequently missed by standard methods and most critical for early detection of diabetic retinopathy.
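The Dice and binary cross-entropy parts of the training loss above can be sketched in a few lines. The equal weights are an illustrative assumption, and the centerline-Dice term (which requires a skeletonization step) is omitted:

```python
import math

def dice_loss(pred, target, eps=1e-7):
    """Soft Dice loss over flattened probability maps."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 1.0 - (2.0 * inter + eps) / (total + eps)

def bce_loss(pred, target, eps=1e-7):
    """Mean binary cross-entropy over flattened probability maps."""
    n = len(pred)
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for p, t in zip(pred, target)) / n

def combined_loss(pred, target, w_dice=0.5, w_bce=0.5):
    # weights are illustrative; the paper also adds a centerline-Dice term
    return w_dice * dice_loss(pred, target) + w_bce * bce_loss(pred, target)
```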


[74] 2603.21920

Performance Analysis of Tri-Sector Reflector Antennas for HAPS-Based Cellular Networks

The increasing demand for ubiquitous, high-capacity mobile connectivity has driven cellular systems to explore beyond-terrestrial deployments. In this paper, we present a system-level performance evaluation of a fifth-generation (5G) non-terrestrial network (NTN) enabled by high-altitude platform station (HAPS)-based base stations (BSs) equipped with tri-sectoral reflector antennas against fourth-generation (4G) terrestrial network (TN) and 5G TN deployments in a multicell dense urban environment. Using the simulation results comprising the average effective downlink signal-to-interference-plus-noise ratio (SINR) and the average user throughput, along with the subsequent interference analysis, we demonstrate that the reflector-based HAPS architecture is primarily constrained by inter-cell interference, while the combination of reflector configuration and deployment altitude represents a key design parameter.


[75] 2603.21923

APEG: Adaptive Physical Layer Authentication with Channel Extrapolation and Generative AI

With the rapid advancement of 6G, identity authentication has become increasingly critical for ensuring wireless security. The lightweight and keyless Physical Layer Authentication (PLA) is regarded as an instrumental security measure in addition to traditional cryptography-based authentication methods. However, existing PLA schemes often struggle to adapt to dynamic radio environments. To overcome this limitation, we propose the Adaptive PLA with Channel Extrapolation and Generative AI (APEG), designed to enhance authentication robustness in dynamic scenarios. Leveraging Generative AI (GAI), the framework adaptively generates Channel State Information (CSI) fingerprints, thereby improving the precision of identity verification. To refine CSI fingerprint generation, we propose the Collaborator-Cleaned Masked Denoising Diffusion Probabilistic Model (CCMDM), which incorporates collaborator-provided fingerprints as conditional inputs for channel extrapolation. Additionally, we develop the Cross-Attention Denoising Diffusion Probabilistic Model (CADM), employing a cross-attention mechanism to align multi-scale channel fingerprint features, further enhancing generation accuracy. Simulation results demonstrate the superiority of the APEG framework over existing time-sequence-based PLA schemes in authentication performance. Notably, CCMDM exhibits a significant advantage in convergence speed, while CADM, compared with model-free, time-series, and VAE-based methods, achieves superior accuracy in CSI fingerprint generation. The code is available at this https URL.


[76] 2603.21958

Interaction-Aware Predictive Environmental Control Barrier Function for Emergency Lane Change

Safety-critical motion planning in mixed traffic remains challenging for autonomous vehicles, especially when it involves interactions between the ego vehicle (EV) and surrounding vehicles (SVs). In dense traffic, the feasibility of a lane change depends strongly on how SVs respond to the EV motion. This paper presents an interaction-aware safety framework that incorporates such interactions into a control barrier function (CBF)-based safety assessment. The proposed method predicts near-future vehicle positions over a finite horizon, thereby capturing reactive SV behavior and embedding it into the CBF-based safety constraint. To address uncertainty in the SV response model, a robust extension is developed by treating the model mismatch as a bounded disturbance and incorporating an online uncertainty estimate into the barrier condition. Compared with classical environmental CBF methods that neglect SV reactions, the proposed approach provides a less conservative and more informative safety representation for interactive traffic scenarios, while improving robustness to uncertainty in the modeled SV behavior.
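The prediction-embedded CBF constraint has a simple discrete-time form, h(x_{k+1}) - h(x_k) >= -alpha * h(x_k). The sketch below is a hedged 1-D stand-in: it uses a constant-velocity SV prediction in place of the paper's reactive interaction model, and `safe_dist`, `alpha`, and `dt` are made-up values:

```python
def barrier(ego_pos, sv_pos, safe_dist=5.0):
    """h > 0 iff the ego-SV gap exceeds the safety distance."""
    return abs(ego_pos - sv_pos) - safe_dist

def cbf_safe(ego_pos, ego_vel, sv_pos, sv_vel, accel, dt=0.1, alpha=0.5):
    """Discrete-time CBF condition h(x+) - h(x) >= -alpha * h(x),
    evaluated on predicted near-future positions."""
    h_now = barrier(ego_pos, sv_pos)
    ego_next = ego_pos + ego_vel * dt + 0.5 * accel * dt * dt
    sv_next = sv_pos + sv_vel * dt  # stand-in for the reactive SV prediction
    h_next = barrier(ego_next, sv_next)
    return h_next - h_now >= -alpha * h_now

def filter_accels(ego_pos, ego_vel, sv_pos, sv_vel, candidates):
    """Keep only candidate accelerations satisfying the CBF constraint."""
    return [a for a in candidates if cbf_safe(ego_pos, ego_vel, sv_pos, sv_vel, a)]
```

With a large gap any input passes, while a closing gap admits only hard braking, which is the filtering behavior a CBF-based safety layer provides.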


[77] 2603.22095

Input Convex Encoder-Only Transformer for Fast and Gradient-Stable MPC in Building Demand Response

Learning-based Model Predictive Control (MPC) has emerged as a powerful strategy for building demand response. However, its practical deployment is often hindered by the non-convex optimization problems induced by standard neural network models. These problems lead to long solver times and suboptimal solutions, making real-time control over long horizons challenging. While Input Convex Neural Networks (ICNNs), such as Input-Convex Long Short-Term Memory networks (IC-LSTMs), have been developed to address the convexity issue, their recurrent architectures suffer from high computational cost and gradient instability as the prediction horizon increases. To overcome these limitations, this paper introduces the Input-Convex Encoder-only Transformer (IC-EoT), a novel architecture that synergizes the parallel processing capabilities of the Transformer with the guaranteed tractability of input convexity. The IC-EoT was developed and evaluated in a high-fidelity co-simulation framework using the Energym Python library to interface with the EnergyPlus building simulator, and compared against its recurrent convex counterpart (IC-LSTM) and standard non-convex models. The results demonstrate that the IC-EoT is structurally immune to the gradient instability that affects recurrent ICNNs while maintaining comparable predictive accuracy. More critically, it substantially reduces MPC solver times; this speed advantage grows with the prediction horizon, with the IC-EoT proving 2.7 to 8.3 times faster than the IC-LSTM across horizons spanning from one to eight hours. This leap in computational efficiency makes the IC-EoT a robust and practical solution, enabling effective, real-time MPC for building energy management under realistic decision-making horizons.
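Input convexity rests on a simple structural rule: layer-to-layer weights are kept nonnegative and activations are convex and nondecreasing, so the network is convex in its inputs by composition. A minimal scalar sketch (toy weights, not the IC-EoT architecture) with a numerical midpoint-convexity check:

```python
def relu(v):
    return max(0.0, v)

def icnn(x, w1=1.0, b1=0.0, wz=0.5, u=-2.0, b2=1.0):
    """Minimal scalar ICNN: f(x) = relu(wz * relu(w1*x + b1) + u*x + b2).
    f is convex in x because wz >= 0, relu is convex and nondecreasing,
    and the skip weight u enters only linearly."""
    assert wz >= 0.0, "layer-to-layer weight must be nonnegative"
    z1 = relu(w1 * x + b1)
    return relu(wz * z1 + u * x + b2)

def midpoint_convex(f, a, b):
    """Numerical check of midpoint convexity on [a, b]."""
    return f((a + b) / 2.0) <= (f(a) + f(b)) / 2.0 + 1e-12
```

Because the learned surrogate is convex in the control inputs, the downstream MPC problem remains a convex program at any horizon length, which is what keeps solver times tractable.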


[78] 2603.22104

End-to-End Differentiable Predictive Control with Guaranteed Constraint Satisfaction and Feasibility for Building Demand Response

The high energy consumption of buildings presents a critical need for advanced control strategies like Demand Response (DR). Differentiable Predictive Control (DPC) has emerged as a promising method for learning explicit control policies, yet conventional DPC frameworks are hindered by three key limitations: the use of simplistic dynamics models with limited expressiveness, a decoupled training paradigm that fails to optimize for closed-loop performance, and a lack of practical safety guarantees under realistic assumptions. To address these shortcomings, this paper proposes a novel End-to-End Differentiable Predictive Control (E2E-DPC) framework. Our approach utilizes an Encoder-Only Transformer to model the complex system dynamics and employs a unified, performance-oriented loss to jointly train the model and the control policy. Crucially, we introduce an online tube-based constraint tightening method that provides theoretical guarantees for recursive feasibility and constraint satisfaction without requiring complex offline computation of terminal sets. The framework is validated in a high-fidelity EnergyPlus simulation, controlling a multi-zone building for a DR task. The results demonstrate that the proposed method with guarantees achieves near-perfect constraint satisfaction - a reduction of over 99% in violations compared to the baseline - at the cost of only a minor increase in electricity expenditure. This work provides a deployable, performance-driven control solution for building energy management and establishes a new pathway for developing verifiable learning-based control systems under milder assumptions.


[79] 2603.22107

Sample-based detectability and moving horizon state estimation of continuous-time systems

In this paper we propose a detectability condition for nonlinear continuous-time systems with irregular/infrequent output measurements, namely a sample-based version of incremental integral input/output-to-state stability (i-iIOSS). We provide a sufficient condition for an i-iIOSS system to be sample-based i-iIOSS. This condition is also exploited to analyze the relationship between sample-based i-iIOSS and sample-based observability for linear systems, such that previously established sampling strategies for linear systems can be used to guarantee sample-based i-iIOSS. Furthermore, we present a sample-based moving horizon estimation scheme, for which robust stability can be shown. Finally, we illustrate the applicability of the proposed estimation scheme through a biomedical simulation example.


[80] 2603.22127

DQN Based Joint UAV Trajectory and Association Planning in NTN Assisted Networks

Advanced Air Mobility (AAM) has emerged as a key pillar of next-generation transportation systems, encompassing a wide range of uncrewed aerial vehicle (UAV) applications. To enable AAM, maintaining reliable and efficient communication links between UAVs and control centers is essential. At the same time, the highly dynamic nature of wireless networks, combined with the limited onboard energy of UAVs, makes efficient trajectory planning and network association crucial. Existing terrestrial networks often fail to provide ubiquitous coverage due to frequent handovers and coverage gaps. To address these challenges, geostationary Earth orbit (GEO) satellites offer a promising complementary solution for extending UAV connectivity beyond terrestrial boundaries. This work proposes an integrated GEO-terrestrial network architecture to ensure seamless UAV connectivity. Leveraging artificial intelligence (AI), a deep Q-network (DQN)-based algorithm is developed for joint UAV trajectory and association planning (JUTAP), aiming to minimize energy consumption, handover frequency, and disconnectivity. Simulation results validate the effectiveness of the proposed algorithm within the integrated GEO-terrestrial framework.
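As a hedged miniature of the DQN planner (tabular Q-learning on a toy corridor; the states, rewards, and terrestrial-coverage gap below are invented for illustration, not taken from the paper), the network-association decision can be sketched as:

```python
import random

def q_learning(n_states=5, n_actions=2, episodes=500, alpha=0.1,
               gamma=0.9, eps=0.1, seed=0):
    """Toy stand-in for the DQN: the UAV moves right along a corridor
    and chooses a link (0 = terrestrial, 1 = GEO). Terrestrial coverage
    drops out at the last two states, penalizing a bad association."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]

    def reward(s, a):
        covered = (a == 1) or (s < n_states - 2)  # GEO always covers
        return 1.0 if covered else -1.0

    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            # epsilon-greedy action selection
            a = rng.randrange(n_actions) if rng.random() < eps \
                else max(range(n_actions), key=lambda i: Q[s][i])
            r = reward(s, a)
            s2 = s + 1
            # standard Q-learning temporal-difference update
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

After training, the greedy policy associates with the GEO link exactly where terrestrial coverage is lost, mirroring the handover-aware association the paper targets at much larger scale.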


[81] 2603.22131

WiRD-Gest: Gesture Recognition In The Real World Using Range-Doppler Wi-Fi Sensing on COTS Hardware

Wi-Fi sensing has emerged as a promising technique for gesture recognition, yet its practical deployment is hindered by environmental sensitivity and device placement challenges. To overcome these limitations we propose Wi-Fi Range and Doppler (WiRD)-Gest, a novel system that performs gesture recognition using a single, unmodified Wi-Fi transceiver on a commercial off-the-shelf (COTS) laptop. The system leverages a monostatic full-duplex sensing pipeline capable of extracting Range-Doppler (RD) information. Utilizing this, we present the first benchmark of deep learning models for gesture recognition based on monostatic sensing. The key innovation lies in how monostatic sensing and spatial (range) information fundamentally transform accuracy, robustness, and generalization compared to prior approaches. We demonstrate excellent performance in crowded, unseen public spaces with dynamic interference and additional moving targets, even when trained on data from controlled environments only. These are scenarios where prior Wi-Fi sensing approaches often fail, whereas our system suffers only minor degradation. The WiRD-Gest benchmark and dataset will also be released as open source.


[82] 2603.22146

From Singleton Obstacles to Clutter: Translation Invariant Compositional Avoid Sets

This paper studies obstacle avoidance under translation-invariant dynamics using an avoid-side travel-cost Hamilton-Jacobi formulation. For running costs that are zero outside an obstacle and strictly negative inside it, we prove that the value function is non-positive everywhere, equals zero exactly outside the avoid set, and is strictly negative exactly on it. Under translation invariance, this yields a reuse principle: the value of any translated obstacle is obtained by translating a single template value function. We show that the pointwise minimum of translated template values exactly characterizes the union of the translated single-obstacle avoid sets and provides a conservative inner certificate of unavoidable collision in clutter. To reduce conservatism, we introduce a blockwise composition framework in which subsets of obstacles are merged and solved jointly. This yields a hierarchy of conservative certificates from singleton reuse to the exact clutter value, together with monotonicity under block merging and an exactness criterion based on the existence of a common clutter-avoiding control. The framework is illustrated on a Dubins car example in a repeated clutter field.
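The reuse principle admits a direct numerical illustration: evaluate one template value function and take pointwise minima over its translated copies. A hedged sketch with a made-up disc-obstacle template that respects the paper's sign structure (zero outside the avoid set, strictly negative on it):

```python
def template_value(x, radius=1.0):
    """Toy template value: strictly negative inside a disc obstacle
    centered at the origin, zero outside."""
    d2 = x[0] ** 2 + x[1] ** 2
    return min(0.0, d2 - radius ** 2)

def clutter_value(x, centers):
    """Reuse principle: the value for translated copies is the pointwise
    minimum of the translated template values."""
    return min(template_value((x[0] - c[0], x[1] - c[1])) for c in centers)

def in_avoid_set(x, centers):
    return clutter_value(x, centers) < 0.0
```

The pointwise minimum here is exact for the union of single-obstacle avoid sets; the paper's blockwise composition refines this when joint obstacle interactions make singleton reuse conservative.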


[83] 2603.22210

Route-Phasing-Split-Encoded Genetic Algorithm for Multi-Satellite On-Orbit Servicing Mission Planning

This article addresses multi-servicer on-orbit servicing mission planning in geosynchronous Earth orbit, where routing decisions are tightly coupled with time-dependent orbital phasing and strict propellant and mission-duration constraints. We propose a Route-Phasing-Split Genetic Algorithm (RPS-GA) that simultaneously optimizes target sequencing, discrete phasing rotation decisions (i.e., the number of phasing revolutions/waiting cycles), and route partitioning across multiple servicing spacecraft (SSCs). An RPS triplet chromosome encodes route order, phasing rotations, and route splits in a unified structure, enabling split-aware recombination without disrupting feasible multi-servicer route blocks. Feasibility is enforced through a constraint-aware fitness function that ranks feasible solutions based on total $\Delta V$, while penalizing propellant and mission-duration violations using aggregate and imbalance penalties. This formulation discourages the concentration of violations on a single SSC. Once a feasible best solution is identified, it is preserved as feasible in subsequent generations, thereby enhancing convergence stability. The framework incorporates split-aware crossover, mutation, and a regret-based Large Neighborhood Search for local intensification. Experiments on representative GEO servicing scenarios demonstrate that RPS-GA produces feasible multi-servicer plans with substantially improved fuel efficiency, reducing total $\Delta V$ by $24.5\%$ (from $1956.36\ m/s$ to $1476.32\ m/s$) compared with a state-of-the-art LNS-AGA baseline.
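The RPS triplet encoding can be illustrated with a tiny decoder that turns (order, phasing, splits) into per-SSC routes. The flat-list layout and tuple format below are assumptions for illustration, not the paper's exact data structure:

```python
def decode_rps(order, phasing, splits):
    """Decode an RPS triplet chromosome into per-spacecraft routes.
    order   : permutation of target indices,
    phasing : phasing-revolution count per position in `order`,
    splits  : sorted cut points partitioning `order` among the SSCs."""
    assert len(order) == len(phasing)
    routes, prev = [], 0
    for cut in list(splits) + [len(order)]:
        routes.append([(order[i], phasing[i]) for i in range(prev, cut)])
        prev = cut
    return routes
```

Keeping the three genes in one chromosome is what lets crossover exchange whole per-SSC route blocks (contiguous segments between cut points) without destroying feasibility.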


[84] 2603.22222

A Portfolio-Level Optimization Framework for Coordinated Market Participation and Operational Scheduling of Hydrogen-Centric Companies

The vision of electrolytic hydrogen as a clean energy vector prompts the emergence of hydrogen-centric companies that must simultaneously engage in electricity, hydrogen, and green certificate markets while operating complex, geographically distributed asset portfolios. This paper proposes a portfolio-level optimization framework tailored for the integrated operational scheduling and market participation of such companies. The model co-optimizes asset scheduling and market decisions across multiple sites, incorporating spatial distribution, technical constraints, and company-level policy requirements. It supports participation in the electricity market, physical and virtual Power Purchase Agreements (PPAs), bundled and unbundled hydrogen markets, and green certificate transactions. The model is applied to three operational scenarios to evaluate the economic and operational impacts of different compliance strategies. Results show that centralized, portfolio-level control unlocks the full flexibility of geographically distributed assets, enabling a 2.42-fold increase in hydrogen production and a 9.4% reduction in daily operational costs, while satisfying all company policy constraints.


[85] 2603.22252

SelfTTS: cross-speaker style transfer through explicit embedding disentanglement and self-refinement using self-augmentation

This paper presents SelfTTS, a text-to-speech (TTS) model designed for cross-speaker style transfer that eliminates the need for external pre-trained speaker or emotion encoders. The architecture achieves emotional expressivity in neutral speakers through an explicit disentanglement strategy utilizing Gradient Reversal Layers (GRL) combined with cosine similarity loss to decouple speaker and emotion information. We introduce Multi Positive Contrastive Learning (MPCL) to induce clustered representations of speaker and emotion embeddings based on their respective labels. Furthermore, SelfTTS employs a self-refinement strategy via Self-Augmentation, exploiting the model's voice conversion capabilities to enhance the naturalness of synthesized speech. Experimental results demonstrate that SelfTTS achieves superior emotional naturalness (eMOS) and robust stability in target timbre and emotion compared to state-of-the-art baselines.


[86] 2603.22258

Semi-Blind Channel Estimation and Hybrid Receiver Beamforming in the Tera-Hertz Multi-User Massive MIMO Uplink

We develop a pragmatic multi-user (MU) massive multiple-input multiple-output (MIMO) channel model tailored to the THz band, encompassing factors such as molecular absorption, reflection losses, and multipath diffused ray components. Next, we propose a novel semi-blind channel state information (CSI) acquisition technique, i.e., MU whitening decorrelation semi-blind (MU-WD-SB), that exploits the second-order statistics corresponding to the unknown data symbols along with pilot vectors. A constrained Cramér-Rao Lower Bound (C-CRLB) is derived to bound the normalized mean square error (NMSE) performance of the proposed semi-blind learning technique. Our proposed scheme efficiently reduces the training overheads while enhancing the overall accuracy of the channel learning process. Furthermore, a novel hybrid receiver combiner framework is devised for MU THz massive MIMO systems, leveraging multiple-measurement-vector-based sparse Bayesian learning (MMV-SBL) that relies on the estimated CSI acquired through our proposed semi-blind technique with low-resolution analog-to-digital converters (ADCs). Finally, we propose an optimal hybrid combiner based on MMV-SBL, which directly reduces the MU interference. Extensive simulations are conducted to evaluate the performance gain of the proposed MU-WD-SB scheme over conventional training-based and other semi-blind learning techniques for a practical THz channel obtained from the high-resolution transmission (HITRAN) database. The metrics considered for quantifying the improvements include the NMSE, bit error rate (BER), and spectral efficiency (SE).


[87] 2603.20242

LL-SDR: Low-Latency Speech enhancement through Discrete Representations

Many speech enhancement (SE) methods rely on continuous representations. Recently, discrete audio tokens have been explored to enable autoregressive generation for SE. However, it remains unclear whether discretization itself consistently improves SE performance. In this paper, we introduce LL-SDR, a token-based speech enhancement framework that explicitly leverages discretization to better separate speech and noise. Our first contribution is a Variance-Ordered Residual Vector Quantizer (VO-RVQ), designed to disentangle speech and noise distributions during tokenization. Second, we propose a latent-space discriminator to better align enhanced embeddings with semantic embeddings. Experiments show that LL-SDR outperforms continuous baselines and matches the performance of autoregressive token-based approaches, while enabling lightweight, low-latency speech enhancement in both reverberant and non-reverberant noisy environments. Demos and source code are available at our project websites.


[88] 2603.20255

Abjad-Kids: An Arabic Speech Classification Dataset for Primary Education

Speech-based AI educational applications have gained significant interest in recent years, particularly for children. However, children's speech research remains limited due to the lack of publicly available datasets, especially for low-resource languages such as Arabic. This paper presents Abjad-Kids, an Arabic speech dataset designed for kindergarten and primary education, focusing on fundamental learning of alphabets, numbers, and colors. The dataset consists of 46,397 audio samples collected from children aged 3-12 years, covering 141 classes. All samples were recorded under controlled specifications to ensure consistency in duration, sampling rate, and format. To address high intra-class similarity among Arabic phonemes and the limited samples per class, we propose a hierarchical audio classification approach based on CNN-LSTM architectures. Our proposed methodology decomposes alphabet recognition into a two-stage process: an initial grouping classification model followed by specialized classifiers for each group. Both strategies, static linguistic-based grouping and dynamic clustering-based grouping, were evaluated. Experimental results demonstrate that static linguistic-based grouping achieves superior performance. Comparisons between traditional machine learning and deep learning approaches highlight the effectiveness of CNN-LSTM models combined with data augmentation. Despite achieving promising results, most of our experiments indicate a challenge with overfitting, which is likely due to the limited number of samples, even after data augmentation and model regularization. Thus, future work may focus on collecting additional data to address this issue. Abjad-Kids will be publicly available. We hope that Abjad-Kids enriches children's representation in speech datasets and serves as a good resource for future research in Arabic speech classification for kids.


[89] 2603.20256

SciNav: A General Agent Framework for Scientific Coding Tasks

Autonomous science agents built on large language models (LLMs) are increasingly used to generate hypotheses, design experiments, and produce reports. However, prior work mainly targets open-ended scientific problems with subjective outputs that are difficult to evaluate. Scientific coding benchmarks, by contrast, provide executable outputs for objective assessment. Existing approaches remain engineering-driven pipelines, revealing the need for structured, end-to-end science agent frameworks for scientific coding tasks. We address this gap by focusing on scientific coding tasks, where evaluation can be made rigorously, and introducing an agent framework SciNav (Scientific Navigator) that enables more effective solution exploration. Our framework is designed to operate under constrained search budgets, moving beyond reliance on pre-defined success metrics and prolonged search cycles. Inspired by findings that comparative judgments often reveal finer-grained quality differences and therefore provide greater discriminative power than absolute scoring, our framework leverages pairwise relative judgments within a tree search process to select top-K promising solution branches, prune low-potential ones, and progressively narrow down the solution candidates on the selected branches guided by relative comparisons. We demonstrate our agent's effectiveness across different types of tasks on two benchmarks. Experiments show that SciNav significantly outperforms direct prompting and prior agents like OpenHands and Self-Debug across different base models, task types, and difficulty levels, and exceeds different frontier comparators such as random selection and LLM absolute scoring. These results confirm the strength of our agent design and highlight the effectiveness of relative judgment-guided top-K search for high-quality scientific coding, marking a step toward more practical science agents.
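The relative-judgment selection at the core of SciNav can be sketched with a round robin of pairwise comparisons. The comparator below is a deterministic stand-in for the LLM judge, and the simple win-counting rule is an assumption for illustration, not the paper's exact aggregation scheme:

```python
def pairwise_topk(candidates, better, k=2):
    """Select the top-k candidates using only pairwise relative
    judgments: better(a, b) -> True iff a beats b. Each pair is
    judged once; candidates are ranked by wins and the best k kept."""
    wins = {c: 0 for c in candidates}
    for i, a in enumerate(candidates):
        for b in candidates[i + 1:]:
            if better(a, b):
                wins[a] += 1
            else:
                wins[b] += 1
    return sorted(candidates, key=lambda c: wins[c], reverse=True)[:k]
```

Within a tree search, such a routine would keep the top-K solution branches and prune the rest, exploiting the observation that comparative judgments discriminate quality more finely than absolute scores.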


[90] 2603.20265

JCAS-MARL: Joint Communication and Sensing UAV Networks via Resource-Constrained Multi-Agent Reinforcement Learning

Multi-UAV networks are increasingly deployed for large-scale inspection and monitoring missions, where operational performance depends on the coordination of sensing reliability, communication quality, and energy constraints. In particular, the rapid increase in overflowing waste bins and illegal dumping sites has created a need for efficient detection of waste hotspots. In this work, we introduce JCAS-MARL, a resource-aware multi-agent reinforcement learning (MARL) framework for joint communication and sensing (JCAS)-enabled UAV networks. Within this framework, multiple UAVs operate in a shared environment where each agent jointly controls its trajectory and the resource allocation of an OFDM waveform used simultaneously for sensing and communication. Battery consumption, charging behavior, and associated CO$_2$ emissions are incorporated into the system state to model realistic operational constraints. Information sharing occurs over a dynamic communication graph determined by UAV positions and wireless channel conditions. Waste hotspot detection requires consensus among multiple UAVs to improve reliability. Using this environment, we investigate how MARL policies exploit the sensing-communication-energy trade-off in JCAS-enabled UAV networks. Simulation results demonstrate that adaptive pilot-density control learned by the agents can outperform static configurations, particularly in scenarios where sensing accuracy and communication connectivity vary across the environment.


[91] 2603.20277

Resource Allocation in Electricity Markets with Budget Constrained Customers

In electricity markets, customers are increasingly constrained by their budgets. A budget constraint for a user is an upper bound on the price multiplied by the quantity. However, since prices are determined by the market equilibrium, the budget-constrained welfare maximization problem is difficult to define rigorously and to work with. In this letter, we show that a natural dual-ascent algorithm converges to a unique competitive equilibrium under budget constraints. Further, this budget-constrained equilibrium is exactly the solution of a convex welfare maximization problem in which each user's utility is replaced by a modified utility that splices the original utility with a logarithmic function where the budget binds. We also provide an explicit piecewise construction of this modified utility and demonstrate the results on examples with quadratic and square-root utility functions.
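The dual-ascent mechanism can be illustrated on a single quadratic-utility user with fixed supply: raise the price when budget-capped demand exceeds supply and lower it otherwise. The utility form a*x - b*x^2/2, the step size, and the iteration count are illustrative assumptions, not the letter's exact setup:

```python
def demand(p, a, b, budget):
    """Budget-capped demand for utility a*x - b*x^2/2 at price p:
    the unconstrained optimum (a - p)/b, capped so that p*x <= budget."""
    d = max(0.0, (a - p) / b)
    return min(d, budget / p)

def dual_ascent(users, supply, p0=1.0, step=0.05, iters=5000):
    """Price (dual) ascent: move the price in the direction of
    excess demand until the market clears."""
    p = p0
    for _ in range(iters):
        excess = sum(demand(p, *u) for u in users) - supply
        p = max(1e-6, p + step * excess)
    return p
```

With a slack budget (a=10, b=1, budget=100, supply=4) the price clears at p=6; tightening the budget to 12 moves the equilibrium to p=3, where spending p*x = 12 exactly exhausts the budget, i.e., the budget binds.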


[92] 2603.20284

STAC: Plug-and-Play Spatio-Temporal Aware Cache Compression for Streaming 3D Reconstruction

Online 3D reconstruction from streaming inputs requires both long-term temporal consistency and efficient memory usage. Although causal VGGT transformers address this challenge through a key-value (KV) cache mechanism, the cache grows linearly with the stream length, creating a major memory bottleneck. Under limited memory budgets, early cache eviction significantly degrades reconstruction quality and temporal consistency. In this work, we observe that attention in causal transformers for 3D reconstruction exhibits intrinsic spatio-temporal sparsity. Based on this insight, we propose STAC, a Spatio-Temporally Aware Cache Compression framework for streaming 3D reconstruction with large causal transformers. STAC consists of three key components: (1) a Working Temporal Token Caching mechanism that preserves long-term informative tokens using decayed cumulative attention scores; (2) a Long-term Spatial Token Caching scheme that compresses spatially redundant tokens into voxel-aligned representations for memory-efficient storage; and (3) a Chunk-based Multi-frame Optimization strategy that jointly processes consecutive frames to improve temporal coherence and GPU efficiency. Extensive experiments show that STAC achieves state-of-the-art reconstruction quality while reducing memory consumption by nearly 10x and accelerating inference by 4x, substantially improving the scalability of real-time 3D reconstruction in streaming settings.
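The Working Temporal Token Caching idea (evict by decayed cumulative attention score under a fixed memory budget) can be sketched as follows. The decay value and dictionary layout are illustrative assumptions, not STAC's actual implementation:

```python
class DecayedScoreCache:
    """Fixed-budget KV token cache: each cached token carries a decayed
    cumulative attention score; on overflow the lowest-scoring token is
    evicted, so long-term informative tokens survive."""

    def __init__(self, budget, decay=0.9):
        self.budget, self.decay = budget, decay
        self.scores = {}  # token_id -> decayed cumulative attention score

    def update(self, attn):
        """attn: {token_id: attention mass received in the new frame}."""
        for tid in self.scores:
            self.scores[tid] *= self.decay          # age existing scores
        for tid, a in attn.items():
            self.scores[tid] = self.scores.get(tid, 0.0) + a
        while len(self.scores) > self.budget:       # enforce the budget
            evict = min(self.scores, key=self.scores.get)
            del self.scores[evict]

    def cached(self):
        return set(self.scores)
```

The decay term is what distinguishes this from plain cumulative-attention eviction: a token that was heavily attended long ago but is now ignored gradually loses priority to recently useful tokens.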


[93] 2603.20290

Transparent Fragments Contour Estimation via Visual-Tactile Fusion for Autonomous Reassembly

The contour estimation of transparent fragments is very important for autonomous reassembly, especially in precision optical instrument repair, cultural relic restoration, and the investigation of accidents involving broken precious devices. Unlike general intact transparent objects, the contour estimation of transparent fragments faces greater challenges due to strict optical properties and irregular shapes and edges. To address this issue, a general transparent-fragment contour estimation framework based on visual-tactile fusion is proposed in this paper. First, we construct the transparent fragment dataset named TransFrag27K, which includes multi-scene synthetic data of broken fragments from multiple types of transparent objects, together with a scalable synthetic data generation pipeline. Second, we propose a visual grasping position detection network named TransFragNet to identify, locate, and segment the sampling grasping position. We then use a two-finger gripper with Gelsight Mini sensors to obtain reconstructed tactile information of the lateral edges of the fragments. By fusing this tactile information with visual cues, a visual-tactile fusion material classifier is proposed. Inspired by the way humans estimate a fragment's contour by combining vision and touch, we introduce a general transparent-fragment contour estimation framework based on visual-tactile fusion that demonstrates strong performance in real-world validation. Finally, a contour matching and reassembly algorithm based on multi-dimensional similarity metrics is proposed, providing a reproducible benchmark for evaluating visual-tactile contour estimation and fragment reassembly. The experimental results demonstrate the validity of the proposed framework. The dataset and codes are available at this https URL.


[94] 2603.20353

Scene Representation using 360° Saliency Graph and its Application in Vision-based Indoor Navigation

A scene can be represented visually in many formats, such as RGB-D, LiDAR scans, keypoints, rectangular, spherical, or multi-view images, in which information relevant to applications such as scene indexing and vision-based navigation is only implicitly embedded. These representations may therefore be inefficient for such applications. This paper proposes a novel 360° saliency graph representation of scenes. This rich representation explicitly encodes the relevant visual, contextual, semantic, and geometric information of the scene as nodes, edges, edge weights, and angular positions in the 360° graph. The representation is also robust to changes in scene view and addresses challenges of indoor environments, such as varied illumination, occlusions, and shadows, that hinder existing traditional methods. We utilize this rich and efficient representation for vision-based navigation and compare it with existing navigation methods that use 360° scenes, which suffer from poor scene representations lacking scene-specific information. This work first uses the proposed representation to localize the query scene in a given topological map, and then facilitates 2D navigation by estimating the next required movement directions toward the target destination in the topological map using the geometric information embedded in the 360° saliency graph. Experimental results demonstrate the efficacy of the proposed 360° saliency graph representation in enhancing both scene localization and vision-based indoor navigation.
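
As a toy illustration of how angular positions embedded in such a graph can drive navigation decisions (the node/edge schema and the labels below are hypothetical, not the paper's exact data structure):

```python
def next_heading(scene_graph, target_label):
    """Return the turn (degrees, in (-180, 180]) toward the node matching
    `target_label`, using the angular position stored in the graph;
    0 degrees is the current heading."""
    for node in scene_graph["nodes"]:
        if node["label"] == target_label:
            a = node["angle"] % 360.0
            return a - 360.0 if a > 180.0 else a  # turn by the smaller angle
    raise KeyError(target_label)

scene = {
    "nodes": [{"label": "door", "angle": 350.0},   # salient regions of the 360° view
              {"label": "sofa", "angle": 90.0}],
    "edges": [("door", "sofa", 100.0)],            # weight: angular separation
}
```

Here `next_heading(scene, "door")` yields a 10° left turn rather than a 350° right turn, the kind of movement-direction estimate the abstract describes.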


[95] 2603.20408

Meta-Learning for Repeated Bayesian Persuasion

Classical Bayesian persuasion studies how a sender influences receivers through carefully designed signaling policies within a single strategic interaction. In many real-world environments, such interactions are repeated across multiple games, creating opportunities to exploit structural similarity across tasks. In this work, we introduce Meta-Persuasion algorithms, establishing the first line of theoretical results for both full-feedback and bandit-feedback settings in the Online Bayesian Persuasion (OBP) and Markov Persuasion Process (MPP) frameworks. We show that our proposed meta-persuasion algorithms achieve provably sharper regret rates under natural notions of task similarity, improving upon the best-known convergence rates for both OBP and MPP. At the same time, they recover the standard single-game guarantees when the sequence of games is picked arbitrarily. Finally, we complement our theoretical analysis with numerical experiments that highlight our regret improvements and the benefits of meta-learning in repeated persuasion environments.


[96] 2603.20433

ALICE: A Multifaceted Evaluation Framework of Large Audio-Language Models' In-Context Learning Ability

While Large Audio-Language Models (LALMs) have been shown to exhibit degraded instruction-following capabilities, their ability to infer task patterns from in-context examples under audio conditioning remains unstudied. To address this gap, we present ALICE, a three-stage framework that progressively reduces textual guidance to systematically evaluate LALMs' in-context learning ability under audio conditioning. Evaluating six LALMs across four audio understanding tasks under two output constraint categories, we uncover a consistent asymmetry across all stages and LALMs: in-context demonstrations reliably improve format compliance but fail to improve, and often degrade, the core task performance. This suggests that LALMs can glean surface-level formatting patterns from demonstrations but may struggle to leverage cross-modal semantic grounding to reliably infer task objectives from audio-conditioned examples, highlighting potential limitations in current cross-modal integration.


[97] 2603.20448

Thermal is Always Wild: Characterizing and Addressing Challenges in Thermal-Only Novel View Synthesis

Thermal cameras provide reliable visibility in darkness and adverse conditions, but thermal imagery remains significantly harder to use for novel view synthesis (NVS) than visible-light images. This difficulty stems primarily from two characteristics of affordable thermal sensors. First, thermal images have extremely low dynamic range, which weakens appearance cues and limits the gradients available for optimization. Second, thermal data exhibit rapid frame-to-frame photometric fluctuations together with slow radiometric drift, both of which destabilize correspondence estimation and create high-frequency floater artifacts during view synthesis, particularly when no RGB guidance (beyond camera pose) is available. Guided by these observations, we introduce a lightweight preprocessing and splatting pipeline that expands usable dynamic range and stabilizes per-frame photometry. Our approach achieves state-of-the-art performance across thermal-only NVS benchmarks, without requiring any dataset-specific tuning.
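
A hypothetical preprocessing step in the spirit of the pipeline described above (not the paper's exact method): expand the usable dynamic range with a percentile stretch, then suppress frame-to-frame gain/offset fluctuations by matching each frame's statistics to the first frame.

```python
import numpy as np

def stabilize_thermal(frames, p_lo=1.0, p_hi=99.0):
    """Percentile-stretch each frame to [0, 1], then match per-frame
    mean/std to the first frame to damp photometric fluctuations.
    Percentile choices are illustrative assumptions."""
    out, ref_mu, ref_sd = [], None, None
    for f in frames:
        lo, hi = np.percentile(f, [p_lo, p_hi])
        g = np.clip((f - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
        mu, sd = g.mean(), g.std() + 1e-6
        if ref_mu is None:
            ref_mu, ref_sd = mu, sd
        out.append((g - mu) / sd * ref_sd + ref_mu)  # photometric stabilization
    return out

rng = np.random.default_rng(0)
base = rng.uniform(0.4, 0.6, size=(32, 32))          # low-dynamic-range scene
# Simulated gain drift and offset fluctuation across frames:
frames = [base * k + b for k, b in [(1.0, 0.0), (1.2, 0.05), (0.8, -0.03)]]
stab = stabilize_thermal(frames)
```

After stabilization the three frames share near-identical statistics, which is the property that keeps correspondence estimation from being destabilized by photometric drift.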


[98] 2603.20525

High-Speed, All-Terrain Autonomy: Ensuring Safety at the Limits of Mobility

A novel local trajectory planner, capable of controlling an autonomous off-road vehicle on rugged terrain at high speed, is presented. Autonomous vehicles are currently unable to safely operate off-road at high speed, as current approaches either fail to predict and mitigate rollovers induced by rough terrain or are not real-time feasible. To address this challenge, a novel model predictive control (MPC) formulation is developed for local trajectory planning. A new dynamics model for off-road vehicles on rough, non-planar terrain is derived and used for prediction. Extreme mobility, including tire liftoff without rollover, is safely enabled through a new energy-based constraint. The formulation is analytically shown to mitigate rollover types ignored by many state-of-the-art methods, and real-time feasibility is achieved through parallelized GPGPU computation. The planner's ability to provide safe, extreme trajectories is studied through both simulated trials and full-scale physical experiments. The results demonstrate fewer rollovers and more successes compared to a state-of-the-art baseline across several challenging scenarios that push the vehicle to its mobility limits.


[99] 2603.20624

Cross-Correlation Periodograms with Decaying Noise Floor for Power Spectral Density Estimation

We present a statistical analysis of a variant of the periodogram method that forms power spectral density estimates by cross-correlating the discrete Fourier transforms of adjacent time windows. The proposed estimator is closely related to cross-power spectral methods and to a technique introduced by Nelson, which has been observed empirically to improve detection of sinusoidal components in noise. We show that, under a white Gaussian noise model, the expected contribution of noise to the proposed estimator is zero and that the estimator is unbiased under certain window alignment conditions. This contrasts with classical estimators where averaging reduces variance but not expected noise. Moreover, we derive closed-form expressions for the variance and prove an upper bound on the expected magnitude of the estimator that decreases as the number of windows increases. This establishes that the proposed method achieves a noise floor that decays with averaging, unlike standard nonparametric spectral estimators. We further analyze the effect of taking the absolute value to enforce nonnegativity, providing bounds on the resulting bias, and show that this bias also decreases with the number of windows. Theoretical results are validated through numerical simulations. We demonstrate the potential sensitivity to phase misalignment and methods of realignment. We also provide empirical evidence that the estimator is robust to other types of noise.
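
A minimal sketch of the core estimator: average the product of each window's DFT with the conjugate DFT of the next window, so the white-noise contribution has zero mean across windows (window length and signal parameters below are illustrative, not from the paper).

```python
import numpy as np

def cross_periodogram(x, win_len, n_windows):
    """Average X_k * conj(X_{k+1}) over adjacent-window DFT pairs.
    Unlike Welch averaging of |X_k|^2, the expected noise contribution
    is zero, so the noise floor decays with the number of windows."""
    acc = np.zeros(win_len, dtype=complex)
    for k in range(n_windows - 1):
        a = np.fft.fft(x[k * win_len:(k + 1) * win_len])
        b = np.fft.fft(x[(k + 1) * win_len:(k + 2) * win_len])
        acc += a * np.conj(b)
    return acc / (n_windows - 1)

# A sinusoid whose period divides the window length stays phase-aligned
# between adjacent windows; the white-noise cross-terms average toward zero.
n, wins = 256, 64
t = np.arange(n * wins)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 8 * t / n) + 0.5 * rng.standard_normal(t.size)
S = cross_periodogram(x, n, wins)
```

The sinusoid's bin dominates `|S|`; the window-alignment condition analyzed in the paper corresponds to the phase alignment assumed in this toy signal.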


[100] 2603.20646

EQISA: Energy-efficient Quantum Instruction Set Architecture using Sparse Dictionary Learning

The scalability of quantum computing in supporting sophisticated algorithms critically depends not only on qubit quality and error handling, but also on the efficiency of classical control, constrained by the cryogenic control bandwidth and energy budget. In this work, we address this challenge by investigating the algorithmic complexity of quantum circuits at the instruction set architecture (ISA) level. We introduce an energy-efficient quantum instruction set architecture (EQISA) that synthesizes quantum circuits in a discrete Solovay-Kitaev basis of fixed depth and encodes instruction streams using a sparse dictionary learned from decomposing a set of Haar-random unitaries, followed by entropy-optimal Huffman coding and an additional lossless bzip2 compression stage. This approach is evaluated on benchmark quantum circuits demonstrating over 60% compression of quantum instruction streams across system sizes, enabling proportional reductions in classical control energy and communication overhead without loss of computational fidelity. Beyond compression, EQISA facilitates the discovery of higher-level composable abstractions in quantum circuits and provides estimates of quantum algorithmic complexity. These findings position EQISA as an impactful direction for improving the energy efficiency and scalability of quantum control architectures.
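
The Huffman stage of such a pipeline can be sketched as follows; the sparse-dictionary learning and bzip2 stages are omitted, and the instruction symbols are hypothetical stand-ins.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build an entropy-optimal prefix code for a symbol stream by
    repeatedly merging the two least frequent subtrees."""
    freq = Counter(symbols)
    if len(freq) == 1:
        return {next(iter(freq)): "0"}
    heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)                      # unique tie-breaker for the heap
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (n1 + n2, tie, merged))
        tie += 1
    return heap[0][2]

stream = list("HXHHTHSHHX")               # hypothetical instruction symbols
code = huffman_code(stream)
bits = sum(len(code[s]) for s in stream)  # 16 bits vs. 20 at fixed 2 bits/symbol
```

The skew of the symbol distribution is what the encoding exploits; the lossless bzip2 pass would then squeeze any remaining redundancy in the bitstream.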


[101] 2603.20819

Achieving $\widetilde{O}(1/ε)$ Sample Complexity for Bilinear Systems Identification under Bounded Noises

This paper studies finite-sample set-membership identification for discrete-time bilinear systems under bounded symmetric log-concave disturbances. Compared with existing finite-sample results for linear systems and related analyses under stronger noise assumptions, we consider the more challenging bilinear setting with trajectory-dependent regressors and allow marginally stable dynamics with polynomial mean-square state growth. Under these conditions, we prove that the diameter of the feasible parameter set shrinks with sample complexity $\widetilde{O}(1/\epsilon)$. Simulations support the theory and illustrate the advantage of the proposed estimator for uncertainty quantification.


[102] 2603.20891

Auto-differentiable data assimilation: Co-learning of states, dynamics, and filtering algorithms

Data assimilation algorithms estimate the state of a dynamical system from partial observations, where the successful performance of these algorithms hinges on costly parameter tuning and on employing an accurate model for the dynamics. This paper introduces a framework for jointly learning the state, dynamics, and parameters of filtering algorithms in data assimilation through a process we refer to as auto-differentiable filtering. The framework leverages a theoretically motivated loss function that enables learning from partial, noisy observations via gradient-based optimization using auto-differentiation. We further demonstrate how several well-known data assimilation methods can be learned or tuned within this framework. To underscore the versatility of auto-differentiable filtering, we perform experiments on dynamical systems spanning multiple scientific domains, such as the Clohessy-Wiltshire equations from aerospace engineering, the Lorenz-96 system from atmospheric science, and the generalized Lotka-Volterra equations from systems biology. Finally, we provide guidelines for practitioners to customize our framework according to their observation model, accuracy requirements, and computational budget.
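
The recursion being differentiated through can be illustrated with a standard Kalman predict/update step plus the innovation negative log-likelihood, a natural training loss in such frameworks. This is a generic sketch, not the paper's implementation; in an auto-differentiable setting these steps would be composed and the accumulated loss differentiated with respect to the noise covariances and dynamics.

```python
import numpy as np

def kalman_step(m, P, y, A, C, Q, R):
    """One predict/update step of a linear Kalman filter, returning the
    updated mean, covariance, and the innovation negative log-likelihood."""
    m_pred = A @ m
    P_pred = A @ P @ A.T + Q
    S = C @ P_pred @ C.T + R                 # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
    innov = y - C @ m_pred
    m_new = m_pred + K @ innov
    P_new = (np.eye(len(m)) - K @ C) @ P_pred
    nll = 0.5 * (innov @ np.linalg.solve(S, innov)
                 + np.log(np.linalg.det(S)) + len(y) * np.log(2 * np.pi))
    return m_new, P_new, nll

# Track a constant scalar state from noisy partial observations.
A = C = np.array([[1.0]])
Q, R = np.array([[1e-4]]), np.array([[0.1]])
m, P = np.zeros(1), np.eye(1)
rng = np.random.default_rng(0)
for _ in range(100):
    y = np.array([1.0]) + 0.3 * rng.standard_normal(1)
    m, P, nll = kalman_step(m, P, y, A, C, Q, R)
```

Written in an autodiff framework (e.g., with `jax.numpy` in place of `numpy`), the summed `nll` over a trajectory is exactly the kind of loss the abstract describes learning from.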


[103] 2603.20995

On the Bit Error Rate Fluctuation Induced by Multipath Interference in the Coherent Regime for Intra Data Center Applications

We theoretically explain, for the first time, how multipath interference resizes the PAM-4 constellation in the coherent regime and thus increases bit error rate fluctuation in intra-data-center applications.


[104] 2603.20999

OrbitStream: Training-Free Adaptive 360-degree Video Streaming via Semantic Potential Fields

Adaptive 360° video streaming for teleoperation faces dual challenges: viewport prediction under uncertain gaze patterns and bitrate adaptation over volatile wireless channels. While data-driven and Deep Reinforcement Learning (DRL) methods achieve high Quality of Experience (QoE), their "black-box" nature and reliance on training data can limit deployment in safety-critical systems. To address this, we propose OrbitStream, a training-free framework that combines semantic scene understanding with robust control theory. We formulate viewport prediction as a Gravitational Viewport Prediction (GVP) problem, where semantic objects generate potential fields that attract user gaze. Furthermore, we employ a Saturation-Based Proportional-Derivative (PD) Controller for buffer regulation. On object-rich teleoperation traces, OrbitStream achieves a 94.7\% zero-shot viewport prediction accuracy without user-specific profiling, approaching trajectory-extrapolation baselines ($\sim$98.5\%). Across 3,600 Monte Carlo simulations on diverse network traces, OrbitStream yields a mean QoE of 2.71. It ranks second among 12 evaluated algorithms, close to the top-performing BOLA-E (2.80) while outperforming FastMPC (1.84). The system exhibits an average decision latency of 1.01 ms with minimal rebuffering events. By providing competitive QoE with interpretability and zero training overhead, OrbitStream demonstrates that physics-based control, combined with semantic modeling, offers a practical solution for 360° streaming in teleoperation.
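
A buffer-regulating saturation-based PD controller of the kind described can be sketched as follows; the gains, rate ladder, and throughput value are illustrative assumptions, not OrbitStream's tuned parameters.

```python
def pd_bitrate_controller(buffer_s, target_s, prev_error,
                          kp=0.5, kd=0.1, throughput=10.0,
                          rate_min=1.0, rate_max=20.0):
    """Scale the requested bitrate (Mbps) around the measured throughput
    according to the buffer error, then saturate to the bitrate ladder."""
    error = buffer_s - target_s        # seconds above (+) or below (-) target
    rate = throughput * (1.0 + kp * error + kd * (error - prev_error))
    return max(rate_min, min(rate_max, rate)), error

low_rate, e = pd_bitrate_controller(1.0, 4.0, 0.0)    # buffer starving
high_rate, _ = pd_bitrate_controller(8.0, 4.0, 0.0)   # buffer ample
```

A starving buffer drives the request down to the floor of the ladder, while an ample buffer saturates at the ceiling; unlike a learned policy, every decision is traceable to the two gain terms.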


[105] 2603.21026

Frames and Bases of Translates of Signals on Undirected Graphs

We study a shift-invariant space on an undirected graph $G$ having $N$ vertices. We obtain a characterization theorem for a system of generalized translates $\{T_{i}g : 1\leq i\leq N\}$, for $g\in C^N$, to form an orthonormal basis. Moreover, we find a necessary and sufficient condition for the system $\{T_{i}g : 1\leq i\leq m\}$, $m\leq N$, to form a linearly independent set and an orthonormal set. Further, we obtain a characterization result for a system of generalized translates which is generated by multiple generators $g_{1},...,g_{M}$ to form a frame for $C^N$. In particular, we deduce similar results for the system $\{T_{i}M_{s}g : 1\leq i,s\leq N\}$ with modulation $M_{s}$ and the spectral graph wavelet system. We also provide an illustration for the spectral graph wavelet system.


[106] 2603.21244

Amortized Variational Inference for Logistic Regression with Missing Covariates

Missing covariate data pose a significant challenge to statistical inference and machine learning, particularly for classification tasks like logistic regression. Classical iterative approaches (EM, multiple imputation) are often computationally intensive, sensitive to high missingness rates, and limited in uncertainty propagation. Recent deep generative models based on VAEs show promise but rely on complex latent representations. We propose Amortized Variational Inference for Logistic Regression (AV-LR), a unified end-to-end framework for binary logistic regression with missing covariates. AV-LR integrates a probabilistic generative model with a simple amortized inference network, trained jointly by maximizing the evidence lower bound. Unlike competing methods, AV-LR performs inference directly in the space of missing data without additional latent variables, using a single inference network and a linear layer that jointly estimate regression parameters and the missingness mechanism. AV-LR achieves estimation accuracy comparable to or better than state-of-the-art EM-like algorithms, with significantly lower computational cost. It naturally extends to missing-not-at-random settings by explicitly modeling the missingness mechanism. Empirical results on synthetic and real-world datasets confirm its effectiveness and efficiency across various missing-data scenarios.


[107] 2603.21251

WirelessBench: A Tolerance-Aware LLM Agent Benchmark for Wireless Network Intelligence

LLM agents are emerging as a key enabler for autonomous wireless network management. Reliably deploying them, however, demands benchmarks that reflect real engineering risk. Existing wireless benchmarks evaluate single isolated capabilities and treat all errors uniformly, missing both cascaded-chain failures and catastrophic unit confusions (\textit{e.g.}, dB vs.\ dBm). We present \wb{}, the first tolerance-aware, tool-integrated benchmark for LLM-based wireless agents. \wb{} is organized as a three-tier cognitive hierarchy: domain knowledge reasoning (WCHW, 1{,}392 items), intent-driven resource allocation (WCNS, 1{,}000 items), and proactive multi-step decisions under mobility (WCMSA, 1{,}000 items). Moreover, \wb{} is established on three design principles: \emph{(i)}~tolerance-aware scoring with catastrophic-error detection; \emph{(ii)}~tool-necessary tasks requiring a 3GPP-compliant ray-tracing query for channel quality; and \emph{(iii)}~Chain-of-Thought (CoT)-traceable items, where every benchmark item ships with a complete CoT trajectory enabling fine-grained diagnosis of where in the reasoning chain an agent fails. Our numerical results show that the direct-prompting model (GPT-4o) scores $68\%$, trailing a tool-integrated agent ($84.64\%$) by $16.64$\,pp; $23\%$ of errors are catastrophic failures invisible to exact-match metrics. More importantly, the hierarchy decomposes errors into four actionable diagnostic categories that flat evaluation cannot reveal. Code and data: this https URL.
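
The idea of tolerance-aware scoring with catastrophic-error detection can be illustrated with a toy scorer; the thresholds and partial-credit rule below are hypothetical, not \wb{}'s actual rubric.

```python
def tolerance_score(pred, truth, unit_pred, unit_truth, rel_tol=0.05):
    """Toy tolerance-aware scorer: a unit confusion (e.g., dB vs. dBm) is
    a catastrophic zero regardless of the numeric value; within-tolerance
    answers get full credit; otherwise credit decays with relative error."""
    if unit_pred != unit_truth:
        return 0.0, "catastrophic"
    rel_err = abs(pred - truth) / max(abs(truth), 1e-12)
    if rel_err <= rel_tol:
        return 1.0, "ok"
    return max(0.0, 1.0 - rel_err), "tolerable"
```

An exact-match metric cannot distinguish a slightly-off but safe answer from a numerically identical answer in the wrong unit; this kind of scorer separates the two failure modes.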


[108] 2603.21316

HELIX: Scaling Raw Audio Understanding with Hybrid Mamba-Attention Beyond the Quadratic Limit

Audio representation learning typically evaluates design choices such as input frontend, sequence backbone, and sequence length in isolation. We show that these axes are coupled, and conclusions from one setting often do not transfer to others. We introduce HELIX, a controlled framework comparing pure Mamba, pure attention, and a minimal hybrid with a single attention bottleneck. All models are parameter-matched at about 8.3M parameters to isolate architectural effects. Across six datasets, we find that the preferred input representation depends on the backbone, and that attention hurts performance on short, stationary audio but becomes important at longer sequence lengths. On a 5-minute speaker identification task with 30,000 tokens, pure attention fails with out-of-memory errors, while HELIX closes an 11.5-point gap over pure Mamba.


[109] 2603.21326

B-jet Tagging Using a Hybrid Edge Convolution and Transformer Architecture

Jet flavor tagging plays an important role in precise Standard Model measurements, enabling the extraction of the mass dependence of jet-quark and quark-gluon plasma (QGP) interactions. It also enables inferring the nature of particles produced in high-energy particle collisions that contain heavy quarks. The classification of bottom jets is vital for exploring new physics scenarios in proton-proton collisions. In this research, we present a hybrid deep learning architecture that integrates edge convolutions with transformer self-attention mechanisms into a single architecture, the Edge Convolution Transformer (ECT) model, for bottom-quark jet tagging. ECT processes track-level features (impact parameters, momentum, and their significances) alongside jet-level observables (vertex information and kinematics) to achieve state-of-the-art performance. The study utilizes the ATLAS simulation dataset. We demonstrate that ECT achieves 0.9333 AUC for b-jet versus combined charm and light jet discrimination, surpassing ParticleNet (0.8904 AUC) and a pure transformer baseline (0.9216 AUC). The model maintains inference latency below 0.060 ms per jet on modern GPUs, meeting the stringent requirements for real-time event selection at the LHC. Our results demonstrate that hybrid architectures combining local and global features offer superior performance for challenging jet classification tasks. The proposed architecture achieves good results in b-jet tagging, particularly excelling in charm jet rejection (the most challenging task), while maintaining light-jet discrimination comparable to pure transformer models.


[110] 2603.21370

Adaptive and robust experimental design for linear dynamical models using Kalman filter

Current experimental design techniques for dynamical systems often only incorporate measurement noise, while dynamical systems also involve process noise. To construct experimental designs we need to quantify their information content. The Fisher information matrix is a popular tool to do so. Calculating the Fisher information matrix for linear dynamical systems with both process and measurement noise involves estimating the uncertain dynamical states using a Kalman filter. The Fisher information matrix, however, depends on the true but unknown model parameters. In this paper we combine two methods to solve this issue and develop a robust experimental design methodology. First, Bayesian experimental design averages the Fisher information matrix over a prior distribution of possible model parameter values. Second, adaptive experimental design allows for this information to be updated as measurements are being gathered. This updated information is then used to adapt the remainder of the design.
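
The Bayesian averaging step can be sketched directly: sample parameters from the prior and average a D-optimality score of the Fisher information over the samples. The exponential-decay model and its scalar FIM below are a toy illustration, not the paper's system.

```python
import numpy as np

def bayesian_design_score(design, prior_samples, fim):
    """Average a D-optimality score (log-determinant of the Fisher
    information matrix) over prior samples, since the FIM depends on the
    true but unknown parameters. `fim(design, theta)` is user-supplied."""
    return float(np.mean([np.linalg.slogdet(fim(design, th))[1]
                          for th in prior_samples]))

# Toy model (an assumption for illustration): y = exp(-theta * t) + noise,
# so the scalar FIM is (dy/dtheta)^2 / sigma^2 = (t * exp(-theta * t))^2 / sigma^2.
def toy_fim(t, theta, sigma=0.1):
    s = t * np.exp(-theta * t)
    return np.array([[s * s / sigma ** 2]])

rng = np.random.default_rng(0)
prior = rng.uniform(0.5, 1.5, size=200)       # prior over the decay rate
scores = {t: bayesian_design_score(t, prior, toy_fim) for t in (0.1, 1.0, 5.0)}
```

The score peaks near the sample time where the model output is most sensitive to the parameter; in the adaptive scheme, the prior would be replaced by the posterior after each measurement and the remaining design re-optimized.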


[111] 2603.21478

TaigiSpeech: A Low-Resource Real-World Speech Intent Dataset and Preliminary Results with Scalable Data Mining In-the-Wild

Speech technologies have advanced rapidly and serve diverse populations worldwide. However, many languages remain underrepresented due to limited resources. In this paper, we introduce \textbf{TaigiSpeech}, a real-world speech intent dataset in Taiwanese Taigi (aka Taiwanese Hokkien/Southern Min), which is a low-resource and primarily spoken language. The dataset is collected from older adults, comprising 21 speakers with a total of 3k utterances. It is designed for practical intent detection scenarios, including healthcare and home assistant applications. To address the scarcity of labeled data, we explore two data mining strategies with two levels of supervision: keyword match data mining with LLM pseudo labeling via an intermediate language and an audio-visual framework that leverages multimodal cues with minimal textual supervision. This design enables scalable dataset construction for low-resource and unwritten spoken languages. TaigiSpeech will be released under the CC BY 4.0 license to facilitate broad adoption and research on low-resource and unwritten languages. The project website and the dataset can be found on this https URL.


[112] 2603.21545

Auction-Based Task Allocation with Energy-Conscientious Trajectory Optimization for AMR Fleets

This paper presents a hierarchical two-stage framework for multi-robot task allocation and trajectory optimization in asymmetric task spaces: (1) a sequential auction allocates tasks using closed-form bid functions, and (2) each robot independently solves an optimal control problem for energy-minimal trajectories with a physics-based battery model, followed by a collision avoidance refinement step using pairwise proximity penalties. Event-triggered warm-start rescheduling with bounded trigger frequency handles robot faults, priority arrivals, and energy deviations. Across 505 scenarios with 2-20 robots and up to 100 tasks on three factory layouts, both energy- and distance-based auction variants achieve 11.8% average energy savings over nearest-task allocation, with rescheduling latency under 10 ms. The central finding is that bid-metric performance is regime-dependent: in uniform workspaces, distance bids outperform energy bids by 3.5% (p < 0.05, Wilcoxon) because a 15.7% closed-form approximation error degrades bid ranking accuracy to 87%; however, when workspace friction heterogeneity is sufficient (r < 0.85 energy-distance correlation), a zone-aware energy bid outperforms distance bids by 2-2.4%. These results provide practitioner guidance: use distance bids in near-uniform terrain and energy-aware bids when friction variation is significant.
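
The first stage can be sketched as a sequential single-item auction with a distance bid, one of the two bid metrics compared above; an energy bid would swap in a battery model. The fleet, task positions, and tie-breaking below are illustrative.

```python
import math

def sequential_auction(robots, tasks):
    """Each round, the lowest robot-task bid (Euclidean distance) wins,
    and the winning robot's position advances to the task it won."""
    pos = dict(robots)                      # robot -> (x, y)
    remaining = dict(tasks)                 # task  -> (x, y)
    assignment = {r: [] for r in pos}
    while remaining:
        _, r, t = min((math.dist(pos[r], q), r, t)
                      for r in pos for t, q in remaining.items())
        assignment[r].append(t)
        pos[r] = remaining.pop(t)
    return assignment

robots = {"r1": (0.0, 0.0), "r2": (10.0, 0.0)}   # illustrative fleet
tasks = {"t1": (1.0, 0.0), "t2": (9.0, 0.0), "t3": (2.0, 0.0)}
assignment = sequential_auction(robots, tasks)
```

Because each bid is a closed-form function of positions, re-running the auction after a fault or priority arrival is cheap, which is what makes millisecond-scale warm-start rescheduling feasible.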


[113] 2603.21580

Conformal Koopman for Embedded Nonlinear Control with Statistical Robustness: Theory and Real-World Validation

We propose a fully data-driven, Koopman-based framework for statistically robust control of discrete-time nonlinear systems with linear embeddings. Establishing a connection between the Koopman operator and contraction theory, it offers distribution-free probabilistic bounds on the state tracking error under Koopman modeling uncertainty. Conformal prediction is employed here to rigorously derive a bound on the state-dependent modeling uncertainty throughout the trajectory, ensuring safety and robustness without assuming a specific error prediction structure or distribution. Unlike prior approaches that merely combine conformal prediction with Koopman-based control in an open-loop setting, our method establishes a closed-loop control architecture with formal guarantees that explicitly account for both forward and inverse modeling errors. Also, by expressing the tracking error bound in terms of the control parameters and the modeling errors, our framework offers a quantitative means to formally enhance the performance of arbitrary Koopman-based control. We validate our method both in numerical simulations with the Dubins car and in real-world experiments with a highly nonlinear flapping-wing drone. The results demonstrate that our method indeed provides formal safety guarantees while maintaining accurate tracking performance under Koopman modeling uncertainty.
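
The split-conformal calibration step that yields such distribution-free bounds can be sketched as follows; this is the generic conformal recipe applied to stand-in residuals, not the paper's Koopman-specific construction.

```python
import numpy as np

def conformal_bound(residuals, alpha=0.1):
    """Given held-out model-error norms exchangeable with a fresh one,
    return a bound covering the fresh error with probability >= 1 - alpha,
    with no assumption on the error distribution."""
    n = len(residuals)
    k = int(np.ceil((n + 1) * (1 - alpha)))   # conformal rank
    return float(np.sort(residuals)[min(k, n) - 1])

rng = np.random.default_rng(0)
calib = np.abs(rng.standard_normal(500))      # stand-in one-step error norms
bound = conformal_bound(calib, alpha=0.1)
# Empirical coverage on fresh draws from the same distribution:
coverage = float(np.mean(np.abs(rng.standard_normal(5000)) <= bound))
```

Propagating such a state-dependent bound through a contraction argument is how the closed-loop tracking-error guarantee in the paper is obtained.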


[114] 2603.21594

Spatio-Temporal Attention Enhanced Multi-Agent DRL for UAV-Assisted Wireless Networks with Limited Communications

In this paper, we employ multiple UAVs to accelerate data transmissions from ground users (GUs) to a remote base station (BS) via the UAVs' relay communications. The UAVs' intermittent information exchanges typically result in delays in acquiring the complete system state and hinder their effective collaboration. To maximize the overall throughput, we first propose a delay-tolerant multi-agent deep reinforcement learning (MADRL) algorithm that integrates a delay-penalized reward to encourage information sharing among UAVs, while jointly optimizing the UAVs' trajectory planning, network formation, and transmission control strategies. Additionally, considering information loss due to unreliable channel conditions, we further propose a spatio-temporal attention based prediction approach to recover the lost information and enhance each UAV's awareness of the network state. These two designs are envisioned to enhance the network capacity in UAV-assisted wireless networks with limited communications. The simulation results reveal that our new approach achieves over 50\% reduction in information delay and 75\% throughput gain compared to the conventional MADRL. Interestingly, it is shown that improving the UAVs' information sharing will not sacrifice the network capacity. Instead, it significantly improves the learning performance and throughput simultaneously. It is also effective in reducing the need for UAVs' information exchange and thus fostering practical deployment of MADRL in UAV-assisted wireless networks.


[115] 2603.21616

Rateless DeepJSCC for Broadcast Channels: a Rate-Distortion-Complexity Tradeoff

In recent years, numerous data-intensive broadcasting applications have emerged at the wireless edge, calling for a flexible tradeoff between distortion, transmission rate, and processing complexity. While deep learning-based joint source-channel coding (DeepJSCC) has been identified as a potential solution to data-intensive communications, most of these schemes are confined to worst-case solutions, lack adaptive complexity, and are inefficient in broadcast settings. To overcome these limitations, this paper introduces nonlinear transform rateless source-channel coding (NTRSCC), a variable-length JSCC framework for broadcast channels based on rateless codes. In particular, we integrate learned source transformations with physical-layer LT codes, develop unequal protection schemes that exploit decoder side information, and devise approximations to enable end-to-end optimization of rateless parameters. Our framework enables heterogeneous receivers to adaptively adjust their received number of rateless symbols and decoding iterations in belief propagation, thereby achieving a controllable tradeoff between distortion, rate, and decoding complexity. Simulation results demonstrate that the proposed method enhances image broadcast quality under stringent communication and processing budgets over heterogeneous edge devices.


[116] 2603.21635

RTD-RAX: Fast, Safe Trajectory Planning for Systems under Unknown Disturbances

Reachability-based Trajectory Design (RTD) is a provably safe, real-time trajectory planning framework that combines offline reachable-set computation with online trajectory optimization. However, standard RTD implementations suffer from two key limitations: conservatism induced by worst-case reachable-set overapproximations, and an inability to account for real-time disturbances during execution. This paper presents RTD-RAX, a runtime-assurance extension of RTD that utilizes a non-conservative RTD formulation to rapidly generate goal-directed candidate trajectories, and utilizes mixed monotone reachability for fast, disturbance-aware online safety certification. When proposed trajectories fail safety certification under real-time uncertainty, a repair procedure finds nearby safe trajectories that preserve progress toward the goal while guaranteeing safety under real-time disturbances.


[117] 2603.21786

The Universal Normal Embedding

Generative models and vision encoders have largely advanced on separate tracks, optimized for different goals and grounded in different mathematical principles. Yet, they share a fundamental property: latent space Gaussianity. Generative models map Gaussian noise to images, while encoders map images to semantic embeddings whose coordinates empirically behave as Gaussian. We hypothesize that both are views of a shared latent source, the Universal Normal Embedding (UNE): an approximately Gaussian latent space from which encoder embeddings and DDIM-inverted noise arise as noisy linear projections. To test our hypothesis, we introduce NoiseZoo, a dataset of per-image latents comprising DDIM-inverted diffusion noise and matching encoder representations (CLIP, DINO). On CelebA, linear probes in both spaces yield strong, aligned attribute predictions, indicating that generative noise encodes meaningful semantics along linear directions. These directions further enable faithful, controllable edits (e.g., smile, gender, age) without architectural changes, where simple orthogonalization mitigates spurious entanglements. Taken together, our results provide empirical support for the UNE hypothesis and reveal a shared Gaussian-like latent geometry that concretely links encoding and generation. Code and data are available at this https URL


[118] 2603.21819

Ctrl-A: Control-Driven Online Data Augmentation

We introduce ControlAugment (Ctrl-A), an automated data augmentation algorithm for image-vision tasks, which incorporates principles from control theory for online adjustment of augmentation strength distributions during model training. Ctrl-A eliminates the need for initialization of individual augmentation strengths. Instead, augmentation strength distributions are dynamically, and individually, adapted during training based on a control-loop architecture and what we define as relative operation response curves. Using an operation-dependent update procedure provides Ctrl-A with the potential to suppress augmentation styles that negatively impact model performance, alleviating the need for manually engineering augmentation policies for new image-vision tasks. Experiments on the CIFAR-10, CIFAR-100, and SVHN-core benchmark datasets using the common WideResNet-28-10 architecture demonstrate that Ctrl-A is highly competitive with existing state-of-the-art data augmentation strategies.
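
The control-loop idea can be illustrated with a deliberately minimal per-operation update rule; Ctrl-A's actual mechanism uses relative operation response curves and strength distributions, so this bang-bang stand-in is an assumption for illustration only.

```python
def update_strength(strength, val_acc, val_acc_prev, step=0.05,
                    s_min=0.0, s_max=1.0):
    """Raise an operation's augmentation strength while validation
    accuracy is non-decreasing; lower it when the operation starts to
    hurt. Strengths stay clamped to [s_min, s_max]."""
    delta = step if val_acc >= val_acc_prev else -step
    return min(s_max, max(s_min, strength + delta))
```

Applied per operation, such a loop suppresses augmentation styles that hurt the current task without any hand-tuned initial strengths, which is the behavior the abstract attributes to the operation-dependent update procedure.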


[119] 2603.21832

Deriving Health Metrics from the Photoplethysmogram: Benchmarks and Insights from MIMIC-III-Ext-PPG

Photoplethysmography (PPG) is one of the most widely captured biosignals for clinical prediction tasks, yet PPG-based algorithms are typically trained on small-scale datasets of uncertain quality, which hinders meaningful algorithm comparisons. We present a comprehensive benchmark for PPG-based clinical prediction using the MIMIC-III-Ext-PPG dataset, establishing baselines across the full spectrum of clinically relevant applications: multi-class heart rhythm classification, and regression of physiological parameters including respiratory rate (RR), heart rate (HR), and blood pressure (BP). Most notably, we provide the first comprehensive assessment of PPG for general arrhythmia detection beyond atrial fibrillation (AF) and atrial flutter (AFLT), with performance stratified by BP, HR, and demographic subgroups. Using established deep learning architectures, we achieved strong performance for AF detection (AUROC = 0.96) and accurate physiological parameter estimation (RR MAE: 2.97 bpm; HR MAE: 1.13 bpm; SBP/DBP MAE: 16.13/8.70 mmHg). Cross-dataset validation demonstrates excellent generalizability for AF detection (AUROC = 0.97), while clinical subgroup analysis reveals marked performance differences across subgroups by BP, HR, and demographic strata. These variations appear to reflect population-specific waveform differences rather than systematic bias in model behavior. This framework establishes the first integrated benchmark for multi-task PPG-based clinical prediction, demonstrating that PPG signals can effectively support multiple simultaneous monitoring tasks and providing essential baselines for future algorithm development.


[120] 2603.21911

A Latent Representation Learning Framework for Hyperspectral Image Emulation in Remote Sensing

Synthetic hyperspectral image (HSI) generation is essential for large-scale simulation, algorithm development, and mission design, yet traditional radiative transfer models remain computationally expensive and often limited to spectrum-level outputs. In this work, we propose a latent representation-based framework for hyperspectral emulation that learns a latent generative representation of hyperspectral data. The proposed approach supports both spectrum-level and spatial-spectral emulation and can be trained either in a direct one-step formulation or in a two-step strategy that couples variational autoencoder (VAE) pretraining with parameter-to-latent interpolation. Experiments on PROSAIL-simulated vegetation data and Sentinel-3 OLCI imagery demonstrate that the method outperforms classical regression-based emulators in reconstruction accuracy, spectral fidelity, and robustness to real-world spatial variability. We further show that emulated HSIs preserve performance in downstream biophysical parameter retrieval, highlighting the practical relevance of emulated data for remote sensing applications.


[121] 2603.21913

Collision-Free Velocity Scheduling for Multi-Agent Systems on Predefined Routes via Inexact-Projection ADMM

In structured multi-agent transportation systems, agents often must follow predefined routes, making spatial rerouting undesirable or impossible. This paper addresses route-constrained multi-agent coordination by optimizing waypoint passage times while preserving each agent's assigned waypoint order and nominal route assignment. A differentiable surrogate trajectory model maps waypoint timings to smooth position profiles and captures first-order tracking lag, enabling pairwise safety to be encoded through distance-based penalties evaluated on a dense temporal grid spanning the mission horizon. The resulting nonlinear and nonconvex velocity-scheduling problem is solved using an inexact-projection Alternating Direction Method of Multipliers (ADMM) algorithm that combines structured timing updates with gradient-based collision-correction steps and avoids explicit integer sequencing variables. Numerical experiments on random-crossing, bottleneck, and graph-based network scenarios show that the proposed method computes feasible and time-efficient schedules across a range of congestion levels and yields shorter mission completion times than a representative hierarchical baseline in the tested bottleneck cases.
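The distance-based penalty on a dense temporal grid can be illustrated with a piecewise-linear surrogate trajectory model. This sketch omits the paper's first-order tracking lag and the ADMM solve itself; the routes, timings, and safety distance below are made up.

```python
def position(waypoints, times, t):
    """Piecewise-linear position along a fixed route at time t."""
    if t <= times[0]:
        return waypoints[0]
    for (p0, t0), (p1, t1) in zip(zip(waypoints, times), zip(waypoints[1:], times[1:])):
        if t <= t1:
            a = (t - t0) / (t1 - t0)
            return tuple(x0 + a * (x1 - x0) for x0, x1 in zip(p0, p1))
    return waypoints[-1]

def collision_penalty(routes, timings, horizon, n_grid=200, d_safe=1.0):
    """Sum of squared safety-margin violations over a dense temporal grid."""
    penalty = 0.0
    for k in range(n_grid + 1):
        t = horizon * k / n_grid
        pos = [position(w, s, t) for w, s in zip(routes, timings)]
        for i in range(len(pos)):
            for j in range(i + 1, len(pos)):
                d = sum((a - b) ** 2 for a, b in zip(pos[i], pos[j])) ** 0.5
                penalty += max(0.0, d_safe - d) ** 2
    return penalty

# two agents on crossing straight routes through the origin
routes = [[(-5.0, 0.0), (5.0, 0.0)], [(0.0, -5.0), (0.0, 5.0)]]
sync = [[0.0, 10.0], [0.0, 10.0]]        # both reach the origin at t = 5
stagger = [[0.0, 10.0], [4.0, 14.0]]     # agent 2 delayed by 4 s
print(collision_penalty(routes, sync, 14.0), collision_penalty(routes, stagger, 14.0))
```

Retiming waypoint passages (the `stagger` schedule) drives the penalty to zero without touching the spatial routes, which is exactly the degree of freedom the velocity-scheduling formulation exploits.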


[122] 2603.21977

BOOST-RPF: Boosted Sequential Trees for Radial Power Flow

Accurate power flow analysis is critical for modern distribution systems, yet classical solvers face scalability issues, and current machine learning models often struggle with generalization. We introduce BOOST-RPF, a novel method that reformulates voltage prediction from a global graph regression task into a sequential path-based learning problem. By decomposing radial networks into root-to-leaf paths, we leverage gradient-boosted decision trees (XGBoost) to model local voltage-drop regularities. We evaluate three architectural variants: Absolute Voltage, Parent Residual, and Physics-Informed Residual. This approach aligns the model architecture with the recursive physics of power flow, ensuring size-agnostic application and superior out-of-distribution robustness. Benchmarked against the Kerber Dorfnetz grid and the ENGAGE suite, BOOST-RPF achieves state-of-the-art results with its Parent Residual variant which consistently outperforms both analytical and neural baselines in standard accuracy and generalization tasks. While global Multi-Layer Perceptrons (MLPs) and Graph Neural Networks (GNNs) often suffer from performance degradation under topological shifts, BOOST-RPF maintains high precision across unseen feeders. Furthermore, the framework displays linear $O(N)$ computational scaling and significantly increased sample efficiency through per-edge supervision, offering a scalable and generalizable alternative for real-time distribution system operator (DSO) applications.
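The parent-residual idea can be sketched on a synthetic feeder path: learn a per-edge voltage drop from an edge feature, then roll the prediction out from the root to a leaf. An ordinary least-squares line stands in for the gradient-boosted trees, and all coefficients and units below are illustrative.

```python
import random

random.seed(2)

# synthetic per-edge training data: drop = 0.002 * load + noise (per-unit volts)
train = []
for _ in range(500):
    load = random.uniform(0.0, 50.0)
    train.append((load, 0.002 * load + random.gauss(0.0, 1e-4)))

# least-squares fit drop ~ a*load + b (stand-in for a boosted-tree edge model)
n = len(train)
sx = sum(l for l, _ in train); sy = sum(d for _, d in train)
sxx = sum(l * l for l, _ in train); sxy = sum(l * d for l, d in train)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

def rollout(root_voltage, path_loads):
    """Parent-residual rollout: v_child = v_parent - predicted edge drop."""
    v, profile = root_voltage, [root_voltage]
    for load in path_loads:
        v -= a * load + b
        profile.append(v)
    return profile

profile = rollout(1.0, [30.0, 20.0, 10.0])
print([round(v, 4) for v in profile])
```

Because the model is trained per edge and applied recursively, the same fitted model works on a path of any length, which is the source of the size-agnostic behavior claimed above.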


[123] 2603.22267

TiCo: Time-Controllable Training for Spoken Dialogue Models

We propose TiCo, a simple post-training method for enabling spoken dialogue models (SDMs) to follow time-constrained instructions and generate responses with controllable duration. This capability is valuable for real-world spoken language systems such as voice assistants and interactive agents, where controlling response duration can improve interaction quality. However, despite their strong ability to generate natural spoken responses, existing models lack time awareness and struggle to follow duration-related instructions (e.g., "Please generate a response lasting about 15 seconds"). Through an empirical evaluation of both open-source and commercial SDMs, we show that they frequently fail to satisfy such time-control requirements. TiCo addresses this limitation by enabling models to estimate elapsed speaking time during generation through Spoken Time Markers (STM) (e.g., <10.6 seconds>). These markers help the model maintain awareness of time and adjust the remaining content to meet the target duration. TiCo is simple and efficient: it requires only a small amount of data and no additional question-answer pairs, relying instead on self-generation and reinforcement learning. Experimental results show that TiCo significantly improves adherence to duration constraints while preserving response quality.
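The marker mechanism can be sketched as follows. The marker format and insertion interval are assumptions for illustration, not the paper's exact STM design, and per-token durations are given rather than predicted.

```python
def insert_time_markers(tokens, durations, interval=2.0):
    """Interleave elapsed-time markers like <2.0 seconds> every `interval`
    seconds of speech, given per-token durations in seconds."""
    out, elapsed, next_mark = [], 0.0, interval
    for tok, dur in zip(tokens, durations):
        out.append(tok)
        elapsed += dur
        while elapsed >= next_mark:
            out.append(f"<{next_mark:.1f} seconds>")
            next_mark += interval
    return out, elapsed

tokens = ["the", "quick", "brown", "fox", "jumps", "over", "the", "lazy", "dog"]
durations = [0.4, 0.5, 0.5, 0.6, 0.7, 0.4, 0.3, 0.5, 0.6]
stream, total = insert_time_markers(tokens, durations)
print(" ".join(stream), f"(total {total:.1f}s)")
```

A model trained on streams like this sees its own elapsed time at regular intervals, which is what lets it pace the remaining content toward an instructed target duration.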


[124] 2111.08457

A Novel TSK Fuzzy System Incorporating Multi-view Collaborative Transfer Learning for Personalized Epileptic EEG Detection

In clinical practice, electroencephalography (EEG) plays an important role in the diagnosis of epilepsy. EEG-based computer-aided diagnosis of epilepsy can greatly improve the accuracy of epilepsy detection while reducing the workload of physicians. However, there are many challenges in practical applications for personalized epileptic EEG detection (i.e., training of detection model for a specific person), including the difficulty in extracting effective features from one single view, the undesirable but common scenario of lacking sufficient training data in practice, and the lack of any guarantee that training and test data are identically distributed. To solve these problems, we propose a TSK fuzzy system-based epilepsy detection algorithm that integrates multi-view collaborative transfer learning. To address the challenge due to the limitation of single-view features, multi-view learning ensures the diversity of features by extracting them from different views. The lack of training data for building a personalized detection model is tackled by leveraging the knowledge from the source domain (reference scene) to enhance the performance of the target domain (current scene of interest), where the mismatch of data distributions between the two domains is resolved with an adaptation technique based on maximum mean discrepancy. Notably, the transfer learning and multi-view feature extraction are performed at the same time. Furthermore, the fuzzy rules of the TSK fuzzy system equip the model with strong fuzzy logic inference capability. Hence, the proposed method has the potential to detect epileptic EEG signals effectively, which is demonstrated with the positive results from a large number of experiments on the CHB-MIT dataset.
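The maximum-mean-discrepancy criterion used for domain adaptation above can be illustrated with a pure-Python RBF-kernel estimate on 1-D toy samples (the bandwidth and distributions are illustrative):

```python
import math, random

random.seed(3)

def rbf(x, y, gamma=0.5):
    return math.exp(-gamma * (x - y) ** 2)

def mmd2(xs, ys, gamma=0.5):
    """Biased estimate of squared MMD between two 1-D samples."""
    kxx = sum(rbf(a, b, gamma) for a in xs for b in xs) / len(xs) ** 2
    kyy = sum(rbf(a, b, gamma) for a in ys for b in ys) / len(ys) ** 2
    kxy = sum(rbf(a, b, gamma) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2.0 * kxy

source = [random.gauss(0.0, 1.0) for _ in range(200)]
target_near = [random.gauss(0.1, 1.0) for _ in range(200)]
target_far = [random.gauss(3.0, 1.0) for _ in range(200)]
print(f"near: {mmd2(source, target_near):.4f}  far: {mmd2(source, target_far):.4f}")
```

In transfer learning, such a term is minimized so that source-domain and target-domain feature distributions are pulled together in the kernel feature space.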


[125] 2305.07144

Survey on Integrated Sensing and Communication Performance Modeling and Use Cases Feasibility

As the research community starts to address the key features of 6G cellular standards, one of the agreed bridge topics to be studied already in 5G advanced releases is Integrated Sensing and Communication (ISAC). The first efforts of the research community are focusing on ISAC enablers, fundamental limits, and first demonstrators, which show that the time has come for the deployment of sensing functionalities in cellular standards. This survey paper takes a needed step towards ISAC deployment, providing an analytical toolkit to model cellular systems' sensing performance, accounting for both their fundamental and practical constraints. We then elaborate on the likely features of 6G systems to provide the feasible sensing key performance indicators (KPIs) in the frequency ranges spanned by cellular networks, including the potential new bands available in 6G, the Frequency Range 3 (FR3). We further validate our framework by visually investigating ISAC constraints with simulation examples. Finally, we assess the feasibility of a few selected scenarios that can be enabled by ISAC, highlighting in each of them the limiting factor and, thus, which gaps should be filled by the research and standardization communities in the coming years.


[126] 2310.00934

A control-theoretic simplification of adaptive bitrate (ABR) video streaming

Adaptive bitrate streaming (ABR) over the HyperText Transfer Protocol (HTTP), which raises numerous delicate questions, is nowadays almost the only approach to video streaming. This paper presents elementary solutions to three key issues: 1) A straightforward feedforward control strategy for the bitrate and the buffer level via flatness-based control. 2) Closing the loop permits mitigating unavoidable mismatches and disturbances, such as Internet fluctuations. This is adapted from the new HEOL setting, which mixes model-free and flatness-based controls. 3) An easily implementable closed-form estimate of the bandwidth via algebraic identification techniques is derived, perhaps for the first time. It permits handling severe variations in channel capacity. Several computer experiments and metrics for evaluating the Quality of Experience (QoE) are displayed and discussed.


[127] 2404.18010

Energy-Efficient Federated Learning in Cooperative Communication within Factory Subnetworks

This paper investigates energy-efficient transmission protocols in relay-assisted federated learning (FL) setup within industrial subnetworks, considering latency and power constraints. In the subnetworks, devices collaborate to train a global model by transmitting their local models to the edge-enabled primary access point (pAP) directly or via secondary access points (sAPs), which act as relays to optimize the training latency. We begin by formulating the energy efficiency problem for our proposed transmission protocol. Given its non-convex nature, we decompose it to minimize computational and transmission energy separately. First, we introduce an algorithm that categorizes devices into single-hop and two-hop groups to decrease transmission energy and then selects associated sAPs. Subsequently, we optimize the transmit power, aiming to maximize energy efficiency. To that end, we propose a Sequential Parametric Convex Approximation (SPCA) method to configure system parameters jointly. Numerical results demonstrate a significant reduction in outage probability and at least a twofold savings in total energy consumption, together with faster convergence, compared with single-hop transmission.


[128] 2410.01591

Imaging foundation model for universal enhancement of non-ideal measurement CT

Non-ideal measurement computed tomography (NICT) employs suboptimal imaging protocols to expand CT applications. However, the resulting trade-offs degrade image quality, limiting clinical acceptability. Although deep learning methods have been used to enhance NICT images, their reliance on large training datasets and limited generalizability across diverse settings hinder practical use. We propose the multi-scale integrated Transformer AMPlifier (TAMP), the first imaging foundation model for universal NICT enhancement. Pre-trained on 10.8 million physics-driven simulated NICT images, TAMP generalizes effectively across various NICT settings, defect degrees, and body regions. Moreover, a parameter-efficient fine-tuning strategy enables TAMP to adapt to specific clinical scenarios using only few slices. Extensive experiments, including radiologists and real-world validations, demonstrate that TAMP consistently improves image quality and clinical acceptability, underscoring its significant potential to advance CT imaging and broaden NICT applications in clinical practice.


[129] 2503.11851

Interpretable Deep Learning Framework for Improved Disease Classification in Medical Imaging

Deep learning models have gained increasing adoption in medical image analysis. However, these models often produce overconfident predictions, which can compromise clinical accuracy and reliability. Bridging the gap between high performance and uncertainty awareness remains a crucial challenge in biomedical imaging applications. This study focuses on developing a unified deep learning framework for enhancing feature integration, interpretability, and reliability in prediction. We introduced a cross-guided channel spatial attention architecture that fuses feature representations extracted from EfficientNetB4 and ResNet34. This bidirectional attention approach enables the exchange of information across networks with differing receptive fields, enhancing discriminative and contextual feature learning. For quantitative predictive uncertainty assessment, Monte Carlo (MC)-Dropout is integrated with conformal prediction. This provides statistically valid prediction sets with entropy-based uncertainty visualization. The framework is evaluated on four medical imaging benchmark datasets: chest X-rays of COVID-19, Tuberculosis, Pneumonia, and retinal Optical Coherence Tomography (OCT) images. The proposed framework achieved strong classification performance with an AUC of 99.75% for COVID-19, 100% for Tuberculosis, 99.3% for Pneumonia chest X-rays, and 98.69% for retinal OCT images. Uncertainty-aware inference yields calibrated prediction sets with interpretable examples of uncertainty, supporting transparency. The results demonstrate that bidirectional cross-attention with uncertainty quantification can improve performance and transparency in medical image classification.
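The conformal-prediction component can be sketched in isolation: split conformal prediction sets built on softmax scores. The simulated classifier below is a toy stand-in, MC-Dropout is omitted, and the miscoverage level is an illustrative choice.

```python
import math, random

random.seed(4)
ALPHA = 0.1          # target miscoverage: sets should contain the truth ~90% of the time
N_CLASSES = 3

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def simulate(n):
    """Toy classifier: the true class gets a fixed high logit, others are noise."""
    data = []
    for _ in range(n):
        y = random.randrange(N_CLASSES)
        logits = [2.0 if c == y else random.gauss(0.0, 1.0) for c in range(N_CLASSES)]
        data.append((softmax(logits), y))
    return data

cal, test = simulate(500), simulate(500)

# nonconformity score: 1 - probability assigned to the true class
scores = sorted(1.0 - p[y] for p, y in cal)
k = math.ceil((len(cal) + 1) * (1.0 - ALPHA)) - 1
qhat = scores[min(k, len(scores) - 1)]

def prediction_set(probs):
    return [c for c in range(N_CLASSES) if 1.0 - probs[c] <= qhat]

covered = sum(y in prediction_set(p) for p, y in test) / len(test)
avg_size = sum(len(prediction_set(p)) for p, _ in test) / len(test)
print(f"coverage: {covered:.3f}  avg set size: {avg_size:.2f}")
```

The appeal for clinical use is the distribution-free guarantee: under exchangeability, the prediction set contains the true label with probability at least 1 - ALPHA, regardless of how well the underlying network is calibrated.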


[130] 2504.15453

Barrier-Riccati Synthesis for Nonlinear Safe Control with Expanded Region of Attraction

We present a Riccati-based framework for safety-critical nonlinear control that integrates the barrier states (BaS) methodology with the State-Dependent Riccati Equation (SDRE) approach. The BaS formulation embeds safety constraints into the system dynamics via auxiliary states, enabling safety to be treated as a control objective. To overcome the limited region of attraction in linear BaS controllers, we extend the framework to nonlinear systems using SDRE synthesis applied to the barrier-augmented dynamics and derive a matrix inequality condition that certifies forward invariance of a large region of attraction and guarantees asymptotic safe stabilization. The resulting controller is computed online via pointwise Riccati solutions. We validate the method on an unstable constrained system and cluttered quadrotor navigation tasks, demonstrating improved constraint handling, scalability, and robustness near safety boundaries. This framework offers a principled and computationally tractable solution for synthesizing nonlinear safe feedback in safety-critical environments.
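The pointwise-Riccati idea can be illustrated on a scalar system, where the state-dependent Riccati equation has a closed-form positive root. This sketch omits the barrier states and the matrix-inequality certificate; it is not the paper's construction, only the SDRE mechanism it builds on.

```python
# scalar SDRE sketch: x' = x^3 + u, factored as x' = a(x)*x + b*u with a(x) = x^2
B, Q, R = 1.0, 1.0, 1.0

def sdre_gain(x):
    a = x * x                       # state-dependent coefficient a(x)
    # scalar CARE: 2*a*p - (b^2/r)*p^2 + q = 0, take the positive root
    p = R * (a + (a * a + Q * B * B / R) ** 0.5) / (B * B)
    return B * p / R                # feedback u = -k(x) * x

x, dt = 2.0, 1e-3
traj = [x]
for _ in range(5000):               # simulate 5 seconds with explicit Euler
    u = -sdre_gain(x) * x
    x += dt * (x ** 3 + u)
    traj.append(x)
print(f"x(0) = {traj[0]}, x(5) = {traj[-1]:.4f}")
```

Solving the Riccati equation pointwise at each state, as done online in the paper for the barrier-augmented dynamics, stabilizes the open-loop-unstable cubic system here: the closed loop is x' = -x*(x^4 + 1)^0.5.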


[131] 2505.13818

RainfalLTE: A Zero-effect Rainfall Sensing System Utilizing Existing LTE Infrastructure

Environmental sensing is an important research topic in the integrated sensing and communication (ISAC) system. Current works often focus on static environments, such as buildings and terrains. However, dynamic factors like rainfall can cause serious interference to wireless signals. In this paper, we propose a system called RainfalLTE that utilizes the downlink signal of LTE base stations for device-independent rain sensing. In particular, it is fully compatible with current communication modes and does not require any additional hardware. We evaluate it with LTE data and rainfall information provided by a weather radar in Badaling Town, Beijing. The results show that for 10 classes of rainfall, RainfalLTE achieves over 97% identification accuracy. Our case study shows that the assistance of rainfall information can bring more than 40% energy savings, which provides new opportunities for the design and optimization of ISAC systems.


[132] 2506.09161

From Explanations to Architecture: Explainability-Driven CNN Refinement for Brain Tumor Classification in MRI

Recent brain tumor classification methods often report high accuracy but rely on deep, over-parameterized architectures with limited interpretability, making it difficult to determine whether predictions are driven by tumor-relevant evidence or by spurious cues such as background artifacts or normal tissue. We propose an explainable convolutional neural network (CNN) framework that enhances model transparency without sacrificing classification accuracy. This approach supports more trustworthy AI in healthcare and contributes to SDG 3: Good Health and Well-being by enabling more dependable MRI-based brain tumor diagnosis and earlier detection. Rather than using explainable AI solely for post hoc visualization, we employ Grad-CAM to quantify layer-wise relevance and guide the removal of low-contribution layers, reducing unnecessary depth and parameters while encouraging attention to discriminative tumor regions. We further validate the model's decision rationale using complementary explainability methods, combining Grad-CAM for spatial localization with SHAP and LIME for attribution-based verification. Experiments on multi-class brain MRI datasets show that the proposed model achieves 98.21% accuracy on the primary dataset and 95.74% accuracy on an unseen dataset, indicating strong cross-dataset generalization. Overall, the proposed approach balances simplicity, transparency, and accuracy, supporting more trustworthy and clinically applicable brain tumor classification for improved health outcomes and non-invasive disease detection.


[133] 2506.23204

Data-driven Implementations of Various Generalizations of Balanced Truncation

Quadrature-based approximation of Gramians in standard balanced truncation yields a non-intrusive, data-driven implementation that requires only transfer function samples on the imaginary axis, which can be measured experimentally. This idea has recently been extended to several generalizations of balanced truncation, including positive-real balanced truncation, bounded-real balanced truncation, and balanced stochastic truncation. However, these extensions require samples of some spectral factorizations on the imaginary axis, and no practical method exists to obtain such data experimentally. As a result, these non-intrusive implementations are mainly of theoretical interest at present. This paper shows that if the Gramians in these generalizations are approximated via rational interpolation rather than numerical integration, the resulting non-intrusive implementations do not require spectral factorization samples. Instead, they rely only on transfer function samples. Based on this idea, non-intrusive implementations are first developed for several variants of balanced truncation, wherein the Gramians are approximated implicitly using low-rank Alternating Direction Implicit (ADI) methods for Lyapunov and Riccati equations. These formulations require transfer function samples in the right half of the \(s\)-plane, which cannot be measured experimentally. Next, building on these results, novel data-driven non-intrusive implementations are proposed that require only transfer function samples on the imaginary axis. Hence, unlike the quadrature-based and ADI-based approaches, these non-intrusive formulations can be implemented using practically measurable data. Numerical results are presented for benchmark problems in model order reduction, which show that the proposed non-intrusive implementations achieve accuracy comparable to their intrusive counterparts.


[134] 2509.11571

A Fine-Grained 3D Radio Map Construction Paradigm with Ultra-Low Sampling Rates by Large Generative Models

A radio map captures the spatial distribution of wireless channel parameters, such as the strength of the signal received, across a geographic area. The problem of fine-grained three-dimensional (3D) radio map construction involves inferring a high-resolution radio map for the two-dimensional (2D) area at an arbitrary target height within a 3D region of interest, using radio samples collected by sensors sparsely distributed in that 3D region. Solutions to the problem are crucial for efficient spectrum management in 3D spaces, particularly for drones in the rapidly developing low-altitude economy. However, this problem is challenging due to ultra-sparse sampling, where the number of collected radio samples is far fewer than the desired resolution of the radio map to be estimated. In this paper, we design RadioLAM, a fine-grained 3D radio map construction paradigm built on generative Large Artificial Intelligence Models (LAMs). RadioLAM employs the creative power and the strong generalization capability of LAM to address the ultra-sparse sampling challenge. It consists of three key blocks: 1) an augmentation block, using the radio propagation model to project the radio samples collected at different heights to the 2D area at the target height; 2) a generation block, leveraging a diffusion-based LAM under a Mixture of Experts (MoE) architecture to generate a candidate set of fine-grained radio maps for the target 2D area; and 3) an election block, utilizing the radio propagation model as a guide to find the best map from the candidate set. Extensive simulations show that RadioLAM is able to solve the fine-grained 3D radio map construction problem efficiently from an ultra-low sampling rate of 0.1%, and significantly outperforms the state-of-the-art (SOTA). Furthermore, real-world experiments confirm that RadioLAM achieves superior performance compared to SOTA.


[135] 2509.25064

Data-Driven Resilience Assessment against Sparse Sensor Attacks

We develop a data-driven framework for assessing the resilience of linear time-invariant systems against malicious false-data-injection sensor attacks. Leveraging sparse observability, we propose data-driven resilience metrics and derive necessary and sufficient conditions for two data-availability scenarios. For attack-free data, we show that when a rank condition holds, the resilience level can be computed exactly from the data alone, without prior knowledge of the system parameters. We then extend the analysis to the case where only poisoned data are available and show that the resulting assessment is necessarily conservative. For both scenarios, we provide algorithms for computing the proposed metrics and show that they can be computed in polynomial time under an additional spectral condition. A numerical example illustrates the efficacy and limitations of the proposed framework.
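Sparse observability can be illustrated in the classical model-based setting: a system is s-sparse observable if it remains observable after removing any s sensors. The paper's contribution is assessing this from data alone; the brute-force, model-based check below (with a small pure-Python rank routine) is only for intuition.

```python
from itertools import combinations

def mat_rank(M, tol=1e-9):
    """Rank by Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    rank, rows, cols = 0, len(M), len(M[0]) if M else 0
    for c in range(cols):
        pivot = max(range(rank, rows), key=lambda r: abs(M[r][c]), default=None)
        if pivot is None or abs(M[pivot][c]) < tol:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        for r in range(rows):
            if r != rank and abs(M[r][c]) > tol:
                f = M[r][c] / M[rank][c]
                M[r] = [a - f * b for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def sparse_observability(A, C):
    """Largest s such that the system stays observable after removing ANY s sensors."""
    n, m = len(A), len(C)
    for s in range(m, -1, -1):
        ok = True
        for removed in combinations(range(m), s):
            Ck = [C[i] for i in range(m) if i not in removed]
            obs, blk = [], Ck   # observability matrix [Ck; Ck*A; ...; Ck*A^(n-1)]
            for _ in range(n):
                obs += blk
                blk = matmul(blk, A)
            if not Ck or mat_rank(obs) < n:
                ok = False
                break
        if ok:
            return s
    return 0

A = [[0.0, 1.0], [-1.0, 0.0]]                # undamped oscillator
C = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]     # three redundant sensors
print("resilience level s =", sparse_observability(A, C))
```

A resilience level of s means state reconstruction survives false-data injection on any s sensors; the paper shows how to certify the same quantity from measured trajectories instead of from (A, C).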


[136] 2510.07909

Bloodroot: When Watermarking Turns Poisonous For Stealthy Backdoor

Backdoor data poisoning is a crucial technique for ownership protection and defending against malicious attacks. Embedding hidden triggers in training data can manipulate model outputs, enabling provenance verification, and deterring unauthorized use. However, current audio backdoor methods are suboptimal, as poisoned audio often exhibits degraded perceptual quality, which is noticeable to human listeners. This work explores the intrinsic stealthiness and effectiveness of audio watermarking in achieving successful poisoning. We propose a novel Watermark-as-Trigger concept, integrated into the Bloodroot backdoor framework via adversarial LoRA fine-tuning, which enhances perceptual quality while achieving a much higher trigger success rate and clean-sample accuracy. Experiments on speech recognition (SR) and speaker identification (SID) datasets show that watermark-based poisoning remains effective under acoustic filtering and model pruning. The proposed Bloodroot backdoor framework not only secures data-to-model ownership, but also well reveals the risk of adversarial misuse.


[137] 2510.15598

Observer Design over Hypercomplex Quaternions

We develop observer design over hypercomplex quaternions in a characteristic-polynomial-free framework. Using the standard right-module convention, we derive a right observable companion form and companion polynomial that encode error dynamics through right-eigenvalue similarity classes. We also give an Ackermann-type formula for real-coefficient target polynomials, where polynomial evaluation is similarity-equivariant. The resulting recipes place observer poles directly over quaternions and clarify when companion-coordinate updates and one-shot Ackermann formulas remain valid.


[138] 2510.24191

Sample-based Moving Horizon Estimation

In this paper, we propose a sample-based moving horizon estimation (MHE) scheme for general nonlinear systems to estimate the current system state using irregularly and/or infrequently available measurements. The cost function of the MHE optimization problem is suitably designed to accommodate these irregular output sequences. We also establish that, under a suitable sample-based detectability condition known as sample-based incremental input/output-to-state stability (i-IOSS), the proposed sample-based MHE achieves robust global exponential stability (RGES). Additionally, for the case of linear systems, we draw connections between sample-based observability and sample-based i-IOSS. This demonstrates that previously established conditions for linear systems to be sample-based observable can be utilized to verify or design sampling strategies that satisfy the conditions to guarantee RGES of the sample-based MHE. Finally, the effectiveness of the proposed sample-based MHE is illustrated through a simulation example.


[139] 2511.03002

Robust reduced-order model predictive control using peak-to-peak analysis of filtered signals

We address the design of a model predictive control (MPC) scheme for large-scale linear systems using reduced-order models (ROMs). Our approach uses a ROM, leverages tools from robust control, and integrates them into an MPC framework to achieve computational tractability with robust constraint satisfaction. Our key contribution is a method to obtain guaranteed bounds on the predicted outputs of the full-order system by predicting a (scalar) error-bounding system alongside the ROM. This bound is then used to formulate a robust ROM-based MPC that guarantees constraint satisfaction and robust performance. Our method is developed step-by-step by (i) analysing the error, (ii) bounding the peak-to-peak gain, and (iii) using filtered signals. We demonstrate our method on a 100-dimensional mass-spring-damper system, achieving over four orders of magnitude reduction in conservatism relative to existing approaches.


[140] 2511.07185

Neural Directional Filtering Using a Compact Microphone Array

Beamforming with desired directivity patterns using compact microphone arrays is essential in many audio applications. Directivity patterns achievable using traditional beamformers depend on the number of microphones and the array aperture. Generally, their effectiveness degrades for compact arrays. To overcome these limitations, we propose a neural directional filtering (NDF) approach that leverages deep neural networks to enable sound capture with a predefined directivity pattern. The NDF computes a single-channel complex mask from the microphone array signals, which is then applied to a reference microphone to produce an output that approximates a virtual directional microphone with the desired directivity pattern. We introduce training strategies and propose data-dependent metrics to evaluate the directivity pattern and directivity factor. We show that the proposed method: i) achieves a frequency-invariant directivity pattern even above the spatial aliasing frequency, ii) can approximate diverse and higher-order patterns, iii) can steer the pattern in different directions, and iv) generalizes to unseen conditions. Lastly, experimental comparisons demonstrate superior performance over conventional beamforming and parametric approaches.


[141] 2511.07225

Towards Fair and Efficient allocation of Mobility-on-Demand resources through a Karma Economy

Mobility-on-demand systems like ride-hailing have transformed urban transportation, but they have also exacerbated socio-economic inequalities in access to these services, also due to surge pricing strategies. Although several fairness-aware frameworks have been proposed in smart mobility, they often overlook the temporal and situational variability of user urgency that shapes real-world transportation demands. This paper introduces a non-monetary, Karma-based mechanism that models endogenous urgency, allowing user time-sensitivity to evolve in response to system conditions as well as external factors. We develop a theoretical framework maintaining the efficiency and fairness guarantees of classical Karma economies, while accommodating this realistic user behavior modeling. Applied to a simplified simulated mobility-on-demand scenario, we provide a proof-of-concept illustration of the proposed framework, showing that it exhibits promising behavior in terms of system efficiency and equitable resource allocation, while acknowledging that a full treatment of realistic MoD complexity remains an important direction for future work.


[142] 2511.13971

On the Impact of Voltage Unbalance on Distribution Locational Marginal Prices

Finding clear economic signals for distribution-network operation and expansion is increasingly important as single-phase loads and distributed energy resources proliferate. These devices create phase-to-phase imbalances that manifest as voltage unbalance, a power quality issue that accelerates insulation aging in machines and increases network losses, thereby raising costs for operators and consumers. Traditional grid codes address unbalance via disparate hard limits on various indices; these thresholds differ across standards, offer no dynamic economic incentive, and undermine optimality. This paper proposes instead to treat voltage unbalance as a 'soft limit' by adding penalty terms to grid operation costs within a three-phase optimal power flow, reflecting the cost of reduced asset lifetime under voltage unbalance. This unified approach yields dynamic economic signals, unbalance-aware Distribution Locational Marginal Prices (DLMPs), that reflect the cost of power quality deviations. A novel mathematical decomposition of DLMP is developed, isolating the energy, loss, congestion, and unbalance components. Case studies conducted on two benchmark networks demonstrate the effectiveness and practical value of the proposed method. The results indicate that unbalance penalties reshape nodal prices, produce unexpected phase-level effects, and even allow scenarios where added load reduces unbalance and lowers costs, while providing planners and market designers with actionable insights to balance investment, operation, and power quality in modern distribution systems.
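One standard index such a penalty term could build on is the voltage unbalance factor (VUF) from symmetrical components, the ratio of negative- to positive-sequence voltage magnitude. A minimal sketch follows; the phasor values and the penalty form in the comment are illustrative, not the paper's cost model.

```python
import cmath, math

def vuf(va, vb, vc):
    """Voltage unbalance factor |V_negative| / |V_positive| from phase phasors."""
    a = cmath.exp(2j * math.pi / 3)          # 120-degree rotation operator
    v_pos = (va + a * vb + a * a * vc) / 3
    v_neg = (va + a * a * vb + a * vc) / 3
    return abs(v_neg) / abs(v_pos)

def phasor(mag, deg):
    return mag * cmath.exp(1j * math.radians(deg))

balanced = (phasor(230, 0), phasor(230, -120), phasor(230, 120))
unbalanced = (phasor(240, 0), phasor(225, -118), phasor(222, 121))
print(f"balanced VUF: {vuf(*balanced):.4%}  unbalanced VUF: {vuf(*unbalanced):.4%}")

# a soft-limit penalty could then look like: cost += w * max(0.0, vuf - 0.02) ** 2
```

A perfectly balanced three-phase set has zero negative-sequence component, so its VUF is zero; modest magnitude and angle deviations already push the index toward the 2% region many grid codes use as a hard limit.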


[143] 2511.18493

SAGE: Shape-Adapting Gated Experts for Adaptive Histopathology Image Segmentation

The significant variability in cell size and shape arising from cellular heterogeneity continues to pose a major obstacle to computer-assisted cancer detection on gigapixel Whole Slide Images (WSIs). Current CNN-Transformer hybrids use static computation graphs with fixed routing, which incurs extra computation and makes it harder to adapt to changes in the input. We propose Shape-Adapting Gated Experts (SAGE), an input-adaptive framework that enables dynamic expert routing in heterogeneous visual networks. SAGE reconfigures static backbones into dynamically routed expert architectures via a dual-path design with hierarchical gating and a Shape-Adapting Hub (SA-Hub) that harmonizes feature representations across convolutional and transformer modules. Embodied as SAGE with ConvNeXt and Vision Transformer UNet (SAGE-ConvNeXt+ViT-UNet), our model achieves a Dice score of 95.23% on EBHI, 92.78%/91.42% DSC on GlaS Test A/Test B, and 91.26% DSC at the WSI level on DigestPath, while exhibiting robust generalization under distribution shifts by adaptively balancing local refinement and global context. SAGE establishes a scalable foundation for dynamic expert routing in visual networks, thereby facilitating flexible visual reasoning.


[144] 2511.22986

The Battle of the Water Futures

The highly anticipated 'Battle of the Water Networks' is back with a new challenge for the water community. This competition will be hosted at the 4th International Joint Conference on Water Distribution Systems Analysis and Computing and Control in the Water Industry (WDSA/CCWI 2026), taking place in Paphos, Cyprus, from May 18-21, 2026. This competition embodies the core mission of Water-Futures and the theme for WDSA/CCWI 2026: "Designing the next generation of urban water (and wastewater) systems." The objective is to design and operate a water distribution system over a long-term horizon under deep uncertainty, with interventions applied in stages. For the first time, this challenge features a staged-design approach, unobservable and unknown uncertainties, and incorporates elements of policymaking and artificial intelligence. The solutions will be assessed using a transparent and inspectable open-source evaluation framework.


[145] 2601.08758

M3CoTBench: Benchmark Chain-of-Thought of MLLMs in Medical Image Understanding

Chain-of-Thought (CoT) reasoning has proven effective in enhancing large language models by encouraging step-by-step intermediate reasoning, and recent advances have extended this paradigm to Multimodal Large Language Models (MLLMs). In the medical domain, where diagnostic decisions depend on nuanced visual cues and sequential reasoning, CoT aligns naturally with clinical thinking processes. However, current benchmarks for medical image understanding generally focus on the final answer while ignoring the reasoning path. Such opaque reasoning processes lack reliable bases for judgment, making it difficult to assist doctors in diagnosis. To address this gap, we introduce M3CoTBench, a new benchmark specifically designed to evaluate the correctness, efficiency, impact, and consistency of CoT reasoning in medical image understanding. M3CoTBench features 1) a diverse, multi-level-difficulty dataset covering 24 examination types, 2) 13 tasks of varying difficulty, 3) a suite of CoT-specific evaluation metrics (correctness, efficiency, impact, and consistency) tailored to clinical reasoning, and 4) a performance analysis of multiple MLLMs. M3CoTBench systematically evaluates CoT reasoning across diverse medical imaging tasks, revealing current limitations of MLLMs in generating reliable and clinically interpretable reasoning, and aims to foster the development of transparent, trustworthy, and diagnostically accurate AI systems for healthcare. Project page at this https URL.


[146] 2602.07527

From Noise to Prognosis: A Physics-Grounded, Fractional-Domain Framework for Early Gear Fault Detection in Aviation Drivetrains

Early and reliable detection of gear faults in complex drivetrain systems is critical for aviation safety and operational availability. We present the Local Damage Mode Extractor (LDME), a structured, physics-informed signal processing framework that combines dual-path denoising, multiscale decomposition, fractional-domain enhancement, and statistically principled anomaly scoring to produce interpretable condition indicators without supervision. LDME is organized in three layers: (i) dual-path denoising (DWT with adaptive Savitzky-Golay smoothing) to suppress broadband noise while preserving transient fault structure; (ii) multi-scale damage enhancement using a Teager-Kaiser pre-amplifier followed by a Hadamard-Caputo fractional operator that accentuates non-sinusoidal, low-frequency fault signatures; and (iii) decision fusion, where harmonics-aware Fourier indicators are combined and scored by an unsupervised anomaly detector. Evaluation using the Case Western Reserve University (CWRU) bearing dataset, the HUMS 2023 planetary gearbox benchmark, and a controlled simulated dataset shows that LDME consistently distinguishes nominal, early-crack, and propagated-crack stages under various operating conditions. LDME identifies the primary detection event earlier (198 cycles) than HT-TSA (284 cycles) and advances maintenance recommendation time from 383 to 365 cycles. We discuss its relation to prior art, limitations, and future theoretical directions. All code and experimental configurations are documented for reproducibility.


[147] 2602.12288

Energy-Aware Reinforcement Learning for Robotic Manipulation of Articulated Components in Infrastructure Operation and Maintenance

With the growth of intelligent civil infrastructure and smart cities, operation and maintenance (O&M) increasingly requires safe, efficient, and energy-conscious robotic manipulation of articulated components, including access doors, service drawers, and pipeline valves. However, existing robotic approaches either focus primarily on grasping or target object-specific articulated manipulation, and they rarely incorporate explicit actuation energy into multi-objective optimisation, which limits their scalability and suitability for long-term deployment in real O&M settings. Therefore, this paper proposes an articulation-agnostic and energy-aware reinforcement learning framework for robotic manipulation in intelligent infrastructure O&M. The method combines part-guided 3D perception, weighted point sampling, and PointNet-based encoding to obtain a compact geometric representation that generalises across heterogeneous articulated objects. Manipulation is formulated as a Constrained Markov Decision Process (CMDP), in which actuation energy is explicitly modelled and regulated via a Lagrangian-based constrained Soft Actor-Critic scheme. The policy is trained end-to-end under this CMDP formulation, enabling effective articulated-object operation while satisfying a long-horizon energy budget. Experiments on representative O&M tasks demonstrate 16%-30% reductions in energy consumption, 16%-32% fewer steps to success, and consistently high success rates, indicating a scalable and sustainable solution for infrastructure O&M manipulation.


[148] 2603.07499

Inverse-dynamics observer design for a linear single-track vehicle model with distributed tire dynamics

Accurate estimation of the vehicle's sideslip angle and tire forces is essential for enhancing safety and handling performance in unknown driving scenarios. To this end, the present paper proposes an innovative observer that combines a linear single-track model with a distributed representation of the tires and information collected from standard sensors. In particular, by adopting a comprehensive representation of the tires in terms of hyperbolic partial differential equations (PDEs), the proposed estimation strategy exploits dynamical inversion to reconstruct the lumped and distributed vehicle states solely from yaw rate and lateral acceleration measurements. Simulation results demonstrate the effectiveness of the observer in estimating the sideslip angle and tire forces even in the presence of noise and model uncertainties.


[149] 2603.10880

The potential and viability of V2G for California BEV drivers

Vehicle-to-Grid (V2G) adoption is hindered by uncertainties regarding its effects on battery lifetime and vehicle usability. These uncertainties are compounded by limited insight into real-world vehicle usage. Here, we leverage real-world Californian BEV usage data to design and evaluate a user-centric V2G strategy. We identified four clustered driver profiles for V2G assessment, ranging from "Daily Chargers" to "Public Chargers". We show that V2G participation is most feasible for "Daily Chargers," and that the effects on battery lifetime depend on calendar aging sensitivity. For batteries with low sensitivity, V2G participation increases capacity loss for all drivers. However, for batteries with high sensitivity, V2G participation can lead to negligible changes in capacity or even improved capacity retention, particularly for drivers who tend to keep their batteries at high states of charge. Our findings enable stakeholders to better assess the potential and viability of V2G adoption.


[150] 2603.14266

Modeling, Optimization and Electromagnetic Validation of Stacked Intelligent Metasurfaces by Using a Multiport Network Model

Stacked intelligent metasurfaces (SIMs) extend the concept of reconfigurable intelligent surfaces by cascading multiple programmable layers, enabling advanced electromagnetic wave transformations for communication and sensing applications. However, most existing optimization frameworks rely on simplified channel abstractions that may overlook key electromagnetic effects such as multiport coupling, circuit losses, and non-ideal hardware behavior. In this paper, we develop a modeling and optimization framework for SIMs based on a multiport network representation using scattering parameters. The proposed formulation captures realistic circuit characteristics and mutual interactions among SIM ports while remaining amenable to optimization. The resulting models are validated through electromagnetic simulations, enabling a systematic comparison between idealized and practical SIM configurations. Numerical results for communication and sensing scenarios confirm that the proposed framework provides accurate performance predictions and enables the effective design of SIM configurations under realistic electromagnetic conditions.


[151] 2603.15093

Beam Prediction Based on Multimodal Large Language Models

Accurate beam prediction is a key enabler for next-generation wireless communication systems. In this paper, we propose a multimodal large language model (LLM)-based beam prediction framework that effectively utilizes contextual information, provided by sensory data including RGB camera images and LiDAR point clouds. To effectively fuse heterogeneous modalities, we design specialized modality encoders together with a beam-guided attention masking mechanism and a high-frequency temporal alignment strategy, enabling robust cross-modal feature integration under dynamic environments. Furthermore, we construct a large-scale multimodal dataset for communication, named Multimodal-Wireless, which covers diverse weather and traffic conditions with high-fidelity ray-tracing labels. Extensive simulation results demonstrate that the proposed approach significantly reduces the reliance on oracle angle-of-departure knowledge and consistently outperforms state-of-the-art multimodal LLM-based beam prediction methods in terms of beam accuracy and communication performance, improving the average Top-1 accuracy to 80.8% and the average normalized gain to 89.1%.


[152] 2603.16096

Near-Field Localization via Reconfigurable Antennas

Reconfigurable antennas (RAs) utilize the electromagnetic (EM) domain to provide dynamic control over antenna radiation patterns, which offers an effective way to enhance power efficiency in wireless links. Unlike conventional arrays with fixed element patterns, RAs enable on-demand beam-pattern synthesis by directly controlling each antenna's EM characteristics. While existing research on RAs has primarily focused on improving spectral efficiency, this paper explores their application for downlink localization. Moreover, the majority of existing works focus on far-field scenarios, with little attention to the near-field (NF) regime. Motivated by these gaps, we consider a synthesis model in which each antenna generates desired beampatterns from a finite set of EM basis functions. We then formulate a joint optimization problem for the baseband (BB) and EM precoders with the objective of minimizing the user equipment (UE) position error bound (PEB) in NF conditions. Our analytical derivations and extensive simulation results demonstrate that the proposed hybrid precoder design for RAs significantly improves UE positioning accuracy compared to traditional non-reconfigurable arrays.


[153] 2603.17764

Robust Dynamic Pricing and Admission Control with Fairness Guarantees

Dynamic pricing is commonly used to regulate congestion in shared service systems. This paper is motivated by the fact that in the presence of users with varying price sensitivity (responsiveness), conventional monotonic pricing can lead to unfair outcomes by disproportionately excluding price-elastic users, particularly under high or uncertain demand. We therefore develop a fairness-oriented mechanism under demand uncertainty. The paper's contributions are twofold. First, we show that when fairness is imposed as a hard state constraint, the optimal (revenue maximizing) pricing policy is generally non-monotonic in demand. This structural result departs fundamentally from standard surge pricing rules and reveals that price reduction under heavy load may be necessary to maintain equitable access. Second, we address the problem that price elasticity among heterogeneous users is unobservable. To solve it, we develop a robust dynamic pricing and admission control framework that enforces capacity and fairness constraints for all user type distributions consistent with aggregate measurements. By integrating integral High Order Control Barrier Functions (iHOCBFs) with a robust optimization framework under uncertain user-type distribution, we obtain a controller that guarantees forward invariance of safety and fairness constraints while optimizing revenue. Numerical experiments demonstrate improved fairness and revenue performance relative to monotonic surge pricing policies.


[154] 2603.19796

Mixed-Integer vs. Continuous Model Predictive Control for Binary Thrusters: A Comparative Study

Binary on/off thrusters are commonly used for spacecraft attitude and position control during proximity operations. However, their discrete nature poses challenges for conventional continuous control methods. The control of these discrete actuators is either explicitly formulated as a mixed-integer optimization problem or handled in a two-layer approach, where a continuous controller's output is converted to binary commands using analog-to-digital modulation techniques such as Delta-Sigma modulation. This paper provides the first systematic comparison between these two paradigms for binary thruster control, contrasting continuous Model Predictive Control (MPC) with Delta-Sigma modulation against direct Mixed-Integer MPC (MIMPC) approaches. Furthermore, we propose a new variant of MPC for binary-actuated systems, informed by the state of the Delta-Sigma modulator. Both continuous MPC variants, along with MIMPC, are evaluated through extensive simulations using ESA's REACSA platform. Results demonstrate that while all approaches perform similarly in high-thrust regimes, MIMPC achieves superior fuel efficiency in low-thrust conditions. Continuous MPC with modulation shows instabilities at higher thrust levels, while the binary-informed MPC, which incorporates modulator dynamics, improves robustness and reduces the efficiency gap to MIMPC. Simulated and real-system experiments show that MIMPC offers stability and fuel-efficiency benefits, particularly for resource-constrained missions, while continuous control methods remain attractive for computationally limited applications.
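As background on the two-layer approach, a first-order Delta-Sigma modulator can be sketched in a few lines. This is an illustrative toy; the threshold, horizon, and 30% thrust request below are our own choices, not the paper's configuration:

```python
import numpy as np

def delta_sigma_modulate(u, u_max=1.0):
    """First-order Delta-Sigma modulation of a continuous command u in
    [0, u_max] into a binary on/off thrust sequence. The integrator
    accumulates quantization error so that the running average of the
    binary output tracks the continuous input."""
    integrator = 0.0
    out = np.zeros_like(u)
    for k, uk in enumerate(u):
        integrator += uk
        if integrator >= u_max / 2:   # quantizer threshold (mid-level)
            out[k] = u_max            # thruster on for this sample
            integrator -= u_max       # feed back the quantized value
        else:
            out[k] = 0.0              # thruster off
    return out

u = np.full(100, 0.3)                 # constant 30% thrust request
b = delta_sigma_modulate(u)
print(b[:12])                         # binary firing pattern
print(b.mean())                       # duty cycle, close to 0.3
```

The integrator state referenced by the paper's binary-informed MPC variant would correspond to `integrator` here: it encodes the thrust "owed" to the plant by past quantization decisions.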


[155] 1807.04021

On Bayesian estimation and proximity operators

There are two major routes to address the ubiquitous family of inverse problems appearing in signal and image processing, such as denoising or deblurring. The first route relies on Bayesian modeling, where prior probabilities are used to embody models of both the distribution of the unknown variables and their statistical dependence with respect to the observed data. The estimation process typically relies on the minimization of an expected loss (e.g., minimum mean squared error, or MMSE). The second route has received much attention in the context of sparse regularization and compressive sensing: it consists in designing (often convex) optimization problems involving the sum of a data fidelity term and a penalty term promoting certain types of unknowns (e.g., sparsity, promoted through an $\ell_1$ norm). Well-known relations between these two approaches have led to some widespread misconceptions. In particular, while the so-called Maximum A Posteriori (MAP) estimate with a Gaussian noise model does lead to an optimization problem with a quadratic data-fidelity term, we disprove through explicit examples the common belief that the converse would be true. It has already been shown [7, 9] that for denoising in the presence of additive Gaussian noise, for any prior probability on the unknowns, MMSE estimation can be expressed as a penalized least squares problem, with the apparent characteristics of a MAP estimation problem with Gaussian noise and a (generally) different prior on the unknowns. In other words, the variational approach is rich enough to build all possible MMSE estimators associated with additive Gaussian noise via a well-chosen penalty. We generalize these results beyond Gaussian denoising and characterize noise models for which the same phenomenon occurs.
In particular, we prove that with (a variant of) Poisson noise and any prior probability on the unknowns, MMSE estimation can again be expressed as the solution of a penalized least squares optimization problem. For additive scalar denoising the phenomenon holds if and only if the noise distribution is log-concave. In particular, Laplacian denoising can (perhaps surprisingly) be expressed as the solution of a penalized least squares problem. In the multivariate case, the same phenomenon occurs when the noise model belongs to a particular subset of the exponential family. For multivariate additive denoising, the phenomenon holds if and only if the noise is white and Gaussian.
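The Gaussian-denoising case rests on Tweedie's formula, MMSE(y) = y + sigma^2 d/dy log p(y), which is what allows the posterior mean to be rewritten as the proximity operator of some penalty. A minimal numerical check with an illustrative two-component Gaussian mixture prior (parameters chosen by us, not taken from the paper):

```python
import numpy as np

# Prior: X ~ 0.5*N(-1, tau2) + 0.5*N(1, tau2); noise: N(0, sig2).
mu, tau2, sig2 = np.array([-1.0, 1.0]), 0.25, 0.5
s2 = tau2 + sig2                      # variance of each marginal component of Y

def gauss(y, m, v):
    return np.exp(-(y - m) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)

def mmse(y):
    """Posterior mean E[X|Y=y]: responsibility-weighted per-component means."""
    w = 0.5 * gauss(y, mu, s2)
    w /= w.sum()
    comp_means = (tau2 * y + sig2 * mu) / s2
    return w @ comp_means

def tweedie(y):
    """Tweedie's formula: MMSE(y) = y + sig2 * d/dy log p(y),
    with p the marginal density of the noisy observation."""
    p = (0.5 * gauss(y, mu, s2)).sum()
    dp = (0.5 * gauss(y, mu, s2) * (mu - y) / s2).sum()
    return y + sig2 * dp / p

ys = np.linspace(-3, 3, 7)
print([abs(mmse(y) - tweedie(y)) for y in ys])  # agreement up to float error
```

Because the resulting map y -> MMSE(y) is increasing, it is the proximity operator of some scalar penalty, which is the mechanism behind the MMSE-as-penalized-least-squares results above.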


[156] 2312.04140

Polarimetric Light Transport Analysis for Specular Inter-reflection

Polarization is well known for its ability to decompose diffuse and specular reflections. However, the existing decomposition methods only focus on direct reflection and overlook multiple reflections, especially specular inter-reflection. In this paper, we propose a novel decomposition method for handling specular inter-reflection of metal objects by using a unique polarimetric feature: the rotation direction of linear polarization. This rotation direction serves as a discriminative factor between direct and inter-reflection on specular surfaces. To decompose the reflectance components, we actively rotate the linear polarization of incident light and analyze the rotation direction of the reflected light. We evaluate our method using both synthetic and real data, demonstrating its effectiveness in decomposing specular inter-reflections of metal objects. Furthermore, we demonstrate that our method can be combined with other decomposition methods for a detailed analysis of light transport. As a practical application, we show its effectiveness in improving the accuracy of 3D measurement against strong specular inter-reflection.


[157] 2406.08305

MSADM: Large Language Model (LLM) Assisted End-to-End Network Health Management Based on Multi-Scale Semanticization

Network device and system health management is the foundation of modern network operations and maintenance. Traditional health management methods, relying on expert identification or simple rule-based algorithms, struggle to cope with heterogeneous network (HN) environments. Moreover, current state-of-the-art distributed fault diagnosis methods, which utilize specific machine learning techniques, lack multi-scale adaptivity for heterogeneous device information, resulting in unsatisfactory diagnostic accuracy for HNs. In this paper, we develop an LLM-assisted end-to-end intelligent network health management framework. The framework first proposes a multi-scale data scaling method based on unsupervised learning to address the multi-scale data problem in HNs. Secondly, we combine a semantic rule tree with an attention mechanism to propose a Multi-Scale Semanticized Anomaly Detection Model (MSADM) that generates network semantic information while detecting anomalies. Finally, we embed a chain-of-thought-based large language model downstream to adaptively analyze the fault diagnosis results and create an analysis report containing detailed fault information and optimization strategies. We compare our scheme with other fault diagnosis models and demonstrate that it performs well on several network fault diagnosis metrics.


[158] 2412.11590

A Real-Time System for Scheduling and Managing UAV Delivery in Urban Areas

As urban logistics demand continues to grow, UAV delivery has become a key solution to improve delivery efficiency, reduce traffic congestion, and lower logistics costs. However, to fully leverage the potential of UAV delivery networks, efficient swarm scheduling and management are crucial. In this paper, we propose a real-time scheduling and management system based on the ``Airport-Unloading Station" model, aiming to bridge the gap between high-level scheduling algorithms and low-level execution systems. This system, acting as middleware, accurately translates the requirements from the scheduling layer into specific execution instructions, ensuring that the scheduling algorithms perform effectively in real-world environments. Additionally, we implement three collaborative scheduling schemes involving autonomous ground vehicles (AGVs), unmanned aerial vehicles (UAVs), and ground staff to further optimize overall delivery efficiency. Through extensive experiments, this study demonstrates the rationality and feasibility of the proposed management system, providing a practical solution for the commercial application of UAV delivery in urban areas. Code: this https URL


[159] 2502.13777

Herglotz-NET: Implicit Neural Representation of Spherical Data with Harmonic Positional Encoding

Representing and processing data in spherical domains presents unique challenges, primarily due to the curvature of the domain, which complicates the application of classical Euclidean techniques. Implicit neural representations (INRs) have emerged as a promising alternative for high-fidelity data representation; however, to effectively handle spherical domains, these methods must be adapted to the inherent geometry of the sphere to maintain both accuracy and stability. In this context, we propose Herglotz-NET (HNET), a novel INR architecture that employs a harmonic positional encoding based on complex Herglotz mappings. This encoding yields a well-posed representation on the sphere with interpretable and robust spectral properties. Moreover, we present a unified expressivity analysis showing that any spherical-based INR satisfying a mild condition exhibits a predictable spectral expansion that scales with network depth. Our results establish HNET as a scalable and flexible framework for accurate modeling of spherical data.


[160] 2503.10475

Stratified Topological Autonomy for Long-Range Coordination (STALC)

In this paper, we present Stratified Topological Autonomy for Long-Range Coordination (STALC), a hierarchical planning approach for multi-robot coordination in real-world environments with significant inter-robot spatial and temporal dependencies. At its core, STALC consists of a multi-robot graph-based planner which combines a topological graph with a novel, computationally efficient mixed-integer programming formulation to generate highly-coupled multi-robot plans in seconds. To enable autonomous planning across different spatial and temporal scales, we construct our graphs so that they capture connectivity between free-space regions and other problem-specific features, such as traversability or risk. We then use receding-horizon planners to achieve local collision avoidance and formation control. To evaluate our approach, we consider a multi-robot reconnaissance scenario where robots must autonomously coordinate to navigate through an environment while minimizing the risk of detection by observers. Through simulation-based experiments, we show that our approach is able to scale to address complex multi-robot planning scenarios. Through hardware experiments, we demonstrate our ability to generate graphs from real-world data and successfully plan across the entire hierarchy to achieve shared objectives.


[161] 2505.10116

Discontinuous integro-differential equations and sliding mode control

The paper deals with the analysis and design of sliding mode control systems modeled by finite-dimensional integro-differential equations. The Filippov method and the equivalent control approach are extended to a class of nonlinear discontinuous integro-differential equations and to a class of control systems modeled by infinite-dimensional differential equations in Banach spaces. Sliding mode control algorithms are designed for distributed input delay systems and for a heat control system.
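For intuition on the discontinuous (Filippov-type) dynamics involved, a scalar sliding-mode sketch, assuming a toy system dx/dt = u + d rather than the paper's integro-differential setting:

```python
import numpy as np

# Toy sliding-mode control: dx/dt = u + d(t), with bounded disturbance |d| <= D.
# The discontinuous law u = -K*sign(x), with gain K > D, forces the state onto
# the sliding surface x = 0 in finite time; on the surface the (Filippov)
# solution stays there despite the disturbance, at the cost of chattering.
K, D, dt = 2.0, 1.0, 1e-3
x, traj = 3.0, []
for k in range(8000):
    d = D * np.sin(0.01 * k)          # unknown but bounded disturbance
    u = -K * np.sign(x)               # discontinuous switching control
    x += dt * (u + d)                 # explicit Euler step
    traj.append(x)
print(abs(traj[-1]))                  # small: chattering around x = 0
```

During reaching, |x| decreases at rate at least K - D, so the surface is hit in finite time; the residual oscillation is the discretization-induced chattering that equivalent-control analysis idealizes away.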


[162] 2506.03467

Differentially Private Distribution Release of Gaussian Mixture Models via KL-Divergence Minimization

Gaussian Mixture Models (GMMs) are widely used statistical models for representing multi-modal data distributions, with numerous applications in data mining, pattern recognition, data simulation, and machine learning. However, recent research has shown that releasing GMM parameters poses significant privacy risks, potentially exposing sensitive information about the underlying data. In this paper, we address the challenge of releasing GMM parameters while ensuring differential privacy (DP) guarantees. Specifically, we focus on the privacy protection of mixture weights, component means, and covariance matrices. We propose to use Kullback-Leibler (KL) divergence as a utility metric to assess the accuracy of the released GMM, as it captures the joint impact of noise perturbation on all the model parameters. To achieve privacy, we introduce a DP mechanism that adds carefully calibrated random perturbations to the GMM parameters. Through theoretical analysis, we quantify the effects of privacy budget allocation and perturbation statistics on the DP guarantee, and derive a tractable expression for evaluating KL divergence. We formulate and solve an optimization problem to minimize the KL divergence between the released and original models, subject to a given $(\epsilon, \delta)$-DP constraint. Extensive experiments on both synthetic and real-world datasets demonstrate that our approach achieves strong privacy guarantees while maintaining high utility.
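As a hedged illustration of the ingredients (not the paper's optimized mechanism), one can combine the classic Gaussian mechanism calibration with the closed-form KL divergence between two Gaussians sharing a covariance; the parameter values below are our own:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_mech_sigma(delta_f, eps, delta):
    """Classic Gaussian mechanism calibration: noise std for an L2
    sensitivity delta_f under (eps, delta)-DP (valid for eps <= 1)."""
    return delta_f * np.sqrt(2 * np.log(1.25 / delta)) / eps

def kl_shared_cov(mu0, mu1, sigma2):
    """KL(N(mu0, sigma2*I) || N(mu1, sigma2*I)) = ||mu1 - mu0||^2 / (2*sigma2):
    the utility loss incurred by perturbing one component mean."""
    return np.sum((mu1 - mu0) ** 2) / (2 * sigma2)

mu = np.array([1.0, -2.0])            # one GMM component mean, covariance 1.0*I
sig = gaussian_mech_sigma(delta_f=1.0, eps=1.0, delta=1e-5)
mu_priv = mu + rng.normal(0.0, sig, size=mu.shape)
print(sig)                            # calibrated noise scale
print(kl_shared_cov(mu, mu_priv, 1.0))
```

Tightening the privacy budget (smaller eps or delta) raises `sig` and, in expectation, the KL utility loss; the paper's optimization trades off exactly this kind of budget allocation across weights, means, and covariances.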


[163] 2506.14186

Differentiable Simulation of Hard Contacts with Soft Gradients for Learning and Control

Contact forces introduce discontinuities into robot dynamics that severely limit the use of simulators for gradient-based optimization. Penalty-based simulators such as MuJoCo soften contact resolution to enable gradient computation. However, realistically simulating hard contacts requires stiff solver settings, which leads to incorrect simulator gradients when using automatic differentiation. Conversely, using non-stiff settings strongly increases the sim-to-real gap. We analyze penalty-based simulators to pinpoint why gradients degrade under hard contacts. Building on these insights, we propose DiffMJX, which couples adaptive time integration with penalty-based simulation to substantially improve gradient accuracy. A second challenge is that contact gradients vanish when bodies separate. To address this, we introduce contacts from distance (CFD), which combines penalty-based simulation with straight-through estimation. By applying CFD exclusively in the backward pass, we obtain informative pre-contact gradients while retaining physical realism.
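The straight-through idea behind CFD can be sketched generically: apply the hard contact gate in the forward pass but a smooth surrogate slope in the backward pass. This toy uses a 1-D penalty contact and is our own simplification, not DiffMJX's actual kernels:

```python
import numpy as np

def contact_force_forward(gap, k=100.0):
    """Forward pass: penalty contact force, active only when bodies
    overlap (gap < 0). Zero force, and hence zero gradient, otherwise."""
    return np.where(gap < 0, -k * gap, 0.0)

def contact_force_grad_ste(gap, k=100.0):
    """Straight-through backward pass: pretend the penalty slope -k applies
    everywhere, so separated bodies (gap > 0) still receive an informative
    'contact from distance' gradient instead of exactly zero."""
    return np.full_like(gap, -k)

gap = np.array([-0.02, 0.0, 0.05])    # overlapping, touching, separated
print(contact_force_forward(gap))     # hard forces: nonzero only in overlap
print(contact_force_grad_ste(gap))    # surrogate gradient, nonzero at a distance
```

Because only the backward pass is modified, the rollout itself stays physically realistic while the optimizer still "feels" contacts before they occur.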


[164] 2506.22507

A Unified Cloud-Edge-Terminal Framework for Multimodal Integrated Sensing and Communication

The transition to 6G calls for tightly integrated sensing and communication to support mission-critical services such as autonomous driving, embodied AI, and high-precision telemedicine. However, most existing ISAC designs rely on a single sensing modality (often RF), which limits environmental understanding and becomes a bottleneck in complex and dynamic scenes. This motivates a shift from single-modal to multimodal ISAC, where heterogeneous sensors (e.g., radar, LiDAR, and cameras) complement each other to improve robustness and semantic awareness. In this article, we first summarize key challenges for multimodal ISAC, including heterogeneous fusion, communication overhead, and scalable system design. We then highlight three enabling technologies: large AI models, semantic communications, and multi-agent systems, and discuss how their combination can enable task-oriented multimodal perception. Building on these insights, we propose a unified cloud-edge-terminal (CET) framework that hierarchically distributes intelligence and supports three adaptive operation modes: global fusion mode (GFM), cooperative relay mode (CRM), and peer interaction mode (PIM). A case study evaluates the framework across three modes, demonstrating that GFM achieves the highest accuracy, PIM minimizes latency, and CRM strikes an optimal balance between performance and efficiency. Finally, we conclude with open research issues and future directions.


[165] 2507.01113

Stannic: Systolic STochAstic ONliNe SchedulIng AcCelerator

Efficient workload scheduling is a critical challenge in modern heterogeneous computing environments, particularly in high-performance computing (HPC) systems. Traditional software-based schedulers struggle to efficiently balance workloads due to scheduling overhead, lack of adaptability to stochastic workloads, and suboptimal resource utilization. The scheduling problem further compounds in the context of shared HPC clusters, where job arrivals and processing times are inherently stochastic. Prediction of these elements is possible, but it introduces additional overhead. To perform this complex scheduling, we developed two FPGA-assisted hardware accelerator microarchitectures, Hercules and Stannic. Hercules adopts a task-centric abstraction of stochastic scheduling, whereas Stannic inherits a schedule-centric abstraction. These hardware-assisted solutions leverage parallelism, pre-calculation, and spatial memory access to significantly accelerate scheduling. We accelerate a non-preemptive stochastic online scheduling algorithm to produce heterogeneity-aware schedules in near real time. With Hercules, we achieved a speedup of up to 1060x over a baseline C/C++ implementation, demonstrating the efficacy of a hardware-assisted acceleration for heterogeneity-aware stochastic scheduling. With Stannic, we further improved efficiency, achieving a 7.5x reduction in latency per computation iteration and a 14x increase in the target heterogeneous system size. Experimental results show that the resulting schedules demonstrate efficient machine utilization and low average job latency in stochastic contexts.


[166] 2507.11812

A Multimodal Data Fusion Generative Adversarial Network for Real Time Underwater Sound Speed Field Construction

Sound speed profiles (SSPs) are essential underwater parameters that affect the propagation mode of underwater signals and have a critical impact on the energy efficiency of underwater acoustic communication and the accuracy of underwater acoustic positioning. Traditionally, SSPs can be obtained by matched field processing (MFP), compressive sensing (CS), and deep learning (DL) methods. However, existing methods mainly rely on on-site underwater sonar observation data, which places strict requirements on the deployment of sonar observation systems. To achieve high-precision estimation of the sound speed distribution in a given sea area without on-site underwater measurements, we propose a multi-modal data-fusion generative adversarial network with residual attention blocks (MDF-RAGAN) for SSP construction. To improve the model's ability to capture global spatial feature correlations, we embed attention mechanisms and use residual modules to capture the small disturbances in the deep-ocean sound speed distribution caused by changes in sea surface temperature (SST). Experimental results on a real open dataset show that the proposed model outperforms other state-of-the-art methods, achieving an error of less than 0.3 m/s. Specifically, MDF-RAGAN not only outperforms convolutional neural network (CNN) and spatial interpolation (SITP) methods by nearly a factor of two, but also achieves about a 65.8\% root mean square error (RMSE) reduction compared to the mean profile, which fully reflects the enhancement of overall profile matching by multi-source fusion and cross-modal attention.


[167] 2507.12442

Characterizing State Space Model and Hybrid Language Model Performance with Long Context

Emerging applications such as AR are driving demand for machine intelligence capable of processing continuous and/or long-context inputs on local devices. However, the currently dominant models based on the Transformer architecture suffer from quadratic computational and memory overhead, which hinders applications that must process long contexts. This has spurred a paradigm shift towards new architectures such as State Space Models (SSMs) and SSM-Transformer hybrid models, which provide near-linear scaling and, in recent studies, efficient handling of millions of tokens at high quality. Although such works present promising results, their workload characteristics in terms of computational performance and hardware resource requirements are not yet thoroughly explored, which limits our understanding of their implications for system-level optimizations. To address this gap, we present a comprehensive, comparative benchmarking of carefully selected Transformers, SSMs, and hybrid models specifically for long-context inference on consumer and embedded GPUs. Our analysis shows that SSMs are well suited for long-context, on-device AI on consumer and embedded GPUs. While Transformers are up to 1.9x faster at short sequences (<8K tokens), SSMs demonstrate a dramatic performance inversion, becoming up to 4x faster at very long contexts (~57K tokens), thanks to their linear computational complexity and ~64% smaller memory footprint. Our operator-level analysis reveals that custom SSM kernels such as selective scan, despite being hardware-aware to minimize memory IO, dominate inference runtime on edge platforms, accounting for over 55% of latency due to their sequential, element-wise nature. SSM-Scope is open-sourced at this https URL
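The performance inversion reported above follows directly from the asymptotic costs; a toy flop model makes the crossover concrete. All constants here (head dimension, state size, the flop formulas themselves) are illustrative assumptions, not measurements from the paper.

```python
def attention_flops(n, d):
    # QK^T and the attention-weighted sum over V each cost ~n^2 * d flops
    return 2 * n * n * d

def ssm_scan_flops(n, d, state=16):
    # a selective scan does one recurrent state update per token: ~n * d * state
    return n * d * state

# attention cost grows quadratically in sequence length n, the scan linearly,
# so attention's relative cost grows with n
short, long_ = 8_000, 57_000   # the short/long context lengths from the abstract
d = 64                          # assumed per-head dimension
ratio_short = attention_flops(short, d) / ssm_scan_flops(short, d)
ratio_long = attention_flops(long_, d) / ssm_scan_flops(long_, d)
```

Under this toy model the attention-to-scan cost ratio grows roughly sevenfold between the 8K and 57K settings; the real crossover point depends on kernel efficiency, which is exactly what the paper's operator-level analysis measures.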


[168] 2508.12335

Semi-Infinite Programming for Collision-Avoidance in Optimal and Model Predictive Control

This paper presents a novel approach for collision avoidance in optimal and model predictive control, in which the environment is represented by a large number of points and the robot as a union of padded polygons. The condition that none of the points shall collide with the robot can be written as an infinite number of constraints per obstacle point. We show that the resulting semi-infinite programming (SIP) optimal control problem (OCP) can be efficiently tackled through a combination of two methods: local reduction and an external active-set method. Specifically, this involves iteratively identifying the closest point obstacles, determining the lower-level distance minimizer among all feasible robot shape parameters, and solving the upper-level finitely constrained subproblems. In addition, this paper addresses robust collision avoidance in the presence of ellipsoidal state uncertainties. Enforcing constraint satisfaction over all possible uncertainty realizations extends the dimension of constraint infiniteness. The infinitely many constraints arising from translational uncertainty are handled by local reduction together with the robot shape parameterization, while rotational uncertainty is addressed via a backoff reformulation. A controller based on the proposed method is demonstrated on a real-world robot running at 20 Hz, enabling fast and collision-free navigation in tight spaces. An application to 3D collision avoidance is also demonstrated in simulation.


[169] 2509.16963

A Tactile-based Interactive Motion Planner for Robots in Unknown Cluttered Environments

In unknown cluttered environments with densely stacked objects, the free-motion space is extremely limited, posing significant challenges to motion planners. Collision-free planning methods often suffer catastrophic failures due to unexpected collisions and motion obstructions. To address this issue, this paper proposes an interactive motion planning framework (I-MP) based on a perception-motion loop. This framework empowers robots to autonomously build and reason about contact models, which in turn enables safe expansion of the free-motion space. Specifically, the robot utilizes multimodal tactile perception to acquire stimulus-response signal pairs, enabling real-time identification of objects' mechanical properties and the subsequent construction of contact models. These models are integrated as computational constraints into a reactive planner. Based on fixed-point theorems, the planner computes the spatial state toward the target in real time, avoiding the computational burden of extrapolating high-dimensional interaction models. Furthermore, high-dimensional interaction features are linearly superposed in Cartesian space in the form of energy, and the controller achieves trajectory tracking by solving for the energy gradient from the current state to the planned state. Experimental results showed that at cruising speeds ranging from 0.01 to 0.07 m/s, the robot's initial contact force with objects remained stable at 1.0 ± 0.7 N. In the cabinet-scenario test, where collision-free trajectories were unavailable, I-MP expanded the free-motion space by 37.5% through active interaction, successfully completing the environmental exploration task.


[170] 2509.17340

AERO-MPPI: Anchor-Guided Ensemble Trajectory Optimization for Agile Mapless Drone Navigation

Agile mapless navigation in cluttered 3D environments poses significant challenges for autonomous drones. Conventional mapping-planning-control pipelines incur high computational cost and propagate estimation errors. We present AERO-MPPI, a fully GPU-accelerated framework that unifies perception and planning through an anchor-guided ensemble of Model Predictive Path Integral (MPPI) optimizers. Specifically, we design a multi-resolution LiDAR point-cloud representation that rapidly extracts spatially distributed "anchors" as look-ahead intermediate endpoints, from which we construct polynomial trajectory guides to explore distinct homotopy path classes. At each planning step, we run multiple MPPI instances in parallel and evaluate them with a two-stage multi-objective cost that balances collision avoidance and goal reaching. Implemented entirely with NVIDIA Warp GPU kernels, AERO-MPPI achieves real-time onboard operation and mitigates the local-minima failures of single-MPPI approaches. Extensive simulations in forest, vertical, and inclined environments demonstrate sustained reliable flight above 7 m/s, with success rates above 80% and smoother trajectories compared to state-of-the-art baselines. Real-world experiments on a LiDAR-equipped quadrotor with an NVIDIA Jetson Orin NX 16G confirm that AERO-MPPI runs in real time onboard and consistently achieves safe, agile, and robust flight in complex cluttered environments. Code is available at this https URL.
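The core update that each ensemble member performs is the standard MPPI step: a softmin-weighted average of sampled control perturbations. A minimal sketch follows; the anchor guidance, two-stage cost, and GPU kernels from the paper are omitted, and the function name `mppi_update` is an assumption for illustration.

```python
import numpy as np

def mppi_update(nominal, costs, noises, temperature=1.0):
    """One MPPI update: softmin-weighted average of sampled perturbations.

    nominal: (H, m) nominal control sequence over horizon H, m inputs
    costs:   (K,)   rollout cost of each of the K sampled trajectories
    noises:  (K, H, m) control perturbations used for each rollout
    """
    beta = costs.min()                        # shift for numerical stability
    w = np.exp(-(costs - beta) / temperature) # low-cost rollouts get high weight
    w /= w.sum()
    # weighted sum of perturbations, added to the nominal sequence
    return nominal + np.einsum("k,khm->hm", w, noises)
```

An anchor-guided ensemble, as described above, would run this update from several different polynomial trajectory guides in parallel and keep the best result, which is what mitigates the local minima of a single MPPI instance.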


[171] 2510.12435

The value of storage in electricity distribution: The role of markets

Electricity distribution companies deploy battery storage to defer grid upgrades by reducing peak demand. In deregulated jurisdictions, such storage often sits idle because regulatory constraints bar participation in electricity markets. Here, we develop an optimization framework that, to our knowledge, provides the first formal model of market participation constraints within storage investment and operation planning. Applying the framework to a Massachusetts case study, we find that market participation delivers similar savings as peak demand reduction. Under current conditions, market participation does not increase storage investment, but at very low storage costs, could incentivize deployment beyond local distribution needs. This might run contrary to the separation of distribution from generation in deregulated markets. Our framework can mitigate this concern by identifying investment levels appropriate for local distribution needs.


[172] 2510.14922

TRI-DEP: A Trimodal Comparative Study for Depression Detection Using Speech, Text, and EEG

Depression is a widespread mental health disorder, yet its automatic detection remains challenging. Prior work has explored unimodal and multimodal approaches, with multimodal systems showing promise by leveraging complementary signals. However, existing studies are limited in scope, lack systematic comparisons of features, and suffer from inconsistent evaluation protocols. We address these gaps by systematically exploring feature representations and modelling strategies across EEG, together with speech and text. We evaluate handcrafted features versus pre-trained embeddings, assess the effectiveness of different neural encoders, compare unimodal, bimodal, and trimodal configurations, and analyse fusion strategies with attention to the role of EEG. Consistent subject-independent splits are applied to ensure robust, reproducible benchmarking. Our results show that (i) the combination of EEG, speech and text modalities enhances multimodal detection, (ii) pretrained embeddings outperform handcrafted features, and (iii) carefully designed trimodal models achieve state-of-the-art performance. Our work lays the groundwork for future research in multimodal depression detection.


[173] 2510.17564

Towards a Practical Understanding of Lagrangian Methods in Safe Reinforcement Learning

Safe reinforcement learning addresses constrained optimization problems where maximizing performance must be balanced against safety constraints, and Lagrangian methods are a widely used approach for this purpose. However, the effectiveness of Lagrangian methods depends crucially on the choice of the Lagrange multiplier $\lambda$, which governs the multi-objective trade-off between return and cost. A common practice is to update the multiplier automatically during training. Although this approach is standard in practice, there remains limited empirical evidence on the optimally achievable trade-off between return and cost as a function of $\lambda$, and there is currently no systematic benchmark comparing automated update mechanisms to this empirical optimum. Therefore, we study (i) the constraint geometry for eight widely used safety tasks and (ii) the previously overlooked constraint-regime sensitivity of different Lagrange multiplier update mechanisms in safe reinforcement learning. Through the lens of multi-objective analysis, we present empirical Pareto frontiers that offer a complete visualization of the trade-off between return and cost in the underlying optimization problem. Our results reveal the highly sensitive nature of $\lambda$ and further show that the restrictiveness of the constraint cost can vary across different cost limits within the same task. This highlights the importance of careful cost limit selection across different regions of cost restrictiveness when evaluating safe reinforcement learning methods. We provide a recommended set of cost limits for each evaluated task and offer an open-source code base: this https URL.
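The automatic multiplier update benchmarked here is, in its most common form, a projected dual-ascent step on $\lambda$. A minimal sketch, with the function name and learning rate as illustrative assumptions:

```python
def update_multiplier(lmbda, episode_cost, cost_limit, lr=0.01):
    """Projected dual-ascent step on the Lagrange multiplier.

    Increases lmbda when the observed episode cost exceeds the limit
    (tightening the safety pressure on the policy objective), decreases
    it otherwise, and projects back onto the feasible set lmbda >= 0.
    """
    return max(0.0, lmbda + lr * (episode_cost - cost_limit))
```

The sensitivity studied above comes from the fact that the policy is then trained on the trade-off objective `return - lmbda * cost`, so the trajectory of `lmbda` during training determines which point on the empirical Pareto frontier the agent converges to.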


[174] 2512.24955

MSACL: Multi-Step Actor-Critic Learning with Lyapunov Certificates for Exponentially Stabilizing Control

For stabilizing control tasks, model-free reinforcement learning (RL) approaches face numerous challenges, particularly regarding the issues of effectiveness and efficiency in complex high-dimensional environments with limited training data. To address these challenges, we propose Multi-Step Actor-Critic Learning with Lyapunov Certificates (MSACL), a novel approach that integrates exponential stability into off-policy maximum entropy reinforcement learning (MERL). In contrast to existing RL-based approaches that depend on elaborate reward engineering and single-step constraints, MSACL adopts intuitive reward design and exploits multi-step samples to enable exploratory actor-critic learning. Specifically, we first introduce Exponential Stability Labels (ESLs) to categorize training samples and propose a $\lambda$-weighted aggregation mechanism to learn Lyapunov certificates. Based on these certificates, we further design a stability-aware advantage function to guide policy optimization, thereby promoting rapid Lyapunov descent and robust state convergence. We evaluate MSACL across six benchmarks, comprising four stabilizing and two high-dimensional tracking tasks. Experimental results demonstrate its consistent performance improvements over both standard RL baselines and state-of-the-art Lyapunov-based RL algorithms. Beyond rapid convergence, MSACL exhibits robustness against environmental uncertainties and generalization to unseen reference signals. The source code and benchmarking environments are available at this https URL.


[175] 2601.00614

From 2D to 3D terrain-following area coverage path planning

An algorithm for 3D terrain-following area coverage path planning is presented. Multiple adjacent paths are generated that are (i) locally apart from each other by a distance equal to the working width of the machinery, while (ii) simultaneously floating at a projection distance equal to a specified working height above the terrain. The complexities of the algorithm relative to its 2D equivalent are highlighted. These include uniformly spaced elevation data generation using an Inverse Distance Weighting approach and a local search. Area coverage path planning results for real-world 3D data in an agricultural context are presented to validate the algorithm.
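The Inverse Distance Weighting step used to generate uniformly spaced elevation data can be sketched with the textbook IDW estimator below; the power parameter and function name are assumptions, and the paper's exact variant may differ.

```python
def idw(x, y, samples, power=2.0, eps=1e-12):
    """Inverse Distance Weighting: estimate elevation at query point (x, y)
    from scattered (xi, yi, zi) elevation samples."""
    num = den = 0.0
    for xi, yi, zi in samples:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 < eps:                    # query coincides with a sample point
            return zi
        w = 1.0 / d2 ** (power / 2.0)   # weight falls off as 1 / distance^power
        num += w * zi
        den += w
    return num / den
```

Evaluating this at every node of a regular grid turns scattered survey points into the uniformly spaced elevation raster on which the terrain-following offset paths are computed.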


[176] 2601.12494

Multi-Task Instruction Tuning via Data Scheduling for Low-Resource Arabic AudioLLMs

Audio large language models (LLMs) enable unified speech understanding and generation, but adapting them to linguistically complex and dialect-rich settings such as Arabic-English remains challenging. We present a controlled study of multi-task instruction tuning for an Arabic-centric audio LLM across generative tasks, including ASR and speech and text summarization, and discriminative tasks, including dialect and emotion recognition, in a resource-constrained setting. To support end-to-end Arabic speech summarization, we introduce AraMega-SSum, the first speech summarization resource for training and benchmarking Arabic-centric audio LLMs. We compare four training strategies: (i) Uniform Task Mixing, (ii) Task-Progressive Curriculum (TPC), (iii) Aligner-Based Diverse Sampling (ADS) for training-time batch construction, and (iv) a two-stage TPC->ADS strategy. Our results show a clear efficiency-robustness trade-off: ADS speeds up early convergence and improves paralinguistic performance but hurts the other tasks. The two-stage TPC->ADS strategy gives the most reliable overall balance across tasks, offering practical guidance for adapting omni audio LLMs to low-resource, dialect-rich environments. We will make AraMega-SSum and all experimental resources publicly available to the community.


[177] 2602.16127

Reactive Slip Control in Multifingered Grasping: Hybrid Tactile Sensing and Internal-Force Optimization

We build a low-level reflex control layer driven by fast tactile feedback for multifinger grasp stabilization. Our hybrid approach combines learned tactile slip detection with model-based internal-force control to halt in-hand slip while preserving the object-level wrench. The multimodal tactile stack integrates piezoelectric sensing (PzE) for fast slip cues and piezoresistive arrays (PzR) for contact localization, enabling online construction of a contact-centric grasp representation without prior object knowledge. Experiments demonstrate reactive stabilization of multifingered grasps under external perturbations, without explicit friction models or direct force sensing. In controlled trials, slip onset is detected after 20.4 ± 6 ms. The framework yields a theoretical grasp response latency on the order of 30 ms, with grasp-model updates in less than 5 ms and internal-force selection in about 4 ms. The analysis supports the feasibility of sub-50 ms tactile-driven grasp responses, aligned with human reflex baselines.


[178] 2603.00141

From Scale to Speed: Adaptive Test-Time Scaling for Image Editing

Image Chain-of-Thought (Image-CoT) is a test-time scaling paradigm that improves image generation by extending inference time. Most Image-CoT methods focus on text-to-image (T2I) generation. Unlike T2I generation, image editing is goal-directed: the solution space is constrained by the source image and instruction. This mismatch causes three challenges when applying Image-CoT to editing: inefficient resource allocation with fixed sampling budgets, unreliable early-stage verification using general MLLM scores, and redundant edited results from large-scale sampling. To address this, we propose ADaptive Edit-CoT (ADE-CoT), an on-demand test-time scaling framework to enhance editing efficiency and performance. It incorporates three key strategies: (1) a difficulty-aware resource allocation that assigns dynamic budgets based on estimated edit difficulty; (2) edit-specific verification in early pruning that uses region localization and caption consistency to select promising candidates; and (3) depth-first opportunistic stopping, guided by an instance-specific verifier, that terminates when intent-aligned results are found. Extensive experiments on three SOTA editing models (Step1X-Edit, BAGEL, FLUX.1 Kontext) across three benchmarks show that ADE-CoT achieves superior performance-efficiency trade-offs. With comparable sampling budgets, ADE-CoT obtains better performance with more than 2x speedup over Best-of-N.


[179] 2603.08964

The FABRIC Strategy for Verifying Neural Feedback Systems

Forward reachability analysis is a dominant approach for verifying reach-avoid specifications in neural feedback systems, i.e., dynamical systems controlled by neural networks, and a number of directions have been proposed and studied. In contrast, far less attention has been given to backward reachability analysis for these systems, in part because of the limited scalability of known techniques. In this work, we begin to address this gap by introducing new algorithms for computing both over- and underapproximations of backward reachable sets for nonlinear neural feedback systems. We also describe and implement an integration of these backward reachability techniques with existing ones for forward analysis. We call the resulting algorithm Forward and Backward Reachability Integration for Certification (FaBRIC). We evaluate our algorithms on a representative set of benchmarks and show that they significantly outperform the prior state of the art.


[180] 2603.14374

A Systematic Comparison and Evaluation of Building Ontologies for Deploying Data-Driven Analytics in Smart Buildings

Ontologies play a critical role in data exchange, information integration, and knowledge sharing across diverse smart building applications. Yet, semantic differences between the prevailing building ontologies hamper their purpose of providing data interoperability and restrict the ability to reuse building ontologies in real-world applications. In this paper, we propose and adopt a framework to conduct a systematic comparison and evaluation of four popular building ontologies (Brick Schema, RealEstateCore, Project Haystack and Google's Digital Buildings) from both axiomatic design and assertions in a use case, namely the Terminological Box (TBox) evaluation and the Assertion Box (ABox) evaluation. In the TBox evaluation, we use the SQuaRE-based Ontology Quality Evaluation (OQuaRE) framework and find that Project Haystack and Brick Schema are more compact with respect to axiomatic design. In the ABox evaluation, we apply an empirical study with sample building data, which suggests that Brick Schema and RealEstateCore have greater completeness and expressiveness in capturing the main concepts and relations within the building domain. The results indicate that there is no universal building ontology for integrating Linked Building Data (LBD). We also discuss ontology compatibility and investigate building ontology design patterns (ODPs) to support ontology matching, alignment, and harmonisation.


[181] 2603.15586

Computational Concept of the Psyche

This article presents an overview of approaches to modeling the human psyche in the context of constructing an artificial one. Based on this overview, a concept of cognitive architecture is proposed in which the psyche is viewed as the operating system of a living or artificial subject, comprising a space of states, including the needs that determine the meaning of the subject's being in relation to stimuli from the external world, and intelligence as a decision-making system governing actions in this world to satisfy those needs. Based on this concept, a computational formalization is proposed for creating artificial general intelligence systems through experiential learning in a state space that includes the agent's needs, taking into account their biological or existential significance, along with the agent's sensations and actions. The problem of constructing artificial general intelligence is thus formalized as one of making optimal decisions in the space of specific agent needs under conditions of uncertainty, maximizing success in achieving goals, minimizing existential risks, and maximizing energy efficiency. A minimal experimental implementation of the model is presented.


[182] 2603.15606

Saddle Point Evasion via Curvature-Regularized Gradient Dynamics

Nonconvex optimization underlies many modern machine learning and control tasks, where saddle points pose the dominant obstacle to reliable convergence in high-dimensional settings. Escaping these saddle points deterministically and at a controllable rate remains an open challenge: gradient descent is blind to curvature, stochastic perturbation methods lack deterministic guarantees, and Newton-type approaches suffer from Hessian singularity. We present Curvature-Regularized Gradient Dynamics (CRGD), which augments the objective with a smooth penalty on the most negative Hessian eigenvalue, yielding an augmented cost that serves as an optimization Lyapunov function with user-selectable convergence rates to second-order stationary points. Numerical experiments on a nonconvex matrix factorization example confirm that CRGD escapes saddle points across all tested configurations, with escape time that decreases with the eigenvalue gap, in contrast to gradient descent, whose escape time grows inversely with the gap.
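The mechanism can be illustrated on a toy objective with a strict saddle: penalizing the most negative Hessian eigenvalue adds a deterministic force along the negative-curvature direction, so escape no longer relies on random perturbation. Everything below, including the test function, step sizes, and penalty form, is an illustrative assumption rather than the paper's formulation or experiment.

```python
import numpy as np

def grad_f(p):
    x, y = p
    return np.array([x**3 - x, y])   # f(x, y) = x^4/4 - x^2/2 + y^2/2

def hessian_f(p):
    x, _ = p
    # f has a strict saddle at the origin: Hessian eigenvalues (-1, 1) there
    return np.array([[3 * x**2 - 1.0, 0.0], [0.0, 1.0]])

def crgd_step(p, lr=0.05, mu=0.1, h=1e-5):
    """One curvature-regularized step: gradient of f plus a finite-difference
    gradient of a penalty on the most negative Hessian eigenvalue."""
    def penalty(q):
        lam_min = np.linalg.eigvalsh(hessian_f(q))[0]   # smallest eigenvalue
        return max(0.0, -lam_min) ** 2
    g_pen = np.array([(penalty(p + h * e) - penalty(p - h * e)) / (2 * h)
                      for e in np.eye(2)])
    return p - lr * (grad_f(p) + mu * g_pen)

def gd_step(p, lr=0.05):
    return p - lr * grad_f(p)        # plain gradient descent, curvature-blind

def escape_steps(step_fn, p0=np.array([1e-3, 0.0]), tol=0.5, max_it=10_000):
    """Iterations needed to leave a neighborhood of the saddle at the origin."""
    p = p0.copy()
    for t in range(max_it):
        if abs(p[0]) > tol:
            return t
        p = step_fn(p)
    return max_it
```

Starting from a small offset along the negative-curvature direction, the curvature-regularized dynamics leave the saddle neighborhood in fewer iterations than plain gradient descent, consistent with the abstract's claim of a controllable escape rate.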


[183] 2603.16865

Prescribed-Time Distributed Generalized Nash Equilibrium Seeking

This paper proposes the first fully distributed algorithm for finding the Generalized Nash Equilibrium (GNE) of a non-cooperative game with shared coupling constraints and general cost coupling at a user-prescribed finite time T. As a foundation, a centralized gradient-based prescribed-time convergence result is established for the GNE problem, extending the optimization Lyapunov function framework to gradient dynamics, the only known realization among existing alternatives that naturally decomposes into per-agent computations. Building on this, a fully distributed architecture is designed in which each agent concurrently runs three coupled dynamics: a prescribed-time distributed state observer, a gradient-based optimization law, and a dual consensus mechanism that enforces the shared-multiplier requirement of the variational GNE, thus guaranteeing convergence to the same solution as the centralized case. The simultaneous operation of these layers creates bidirectional perturbations between consensus and optimization, which are resolved through gain synchronization that matches the temporal singularities of the optimization and consensus layers, ensuring all error components vanish exactly at T. The Fischer-Burmeister reformulation renders the algorithm projection-free and guarantees constraint satisfaction at the deadline. Numerical simulations on a Nash-Cournot game and a time-critical sensor coverage problem validate the approach.
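The Fischer-Burmeister reformulation mentioned at the end replaces the complementarity conditions in the variational GNE's KKT system with a smooth equation, which is what makes the algorithm projection-free. The function itself is standard:

```python
import math

def fischer_burmeister(a, b):
    """Fischer-Burmeister NCP function: phi(a, b) = a + b - sqrt(a^2 + b^2).

    phi(a, b) = 0  if and only if  a >= 0, b >= 0, and a * b = 0,
    so each complementarity pair (multiplier, constraint slack) can be
    replaced by the smooth root-finding condition phi = 0.
    """
    return a + b - math.sqrt(a * a + b * b)
```

Driving `phi(multiplier, slack)` to zero within the continuous-time dynamics enforces the shared constraints exactly at the deadline without ever projecting onto the nonnegative orthant.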