The Koopman operator and extended dynamic mode decomposition (EDMD) as a data-driven technique for its approximation have attracted considerable attention as a key tool for modeling, analysis, and control of complex dynamical systems. However, extensions towards control-affine systems, which result in bilinear surrogate models, impose demanding data requirements that complicate their applicability. In this paper, we propose a framework for data-fitting of control-affine mappings to increase the robustness margin in the associated system identification problem and, thus, to provide more reliable bilinear EDMD schemes. In particular, guidelines for input selection based on subspace angles are deduced such that a desired threshold with respect to the minimal singular value is ensured. Moreover, we derive necessary and sufficient conditions of optimality for maximizing the minimal singular value. Further, we demonstrate the usefulness of the proposed approach using bilinear EDMD with control for non-holonomic robots.
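As an illustration of how such an input-selection rule can be operationalized, the sketch below (a minimal, assumed construction, not the paper's algorithm) forms the bilinear EDMD regressor by stacking lifted states with input-weighted copies, measures conditioning by the smallest singular value of that stacked matrix, and greedily admits input channels only while a chosen singular-value threshold is maintained; the dictionary, data, and threshold are placeholders.

```python
import numpy as np

def bilinear_regressor(Psi, U):
    """Stack lifted states Psi (N x T) with input-weighted blocks u_i * Psi."""
    blocks = [Psi] + [U[i][None, :] * Psi for i in range(U.shape[0])]
    return np.vstack(blocks)

def min_singular_value(Z):
    return np.linalg.svd(Z, compute_uv=False)[-1]

def greedy_input_selection(Psi, U_candidates, sigma_threshold):
    """Greedily add candidate input channels while the stacked regressor
    keeps its smallest singular value above the threshold."""
    selected = []
    for i in range(U_candidates.shape[0]):
        trial = selected + [i]
        Z = bilinear_regressor(Psi, U_candidates[trial, :])
        if min_singular_value(Z) >= sigma_threshold:
            selected = trial
    return selected

# toy usage with synthetic data
rng = np.random.default_rng(0)
Psi = rng.standard_normal((5, 200))   # lifted states (dictionary evaluations)
U = rng.standard_normal((3, 200))     # candidate control channels
keep = greedy_input_selection(Psi, U, sigma_threshold=1.0)
print("selected input channels:", keep)
```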
Electrical load classification is generally divided into intrusive and non-intrusive approaches, both having their limitations and advantages. With the non-intrusive approach, controlling appliances is not possible, but the installation cost of a single measurement device is low. In comparison, intrusive, smart plug-based solutions offer individual appliance control, but the installation cost is much higher. There have been very few approaches aiming to combine these methods. In this paper we show that extending a smart plug-based solution to multiple loads per plug can reduce control granularity in favor of lowering the system's installation costs. Connecting various loads to a smart plug through an extension cord is seldom considered in the literature, even though it is common in households. This scenario is also handled by the hybrid load classification solution presented in this paper.
Stability analysis of the Kalman filter under randomly lost measurements has been widely studied. We revisit this problem in a general continuous-time framework, where both the measurement matrix and noise covariance evolve as random processes, capturing variability in sensing locations. Within this setting, we derive a closed-form upper bound on the expected estimation covariance for continuous-time Kalman filtering. We then apply this framework to spatiotemporal field estimation, where the field is modeled as a Gaussian process observed by randomly located, noisy sensors. Using clarity, introduced in our earlier work as a rescaled form of the differential entropy of a random variable, we establish a grid-independent lower bound on the spatially averaged expected clarity. This result exposes fundamental performance limits through a composite sensing parameter that jointly captures the effects of the number of sensors, noise level, and measurement frequency. Simulations confirm that the proposed bound is tight for the discrete-time Kalman filter, which approaches it as the measurement rate decreases, and that the bound avoids the recursive computations required in the discrete-time formulation. Most importantly, the derived limits provide principled and efficient guidelines for the sensor network design problem prior to deployment.
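A minimal numerical sketch of the setting described above, assuming a continuous-time Kalman filter whose covariance obeys the Riccati equation dP/dt = A P + P A^T + Q - P C^T R^{-1} C P, with the measurement matrix C and noise covariance R drawn at random to mimic variable sensing locations; the expected covariance is approximated by Monte Carlo rather than by the paper's closed-form bound, and all models below are toy assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

def riccati_rhs(t, p_flat, A, Q, C, R):
    n = A.shape[0]
    P = p_flat.reshape(n, n)
    dP = A @ P + P @ A.T + Q - P @ C.T @ np.linalg.solve(R, C @ P)
    return dP.ravel()

def expected_covariance(A, Q, P0, sample_CR, t_final=5.0, n_mc=50, seed=0):
    """Monte-Carlo estimate of E[P(t_final)] when C and R are random."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    acc = np.zeros((n, n))
    for _ in range(n_mc):
        C, R = sample_CR(rng)
        sol = solve_ivp(riccati_rhs, (0.0, t_final), P0.ravel(),
                        args=(A, Q, C, R), rtol=1e-8, atol=1e-10)
        acc += sol.y[:, -1].reshape(n, n)
    return acc / n_mc

# toy example: 2-state system observed by a single randomly placed sensor
A = np.array([[-0.5, 1.0], [0.0, -0.5]])
Q = 0.1 * np.eye(2)
P0 = np.eye(2)

def sample_CR(rng):
    c = rng.uniform(0.0, 1.0)          # random "sensing location" weight
    C = np.array([[c, 1.0 - c]])
    R = np.array([[0.05]])
    return C, R

print(expected_covariance(A, Q, P0, sample_CR))
```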
The evolution of electric vehicles (EVs) is reshaping the automotive industry, advocating for more sustainable transportation practices. Accurately predicting EV charging behavior is essential for effective infrastructure planning and optimization. However, the charging load of EVs is significantly influenced by uncertainties and randomness, posing challenges for accurate estimation. Furthermore, existing literature reviews lack a systematic analysis of modeling approaches focused on information fusion. This paper comprehensively reviews EV charging load models from the past five years. We categorize state-of-the-art modeling methods into statistical, simulated, and data-driven approaches, examining the advantages and drawbacks of each. Additionally, we analyze the three bottom-up levels of information-fusion operations in existing models. We conclude by discussing the challenges and opportunities in the field, offering guidance for future research endeavors to advance our understanding and explore practical research directions.
Modern manufacturing demands high flexibility and reconfigurability to adapt to dynamic production needs. Model-based Engineering (MBE) supports rapid production line design, but final reconfiguration requires simulations and validation. Digital Twins (DTs) streamline this process by enabling real-time monitoring, simulation, and reconfiguration. This paper presents a novel platform that automates DT generation and deployment using AutomationML-based factory plans. The platform closes the loop with a generative AI (GAI)-powered simulation scenario generator and automatic physical line reconfiguration, enhancing efficiency and adaptability in manufacturing.
The response-only model class selection capability of a novel deep convolutional neural network method is examined herein in a simple, yet effective, manner. Specifically, the responses from a single degree of freedom, along with their class information, are used to train and validate a one-dimensional convolutional neural network. In doing so, the network selects the model class of new and unlabeled signals without the need for system input information or full system identification. An optional physics-based algorithm enhancement is also examined, using the Kalman filter to fuse the system response signals via the kinematic constraints relating the acceleration and displacement data. Importantly, the method is shown to select the model class under slight signal variations attributed to damping or hysteresis behavior in both linear and nonlinear dynamic systems, as well as in a 3D building finite element model, providing a powerful tool for structural health monitoring applications.
This paper presents a predictive compensation framework for finite-horizon discrete-time linear quadratic dynamic games in the presence of Gauss-Markov deviations from feedback Nash strategies. One player experiences correlated stochastic deviations, modeled via a first-order autoregressive process, while the other compensates using a predictive strategy that anticipates the effect of future correlation. Closed-form recursions for mean and covariance propagation are derived, and the resulting performance improvement is analyzed through the sensitivity of expected cost.
In this work, we start with a generic mathematical framework for the equations of motion (EOM) in flight mechanics with six degrees of freedom (6-DOF) for a general (not necessarily symmetric) fixed-wing aircraft. This mathematical framework incorporates (1) body axes (fixed in the airplane at its center of gravity), (2) inertial axes (fixed in the earth/ground at the take-off point), (3) wind axes (aligned with the flight path/course), (4) spherical flight path angles (azimuth angle measured clockwise from the geographic north, and elevation angle measured above the horizon plane), and (5) spherical flight angles (angle of attack and sideslip angle). We then manipulate these equations of motion to derive a customized version suitable for inverse simulation flight mechanics, where a target flight trajectory is specified and a set of corresponding necessary flight controls to achieve that maneuver is predicted. We then present a numerical procedure for integrating the developed inverse simulation (InvSim) system in time, utilizing (1) symbolic mathematics, (2) the explicit fourth-order Runge-Kutta (RK4) numerical integration technique, and (3) expressions based on the finite difference method (FDM), such that the four necessary control variables (engine thrust force, ailerons' deflection angle, elevators' deflection angle, and rudder's deflection angle) are computed as discrete values over the entire maneuver time. These calculated control values enable the airplane to achieve the desired flight trajectory, which is specified by the three inertial Cartesian coordinates of the airplane, in addition to the Euler roll angle. We finally demonstrate the proposed numerical procedure of flight mechanics inverse simulation (InvSim).
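Two of the numerical ingredients named above are generic enough to sketch, under stated assumptions: an explicit RK4 step for an arbitrary state equation dx/dt = f(t, x, u), and second-order finite-difference derivatives of a sampled target trajectory, as would be needed before solving for the controls. The flight EOM themselves and the control-extraction algebra are not reproduced here; the trajectory below is a placeholder.

```python
import numpy as np

def rk4_step(f, t, x, u, dt):
    """One explicit fourth-order Runge-Kutta step for dx/dt = f(t, x, u)."""
    k1 = f(t, x, u)
    k2 = f(t + dt / 2, x + dt / 2 * k1, u)
    k3 = f(t + dt / 2, x + dt / 2 * k2, u)
    k4 = f(t + dt, x + dt * k3, u)
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def central_differences(y, dt):
    """Second-order finite-difference derivatives of a sampled trajectory."""
    dy = np.empty_like(y)
    dy[1:-1] = (y[2:] - y[:-2]) / (2 * dt)
    dy[0] = (y[1] - y[0]) / dt
    dy[-1] = (y[-1] - y[-2]) / dt
    return dy

# toy usage: prescribed altitude profile and its required climb rate
dt = 0.01
t = np.arange(0.0, 10.0, dt)
z_target = 100.0 * (1 - np.cos(0.2 * t))   # specified trajectory coordinate
z_dot_required = central_differences(z_target, dt)
```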
Modern power systems with high penetration of inverter-based resources exhibit complex dynamic behaviors that challenge the scalability and generalizability of traditional stability assessment methods. This paper presents a dynamic recurrent adjacency memory network (DRAMN) that combines physics-informed analysis with deep learning for real-time power system stability forecasting. The framework employs sliding-window dynamic mode decomposition to construct time-varying, multi-layer adjacency matrices from phasor measurement unit and sensor data to capture system dynamics such as modal participation factors, coupling strengths, phase relationships, and spectral energy distributions. As opposed to processing spatial and temporal dependencies separately, DRAMN integrates graph convolution operations directly within recurrent gating mechanisms, enabling simultaneous modeling of evolving dynamics and temporal dependencies. Extensive validations on modified IEEE 9-bus, 39-bus, and a multi-terminal HVDC network demonstrate high performance, achieving 99.85\%, 99.90\%, and 99.69\% average accuracies, respectively, surpassing all tested benchmarks, including classical machine learning algorithms and recent graph-based models. The framework identifies optimal combinations of measurements that reduce feature dimensionality by 82\% without performance degradation. Correlation analysis between dominant measurements for small-signal and transient stability events validates generalizability across different stability phenomena. DRAMN achieves state-of-the-art accuracy while providing enhanced interpretability for power system operators, making it suitable for real-time deployment in modern control centers.
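A minimal sketch of the sliding-window dynamic mode decomposition step, assuming each window of multichannel measurements is fit with a best-fit linear propagator whose eigenvector magnitudes serve as channel-by-mode participations; the symmetric coupling layer formed from those participations is only an illustrative stand-in for DRAMN's multi-layer adjacency construction, and the data below are synthetic.

```python
import numpy as np

def window_dmd(X):
    """Exact DMD on one window: X has shape (channels, samples)."""
    X1, X2 = X[:, :-1], X[:, 1:]
    A = X2 @ np.linalg.pinv(X1)        # best-fit linear propagator
    eigvals, modes = np.linalg.eig(A)
    return eigvals, modes

def adjacency_from_modes(modes):
    """Symmetric coupling layer from mode participation magnitudes."""
    P = np.abs(modes)                  # channel-by-mode participation
    W = P @ P.T
    np.fill_diagonal(W, 0.0)
    return W / (W.max() + 1e-12)

def sliding_window_adjacency(X, win, hop):
    layers = []
    for start in range(0, X.shape[1] - win + 1, hop):
        _, modes = window_dmd(X[:, start:start + win])
        layers.append(adjacency_from_modes(modes))
    return np.stack(layers)            # (windows, channels, channels)

# toy usage on synthetic multichannel measurements
rng = np.random.default_rng(1)
X = rng.standard_normal((6, 600))      # 6 channels, 600 samples
A_t = sliding_window_adjacency(X, win=100, hop=50)
print(A_t.shape)
```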
Buses are a vital component of metropolitan public transport, yet conventional bus services often struggle with inefficiencies including extended dwelling time, which increases in-vehicle travel time for non-alighting passengers. A stop-less autonomous modular (SLAM) bus service has emerged as a solution, enabling dynamic capacity to reduce dwelling time. Meanwhile, the electrification of buses is advancing as a strategy to mitigate greenhouse gas emissions and reduce operators' costs, but it introduces new operational constraints due to charging requirements. This study develops analytical optimization models for a SLAM bus service that integrates vehicle-to-vehicle (V2V) charging technology. By comparing the optimal designs and their feasibility across the non-charging case and charging strategies, we identify a sequence of operational stages as ridership grows: from idle capacity under low demand, to full small buses, full large buses, and a proposed frequency-capped regime where only bus capacity expands. Under the mobile charging strategy, this progression further includes an energy-limited regime, in which frequency declines, and ultimately infeasibility under high demand. These findings enable operators to deliver more efficient services.
Motivation: High acceleration factors place a limit on MRI image reconstruction. This limit is extended to segmentation models when treating these as subsequent independent processes. Goal: Our goal is to produce segmentations directly from sparse k-space measurements without the need for intermediate image reconstruction. Approach: We employ a transformer architecture to encode global k-space information into latent features. The produced latent vectors condition queried coordinates during decoding to generate segmentation class probabilities. Results: The model is able to produce better segmentations across high acceleration factors than image-based segmentation baselines. Impact: Cardiac segmentation directly from undersampled k-space samples circumvents the need for an intermediate image reconstruction step. This allows the potential to assess myocardial structure and function on higher acceleration factors than methods that rely on images as input.
Integrated Sensing and Communication (ISAC) is critical for efficient spectrum and hardware utilization in future wireless networks like 6G. However, existing channel models lack comprehensive characterization of ISAC-specific dynamics, particularly the relationship between mono-static (co-located Tx/Rx) and bi-static (separated Tx/Rx) sensing configurations. Empirical measurements in dynamic urban microcell (UMi) environments using a 79-GHz FMCW channel sounder help bridge this gap. Two key findings are demonstrated: (1) mono-static and bi-static channels exhibit consistently low instantaneous correlation due to divergent propagation geometries; (2) despite low instantaneous correlation, both channels share unified temporal consistency, evolving predictably under environmental kinematics. These insights, validated across seven real-world scenarios with moving targets/transceivers, inform robust ISAC system design and future standardization.
Background: Non-invasive imaging-based assessment of blood flow plays a critical role in evaluating heart function and structure. Computed Tomography (CT) is a widely-used imaging modality that can robustly evaluate cardiovascular anatomy and function, but direct methods to estimate blood flow velocity from movies of contrast evolution have not been developed. Purpose: This study evaluates the impact of CT imaging on Physics-Informed Neural Networks (PINN)-based flow estimation and proposes an improved framework, SinoFlow, which uses sinogram data directly to estimate blood flow. Methods: We generated pulsatile flow fields in an idealized 2D vessel bifurcation using computational fluid dynamics and simulated CT scans with varying gantry rotation speeds, tube currents, and pulse mode imaging settings. We compared the performance of PINN-based flow estimation using reconstructed images (ImageFlow) to SinoFlow. Results: SinoFlow significantly improved flow estimation performance by avoiding propagating errors introduced by filtered backprojection. SinoFlow was robust across all tested gantry rotation speeds and consistently produced lower mean squared error and velocity errors than ImageFlow. Additionally, SinoFlow was compatible with pulsed-mode imaging and maintained higher accuracy with shorter pulse widths. Conclusions: This study demonstrates the potential of SinoFlow for CT-based flow estimation, providing a more promising approach for non-invasive blood flow assessment. The findings aim to inform future applications of PINNs to CT images and provide a solution for image-based estimation, with reasonable acquisition parameters yielding accurate flow estimates.
Accurate geometric modeling of the aortic valve from 3D CT images is essential for biomechanical analysis and patient-specific simulations to assess valve health or make a preoperative plan. However, it remains challenging to generate aortic valve meshes with both high-quality and consistency across different patients. Traditional approaches often produce triangular meshes with irregular topologies, which can result in poorly shaped elements and inconsistent correspondence due to inter-patient anatomical variation. In this work, we address these challenges by introducing a template-fitting pipeline with deep neural networks to generate structured quad (i.e., quadrilateral) meshes from 3D CT images to represent aortic valve geometries. By remeshing aortic valves of all patients with a common quad mesh template, we ensure a uniform mesh topology with consistent node-to-node and element-to-element correspondence across patients. This consistency enables us to simplify the learning objective of the deep neural networks, by employing a loss function with only two terms (i.e., a geometry reconstruction term and a smoothness regularization term), which is sufficient to preserve mesh smoothness and element quality. Our experiments demonstrate that the proposed approach produces high-quality aortic valve surface meshes with improved smoothness and shape quality, while requiring fewer explicit regularization terms compared to the traditional methods. These results highlight that using structured quad meshes for the template and neural network training not only ensures mesh correspondence and quality but also simplifies the training process, thus enhancing the effectiveness and efficiency of aortic valve modeling.
Diffusion-weighted magnetic resonance imaging allows for reconstruction of models for structural connectivity in the brain, such as fiber orientation distribution functions (ODFs) that describe the distribution, direction, and volume of white matter fiber bundles in a voxel. Crossing white matter fibers in voxels complicate analysis and can lead to errors in downstream tasks like tractography. We introduce one option for separating fiber ODFs by performing a nonlinear optimization to fit ODFs to the given data and penalizing terms that are not symmetric about the axis of the fiber. However, this optimization is non-convex and computationally infeasible across an entire image (approximately 1.01 x 10^6 ms per voxel). We introduce DeepFixel, a spherical convolutional neural network approximation for this nonlinear optimization. We model the probability distribution of fibers as a spherical mesh with higher angular resolution than a truncated spherical harmonic representation. To validate DeepFixel, we compare to the nonlinear optimization and a fixel-based separation algorithm of two-fiber and three-fiber ODFs. The median angular correlation coefficient is 1 (interquartile range of 0.00) using the nonlinear optimization algorithm, 0.988 (0.317) using a fiber bundle elements or "fixel"-based separation algorithm, and 0.973 (0.004) using DeepFixel. DeepFixel is more computationally efficient than the non-convex optimization (0.32 ms per voxel). DeepFixel's spherical mesh representation is successful at disentangling at smaller angular separations and smaller volume fractions than the fixel-based separation algorithm.
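For reference, the angular correlation coefficient reported above can be computed as a normalized inner product of two spherical functions sampled on the same mesh; the equal-weight quadrature and the toy single-fiber ODF below are assumptions for illustration, not the paper's exact metric implementation.

```python
import numpy as np

def angular_correlation(u, v, weights=None):
    """Normalized inner product of two spherical functions sampled on the
    same mesh vertices (equal-weight quadrature unless weights are given)."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    w = np.ones_like(u) if weights is None else np.asarray(weights, dtype=float)
    num = np.sum(w * u * v)
    den = np.sqrt(np.sum(w * u * u)) * np.sqrt(np.sum(w * v * v))
    return num / den

# toy usage: two noisy samplings of the same single-fiber ODF on 1000 directions
rng = np.random.default_rng(0)
dirs = rng.standard_normal((1000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
axis = np.array([1.0, 0.0, 0.0])
odf = np.abs(dirs @ axis) ** 4          # symmetric single-fiber lobe
print(angular_correlation(odf, odf + 0.01 * rng.standard_normal(1000)))
```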
Feedback-based optimization (FBO) provides a simple control framework for regulating a stable dynamical system to the solution of a constrained optimization problem in the presence of exogenous disturbances, and does so without full knowledge of the plant dynamics. However, closed-loop stability requires the controller to operate on a sufficiently slower timescale than the plant, significantly constraining achievable closed-loop performance. Motivated by this trade-off, we propose an estimator-based modification of FBO which leverages dynamic plant model information to eliminate the time-scale separation requirement of traditional FBO. Under this design, the convergence rate of the closed-loop system is limited only by the dominant eigenvalue of the open-loop system. We extend the approach to the case of design based on only an approximate plant model when the original system is singularly perturbed. The results are illustrated via an application to fast power system frequency control using inverter-based resources.
Intelligent reflecting surfaces (IRSs) have become a vital technology for improving the spectrum and energy efficiency of forthcoming wireless networks. Nevertheless, practical implementation is obstructed by the excessive overhead associated with the frequent transmission of phase shift information (PSI) over bandwidth-constrained control links. Current deep learning-based compression methods mitigate this problem but are constrained by elevated decoder complexity, inadequate adaptability to dynamic channels, and static compression ratios. This research presents a prompt-conditioned PSI compression system that integrates prompt learning inspired by large models into the PSI compression process to address these difficulties. A hybrid prompt technique that combines soft prompt concatenation with feature-wise linear modulation (FiLM) facilitates adaptive encoding across diverse signal-to-noise ratios (SNRs), fading types, and compression ratios. Furthermore, a variable-rate technique incorporates the compression ratio into the prompt embeddings through latent masking, enabling a single model to balance reconstruction accuracy across compression ratios. Additionally, a lightweight depthwise convolutional gating (DWCG) decoder facilitates precise feature reconstruction with minimal complexity. Comprehensive simulations indicate that the proposed framework significantly reduces the normalized mean squared error (NMSE) compared to traditional autoencoder baselines, while ensuring robustness across various channel conditions and accommodating variable compression ratios within a single model. These findings underscore the framework's promise as a scalable and efficient solution for real-time IRS control in next-generation wireless networks.
This paper introduces an approach to multi-stream quickest change detection and fault isolation for unnormalized and score-based statistical models. Traditional optimal algorithms in the quickest change detection literature require explicit pre-change and post-change distributions to calculate the likelihood ratio of the observations, which can be computationally expensive for higher-dimensional data and sometimes even infeasible for complex machine learning models. To address these challenges, we propose the min-SCUSUM method, a Hyvarinen score-based algorithm that computes the difference of score functions in place of log-likelihood ratios. We provide a delay and false alarm analysis of the proposed algorithm, showing that its asymptotic performance depends on the Fisher divergence between the pre- and post-change distributions. Furthermore, we establish an upper bound on the probability of fault misidentification in distinguishing the affected stream from the unaffected ones.
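A minimal single-stream sketch of a score-based CUSUM of this kind, assuming scalar Gaussian pre- and post-change models so that the Hyvarinen score has a closed form; the multiplier, threshold, and the multi-stream minimum used for fault isolation in min-SCUSUM are simplified away here, so this is an illustration of the recursion rather than the paper's procedure.

```python
import numpy as np

def hyvarinen_score_gauss(x, mu, sigma2):
    """Hyvarinen score of a scalar Gaussian: 0.5*(d/dx log p)^2 + d^2/dx^2 log p."""
    return (x - mu) ** 2 / (2 * sigma2 ** 2) - 1.0 / sigma2

def scusum(x, score_pre, score_post, multiplier=1.0, threshold=10.0):
    """CUSUM recursion driven by Hyvarinen-score differences.
    Returns the statistic path and the first alarm index (or None)."""
    W, path, alarm = 0.0, [], None
    for k, xk in enumerate(x):
        z = multiplier * (score_pre(xk) - score_post(xk))
        W = max(0.0, W + z)
        path.append(W)
        if alarm is None and W >= threshold:
            alarm = k
    return np.array(path), alarm

# toy usage: mean shift from 0 to 1 at sample 300
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(1.0, 1.0, 300)])
path, alarm = scusum(x,
                     score_pre=lambda v: hyvarinen_score_gauss(v, 0.0, 1.0),
                     score_post=lambda v: hyvarinen_score_gauss(v, 1.0, 1.0))
print("alarm at sample:", alarm)
```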
Co-simulation is a critical approach for the design and analysis of complex cyber-physical systems, enhancing development efficiency and reducing costs. This paper presents a co-simulation framework integrating ROS 2 and MATLAB/Simulink for quadrotor unmanned aerial vehicle (UAV) control system design and verification. First, a six-degree-of-freedom nonlinear dynamic model of the quadrotor is derived based on the Newton-Euler equations. Second, within the proposed framework, a hierarchical control architecture is designed and implemented: an LQR controller for attitude control to achieve optimal regulation performance, and a PID controller for position control to ensure robustness and practical applicability. Third, the architecture of the framework is elaborated, including the implementation details of the cross-platform data exchange mechanism. Simulation results demonstrate the effectiveness of the framework, highlighting its capability to provide an efficient and standardized solution for rapid prototyping and Software-in-the-Loop (SIL) validation of UAV control algorithms.
Low earth orbit (LEO) satellite-assisted integrated sensing and communications (ISAC) systems have been extensively studied to achieve ubiquitous connectivity. However, the severe signal attenuation and limited transmit power at LEO satellites can degrade ISAC performance. To address this issue, this paper investigates movable antenna (MA)-assisted LEO ISAC systems. We derive the communication signal-to-interference-plus-noise ratio (SINR) and the sensing squared position error bound (SPEB) for evaluating the ISAC performance. Then, we jointly optimize the transmit beamforming and the MA positions to minimize the SPEB under the SINR constraints, the total transmit power constraint, and several inherent physical constraints of the MA array. We first simplify the complex problem using semidefinite relaxation (SDR). Then, we present a novel alternating optimization (AO)-based algorithm to decouple the original problem into two subproblems, which are subsequently convexified and solved. Simulations demonstrate the convergence and effectiveness of the proposed algorithm. A better trade-off between communication and sensing performance and at least a 25% gain in sensing performance are achieved compared to the benchmarks.
It is well established that the performance of reconfigurable intelligent surface (RIS)-assisted systems critically depends on the optimal placement of the RIS. Previous works consider either simple coverage maximization or simultaneous optimization of the RIS placement along with the beamforming and reflection coefficients, most of which assume that the locations of the RIS, base station (BS), and users are known. However, in practice, only the spatial variation of user density and the obstacle configuration are likely to be known prior to deployment of the system. Thus, we formulate a non-convex problem that optimizes the position of the RIS with respect to the expected minimum signal-to-interference-plus-noise ratio (SINR) of the system under user randomness, assuming that the system employs joint beamforming after deployment. To solve this problem, we propose a recursive coarse-to-fine methodology that constructs a set of candidate locations for RIS placement based on the obstacle configuration and evaluates them over multiple instantiations from the user distribution. The search is recursively refined within the optimal region identified at each stage to determine the final optimal region for RIS deployment. Numerical results are presented to corroborate our findings.
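A generic sketch of the recursive coarse-to-fine idea, assuming a placeholder evaluator that returns one minimum-SINR sample per random user drop; candidate positions on a square grid are scored by Monte Carlo averaging and the grid is recursively refined around the best-scoring location. The obstacle-aware candidate construction and the joint beamforming of the actual method are not modeled, so all functions below are illustrative assumptions.

```python
import numpy as np

def coarse_to_fine(evaluate, center, half_width, levels=3, grid=5, n_mc=200, seed=0):
    """Recursively refine a square grid of candidate RIS positions around the
    best-scoring one; evaluate(pos, rng) returns one min-SINR sample."""
    rng = np.random.default_rng(seed)
    best = np.asarray(center, dtype=float)
    for _ in range(levels):
        xs = np.linspace(best[0] - half_width, best[0] + half_width, grid)
        ys = np.linspace(best[1] - half_width, best[1] + half_width, grid)
        candidates = [np.array([x, y]) for x in xs for y in ys]
        scores = [np.mean([evaluate(c, rng) for _ in range(n_mc)]) for c in candidates]
        best = candidates[int(np.argmax(scores))]
        half_width /= 2.0               # shrink the search region
    return best

# placeholder objective: expected min "SINR" of users dropped uniformly in a square
def toy_min_sinr(pos, rng):
    users = rng.uniform(0.0, 40.0, size=(4, 2))
    d = np.linalg.norm(users - pos, axis=1) + 1.0   # distance-based path loss proxy
    return np.min(1.0 / d**2)

print(coarse_to_fine(toy_min_sinr, center=(10.0, 10.0), half_width=20.0))
```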
The proliferation of sixth-generation (6G) networks and the massive Internet of Things (IoT) demand wireless communication technologies that are ultra-low-power, secure, and covert. Noise-based communication has emerged as a transformative paradigm that meets these demands by encoding information directly into the statistical properties of noise, rather than using traditional deterministic carriers. This survey provides a comprehensive synthesis of this field, systematically exploring its fundamental principles and key methodologies, including thermal noise modulation (TherMod), noise modulation (NoiseMod) and its variants, and the Kirchhoff-law-Johnson-noise (KLJN) secure key exchange. We address critical practical challenges such as channel estimation and hardware implementation, and highlight emerging applications in simultaneous wireless information and power transfer (SWIPT) and non-orthogonal multiple access (NOMA). Our analysis confirms that noise-based systems offer unparalleled advantages in energy efficiency and covertness, and we conclude by outlining future research directions to realize their potential for enabling the next generation of autonomous and secure wireless networks.
The massive scale of Wireless Foundation Models (FMs) hinders their real-time deployment on edge devices. This letter moves beyond standard knowledge distillation by introducing a novel Multi-Component Adaptive Knowledge Distillation (MCAKD) framework. Key innovations include a Cross-Attention-Based Knowledge Selection (CA-KS) module that selectively identifies critical features from the teacher model, and an Autonomous Learning-Passive Learning (AL-PL) strategy that balances knowledge transfer with independent learning to achieve high training efficiency at a manageable computational cost. When applied to the WiFo FM, the distilled Tiny-WiFo model, with only 5.5M parameters, achieves a 1.6 ms inference time on edge hardware while retaining over 98% of WiFo's performance and its crucial zero-shot generalization capability, making real-time FM deployment viable.
In this work, we consider the problem of executing multiple tasks encoded by value functions, each learned through Reinforcement Learning, using an optimization-based framework. Prior works develop such a framework but leave unanswered the fundamental question of when learned value functions can be concurrently executed. The main contribution of this work is to present theorems which provide necessary and sufficient conditions to concurrently execute sets of learned tasks within subsets of the state space, using a previously proposed min-norm controller. These theorems provide insight into when learned control tasks can be made concurrently executable, when they are already inherently concurrently executable, and when it is not possible at all to make a set of learned tasks concurrently executable using the previously proposed methods. An additional contribution of this work is an extension of the optimization-based framework to account for value functions trained with a discount factor, making the overall framework more compatible with standard RL practices.
This paper investigates the ambiguity function (AF) of the emerging affine frequency division multiplexing (AFDM) waveform for Integrated Sensing and Communication (ISAC) signaling under a pulse shaping regime. Specifically, we first derive the closed-form expression of the average squared discrete period AF (DPAF) for AFDM waveform without pulse shaping, revealing that the AF depends on the parameter $c_1$ and the kurtosis of random communication data, while being independent of the parameter $c_2$. As a step further, we conduct a comprehensive analysis on the AFs of various waveforms, including AFDM, orthogonal frequency division multiplexing (OFDM) and orthogonal chirp-division multiplexing (OCDM). Our results indicate that all three waveforms exhibit the same number of regular depressions in the sidelobes of their AFs, which incurs performance loss for detecting and estimating weak targets. However, the AFDM waveform can flexibly control the positions of depressions by adjusting the parameter $c_1$, which motivates a novel design approach of the AFDM parameters to mitigate the adverse impact of depressions of the strong target on the weak target. Furthermore, a closed-form expression of the average squared DPAF for pulse-shaped random AFDM waveform is derived, which demonstrates that the pulse shaping filter generates the shaped mainlobe along the delay axis and the rapid roll-off sidelobes along the Doppler axis. Numerical results verify the effectiveness of our theoretical analysis and proposed design methodology for the AFDM modulation.
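A small numerical sketch of the quantity analyzed above: the discrete periodic ambiguity function of a length-N sequence, with its squared magnitude averaged over random QPSK data. The AFDM modulator below uses one common inverse discrete affine Fourier transform form with illustrative parameter values, which is an assumption rather than the paper's exact signal model, and pulse shaping is omitted.

```python
import numpy as np

def afdm_modulate(x, c1, c2):
    """Inverse discrete affine Fourier transform (one common AFDM form)."""
    N = len(x)
    n = np.arange(N)[:, None]
    m = np.arange(N)[None, :]
    kernel = np.exp(2j * np.pi * (c1 * n**2 + n * m / N + c2 * m**2))
    return (kernel @ x) / np.sqrt(N)

def periodic_ambiguity(s):
    """A[tau, nu] = sum_n s[n] conj(s[(n+tau) mod N]) exp(-j 2 pi nu n / N)."""
    N = len(s)
    A = np.empty((N, N), dtype=complex)
    for tau in range(N):
        prod = s * np.conj(np.roll(s, -tau))
        A[tau] = np.fft.fft(prod)       # Doppler axis via FFT over n
    return A

# average squared DPAF over random QPSK data (Monte Carlo)
rng = np.random.default_rng(0)
N, trials = 64, 200
acc = np.zeros((N, N))
for _ in range(trials):
    bits = rng.integers(0, 4, N)
    x = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))   # unit-energy QPSK symbols
    s = afdm_modulate(x, c1=1.0 / (2 * N), c2=0.0)    # illustrative parameters
    acc += np.abs(periodic_ambiguity(s)) ** 2
avg_sq_dpaf = acc / trials
```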
This paper investigates the dynamic properties of planar slider-pusher systems as a motion primitive in manipulation tasks. To that end, we construct a differential kinematic model deriving from the limit surface approach under the quasi-static assumption and with negligible contact friction. The quasi-static model applies to generic slider shapes and circular pusher geometries, enabling a differential kinematic representation of the system. From this model, we analyze differential flatness - a property advantageous for control synthesis and planning - and find that slider-pusher systems with polygon sliders and circular pushers exhibit flatness with the centre of mass as a flat output. Leveraging this property, we propose two control strategies for trajectory tracking: a cascaded quasi-static feedback strategy and a dynamic feedback linearization approach. We validate these strategies through closed-loop simulations incorporating perturbed models and input noise, as well as experimental results using a physical setup with a finger-like pusher and vision-based state detection. The real-world experiments confirm the applicability of the simulation gains, highlighting the potential of the proposed methods for practical slider-pusher manipulation tasks.
Brain-computer interfaces (BCIs) allow direct communication between the brain and external devices, frequently using electroencephalography (EEG) to record neural activity. Dimensionality reduction and structured regularization are essential for effectively classifying task-related brain signals, including event-related potentials (ERPs) and motor imagery (MI) rhythms. Current tensor-based approaches, such as Tucker and PARAFAC decompositions, often lack the flexibility needed to fully capture the complexity of EEG data. This study introduces Block-Term Tensor Discriminant Analysis (BTTDA): a novel tensor-based and supervised feature extraction method designed to enhance classification accuracy by providing flexible multilinear dimensionality reduction. Extending Higher Order Discriminant Analysis (HODA), BTTDA uses a novel and interpretable forward model for HODA combined with a deflation scheme to iteratively extract discriminant block terms, improving feature representation for classification. BTTDA and a sum-of-rank-1-terms variant PARAFACDA were evaluated on publicly available ERP (second-order tensors) and MI (third-order tensors) EEG datasets from the MOABB benchmarking framework. Benchmarking revealed that BTTDA and PARAFACDA significantly outperform the traditional HODA method in ERP decoding, resulting in state-of-the-art performance (ROC-AUC = 91.25%). For MI, decoding results of HODA, BTTDA and PARAFACDA were subpar, but BTTDA still significantly outperformed HODA (64.52% > 61.00%). The block-term structure of BTTDA enables interpretable and more efficient dimensionality reduction without compromising discriminative power. This offers a promising and adaptable approach for feature extraction in BCI and broader neuroimaging applications.
Integrating flexible loads and storage systems into the residential sector contributes to the alignment of volatile renewable generation with demand. Besides batteries serving as a short-term storage solution, residential buildings can benefit from a Hydrogen (H2) storage system, allowing seasonal shifting of renewable energy. However, as the initial costs of H2 systems are high, coupling a Fuel Cell (FC) with a Heat Pump (HP) can contribute to the size reduction of the H2 system. The present study develops a Comfort-Oriented Energy Management System for Residential Buildings (ComEMS4Build) comprising Photovoltaics (PV), Battery Energy Storage System (BESS), and H2 storage, where FC and HP are envisioned as complementary technologies. The fuzzy-logic-based ComEMS4Build is designed and evaluated over a period of 12 weeks in winter for a family household building in Germany using a semi-synthetic modeling approach. The Rule-Based Control (RBC), which serves as a lower benchmark, is a scheduler designed to require minimal inputs for operation. The Model Predictive Control (MPC) is intended as a cost-optimal benchmark with an ideal forecast. The results show that ComEMS4Build, similar to MPC, does not violate the thermal comfort of occupants in 10 out of 12 weeks, while RBC has a slightly higher median discomfort of 0.68 Kh. The ComEMS4Build increases the weekly electricity costs by 12.06 EUR compared to MPC, while RBC increases the weekly costs by 30.14 EUR. The ComEMS4Build improves the Hybrid Energy Storage System (HESS) utilization and energy exchange with the main grid compared to the RBC. However, when it comes to the FC operation, the RBC has an advantage, as it reduces the toggling counts by 3.48% and working hours by 7.59% compared to MPC...
This paper explores the application of data-driven system identification techniques in the frequency domain to obtain simplified, control-oriented models of photosynthesis regulation under oscillating light conditions. In-silico datasets are generated using simulations of the physics-based Basic DREAM Model (BDM) of Funete et al. [2024], with light intensity signals comprising DC (static) and AC (modulated) components as input and chlorophyll fluorescence (ChlF) as output. Using these data, the Best Linear Approximation (BLA) method is employed to estimate second-order linear time-invariant (LTI) transfer function models across different operating conditions defined by the DC levels and modulation frequencies of the light intensity. Building on these local models, a Linear Parameter-Varying (LPV) representation is constructed, in which the scheduling parameter is defined by the DC value of the light intensity, providing a compact state-space representation of the system dynamics.
Human action recognition (HAR) with multi-modal inputs (RGB-D, skeleton, point cloud) can achieve high accuracy but typically relies on large labeled datasets and degrades sharply when sensors fail or are noisy. We present Robust Cross-Modal Contrastive Learning (RCMCL), a self-supervised framework that learns modality-invariant representations and remains reliable under modality dropout and corruption. RCMCL jointly optimizes (i) a cross-modal contrastive objective that aligns heterogeneous streams, (ii) an intra-modal self-distillation objective that improves view-invariance and reduces redundancy, and (iii) a degradation simulation objective that explicitly trains models to recover from masked or corrupted inputs. At inference, an Adaptive Modality Gating (AMG) network assigns data-driven reliability weights to each modality for robust fusion. On NTU RGB+D 120 (CS/CV) and UWA3D-II, RCMCL attains state-of-the-art accuracy in standard settings and exhibits markedly better robustness: under severe dual-modality dropout it shows only an 11.5% degradation, significantly outperforming strong supervised fusion baselines. These results indicate that self-supervised cross-modal alignment, coupled with explicit degradation modeling and adaptive fusion, is key to deployable multi-modal HAR.
In this study, we examine the potential of high-resolution forest mapping using L-band interferometric time series datasets and deep learning modeling. Our SAR data are represented by a time series of nine ALOS-2 PALSAR-2 dual-pol SAR images acquired at near-zero spatial baseline over a study site in Asturias, Northern Spain. Reference data are collected using airborne laser scanning. We examine the performance of several candidate deep learning models from the UNet family with various combinations of input polarimetric and interferometric features. In addition to the basic Vanilla UNet, an attention-reinforced UNet model with squeeze-excitation blocks (SeU-Net) and an advanced UNet model with a nested structure and skip pathways are used. Studied features include dual-pol interferometric observables, additionally incorporating model-based derived measures. Results show that adding model-based inverted InSAR features or InSAR coherence layers improves retrieval accuracy compared to using backscatter intensity only. Use of attention mechanisms and nested connection fusion provides better predictions than using Vanilla UNet or traditional machine learning methods. Forest height retrieval accuracies range between 3.1-3.8 m (R2 = 0.45-0.55) at 20 m resolution when only intensity data are used, and improve to less than 2.8 m when both intensity and interferometric coherence features are included. At 40 m and 60 m resolution, retrieval performance further improves, primarily due to higher SNR in both the intensity and interferometric layers. When using intensity at 60 m resolution, the best achieved RMSE is 2.2 m, while when using all suitable input features the achieved error is 1.95 m. We recommend this hybrid approach for L-band SAR retrievals, which is also suitable for NISAR and future ROSE-L missions.
Supervisory controllers control cyber-physical systems to ensure their correct and safe operation. Synthesis-based engineering (SBE) is an approach to largely automate their design and implementation. SBE combines model-based engineering with computer-aided design, allowing engineers to focus on 'what' the system should do (the requirements) rather than 'how' it should do it (design and implementation). In the Eclipse Supervisory Control Engineering Toolkit (ESCET) open-source project, a community of users, researchers, and tool vendors jointly develop a toolkit to support the entire SBE process, particularly through the CIF modeling language and tools. In this paper, we first provide a description of CIF's symbolic supervisory controller synthesis algorithm, and thereby include aspects that are often omitted in the literature, but are of great practical relevance, such as the prevention of runtime errors, handling different types of requirements, and supporting input variables (to connect to external inputs). Secondly, we introduce and describe CIF's benchmark models, a collection of 23 freely available industrial and academic models of various sizes and complexities. Thirdly, we describe recent improvements between ESCET versions v0.8 (December 2022) and v4.0 (June 2024) that affect synthesis performance, evaluate them on our benchmark models, and show the current practical synthesis performance of CIF. Fourthly, we briefly look at multi-level synthesis, a non-monolithic synthesis approach, evaluate its gains, and show that while it can help to further improve synthesis performance, further performance improvements are still needed to synthesize complex models.
This paper presents a deep Koopman-based Economic Model Predictive Control (EMPC) for efficient operation of a laboratory-scale pasteurization unit (PU). The method uses Koopman operator theory to transform the complex, nonlinear system dynamics into a linear representation, enabling the application of convex optimization while representing the complex PU accurately. The deep Koopman model utilizes neural networks to learn the linear dynamics from experimental data, achieving a 45% improvement in open-loop prediction accuracy over conventional N4SID subspace identification. Both analyzed models were employed in the EMPC formulation, which includes interpretable economic costs such as energy consumption, material losses due to inadequate pasteurization, and actuator wear. The feasibility of the EMPC is ensured using slack variables. The deep Koopman EMPC and N4SID EMPC are numerically validated on a nonlinear model of the multivariable PU under external disturbances. The disturbances include a feed pump fail-to-close scenario and the introduction of a cold batch to be pasteurized. The results demonstrate that the deep Koopman EMPC achieves a 32% reduction in total economic cost compared to the N4SID baseline. This improvement is mainly due to reductions in material losses and energy consumption. Furthermore, steady-state operation via Koopman-based EMPC requires 10.2% less electrical energy. The results highlight the practical advantages of integrating deep Koopman representations with economic optimization to achieve resource-efficient control of thermal-intensive plants.
Reconfigurable Intelligent Surfaces (RIS) have been recognized as a promising technology to enhance both communication and sensing performance in integrated sensing and communication (ISAC) systems for future 6G networks. However, existing RIS optimization methods for improving ISAC performance are mainly based on semidefinite relaxation (SDR) or iterative algorithms. The former suffers from high computational complexity and limited scalability, especially when the number of RIS elements becomes large, while the latter yields suboptimal solutions whose performance depends on initialization. In this work, we introduce a lightweight RIS phase design framework that provides a closed-form solution and explicitly accounts for the trade-off between communication and sensing, as well as proportional beam gain distribution toward multiple sensing targets. The key idea is to partition the RIS configuration into two parts: the first part is designed to maximize the communication performance, while the second introduces small perturbations to generate multiple beams for multi-target sensing. Simulation results validate the effectiveness of the proposed approach and demonstrate that it achieves performance comparable to SDR but with significantly lower computational complexity.
Nonlinear dynamical systems with input delays pose significant challenges for prediction, estimation, and control due to their inherent complexity and the impact of delays on system behavior. Traditional linear control techniques often fail in these contexts, necessitating innovative approaches. This paper introduces a novel approach to approximating the Koopman operator using an LSTM-enhanced Deep Koopman model, enabling linear representations of nonlinear systems with time delays. By incorporating Long Short-Term Memory (LSTM) layers, the proposed framework captures historical dependencies and efficiently encodes time-delayed system dynamics into a latent space. Unlike traditional extended Dynamic Mode Decomposition (eDMD) approaches that rely on predefined dictionaries, the LSTM-enhanced Deep Koopman model is dictionary-free, which removes the requirement that the underlying dynamics be known and encoded in the dictionary. Quantitative comparisons with eDMD on a simulated system demonstrate significant gains in prediction accuracy in cases where the true nonlinear dynamics are unknown, and comparable results to eDMD when the system dynamics are known.
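A minimal PyTorch sketch of the architecture implied above, assuming an LSTM that encodes a window of time-delayed measurements into latent (Koopman) coordinates, a bias-free linear layer acting as the Koopman operator for one-step prediction, and a linear decoder; layer sizes, the loss, and the synthetic data are illustrative assumptions rather than the paper's exact model.

```python
import torch
import torch.nn as nn

class LSTMDeepKoopman(nn.Module):
    def __init__(self, state_dim, latent_dim=16, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(state_dim, hidden, batch_first=True)
        self.to_latent = nn.Linear(hidden, latent_dim)
        self.koopman = nn.Linear(latent_dim, latent_dim, bias=False)  # linear advance
        self.decoder = nn.Linear(latent_dim, state_dim)

    def forward(self, history):
        # history: (batch, window, state_dim) of time-delayed measurements
        _, (h, _) = self.encoder(history)
        z = self.to_latent(h[-1])        # latent (Koopman) coordinates
        z_next = self.koopman(z)         # one-step linear prediction
        return self.decoder(z), self.decoder(z_next)

# one hedged training step on synthetic data
model = LSTMDeepKoopman(state_dim=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
hist = torch.randn(32, 10, 2)            # batch of delay windows
x_now, x_next = torch.randn(32, 2), torch.randn(32, 2)
recon, pred = model(hist)
loss = nn.functional.mse_loss(recon, x_now) + nn.functional.mse_loss(pred, x_next)
loss.backward()
opt.step()
opt.zero_grad()
```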
In this study, we present and validate an ensemble-based Hankel Dynamic Mode Decomposition with control (HDMDc) for uncertainty-aware seakeeping predictions of a high-speed catamaran, namely the Delft 372 model. Experimental measurements (time histories) of wave elevation at the longitudinal center of gravity, heave, pitch, notional flight-deck velocity, notional bridge acceleration, and total resistance were collected from irregular wave basin tests on a 1:33.3 scale replica of the Delft 372 model under sea state 5 conditions at Fr = 0.425, and organized into training, validation, and test sets. The HDMDc algorithm constructs an equation-free linear reduced-order model of the seakeeping vessel by augmenting states and inputs with their time-lagged copies to capture nonlinear and memory effects. Two ensembling strategies are compared for seakeeping prediction and uncertainty quantification: Bayesian HDMDc (BHDMDc), which treats the hyperparameters as stochastic variables with prior distributions and samples them to produce posterior mean forecasts with confidence intervals, and Frequentist HDMDc (FHDMDc), which aggregates multiple models obtained over data subsets. The FHDMDc approach is found to improve the accuracy of the predictions compared to the deterministic counterpart, while also providing robust uncertainty estimation; the application of BHDMDc to the present test case is not found to be beneficial in comparison to the deterministic model. FHDMDc-derived probability density functions for the motions closely match both experimental data and URANS results, demonstrating reliable and computationally efficient seakeeping prediction for design and operational support.
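A compact sketch of the HDMDc construction and a frequentist-style ensemble, under assumptions: states and inputs are augmented with time-lagged copies via Hankel stacking, the operators [A B] are obtained by least squares, and ensemble members fitted on random data subsets are aggregated by averaging their forecasts, with the spread used as a simple uncertainty band. Variable names, sizes, and the aggregation rule are placeholders, not the exact FHDMDc procedure.

```python
import numpy as np

def hankel(Z, lags):
    """Stack lags time-shifted copies of Z (dim x T) into a Hankel block."""
    dim, T = Z.shape
    cols = T - lags + 1
    return np.vstack([Z[:, i:i + cols] for i in range(lags)])

def hdmdc_fit(X, U, lags):
    """Least-squares [A B] for Hankel-augmented states driven by inputs."""
    Xh, Uh = hankel(X, lags), hankel(U, lags)
    Z1, Z2 = Xh[:, :-1], Xh[:, 1:]
    Omega = np.vstack([Z1, Uh[:, :-1]])
    AB = Z2 @ np.linalg.pinv(Omega)
    nx = Xh.shape[0]
    return AB[:, :nx], AB[:, nx:]      # A, B

def forecast(A, B, z0, Uh):
    z, out = z0.copy(), []
    for k in range(Uh.shape[1]):
        z = A @ z + B @ Uh[:, k]
        out.append(z)
    return np.array(out).T

# frequentist-style ensembling: fit on random subsets, average the forecasts
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 500))      # e.g. heave, pitch, resistance (synthetic)
U = rng.standard_normal((1, 500))      # e.g. wave elevation input (synthetic)
lags, members = 10, 20
Xh, Uh = hankel(X, lags), hankel(U, lags)
preds = []
for _ in range(members):
    start = rng.integers(0, Xh.shape[1] - 300)
    A, B = hdmdc_fit(X[:, start:start + 300], U[:, start:start + 300], lags)
    preds.append(forecast(A, B, Xh[:, -50], Uh[:, -50:]))
mean_forecast = np.mean(preds, axis=0)
band = np.std(preds, axis=0)           # simple uncertainty band
```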
Phase-shifted carrier pulse-width modulation (PSC-PWM) is a widely adopted scheduling algorithm in cascaded bridge converters, modular multilevel converters, and reconfigurable batteries. However, non-uniform pulse widths for the modules with fixed phase shift angles lead to significant ripple current and output-voltage distortion. Voltage uniformity instead would require optimization of the phase shifts of the individual carriers. However, the computational burden for such optimization is beyond the capabilities of any simple embedded controller. This paper proposes a neural network that emulates the behavior of an instantaneous optimizer with significantly reduced computational burden. The proposed method has the advantages of stable performance in predicting the optimum phase-shift angles under balanced battery modules with non-identical modulation indices without requiring extensive lookup tables, slow numerical optimization, or complex controller tuning. With only one (re)training session for any specified number of modules, the proposed method is readily adaptable to different system sizes. Furthermore, the proposed framework also includes a simple scaling strategy that allows a neural network trained for fewer modules to be reused for larger systems by grouping modules and adjusting their phase shifts. The scaling strategy eliminates the need for retraining. Large-scale assessment, simulations, and experiments demonstrate that, on average, the proposed approach can reduce the current ripple and the weighted total harmonic distortion by up to 50% in real time and is 100 to 500 thousand times faster than a conventional optimizer (e.g., genetic algorithms), making it the only solution for an online application.
Fluorescence Molecular Tomography (FMT) is a promising technique for non-invasive 3D visualization of fluorescent probes, but its reconstruction remains challenging due to the inherent ill-posedness and reliance on inaccurate or often-unknown tissue optical properties. While deep learning methods have shown promise, their supervised nature limits generalization beyond training data. To address these problems, we propose $\mu$NeuFMT, a self-supervised FMT reconstruction framework that integrates implicit neural-based scene representation with explicit physical modeling of photon propagation. Its key innovation lies in jointly optimizing both the fluorescence distribution and the optical properties ($\mu$) during reconstruction, eliminating the need for precise prior knowledge of tissue optics or pre-conditioned training data. We demonstrate that $\mu$NeuFMT robustly recovers accurate fluorophore distributions and optical coefficients even with severely erroneous initial values (0.5$\times$ to 2$\times$ of ground truth). Extensive numerical, phantom, and in vivo validations show that $\mu$NeuFMT outperforms conventional and supervised deep learning approaches across diverse heterogeneous scenarios. Our work establishes a new paradigm for robust and accurate FMT reconstruction, paving the way for more reliable molecular imaging in complex clinically related scenarios, such as fluorescence-guided surgery.
Landmark Inertial Simultaneous Localisation and Mapping (LI-SLAM) is the problem of estimating the locations of landmarks in the environment and the robot's pose relative to those landmarks using landmark position measurements and measurements from an Inertial Measurement Unit (IMU). This paper proposes a nonlinear observer for LI-SLAM posed in continuous time and analyses the observer in a base space that encodes all the observable states of LI-SLAM. The local exponential stability and almost-global asymptotic stability of the error dynamics in the base space are established and validated using simulations.
Remote monitoring of cardiovascular diseases plays an essential role in early detection of abnormal cardiac function, enabling timely intervention, improved preventive care, and personalized patient treatment. Abnormalities in heart sounds can be detected automatically via computer-assisted decision support systems and used as a first-line screening tool for detection of cardiovascular problems, or for monitoring the effects of treatments and interventions. We propose in this paper CardioPHON, an integrated heart sound quality assessment and classification tool that can be used for screening of abnormal cardiac function from phonocardiogram recordings. The model is pretrained in a self-supervised fashion on a collection of six small- and mid-sized heart sound datasets, enables automatic removal of low-quality recordings to ensure that subtle sounds of heart abnormalities are not misdiagnosed, and provides state-of-the-art performance for the heart sound classification task. The multimodal model that combines audio and socio-demographic features demonstrated superior performance, achieving the best ranking on the official leaderboard of the 2022 George B. Moody PhysioNet heart sound challenge, whereas the unimodal model, which is based only on phonocardiogram recordings, holds the first position among the unimodal approaches (rank 4 overall), surpassing models that utilize multiple modalities. CardioPHON is the first publicly released pretrained model in the domain of heart sound recordings, facilitating the development of data-efficient artificial intelligence models that can generalize to various downstream tasks in cardiovascular diagnostics.
In this paper, we focus on the recovery control of nonlinear systems from attacks or failures. The main challenges of this problem lie in (1) learning the unknown dynamics caused by attacks or failures with formal guarantees, and (2) finding the invariant set of states that formally bounds the state deviations allowed from the nominal trajectory. To solve this problem, we propose to apply Recurrent Equilibrium Networks (RENs) to learn the unknown dynamics using data from the real-time system states. The input-output property of this REN model is guaranteed by incremental integral quadratic constraints (IQCs). Then, we propose a funnel-based control method to achieve system recovery from the deviated states. In particular, a sufficient condition for nominal trajectory stabilization is derived together with the invariant funnels along the nominal trajectory. Eventually, the effectiveness of our proposed control method is illustrated by a simulation example of a DC microgrid control application.
This paper presents a switch-type attenuator operating from 20 to 100 GHz. The attenuator adopts a capacitive compensation technique to reduce phase error. The small resistors in this work are implemented with metal lines to reduce the intrinsic parasitic capacitance, which helps minimize the amplitude and phase errors over a wide frequency range. Moreover, the utilization of metal lines also reduces the chip area. In addition, a continuous tuning attenuation unit is employed to improve the overall attenuation accuracy of the attenuator. The passive attenuator is designed and fabricated in a standard 65-nm CMOS process. The measurement results reveal a relative attenuation range of 7.5 dB with a continuous tuning step within 20-100 GHz. The insertion loss is 1.6-3.8 dB within the operation band, while the return losses of all states are better than 11.5 dB. The RMS amplitude and phase errors are below 0.15 dB and 1.6°, respectively.
Hybrid power plants (HPPs) combine multiple power generators (conventional/variable) and energy storage capabilities to address generation inadequacy and support grid demands. This paper introduces a modeling and control design framework for HPPs consisting of a wind farm, solar plant, and battery storage. Specifically, this work adapts established modeling paradigms for wind farms, solar plants, and batteries into a control-affine form suitable for control design at the supervisory level. In the case of the wind and battery models, generator torque and cell current control laws are developed using nonlinear control and control barrier function techniques to track a command from a supervisory control law while maintaining safe and stable operation. The utility of this modeling and control framework is illustrated through a test case using a utility demand signal for tracking, time-varying wind and irradiance data, and a rule-based supervisory control law.
Persistent cost and schedule deviations remain a major challenge in the U.S. construction industry, revealing the limitations of deterministic CPM and static document-based estimating. This study presents an integrated 4D/5D digital-twin framework that couples Building Information Modeling (BIM) with natural-language processing (NLP)-based cost mapping, computer-vision (CV)-driven progress measurement, Bayesian probabilistic CPM updating, and deep-reinforcement-learning (DRL) resource-leveling. A nine-month case implementation on a Dallas-Fort Worth mid-rise project demonstrated measurable gains in accuracy and efficiency: 43% reduction in estimating labor, 6% reduction in overtime, and 30% project-buffer utilization, while maintaining an on-time finish at 128 days within P50-P80 confidence bounds. The digital-twin sandbox also enabled real-time "what-if" forecasting and traceable cost-schedule alignment through a 5D knowledge graph. Findings confirm that integrating AI-based analytics with probabilistic CPM and DRL enhances forecasting precision, transparency, and control resilience. The validated workflow establishes a practical pathway toward predictive, adaptive, and auditable construction management.
Designing frictional interfaces to exhibit prescribed macroscopic behavior is a challenging inverse problem, made difficult by the non-uniqueness of solutions and the computational cost of contact simulations. Traditional approaches rely on heuristic search over low-dimensional parameterizations, which limits their applicability to more complex or nonlinear friction laws. We introduce a generative modeling framework using Variational Autoencoders (VAEs) to infer surface topographies from target friction laws. Trained on a synthetic dataset composed of 200 million samples constructed from a parameterized contact mechanics model, the proposed method enables efficient, simulation-free generation of candidate topographies. We examine the potential and limitations of generative modeling for this inverse design task, focusing on balancing accuracy, throughput, and diversity in the generated solutions. Our results highlight trade-offs and outline practical considerations when balancing these objectives. This approach paves the way for near-real-time control of frictional behavior through tailored surface topographies.
The electric grid is increasingly vital, supporting essential services such as healthcare, heating and cooling, transportation, telecommunications, and water systems. This growing dependence on reliable power underscores the need for enhanced grid resilience. This study presents Eversource's Climate Vulnerability Assessment (CVA) for bulk distribution substations in Massachusetts, evaluating risks from storm surge, sea level rise, precipitation, and extreme temperatures. The focus is on developing a cost-efficient model to guide targeted resilience investments. This is achieved by overcoming the limitations of single-variable analyses through hazard-specific assessments that integrate spatial, climate, electrical asset, and other relevant data, and by applying sensitivity analysis to establish data-driven thresholds for actionable climate risks. By integrating geospatial analysis and data modeling with power engineering principles, this study provides a practical and replicable framework for equitable, data-informed climate adaptation planning. The results show that thresholds for certain climate hazards can be highly sensitive and can yield significantly larger sets of substations requiring mitigation measures to adequately adapt to climate change, indicating that high-fidelity long-term climate projections are critical.
Traumatic brain injury (TBI) is intrinsically heterogeneous, and typical clinical outcome measures such as the Glasgow Coma Scale obscure this diversity. The large variability in severity and patient outcomes renders it difficult to link structural damage to functional deficits. The Federal Interagency Traumatic Brain Injury Research (FITBIR) repository contains large-scale multi-site magnetic resonance imaging data of varying resolutions and acquisition parameters (25 shared studies with 7,693 sessions that have age, sex, and TBI status defined: 5,811 TBI and 1,882 controls). To reveal shared pathways of injury in TBI through imaging, we analyzed T1-weighted images from these sessions by first harmonizing to a local dataset and segmenting 132 regions of interest (ROIs) in the brain. After running quality assurance, calculating the volumes of the ROIs, and removing outliers, we calculated the z-scores of volumes for all participants relative to the mean and standard deviation of the controls. We regressed out sex, age, and total brain volume with a multivariate linear regression, and we found significant differences in 37 ROIs between subjects with TBI and controls (p < 0.05 with independent t-tests and false discovery rate correction). Using independent component analysis and clustering of the component loadings of those with TBI, we found that differences originated in 1) the brainstem, occipital pole, and structures posterior to the orbit, 2) subcortical gray matter and insular cortex, and 3) cerebral and cerebellar white matter.
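The statistical pipeline described above (z-scoring against controls, regressing out covariates, then FDR-corrected group tests) can be sketched as follows; the file name, column names, and ROI labels are hypothetical placeholders rather than the FITBIR-derived tables.

```python
# Sketch of the ROI-volume group comparison: z-score against controls, regress
# out covariates, then FDR-corrected independent t-tests. Column names and the
# data source are placeholders.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multitest import multipletests

df = pd.read_csv("roi_volumes.csv")          # hypothetical table: one row per session
roi_cols = [c for c in df.columns if c.startswith("roi_")]
controls = df[df["tbi"] == 0]

# z-score each ROI volume relative to the control mean and standard deviation
z = (df[roi_cols] - controls[roi_cols].mean()) / controls[roi_cols].std(ddof=1)

# regress out sex, age, and total brain volume; keep the residuals
X = np.column_stack([np.ones(len(df)), df["sex"], df["age"], df["total_brain_vol"]])
beta, *_ = np.linalg.lstsq(X, z.values, rcond=None)
resid = z.values - X @ beta

# independent t-tests per ROI with false discovery rate correction
tbi_mask = df["tbi"].values == 1
pvals = [stats.ttest_ind(resid[tbi_mask, i], resid[~tbi_mask, i]).pvalue
         for i in range(len(roi_cols))]
reject, p_fdr, *_ = multipletests(pvals, alpha=0.05, method="fdr_bh")
significant = [r for r, keep in zip(roi_cols, reject) if keep]
print(f"{len(significant)} ROIs significant after FDR correction")
```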
We address the problem of quickest change detection in Markov processes with unknown transition kernels. The key idea is to learn the conditional score $\nabla_{\mathbf{y}} \log p(\mathbf{y}|\mathbf{x})$ directly from sample pairs $( \mathbf{x},\mathbf{y})$, where both $\mathbf{x}$ and $\mathbf{y}$ are high-dimensional data generated by the same transition kernel. In this way, we avoid explicit likelihood evaluation and provide a practical way to learn the transition dynamics. Based on this estimation, we develop a score-based CUSUM procedure that uses conditional Hyvarinen score differences to detect changes in the kernel. To ensure bounded increments, we propose a truncated version of the statistic. With Hoeffding's inequality for uniformly ergodic Markov processes, we prove exponential lower bounds on the mean time to false alarm. We also prove asymptotic upper bounds on detection delay. These results give both theoretical guarantees and practical feasibility for score-based detection in high-dimensional Markov models.
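A minimal numerical sketch of a score-based CUSUM of this kind is given below. The conditional score functions are stand-ins for learned networks, the Hyvarinen score is approximated by finite differences, and the truncation level and threshold are illustrative choices rather than values from the paper.

```python
# Minimal sketch of a score-based CUSUM detector. score_pre / score_post stand
# in for learned conditional Hyvarinen scores under the pre- and post-change
# kernels; the truncation level and threshold are illustrative.
import numpy as np

def hyvarinen_score(x_prev, x_curr, cond_score, eps=1e-3):
    """Finite-difference Hyvarinen score 0.5*||s||^2 + div(s) of p(x_curr | x_prev)."""
    s = cond_score(x_prev, x_curr)
    div = 0.0
    for i in range(x_curr.size):
        e = np.zeros_like(x_curr); e[i] = eps
        div += (cond_score(x_prev, x_curr + e)[i] - s[i]) / eps
    return 0.5 * np.dot(s, s) + div

def cusum_detect(samples, score_pre, score_post, trunc=5.0, threshold=20.0):
    """Stop at the first time the truncated CUSUM statistic exceeds the threshold."""
    W = 0.0
    for t in range(1, len(samples)):
        inc = (hyvarinen_score(samples[t - 1], samples[t], score_pre)
               - hyvarinen_score(samples[t - 1], samples[t], score_post))
        inc = float(np.clip(inc, -trunc, trunc))       # truncated increment
        W = max(0.0, W + inc)
        if W >= threshold:
            return t
    return None

# toy example: Gaussian AR(1) kernel whose noise variance changes at t = 100
score_pre = lambda xp, xc: -(xc - 0.8 * xp) / 1.0     # score of N(0.8*xp, 1*I)
score_post = lambda xp, xc: -(xc - 0.8 * xp) / 4.0    # score of N(0.8*xp, 4*I)
rng = np.random.default_rng(0)
xs = [np.zeros(2)]
for t in range(1, 200):
    sigma = 1.0 if t < 100 else 2.0
    xs.append(0.8 * xs[-1] + sigma * rng.standard_normal(2))
print("alarm at t =", cusum_detect(xs, score_pre, score_post))
```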
Cyberattacks targeting critical infrastructures, such as water treatment facilities, represent significant threats to public health, safety, and the environment. This paper introduces a systematic approach for modeling and assessing covert man-in-the-middle (MitM) attacks that leverage system identification techniques to inform the attack design. We focus on the attacker's ability to deploy a covert controller, and we evaluate countermeasures based on the Process-Aware Stealthy Attack Detection (PASAD) anomaly detection method. Using a second-order linear time-invariant model with time delay, representative of water treatment dynamics, we design and simulate stealthy attacks. Our results highlight how factors such as system noise and inaccuracies in the attacker's plant model influence the attack's stealthiness, underscoring the need for more robust detection strategies in industrial control environments.
Photoplethysmography (PPG) signals, which measure changes in blood volume in the skin using light, have recently gained attention in biometric authentication because of their non-invasive acquisition, inherent liveness detection, and suitability for low-cost wearable devices. However, PPG signal quality is challenged by motion artifacts, illumination changes, and inter-subject physiological variability, making robust feature extraction and classification crucial. This study proposes a lightweight and cost-effective biometric authentication framework based on PPG signals extracted from low-frame-rate fingertip videos. The CFIHSR dataset, comprising PPG recordings from 46 subjects at a sampling rate of 14 Hz, is employed for evaluation. The raw PPG signals undergo a standard preprocessing pipeline involving baseline drift removal, motion artifact suppression using Principal Component Analysis (PCA), bandpass filtering, Fourier-based resampling, and amplitude normalization. To generate robust representations, each one-dimensional PPG segment is converted into a two-dimensional time-frequency scalogram via the Continuous Wavelet Transform (CWT), effectively capturing transient cardiovascular dynamics. We developed a hybrid deep learning model, termed CVT-ConvMixer-LSTM, by combining spatial features from the Convolutional Vision Transformer (CVT) and ConvMixer branches with temporal features from a Long Short-Term Memory network (LSTM). The experimental results on 46 subjects demonstrate an authentication accuracy of 98%, validating the robustness of the model to noise and variability between subjects. Due to its efficiency, scalability, and inherent liveness detection capability, the proposed system is well-suited for real-world mobile and embedded biometric security applications.
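The scalogram step can be illustrated with a few lines of Python; the Morlet wavelet and the scale range are assumptions, while the 14 Hz sampling rate follows the dataset description above.

```python
# Sketch: convert a 1-D PPG segment into a 2-D time-frequency scalogram with
# the continuous wavelet transform. The Morlet wavelet and scale range are
# illustrative choices, not necessarily those used in the paper.
import numpy as np
import pywt

fs = 14.0                                   # sampling rate from the dataset description
t = np.arange(0, 10, 1 / fs)                # 10-second segment
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)  # synthetic segment

scales = np.arange(1, 64)
coeffs, freqs = pywt.cwt(ppg, scales, "morl", sampling_period=1 / fs)
scalogram = np.abs(coeffs)                  # 2-D array: scales x time

# normalize to [0, 1] so it can be fed to a 2-D CNN-style model
scalogram = (scalogram - scalogram.min()) / (scalogram.max() - scalogram.min())
print(scalogram.shape)                      # (63, 140)
```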
The control of ensembles of dynamical systems is an intriguing and challenging problem, arising for example in quantum control. We initiate the investigation of optimal control of ensembles of discrete-time systems, focusing on minimising the average finite horizon cost over the ensemble. For very general nonlinear control systems and stage and terminal costs, we establish existence of minimisers under mild assumptions. Furthermore, we provide a $\Gamma$-convergence result which enables consistent approximation of the challenging ensemble optimal control problem, for example, by using empirical probability measures over the ensemble. Our results form a solid foundation for discrete-time optimal control of ensembles, with many interesting avenues for future research.
Minimum-volume nonnegative matrix factorization (min-vol NMF) has been used successfully in many applications, such as hyperspectral imaging, chemical kinetics, spectroscopy, topic modeling, and audio source separation. However, its robustness to noise has been a long-standing open problem. In this paper, we prove that min-vol NMF identifies the ground-truth factors in the presence of noise under a condition referred to as the expanded sufficiently scattered condition, which requires the data points to be sufficiently well scattered in the latent simplex generated by the basis vectors.
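For readers unfamiliar with min-vol NMF, a commonly used formulation (which may differ in detail from the one analyzed in the paper) penalizes the volume of the simplex spanned by the basis vectors through a log-determinant term:
\[
\min_{W \ge 0,\; H \ge 0} \; \tfrac{1}{2}\,\| X - W H \|_F^2 \;+\; \tfrac{\lambda}{2}\,\log\det\!\big( W^{\top} W + \delta I \big),
\]
typically together with a normalization, e.g. the columns of $H$ lying in the unit simplex, so that the volume penalty on $W$ is meaningful.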
The recent and ongoing expansion of marine infrastructure, including offshore wind farms, oil and gas platforms, artificial islands, and aquaculture facilities, highlights the need for effective monitoring systems. The development of robust models for offshore infrastructure detection relies on comprehensive, balanced datasets, which fall short when samples are scarce, particularly for underrepresented object classes, shapes, and sizes. By training deep learning-based YOLOv10 object detection models with a combination of synthetic and real Sentinel-1 satellite imagery acquired in the fourth quarter of 2023 from four regions (Caspian Sea, South China Sea, Gulf of Guinea, and Coast of Brazil), this study investigates the use of synthetic training data to enhance model performance. We evaluated this approach by applying the model to detect offshore platforms in three unseen regions (Gulf of Mexico, North Sea, Persian Gulf), thereby assessing geographic transferability. This region-holdout evaluation demonstrated that the model generalises beyond the training areas. In total, 3,529 offshore platforms were detected, including 411 in the North Sea, 1,519 in the Gulf of Mexico, and 1,593 in the Persian Gulf. The model achieved an F1 score of 0.85, which improved to 0.90 upon incorporating synthetic data. We analysed how synthetic data enhances the representation of unbalanced classes and overall model performance, taking a first step toward globally transferable detection of offshore infrastructure. This study underscores the importance of balanced datasets and highlights synthetic data generation as an effective strategy to address common challenges in remote sensing, demonstrating the potential of deep learning for scalable, global offshore infrastructure monitoring.
Music editing has emerged as an important and practical area of artificial intelligence, with applications ranging from video game and film music production to personalizing existing tracks according to user preferences. However, existing models face significant limitations, such as being restricted to editing synthesized music generated by their own models, requiring highly precise prompts, or necessitating task-specific retraining, thus lacking true zero-shot capability. Leveraging recent advances in rectified flow and diffusion transformers, we introduce MusRec, the first zero-shot text-to-music editing model capable of performing diverse editing tasks on real-world music efficiently and effectively. Experimental results demonstrate that our approach outperforms existing methods in preserving musical content, structural consistency, and editing fidelity, establishing a strong foundation for controllable music editing in real-world scenarios.
Affine Frequency Division Multiplexing (AFDM) has been proposed as an effective waveform for achieving the full diversity of doubly-dispersive (delay-Doppler) channels. While this property is closely related to range and velocity estimation in sensing, this article focuses on other AFDM features that are particularly relevant for addressing two challenges in integrated sensing and communication (ISAC) systems: (1) maintaining receiver complexity and energy consumption at acceptable levels while supporting the large bandwidths required for high delay/range resolution, and (2) mitigating interference in multiradar environments. In monostatic sensing, where direct transmitter-receiver leakage is a major impairment, we show that AFDM-based ISAC receivers can address the first challenge through their compatibility with low-complexity self-interference cancellation (SIC) schemes and reduced sampling rates via analog dechirping. In bistatic sensing, where such analog solutions may not be feasible, we demonstrate that AFDM supports sub-Nyquist sampling without requiring hardware modifications while preserving delay resolution. Finally, we show that the second challenge can be addressed by leveraging the resource-assignment flexibility of the discrete affine Fourier transform (DAFT) underlying the AFDM waveform.
Recent breakthroughs in language-queried audio source separation (LASS) have shown that generative models can achieve higher separation audio quality than traditional masking-based approaches. However, two key limitations restrict their practical use: (1) users often require operations beyond separation, such as sound removal; and (2) relying solely on text prompts can be unintuitive for specifying sound sources. In this paper, we propose PromptSep to extend LASS into a broader framework for general-purpose sound separation. PromptSep leverages a conditional diffusion model enhanced with elaborated data simulation to enable both audio extraction and sound removal. To move beyond text-only queries, we incorporate vocal imitation as an additional and more intuitive conditioning modality for our model, using Sketch2Sound as a data augmentation strategy. Both objective and subjective evaluations on multiple benchmarks demonstrate that PromptSep achieves state-of-the-art performance in sound removal and vocal-imitation-guided source separation, while maintaining competitive results on language-queried source separation.
We consider a time-slotted job-assignment system with a central server, N users, and a machine that changes its state according to a Markov chain (hence called a Markov machine). The users submit their jobs to the central server according to a stochastic job arrival process. For each user, the server has a dedicated job queue. Upon receiving a job from a user, the server stores that job in the corresponding queue. When the machine is not working on a job assigned by the server, it can be either internally busy or free, and the dynamics of these states follow a binary symmetric Markov chain. Upon sampling the state information of the machine, if the server identifies that the machine is in the free state, it schedules a user and submits a job to the machine from the job queue of the scheduled user. To maximize the number of jobs completed per unit time, we introduce a new metric, referred to as the age of job completion. To minimize the age of job completion and the sampling cost, we propose two policies and numerically evaluate their performance. For both of these policies, we find sufficient conditions under which the job queues remain stable.
Model predictive control (MPC) achieves stability and constraint satisfaction for general nonlinear systems, but requires computationally expensive online optimization. This paper studies approximations of such MPC controllers via neural networks (NNs) to achieve fast online evaluation. We propose safety augmentation that yields deterministic guarantees for convergence and constraint satisfaction despite approximation inaccuracies. We approximate the entire input sequence of the MPC with NNs, which allows us to verify online if it is a feasible solution to the MPC problem. We replace the NN solution by a safe candidate based on standard MPC techniques whenever it is infeasible or has worse cost. Our method requires a single evaluation of the NN and forward integration of the input sequence online, which is fast to compute on resource-constrained systems. The proposed control framework is illustrated using two numerical non-linear MPC benchmarks of different complexity, demonstrating computational speedups that are orders of magnitude higher than online optimization. In the examples, we achieve deterministic safety through the safety-augmented NNs, where a naive NN implementation fails.
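The online safety-augmentation logic can be sketched schematically as below: roll out the NN's input sequence, check constraints and a terminal condition, and fall back to a safe candidate sequence whenever the NN output is infeasible or more costly. The dynamics, constraint bounds, terminal set, and candidate construction are placeholders, not the paper's benchmarks.

```python
# Schematic of the online safety-augmentation step: evaluate the NN input
# sequence, roll it out, and keep it only if it is feasible and no worse than a
# shifted safe candidate. Dynamics, constraints, and cost are placeholders.
import numpy as np

def rollout(x0, u_seq, f):
    xs = [x0]
    for u in u_seq:
        xs.append(f(xs[-1], u))
    return np.array(xs)

def feasible(xs, u_seq, x_max=5.0, u_max=1.0, terminal_tol=0.5):
    return (np.all(np.abs(xs) <= x_max) and np.all(np.abs(u_seq) <= u_max)
            and np.linalg.norm(xs[-1]) <= terminal_tol)      # terminal set placeholder

def cost(xs, u_seq):
    return float(np.sum(xs[:-1] ** 2) + 0.1 * np.sum(u_seq ** 2))

def safe_control(x0, nn_policy, candidate, f):
    """Return the applied input sequence: NN output if verified, else the candidate."""
    u_nn = nn_policy(x0)
    xs_nn = rollout(x0, u_nn, f)
    xs_cand = rollout(x0, candidate, f)
    if feasible(xs_nn, u_nn) and cost(xs_nn, u_nn) <= cost(xs_cand, candidate):
        return u_nn
    return candidate                                          # safe fallback

# toy example: scalar linear system, "NN" replaced by a dummy policy
f = lambda x, u: 0.9 * x + u
nn_policy = lambda x0: np.clip(-0.5 * x0 * np.ones(10), -1.0, 1.0)
candidate = np.zeros(10)                                      # e.g. shifted previous solution
print(safe_control(np.array(1.0), nn_policy, candidate, f))
```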
Magnetic Resonance Imaging (MRI) is a crucial diagnostic tool, but high-resolution scans are often slow and expensive due to extensive data acquisition requirements. Traditional MRI reconstruction methods aim to expedite this process by filling in missing frequency components in the K-space, performing 3D-to-3D reconstructions that demand full 3D scans. In contrast, we introduce X-Diffusion, a novel cross-sectional diffusion model that reconstructs detailed 3D MRI volumes from extremely sparse spatial-domain inputs, achieving 2D-to-3D reconstruction from as little as a single 2D MRI slice or few slices. A key aspect of X-Diffusion is that it models MRI data as holistic 3D volumes during cross-sectional training and inference, unlike previous learning approaches that treat MRI scans as collections of 2D slices in standard planes (coronal, axial, sagittal). We evaluated X-Diffusion on brain tumor MRIs from the BRATS dataset and full-body MRIs from the UK Biobank dataset. Our results demonstrate that X-Diffusion not only surpasses state-of-the-art methods in quantitative accuracy (PSNR) on unseen data but also preserves critical anatomical features such as tumor profiles, spine curvature, and brain volume. Remarkably, the model generalizes beyond the training domain, successfully reconstructing knee MRIs despite being trained exclusively on brain data. Medical expert evaluations further confirm the clinical relevance and fidelity of the generated volumes. To our knowledge, X-Diffusion is the first method capable of producing detailed 3D MRIs from highly limited 2D input data, potentially accelerating MRI acquisition and reducing associated costs. The code is available on the project website: this https URL
In this paper, we propose a full-duplex integrated sensing and communication (ISAC) system enabled by a movable antenna (MA). By leveraging the ability of the MA to increase the spatial diversity gain, the performance of the system can be enhanced. We formulate a problem of minimizing the total transmit power consumption by jointly optimizing the discrete positions of the MA elements, the beamforming vectors, the sensing signal covariance matrix, and the user transmit power. Given the significant coupling among the optimization variables, the formulated problem is non-convex and difficult to solve directly. To address this challenge, the discrete binary particle swarm optimization (BPSO) algorithm framework is employed to solve the formulated problem. Specifically, the discrete positions of the MA elements are first obtained by iteratively solving the fitness function. Difference-of-convex (DC) programming and successive convex approximation (SCA) are used to handle non-convex and rank-1 terms in the fitness function. Once the BPSO iteration is complete, the discrete positions of the MA elements are determined, and we obtain the solutions for the beamforming vectors, the sensing signal covariance matrix, and the user transmit power. Numerical results demonstrate the superiority of the proposed system in reducing the total transmit power consumption compared with fixed antenna arrays.
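A generic binary PSO loop of the kind used for the discrete MA positions is sketched below; the fitness function is a placeholder and does not embed the DC/SCA beamforming subproblem solved in the paper.

```python
# Generic binary PSO over 0/1 antenna-position masks. The fitness function is a
# placeholder; in the paper it would embed the DC/SCA beamforming subproblem.
import numpy as np

rng = np.random.default_rng(1)
n_particles, n_bits, n_iter = 20, 16, 50
w, c1, c2 = 0.7, 1.5, 1.5

def fitness(mask):
    # placeholder objective: prefer exactly 4 active positions, spread apart
    active = np.flatnonzero(mask)
    if active.size != 4:
        return 1e3 + abs(active.size - 4)
    return -float(np.min(np.diff(active)))

sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
pos = rng.integers(0, 2, size=(n_particles, n_bits))
vel = rng.normal(0, 1, size=(n_particles, n_bits))
pbest = pos.copy()
pbest_val = np.array([fitness(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = (rng.random(pos.shape) < sigmoid(vel)).astype(int)   # probabilistic bit update
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best mask:", gbest, "fitness:", fitness(gbest))
```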
Purpose: To explore best-practice approaches for generating synthetic chest X-ray images and augmenting medical imaging datasets to optimize the performance of deep learning models in downstream tasks like classification and segmentation. Materials and Methods: We utilized a latent diffusion model to condition the generation of synthetic chest X-rays on text prompts and/or segmentation masks. We explored methods like using a proxy model and using radiologist feedback to improve the quality of synthetic data. These synthetic images were then generated from relevant disease information or geometrically transformed segmentation masks and added to ground truth training set images from the CheXpert, CANDID-PTX, SIIM, and RSNA Pneumonia datasets to measure improvements in classification and segmentation model performance on the test sets. F1 and Dice scores were used to evaluate classification and segmentation respectively. One-tailed t-tests with Bonferroni correction assessed the statistical significance of performance improvements with synthetic data. Results: Across all experiments, the synthetic data we generated resulted in a maximum mean classification F1 score improvement of 0.150453 (CI: 0.099108-0.201798; P=0.0031) compared to using only real data. For segmentation, the maximum Dice score improvement was 0.14575 (CI: 0.108267-0.183233; P=0.0064). Conclusion: Best practices for generating synthetic chest X-ray images for downstream tasks include conditioning on single-disease labels or geometrically transformed segmentation masks, as well as potentially using proxy modeling for fine-tuning such generations.
The movable antenna (MA)-enabled integrated sensing and communication (ISAC) system attracts widespread attention as an innovative framework. The ISAC system integrates sensing and communication functions, achieving resource sharing across various domains, significantly enhancing communication and sensing performance, and promoting the intelligent interconnection of everything. Meanwhile, MA utilizes the spatial variations of wireless channels by dynamically adjusting the positions of MA elements at the transmitter and receiver to improve the channel and further enhance the performance of the ISAC systems. In this paper, we first outline the fundamental principles of MA and introduce the application scenarios of MA-enabled ISAC systems. Then, we summarize the advantages of MA-enabled ISAC systems in enhancing spectral efficiency, achieving flexible and precise beamforming, and making the signal coverage range adjustable. Besides, a specific case is studied to show the performance gains in terms of transmit power that MA brings to ISAC systems. Finally, we discuss the challenges of MA-enabled ISAC and future research directions, aiming to provide insights for future research on MA-enabled ISAC systems.
Recent advances in deep neural networks (DNNs) have significantly improved various audio processing applications, including speech enhancement, synthesis, and hearing-aid algorithms. DNN-based closed-loop systems have gained popularity in these applications due to their robust performance and ability to adapt to diverse conditions. Despite their effectiveness, current DNN-based closed-loop systems often suffer from sound quality degradation caused by artifacts introduced by suboptimal sampling methods. To address this challenge, we introduce dCoNNear, a novel DNN architecture designed for seamless integration into closed-loop frameworks. This architecture specifically aims to prevent the generation of spurious artifacts, most notably tonal and aliasing artifacts arising from non-ideal sampling layers. We demonstrate the effectiveness of dCoNNear through a proof-of-principle example within a closed-loop framework that employs biophysically realistic models of auditory processing for both normal and hearing-impaired profiles to design personalized hearing-aid algorithms. We further validate the broader applicability and artifact-free performance of dCoNNear through speech-enhancement experiments, confirming its ability to improve perceptual sound quality without introducing architecture-induced artifacts. Our results show that dCoNNear not only accurately simulates all processing stages of existing non-DNN biophysical models but also significantly improves sound quality by eliminating audible artifacts in both hearing-aid and speech-enhancement applications. This study offers a robust, perceptually transparent closed-loop processing framework for high-fidelity audio applications.
Mixed vehicle platoons, comprising connected and automated vehicles (CAVs) and human-driven vehicles (HDVs), hold significant potential for enhancing traffic performance. However, most existing control strategies assume linear system dynamics and often ignore the impact of adverse factors such as noise, disturbances, and attacks, which are inherent to real-world scenarios. To address these limitations, we propose a Robust Nonlinear Data-Driven Predictive Control (RNDDPC) framework that ensures safe and optimal control under uncertain and adverse conditions. By utilizing Koopman operator theory, we map the system's nonlinear dynamics into a higher-dimensional space, constructing a Koopman-based model that approximates the behavior of the original nonlinear system. To mitigate modeling errors associated with this predictor, we introduce a data-driven reachable set analysis technique that performs secondary learning using matrix zonotope sets, generating a reachable set predictor for over-approximation of the future states of the underlying system. Then, we formulate the RNDDPC optimization problem and solve it in a receding horizon manner for robust control inputs. Extensive simulations demonstrate that the proposed framework significantly outperforms baseline methods in tracking performance under noise, disturbances, and attacks.
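The Koopman-style lifting can be illustrated with a least-squares fit of a lifted linear predictor from trajectory data, as sketched below; the monomial dictionary, the toy dynamics, and the linear-in-input structure are illustrative simplifications, and the matrix-zonotope reachable-set construction is not shown.

```python
# Minimal sketch of fitting a lifted (Koopman-style) linear predictor from data
# by least squares. The monomial dictionary and the linear-in-input structure
# are illustrative; the paper's model and zonotope error sets are not shown.
import numpy as np

def lift(x):
    # dictionary: state, pairwise products, and a constant
    x = np.atleast_1d(x)
    quad = np.outer(x, x)[np.triu_indices(x.size)]
    return np.concatenate([x, quad, [1.0]])

# collect trajectories of a toy nonlinear system x+ = f(x, u)
rng = np.random.default_rng(0)
f = lambda x, u: np.array([0.9 * x[0] + 0.1 * x[1],
                           -0.2 * np.sin(x[0]) + 0.95 * x[1] + 0.1 * u])
X, U, Xp = [], [], []
x = np.zeros(2)
for _ in range(2000):
    u = rng.uniform(-1, 1)
    xp = f(x, u)
    X.append(lift(x)); U.append([u]); Xp.append(lift(xp))
    x = xp if np.all(np.abs(xp) < 5) else rng.uniform(-1, 1, size=2)

Z, Zp, U = np.array(X).T, np.array(Xp).T, np.array(U).T
G = np.vstack([Z, U])                       # stacked lifted state and input
AB = Zp @ np.linalg.pinv(G)                 # least-squares predictor: z+ ~ A z + B u
A, B = AB[:, :Z.shape[0]], AB[:, Z.shape[0]:]
print("lifted dimension:", Z.shape[0])
```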
Designing safe controllers is crucial and notoriously challenging for input-constrained safety-critical control systems. Backup control barrier functions offer an approach for the construction of safe controllers online by considering the flow of the system under a backup controller. However, in the presence of model uncertainties, the flow cannot be accurately computed, making this method insufficient for safety assurance. To tackle this shortcoming, we integrate backup control barrier functions with uncertainty estimators and calculate the flow under a reconstruction of the model uncertainty while refining this estimate over time. We prove that the controllers resulting from the proposed Uncertainty Estimator Backup Control Barrier Function (UE-bCBF) approach guarantee safety, are robust to unknown disturbances, and satisfy input constraints.
Confidence estimation can improve the reliability of melody estimation by indicating which predictions are likely incorrect. The existing classification-based approach provides confidence for predicted pitch classes but fails to capture the magnitude of deviation from the ground truth. To address this limitation, we reformulate melody estimation as a regression problem and propose a novel approach to estimate uncertainty directly from the histogram representation of the pitch values, which correlates well with the deviation between the prediction and the ground truth. We design three methods to model pitch on the continuous support of the histogram, which introduces the challenge of handling the discontinuity between unvoiced and voiced pitch values. The first two methods address the abrupt discontinuity by mapping the pitch values to a continuous range, while the third adopts a fully Bayesian formulation, which models voicing detection as a classification task and voiced pitch estimation as a regression task. Experimental results demonstrate that regression-based formulations yield more reliable uncertainty estimates than classification-based approaches for identifying incorrect pitch predictions. Comparing the proposed methods with a state-of-the-art regression model, we observe that the Bayesian method performs best at estimating both the melody and its associated uncertainty.
In this paper, low-order models of the frequency and voltage response of mixed-generation, low-inertia systems are presented. These models are unique in their ability to efficiently and accurately model frequency and voltage dynamics without increasing the computational burden as the share of inverters is increased in a system. The models are validated against industry-grade electromagnetic transient simulation, compared to which the proposed models are several orders of magnitude faster. The accuracy and efficiency of the low-inertia frequency and voltage models makes them well suited for a variety of planning and operational studies, especially for multi-scenario and probabilistic studies, as well as for screening studies to establish impact zones based on the dynamic interactions between inverters and synchronous generators.
Battery management systems (BMSs) rely on real-time estimation of the temperature distribution in battery cells to ensure safe and optimal operation of lithium-ion batteries (LIBs). However, physical BMSs often lack the memory and computational resources required by high-fidelity models: temperature prediction with physics-based models is challenging due to their high computation time, whereas machine learning based approaches offer faster predictions but demand a larger memory overhead. In this work, we develop a lightweight and efficient Kolmogorov-Arnold network (KAN) based thermal model, KAN-Therm, to predict the core temperature of a cylindrical battery. We compare the memory overhead and computation cost of our method with multi-layer perceptron (MLP), recurrent neural network (RNN), and long short-term memory (LSTM) networks. Our results show that the proposed KAN-Therm model exhibits the best prediction accuracy with the least memory overhead and computation time.
This paper proposes Mode-Aware Probabilistic Scheduling (MAPS), a novel adaptive control framework tailored for DC motor systems experiencing varying friction. MAPS uniquely integrates an Interacting Multiple Model (IMM) estimator with a Linear Parameter-Varying (LPV) based control strategy, leveraging real-time mode probability estimates to perform probabilistic gain scheduling. A key innovation of MAPS lies in directly using the updated mode probabilities as the interpolation weights for online gain synthesis in the LPV controller, thereby tightly coupling state estimation with adaptive control. This seamless integration enables the controller to dynamically adapt control gains in real time, effectively responding to changes in frictional operating modes without requiring explicit friction model identification. Validation on a Hardware-in-the-Loop Simulation (HILS) environment demonstrates that MAPS significantly enhances both state estimation accuracy and reference tracking performance compared to Linear Quadratic Regulator (LQR) controllers relying on predefined scheduling variables. These results establish MAPS as a robust, generalizable solution for friction-aware adaptive control in uncertain, time-varying environments, with practical real-time applicability.
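The probabilistic gain-scheduling step described above reduces to a one-line interpolation, sketched here with placeholder gains and two hypothetical friction modes:

```python
# Core MAPS idea as described in the abstract: blend pre-designed mode gains
# online using the IMM mode probabilities. Gains and modes are placeholders.
import numpy as np

K_modes = [np.array([[2.0, 0.5]]),      # gain designed for a low-friction mode
           np.array([[3.5, 1.2]])]      # gain designed for a high-friction mode

def maps_control(x, mode_probs):
    """Probabilistic gain scheduling: u = -(sum_i mu_i K_i) x."""
    K = sum(mu * K for mu, K in zip(mode_probs, K_modes))
    return -(K @ x)

x = np.array([0.3, -0.1])               # e.g. speed error and its integral
mu = np.array([0.8, 0.2])               # IMM posterior mode probabilities
print(maps_control(x, mu))
```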
Eye tracking has become a key technology for gaze-based interactions in Extended Reality (XR). However, conventional frame-based eye-tracking systems often fall short of XR's stringent requirements for high accuracy, low latency, and energy efficiency. Event cameras present a compelling alternative, offering ultra-high temporal resolution and low power consumption. In this paper, we present JaneEye, an energy-efficient event-based eye-tracking hardware accelerator designed specifically for wearable devices, leveraging sparse, high-temporal-resolution event data. We introduce an ultra-lightweight neural network architecture featuring a novel ConvJANET layer, which simplifies the traditional ConvLSTM by retaining only the forget gate, thereby halving computational complexity without sacrificing temporal modeling capability. Our proposed model achieves high accuracy with a pixel error of 2.45 on the 3ET+ dataset, using only 17.6K parameters, with up to 1250 Hz event frame rate. To further enhance hardware efficiency, we employ custom linear approximations of activation functions (hardsigmoid and hardtanh) and fixed-point quantization. Through software-hardware co-design, our 12-nm ASIC implementation operates at 400 MHz, delivering an end-to-end latency of 0.5 ms (equivalent to 2000 Frames Per Second (FPS)) at an energy efficiency of 18.9 $\mu$J/frame. JaneEye sets a new benchmark in low-power, high-performance eye-tracking solutions suitable for integration into next-generation XR wearables.
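A minimal sketch of a JANET-style convolutional recurrent cell, keeping only the forget gate as described above, is shown below; the channel counts and kernel size are assumptions rather than the JaneEye configuration.

```python
# Minimal sketch of a ConvJANET-style cell: a ConvLSTM reduced to a single
# forget gate, per the abstract's description. Channel and kernel sizes are
# illustrative assumptions.
import torch
import torch.nn as nn

class ConvJANETCell(nn.Module):
    def __init__(self, in_ch, hidden_ch, k=3):
        super().__init__()
        # one convolution produces both the forget gate and the candidate update
        self.conv = nn.Conv2d(in_ch + hidden_ch, 2 * hidden_ch, k, padding=k // 2)
        self.hidden_ch = hidden_ch

    def forward(self, x, h=None):
        if h is None:
            h = x.new_zeros(x.size(0), self.hidden_ch, x.size(2), x.size(3))
        f, g = self.conv(torch.cat([x, h], dim=1)).chunk(2, dim=1)
        f = torch.sigmoid(f)                      # forget gate (the only gate kept)
        return f * h + (1 - f) * torch.tanh(g)    # new hidden state

# run over a short sequence of event frames
cell = ConvJANETCell(in_ch=2, hidden_ch=8)
seq = torch.randn(4, 2, 32, 32)                   # (time, channels, H, W)
h = None
for frame in seq:
    h = cell(frame.unsqueeze(0), h)
print(h.shape)                                    # torch.Size([1, 8, 32, 32])
```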
Image-to-image translation models have achieved notable success in converting images across visual domains and are increasingly used for medical tasks such as predicting post-operative outcomes and modeling disease progression. However, most existing methods primarily aim to match the target distribution and often neglect spatial correspondences between the source and translated images. This limitation can lead to structural inconsistencies and hallucinations, undermining the reliability and interpretability of the predictions. These challenges are accentuated in clinical applications by the stringent requirement for anatomical accuracy. In this work, we present TraceTrans, a novel deformable image translation model designed for post-operative prediction that generates images aligned with the target distribution while explicitly revealing spatial correspondences with the pre-operative input. The framework employs an encoder for feature extraction and dual decoders for predicting spatial deformations and synthesizing the translated image. The predicted deformation field imposes spatial constraints on the generated output, ensuring anatomical consistency with the source. Extensive experiments on medical cosmetology and brain MRI datasets demonstrate that TraceTrans delivers accurate and interpretable post-operative predictions, highlighting its potential for reliable clinical deployment.
We establish structural properties of optimal stopping problems under time-consistent dynamic (coherent) risk measures, focusing on value function monotonicity and the existence of control limit (threshold) optimal policies. While such results are well developed for risk-neutral (expected-value) models, they remain underexplored in risk-averse settings. Coherent risk measures typically lack the tower property and are subadditive rather than additive, complicating structural analysis. We show that value function monotonicity mirrors the risk-neutral case. Moreover, if the risk envelope associated with each coherent risk measure admits a minimal element, the risk-averse optimal stopping problem reduces to an equivalent risk-neutral formulation. We also develop a general procedure for identifying control limit optimal policies and use it to derive practical, verifiable conditions on the risk measures and MDP structure that guarantee their existence. We illustrate the theory and verify these conditions through optimal stopping problems arising in operations, marketing, and finance.
Imaging systems have traditionally been designed to mimic the human eye and produce visually interpretable measurements. Modern imaging systems, however, process raw measurements computationally before or instead of human viewing. As a result, the information content of raw measurements matters more than their visual interpretability. Despite the importance of measurement information content, current approaches for evaluating imaging system performance do not quantify it: they instead either use alternative metrics that assess specific aspects of measurement quality or assess measurements indirectly with performance on secondary tasks. We developed the theoretical foundations and a practical method to directly quantify mutual information between noisy measurements and unknown objects. By fitting probabilistic models to measurements and their noise characteristics, our method estimates information by upper bounding its true value. By applying gradient-based optimization to these estimates, we also developed a technique for designing imaging systems called Information-Driven Encoder Analysis Learning (IDEAL). Our information estimates accurately captured system performance differences across four imaging domains (color photography, radio astronomy, lensless imaging, and microscopy). Systems designed with IDEAL matched the performance of those designed with end-to-end optimization, the prevailing approach that jointly optimizes hardware and image processing algorithms. These results establish mutual information as a universal performance metric for imaging systems that enables both computationally efficient design optimization and evaluation in real-world conditions. A video summarizing this work can be found at: this https URL
5G and beyond cellular systems embrace the disaggregation of Radio Access Network (RAN) components, exemplified by the evolution of the fronthaul (FH) connection between cellular baseband and radio unit equipment. Crucially, synchronization over the FH is pivotal for reliable 5G services. In recent years, there has been a push to move these links to an Ethernet-based packet network topology, leveraging existing standards and ongoing research for Time-Sensitive Networking (TSN). However, TSN standards, such as Precision Time Protocol (PTP), focus on performance with little to no concern for security. This increases the exposure of the open FH to security risks. Attacks targeting synchronization mechanisms pose significant threats, potentially disrupting 5G networks and impairing connectivity. In this paper, we demonstrate the impact of successful spoofing and replay attacks against PTP synchronization. We show how a spoofing attack is able to cause a production-ready O-RAN and 5G-compliant private cellular base station to catastrophically fail within 2 seconds of the attack, necessitating manual intervention to restore full network operations. To counter this, we design a Machine Learning (ML)-based monitoring solution capable of detecting various malicious attacks with over 97.5% accuracy.
The Mini Wheelbot is a balancing, reaction wheel unicycle robot designed as a testbed for learning-based control. It is an unstable system with highly nonlinear yaw dynamics, non-holonomic driving, and discrete contact switches in a small, powerful, and rugged form factor. The Mini Wheelbot can use its wheels to stand up from any initial orientation - enabling automatic environment resets in repetitive experiments and even challenging half flips. We illustrate the effectiveness of the Mini Wheelbot as a testbed by implementing two popular learning-based control algorithms. First, we showcase Bayesian optimization for tuning the balancing controller. Second, we use imitation learning from an expert nonlinear MPC that uses gyroscopic effects to reorient the robot and can track higher-level velocity and orientation commands. The latter allows the robot to drive around based on user commands - for the first time in this class of robots. The Mini Wheelbot is not only compelling for testing learning-based control algorithms, but it is also just fun to work with, as demonstrated in the video of our experiments.
Analysis of continuous-time piecewise linear systems based on piecewise quadratic (PWQ) Lyapunov functions typically requires continuity of these functions over a partition of the state space. Several conditions for guaranteeing continuity of PWQ functions over state space partitions can be found in the literature. In this technical note, we show that these continuity conditions are equivalent over so-called simplicial conic partitions. As a consequence, the choice of which condition to impose can be based solely on practical considerations such as specific application or numerical aspects, without introducing additional conservatism in the analysis.
We study the robustness of an agent decision-making model in finite-population games, with a particular focus on the Kullback-Leibler Divergence Regularized Learning (KLD-RL) model. Specifically, we examine how the model's parameters influence the impact of various sources of noise and modeling inaccuracies -- factors commonly encountered in engineering applications of population games -- on agents' decision-making. Our analysis provides insights into how these parameters can be effectively tuned to mitigate such effects. Theoretical results are supported by numerical examples and simulation studies that validate the analysis and illustrate practical strategies for parameter selection.
Accurate day-ahead forecasts of solar irradiance are required for the large-scale integration of solar photovoltaic (PV) systems into the power grid. However, current forecasting solutions lack the temporal and spatial resolution required by system operators. In this paper, we introduce SolarCrossFormer, a novel deep learning model for day-ahead irradiance forecasting that combines satellite images and time series from a ground-based network of meteorological stations. SolarCrossFormer uses novel graph neural networks to exploit the inter- and intra-modal correlations of the input data and improve the accuracy and resolution of the forecasts. It generates probabilistic forecasts for any location in Switzerland with a 15-minute resolution for horizons up to 24 hours ahead. One of the key advantages of SolarCrossFormer is its robustness in real-life operation. It can incorporate new time-series data without retraining the model and, additionally, it can produce forecasts for locations without input data by using only their coordinates. Experimental results over a dataset of one year and 127 locations across Switzerland show that SolarCrossFormer yields a normalized mean absolute error of 6.1 % over the forecasting horizon. The results are competitive with those achieved by a commercial numerical weather prediction service.
In this study, the global convergence properties of the Oja flow, a continuous-time algorithm for principal component extraction, are established for general square matrices. The Oja flow is a matrix differential equation on the Stiefel manifold designed to extract a dominant subspace. Although its analysis has traditionally been restricted to symmetric positive-definite matrices, where it acts as a gradient flow, recent applications have extended its use to general matrices. In this non-symmetric case, the flow extracts the invariant subspace corresponding to the eigenvalues with the largest real parts. However, prior convergence results have been purely local, leaving the global behavior as an open problem. This study fills this gap by providing a comprehensive global convergence analysis, establishing that the flow converges exponentially for almost all initial conditions. We also propose a modification to the algorithm that enhances its numerical stability. As an application of this theory, we develop novel methods for model reduction of linear dynamical systems and for the synthesis of low-rank stabilizing controllers. The study advances the theoretical understanding of the Oja flow and demonstrates its potential as a reliable and versatile tool for analyzing and controlling complex linear systems.
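The flow itself is easy to experiment with numerically. The sketch below integrates dU/dt = A U - U (U^T A U) for a random non-symmetric matrix and checks that the columns of U stay orthonormal and that the extracted eigenvalues match those of A with the largest real parts; the stabilizing modification proposed in the paper is not reproduced.

```python
# Numerical illustration of the Oja flow dU/dt = A U - U (U^T A U) for a general
# square matrix A, integrated with an off-the-shelf ODE solver. The paper's
# stabilized variant is not reproduced here.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
n, k = 6, 2
A = rng.normal(size=(n, n))                     # general (non-symmetric) matrix

def oja_rhs(_, u_flat):
    U = u_flat.reshape(n, k)
    dU = A @ U - U @ (U.T @ A @ U)
    return dU.ravel()

U0, _ = np.linalg.qr(rng.normal(size=(n, k)))   # random orthonormal initial condition
sol = solve_ivp(oja_rhs, (0.0, 50.0), U0.ravel(), rtol=1e-8, atol=1e-10)
U = sol.y[:, -1].reshape(n, k)

print("orthonormality error:", np.linalg.norm(U.T @ U - np.eye(k)))
print("invariance residual :", np.linalg.norm(A @ U - U @ (U.T @ A @ U)))
print("extracted eigenvalues:", np.linalg.eigvals(U.T @ A @ U))
print("top eigenvalues of A :", sorted(np.linalg.eigvals(A), key=lambda z: -z.real)[:k])
```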
Optimization plays a central role in intelligent systems and cyber-physical technologies, where speed and reliability of convergence directly impact performance. In control theory, optimization-centric methods are standard: controllers are designed by repeatedly solving optimization problems, as in linear quadratic regulation, $H_\infty$ control, and model predictive control. In contrast, this paper develops a control-centric framework for optimization itself, where algorithms are constructed directly from Lyapunov stability principles rather than being proposed first and analyzed afterward. A key element is the stationarity vector, which encodes first-order optimality conditions and enables Lyapunov-based convergence analysis. By pairing a Lyapunov function with a selectable decay law, we obtain continuous-time dynamics with guaranteed exponential, finite-time, fixed-time, or prescribed-time convergence. Within this framework, we introduce three feedback realizations of increasing restrictiveness: the Hessian-gradient, Newton, and gradient dynamics. Each realization shapes the decay of the stationarity vector to achieve the desired rate. These constructions unify unconstrained optimization, extend naturally to constrained problems via Lyapunov-consistent primal-dual dynamics, and broaden the results for minimax and generalized Nash equilibrium seeking problems beyond exponential stability. The framework provides systematic design tools for optimization algorithms in control and game-theoretic problems.
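As a schematic illustration of the construction (details may differ from the paper), take the stationarity vector of a smooth unconstrained problem, $s(x) = \nabla f(x)$, and the Lyapunov function $V = \tfrac{1}{2}\|s\|^2$. The Newton realization then enforces linear decay of $s$:
\[
\dot{x} = -\nabla^2 f(x)^{-1}\,\nabla f(x)
\quad\Longrightarrow\quad
\dot{s} = \nabla^2 f(x)\,\dot{x} = -s ,
\]
so $\|s(x(t))\| = e^{-t}\,\|s(x(0))\|$ decays exponentially; pairing $V$ with a decay law of the form $\dot{V} \le -c\,V^{\alpha}$ with $\alpha \in (0,1)$ instead yields finite-time convergence of the stationarity vector.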
Robustness has become one of the most critical problems in machine learning (ML). The science of interpreting ML models to understand their behavior and improve their robustness is referred to as explainable artificial intelligence (XAI). One of the state-of-the-art XAI methods for computer vision problems is to generate saliency maps. A saliency map highlights the pixel space of an image that excites the ML model the most. However, this property could be misleading if spurious and salient features are present in overlapping pixel spaces. In this paper, we propose a caption-based XAI method, which integrates a standalone model to be explained into the contrastive language-image pre-training (CLIP) model using a novel network surgery approach. The resulting caption-based XAI model identifies the dominant concept that contributes the most to the model's prediction. This explanation minimizes the risk of the standalone model being misled by a covariate shift and contributes significantly towards developing robust ML models. Our code is available at this https URL
Over-the-air (OTA) federated learning (FL) has been well recognized as a scalable paradigm that exploits the waveform superposition of the wireless multiple-access channel to aggregate model updates in a single use. Existing OTA-FL designs largely enforce zero-bias model updates by either assuming \emph{homogeneous} wireless conditions (equal path loss across devices) or forcing zero-bias updates to guarantee convergence. Under \emph{heterogeneous} wireless scenarios, however, such designs are constrained by the weakest device and inflate the update variance. Moreover, prior analyses of biased OTA-FL largely address convex objectives, while most modern AI models are highly non-convex. Motivated by these gaps, we study OTA-FL with stochastic gradient descent (SGD) for general smooth non-convex objectives under wireless heterogeneity. We develop novel OTA-FL SGD updates that allow a structured, time-invariant model bias while facilitating reduced variance updates. We derive a finite-time stationarity bound (expected time average squared gradient norm) that explicitly reveals a bias-variance trade-off. To optimize this trade-off, we pose a non-convex joint OTA power-control design and develop an efficient successive convex approximation (SCA) algorithm that requires only statistical CSI at the base station. Experiments on a non-convex image classification task validate the approach: the SCA-based design accelerates convergence via an optimized bias and improves generalization over prior OTA-FL baselines.
Underwater multi-robot cooperative coverage remains challenging due to partial observability, limited communication, environmental uncertainty, and the lack of access to global localization. To address these issues, this paper presents a semantics-guided fuzzy control framework that couples Large Language Models (LLMs) with interpretable control and lightweight coordination. Raw multimodal observations are compressed by the LLM into compact, human-interpretable semantic tokens that summarize obstacles, unexplored regions, and Objects Of Interest (OOIs) under uncertain perception. A fuzzy inference system with pre-defined membership functions then maps these tokens into smooth and stable steering and gait commands, enabling reliable navigation without relying on global positioning. We then coordinate multiple robots by introducing semantic communication that shares intent and local context in linguistic form, enabling agreement on which robot explores where while avoiding redundant revisits. Extensive simulations in unknown reef-like environments show that, under limited sensing and communication, the proposed framework achieves robust OOI-oriented navigation and cooperative coverage with improved efficiency and adaptability, narrowing the gap between semantic cognition and distributed underwater control in GPS-denied, map-free conditions.
Wireless sensing technologies can now detect heartbeats using radio frequency and acoustic signals, raising significant privacy concerns. Existing privacy solutions either shield against all sensing systems indiscriminately, preventing any utility, or operate only after data collection, failing to enable selective access in which authorized devices can monitor while unauthorized ones cannot. We present a key-based physical obfuscation system, PrivyWave, that addresses this challenge by generating controlled decoy heartbeat signals at cryptographically determined frequencies. Unauthorized sensors receive a mixture of real and decoy signals that are indistinguishable without the secret key, while authorized sensors use the key to filter out decoys and recover accurate measurements. Our evaluation with 13 participants demonstrates effective protection across both sensing modalities: for mmWave radar, unauthorized sensors show a mean absolute error of 21.3 BPM while authorized sensors maintain a much smaller 5.8 BPM; for acoustic sensing, unauthorized error increases to 42.0 BPM while authorized sensors achieve 9.7 BPM. The system operates across multiple sensing modalities without per-modality customization and provides cryptographic obfuscation guarantees. Performance benchmarks show robust protection across different distances (30-150 cm), orientations (120° field of view), and diverse indoor environments, establishing physical-layer obfuscation as a viable approach for selective privacy in pervasive health monitoring.
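A highly simplified sketch of the key-based idea is given below: decoy heartbeat frequencies are derived deterministically from a shared secret key, superimposed on the real signal, and notched out by an authorized receiver that regenerates them from the key. The frequency range, filter settings, and signal model are illustrative, and the actual RF/acoustic modulation hardware is not modeled.

```python
# Highly simplified sketch of key-based decoy generation: decoy heartbeat
# frequencies are derived deterministically from a secret key, mixed with the
# real signal, and an authorized receiver (knowing the key) notches them out.
# The actual RF/acoustic modulation hardware is not modeled.
import hashlib
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 100.0
t = np.arange(0, 30, 1 / fs)
real_hr_hz = 72 / 60.0
real_sig = np.sin(2 * np.pi * real_hr_hz * t)           # toy "heartbeat" component

def decoy_freqs(key: bytes, n=3, lo=0.8, hi=2.5):
    """Derive n decoy frequencies (Hz) deterministically from the secret key."""
    seed = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    return rng.uniform(lo, hi, size=n)

key = b"shared-secret"
mixture = real_sig + sum(np.sin(2 * np.pi * f * t) for f in decoy_freqs(key))

# authorized receiver: regenerate the decoy frequencies from the key, notch them out
clean = mixture.copy()
for f in decoy_freqs(key):
    b, a = iirnotch(f, Q=30.0, fs=fs)
    clean = filtfilt(b, a, clean)
print("mean squared error vs. clean signal:", np.round(np.mean((clean - real_sig) ** 2), 4))
```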