Solving the alternating current power flow equations in real time is essential for secure grid operation, yet classical Newton-Raphson solvers can be slow under stressed conditions. Existing graph neural networks for power flow are typically trained on a single system and often degrade on different systems. We present PowerModelsGAT-AI, a physics-informed graph attention network that predicts bus voltages and generator injections. The model uses bus-type-aware masking to handle different bus types and balances multiple loss terms, including a power-mismatch penalty, using learned weights. We evaluate the model on 14 benchmark systems (4 to 6,470 buses) and train a unified model on 13 of these under N-2 (two-branch outage) conditions, achieving an average normalized mean absolute error of 0.89% for voltage magnitudes and R^2 > 0.99 for voltage angles. We also show continual learning: when adapting a base model to a new 1,354-bus system, standard fine-tuning causes severe forgetting with error increases exceeding 1000% on base systems, while our experience replay and elastic weight consolidation strategy keeps error increases below 2% and in some cases improves base-system performance. Interpretability analysis shows that learned attention weights correlate with physical branch parameters (susceptance: r = 0.38; thermal limits: r = 0.22), and feature importance analysis supports that the model captures established power flow relationships.
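As background for the power-mismatch penalty and learned loss weighting described in the abstract above, here is a minimal numpy sketch (illustrative only, not the paper's implementation): the standard AC power flow mismatch equations in polar form, and a Kendall-style uncertainty weighting in which each loss term is scaled by a learned log-variance rather than a hand-tuned coefficient.

```python
import numpy as np

def power_mismatch(V, theta, G, B, P_spec, Q_spec):
    """Active/reactive power mismatch at every bus from the standard
    AC power flow equations in polar form:
      P_i = V_i * sum_k V_k (G_ik cos(th_i - th_k) + B_ik sin(th_i - th_k))
      Q_i = V_i * sum_k V_k (G_ik sin(th_i - th_k) - B_ik cos(th_i - th_k))."""
    dtheta = theta[:, None] - theta[None, :]
    P = V * np.sum(V[None, :] * (G * np.cos(dtheta) + B * np.sin(dtheta)), axis=1)
    Q = V * np.sum(V[None, :] * (G * np.sin(dtheta) - B * np.cos(dtheta)), axis=1)
    return P - P_spec, Q - Q_spec

def balanced_loss(losses, log_vars):
    """Uncertainty-style weighting: each term is scaled by a learned
    exp(-log_var) factor plus a log_var regulariser, letting the
    optimiser trade terms off instead of using fixed coefficients."""
    losses = np.asarray(losses, dtype=float)
    log_vars = np.asarray(log_vars, dtype=float)
    return float(np.sum(np.exp(-log_vars) * losses + log_vars))
```

A flat-start two-bus system with a lossless line and zero specified injections yields zero mismatch, which is a convenient sanity check when wiring such a penalty into a training loop.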
Electroencephalography (EEG) provides a non-invasive window into neural dynamics at high temporal resolution and plays a pivotal role in clinical neuroscience research. Despite this potential, prevailing computational approaches to EEG analysis remain largely confined to task-specific classification objectives or coarse-grained pattern recognition, offering limited support for clinically meaningful interpretation. To address these limitations, we introduce NeuroNarrator, the first generalist EEG-to-text foundation model designed to translate electrophysiological segments into precise clinical narratives. A cornerstone of this framework is the curation of NeuroCorpus-160K, the first harmonized large-scale resource pairing over 160,000 EEG segments with structured, clinically grounded natural-language descriptions. Our architecture first aligns temporal EEG waveforms with spatial topographic maps via a rigorous contrastive objective, establishing spectro-spatially grounded representations. Building on this grounding, we condition a Large Language Model through a state-space-inspired formulation that integrates historical temporal and spectral context to support coherent clinical narrative generation. This approach establishes a principled bridge between continuous signal dynamics and discrete clinical language, enabling interpretable narrative generation that facilitates expert interpretation and supports clinical reporting workflows. Extensive evaluations across diverse benchmarks and zero-shot transfer tasks highlight NeuroNarrator's capacity to integrate temporal, spectral, and spatial dynamics, positioning it as a foundational framework for time-frequency-aware, open-ended clinical interpretation of electrophysiological data.
This paper presents a port-Hamiltonian formulation of vehicle-manipulator systems (VMS), a broad class of robotic systems including aerial manipulators, underwater manipulators, space robots, and omnidirectional mobile manipulators. Unlike existing Lagrangian formulations that obscure the underlying energetic structure, the proposed port-Hamiltonian formulation explicitly reveals the energy flow and conservation properties of these complex mechanical systems. We derive the port-Hamiltonian dynamics from first principles using Hamiltonian reduction theory. Two complementary formulations are presented: a standard form that directly exposes the energy structure, and an inertially-decoupled form that leverages the principal bundle structure of the VMS configuration space and is particularly suitable for control design and numerical simulation. The coordinate-free geometric approach we follow avoids singularities associated with local parameterizations of the base orientation. We rigorously establish the mathematical equivalence between our port-Hamiltonian formulations and existing reduced Euler-Lagrange and Boltzmann-Hamel equations found in the robotics and geometric mechanics literature.
Forecasting Electroencephalography (EEG) signals during cognitive events remains a fundamental challenge in neuroscience and Brain-Computer Interfaces (BCIs), as existing methods struggle to capture both the stochastic nature of neural dynamics and the semantic context of behavioral tasks. We present DECODE (Dual-Enhanced COnditioned Diffusion) for EEG, a novel framework that unifies semantic guidance from natural language descriptions with temporal dynamics from historical signals to generate event-specific neural responses. DECODE leverages pre-trained language models to condition the diffusion process on rich textual descriptions of cognitive events, while maintaining temporal coherence through history-based Langevin dynamics. Evaluated on a real-world driving task dataset with five distinct behaviors, DECODE achieves sub-microvolt prediction accuracy (MAE = 0.626 microvolts) over 75-timestep horizons while maintaining well-calibrated uncertainty estimates. Our framework demonstrates that natural language can effectively bridge high-level cognitive descriptions and low-level neural dynamics, opening new possibilities for zero-shot generalization to novel behaviors and interpretable BCIs. By generating physiologically plausible, event-specific EEG trajectories conditioned on semantic descriptions, DECODE establishes a new paradigm for understanding and predicting context-dependent neural activity.
This study presents AI-HEART, a cloud-based information system for managing and analysing long-duration ambulatory electrocardiogram (ECG) recordings and supporting clinician decision-making. The platform operationalises an end-to-end pipeline that ingests multi-day three-lead ECGs, normalises inputs, performs signal preprocessing, and applies dedicated deep neural networks for wave delineation, noise/quality detection, and beat- and rhythm-level multi-class arrhythmia classification. To address class imbalance and real-world signal variability, model development combines large clinically annotated datasets with expert-in-the-loop curation and generative augmentation for under-represented rhythms. Empirical evaluation on three-lead ambulatory ECG data shows that delineation accuracy is sufficient for automated interval measurement, noise detection reliably flags poor-quality segments, and arrhythmia classification achieves high specificity with clinically useful macro-averaged performance across common and rarer rhythms. Beyond predictive accuracy, AI-HEART provides a scalable deployment approach for integrating AI into routine ECG services, enabling traceable outputs, audit-friendly storage of recordings and derived annotations, and clinician review/editing that captures feedback for controlled model improvement. The findings demonstrate the technical feasibility and operational value of a noise-aware AI-ECG platform as a digital health information system.
In many low-income countries, neighborhood diesel generators are widely used to compensate for unreliable or unavailable national electricity grids. These diesel-based microgrids are typically characterized by market power, significant pollution, and weak regulatory oversight. In parallel, households increasingly deploy off-grid solar photovoltaic (PV) systems to gain control over electricity supply. However, these systems suffer from curtailed excess generation during peak solar hours and unreliable access at other times. While prior studies have optimized microgrids in developing contexts from a techno-economic perspective, they largely neglect the market power exerted by monopolistic private generators. This paper addresses this gap by developing a bi-level game-theoretic model that enables household-generated electricity to be fed into the microgrid while explicitly accounting for the market power of a neighborhood diesel generator company (DGC). The regulator sets price and feed-in-tariff caps to maximize household economic surplus (HES), while the DGC acts as a profit-maximizing agent controlling access and supply. The model is applied to a Lebanese case study using high-resolution empirical data collected via logging devices. Results show that: (i) price and feed-in-tariff caps substantially increase HES and consistently induce significant household PV feed-in to the microgrid; (ii) higher DGC budgets or greater PV-owner penetration lead to pronounced gains in HES; and (iii) the renewable energy share reaches 60% under base conditions and approaches 100% at sufficiently high budgets or PV-owner penetration levels, compared to 0% under the status quo.
Robust and interpretable dementia diagnosis from noisy, non-stationary electroencephalography (EEG) is clinically essential yet remains challenging. To this end, we propose SeeGraph, a Sparse-Explanatory dynamic EEG-graph network that models time-evolving functional connectivity and employs a node-guided sparse edge mask to reveal the connections that drive diagnostic decisions, while remaining robust to noise and cross-site variability. SeeGraph comprises four components: (1) a dual-trajectory temporal encoder that models dynamic EEG with two streams, where node signals capture regional oscillations and edge signals capture interregional coupling; (2) a topology-aware positional encoder that derives graph-spectral Laplacian coordinates from the fused connectivity and augments node embeddings; (3) a node-guided sparse explanatory edge mask that gates the connectivity into a compact subgraph; and (4) a gated graph predictor that operates on the sparsified graph. The framework is trained using cross-entropy loss together with a sparsity regularizer on the mask, yielding noise-robust and interpretable diagnoses. The effectiveness of SeeGraph is validated on public and in-house EEG cohorts, including patients with neurodegenerative dementias and healthy controls, under both raw and noise-perturbed conditions. Its sparse, node-guided explanations highlight disease-relevant connections and align with established clinical findings on functional connectivity alterations, thereby offering transparent cues for neurological evaluation.
Large language models (LLMs) are becoming an increasingly important component of human-computer interaction, enabling users to coordinate a wide range of intelligent agents through natural language. While language-based interfaces are powerful and flexible, they implicitly assume that users can reliably produce explicit linguistic input, an assumption that may not hold for users with speech or motor impairments, e.g., Amyotrophic Lateral Sclerosis (ALS). In this work, we investigate whether neural signals can be used as an alternative input to LLMs, particularly to support socially marginalized or underserved users. We build a simple brain-LLM interface, which uses EEG signals to guide image generation models at test time. Specifically, we first train a classifier to estimate user satisfaction from EEG signals. Its predictions are then incorporated into a test-time scaling (TTS) framework that dynamically adapts model inference using neural feedback collected during user evaluation. The experiments show that EEG can predict user satisfaction, suggesting that neural activity carries information useful for real-time preference inference. These findings provide a first step toward integrating neural feedback into adaptive language-model inference, and hopefully open up new possibilities for future research on adaptive LLM interaction.
Structured illumination microscopy (SIM) has emerged as a widely adopted super-resolution fluorescence imaging modality, offering high speed, low phototoxicity, large field-of-view, and compatibility with conventional probes. However, when applied to thick or scattering specimens, conventional two-dimensional SIM (2D-SIM) suffers from the missing cone problem in its optical transfer function, resulting in prominent out-of-focus background and severe reconstruction artifacts that compromise image fidelity. Here, we present four-beam interference structured illumination microscopy (4I-SIM), which introduces additional interference orders to expand lateral frequency support and simultaneously compensate for the axial missing cone. This strategy achieves artifact-free super-resolution with intrinsic optical sectioning, effectively overcoming the fundamental limitation of 2D-SIM without additional acquisition overhead. Experimental validation across diverse thick fixed and live specimens demonstrates that 4I-SIM delivers nearly twofold lateral resolution enhancement and substantially improved sectioning compared with its 2D counterpart, achieving lateral and axial resolutions of 103 nm and 336 nm, respectively. In particular, 4I-SIM reveals mitochondrial remodeling and apoptosis under high-glucose stress with millisecond temporal resolution, features that remain obscured with conventional SIM. With minimal hardware modification, low phototoxicity, and open-source reconstruction tools, 4I-SIM establishes a practical and reproducible platform for simultaneous super-resolution and optical sectioning imaging in complex biological environments.
End-to-end automatic speech recognition often degrades on domain-specific data due to scarce in-domain resources. We propose a synthetic-data-based domain adaptation framework with two contributions: (1) a large language model (LLM)-based text augmentation pipeline with a filtering strategy that balances lexical diversity, perplexity, and domain-term coverage, and (2) phonetic respelling augmentation (PRA), a novel method that introduces pronunciation variability through LLM-generated orthographic pseudo-spellings. Unlike conventional acoustic-level methods such as SpecAugment, PRA provides phonetic diversity before speech synthesis, enabling synthetic speech to better approximate real-world variability. Experimental results across four domain-specific datasets demonstrate consistent reductions in word error rate, confirming that combining domain-specific lexical coverage with realistic pronunciation variation significantly improves ASR robustness.
Self-attention scales quadratically with sequence length, limiting transformer-based speech models on edge devices. We introduce the Learnable Pulse Accumulator (LPA), an O(n) replacement that substitutes key-query dot products with learned gating functions: content-dependent rectangular pulses, periodic windows, and position-dependent basis functions. An MSE diagnostic sweep determines per-layer replacement difficulty and ordering. Replacing 8 of 12 wav2vec2-base layers yields 10.61% word error rate (WER) on LibriSpeech test-clean, +7.24 percentage points (pp) over the 3.37% baseline, with 3.27x speedup at 120s audio on Apple M4 Pro via an optimized MLX inference path. Cross-domain validation on SepFormer speech enhancement shows all 16 intra-chunk attention layers can be replaced without collapse, suggesting the depth wall arises from linguistic computation rather than an LPA limitation. LPA's near-binary gates at inference enable dense GPU computation with no CPU-GPU synchronization, and all operations map to mobile neural accelerators.
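To make the O(n) claim above concrete, here is a toy numpy sketch of a rectangular-pulse gate: each position averages the values inside a fixed window via prefix sums, avoiding the n x n query-key matrix entirely. This is a hand-rolled illustration of the general idea only; LPA's actual gates are learned and content-dependent, and this sketch is not the paper's implementation.

```python
import numpy as np

def pulse_attention(V, w):
    """O(n) rectangular-pulse 'attention' over a 1-D value sequence V:
    position i averages V over the window [i-w, i+w], computed with a
    single cumulative sum instead of pairwise dot products."""
    n = len(V)
    c = np.concatenate([[0.0], np.cumsum(V)])     # prefix sums, c[j] = sum(V[:j])
    lo = np.maximum(np.arange(n) - w, 0)           # clipped window starts
    hi = np.minimum(np.arange(n) + w + 1, n)       # clipped window ends (exclusive)
    return (c[hi] - c[lo]) / (hi - lo)             # windowed means in O(n)
```

Because the gate is a hard window rather than a softmax, every operation here is a dense elementwise or prefix-sum primitive, which is the property that maps well to mobile accelerators.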
Deep learning dominates speech processing but relies on massive datasets, global backpropagation-guided weight updates, and produces entangled representations. Assembly Calculus (AC), which models sparse neuronal assemblies via Hebbian plasticity and winner-take-all competition, offers a biologically grounded alternative, yet prior work focused on discrete symbolic inputs. We introduce an AC-based speech processing framework that operates directly on continuous speech by combining three key contributions: (i) neural encoding that converts speech into assembly-compatible spike patterns using probabilistic mel binarisation and population-coded MFCCs; (ii) a multi-area architecture organising assemblies across hierarchical timescales and classes; and (iii) cross-area update schemes for downstream tasks. Applied to two core tasks of boundary detection and segment classification, our framework detects phone (F1=0.69) and word (F1=0.61) boundaries without any weight training, and achieves 47.5% and 45.1% accuracy on phone and command recognition. These results show that AC-based dynamical systems are a viable alternative to deep learning for speech processing.
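For readers unfamiliar with Assembly Calculus, the core "projection" operation described above can be sketched in a few lines of numpy (an illustrative toy, not the paper's framework): a stimulus drives a target area through random synapses, only the k most-activated neurons fire each round (winner-take-all), and synapses onto the winners are strengthened Hebbianly, so a stable assembly forms without any gradient-based weight training.

```python
import numpy as np

def project(stimulus, W, k, beta=0.1, rounds=10):
    """One Assembly-Calculus-style projection: repeated k-winner-take-all
    with Hebbian potentiation (synapses onto each round's winners are
    multiplied by 1+beta), returning the final winner set (the assembly)."""
    for _ in range(rounds):
        drive = W.T @ stimulus                # total input to each target neuron
        winners = np.argsort(drive)[-k:]      # k-winner-take-all competition
        W[:, winners] *= (1.0 + beta)         # Hebbian strengthening onto winners
    return np.sort(winners)

rng = np.random.default_rng(0)
n, k = 200, 20
W = (rng.random((n, n)) < 0.05).astype(float)            # sparse random connectome
stim = np.zeros(n); stim[rng.choice(n, k, replace=False)] = 1.0
assembly = project(stim, W, k)
```

Re-projecting the same stimulus recovers essentially the same assembly, since potentiation has locked in the winners; this stability is what makes assemblies usable as symbols for downstream tasks.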
Simultaneous speech-to-speech translation (SimulS2S) is essential for real-time multilingual communication, with increasing integration into meeting and streaming platforms. Despite this, SimulS2S remains underexplored in research, where current solutions often rely on resource-intensive training procedures and operate on short-form, pre-segmented utterances, failing to generalize to continuous speech. To bridge this gap, we propose SimulU, the first training-free policy for long-form SimulS2S. SimulU adopts history management and speech output selection strategies that exploit cross-attention in pre-trained end-to-end models to regulate both input history and output generation. Evaluations on MuST-C across 8 languages show that SimulU achieves a better or comparable quality-latency trade-off against strong cascaded models. By eliminating the need for ad-hoc training, SimulU offers a promising path to end-to-end SimulS2S in realistic, long-form scenarios.
Many registration problems are ill-posed in homogeneous or noisy regions, and dense voxel-wise decoders can be unnecessarily high-dimensional. A sparse control-point parameterisation provides a compact, smooth deformation representation while reducing memory and improving stability. This work investigates how many control points are required for learning-based registration networks. We present GridReg, a learning-based registration framework that replaces dense voxel-wise decoding with displacement predictions at a sparse grid of control points. This design substantially cuts the parameter count and memory while retaining registration accuracy. Multiscale 3D encoder feature maps are flattened into a 1D token sequence with positional encoding to retain spatial context. The model then predicts a sparse gridded deformation field using a cross-attention module. We further introduce grid-adaptive training, enabling an adaptive model to operate at multiple grid sizes at inference without retraining. This work quantitatively demonstrates the benefits of using sparse grids. Using three data sets for registering the prostate gland, pelvic organs, and neurological structures, the results show a significant improvement from the grid-controlled displacement field. Moreover, the proposed approach achieves superior registration performance at similar or lower computational cost than existing algorithms that predict dense displacement fields (DDFs) or displacements sampled at scattered key points.
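The compactness argument above is easy to see in one dimension: a handful of control-point displacements, linearly interpolated, already parameterise a smooth dense field with far fewer degrees of freedom than predicting every voxel. A minimal numpy sketch (illustrative only; GridReg uses a 3D grid and a cross-attention decoder):

```python
import numpy as np

def dense_from_grid(ctrl, out_len):
    """Upsample 1-D control-point displacements to a dense displacement
    field by linear interpolation. The network only has to predict
    len(ctrl) values instead of out_len, an out_len/len(ctrl)-fold
    reduction in decoder output dimensionality."""
    xs = np.linspace(0, len(ctrl) - 1, out_len)   # dense sample positions
    return np.interp(xs, np.arange(len(ctrl)), ctrl)
```

In 3D the same idea applies per axis (e.g. trilinear or B-spline interpolation of a control grid), which is also what keeps the resulting deformation smooth by construction.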
Speech Large Language Models (SpeechLLMs) process spoken input directly, retaining cues such as accent and perceived gender that were previously removed in cascaded pipelines. This introduces speaker identity dependent variation in responses. We present a large-scale intersectional evaluation of accent and gender bias in three SpeechLLMs using 2,880 controlled interactions across six English accents and two gender presentations, keeping linguistic content constant through voice cloning. Using pointwise LLM-judge ratings, pairwise comparisons, and Best-Worst Scaling with human validation, we detect consistent disparities. Eastern European-accented speech receives lower helpfulness scores, particularly for female-presenting voices. The bias is implicit: responses remain polite but differ in helpfulness. While LLM judges capture the directional trend of these biases, human evaluators exhibit significantly higher sensitivity, uncovering sharper intersectional disparities.
Ultrasound imaging is an essential first-line tool for assessing hepatic steatosis. While conventional B-mode ultrasound imaging has limitations in providing detailed tissue characterization, ultrasound Nakagami imaging holds promise for visualizing and quantifying tissue scattering in backscattered signals, with potential applications in fat fraction analysis. However, existing methods for Nakagami imaging struggle with optimal window size selection and suffer from estimator instability, leading to degraded image resolution. To address these challenges, we propose a novel method called UNICORN (Ultrasound Nakagami Imaging via Score Matching and Adaptation), which offers an accurate, closed-form estimator for Nakagami parameter estimation based on the score function of the ultrasound envelope signal. Unlike methods that visualize only specific regions of interest (ROI) and estimate parameters within fixed window sizes, our approach provides comprehensive parameter mapping via a pixel-by-pixel estimator, resulting in high-resolution imaging. We demonstrated that our proposed estimator effectively assesses hepatic steatosis and provides visual distinction in the backscattered statistics associated with this condition. Through extensive experiments using real envelope data from patients, we validated that UNICORN enables clinical detection of hepatic steatosis and exhibits robustness and generalizability.
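For context on what is being estimated above, the conventional windowed baseline that UNICORN improves on is the moment-based (inverse normalised variance) estimator of the Nakagami shape parameter m. A numpy sketch of that baseline follows (this is the classical estimator, not UNICORN's score-matching one); envelope samples are simulated as the square root of gamma variates, which is the standard way to draw Nakagami-distributed data.

```python
import numpy as np

def nakagami_m_moments(x):
    """Classical moment estimator of the Nakagami shape parameter m
    from envelope samples x:  m = E[x^2]^2 / Var(x^2).  Applied per
    window, its variance is what limits resolution in Nakagami imaging."""
    p = x.astype(float) ** 2
    return float(np.mean(p) ** 2 / np.var(p))

# Simulate a Nakagami(m, Omega) envelope: X = sqrt(Gamma(m, Omega/m)).
rng = np.random.default_rng(1)
m_true, omega = 2.0, 1.0
x = np.sqrt(rng.gamma(shape=m_true, scale=omega / m_true, size=200_000))
m_hat = nakagami_m_moments(x)
```

With small per-pixel windows the variance of this estimator grows sharply, which is the instability motivating a closed-form, score-based alternative.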
Automatic speech recognition systems based on neural networks are vulnerable to adversarial attacks that alter transcriptions in a malicious way. Recent works in this field have focused on making attacks work in over-the-air scenarios; however, such attacks are typically detectable by human listeners, limiting their potential applications. In the present work we explore different approaches to making over-the-air attacks less detectable, as well as the impact these approaches have on the attacks' effectiveness.
Human listeners exhibit the remarkable ability to segregate a desired sound from complex acoustic scenes through selective auditory attention, motivating the study of Targeted Sound Detection (TSD). The task requires detecting and localizing a target sound in a mixture when a reference audio of that sound is provided. Prior approaches rely on generating a sound-discriminative conditional embedding vector for the reference and pairing it with a mixture encoder, jointly optimized with a multi-task learning approach. In this work, we propose a unified encoder architecture that processes both the reference and mixture audio within a shared representation space, promoting stronger alignment while reducing architectural complexity. This design choice not only simplifies the overall framework but also enhances generalization to unseen classes. Following the multi-task training paradigm, our method achieves substantial improvements over prior approaches, surpassing existing methods and establishing a new state-of-the-art benchmark for targeted sound detection, with a segment-level F1 score of 83.15% and an overall accuracy of 95.17% on the URBAN-SED dataset.
In this work, we study the design of receivers for uplink multi-user systems, aiming to estimate both the channel and the transmitted symbols. We consider two estimation strategies: (i) a joint estimation approach, where the channel and symbols are estimated simultaneously, and (ii) a sequential estimation approach, where the channel is first estimated and then used for symbol detection. For both strategies, we derive the Cramér-Rao Bound (CRB) for symbol estimation to characterize fundamental performance limits. When efficient receivers achieving the CRB exist, these bounds provide accurate lower bounds on the mutual information. In general, however, such receivers may not be available, and we instead use these same CRB-based metrics as practical proxies for achievable throughput. Leveraging tools from random matrix theory (RMT), we analyze the asymptotic behavior of these lower bounds under various asymptotic regimes for both estimation strategies. This analysis enables the derivation of generic power allocation guidelines that asymptotically maximize the proxy metrics. Simulation results confirm the accuracy of the asymptotic expressions and their effectiveness in guiding resource allocation decisions.
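As a concrete reference point for the CRB-based proxies discussed above, consider the textbook linear Gaussian model y = Hx + n with n ~ N(0, sigma^2 I), for which the CRB is sigma^2 (H^T H)^{-1} and the least-squares estimator is efficient, i.e. its covariance attains the bound exactly. The numpy sketch below (an illustrative special case, not the paper's multi-user setup) verifies this by Monte Carlo.

```python
import numpy as np

def crb_linear_gaussian(H, sigma2):
    """Cramer-Rao bound for estimating x from y = H x + n, n ~ N(0, sigma2 I):
    the inverse Fisher information, sigma2 * (H^T H)^{-1}."""
    return sigma2 * np.linalg.inv(H.T @ H)

rng = np.random.default_rng(0)
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # 3 observations, 2 unknowns
sigma2, trials = 0.5, 50_000
x = np.array([1.0, -2.0])

# Least-squares estimates over many noise realisations.
y = (H @ x) + rng.normal(0.0, np.sqrt(sigma2), size=(trials, 3))
est = y @ np.linalg.pinv(H).T
emp_cov = np.cov(est.T)   # empirical estimator covariance, should match the CRB
```

When an efficient estimator exists, as here, the CRB is tight and the resulting mutual-information lower bounds are accurate; otherwise the same quantity still serves as the tractable proxy the abstract describes.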
Data-driven model predictive control based on Willems' fundamental lemma has proven effective for linear systems, but extending stability guarantees to nonlinear systems remains an open challenge. In this paper, we establish conditions under which data-driven MPC, applied directly to input-output data from a nonlinear system, yields practical exponential stability. The key insight is that the existence of an approximate Koopman linear embedding certifies that the nonlinear data can be interpreted as noisy data from a linear time-invariant system, enabling the application of existing robust stability theories. Crucially, the Koopman embedding serves only as a theoretical certificate; the controller itself operates on raw nonlinear data without knowledge of the lifting functions. We further show that the proportional structure of the embedding residual can be exploited to obtain an ultimate bound that depends only on the irreducible offset, rather than the worst-case embedding error. The framework is demonstrated on a synchronous generator connected to an infinite bus, for which we construct an explicit physics-informed embedding with error bounds.
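The data-driven MPC machinery referenced above rests on Willems' fundamental lemma, which represents all trajectories of an LTI system via the column span of a block-Hankel matrix built from one persistently exciting input-output record. A minimal numpy sketch of the Hankel construction and the rank (persistency of excitation) check, for a scalar signal (illustrative only; the paper works with input-output data from a nonlinear system):

```python
import numpy as np

def hankel(u, L):
    """Depth-L Hankel matrix of a scalar signal u: column j holds
    the length-L window u[j:j+L]. Persistency of excitation of order L
    requires this matrix to have full row rank L."""
    T = len(u)
    return np.column_stack([u[j:j + L] for j in range(T - L + 1)])

rng = np.random.default_rng(0)
u = rng.normal(size=50)   # a generic random input is persistently exciting
H = hankel(u, 5)
```

In the paper's setting, the Koopman embedding only certifies that data from the nonlinear plant behaves like noisy LTI data in such a Hankel representation; the controller itself never needs the lifting functions.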
Ensuring the safety of control systems often requires the satisfaction of constraints on states (such as position or velocity), control inputs (such as force), and a mixture of states and inputs (such as power that depends on both velocity and force). This paper presents a safety-critical control framework for enforcing mixed state-input constraints through a generalization of backup control barrier functions (backup CBFs). First, we extend the backup CBF approach to maintain multiple decoupled state and input constraints using a single pair of backup set and backup controller. Second, we address mixed state-input constraints by converting them into state constraints using a projection from the state-input space to the state space along the backup controller. In the special case of decoupled state and input constraints, the proposed method simplifies the synthesis of backup CBFs by eliminating the need for saturating backup control laws. Finally, we demonstrate the efficacy of the proposed method on an inverted pendulum example, where constraints on the angle (state), torque (input), and power (mixture of state and input) are satisfied simultaneously.
Nonlinear parameter-varying (NPV) systems are a class of nonlinear systems whose dynamics explicitly depend on time-varying external parameters, making them suitable for modeling real-world systems with dynamics variations. Traditional synthesis methods for NPV systems, such as sum-of-squares (SOS) optimization, are only applicable to control-affine systems, face scalability challenges, and often lead to conservative results due to structural restrictions. To address these limitations, we propose Neural-NPV, a two-stage learning-based framework that leverages neural networks to jointly synthesize a PD controller and a PD Lyapunov function for an NPV system under input constraints. In the first stage, we utilize a computationally cheap, gradient-based counterexample-guided procedure to synthesize an approximately valid PD Lyapunov function and a PD controller. In the second stage, a level-set guided refinement is then conducted to obtain a valid Lyapunov function and controller while maximizing the robust region of attraction (R-ROA). We demonstrate the advantages of Neural-NPV in terms of applicability, performance, and scalability compared to SOS-based methods through numerical experiments involving a simple inverted pendulum with one scheduling parameter and a quadrotor system with three scheduling parameters.
Converter-based generators and loads are growing in prevalence on power grids across the globe. The rise of these resources necessitates controllers that handle the power electronic devices' strict current limits without jeopardizing stability or overly constraining behavior. Existing controllers often employ complex, cascaded control loop architectures to saturate currents, but these controllers are challenging to tune properly and can destabilize following large disturbances. In this paper, we extend previous analysis to prove the feasible output region of a grid-connected converter is convex regardless of filter topology. We then formulate a convex optimal control problem from which we derive a projected gradient descent-based controller with convergence guarantees. This approach drives the converter toward optimality in real-time and differs from conventional control strategies that regulate converter outputs around predefined references regardless of surrounding grid conditions. Simulation results demonstrate safe and stabilizing behavior of the proposed controller, in both single-converter infinite-bus systems and multi-converter networks.
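The projected-gradient controller described above relies on the feasible set being convex: each step moves along the cost gradient and then projects back onto the set, and convergence to the constrained optimum follows. A minimal numpy sketch with a disc-shaped feasible set standing in for a current limit (illustrative only; the paper's feasible region, cost, and dynamics differ):

```python
import numpy as np

def proj_disc(x, r):
    """Euclidean projection onto the convex disc {x : ||x|| <= r},
    a toy stand-in for a converter current-magnitude limit."""
    nrm = np.linalg.norm(x)
    return x if nrm <= r else (r / nrm) * x

def projected_gradient(x0, grad, proj, step, iters):
    """Projected gradient descent: gradient step, then project back
    onto the feasible set."""
    x = x0
    for _ in range(iters):
        x = proj(x - step * grad(x))
    return x

target = np.array([3.0, 4.0])            # unconstrained optimum, outside the limit
grad = lambda x: x - target               # gradient of 0.5 * ||x - target||^2
x_star = projected_gradient(np.zeros(2), grad, lambda x: proj_disc(x, 1.0), 0.5, 200)
```

The iterate settles at the projection of the unconstrained optimum onto the disc, i.e. the closest safe operating point, which is exactly the behavior one wants from a current-limited controller.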
This paper presents a new dynamic integral quadratic constraint (IQC) for the repeated Rectified Linear Unit (ReLU). These dynamic IQCs can be used to analyze stability and induced $\ell_2$-gain performance of discrete-time, recurrent neural networks (RNNs) with ReLU activation functions. These analysis conditions can be incorporated into learning-based controller synthesis methods, which currently rely on static IQCs. We show that our proposed dynamic IQCs for repeated ReLU form a superset of the dynamic IQCs for repeated, slope-restricted nonlinearities. We also prove that the $\ell_2$-gain bounds are nonincreasing with respect to the horizon used in the dynamic IQC filter. A numerical example using a simple (academic) RNN shows that our proposed IQCs lead to less conservative bounds than existing IQCs.
Polarization imaging is a technique that creates a pixel map of the polarization state in a scene. Although invisible to the human eye, polarization can assist various sensing and computer vision tasks. Existing polarization cameras use spatial or temporal multiplexing, which increases the camera volume, weight, cost, or all of the above. Recent lensless imaging approaches, such as DiffuserCam, have demonstrated that compact imaging systems can be realized by replacing the lens with a coding element and performing computational reconstruction. In this work, we propose a compact lensless polarization camera composed of a diffuser and a simple striped polarization mask. By combining this optical design with a reconstruction algorithm that explicitly models the polarization-encoded lensless measurements, four linear polarization images are recovered from a single snapshot. Our results demonstrate the potential of lensless approaches for polarization imaging and reveal the physical factors that govern reconstruction quality, guiding the development of high-quality practical systems.
Linear-quadratic Gaussian games provide a framework for modeling strategic interactions in multi-agent systems, where agents must estimate system states from noisy observations while also making decisions to optimize a quadratic cost. However, these formulations usually require agents to utilize the full set of available observations when forming their state estimates, which can be unrealistic in large-scale or resource-constrained settings. In this paper, we consider linear-quadratic Gaussian games with sparse interagent observations. To enforce sparsity in the estimation stage, we design a distributed estimator that balances estimation effectiveness with interagent measurement sparsity via a group lasso problem, while agents implement feedback Nash strategies based on their state estimates. We provide sufficient conditions under which the sparse estimator is guaranteed to trigger a corrective reset to the optimal estimation gain, ensuring that estimation quality does not degrade beyond a level determined by the regularization parameters. Simulations on a formation game show that the proposed approach yields a significant reduction in communication resources consumed while only minimally affecting the nominal equilibrium trajectories.
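The sparsity mechanism mentioned above, a group lasso over estimation gains, is typically implemented via the group soft-thresholding proximal operator: each group of gain entries is shrunk toward zero and dropped entirely once its norm falls below the regularization level, which is what zeroes out whole inter-agent measurement channels. A numpy sketch of that operator (illustrative; the paper's estimator design has further structure):

```python
import numpy as np

def group_soft_threshold(K, groups, lam):
    """Prox of lam * sum_g ||K_g||_2 (group lasso): scale each group of
    entries by (1 - lam/||K_g||), or set the whole group to zero when
    ||K_g|| <= lam, inducing group-wise sparsity."""
    out = K.astype(float).copy()
    for g in groups:
        nrm = np.linalg.norm(out[g])
        out[g] = 0.0 if nrm <= lam else (1.0 - lam / nrm) * out[g]
    return out
```

Groups with small aggregate gain, i.e. measurements contributing little to estimation quality, are eliminated as a block rather than entry by entry, which is why the penalty trades estimation effectiveness against whole measurement links.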
This paper presents a comprehensive physical layer security (PLS) framework for fluid antenna system (FAS)-aided short-packet communications under the variable block-correlation model (VBCM). We consider a downlink wiretap scenario in which a base station transmits confidential short packets to a legitimate receiver user (RU) in the presence of an eavesdropper user (EU), where both the RU and EU are equipped with fluid antennas. Unlike existing FAS security analyses that rely on constant block-correlation models or infinite-blocklength assumptions, we incorporate the VBCM to accurately capture the non-uniform spatial correlation structure inherent in practical FAS deployments. By employing a piecewise linear approximation of the decoding error probability and Gauss-Chebyshev quadrature, we derive closed-form and asymptotic expressions for the average achievable secrecy throughput (AAST). We further prove that the AAST is monotonically non-decreasing in the number of RU ports, which reduces the three-dimensional joint optimization of transmit power, blocklength, and port number to a two-dimensional grid search (GS). Numerical results demonstrate that the FAS-aided system achieves up to an order-of-magnitude secrecy throughput improvement over conventional fixed-position antenna systems, and reveal that blocklength selection is the most critical design parameter in the joint optimization.
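Gauss-Chebyshev quadrature, used in the derivation above, replaces an integral against the Chebyshev weight by a uniformly weighted sum at cosine-spaced nodes. A minimal standalone sketch of the first-kind rule (generic numerics, not the paper's AAST expressions):

```python
import math

def gauss_chebyshev(f, n):
    """First-kind Gauss-Chebyshev quadrature: approximates
    integral_{-1}^{1} f(x)/sqrt(1-x^2) dx by (pi/n) * sum f(x_k)
    at nodes x_k = cos((2k-1)pi/(2n)); exact for f a polynomial
    of degree up to 2n-1."""
    nodes = [math.cos((2 * k - 1) * math.pi / (2 * n))
             for k in range(1, n + 1)]
    return (math.pi / n) * sum(f(x) for x in nodes)

# integral of x^2 / sqrt(1-x^2) over [-1, 1] equals pi/2
approx = gauss_chebyshev(lambda x: x * x, n=16)
```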
This work considers the problem of using multiple aerial carriers to hold a cable-suspended load while remaining in periodic motion at all times. Using a novel differential geometric perspective, it is shown that the problem may be recast as that of finding an immersion of the unit circle into the smooth manifold of admissible configurations. Additionally, this manifold is shown to be path connected under a mild assumption on the attachment points of the carriers to the load. Based on these ideas, a family of simple linear solutions to the original problem is presented that overcomes the constraints of alternative solutions previously proposed in the literature. Simulation results demonstrate the flexibility of the theory in identifying suitable solutions.
It has been shown that the channel state information (CSI) of a Wi-Fi system can be exploited to localize Wi-Fi devices or track the trajectory of a moving target. In the existing literature, the two sensing tasks are treated separately, and some prior information is usually required, such as signal fingerprints or the locations of anchor devices in the Wi-Fi system. In the proposed WiSLAT method, however, it is shown that the two sensing tasks can assist each other, so that the requirement for prior system information can be eliminated. Specifically, in a Wi-Fi system with an access point (AP) and at least three stations whose locations are unknown, WiSLAT detects the Doppler frequencies of the downlink CSI at the stations, from which the stations' locations and the target's trajectory with respect to the AP can be jointly inferred. The joint detection is conducted by searching for the stations' locations and the target's trajectory whose corresponding Doppler frequencies best fit the observed ones. Due to the tremendous non-convex search space, a low-complexity sub-optimal algorithm integrating alternating optimization, an extended Kalman filter, and density-based clustering is proposed in WiSLAT. Experiments conducted in indoor environments demonstrate the effectiveness of WiSLAT, achieving a median trajectory-tracking error of 0.68 m.
Modern cyber-physical systems are complex, and requirements are often written in Signal Temporal Logic (STL). Writing the right STL is difficult in practice; engineers benefit from concrete executions that illustrate what a specification actually admits. Trace synthesis addresses this need, but a single witness rarely suffices to understand intent or explore edge cases - diverse satisfying behaviors are far more informative. We introduce diversified trace synthesis: the automatic generation of sets of behaviorally diverse traces that satisfy a given STL formula. Building on a MILP encoding of STL and system model, we formalize three complementary diversification objectives - Boolean distance, random Boolean distance, and value distance - all captured by an objective function and solved iteratively. We implement these ideas in STLts-Div, a lightweight Python tool that integrates with Gurobi.
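The Boolean-distance objective can be illustrated outside the MILP: evaluate each atomic predicate of the formula at every time step, and count the (time step, predicate) pairs on which two traces disagree. A toy sketch (the predicates and traces are hypothetical, not STLts-Div's encoding):

```python
def boolean_signature(trace, predicates):
    """Evaluate each atomic predicate at every time step of a trace,
    producing a flat Boolean satisfaction vector."""
    return [bool(p(x)) for x in trace for p in predicates]

def boolean_distance(trace_a, trace_b, predicates):
    """Hamming distance between the Boolean signatures of two traces:
    the number of (time step, predicate) pairs on which they disagree."""
    sa = boolean_signature(trace_a, predicates)
    sb = boolean_signature(trace_b, predicates)
    return sum(a != b for a, b in zip(sa, sb))

# Two predicates over a scalar signal: x > 0 and x > 2
preds = [lambda x: x > 0, lambda x: x > 2]
t1 = [1.0, 3.0, -1.0]
t2 = [1.0, 1.0, 3.0]
d = boolean_distance(t1, t2, preds)   # disagreements at steps 1 and 2
```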
Current reconfigurable intelligent surface (RIS)-aided near-field (NF) localization methods assume the RIS position is known a priori, which limits their practical applicability. This paper applies a hybrid RIS (HRIS) at an unknown position to locate non-line-of-sight (NLOS) NF targets. To this end, we first propose a two-stage gridless localization framework for achieving HRIS self-localization, and then determine the positions of the NF targets. In the first stage, we use the NF Fresnel approximation to convert the signal model into a virtual far-field model through delay-based cross-correlation of centrally symmetric HRIS elements. Such a conversion naturally extends the aperture of the virtual array. A single-snapshot decoupled atomic norm minimization (DANM) algorithm is then proposed to locate an NF target relative to the HRIS, which includes a two-dimensional (2-D) direction of arrival (DOA) estimation with automatic pairing, the multiple signal classification (MUSIC) method for range estimation, and a total least squares (TLS) method to eliminate the Fresnel approximation error. In the second stage, we leverage the unique capability of HRIS in simultaneous sensing and reflection to estimate the HRIS-to-base station (BS) direction vectors using atomic norm minimization (ANM), and derive the three-dimensional (3-D) HRIS position with two BSs via the least squares (LS)-based geometric triangulation. Furthermore, we propose a semidefinite relaxation (SDR)-based HRIS phase optimization method to enhance the received signal power at the BSs, thereby improving the HRIS localization accuracy, which, in turn, enhances NF target positioning. The Cramer-Rao bound (CRB) for the NF target parameters and the position error bound (PEB) for the HRIS coordinates are derived as performance benchmarks.
We study a target coverage problem in which a team of sensing agents, operating under limited communication, must collaboratively monitor targets that may be adaptively repositioned by an attacker. We model this interaction as a zero-sum game between the sensing team (known as the defender) and the attacker. However, computing an exact Nash equilibrium (NE) for this game is computationally prohibitive as the action space of the defender grows exponentially with the number of sensors and their possible orientations. Exploiting the submodularity property of the game's utility function, we propose a distributed framework that enables agents to self-configure their communication neighborhoods under bandwidth constraints and collaboratively maximize the target coverage. We establish theoretical guarantees showing that the resulting sensing strategies converge to an approximate NE of the game. To our knowledge, this is the first distributed, communication-aware approach that scales effectively for games with combinatorial action spaces while explicitly incorporating communication constraints. To this end, we leverage the distributed bandit-submodular optimization framework and the notion of Value of Coordination that were introduced in [1]. Through simulations, we show that our approach attains near-optimal game value and higher target coverage compared to baselines.
Urban flood emergency response increasingly relies on infrastructure impact forecasts rather than hazard variables alone. However, real-time predictions are unreliable due to biased rainfall, incomplete flood knowledge, and sparse observations. Conventional open-loop forecasting propagates impacts without adjusting the system state, causing errors during critical decisions. This study presents CRAF (Crowdsourcing-Enhanced Real-Time Awareness and Forecasting), a physics-informed, closed-loop framework that converts sparse human-sensed evidence into rolling, decision-grade impact forecasts. By coupling physics-based simulation learning with crowdsourced observations, CRAF infers system conditions from incomplete data and propagates them forward to produce multi-step, real-time predictions of zone-level building functionality loss without online retraining. This closed-loop design supports continuous state correction and forward prediction under weakly structured data with low-latency operation. Offline evaluation demonstrates stable generalization across diverse storm scenarios. In operational deployment during Typhoon Haikui (2023) in Fuzhou, China, CRAF reduces 1-3 hour-ahead forecast errors by 84-95% relative to fixed rainfall-driven forecasting and by 73-80% relative to updated rainfall-driven forecasting, while limiting computation to 10 minutes per update cycle. These results show that impact-state alignment, rather than hazard refinement alone, is essential for reliable real-time decision support, providing a pathway toward operational digital twins for resilient urban infrastructure systems.
To address the challenges of high-dimensional channel estimation and underutilized spatial correlations among users in holographic MIMO (HMIMO) systems, this paper proposes a joint graph-cut algorithm for multi-user channel estimation in the wavenumber domain. The size of the conventional angular domain channel matrix increases with the number of antennas in densely-spaced HMIMO. Therefore, user channels are projected into the wavenumber domain via a Fourier harmonic transform, revealing their inherent clustered sparsity and exploiting common scatterer clusters among users. Subsequently, a joint graph-cut channel estimation (JGC-CE) algorithm based on multi-user common supports is designed. In each iteration, the algorithm first partitions user clusters to extract shared supports. Then for each user, it performs users' individual graph update and channel estimation to reconstruct the channel matrix. Simulation results demonstrate that the proposed method outperforms independent estimation schemes for individual users in accuracy while reducing pilot length.
Reliable Sound Source Localization (SSL) plays an essential role in many downstream tasks, where informed decision making depends not only on accurate localization but also on the confidence in each estimate. This need for reliability becomes even more pronounced in challenging conditions, such as reverberant environments and multi-source scenarios. However, existing SSL methods typically provide only point estimates, offering limited or no Uncertainty Quantification (UQ). We leverage the Conformal Prediction (CP) framework and its extensions for controlling general risk functions to develop two complementary UQ approaches for SSL. The first assumes that the number of active sources is known and constructs prediction regions that cover the true source locations. The second addresses the more challenging setting where the source count is unknown, first reliably estimating the number of active sources and then forming corresponding prediction regions. We evaluate the proposed methods on extensive simulations and real-world recordings across varying reverberation levels and source configurations. Results demonstrate reliable finite-sample guarantees and consistent performance for both known and unknown source-count scenarios, highlighting the practical utility of the proposed frameworks for uncertainty-aware SSL.
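The calibration step at the heart of split conformal prediction is short: take a finite-sample-corrected quantile of held-out nonconformity scores and inflate the point estimate by it. A one-dimensional sketch (the scores below are synthetic, and the paper's actual prediction regions are spatial rather than scalar intervals):

```python
import math

def conformal_quantile(scores, alpha):
    """Finite-sample-corrected (1-alpha) quantile of calibration
    nonconformity scores: the ceil((n+1)(1-alpha))-th smallest."""
    n = len(scores)
    k = math.ceil((n + 1) * (1 - alpha))
    return sorted(scores)[min(k, n) - 1]

def prediction_interval(point_estimate, qhat):
    """Prediction region for a scalar location when the score is the
    absolute residual: all values within qhat of the estimate."""
    return (point_estimate - qhat, point_estimate + qhat)

# Calibration residuals |y - y_hat| from held-out examples
cal_scores = [0.1, 0.4, 0.2, 0.3, 0.5, 0.25, 0.15, 0.35, 0.45]
qhat = conformal_quantile(cal_scores, alpha=0.1)
lo, hi = prediction_interval(2.0, qhat)
```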
Velopharyngeal dysfunction (VPD) is characterized by inadequate velopharyngeal closure during speech and often causes hypernasality and reduced intelligibility. Although speech-based machine learning models can perform well under standardized clinical recording conditions, their performance often drops in real-world settings because of domain shift caused by differences in devices, channels, noise, and room acoustics. To improve robustness, we propose a two-stage framework for VPD screening. First, a nasality-focused speech representation is learned by supervised contrastive pre-training on an auxiliary corpus with phoneme alignments, using oral-context versus nasal-context supervision. Second, the encoder is frozen and used with lightweight classifiers on 0.5-second speech chunks, whose probabilities are aggregated to produce recording-level decisions with a fixed threshold. On an in-domain clinical cohort of 82 subjects, the proposed method achieved perfect recording-level screening performance (macro-F1 = 1.000, accuracy = 1.000). On a separate out-of-domain set of 131 heterogeneous public Internet recordings, large pretrained speech representations degraded substantially, while MFCC was the strongest baseline (macro-F1 = 0.612, accuracy = 0.641). The proposed method achieved the best out-of-domain performance (macro-F1 = 0.679, accuracy = 0.695), improving on the strongest baseline under the same evaluation protocol. These results suggest that learning a nasality-focused representation before clinical classification can reduce sensitivity to recording artifacts and improve robustness for deployable speech-based VPD screening.
Image registration is an ill-posed dense vision task, where multiple solutions achieve similar loss values, motivating probabilistic inference. Variational inference has previously been employed to capture these distributions; however, restrictive assumptions about the posterior form can lead to poor characterisation, overconfidence, and low-quality samples. More flexible posteriors are typically bottlenecked by the complexity of the high-dimensional covariance matrices required for dense 3D image registration. In this work, we present a memory- and computationally efficient inference method, Structured SIR, that enables expressive, multi-modal characterisation of uncertainty with high-quality samples. We propose the use of a Sampled Importance Resampling (SIR) algorithm with a novel memory-efficient high-dimensional covariance parameterisation as the sum of a low-rank covariance and a sparse, spatially structured Cholesky precision factor. This structure enables capturing complex spatial correlations while remaining computationally tractable. We evaluate the efficacy of this approach in 3D dense image registration of brain MRI data, which is a very high-dimensional problem. We demonstrate that our proposed method produces uncertainty estimates that are significantly better calibrated than those produced by variational methods, while achieving equivalent or better accuracy. Crucially, we show that the model yields highly structured multi-modal posterior distributions, enabling effective and efficient uncertainty quantification.
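The SIR step itself is a generic algorithm, independent of the proposed covariance parameterisation: draw samples from a proposal, weight them by the target-to-proposal density ratio, and resample with those weights. A low-dimensional sketch (a 1-D Gaussian example, not the registration posterior):

```python
import numpy as np

def sampled_importance_resampling(log_target, proposal_sample,
                                  log_proposal, n_proposals, n_keep, rng):
    """Generic SIR: draw n_proposals samples from the proposal, weight
    each by the target/proposal density ratio (in log space for
    numerical stability), then resample n_keep with those weights."""
    xs = proposal_sample(n_proposals, rng)
    log_w = np.array([log_target(x) - log_proposal(x) for x in xs])
    w = np.exp(log_w - log_w.max())        # stabilise before normalising
    w /= w.sum()
    idx = rng.choice(n_proposals, size=n_keep, p=w)
    return xs[idx]

# Target: standard normal; proposal: an over-dispersed normal N(0, 3^2)
rng = np.random.default_rng(0)
samples = sampled_importance_resampling(
    log_target=lambda x: -0.5 * x**2,
    proposal_sample=lambda n, r: r.normal(0.0, 3.0, size=n),
    log_proposal=lambda x: -0.5 * (x / 3.0)**2,
    n_proposals=20000, n_keep=5000, rng=rng)
# The resampled set approximates draws from the standard normal target.
```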
This paper introduces PowerDAG, an agentic AI system for automating complex distribution-grid analysis. We address the reliability challenges of state-of-the-art agentic systems in automating complex engineering workflows by introducing two innovative active mechanisms: (i) \textbf{adaptive retrieval}, which uses a similarity-decay cutoff algorithm to dynamically select the most relevant annotated exemplars as context, and (ii) \textbf{just-in-time (JIT) supervision}, which actively intercepts and corrects tool-usage violations during execution. On a benchmark of unseen distribution grid analysis queries, PowerDAG achieves a 100\% success rate with GPT-5.2 and 94.4--96.7\% with smaller open-source models, outperforming base ReAct (41--88\%), LangChain (30--90\%), and CrewAI (9--41\%) baselines by margins of 6--50 percentage points.
The integration of artificial intelligence into next-generation wireless networks necessitates the accurate construction of radio maps (RMs) as a foundational prerequisite for electromagnetic digital twins. A RM provides the digital representation of the wireless propagation environment, mapping complex geographical and topological boundary conditions to critical spatial-spectral metrics that range from received signal strength to full channel state information matrices. This tutorial presents a comprehensive survey of learning-based RM construction, systematically addressing three intertwined dimensions: data, paradigms, and physics-awareness. From the data perspective, we review physical measurement campaigns, ray tracing simulation engines, and publicly available benchmark datasets, identifying their respective strengths and fundamental limitations. From the paradigm perspective, we establish a core taxonomy that categorizes RM construction into source-aware forward prediction and source-agnostic inverse reconstruction, and examine five principal neural architecture families spanning convolutional neural networks, vision transformers, graph neural networks, generative adversarial networks, and diffusion models. We further survey optics-inspired methods adapted from neural radiance fields and 3D Gaussian splatting for continuous wireless radiation field modeling. From the physics-awareness perspective, we introduce a three-level integration framework encompassing data-level feature engineering, loss-level partial differential equation regularization, and architecture-level structural isomorphism. Open challenges including foundation model development, physical hallucination detection, and amortized inference for real-time deployment are discussed to outline future research directions.
To reduce CO2 emissions and tackle increasing fuel costs, the aviation industry is swiftly moving towards the electrification of aircraft. From the viewpoint of systems and control, a key challenge brought by this transition corresponds to the management and safe operation of the propulsion system's onboard electrical power distribution network. In this work, for a series-hybrid-electric propulsion system, we propose a distributed adaptive controller for regulating the voltage of a DC bus that energizes the electricity-based propulsion system. The proposed controller -- whose design is based on principles of back-stepping, adaptive, and passivity-based control techniques -- also enables the proportional sharing of the electric load among multiple converter-interfaced sources, which reduces the likelihood of over-stressing individual sources. Compared to existing control strategies, our method ensures stable, convergent, and accurate voltage regulation and load-sharing even if the effects of power lines of unknown resistances and inductances are considered. The performance of the proposed control scheme is experimentally validated and compared to state-of-the-art controllers in a power hardware-in-the-loop (PHIL) environment.
This article presents a fully 3D-printed wideband metasurface folded reflectarray antenna (MFRA) operating in the millimeter-wave n257 band. The proposed MFRA integrates a novel polarization-rotating reflective metasurface (RMS), a compact embedded horn feed, and a polarization-selective metasurface polarization grid (MPG), all fabricated using a low-cost in-house 3D-printed method. Unlike conventional PCB-based FRAs constrained to planar unit-cell geometries, the proposed anisotropic meta-element design exploits full three-dimensional dielectric control by tailoring varying unit-cell heights. This volumetric tuning, combined with the spatial distribution of the meta-elements, enables phase compensation exceeding $400^{\circ}$ across the aperture, supporting robust wideband performance. An MFRA prototype is in-house fabricated and experimentally validated. Measured results agree well with simulations, achieving a $-10$ dB impedance bandwidth of 20.7\% (26--32 GHz) and a peak realized gain of 31.1 dBi at 28.2 GHz. The antenna exhibits sidelobe levels below $-20$ dB, cross-polarization below $-30$ dB, and a compact height-to-diameter ratio of 0.20. Stable pencil beams with an average HPBW of $3.7^{\circ}$ are maintained across the operating band. To further validate the robustness of the proposed in-house designed MFRA, a commercially manufactured RMS was also obtained, whose measured performance shows excellent agreement with the in-house 3D-printed version, confirming a cost-effective rapid-prototyping antenna solution. The proposed MFRA is a cost-effective solution for beyond 5G and 6G high-gain point-to-point mmWave wireless applications, such as fixed wireless access, near field communication, and beam focusing.
Objective: To characterize lobar and segmental airway volume differences between systemic lupus erythematosus (SLE) patients with interstitial lung disease (ILD) and those without ILD (non-ILD) using a deep learning-based approach on non-contrast chest high-resolution CT (HRCT). Methods: A retrospective analysis was conducted on 106 SLE patients (27 SLE-ILD, 79 SLE-non-ILD) who underwent HRCT. A customized deep learning framework based on the U-Net architecture was developed to automatically segment airway structures at the lobar and segmental levels via HRCT. Volumetric measurements of lung lobes and segments derived from the segmentations were statistically compared between the two groups using two-sample t-tests (significance threshold: p < 0.05). Results: At the lobar level, significant airway volume enlargement in SLE-ILD patients was observed in the right upper lobe (p=0.009) and left upper lobe (p=0.039) compared to SLE-non-ILD. At the segmental level, significant differences were found in segments including R1 (p=0.016), R3 (p<0.001), and L3 (p=0.038), with the most marked changes in the upper lung zones, while lower zones showed non-significant trends. Conclusion: Our study demonstrates that an automated deep learning-based approach can effectively quantify airway volumes on HRCT scans and reveal significant, region-specific airway dilation in patients with SLE-ILD compared to those without ILD. The pattern of involvement, predominantly affecting the upper lobes and specific segments, highlights a distinct topographic phenotype of SLE-ILD and implicates airway structural alterations as a potential biomarker for disease presence. This AI-powered quantitative imaging biomarker holds promise for enhancing the early detection and monitoring of ILD in the SLE population, ultimately contributing to more personalized patient management.
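The group comparison described relies on the standard pooled two-sample t statistic; a minimal sketch with synthetic airway volumes (the numbers are illustrative, not patient data, and the study's p-values additionally require the t-distribution CDF):

```python
import math

def two_sample_t(sample_a, sample_b):
    """Pooled (equal-variance) two-sample Student's t statistic.
    Returns (t, degrees_of_freedom)."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    t = (ma - mb) / math.sqrt(sp2 * (1.0 / na + 1.0 / nb))
    return t, na + nb - 2

# Illustrative (synthetic) volumes in mL for two small groups
ild = [12.0, 13.0, 14.0, 13.5]
non_ild = [10.0, 11.0, 10.5, 11.5]
t_stat, dof = two_sample_t(ild, non_ild)
```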
We present a formulation of an optimal control problem for a two-dimensional diffusion process governed by a Fokker-Planck equation to achieve a nonequilibrium steady state with a desired circulation while accelerating convergence toward the stationary distribution. To achieve the control objective, we introduce costs for both the probability density function and the flux rotation into the objective functional. We formulate the optimal control problem through dimensionality reduction of the Fokker-Planck equation via eigenfunction expansion, which incurs only a low computational cost. Through numerical simulations, we demonstrate that the proposed optimal control achieves the desired circulation while accelerating convergence to the stationary distribution.
Parallel-wound no-insulation (PW-NI) high-temperature superconducting (HTS) coils significantly reduce charging delay while maintaining excellent self-protection capability, demonstrating great potential for high-field applications. Existing models that couple the T-A formulation with equivalent circuits have demonstrated high accuracy in electromagnetic analysis of PW-NI coils. However, eliminating the computational overhead caused by frequent variable mapping and data exchange between electromagnetic and circuit modules is important for improving computational efficiency, particularly in long-duration transient simulations of large-scale magnets. To address this issue, an extended T-A formulation based on potential-chain recursion, termed PCR-TA, is proposed. By directly embedding inter-tape current sharing and radial current bypass behaviors into the finite-element framework, this method computes the transient electromagnetic response of PW-NI coils without requiring an explicit equivalent circuit model. Building upon it, a multi-scale approach is further developed for large-scale PW-NI coils. The validity of the proposed method and its multi-scale extension is verified through comparisons with experimental measurements and field-circuit coupled modeling results. Comparative analyses demonstrate that the PCR-TA method achieves a speedup of approximately 2.4 over the field-circuit coupled method, whereas its multi-scale extension further increases this speedup to roughly 5.8. Furthermore, the PCR-TA method is extended to model the continuous transition of PW-NI coils from power-supply charging to closed-loop operation. This work provides an efficient method and tool for the electromagnetic modeling of PW-NI coils under both driven and closed-loop operating conditions.
Conventional hybrid beamforming architectures are often compared with one another and with the fully-digital architecture under the same \emph{radiated} antenna power. However, the physically relevant budget is the power injected by the RF-chain outputs into the passive analog RF network, which is then usually transferred to the antenna ports in a contractive (lossy) manner. This issue is especially pronounced for fully-connected splitter--phase-shifter--combiner networks, whose physical power transfer remains contractive even under ideal passive-component assumptions. Motivated by this injected-power viewpoint, this paper proposes a hybrid beamforming architecture based on a programmable unitary RF network. Under ideal passive-component assumptions, all injected RF-chain power reaches the antenna ports without loss. The analog RF network is realized as an \emph{interlaced mixer--phase} architecture consisting of fixed (non-programmable) mixing layers interleaved with programmable diagonal phase-shifting layers. We derive a closed-form digital beamformer and a low-complexity programming method for the analog beamformer, yielding a hybrid precoder that closely matches the fully-digital precoder. Narrowband simulations with continuous and quantized phases, benchmarked against the fully-digital architecture, the physically modeled fully-connected phase-shifter baselines, and an ideal-lossless Butler/DFT beam-selection baseline under equal total injected RF-chain power, show that the continuous-phase and 6-bit realizations of the proposed architecture are nearly indistinguishable from the fully-digital benchmark and achieve significant gains over the baseline hybrid beamforming architectures.
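The interlaced mixer-phase structure alternates a fixed unitary mixing layer with programmable diagonal phase layers; since each factor is unitary, so is their product, which is why all injected power reaches the antenna ports under ideal passive components. A numerical sketch using a normalized DFT as the (assumed, illustrative) mixing layer:

```python
import numpy as np

def interlaced_mixer_phase(phases):
    """Build the analog network matrix F = M D_L ... M D_1, alternating
    a fixed unitary mixing layer M (here a normalized DFT matrix) with
    programmable diagonal phase layers D_l = diag(exp(j*theta_l)).
    A product of unitary factors is itself unitary."""
    n = phases.shape[1]
    mix = np.fft.fft(np.eye(n)) / np.sqrt(n)   # fixed unitary mixing layer
    f = np.eye(n, dtype=complex)
    for theta in phases:                        # one row of phases per layer
        f = mix @ np.diag(np.exp(1j * theta)) @ f
    return f

rng = np.random.default_rng(1)
F = interlaced_mixer_phase(rng.uniform(0, 2 * np.pi, size=(3, 4)))

# Unitarity check: injected power is preserved end to end
err = np.linalg.norm(F.conj().T @ F - np.eye(4))
```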
Recently a novel multi-antenna architecture termed ray antenna array (RAA) was proposed, where several simple uniform linear arrays (sULAs) are arranged in a ray-like structure to enhance communication and sensing performance. By eliminating the need for phase shifters, it also significantly reduces hardware costs. However, RAA is prone to signal blockage and has no elevation angle resolution capability in three-dimensional (3D) scenarios. To address such issues, in this paper we propose a novel spherical directly-connected antenna array (DCAA), which is composed of multiple simple uniform planar arrays (sUPAs) placed over a spherical surface. All elements within each sUPA are directly connected. Compared to conventional arrays with hybrid analog/digital beamforming (HBF), DCAA significantly reduces hardware cost, improves energy focusing, and provides superior and uniform angular resolution for 3D space. These advantages make DCAA particularly suitable for integrated sensing and communication (ISAC) in low-altitude unmanned aerial vehicle (UAV) swarm scenarios, where targets may frequently move away from the boresight of traditional arrays, degrading both communication and sensing performance. Simulation results demonstrate that the proposed spherical DCAA achieves significantly better angular resolution and higher spectral efficiency than conventional arrays with HBF, highlighting its strong potential for UAV swarm ISAC systems.
A novel generative site-specific beamforming (GenSSBF) approach, termed fast beam-brainstorm (F-BBS), is proposed to address the practical bottlenecks of slow beam generation and fixed channel probing lengths in existing GenSSBF. To accelerate beam generation, F-BBS utilizes a two-stage distillation strategy that learns an average velocity field, instead of an instantaneous one, to guide the beam generative process. This strategy enables larger generation steps, realizing few-step or even one-step beam generation. Furthermore, to accommodate flexible channel probing lengths, a stochastic masking mechanism and a beam index-aware masked condition encoder are proposed, enabling a single trained model to operate with variable-length channel probing observations without retraining. Therefore, F-BBS achieves the fast generation of high-fidelity communication beams from coarse and variable-length channel probing feedback, i.e., reference signal received power (RSRP), from user equipments. Simulation results on accurate ray-tracing datasets show that 1) F-BBS achieves comparable performance while reducing the beam generation cost by over 90% compared with diffusion-based GenSSBF solutions, 2) F-BBS realizes robust performance across variable channel probing lengths, and 3) F-BBS offers a desirable trade-off between beamforming gain and beam probing overhead.
Learning-based model predictive control (MPC) can enhance control performance by correcting for model inaccuracies, enabling more precise state trajectory predictions than traditional MPC. A common approach is to model unknown residual dynamics as a Gaussian process (GP), which leverages data and also provides an estimate of the associated uncertainty. However, the high computational cost of online learning poses a major challenge for real-time GP-MPC applications. This work presents an efficient implementation of an approximate spatio-temporal GP model, offering online learning at constant computational complexity. It is optimized for GP-MPC, where it enables improved control performance by learning more accurate system dynamics online in real-time, even for time-varying systems. The performance of the proposed method is demonstrated by simulations and hardware experiments in the exemplary application of autonomous miniature racing.
This paper presents a hierarchical decision-making framework for autonomous systems operating under uncertainty, demonstrated through autonomous driving as a representative application. Surrounding agents are modeled using Hybrid Markov Decision Processes (HMDPs) that jointly capture maneuver-level and dynamic-level uncertainties, enabling the multi-modal environmental prediction. The ego agent is modeled using a separate HMDP and integrated into a Model Predictive Control (MPC) framework that unifies maneuver selection with dynamic feasibility within a single optimization. A set of joint chance constraints serves as the bridge between environmental prediction and optimization, incorporating multi-modal environment predictions into the MPC formulation and ensuring safety across all plausible interaction scenarios. The proposed framework provides theoretical guarantees on recursive feasibility and asymptotic stability, and its benefits in terms of safety and efficiency are validated through comprehensive evaluations in highway and urban environments, together with comparisons against a rule-based baseline.
This paper examines defending the power grid against load-altering attacks using electric vehicle charging. It proposes to preventively segment the cyber infrastructure that charging station operators (CSOs) use to communicate with and control their charging stations, thereby limiting the impact of successful cyber-attacks. Using real German charging station data and a reconstructed transmission grid model, a threat analysis shows that without segmentation, the successful hack of just two CSOs can overload two transmission grid branches, exceeding the N-1 security margin and necessitating defense measures. A novel defense design problem is then formulated that minimizes the number of imposed segmentations while bounding the number of branch overloads under worst-case attacks. The resulting IP-MILP bi-level problem can be solved with an exact column and constraint generation algorithm and with heuristics for fast computation on large-scale instances. For the near-real-world Germany case, the applicability of the heuristics is demonstrated and validated under relevant load and dispatch scenarios. It is found that the simple scheme of segmenting CSOs evenly by their installed capacity leads to only 23% more segments compared to the heuristic optimization result, suggesting potential relevance as a regulatory measure.
This work investigates antenna coding optimization to enhance the channel capacity of single-input single-output orthogonal frequency division multiplexing (SISO-OFDM) systems empowered by highly reconfigurable pixel antennas. We first introduce the model for pixel antenna empowered SISO-OFDM systems using a beamspace channel representation. We next formulate the problem of maximizing the channel capacity by jointly optimizing the antenna coding and the power allocation across subcarriers, and solve it via Successive Exhaustive Boolean Optimization (SEBO) and the water-filling (WF) algorithm. To reduce computational complexity, a codebook-based approach is also proposed for antenna coding optimization. Simulation results show that the channel capacity of SISO-OFDM systems across all considered signal-to-noise-ratio (SNR) regions can be enhanced by leveraging pixel antennas, compared to using a conventional antenna with a fixed configuration. This result demonstrates the effectiveness of antenna coding technology empowered by pixel antennas in enhancing SISO-OFDM systems.
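The water-filling step of the joint optimization is the classical closed-form power allocation across subcarriers. As a minimal NumPy sketch (this covers only the WF sub-problem, not the paper's SEBO antenna-coding step; per-subcarrier gain-to-noise ratios are assumed given):

```python
import numpy as np

def water_filling(gains, total_power):
    """Water-filling power allocation across subcarriers.

    gains: per-subcarrier gain-to-noise ratios g_k, so the rate term is
    log2(1 + p_k * g_k). Returns powers p_k maximizing the sum rate
    subject to sum(p_k) = total_power, p_k >= 0.
    """
    g = np.asarray(gains, dtype=float)
    order = np.argsort(-g)            # strongest subcarriers first
    g_sorted = g[order]
    # Find the water level mu: p_k = mu - 1/g_k on the active subcarriers.
    for n_active in range(len(g), 0, -1):
        mu = (total_power + np.sum(1.0 / g_sorted[:n_active])) / n_active
        if mu - 1.0 / g_sorted[n_active - 1] >= 0:  # weakest active carrier still gets power
            break
    p_sorted = np.maximum(mu - 1.0 / g_sorted, 0.0)
    p = np.empty_like(p_sorted)
    p[order] = p_sorted
    return p
```

For example, with gains `[2.0, 1.0]` and unit total power, the stronger subcarrier receives 0.75 and the weaker 0.25; as the budget shrinks, weak subcarriers are switched off entirely.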
The maximal positively invariant (MPI) set is obtained through a backward reachability procedure involving the iterative computation and intersection of predecessor sets under state and input constraints. However, standard static feedback synthesis may place some of the closed-loop eigenvalues at zero, leading to rank-deficient dynamics. This affects the MPI computation by inducing projections onto lower-dimensional subspaces during intermediate steps. By exploiting the Schur decomposition, we explicitly address this singular case and propose a robust algorithm that computes the MPI set in both polyhedral and constrained-zonotope representations.
Integrated sensing and communication (ISAC) has emerged as a key paradigm for next-generation wireless systems, which allows wireless resources to be used for data transmission and target sensing simultaneously. In this paper, multi-user collaborative target detection in the uplink ISAC system is investigated. To incorporate the target sensing functionality, the system relies on the reuse of uplink signals from the communication users. Specifically, we analyze an uplink multi-user single-input multiple-output (MU-SIMO) communication system with bistatic sensing. Using the channel statistics, we formulate the problem of joint optimal pilot and data power allocation to maximize the uplink ergodic sum rate while meeting communication and sensing quality-of-service (QoS) requirements. To address this non-convex problem, we propose an alternating optimization (AO)-based iterative framework, where the joint power allocation problem is decomposed into two sub-problems. Specifically, the pilot power allocation is optimized using a penalty dual decomposition (PDD)-based gradient ascent algorithm, while the data power allocation is solved via successive convex approximation (SCA). Once the long-term power allocation is determined, the base station (BS) estimates the instantaneous channels using a minimum mean-squared error (MMSE) estimator. Subsequently, based on the estimated instantaneous channel state information (CSI), the receive beamforming for communication users is optimized via another SCA-based method to maximize the sum rate. Meanwhile, the optimal receive beamforming for the target is obtained in closed-form through eigenvalue decomposition (EVD).
Polarization diversity offers significant flexibility for enhancing integrated sensing and communications (ISAC). However, conventional dual-polarized arrays typically require dedicated radio-frequency (RF) chains for each polarization branch, leading to prohibitive hardware costs. To address this, polarization-reconfigurable (PR) antennas have emerged as a cost-effective alternative, enabling polarization flexibility with reduced hardware complexity by driving two polarization branches with a single RF chain. In this paper, we investigate fairness-aware beamforming for ISAC systems equipped with PR antennas. Specifically, we jointly optimize the transmit beamforming and PR control coefficients to maximize the minimum signal-to-interference-plus-noise ratio (SINR) for communication users and the minimum signal-to-clutter-plus-noise ratio (SCNR) for sensing targets. The resulting problem is highly nonconvex and nonsmooth due to the strong coupling among optimization variables in the max-min objective, as well as the nonconvex spherical constraints imposed by the PR antennas. To tackle this, we derive an equivalent smooth reformulation by introducing auxiliary variables and transforming the minimum operators into inequality constraints. Subsequently, we develop an exact-penalty product Riemannian manifold gradient descent (EP-PRMGD) algorithm, which integrates an exact penalty method with Riemannian optimization to guarantee convergence to a Karush-Kuhn-Tucker (KKT) point. Numerical results demonstrate that the proposed PR-enabled ISAC scheme achieves performance comparable to dual-polarized architectures while utilizing only half the RF chains, thereby validating its effectiveness in balancing fairness and hardware efficiency.
Dynamic pricing is commonly used to regulate congestion in shared service systems. This paper is motivated by the fact that when heterogeneous user groups (in terms of price responsiveness) are present, conventional monotonic pricing can lead to unfair outcomes by disproportionately excluding price-elastic users, particularly under high or uncertain demand. The paper's contributions are twofold. First, we show that when fairness is imposed as a hard state constraint, the optimal (revenue-maximizing) pricing policy is generally non-monotonic in demand. This structural result departs fundamentally from standard surge pricing rules and reveals that price reduction under heavy load may be necessary to maintain equitable access. Second, we address the problem that price elasticity among heterogeneous users is unobservable. To solve it, we develop a robust dynamic pricing and admission control framework that enforces resource capacity and fairness constraints for all user type distributions consistent with aggregate measurements. By integrating integral High Order Control Barrier Functions (iHOCBFs) with a worst-case robust optimization framework, we obtain a controller that guarantees forward invariance of safety and fairness constraints while optimizing revenue. Numerical experiments demonstrate improved fairness and revenue performance relative to monotonic surge pricing policies.
This paper presents results from a vehicle-to-vehicle channel measurement campaign conducted in the millimeter-wave (MMW) frequency bands at center frequencies of 60 GHz and 80 GHz, each with a bandwidth of 2 GHz. The measurements were performed in a dynamic oncoming-vehicle scenario using a time-domain channel sounder with high-resolution data acquisition. Power delay profiles were extracted to study the temporal evolution of multipath components, and the root mean square (RMS) delay spread was analyzed to characterize the temporal dispersion of the channel. The results demonstrate differences between the two frequency bands. At 60 GHz, the RMS delay spread is well approximated by a Gaussian distribution with a higher median value, while at 80 GHz it follows a lognormal distribution with a lower median. Furthermore, the number of resolvable multipath components was found to be nearly twice as high at 60 GHz compared to 80 GHz, highlighting the impact of antenna beamwidth and frequency-dependent propagation mechanisms.
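The RMS delay spread analyzed here is the standard second central moment of the power delay profile. A minimal sketch, assuming a linear-scale PDP sampled on known delay bins:

```python
import numpy as np

def rms_delay_spread(pdp, delays):
    """RMS delay spread of a power delay profile.

    pdp: linear-scale powers per delay bin; delays: bin delays in seconds.
    Returns the square root of the power-weighted second central moment.
    """
    pdp = np.asarray(pdp, dtype=float)
    delays = np.asarray(delays, dtype=float)
    p = pdp / pdp.sum()                       # normalize to a probability mass
    mean_delay = np.sum(p * delays)           # first moment (mean excess delay)
    return np.sqrt(np.sum(p * (delays - mean_delay) ** 2))
```

For instance, two equal-power taps spaced 2 ns apart give an RMS delay spread of 1 ns.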
Descriptor systems arise naturally in applications governed by algebraic constraints, such as power networks and chemical processes. The singular system matrix in descriptor systems may introduce non-causal dynamics, where the current output depends on future inputs and, in the presence of stochastic process and measurement noise, on future noise realizations as well. This paper proposes a data-driven predictive control framework for stochastic descriptor systems that accommodates algebraic constraints and impulsive modes without explicit system identification. A causal innovation representation is constructed by augmenting the system state with a noise buffer that encapsulates the non-causal stochastic interactions, transforming the descriptor system into an equivalent proper state-space form. Willems' Fundamental Lemma is then extended to the innovation form with fully data-verifiable conditions. Building on these results, a practical Inno-DeePC algorithm is developed that integrates offline innovation estimation and online predictive control. Numerical experiments on a direct-current (DC) microgrid demonstrate the effectiveness of the proposed approach for stochastic descriptor systems.
This paper presents a unified decision-making framework that integrates Hybrid Markov Decision Processes (HMDPs) with Model Predictive Control (MPC), augmented by velocity-dependent safety margins and a prediction-aware hysteresis mechanism. Both the ego and surrounding vehicles are modeled as HMDPs, allowing discrete maneuver transitions and kinematic evolution to be jointly considered within the MPC optimization. Safety margins derived from the Intelligent Driver Model (IDM) adapt to traffic context but vary with speed, which can cause oscillatory decisions and velocity fluctuations. To mitigate this, we propose a frozen-release hysteresis mechanism with distinct trigger and release thresholds, effectively enlarging the reaction buffer and suppressing oscillations. Decision continuity is further safeguarded by a two-layer recovery scheme: a global bounded relaxation tied to IDM margins and a deterministic fallback policy. The framework is evaluated through a case study, an ablation against a no-hysteresis baseline, and large-scale randomized experiments across 18 traffic settings. Across 8,050 trials, it achieves a collision rate of only 0.05%, with 98.77% of decisions resolved by nominal MPC and minimal reliance on relaxation or fallback. These results demonstrate the robustness and adaptability of the proposed decision-making framework in heterogeneous traffic conditions.
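The trigger/release structure of a frozen-release hysteresis can be illustrated with a toy latch. This is a generic sketch with hypothetical thresholds, not the paper's MPC-integrated mechanism: once the monitored signal exceeds the trigger threshold the decision is frozen, and it is released only when the signal falls below a strictly lower release threshold, so values hovering near a single cutoff cannot cause oscillation.

```python
class FrozenReleaseHysteresis:
    """Toy hysteresis latch with distinct trigger and release thresholds."""

    def __init__(self, trigger, release):
        assert release < trigger, "release must sit strictly below trigger"
        self.trigger = trigger
        self.release = release
        self.frozen = False

    def step(self, value):
        """Update the latch with the current signal; return frozen state."""
        if not self.frozen and value >= self.trigger:
            self.frozen = True          # latch the decision
        elif self.frozen and value <= self.release:
            self.frozen = False         # release only below the lower threshold
        return self.frozen
```

With `trigger=1.0` and `release=0.5`, the input sequence 0.4, 1.1, 0.8, 0.8, 0.4 yields frozen states False, True, True, True, False: the 0.8 samples between the thresholds leave the latch unchanged instead of toggling it.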
The aim of this paper is to provide a comparison of channel characteristics for vehicle-to-vehicle (V2V) communication at 60 GHz and 80 GHz frequency bands in a high-mobility scenario where two vehicles pass each other in opposite directions. The study is based on measurements of the time-varying channel impulse response capturing the behavior of multi-path propagation during vehicle motion. By directly comparing these two frequency bands under identical measurement conditions, we attempt to quantify the differences in power delay profile, root mean square (RMS) delay spread, RMS Doppler spread, and intervals (regions) of stationarity in the time domain. The results show that these bands do not differ significantly, but the 80 GHz band exhibits somewhat greater RMS delay spread and RMS Doppler spread when calculated over the entire delay-Doppler spectrum, and conversely exhibits shorter stationarity regions. However, the characteristics of the measurement setup in the two bands and their influence on comparative measurements must be considered. In particular, attention must be paid to the impact of antennas.
Large audio language models (LALMs) can answer questions about speech, music, and environmental sounds, yet their internal reasoning is largely opaque and difficult to validate. We describe TalTech's solution to the Agent Track of the Interspeech 2026 Audio Reasoning Challenge, in which systems are evaluated on reasoning process quality, specifically the factual accuracy, logical soundness, and completeness of their reasoning chains. Our multi-source ensemble pipeline uses two LALMs that generate independent observations, while a separate text-only reasoning model cross-checks these against outputs from 25 acoustic tools organized into reliability tiers. By grounding every inference step in explicit, reliability-tagged evidence, the system produces dense, verifiable reasoning chains. Our system ranked first in the challenge, outperforming all competing systems by a wide margin on the challenge's reasoning-quality metric.
Physics-informed machine learning surrogates are increasingly explored to accelerate dynamic simulation of generators, converters, and other power grid components. The key question, however, is not only whether a surrogate matches a stand-alone component model on average, but whether it remains accurate after insertion into a differential-algebraic simulator, where the surrogate outputs enter the algebraic equations coupling the component to the rest of the system. This paper formulates that in-simulator use as a verification and validation (V&V) problem. A finite-horizon bound is derived that links allowable component-output error to algebraic-coupling sensitivity, dynamic error amplification, and the simulation horizon. Two complementary settings are then studied: model-based verification against a reference component solver, and data-based validation through conformal calibration of the component-output variables exchanged with the simulator. The framework is general, but the case study focuses on physics-informed neural-network surrogates of second-, fourth-, and sixth-order synchronous-machine models. Results show that good stand-alone surrogate accuracy does not by itself guarantee accurate in-simulator behavior, that the largest discrepancies concentrate in stressed operating regions, and that small equation residuals do not necessarily imply small state-trajectory errors.
During conversational interactions, humans subconsciously engage in concurrent thinking while listening to a speaker. Although this internal cognitive processing may not always manifest as explicit linguistic structures, it is instrumental in formulating high-quality responses. Inspired by this cognitive phenomenon, we propose a novel Full-duplex LAtent and Internal Reasoning method named FLAIR that conducts latent thinking simultaneously with speech perception. Unlike conventional "thinking" mechanisms in NLP, which require post-hoc generation, our approach aligns seamlessly with spoken dialogue systems: during the user's speaking phase, it recursively feeds the latent embedding output from the previous step into the next step, enabling continuous reasoning that strictly adheres to causality without introducing additional latency. To enable this latent reasoning, we design an Evidence Lower Bound-based objective that supports efficient supervised finetuning via teacher forcing, circumventing the need for explicit reasoning annotations. Experiments demonstrate the effectiveness of this think-while-listening design, which achieves competitive results on a range of speech benchmarks. Furthermore, FLAIR robustly handles conversational dynamics and attains competitive performance on full-duplex interaction metrics.
We establish a canonical decomposition of the infinitesimal Koopman generator of any port-Hamiltonian (pH) system into skew-adjoint (energy-conserving), positive-semidefinite (dissipative), and input-port components, proving that the generator satisfies an energy-dissipation inequality on a dense subdomain of $L^2(\mu)$ for any invariant measure $\mu$ satisfying a mild joint-invariance condition stated in Theorem 1. This infinite-dimensional splitting carries over exactly to finite-dimensional Galerkin approximations, yielding structure-constrained surrogate models that provably inherit passivity with a quadratic storage function in the lifted observable space. Leveraging this structure, we design passivity-based controllers directly in the lifted space and establish asymptotic stability of the lifted closed-loop system via LaSalle's invariance principle under a mild detectability condition. For linear pH systems, the decomposition recovers the true pH matrices exactly, confirming that the structural constraints arise naturally from the operator theory rather than being imposed by hand. The framework unifies port-Hamiltonian systems theory and Koopman spectral methods, providing a rigorous operator-theoretic foundation for energy-consistent lifting of nonlinear pH dynamics.
This paper proposes a real-time control policy for cascaded hydropower systems that incorporates decision-dependent uncertainty (DDU) to capture the coupling of streamflow uncertainties across the network. The framework jointly models exogenous forecast errors and endogenous uncertainty propagation, explicitly characterizing the dependence between upstream releases and downstream inflow variability through a heteroskedastic variance model conditioned on past errors, variance, and control actions. We formulate a joint chance-constrained optimization problem to ensure reliable system operation under uncertainty, and develop a tractable supporting hyperplane algorithm that enables explicit and adaptive risk allocation under DDU. We establish convergence of the proposed method and show that it recovers the Bonferroni approximation under steady-state conditions. A randomized case study based on Columbia River data demonstrates that the proposed framework improves both energy generation and reservoir reliability by accounting for DDU. Sensitivity analyses on drought severity and model parameters further highlight the value of adaptive risk allocation for resilient hydropower operations.
The recently proposed affine frequency division multiplexing (AFDM) waveform can adjust the time-frequency diversity gain by tuning its chirp-rate parameter. Therefore, it is a candidate waveform for doubly-selective channels. This letter reveals that the modulation-symbol-domain shaping pulse of AFDM is generated by a convolution-like operation between the time-domain and frequency-domain shaping pulses, indicating that the modulation-symbol-domain pulse shaping of AFDM can be achieved by separately shaping in the time domain and frequency domain. Based on this, this letter presents an AFDM modulation-symbol-domain pulse shaping transceiver which can achieve Nyquist pulse shaping, and provides the corresponding input-output relationship. Numerical results demonstrate the effectiveness of the proposed transceiver in improving the channel sparsity and reducing pilot-to-data interference.
From a Bayesian perspective, score-based diffusion solves inverse problems through joint inference, embedding the likelihood with the prior to guide the sampling process. However, this formulation fails to explain its practical behavior: the prior offers limited guidance, while reconstruction is largely driven by the measurement-consistency term, leading to an inference process that is effectively decoupled from the diffusion dynamics. To clarify this structure, we reinterpret the role of diffusion in inverse problem solving as an initialization stage within an expectation-maximization (EM)-style framework, where the diffusion stage and the data-driven refinement are fully decoupled. We introduce DAPS++, which allows the likelihood term to guide inference more directly while maintaining numerical stability and providing insight into why unified diffusion trajectories remain effective in practice. By requiring fewer function evaluations (NFEs) and measurement-optimization steps, DAPS++ achieves high computational efficiency and robust reconstruction performance across diverse image restoration tasks.
Foundation models have recently extended beyond natural language and vision to time-series domains, including physiological signals. However, progress in electrodermal activity (EDA) modeling is hindered by the absence of large-scale, curated, and openly accessible datasets. EDA reflects sympathetic nervous system activity and is widely used to infer cognitive load, stress, and engagement. Yet very few wearable devices provide continuous, unobtrusive sensing, and the only large-scale archive to date is proprietary. To address this gap, we compile EDAMAME, a collection of EDA traces from 24 public datasets, comprising more than 25,000 hours from 634 users. Using this resource, we train UME, the first dedicated foundation model for EDA. In eight out of ten scenarios, UME outperforms baselines and matches generalist time-series foundation models while using 20x fewer computational resources. Our findings, however, also highlight the intrinsic challenges of EDA modeling, motivating further research to unlock its full potential. All datasets, model weights, and code are released to support further research.
Inertial measurement unit-based online handwriting recognition enables the recognition of input signals collected across different writing surfaces but remains challenged by uneven character distributions and inter-writer variability. In this work, we systematically investigate two strategies to address these issues: sub-word tokenization and concatenation-based data augmentation. Our experiments on the OnHW-Words500 dataset reveal a clear dichotomy between handling inter-writer and intra-writer variance. On the writer-independent split, structural abstraction via bigram tokenization significantly improves generalization to unseen writing styles, reducing the word error rate (WER) from 15.40% to 12.99%. In contrast, on the writer-dependent split, tokenization degrades performance due to vocabulary distribution shifts between the training and validation sets. Instead, our proposed concatenation-based data augmentation acts as a powerful regularizer, reducing the character error rate by 34.5% and the WER by 25.4%. Further analysis shows that short, low-level tokens benefit model performance and that the performance gain from concatenation-based data augmentation surpasses that achieved by proportionally extended training. These findings reveal a clear variance-dependent effect: sub-word tokenization primarily mitigates inter-writer stylistic variability, whereas concatenation-based data augmentation effectively compensates for intra-writer distributional sparsity.
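Concatenation-based augmentation of this kind can be sketched generically: synthetic words are built by joining the sensor sequences and labels of two randomly chosen training samples, extending coverage of long sequences and rare character transitions. The helper below is illustrative only, not the authors' implementation:

```python
import random

def concat_augment(samples, labels, n_new, rng=None):
    """Concatenation-based data augmentation (illustrative sketch).

    samples: list of sensor sequences (each a list of frames);
    labels: parallel list of transcription strings.
    Returns n_new synthetic (sequence, label) pairs, each formed by
    concatenating two randomly chosen training samples.
    """
    rng = rng or random.Random(0)
    aug_x, aug_y = [], []
    for _ in range(n_new):
        i = rng.randrange(len(samples))
        j = rng.randrange(len(samples))
        aug_x.append(samples[i] + samples[j])   # sequence concatenation
        aug_y.append(labels[i] + labels[j])     # matching label concatenation
    return aug_x, aug_y
```

In a real pipeline one would also resample or smooth the junction between the two sequences; this sketch keeps only the structural idea.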
Model predictive control offers a powerful framework for managing constrained systems, but its repeated online optimization can become computationally prohibitive. Multiparametric programming addresses this challenge by precomputing optimal solutions offline, enabling real-time control through simple function evaluation. While extensively developed for discrete-time systems, this approach suffers from combinatorial growth in solution complexity as discretization is refined. This paper presents a systematic continuous-time multiparametric framework for linear-quadratic optimal control that directly solves Pontryagin's optimality conditions without discretization artifacts. Through two illustrative examples, we demonstrate that continuous-time formulations yield solutions with substantially fewer critical regions than their discrete-time counterparts. Beyond this reduction in partition complexity, the continuous-time approach provides deeper insight into system dynamics by explicitly identifying switching times and eliminating discretization artifacts that obscure the true structure of optimal control policies. Knowledge of the switching structure also accelerates online optimization methods by providing analytical information about the solution topology. Clear step-by-step algorithms are provided for identifying switching structures, computing parametric switching times, and constructing critical regions, making the continuous-time framework accessible for practical implementation.
Reliable and interpretable automated assessment of second-language (L2) speech remains a central challenge, as large speech-language models (SpeechLLMs) often struggle to align with the nuanced variability of human raters. To address this, we introduce a rubric-guided reasoning framework that explicitly encodes multi-aspect human assessment criteria (accuracy, fluency, and prosody) while calibrating model uncertainty to capture natural rating variability. We fine-tune the Qwen2-Audio-7B-Instruct model using multi-rater human judgments and develop an uncertainty-calibrated regression approach supported by conformal calibration for interpretable confidence intervals. Our Gaussian uncertainty modeling and conformal calibration approach achieves the strongest alignment with human ratings, outperforming regression and classification baselines. The model reliably assesses fluency and prosody while highlighting the inherent difficulty of assessing accuracy. Together, these results demonstrate that rubric-guided, uncertainty-calibrated reasoning offers a principled path toward trustworthy and explainable SpeechLLM-based speech assessment.
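Conformal calibration for regression commonly follows the split-conformal recipe: compute absolute residuals on a held-out calibration set, take a finite-sample-corrected quantile, and widen every test prediction by that amount. A minimal sketch of that standard recipe (not necessarily the paper's exact variant), assuming calibration-set point predictions and reference ratings are available:

```python
import numpy as np

def conformal_interval(cal_preds, cal_labels, test_pred, alpha=0.1):
    """Split-conformal prediction interval around a point prediction.

    Under exchangeability, the returned interval covers the true label
    with marginal probability >= 1 - alpha.
    """
    residuals = np.abs(np.asarray(cal_labels, float) - np.asarray(cal_preds, float))
    n = len(residuals)
    # Finite-sample corrected quantile level: ceil((n+1)(1-alpha)) / n.
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(residuals, q_level, method="higher")
    return test_pred - q, test_pred + q
```

The interval width is one global number here; per-example widths (e.g., scaling residuals by a predicted standard deviation, as Gaussian uncertainty modeling suggests) follow the same pattern with normalized residuals.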
The automated piano enables note densities, polyphony, and register changes far beyond human physical limits, yet the three dominant traditions for composing such textures--Nancarrow's tempo canons, Xenakis's stochastic distributions, and L-system grammars--have developed in isolation. This paper presents Amanous, a hardware-aware composition system for Yamaha Disklavier that unifies these methodologies through distribution-switching: L-system symbols select distinct distributional regimes rather than merely modulating parameters within a fixed family. Four contributions are reported. (1) A four-layer architecture (symbolic, parametric, numeric, physical) produces statistically distinct sections with large effect sizes (d = 3.70-5.34), validated by per-layer degradation and ablation experiments. (2) A hardware abstraction layer formalizes velocity-dependent latency and key reset constraints, keeping superhuman textures within the Disklavier's actuable envelope. (3) A density sweep reveals a computational saturation transition at 24-30 notes/s (bootstrap 95% CI: 23.3-50.0), beyond which single-domain melodic metrics lose discriminative power and cross-domain coupling becomes necessary. (4) A convergence point calculus operationalizes tempo-canon geometry as a control interface, enabling convergence events to trigger distribution switches linking macro-temporal structure to micro-level texture. All results are computational; a psychoacoustic validation protocol is proposed for future work. The pipeline has been deployed on a physical Disklavier, demonstrating algorithmic self-consistency and sub-millisecond software precision. Supplementary materials (Excerpts 1-4): this https URL. Source code: this https URL.
Neural audio codecs discretize speech via residual vector quantization (RVQ), forming a coarse-to-fine hierarchy across quantizers. While codec models have been explored for representation learning, their discrete structure remains underutilized in speech deepfake detection. In particular, different quantization levels capture complementary acoustic cues, where early quantizers encode coarse structure and later quantizers refine residual details that reveal synthesis artifacts. Existing systems either rely on continuous encoder features or ignore this quantizer-level hierarchy. We propose a hierarchy-aware representation learning framework that models quantizer-level contributions through learnable global weighting, enabling structured codec representations aligned with forensic cues. Keeping the speech encoder backbone frozen and updating only 4.4% additional parameters, our method achieves relative EER reductions of 46.2% on ASVspoof 2019 and 13.9% on ASVspoof5 over strong baselines.
The Inaugural Music Source Restoration (MSR) Challenge targets the recovery of original, unprocessed stems from fully mixed and mastered music. Unlike conventional music source separation, MSR requires reversing complex production processes such as equalization, compression, reverberation, and other real-world degradations. To address MSR, we propose a two-stage system. First, an ensemble of pre-trained separation models produces preliminary source estimates. Then a set of pre-trained BSRNN-based restoration models performs targeted reconstruction to refine these estimates. On the official MSR benchmark, our system surpasses the baselines on all metrics, ranking second among all submissions. The code is available at this https URL
Multi-uncrewed aerial vehicle (UAV) cooperative perception has emerged as a promising paradigm for diverse low-altitude economy applications, where complementary multi-view observations are leveraged to enhance perception performance via wireless communications. However, the massive visual data generated by multiple UAVs poses significant challenges in terms of communication latency and resource efficiency. To address these challenges, this paper proposes a communication-efficient cooperative perception framework, termed Base-Station-Helped UAV (BHU), which reduces communication overhead while enhancing perception performance. Specifically, we employ a Top-K selection mechanism to identify the most informative pixels from UAV-captured RGB images, enabling sparsified visual transmission with reduced data volume and latency. The sparsified images are transmitted to a ground server via multi-user MIMO (MU-MIMO), where a Swin-large-based MaskDINO encoder extracts bird's-eye-view (BEV) features and performs cooperative feature fusion for ground vehicle perception. Furthermore, we develop a diffusion model-based deep reinforcement learning (DRL) algorithm to jointly select cooperative UAVs, sparsification ratios, and precoding matrices, achieving a balance between communication efficiency and perception utility. Simulation results on the Air-Co-Pred dataset demonstrate that, compared with traditional CNN-based BEV fusion baselines, the proposed BHU framework improves perception performance by over 5% while reducing communication overhead by 85%, providing an effective solution for multi-UAV cooperative perception under resource-constrained wireless environments.
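The Top-K selection step can be illustrated with a magnitude-based toy version. The paper's informativeness criterion is presumably learned, so pixel magnitude below is only a stand-in; the sketch shows the sparsification mechanics, keeping a fraction of pixels and zeroing the rest to cut transmitted data volume:

```python
import numpy as np

def topk_sparsify(image, ratio):
    """Keep only the top-K highest-magnitude pixels; zero out the rest.

    image: 2-D (or flattened) array of pixel values;
    ratio: fraction of pixels to retain, 0 < ratio <= 1.
    """
    flat = image.reshape(-1)
    k = max(1, int(round(ratio * flat.size)))
    # Indices of the k largest-magnitude entries via partial selection (O(n)).
    keep = np.argpartition(np.abs(flat), -k)[-k:]
    out = np.zeros_like(flat)
    out[keep] = flat[keep]
    return out.reshape(image.shape)
```

In practice only the kept (index, value) pairs would be transmitted, so the payload scales with `ratio` rather than the full image size.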
Mobile Edge Computing (MEC) technology has been introduced to enable cloud computing at the edge of the network in order to help resource-limited mobile devices with time-sensitive data processing tasks. In this paradigm, mobile devices can offload their computationally heavy tasks to more efficient nearby MEC servers via wireless communication. Consequently, the main focus of research on the subject has been the development of efficient offloading schemes, leaving the privacy of mobile users aside. While blockchain technology is used as the trust mechanism for secure data sharing, the privacy issues arising from wireless communication, namely usage-pattern and location privacy, are the centerpiece of this work. The effects of these privacy concerns on the task-offloading Markov Decision Process (MDP) are addressed, and the MDP is solved using a Deep Recurrent Q-Network (DRQN). Numerical simulations are presented to show the effectiveness of the proposed method.
The rapid growth of electric vehicles (EVs) requires more effective charging infrastructure planning. Infrastructure layout not only determines deployment cost, but also reshapes charging behavior and influences overall system performance. In addition, destination charging and en-route charging represent distinct charging regimes associated with different power requirements, which may lead to substantially different infrastructure deployment outcomes. This study applies an agent-based modeling framework to generate trajectory-level latent public charging demand under three charging regimes based on a synthetic representation of the Melbourne (Australia) metropolitan area. Two deployment strategies, an optimization-based approach and a utilization-refined approach, are evaluated across different infrastructure layouts. Results show that utilization-refined deployments reduce total system cost, accounting for both infrastructure deployment cost and user generalized charging cost, with the most significant improvement observed under the combined charging regime. In particular, a more effective allocation of AC slow chargers reshapes destination charging behavior, which in turn reduces unnecessary reliance on en-route charging and lowers detour costs associated with en-route charging. This interaction highlights the behavioral linkage between destination and en-route charging regimes and demonstrates the importance of accounting for user response and multiple charging regimes in charging infrastructure planning.
Traditional speaker diarization systems have primarily focused on constrained scenarios such as meetings and interviews, where the number of speakers is limited and acoustic conditions are relatively clean. To explore open-world speaker diarization, we extend this task to the visual media domain, encompassing complex audiovisual programs such as films and TV series. This new setting introduces several challenges, including long-form video understanding, a large number of speakers, cross-modal asynchrony between audio and visual cues, and uncontrolled in-the-wild variability. To address these challenges, we propose Cinematic Speaker Registration & Diarization (CineSRD), a unified multimodal framework that leverages visual, acoustic, and linguistic cues from video, speech, and subtitles for speaker annotation. CineSRD first performs visual anchor clustering to register initial speakers and then integrates an audio language model for speaker turn detection, refining annotations and supplementing unregistered off-screen speakers. Furthermore, we construct and release a dedicated speaker diarization benchmark for visual media that includes Chinese and English programs. Experimental results demonstrate that CineSRD achieves superior performance on the proposed benchmark and competitive results on conventional datasets, validating its robustness and generalizability in open-world visual media settings.
We propose a constricting Control Barrier Function (CBF) framework for prescribed-time control of control-affine systems with input constraints. Given a system starting outside a target safe set, we construct a time-varying safety tube that shrinks from a relaxed set containing the initial condition to the target set at a user-specified deadline. Any controller rendering this tube forward invariant guarantees prescribed-time recovery by construction. The constriction schedule is bounded and tunable by design, in contrast to prescribed-time methods where control effort diverges near the deadline. Feasibility under input constraints reduces to a single verifiable condition on the constriction rate, yielding a closed-form minimum recovery time as a function of control authority and initial violation. The framework imposes a single affine constraint per timestep regardless of state dimension, scaling to settings where grid-based reachability methods are intractable. We validate on a 16-dimensional multi-agent system and a unicycle reach-avoid problem, demonstrating prescribed-time recovery with bounded control effort.
Generalized Nash Equilibrium Problems (GNEPs) arise in many applications, including non-cooperative multi-agent control problems. Although many methods exist for finding generalized Nash equilibria, most of them rely on assuming knowledge of the objective functions or being able to query the best responses of the agents. We present a method for learning solutions of GNEPs only based on querying agents for their preference between two alternative decisions. We use the collected preference data to learn a GNEP whose equilibrium approximates a GNE of the underlying (unknown) problem. Preference queries are selected using an active-learning strategy that balances exploration of the decision space and exploitation of the learned GNEP. We present numerical results on game-theoretic linear quadratic regulation problems, as well as on other literature GNEP examples, showing the effectiveness of the proposed method.
Hamilton-Jacobi (HJ) reachability provides formal safety guarantees for dynamical systems, but solving high-dimensional HJ partial differential equations limits its use in real-time planning. This paper presents a contingency-aware multi-goal navigation framework that integrates learning-based reachability with sampling-based planning in unknown environments. We use a Fourier Neural Operator (FNO) to approximate the solution operator of the Hamilton-Jacobi-Isaacs variational inequality under varying obstacle configurations. We first provide a theoretical under-approximation guarantee on the safe backward reach-avoid set, which enables formal safety certification of the learned reachable sets. Then, we integrate the certified reachable sets with an incremental multi-goal planner, which enforces reachable-set constraints and a recovery policy that guarantees finite-time return to a safe region. Overall, we demonstrate that the proposed framework achieves asymptotically optimal navigation with provable contingency behavior, and validate its performance through real-time deployment on KUKA's youBot in Webots simulation.
Nash equilibria provide a principled framework for modeling interactions in multi-agent decision-making and control. However, many equilibrium-seeking methods implicitly assume that each agent has access to the other agents' objectives and constraints, an assumption that is often unrealistic in practice. This letter studies a class of asymmetric-information two-player constrained games with decoupled feasible sets, in which Player 1 knows its own objective and constraints while Player 2 is available only through a best-response map. For this class of games, we propose an asymmetric projected gradient descent-best response iteration that does not require full mutual knowledge of both players' optimization problems. Under suitable regularity conditions, we establish the existence and uniqueness of the Nash equilibrium and prove global linear convergence of the proposed iteration when the best-response map is exact. Recognizing that best-response maps are often learned or estimated, we further analyze the inexact case and show that, when the approximation error is uniformly bounded by $\varepsilon$, the iterates enter an explicit $O(\varepsilon)$ neighborhood of the true Nash equilibrium. Numerical results on a benchmark game corroborate the predicted convergence behavior and error scaling.
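The letter's asymmetric iteration can be illustrated on a toy two-player quadratic game with box constraints, where Player 2's best response happens to be available in closed form. The game, cost coefficients, and step size below are hypothetical choices for illustration only, not the paper's benchmark game.

```python
import numpy as np

# Hypothetical two-player quadratic game on [0, 1]:
#   Player 1 minimizes f1(x1, x2) = 0.5*x1^2 + 0.5*x1*x2 - x1
#   Player 2 minimizes f2(x2, x1) = 0.5*x2^2 + 0.5*x1*x2 - x2
# The unique Nash equilibrium is x1 = x2 = 2/3.
proj = lambda v: min(max(v, 0.0), 1.0)        # projection onto [0, 1]
grad1 = lambda x1, x2: x1 + 0.5 * x2 - 1.0    # Player 1's partial gradient
br2 = lambda x1: proj(1.0 - 0.5 * x1)         # Player 2's best-response map

x1, x2, alpha = 0.0, 0.0, 0.5
for _ in range(100):
    x1 = proj(x1 - alpha * grad1(x1, x2))     # projected gradient step
    x2 = br2(x1)                              # best-response query
print(round(x1, 6), round(x2, 6))             # → 0.666667 0.666667
```

Player 1 never sees Player 2's objective, only the response to its own move; on this example the iteration contracts linearly toward the equilibrium, consistent with the convergence behavior the letter establishes under regularity conditions.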
Collecting everyday speech data for prosodic analysis is challenging due to the confounding of prosody and semantics, privacy constraints, and participant compliance. We introduce and empirically evaluate a content-controlled, privacy-first smartphone protocol that uses scripted read-aloud sentences to standardize lexical content (including prompt valence) while capturing natural variation in prosodic delivery. The protocol performs on-device prosodic feature extraction, deletes raw audio immediately, and transmits only derived features for analysis. We deployed the protocol in a large study (N = 560; 9,877 recordings), evaluated compliance and data quality, and conducted diagnostic prediction tasks on the extracted features, predicting speaker sex and concurrently reported momentary affective states (valence, arousal). We discuss implications and directions for advancing and deploying the protocol.
Many wireless vision applications, such as autonomous driving, require preservation of global structural information rather than only per-pixel fidelity. However, existing Deep joint source-channel coding (DeepJSCC) schemes mainly optimize pixel-wise losses and provide no explicit protection of connectivity or topology. This letter proposes TopoJSCC, a topology-aware DeepJSCC framework that integrates persistent-homology regularizers into end-to-end training. Specifically, we enforce topological consistency by penalizing Wasserstein distances between cubical persistence diagrams of original and reconstructed images, and between Vietoris--Rips persistence diagrams of latent features before and after the channel to promote a robust latent manifold. TopoJSCC is based on end-to-end learning and requires no side information. Experiments show improved topology preservation and peak signal-to-noise ratio (PSNR) in low signal-to-noise ratio (SNR) and bandwidth-ratio regimes.
Networked multi-agent dynamical systems have been used to model how individual opinions evolve over time due to the opinions of other agents in the network. Particularly, such a model has been used to study how a planning agent can be used to steer opinions in a desired direction through repeated, budgeted interventions. In this paper, we consider the problem where individuals' susceptibilities to external influences are unknown. We propose an online algorithm that alternates between estimating this susceptibility parameter, and using the current estimate to drive the opinion to a desired target. We provide conditions that guarantee stability and convergence to the desired target opinion when the planning agent faces budgetary or temporal constraints. Our analysis shows that the key advantage of estimating the susceptibility parameter is that it helps achieve near-optimal convergence to the target opinion given a finite amount of intervention rounds, and, for a given intervention budget, quantifies how close the opinion can get to the desired target.
This paper presents a particle swarm optimization algorithm that leverages surrogate modeling to replace the conventional global best solution with the minimum of an n-dimensional quadratic form, providing a better-conditioned dynamic attractor for the swarm. This refined convergence target, informed by the local landscape, enhances global convergence behavior and increases robustness against premature convergence and noise, while incurring only minimal computational overhead. The surrogate-augmented approach is evaluated against the standard algorithm through a numerical study on a set of benchmark optimization functions that exhibit diverse landscapes. To ensure statistical significance, 400 independent runs are conducted for each function and algorithm, and the results are analyzed based on their statistical characteristics and corresponding distributions. The quadratic surrogate attractor consistently outperforms the conventional algorithm across all tested functions. The improvement is particularly pronounced for quasi-convex functions, where the surrogate model can exploit the underlying convex-like structure of the landscape.
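The surrogate attractor can be sketched as follows: fit a quadratic model to the particles' recent evaluations by least squares and use its stationary point in place of the conventional global best. This is a minimal numpy sketch under assumed details (full quadratic fit, direct linear solve); the paper's algorithm may condition or safeguard the fit differently.

```python
import numpy as np

def quadratic_attractor(X, y):
    """Fit y ~ x^T A x + b^T x + c by least squares and return the
    stationary point -0.5 * A^{-1} b as the swarm's attractor."""
    n = X.shape[1]
    pairs = [(i, j) for i in range(n) for j in range(i, n)]
    cols = [X[:, i] * X[:, j] for i, j in pairs]       # quadratic terms
    cols += [X[:, i] for i in range(n)]                # linear terms
    cols.append(np.ones(len(X)))                       # constant term
    w, *_ = np.linalg.lstsq(np.stack(cols, axis=1), y, rcond=None)
    A = np.zeros((n, n))
    for k, (i, j) in enumerate(pairs):
        A[i, j] = A[j, i] = w[k] if i == j else 0.5 * w[k]
    b = w[len(pairs):len(pairs) + n]
    return np.linalg.solve(2.0 * A, -b)                # solves 2Ax + b = 0

# On an exactly quadratic landscape the attractor is the true minimum.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(50, 3))
y = np.sum((X - 1.0) ** 2, axis=1)                     # minimum at (1, 1, 1)
print(np.round(quadratic_attractor(X, y), 6))          # → [1. 1. 1.]
```

On quasi-convex landscapes the fitted stationary point tracks the basin's center even when individual particles are noisy, which is the mechanism behind the robustness gains the abstract reports.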
This paper develops a multi-period optimization framework to design a voluntary renewable program (VRP) for an electric utility company, aiming to maximize total renewable energy deployments. In the business model of VRP, the utility must ensure it generates renewable energy up to the total contracted amount during each market episode (i.e., a year), while all the revenue collected from the VRP must either be used to invest in procuring renewable capacities or to maintain the current renewable fleet and infrastructure. We thus formulate the problem as an optimal pricing problem coupled with revenue allocation and renewable deployment decisions. We model the demand function of voluntary renewable contracts as an exponential decay function based on survey data. We analytically derive the optimal pricing policy of the VRP as a function of the current grid carbon intensity. We prove that a myopic policy, which maximizes renewable capacity in each period, is conditionally optimal: it attains the long-run optimum due to the utility's revenue-neutral constraint. We show that different binding conditions and marginal values of decision variables correspond to different phases of the energy transition, and that the utility should strategically design its revenue-sharing decisions, balancing investments in renewable expansion and subsidizing existing renewable fleets. Finally, we show that voluntary renewable programs can only extend renewable penetration but cannot achieve net-zero emissions or a fully renewable grid. This pricing-allocation-expansion framework highlights both the potential and limitations of voluntary renewable demand, providing analytical insight into optimal policy design and the qualitative shifts occurring during the energy transition process.
Large audio-language models (LALMs) can produce expressive speech, yet reliable emotion control remains elusive: conversions often miss the target affect and may degrade linguistic fidelity through refusals, hallucinations, or paraphrase. We present, to our knowledge, the first neuron-level study of emotion control in speech-generative LALMs and demonstrate that compact emotion-sensitive neurons (ESNs) are causally actionable, enabling training-free emotion steering at inference time. ESNs are identified via success-filtered activation aggregation enforcing both emotion realization and content preservation. Across three LALMs (Qwen2.5-Omni-7B, MiniCPM-o 4.5, Kimi-Audio), ESN interventions yield emotion-specific gains that generalize to unseen speakers and are supported by automatic and human evaluation. Controllability depends on selector design, mask sparsity, filtering, and intervention strength. Our results establish a mechanistic framework for training-free emotion control in speech generation.
Reducing latency and energy consumption is critical to improving the efficiency of memory systems in modern computing. This work introduces ReLMXEL (Reinforcement Learning for Memory Controller with Explainable Energy and Latency Optimization), an explainable multi-agent online reinforcement learning framework that dynamically optimizes memory controller parameters using reward decomposition. ReLMXEL operates within the memory controller, leveraging detailed memory behavior metrics to guide decision-making. Experimental evaluations across diverse workloads demonstrate consistent performance gains over baseline configurations, with refinements driven by workload-specific memory access behavior. By incorporating explainability into the learning process, ReLMXEL not only enhances performance but also increases the transparency of control decisions, paving the way for more accountable and adaptive memory system designs.
In dynamically varying optical wireless communication (OWC) links, conventional quadrature amplitude modulation (QAM) in optical orthogonal frequency-division multiplexing (OFDM) requires frequent channel estimation and equalization, incurring pilot overhead and processing latency. This paper proposes a virtual polarization modulation (VPM)-based direct-current-biased optical OFDM (DCO-OFDM) scheme that maps each data symbol onto the three-dimensional Stokes space and places its corresponding Jones vector across two adjacent OFDM subcarriers. Using a rotation-based analytical framework, closed-form symbol error rate (SER) expressions are derived for arbitrary spherical constellations, along with upper and lower bounds and high signal-to-noise ratio (SNR) approximations. The framework is further extended to practical OWC scenarios with frequency-selective channels and atmospheric turbulence. Monte Carlo (MC) simulations validate the theoretical results. The results show that under practical OWC impairments, VPM outperforms QAM with least-squares (LS) channel estimation and minimum mean square error (MMSE) equalization. At a target SER of $10^{-5}$, 16-VPM achieves SNR gains of approximately 7.5 dB and 4 dB over equalized 16-QAM and 8-QAM, respectively, in frequency-selective channels, and a 6 dB advantage over equalized 16-QAM under atmospheric turbulence. By eliminating the need for channel state information, the proposed VPM-based DCO-OFDM provides a robust and low-latency solution for dynamic OWC links.
We study the exact decision problem for feedback capacity of finite-state channels (FSCs). Given an encoding $e$ of a binary-input binary-output rational unifilar FSC with specified rational initial distribution, and a rational threshold $q$, we ask whether the feedback capacity satisfies $C_{fb}(W_e, \pi_{1,e}) \ge q$. We prove that this exact threshold problem is undecidable, even when restricted to a severely constrained class of rational unifilar FSCs with bounded state space. The reduction is effective and preserves rationality of all channel parameters. As a structural consequence, the exact threshold predicate does not lie in the existential theory of the reals ($\exists\mathbb{R}$), and therefore cannot admit a universal reduction to finite systems of polynomial equalities and inequalities over the real numbers. In particular, there is no algorithm deciding all instances of the exact feedback-capacity threshold problem within this class. These results do not preclude approximation schemes or solvability for special subclasses; rather, they establish a fundamental limitation for exact feedback-capacity reasoning in general finite-state settings. At the metatheoretic level, the undecidability result entails corresponding Gödel-Tarski-Löb incompleteness phenomena for sufficiently expressive formal theories capable of representing the threshold predicate.
Asset management requires accurate 3D models to inform the maintenance, repair, and assessment of buildings, maritime vessels, and other key structures as they age. These downstream applications rely on high-fidelity models produced from aerial surveys in close proximity to the asset, enabling operators to locate and characterise deterioration or damage and plan repairs. Captured images typically have high overlap between adjacent camera poses, sufficient detail at millimetre scale, and challenging visual appearances such as reflections and transparency. However, existing 3D reconstruction datasets lack examples of these conditions, making it difficult to benchmark methods for this task. We present a new dataset with ground truth depth maps, camera poses, and mesh models of three synthetic scenes with simulated inspection trajectories and varying levels of surface condition on non-Lambertian scene content. We evaluate state-of-the-art reconstruction methods on this dataset. Our results demonstrate that current approaches struggle significantly with the dense capture trajectories and complex surface conditions inherent to this domain, exposing a critical scalability gap and pointing toward new research directions for deployable 3D reconstruction in asset inspection. Project page: this https URL
The solvability condition of the power flow equation is important in operational planning and control as it guarantees the existence and uniqueness of a solution for a given set of power injections. As renewable generation becomes more prevalent, the steady-state operating point of the system changes more frequently, making it increasingly challenging to verify power flow solvability by running the AC power flow solver after each change in power injections. This process can be computationally intensive, and numerical solvers do not always converge reliably to an operational solution. In this paper, we propose a sufficient condition for the solvability of the lossless real power flow equation based on the cycle space of a meshed network. The proposed condition yields a less conservative solvability certificate than existing sufficient conditions on the tested systems and can serve as a useful foundation for developing solvability conditions for the fully coupled power flow equations.
Advanced autonomous driving systems require accurate vehicle dynamics modeling. However, identifying a precise dynamics model remains challenging due to strong nonlinearities and the coupled longitudinal and lateral dynamic characteristics. Previous research has employed physics-based analytical models or neural networks to construct vehicle dynamics representations. Nevertheless, these approaches often struggle to simultaneously achieve satisfactory performance in terms of system identification efficiency, modeling accuracy, and compatibility with linear control strategies. In this paper, we propose a fully data-driven dynamics modeling method tailored for complex distributed electric-drive trucks (DETs), leveraging Koopman operator theory to represent highly nonlinear dynamics in a lifted linear embedding space. To achieve high-precision modeling, we first propose a novel dual-branch encoder, which encodes dynamic states and provides a powerful basis for the proposed Koopman-based method, entitled KODE. A physics-informed supervision mechanism, grounded in the geometric consistency of temporal vehicle motion, is incorporated into the training process to facilitate effective learning of both the encoder and the Koopman operator. Furthermore, to accommodate the diverse driving patterns of DETs, we extend the vanilla Koopman operator to a mixture-of-Koopman operator framework, enhancing modeling capability. Simulations conducted in a high-fidelity TruckSim environment and real-world experiments demonstrate that the proposed approach achieves state-of-the-art performance in long-term dynamics state estimation.
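KODE learns its lifting with a dual-branch encoder; the underlying Koopman idea can be illustrated with a hand-chosen dictionary instead. Below, a hypothetical discrete-time nonlinear system becomes exactly linear in the lifted coordinates (x, y, x²), and the Koopman matrix is recovered from data by least squares (the classic EDMD recipe, not the paper's learned-encoder method).

```python
import numpy as np

a, b = 0.9, 0.5
def step(s):
    # hypothetical nonlinear system: x' = a*x, y' = b*y + (a^2 - b)*x^2
    x, y = s
    return np.array([a * x, b * y + (a**2 - b) * x**2])

def lift(s):
    # dictionary (x, y, x^2) renders the dynamics exactly linear
    return np.array([s[0], s[1], s[0] ** 2])

rng = np.random.default_rng(0)
S = rng.uniform(-1, 1, size=(200, 2))
Z0 = np.array([lift(s) for s in S])
Z1 = np.array([lift(step(s)) for s in S])
K, *_ = np.linalg.lstsq(Z0, Z1, rcond=None)   # Z1 ~ Z0 @ K

# long-horizon prediction performed entirely in the lifted linear space
s = np.array([0.7, -0.3])
z = lift(s)
for _ in range(20):
    s, z = step(s), z @ K
print(np.allclose(z[:2], s, atol=1e-8))       # → True
```

Because the dictionary is exact here, the linear model reproduces the nonlinear trajectory over long horizons; the paper's contribution is learning such a lifting for truck dynamics, where no closed-form dictionary is available.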
Sufficient testing under corner cases is critical for the long-term operation of vehicle-infrastructure cooperation systems (VICS). However, existing corner-case generation methods are primarily AI-driven, and VICS testing under corner cases is typically limited to simulation. In this paper, we introduce an L5 ''Interactable'' level to the VICS digital twin (VICS-DT) taxonomy, extending beyond the conventional L4 ''Optimizable'' level. We further propose an L5-level VICS testing framework, IMPACT (Interactive Mixed-digital-twin Paradigm for Advanced Cooperative vehicle-infrastructure Testing). By enabling direct human interactions with VICS entities, IMPACT incorporates highly uncertain and unpredictable human behaviors into the testing loop, naturally generating high-quality corner cases that complement AI-based methods. Furthermore, the mixedDT-enabled ''Physical-Virtual Action Interaction'' facilitates safe VICS testing under corner cases, incorporating real-world environments and entities rather than purely in simulation. Finally, we implement IMPACT on the I-VIT (Interactive Vehicle-Infrastructure Testbed), and experiments demonstrate its effectiveness. The experimental videos are available at our project website: this https URL.
The objective comparison of Reinforcement Learning (RL) algorithms is notoriously complex, as outcomes are critically sensitive to environmental design, reward structures, and the stochasticity inherent in both algorithmic learning and environmental dynamics. To manage this complexity, we introduce a rigorous benchmarking framework by extending converse optimality to discrete-time, control-affine, nonlinear systems with noise. Our framework provides necessary and sufficient conditions under which a prescribed value function and policy are optimal for constructed systems, enabling the systematic generation of benchmark families via homotopy variations and randomized parameters. We validate the framework by automatically constructing diverse environments, demonstrating its capacity for a controlled and comprehensive evaluation across algorithms. By assessing standard methods against a ground-truth optimum, our work delivers a reproducible foundation for precise and rigorous RL benchmarking.
This paper analyzes the physical layer security performance of massive uplink Internet of Things (IoT) networks operating under the finite blocklength (FBL) regime. IoT devices and base stations (BS) are modeled using a stochastic geometry approach, while an eavesdropper is placed at a random location around the transmitting device. This system model captures security risks common in dense IoT deployments. Analytical expressions for the secure success probability, secrecy outage probability and secrecy throughput are derived to characterize how stochastic interference, fading and eavesdropper spatial uncertainty interact with FBL constraints in short packet uplink transmissions. Numerical results illustrate key system behavior under different network and channel conditions.
Learning-based semantic communication (SemCom) has recently emerged as a promising paradigm for improving the transmission efficiency of wireless networks. However, existing methods typically rely on extensive end-to-end training, which is both inflexible and computationally expensive in dynamic wireless environments. Moreover, they fail to exploit redundancy across multiple transmissions of semantically similar content, limiting overall efficiency. To overcome these limitations, we propose a channel-aware generative adversarial network (GAN) inversion-based joint source-channel coding (CAGI-JSCC) framework that enables training-free SemCom by leveraging a pre-trained SemanticStyleGAN model. By explicitly incorporating wireless channel characteristics into the GAN inversion process, CAGI-JSCC adapts to varying channel conditions without additional training. Furthermore, we introduce a cache-enabled dynamic codebook (CDC) that caches disentangled semantic components at both the transmitter and receiver, allowing the system to reuse previously transmitted content. This semantic-level caching can continuously reduce redundant transmissions as experience accumulates. Extensive experiments on image transmission demonstrate the effectiveness of the proposed framework. In particular, our system achieves comparable perceptual quality with an average bandwidth compression ratio (BCR) of 1/224, and as low as 1/1024 for a single image, significantly outperforming baselines with a BCR of 1/128.
In the emerging mixed traffic environments, Connected and Autonomous Vehicles (CAVs) have to interact with surrounding human-driven vehicles (HDVs). This paper introduces MSH-MCCT (Multi-Source Human-in-the-Loop Mixed Cloud Control Testbed), a novel CAV testbed that captures complex interactions between various CAVs and HDVs. Utilizing the Mixed Digital Twin concept, which combines Mixed Reality with Digital Twin, MSH-MCCT integrates physical, virtual, and mixed platforms, along with multi-source control inputs. Bridged by the mixed platform, MSH-MCCT allows human drivers and CAV algorithms to operate both physical and virtual vehicles within multiple fields of view. Particularly, this testbed facilitates the coexistence and real-time interaction of physical and virtual CAVs \& HDVs, significantly enhancing the experimental flexibility and scalability. Experiments on vehicle platooning in mixed traffic showcase the potential of MSH-MCCT to conduct CAV testing with multi-source real human drivers in the loop through driving simulators of diverse fidelity. The videos for the experiments are available at our project website: this https URL.
We provide a method to design adaptive controllers for nonlinear systems using model predictive control (MPC). By combining a certainty-equivalent MPC formulation with least-mean-square parameter adaptation, we obtain an adaptive controller with strong robust performance guarantees: The cumulative tracking error and violation of state constraints scale linearly with noise energy, disturbance energy, and path length of parameter variation. A key technical contribution is developing the underlying certainty-equivalent MPC that tracks output references, accounts for actuator limitations and desired state constraints, requires no system-specific offline design, and provides strong inherent robustness properties. This is achieved by leveraging finite-horizon rollouts, artificial references, recent analysis techniques for optimization-based controllers, and soft state constraints. For open-loop stable systems, we derive a semi-global result that applies to arbitrarily large measurement noise, disturbances, and parametric uncertainty. For stabilizable systems, we derive a regional result that is valid within a given region of attraction and for sufficiently small uncertainty. Applicability and benefits are demonstrated with numerical simulations involving systems with large parametric uncertainty: a linear stable chain of mass-spring-dampers and a nonlinear unstable quadrotor navigating obstacles.
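The least-mean-square update at the heart of the adaptation can be shown on a scalar example. The system, step size, and normalization below are hypothetical illustrations; the paper combines this style of update with a full certainty-equivalent MPC, which is not reproduced here.

```python
import numpy as np

# Hypothetical scalar system x+ = theta * x + u with unknown theta.
theta_true, theta_hat = 0.8, 0.0
mu, eps = 0.5, 1e-6
rng = np.random.default_rng(1)

x = 1.0
for _ in range(200):
    u = rng.uniform(-1.0, 1.0)
    x_next = theta_true * x + u
    # normalized LMS: move the estimate along the regressor direction
    # to reduce the one-step prediction error
    err = x_next - (theta_hat * x + u)
    theta_hat += mu * x * err / (eps + x * x)
    x = x_next
print(round(theta_hat, 4))
```

Whenever the state is persistently exciting, each update shrinks the parameter error geometrically, so the estimate settles near the true value within a few dozen steps; the certainty-equivalent controller would then plan with this current estimate at every sampling instant.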
We propose RHYME-XT, an operator-learning framework for surrogate modeling of spatiotemporal control systems governed by input-affine nonlinear partial integro-differential equations (PIDEs) with localized rhythmic behavior. RHYME-XT uses a Galerkin projection to approximate the infinite-dimensional PIDE on a learned finite-dimensional subspace with spatial basis functions parameterized by a neural network. This yields a projected system of ODEs driven by projected inputs. Instead of integrating this non-autonomous system, we directly learn its flow map using an architecture for learning flow functions, avoiding costly computations while obtaining a continuous-time and discretization-invariant representation. Experiments on a neural field PIDE show that RHYME-XT outperforms a state-of-the-art neural operator and is able to transfer knowledge effectively across models trained on different datasets, through a fine-tuning process.
An aerial simultaneous transmitting and reflecting reconfigurable intelligent surface (STAR-RIS) enables full-space coverage in dynamic wireless networks. However, most existing works assume fixed user grouping, overlooking the fact that STAR-RIS deployment inherently determines whether users are served via transmission or reflection. To address this, we propose a joint deployment and beamforming framework, where an aerial STAR-RIS dynamically adjusts its location and orientation to adaptively control user grouping and enhance hybrid beamforming. We formulate a Markov decision process (MDP) capturing the coupling among deployment, grouping, and signal design. To solve the resulting non-convex and time-varying problem, we develop a PPO-based reinforcement learning algorithm that adaptively balances user grouping and beamforming resources through online policy learning. Simulation results show 57.1\% and 285\% sum-rate gains over fixed-deployment and RIS-free baselines, respectively, demonstrating the benefit of user-grouping-aware control in STAR-RIS-aided systems.
Deep learning models for pulmonary disease screening from Computed Tomography (CT) scans promise to alleviate the immense workload on radiologists. Still, their high computational cost, stemming from processing entire 3D volumes, remains a major barrier to widespread clinical adoption. Current sub-sampling techniques often compromise diagnostic integrity by introducing artifacts or discarding critical information. To overcome these limitations, we propose an Efficient and Reliable Framework (ERF) that fundamentally improves the practicality of automated CT analysis. Our framework introduces two core innovations: (1) A Cluster-based Sub-Sampling (CSS) method that efficiently selects a compact yet comprehensive subset of CT slices by optimizing for both representativeness and diversity. By integrating an efficient k-nearest neighbor search with an iterative refinement process, CSS bypasses the computational bottlenecks of previous methods while preserving vital diagnostic features. (2) An Ambiguity-aware Uncertainty Quantification (AUQ) mechanism, which enhances reliability by specifically targeting data ambiguity arising from subtle lesions and artifacts. Unlike standard uncertainty measures, AUQ leverages the predictive discrepancy between auxiliary classifiers to construct a specialized ambiguity score. By maximizing this discrepancy during training, the system effectively flags ambiguous samples where the model lacks confidence due to visual noise or intricate pathologies. Validated on two public datasets with 2,654 CT volumes across diagnostic tasks for 3 pulmonary diseases, ERF achieves diagnostic performance comparable to the full-volume analysis (over 90% accuracy and recall) while reducing processing time by more than 60%. This work represents a significant step towards deploying fast, accurate, and trustworthy AI-powered screening tools in time-sensitive clinical settings.
In South Korea, the power grid is currently operated based on the static line rating (SLR) method, in which transmission line capacity is determined based on extreme weather conditions. However, with global warming, there is a concern that temperatures during summer may exceed the SLR criteria, posing safety risks. On the other hand, the conservative estimates used for winter conditions limit the utilization of renewable energy. Proposals to install new lines face significant financial and environmental hurdles, complicating efforts to adapt to these changing conditions. Dynamic Line Rating (DLR) offers a real-time solution but requires extensive weather monitoring and complex integration. This paper proposes a novel method that improves on SLR by analyzing historical data to refine line rating criteria on a monthly, seasonal, and semi-annual basis. Through simulations, we show that our approach significantly enhances the cost effectiveness and reliability of the power system, achieving efficiencies close to DLR with existing infrastructure. This method offers a practical alternative to overcome the limitations of SLR and the implementation challenges of DLR.
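The refinement idea, replacing a single worst-case rating with period-specific ratings derived from historical weather, can be sketched with a percentile rule. The synthetic temperatures, the 99th-percentile choice, and the linear derating model below are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(0)

def derate(temp_c):
    # hypothetical linear derating: 1.0 p.u. capacity at a 10 C
    # reference ambient, losing 1% of capacity per additional degree
    return 1.0 - 0.01 * (temp_c - 10.0)

# synthetic hourly ambient temperatures for two months
july = rng.normal(28.0, 4.0, size=31 * 24)
january = rng.normal(-2.0, 5.0, size=31 * 24)

def monthly_rating(temps, pct=99):
    # rate the line against the month's near-worst-case temperature
    # rather than a single year-round extreme
    return derate(np.percentile(temps, pct))

r_jul, r_jan = monthly_rating(july), monthly_rating(january)
print(r_jan > r_jul)   # winter rating exceeds the summer rating
```

A single year-round SLR would have to use the July extreme everywhere; the monthly rule recovers the unused winter headroom while staying conservative within each period, which is the trade-off the simulations in the paper quantify.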
To address the computational challenges of Model Predictive Control (MPC), recent research has studied using imitation learning to approximate MPC with a computationally efficient Deep Neural Network (DNN). However, this introduces a common issue in learning-based control: the simulation-to-reality (sim-to-real) gap. Inspired by Robust Tube MPC, this study proposes a new control framework that addresses this issue from a control perspective. The framework ensures the DNN operates in the same environment as the source domain, addressing the sim-to-real gap with high data-collection efficiency. Moreover, an input refinement governor is introduced to address the DNN's inability to adapt to variations in model parameters, enabling the system to satisfy MPC constraints more robustly when parameters change. The proposed framework was validated through two case studies, cart-pole control and vehicle collision-avoidance control, which analyze the principles of the proposed framework in detail and demonstrate its application to vehicle control.
Accurate phase estimation -- the process of assigning phase values between $0$ and $2\pi$ to repetitive or periodic signals -- is a cornerstone in the analysis of oscillatory signals across diverse fields, from neuroscience to robotics, where it is fundamental, e.g., to understanding coordination in neural networks, cardiorespiratory coupling, and human-robot interaction. However, existing methods are often limited to offline processing and/or constrained to one-dimensional signals. In this paper, we introduce ROPE, which, to the best of our knowledge, is the first phase-estimation algorithm capable of (i) handling signals of arbitrary dimension and (ii) operating in real-time, with minimal error. ROPE identifies repetitions within the signal to segment it into (pseudo-)periods and assigns phase values by performing efficient, tractable searches over previous signal segments. We extensively validate the algorithm on a variety of signal types, including trajectories from chaotic dynamical systems, human motion-capture data, and electrocardiographic recordings. Our results demonstrate that ROPE is robust against noise and signal drift, and achieves significantly superior performance compared to state-of-the-art phase estimation methods. This advancement enables real-time analysis of complex biological rhythms, opening new pathways, for example, for early diagnosis of pathological rhythm disruptions and developing rhythm-based therapeutic interventions in neurological and cardiovascular disorders.
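ROPE's segmentation and search procedure is not detailed in this summary, but the core segment-then-assign idea can be shown in its simplest form: given the start indices of detected (pseudo-)periods, assign each sample a phase by linear interpolation within its period. The function name and the hypothetical boundaries below are my own, not the paper's:

```python
import math

def assign_phase(n_samples, boundaries):
    """Assign a phase in [0, 2*pi) to each sample, given the start indices
    of detected (pseudo-)periods. Phase ramps linearly from 0 to 2*pi
    within each period -- the simplest segment-then-assign scheme."""
    phases = [0.0] * n_samples
    for k in range(len(boundaries) - 1):
        start, end = boundaries[k], boundaries[k + 1]
        for i in range(start, end):
            phases[i] = 2 * math.pi * (i - start) / (end - start)
    return phases

# Two detected periods of length 4 in an 8-sample signal.
ph = assign_phase(8, [0, 4, 8])
print([round(p, 3) for p in ph])  # each period ramps 0 -> 2*pi
```

The actual algorithm replaces the fixed boundaries with repetitions found online via efficient searches over previous signal segments, which is what makes it work for arbitrary-dimensional, drifting signals.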
Instruction-guided text-to-speech (ITTS) enables users to control speech generation through natural language prompts, offering a more intuitive interface than traditional TTS. However, the alignment between user style instructions and listener perception remains largely unexplored. This work first presents a perceptual analysis of ITTS controllability across two expressive dimensions (adverbs of degree and graded emotion intensity) and collects human ratings on speaker age and word-level emphasis attributes. To comprehensively reveal the instruction-perception gap, we release the Expressive VOice Control (E-VOC) corpus, a large-scale collection of human evaluations. Furthermore, we find that (1) gpt-4o-mini-tts is the most reliable ITTS model, showing strong alignment between instructions and generated utterances across acoustic dimensions; (2) all five analyzed ITTS systems tend to generate adult voices even when instructed to produce child or elderly voices; and (3) fine-grained control remains a major challenge, indicating that most ITTS systems have substantial room for improvement in interpreting subtly different attribute instructions.
Research on soundscapes has shifted the focus of environmental acoustics from noise levels to the perception of sounds, incorporating contextual factors. Soundscape emotion recognition (SER) models perception using a set of features, with arousal and valence commonly regarded as sufficient descriptors of affect. In this work, we blend \emph{graph learning} techniques with a novel \emph{information criterion} to develop a feature selection framework for SER. Specifically, we estimate a sparse graph representation of feature relations using linear structural equation models (SEM) tailored to the widely used Emo-Soundscapes dataset. The resulting graph captures the relations between input features and the two emotional outputs. To determine the appropriate level of sparsity, we propose a novel \emph{generalized elbow detector}, which provides both a point estimate and an uncertainty interval. We conduct an extensive evaluation of our methods, including visualizations of the inferred relations. While several of our findings align with previous studies, the graph representation also reveals a strong connection between arousal and valence, challenging common SER assumptions.
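The paper's generalized elbow detector (with its uncertainty interval) is not detailed in this summary; a common geometric elbow heuristic, picking the point farthest below the chord joining the endpoints of a decreasing curve, conveys the basic idea of selecting a sparsity level. The function and data are illustrative:

```python
def elbow_index(ys):
    """Elbow of a decreasing curve: the index whose value lies farthest
    (vertically) below the straight line joining the first and last
    points. A standard geometric heuristic, shown only to illustrate
    the sparsity-selection problem, not the paper's detector."""
    n = len(ys)
    chord = [ys[0] + (ys[-1] - ys[0]) * i / (n - 1) for i in range(n)]
    gaps = [chord[i] - ys[i] for i in range(n)]
    return max(range(n), key=lambda i: gaps[i])

# Hypothetical SEM fit error as graph sparsity is relaxed.
errors = [100.0, 40.0, 18.0, 15.0, 14.0, 13.5]
print(elbow_index(errors))  # -> 2: diminishing returns start here
```

A generalized detector of the kind the paper proposes would additionally report an interval around this point estimate rather than a single index.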
We present an algorithm for planning trajectories that avoid obstacles and satisfy key-door precedence specifications expressed with a fragment of signal temporal logic. Our method includes a novel exact convex partitioning of the obstacle free space that encodes connectivity among convex free space sets, key sets, and door sets. We then construct an augmented graph of convex sets that exactly encodes the key-door precedence specifications. By solving a shortest path problem in this augmented graph of convex sets, our pipeline provides an exact solution up to a finite parameterization of the trajectory. To illustrate the effectiveness of our approach, we present a method to generate key-door mazes that provide challenging problem instances, and we perform numerical experiments to evaluate the proposed pipeline. Our pipeline is faster by several orders of magnitude than recent state-of-the-art methods that use general purpose temporal logic tools.
This paper presents a novel approach for ensuring safe operation of systems subject to input nonlinearities and time-varying safety constraints. We extend the time-varying barrier function framework to address time-varying safety constraints and explicitly account for control-dependent nonlinearities at the plant input. Guaranteed bounds on the input-output behavior of these nonlinearities are provided through pointwise-in-time quadratic constraints. The result is a class of robust time-varying control barrier functions that define a safety filter. This filter ensures robust safety for all admissible nonlinearities while minimally modifying the command generated by a baseline controller. We derive a second-order cone program (SOCP) to compute this safety filter online and provide feasibility conditions for ball-constrained inputs. The proposed approach is demonstrated on a spacecraft docking maneuver.
NashOpt is an open-source Python library for computing and designing generalized Nash equilibria (GNEs) in noncooperative games with shared constraints and real-valued decision variables. The library exploits the joint Karush-Kuhn-Tucker (KKT) conditions of all players to handle both general nonlinear GNEs and linear-quadratic games, including their variational versions. Nonlinear games are solved via nonlinear least-squares formulations, relying on JAX for automatic differentiation. Linear-quadratic GNEs are reformulated as mixed-integer linear programs, enabling efficient computation of multiple equilibria. The framework also supports inverse-game and Stackelberg game-design problems. The capabilities of NashOpt are demonstrated through several examples, including noncooperative game-theoretic control problems of linear quadratic regulation and model predictive control. The library is available at this https URL
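To make the joint-KKT idea concrete, consider the smallest possible case: an unconstrained two-player linear-quadratic game, where stacking each player's first-order stationarity condition yields one linear system whose solution is the Nash equilibrium. This is a hand-rolled illustration of the principle, not NashOpt's API:

```python
def two_player_lq_nash(b1, b2, g):
    """Unconstrained two-player LQ game: player i minimizes
    0.5*x_i**2 + b_i*x_i + g*x_i*x_j over its own x_i.
    Stacking the two stationarity (KKT) conditions gives
        x1 + g*x2 = -b1,    g*x1 + x2 = -b2,
    solved here in closed form via the 2x2 inverse."""
    det = 1 - g * g  # requires |g| != 1 for a unique equilibrium
    x1 = (-b1 + g * b2) / det
    x2 = (-b2 + g * b1) / det
    return x1, x2

print(two_player_lq_nash(1.0, 2.0, 0.5))  # -> (0.0, -2.0)
```

NashOpt applies the same stationarity idea to general nonlinear and shared-constraint games, where the stacked KKT system is no longer linear and is instead handled via nonlinear least squares with JAX-based differentiation.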
The Access Traffic Steering, Switching, and Splitting (ATSSS) feature defined in 3GPP Release 19 enables traffic to flow over multiple access paths, supporting lower-latency end-to-end (E2E) delivery for 6G time-sensitive services. However, existing E2E multi-path operation often falls short of the more stringent QoS requirements of such services. This work proposes a Reconfigurable Intelligent Surface (RIS)-aided E2E multi-path uplink (UL) transmission architecture that explicitly accounts for both radio link latency and N3 backhaul latency, jointly designing the UL traffic-splitting ratio, transmit power, receive combining, and RIS phase shifts under practical constraints to minimize average E2E latency. We develop an alternating optimization framework that iteratively updates these design parameters. Simulations show that the proposed E2E optimization framework lowers the average E2E latency by up to 43% for a single user and 32% system-wide compared with baselines from our prior work [1].
Reinforcement Learning (RL) has achieved remarkable success in solving complex sequential decision-making problems. However, its application to safety-critical physical systems remains constrained by the lack of stability guarantees. Standard RL algorithms prioritize reward maximization, often yielding policies that may induce oscillations or unbounded state divergence. Significant work has incorporated Lyapunov-based stability guarantees into RL algorithms, but key challenges remain: selecting a candidate Lyapunov function, the computational cost of additional function approximators, and the conservative policies that result from embedding stability criteria in the learning process. In this work we propose a novel Lyapunov-constrained Soft Actor-Critic (LC-SAC) algorithm based on Koopman operator theory. We use extended dynamic mode decomposition (EDMD) to produce a linear approximation of the system and derive from it a closed-form candidate Lyapunov function. This Lyapunov function is incorporated into the SAC algorithm to provide stability guarantees for a policy that stabilizes the nonlinear system. We evaluate the approach on trajectory tracking in a 2D quadrotor environment based on safe-control-gym. The proposed algorithm shows training convergence and decaying violations of the Lyapunov stability criterion compared to the vanilla SAC baseline. GitHub Repository: this https URL
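The EDMD-to-Lyapunov step can be sketched in one dimension, where EDMD with identity observables collapses to ordinary least squares and the discrete Lyapunov equation has a closed-form solution. This is an illustrative reduction under those assumptions, not the paper's full construction:

```python
def edmd_lyapunov(snapshots, q=1.0):
    """1-D sketch of the pipeline: fit a linear model x+ ~ a*x by least
    squares (scalar EDMD with identity observables), then solve the
    discrete Lyapunov equation a*p*a - p = -q in closed form,
    yielding the candidate Lyapunov function V(x) = p*x**2."""
    xs, xps = snapshots
    a = sum(x * xp for x, xp in zip(xs, xps)) / sum(x * x for x in xs)
    assert abs(a) < 1.0, "fitted dynamics must be stable for V to exist"
    p = q / (1.0 - a * a)
    return a, (lambda x: p * x * x)

# Snapshot pairs generated by x+ = 0.8*x; the fit recovers a = 0.8.
xs = [1.0, 2.0, -1.5]
xps = [0.8 * x for x in xs]
a, V = edmd_lyapunov((xs, xps))
# Along trajectories, V(x) - V(a*x) = q*x**2 > 0, so V decays as required.
print(round(a, 3), round(V(2.0) - V(0.8 * 2.0), 3))
```

In the full method this scalar fit becomes a matrix least-squares problem over a dictionary of observables, and the resulting quadratic V is what constrains the SAC policy update.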
Integrated sensing and communication (ISAC) can substantially improve spectral, hardware, and energy efficiency by unifying radar sensing and data communications. In wideband and scattering-rich environments, clutter often dominates weak target reflections and becomes a fundamental bottleneck for reliable sensing. Practical ISAC clutter includes "cold" clutter arising from environmental backscatter of the probing waveform, and "hot" clutter induced by external interference and reflections from the environment whose statistics can vary rapidly over time. In this article, we develop a unified wideband multiple-input multiple-output orthogonal frequency-division multiplexing (MIMO-OFDM) signal model that captures both clutter types across the space, time, and frequency domains. Building on this model, we review clutter characterization at multiple levels, including amplitude statistics, robust spherically invariant random vector (SIRV) modeling, and structured covariance representations suitable for limited-snapshot regimes. We then summarize receiver-side suppression methods in the temporal and spatial domains, together with extensions to space-time adaptive processing (STAP) and space-frequency-time adaptive processing (SFTAP), and we provide guidance on selecting techniques under different waveform and interference conditions. To move beyond reactive suppression, we discuss clutter-aware transceiver co-design that couples beamforming and waveform optimization with practical communication quality-of-service (QoS) constraints to enable proactive clutter avoidance. We conclude with open challenges and research directions toward environment-adaptive and clutter-resilient ISAC for next-generation networks.
This paper introduces an uncertainty compensation-based robust adaptive model predictive control (MPC) framework for linear systems with nonlinear time-varying uncertainties. The framework integrates an L1 adaptive controller to compensate for the matched uncertainty and a robust feedback controller, designed using linear matrix inequalities, to mitigate the effect of unmatched uncertainty on target output channels. Uniform bounds on the errors between the system's states and control inputs and those of a nominal (i.e., uncertainty-free) system are derived. These error bounds are then used to tighten the actual system's state and input constraints, enabling the design of an MPC for the nominal system under these tightened constraints. Referred to as uncertainty compensation-based MPC (UC-MPC), this approach ensures constraint satisfaction while delivering enhanced performance compared to existing methods. Simulation results for a flight control example and a spacecraft landing on an asteroid demonstrate the effectiveness of the proposed framework.
Fully unsupervised deep generative modeling (FU-DGM) is promising for compressively sampled MRI (CS-MRI) when training data or compute are limited. Classical FU-DGMs such as DIP and INR rely on architectural priors, but the ill-conditioned inverse problem often demands many iterations and easily overfits measurement noise. We propose CogGen, a cognitive-load-informed FU-DGM that casts CS-MRI as staged inversion and regulates task-side "cognitive load" by progressively scheduling intrinsic difficulty and extraneous interference. CogGen replaces uniform data fitting with an easy-to-hard k-space weighting/selection strategy: early iterations emphasize low-frequency, high-SNR, structure-dominant samples, while higher-frequency or noise-dominated measurements are introduced later. We realize this schedule through self-paced curriculum learning (SPCL) with complementary criteria: a student mode that reflects what the model can currently learn and a teacher mode that indicates what it should follow, supporting both soft weighting and hard selection. Experiments and analyses show that CogGen-DIP and CogGen-INR improve reconstruction fidelity and convergence behavior compared with strong unsupervised baselines and competitive supervised pipelines.
Nonlinear optimisation techniques are commonly employed to minimise complex cost functions, with their effectiveness determined largely by the structure of the underlying error landscape. These methods require initial parameter values, and in the presence of multiple local minima, they are prone to becoming trapped in suboptimal regions. The likelihood of locating the global minimum increases substantially when the initialisation lies within its corresponding basin of attraction. Consequently, high-quality initial parameters are critical for successful optimisation. This technical report outlines a new strategy for selecting suitable initial parameters for a trigonometric model and unevenly sampled data, ensuring that the optimisation procedure starts sufficiently close to the global minimum. The proposed parameter estimation approach is strictly NI-based, interpretable, and explainable. It targets complicated cases, including samples with strong random noise, samples covering only a few periods, and samples covering only a fraction of one period. Special attention is paid to frequency estimation. We show that initial parameters can be estimated with sufficient accuracy down to a signal-to-noise ratio of 1.4 dB, at much lower computational cost than the Lomb-Scargle periodogram method requires.
This paper develops a control-theoretic framework for analyzing agentic systems embedded within feedback control loops, where an AI agent may adapt controller parameters, select among control strategies, invoke external tools, reconfigure decision architectures, and modify control objectives during operation. These capabilities are formalized by interpreting agency as hierarchical runtime decision authority over elements of the control architecture, leading to an augmented closed-loop representation in which physical states, internal memory, tool outputs, interaction signals, and design variables evolve as a coupled dynamical system. A five-level hierarchy of agency is defined, ranging from fixed control laws to runtime synthesis of control architectures and objectives. The analysis shows that increasing agency introduces interacting dynamical mechanisms such as time-varying adaptation, endogenous switching, decision-induced delays, and structural reconfiguration. The framework is developed in both nonlinear and linear settings, providing explicit design constraints for AI-enabled control systems in safety-critical applications.
Spoofing-robust automatic speaker verification (SASV) aims to integrate automatic speaker verification (ASV) and countermeasures (CM). A popular solution is the fusion of independent ASV and CM scores. To model SASV better, some frameworks integrate ASV and CM within a single network. However, these solutions are typically bi-encoder based, offer limited interpretability, and cannot be readily adapted to new evaluation parameters without retraining. Motivated by this, we propose a unified end-to-end framework with a three-class formulation that enables log-likelihood ratio (LLR) inference from class logits, yielding a more interpretable decision pipeline. Experiments show performance comparable to existing methods on ASVSpoof5 and better results on SpoofCeleb. Visualization and analysis further show that the three-class reformulation provides more interpretability.
The system frequency is a critical measure of power system stability, and understanding and modeling it are key to ensuring reliable power system operation. Koopman-based autoencoders are effective at approximating complex nonlinear data patterns, with potential applications to the frequency dynamics of power systems. However, their non-invertibility can result in a distorted latent representation, leading to significant prediction errors. Invertible neural networks (INNs), combined with the Koopman operator framework, provide a promising approach to address these limitations. In this study, we analyze different INN architectures and train them on simulation datasets. We further apply extensions to the networks to address inherent limitations of INNs and evaluate their impact. We find that coupling-layer INNs achieve the best performance when used in isolation. In addition, we demonstrate that hybrid approaches can improve performance when combined with suitable INNs, while reducing generalization capability when combined with disadvantageous architectures. Overall, our results provide a clearer overview of how architectural choices influence INN performance, offering guidance for selecting and designing INNs for modeling power system frequency dynamics.
We propose a deep learning framework for COVID-19 detection and disease classification from chest CT scans that integrates both 2.5D and 3D representations to capture complementary slice-level and volumetric information. The 2.5D branch processes multi-view CT slices (axial, coronal, sagittal) using a DINOv3 vision transformer to extract robust visual features, while the 3D branch employs a ResNet-18 architecture to model volumetric context and is pretrained with Variance Risk Extrapolation (VREx) followed by supervised contrastive learning to improve cross-source robustness. Predictions from both branches are combined through logit-level ensemble inference. Experiments on the PHAROS-AIF-MIH benchmark demonstrate the effectiveness of the proposed approach: for binary COVID-19 detection, the ensemble achieves 94.48% accuracy and a 0.9426 Macro F1-score, outperforming both individual models, while for multi-class disease classification the 2.5D DINOv3 model achieves the best performance with 79.35% accuracy and a 0.7497 Macro F1-score. These results highlight the benefit of combining pretrained slice-based representations with volumetric modeling for robust multi-source medical imaging analysis. Code is available at this https URL
The Hawkes process models self-exciting event streams, requiring a strictly non-negative and stable stochastic intensity. Standard identification methods enforce these properties using non-negative causal bases, yielding conservative parameter constraints and severely ill-conditioned least-squares Gram matrices at higher model orders. To overcome this, we introduce a system-theoretic identification framework utilizing the sign-indefinite orthonormal Laguerre basis, which guarantees a well-conditioned asymptotic Gram matrix independent of model order. We formulate a constrained least-squares problem enforcing the necessary and sufficient conditions for positivity and stability. By constructing the empirical Gram matrix via a Lyapunov equation and representing the constraints through a sum-of-squares trace equivalence, the proposed estimator is efficiently computed via semidefinite programming.
This paper presents a new data-driven robust predictive control law, for linear systems affected by unknown-but-bounded process disturbances. A sequence of input-state data is used to construct a suitable uncertainty representation based on interval matrices. Then, the effect of uncertainty along the prediction horizon is bounded through an operator leveraging matrix zonotopes. This yields a tube that is exploited within a variable-horizon optimal control problem, to guarantee robust satisfaction of state and input constraints. The resulting data-driven predictive control scheme is proven to be recursively feasible and practically stable. A numerical example shows that the proposed approach compares favorably to existing methods based on zonotopic tubes.
Physics-Informed Neural Networks (PINNs) have demonstrated that embedding physical laws directly into the learning objective can significantly enhance the efficiency and physical consistency of neural network solutions. Inspired by this principle, we ask a natural question: can physical information be similarly embedded into the fitness function of evolutionary algorithms? In this work, we propose Physics-Informed Evolution (PIE), a novel framework that incorporates physical information derived from governing physical laws into the evolutionary fitness landscape, bridging the long-standing connection between learning and evolution in artificial intelligence. As a concrete instantiation, we apply PIE to quantum control problems governed by the Schrödinger equation, where the goal is to find optimal control fields that drive quantum systems from initial states to desired target states. We validate PIE on three representative quantum control benchmarks: state preparation in V-type three-level systems, entangled state generation in superconducting quantum circuits, and two-atom cavity QED systems, under varying levels of system uncertainty. Extensive comparisons against ten single-objective and five multi-objective evolutionary baselines demonstrate that PIE consistently achieves higher fidelity, lower state deviation, and improved robustness. Our results suggest that the physics-informed principle extends naturally beyond neural network training to the broader domain of evolutionary computation.
Inspired by the recent successes of Inverse Optimization (IO) across various application domains, we propose a novel offline Reinforcement Learning (ORL) algorithm for continuous state and action spaces, leveraging the convex loss function called ``sub-optimality loss'' from the IO literature. To mitigate the distribution shift commonly observed in ORL problems, we further employ a robust and non-causal Model Predictive Control (MPC) expert steering a nominal model of the dynamics using in-hindsight information stemming from the model mismatch. Unlike the existing literature, our robust MPC expert enjoys an exact and tractable convex reformulation. In the second part of this study, we show that the IO hypothesis class, trained by the proposed convex loss function, enjoys ample expressiveness and reliably recovers teacher behavior in MuJoCo benchmarks. The method achieves competitive results compared to widely-used baselines in sample-constrained settings, despite using orders of magnitude fewer parameters. To facilitate the reproducibility of our results, we provide an open-source package implementing the proposed algorithms and the experiments. The code is available at this https URL.
The accuracy and robustness of machine learning models against adversarial attacks are significantly influenced by factors such as training data quality, model architecture, the training process, and the deployment environment. In recent years, duplicated data in training sets, especially in language models, has attracted considerable attention. It has been shown that deduplication enhances both training performance and model accuracy in language models. While the importance of data quality in training image classifier Deep Neural Networks (DNNs) is widely recognized, the impact of duplicated images in the training set on model generalization and performance has received little attention. In this paper, we address this gap and provide a comprehensive study on the effect of duplicates in image classification. Our analysis indicates that the presence of duplicated images in the training set not only negatively affects the efficiency of model training but also may result in lower accuracy of the image classifier. This negative impact of duplication on accuracy is particularly evident when duplicated data is non-uniform across classes or when duplication, whether uniform or non-uniform, occurs in the training set of an adversarially trained model. Even when duplicated samples are selected in a uniform way, increasing the amount of duplication does not lead to a significant improvement in accuracy.
Advances in AI and Robotics have accelerated significant initiatives in agriculture, particularly in the areas of robot navigation and 3D digital twin creation. A significant bottleneck impeding this progress is the critical lack of "in-the-wild" datasets that capture the full complexities of real farmland, including non-rigid motion from wind, drastic illumination variance, and morphological changes resulting from growth. This data gap fundamentally limits research on robust AI models for autonomous field navigation and scene-level dynamic 3D reconstruction. In this paper, we present AgriChrono, a modular robotic data collection platform and multi-modal dataset designed to capture these dynamic farmland conditions. Our platform integrates multiple sensors, enabling remote, time-synchronized acquisition of RGB, Depth, LiDAR, IMU, and Pose data for efficient and repeatable long-term data collection in real-world agricultural environments. We successfully collected 18TB of data over one month, documenting the entire growth cycle of Canola under diverse illumination conditions. We benchmark state-of-the-art 3D reconstruction methods on AgriChrono, revealing the profound challenge of reconstructing high-fidelity, dynamic non-rigid scenes in such farmland settings. This benchmark validates AgriChrono as a critical asset for advancing model generalization, and its public release is expected to significantly accelerate research and development in precision agriculture. The code and dataset are publicly available at: this https URL
We present an inverse dynamic game-based algorithm to learn parametric constraints from a given dataset of local Nash equilibrium interactions between multiple agents. Specifically, we introduce mixed-integer linear programs (MILP) encoding the Karush-Kuhn-Tucker (KKT) conditions of the interacting agents, which recover constraints consistent with the local Nash stationarity of the interaction demonstrations. We establish theoretical guarantees that our method learns inner approximations of the true safe and unsafe sets. We also use the interaction constraints recovered by our method to design motion plans that robustly satisfy the underlying constraints. Across simulations and hardware experiments, our methods accurately inferred constraints and designed safe interactive motion plans for various classes of constraints, both convex and non-convex, from interaction demonstrations of agents with nonlinear dynamics.
Audio is the primary modality for human communication and has driven the success of Automatic Speech Recognition (ASR) technologies. However, such audio-centric systems inherently exclude individuals who are deaf or hard of hearing. Visual alternatives such as sign language and lip reading offer effective substitutes, and recent advances in Sign Language Translation (SLT) and Visual Speech Recognition (VSR) have improved audio-less communication. Yet, these modalities have largely been studied in isolation, and their integration within a unified framework remains underexplored. In this paper, we propose the first unified framework capable of handling diverse combinations of sign language, lip movements, and audio for spoken-language text generation. We focus on three main objectives: (i) designing a unified, modality-agnostic architecture capable of effectively processing heterogeneous inputs; (ii) exploring the underexamined synergy among modalities, particularly the role of lip movements as non-manual cues in sign language comprehension; and (iii) achieving performance on par with or superior to state-of-the-art models specialized for individual tasks. Building on this framework, we achieve performance on par with or better than task-specific state-of-the-art models across SLT, VSR, ASR, and Audio-Visual Speech Recognition. Furthermore, our analysis reveals a key linguistic insight: explicitly modeling lip movements as a distinct modality significantly improves SLT performance by capturing critical non-manual cues.
Topology optimization is a promising approach for mitigating congestion and managing changing grid conditions, but it is computationally challenging and requires approximations. Conventional distribution factors like PTDFs and LODFs, based on DC power flow, fail to capture voltage variations, reactive power, and losses, thereby limiting their use in detailed optimization tasks such as busbar splitting. This paper introduces generalized distribution factors derived from a voltage-sensitive linearization of the full AC power flow equations. The proposed formulation accurately reflects reactive power flows, Ohmic losses, and voltage deviations while remaining computationally efficient. We derive and evaluate generalized PTDFs, LODFs, and topology modification factors using matrix identities. We discuss potential applications including voltage-aware N-1 security analysis and topology optimization with a focus on busbar splitting. Numerical experiments demonstrate close agreement with full AC solutions, significantly outperforming the traditional DC approximation.
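For contrast with the generalized factors, the classical DC PTDF baseline can be computed in a few lines for a 3-bus network: invert the reduced susceptance matrix and map unit injections (withdrawn at the slack) to line flows. The function and the toy network are illustrative assumptions, not the paper's formulation:

```python
def ptdf_3bus(b12, b13, b23):
    """Classical DC PTDFs for a 3-bus network with slack at bus 1:
    the per-unit flow induced on each line by injecting 1 MW at a bus
    and withdrawing it at the slack. The generalized factors extend
    this linearization to reactive power, losses, and voltages;
    this is only the lossless DC baseline."""
    # Reduced susceptance matrix over non-slack buses 2 and 3 (theta_1 = 0).
    B = [[b12 + b23, -b23], [-b23, b13 + b23]]
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    X = [[B[1][1] / det, -B[0][1] / det], [-B[1][0] / det, B[0][0] / det]]
    ptdf = {}
    for bus, col in (("bus2", 0), ("bus3", 1)):
        t2, t3 = X[0][col], X[1][col]  # bus angles for a unit injection
        ptdf[("1-2", bus)] = b12 * (0.0 - t2)
        ptdf[("1-3", bus)] = b13 * (0.0 - t3)
        ptdf[("2-3", bus)] = b23 * (t2 - t3)
    return ptdf

p = ptdf_3bus(10.0, 10.0, 10.0)  # identical line susceptances
print(round(p[("1-2", "bus2")], 3))  # -> -0.667: 2/3 of the MW flows 2 -> 1
```

In the symmetric case the familiar 2/3-vs-1/3 split appears: injecting at bus 2, two thirds of the power takes the direct line to the slack and one third the detour via bus 3, which DC factors capture but voltage and loss effects do not.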
This Review consolidates publicly available aerial wireless measurement datasets collected using AERPAW. We organize signal-level, power-level, and KPI-level datasets under a unified taxonomy, harmonize metadata, and provide verified access with reproducible post-processing scripts. The curated catalog supports propagation modeling, machine learning, localization, and system-level evaluation for 5G-Advanced and emerging 6G aerial networks.
Reinforcement learning (RL), while powerful and expressive, can often prioritize performance at the expense of safety. Yet safety violations can lead to catastrophic outcomes in real-world deployments. Control Barrier Functions (CBFs) offer a principled method to enforce dynamic safety -- traditionally deployed online via safety filters. While the result is safe behavior, the fact that the RL policy does not have knowledge of the CBF can lead to conservative behaviors. This paper proposes CBF-RL, a framework for generating safe behaviors with RL by enforcing CBFs in training. CBF-RL has two key attributes: (1) minimally modifying a nominal RL policy to encode safety constraints via a CBF term, and (2) safety filtering of the policy rollouts in training. Theoretically, we prove that continuous-time safety filters can be deployed via closed-form expressions on discrete-time roll-outs. Practically, we demonstrate that CBF-RL internalizes the safety constraints in the learned policy -- both enforcing safer actions and biasing towards safer rewards -- enabling safe deployment without the need for an online safety filter. We validate our framework through ablation studies on navigation tasks and on the Unitree G1 humanoid robot, where CBF-RL enables safer exploration, faster convergence, and robust performance under uncertainty, enabling the humanoid robot to avoid obstacles and climb stairs safely in real-world settings without a runtime safety filter.
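The minimal-modification idea is easiest to see in the simplest setting: for a scalar integrator with a linear barrier, the CBF-QP safety filter reduces to a closed-form clamp. This toy example (my own construction, far from the paper's humanoid setup) shows how a filter alters the nominal action only near the constraint boundary:

```python
def cbf_filter(u_nom, x, x_max=1.0, alpha=2.0):
    """Closed-form CBF safety filter for the scalar integrator xdot = u
    with barrier h(x) = x_max - x. The CBF condition
        hdot >= -alpha * h   i.e.   -u >= -alpha * (x_max - x)
    reduces to u <= alpha * (x_max - x), so the QP's minimally-modified
    action is a simple clamp of the nominal command."""
    return min(u_nom, alpha * (x_max - x))

# A nominal policy pushing hard toward the boundary gets capped...
print(round(cbf_filter(5.0, 0.9), 3))  # -> 0.2 (clamped near x_max)
# ...while already-safe commands pass through unchanged.
print(cbf_filter(0.1, 0.0))  # -> 0.1
```

CBF-RL applies exactly this kind of closed-form filtering to policy rollouts during training, so the learned policy absorbs the clamp and no runtime filter is needed at deployment.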
Group Relative Policy Optimization (GRPO) has shown promise in aligning image and video generative models with human preferences. However, applying it to modern flow matching models is challenging because of its deterministic sampling paradigm. Current methods address this issue by converting Ordinary Differential Equations (ODEs) to Stochastic Differential Equations (SDEs), which introduce stochasticity. However, this SDE-based GRPO suffers from issues of inefficient credit assignment and incompatibility with high-order solvers for fewer-step sampling. In this paper, we first reinterpret existing SDE-based GRPO methods from a distance optimization perspective, revealing their underlying mechanism as a form of contrastive learning. Based on this insight, we propose Neighbor GRPO, a novel alignment algorithm that completely bypasses the need for SDEs. Neighbor GRPO generates a diverse set of candidate trajectories by perturbing the initial noise conditions of the ODE and optimizes the model using a softmax distance-based surrogate leaping policy. We establish a theoretical connection between this distance-based objective and policy gradient optimization, rigorously integrating our approach into the GRPO framework. Our method fully preserves the advantages of deterministic ODE sampling, including efficiency and compatibility with high-order solvers. We further introduce symmetric anchor sampling for computational efficiency and group-wise quasi-norm reweighting to address reward flattening. Extensive experiments demonstrate that Neighbor GRPO significantly outperforms SDE-based counterparts in terms of training cost, convergence speed, and generation quality.
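The group-relative baseline at the heart of GRPO can be sketched independently of the SDE-vs-ODE question: each group of candidates (in Neighbor GRPO, samples decoded from perturbed initial noises of the ODE) is scored, and rewards are normalized within the group. A minimal sketch of that normalization; the paper's softmax distance-based surrogate, which replaces the SDE likelihood term, is not shown:

```python
import numpy as np

def group_relative_advantages(rewards, eps=1e-8):
    """Standard GRPO advantage: center and scale each reward
    against its own group's statistics."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

# One group = candidates generated from neighboring initial noises.
adv = group_relative_advantages([0.2, 0.5, 0.9, 0.4])
print(np.round(adv, 3))   # best candidate gets the largest positive advantage
```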
Generating accurate circuit schematics from high-level natural language descriptions remains a persistent challenge in electronic design automation (EDA), as large language models (LLMs) frequently hallucinate components, violate strict physical constraints, and produce non-machine-readable outputs. To address this, we present CircuitLM, a multi-agent pipeline that translates user prompts into structured, visually interpretable $\texttt{CircuitJSON}$ schematics. The framework mitigates hallucination and ensures physical viability by grounding generation in a curated, embedding-powered component knowledge base through five sequential stages: (i) component identification, (ii) canonical pinout retrieval, (iii) chain-of-thought reasoning, (iv) JSON schematic synthesis, and (v) interactive force-directed visualization. We evaluate the system on a dataset of 100 unique circuit-design prompts using five state-of-the-art LLMs. To systematically assess performance, we deploy a rigorous dual-layered evaluation methodology: a deterministic Electrical Rule Checking (ERC) engine categorizes topological faults by strict severity (Critical, Major, Minor, Warning), while an LLM-as-a-judge meta-evaluator identifies complex, context-aware design flaws that bypass standard rule-based checkers. Ultimately, this work demonstrates how targeted retrieval combined with deterministic and semantic verification can bridge natural language and structurally viable, schematic-ready hardware designs, enabling safe circuit prototyping. Our code and data will be made public.
Vision Language Models (VLMs) have advanced perception in autonomous driving (AD), but they remain vulnerable to adversarial threats. These risks range from localized physical patches to imperceptible global perturbations. Existing defense methods for VLMs remain limited and often fail to reconcile robustness with clean-sample performance. To bridge these gaps, we propose NutVLM, a comprehensive self-adaptive defense framework designed to secure the entire perception-decision lifecycle. Specifically, we first employ NutNet++ as a sentinel, which is a unified detection-purification mechanism. It identifies benign samples, local patches, and global perturbations through three-way classification. Subsequently, localized threats are purified via efficient grayscale masking, while global perturbations trigger Expert-guided Adversarial Prompt Tuning (EAPT). Instead of the costly parameter updates of full-model fine-tuning, EAPT generates "corrective driving prompts" via gradient-based latent optimization and discrete projection. These prompts refocus the VLM's attention without requiring exhaustive full-model retraining. Evaluated on the Dolphins benchmark, our NutVLM yields a 4.89% improvement in overall metrics (e.g., Accuracy, Language Score, and GPT Score). These results validate NutVLM as a scalable security solution for intelligent transportation. Our code is available at this https URL.
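The grayscale-masking purification step for localized patches can be sketched as a simple overwrite of the detected region. The bounding-box interface below is a hypothetical stand-in for the detector's output, not NutNet++'s actual API:

```python
import numpy as np

def grayscale_mask(image, box, value=0.5):
    """Purify a detected adversarial patch by overwriting its region
    with neutral gray. `box` = (y0, y1, x0, x1), hypothetical detector output."""
    y0, y1, x0, x1 = box
    out = image.copy()
    out[y0:y1, x0:x1, :] = value
    return out

img = np.random.rand(8, 8, 3)            # toy image in [0, 1]
clean = grayscale_mask(img, (2, 5, 2, 5))
print(clean[3, 3])                        # masked pixel is neutral gray
```

The appeal of this route over inpainting-style purification is cost: it is a constant-time operation, leaving the heavier EAPT path for global perturbations only.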
Networks of interdependent industrial assets (clients) are tightly coupled through physical processes and control inputs, raising a key question: how would the output of one client change if another client were operated differently? This is difficult to answer because client-specific data are high-dimensional and private, making centralization of raw data infeasible. Each client also maintains proprietary local models that cannot be modified. We propose a federated framework for causal representation learning in state-space systems that captures interdependencies among clients under these constraints. Each client maps high-dimensional observations into low-dimensional latent states that disentangle intrinsic dynamics from control-driven influences. A central server estimates the global state-transition and control structure. This enables decentralized counterfactual reasoning where clients predict how outputs would change under alternative control inputs at others while only exchanging compact latent states. We prove convergence to a centralized oracle and provide privacy guarantees. Our experiments demonstrate scalability and accurate cross-client counterfactual inference on synthetic and real-world industrial control system datasets.
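Counterfactual replay in a linear latent state-space model can be illustrated in a few lines: roll the same latent initial condition forward under factual and alternative control sequences. The matrices below are made up for illustration; in the framework they would be estimated federatedly by the server:

```python
import numpy as np

# Hypothetical shared latent dynamics:  x_{t+1} = A x_t + B u_t,  y_t = C x_t
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

def rollout(x0, controls):
    """Predict outputs under a given control sequence (counterfactual replay)."""
    x, ys = x0, []
    for u in controls:
        ys.append((C @ x).item())
        x = A @ x + B @ u
    return ys

x0 = np.array([1.0, 0.0])
factual = rollout(x0, [np.array([0.0])] * 3)
counterfactual = rollout(x0, [np.array([1.0])] * 3)   # "what if the other client pushed harder?"
print(factual, counterfactual)
```

Only the compact latent state `x0` and the shared `(A, B, C)` need to cross client boundaries, which is the privacy argument sketched in the abstract.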
Non-prehensile planar manipulation, including pushing and press-and-slide, is critical for diverse robotic tasks, but notoriously challenging due to hybrid contact mechanics, under-actuation, and asymmetric friction limits that traditionally necessitate computationally expensive iterative control. In this paper, we propose a mode-aware framework for planar manipulation with one or two robotic arms based on contact topology selection and reduced-order kinematic modeling. Our core insight is that complex wrench-twist limit surface mechanics can be abstracted into a discrete library of physically intuitive models. We systematically map various single-arm and bimanual contact topologies to simple non-holonomic formulations, e.g. unicycle for simplified press-and-slide motion. By anchoring trajectory generation to these reduced-order models, our framework computes the required object wrench and distributes feasible, friction-bounded contact forces via a direct algebraic allocator. We incorporate manipulator kinematics to ensure long-horizon feasibility and demonstrate our fast, optimization-free approach in simulation across diverse single-arm and bimanual manipulation tasks. Supplementary videos and additional information are available at: this https URL
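The unicycle abstraction mentioned above reduces to standard non-holonomic kinematics. A minimal Euler rollout under a constant twist (v, omega), purely illustrative of the reduced-order model, not the paper's full allocator:

```python
import math

def unicycle_step(x, y, theta, v, omega, dt):
    """One Euler step of the non-holonomic unicycle model that the
    simplified press-and-slide contact topology is mapped onto."""
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

# Roll the object pose forward under a constant commanded twist.
state = (0.0, 0.0, 0.0)
for _ in range(100):
    state = unicycle_step(*state, v=0.2, omega=0.5, dt=0.01)
print(tuple(round(s, 3) for s in state))   # object traces a circular arc
```

Anchoring trajectory generation to a model this simple is what lets the framework stay optimization-free: the required object wrench follows algebraically from the reference twist.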
We study supervisory switching control for partially-observed linear dynamical systems. The objective is to identify and deploy the best controller for the unknown system by periodically selecting among a collection of $N$ candidate controllers, some of which may destabilize the underlying system. While classical estimator-based supervisory control guarantees asymptotic stability, it lacks quantitative finite-time performance bounds. Conversely, current non-asymptotic methods in both online learning and system identification require restrictive assumptions that are incompatible in a control setting, such as system stability, which preclude testing potentially unstable controllers. To bridge this gap, we propose a novel, non-asymptotic analysis of supervisory control that adapts multi-armed bandit algorithms to a control-theoretic setting. The proposed data-driven algorithm evaluates candidate controllers via scoring criteria that leverage system observability to isolate the effects of state history, enabling both detection of destabilizing controllers and accurate system identification. We present two algorithmic variants with dimension-free, finite-time guarantees, where each identifies the most suitable controller in $\mathcal{O}(N \log N)$ steps, while simultaneously achieving finite $L_2$-gain with respect to system disturbances.
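The bandit view of controller selection can be illustrated with a toy UCB1 loop over $N = 3$ candidate controllers. The scoring criterion here is a hypothetical deterministic stand-in, not the paper's observability-based score:

```python
import math

def ucb_select(scores, counts, t, c=2.0):
    """UCB1 over N candidate controllers: favor high empirical score
    plus an exploration bonus for rarely tried controllers."""
    best, best_val = 0, -float("inf")
    for i, (s, n) in enumerate(zip(scores, counts)):
        val = float("inf") if n == 0 else s / n + math.sqrt(c * math.log(t) / n)
        if val > best_val:
            best, best_val = i, val
    return best

scores, counts = [0.0, 0.0, 0.0], [0, 0, 0]
true_perf = [0.2, 0.9, 0.5]          # hypothetical per-controller quality
for t in range(1, 201):
    i = ucb_select(scores, counts, t)
    scores[i] += true_perf[i]        # deterministic toy feedback
    counts[i] += 1
print(counts)                        # the best controller dominates the pulls
```

The real difficulty the paper addresses, which this sketch ignores, is that "pulling" a destabilizing controller has lasting effects on the state, so the scoring criterion must isolate the state history rather than treat rounds as independent.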
Mechanistic interpretability has transformed the analysis of transformer circuits by decomposing model behavior into competing algorithms, identifying phase transitions during training, and deriving closed-form predictions for when and why strategies shift. However, this program has remained largely confined to sequence-prediction architectures, leaving embodied control systems without comparable mechanistic accounts. Here we extend this framework to sensorimotor-cognitive development, using infant motor learning as a model system. We show that foundational inductive biases give rise to causal control circuits, with learned gating mechanisms converging toward theoretically motivated uncertainty thresholds. The resulting dynamics reveal a clean phase transition in the arbitration gate whose commitment behavior is well described by a closed-form exponential moving-average surrogate. We identify context window $k$ as the critical parameter governing circuit formation: below a minimum threshold ($k \leq 4$) the arbitration mechanism cannot form; above it ($k \geq 8$), gate confidence scales asymptotically as $\log k$. A two-dimensional phase diagram further reveals task-demand-dependent route arbitration consistent with the prediction that prospective execution becomes advantageous only when prediction error remains within the task tolerance window. Together, these results provide a mechanistic account of how reactive and prospective control strategies emerge and compete during learning. More broadly, this work sharpens mechanistic accounts of cognitive development and provides principled guidance for the design of interpretable embodied agents.
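The closed-form exponential-moving-average surrogate for the arbitration gate can be sketched as follows. The tolerance and error trace below are illustrative, not the paper's fitted values:

```python
def ema_gate(errors, beta=0.9, tol=0.1):
    """EMA surrogate for the arbitration gate: commit to the prospective
    route once the smoothed prediction error falls within tolerance."""
    m, commits = None, []
    for e in errors:
        m = e if m is None else beta * m + (1 - beta) * e
        commits.append(m <= tol)
    return commits

# Toy trace: prediction error decays as the internal model improves.
errors = [0.5 * 0.8 ** t for t in range(30)]
gate = ema_gate(errors)
print(gate.index(True))   # first step at which the gate commits
```

Because the smoothed error here decreases monotonically, the gate's commitment is a clean one-way transition, which is the qualitative phase-transition behavior the abstract describes.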