Weaving ramps are critical bottlenecks in highway networks due to conflicting traffic flows and complex interactions among heterogeneous vehicle types. In mixed-autonomy settings, the presence of controllable autonomous vehicles (AVs) introduces new opportunities to influence system-level outcomes, yet the structural impact of such control remains poorly understood. This paper develops a unified equilibrium framework to capture, predict, and optimize aggregate lane-choice behavior in weaving ramps with heterogeneous vehicle populations. We first formulate a Wardrop-based model capturing the selfish behavior of human-driven vehicles (HDVs) and establish existence, uniqueness, and validity of the resulting equilibrium. We then introduce a Stackelberg--Wardrop formulation in which AVs act as strategic leaders optimizing system performance, while HDVs respond through equilibrium adaptation. The framework is further generalized to incorporate heterogeneous behavioral preferences of HDVs and AVs via a Social Value Orientation (SVO) model. Our analysis reveals a fundamental structural property of mixed-autonomy traffic systems: under selfish HDV behavior, the impact of AV penetration is inherently non-increasing, exhibiting plateau regions where performance remains unchanged and improves only at critical thresholds. These results provide principled guidance for the design of AV control and incentive mechanisms in the presence of selfish human behavior, and demonstrate how strategically controlled autonomous agents can be deployed to induce system-level efficiency gains in mixed-autonomy transportation networks.
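The gap between selfish (Wardrop) routing and a system-optimal assignment, which motivates using AVs as strategic leaders, can be made concrete on a classic two-route (Pigou-style) toy network; the latencies and demand below are illustrative assumptions, not the paper's weaving-ramp model:

```python
import numpy as np

# Two parallel routes, total demand 1.
# Route 1 latency: l1(f) = f (congestible); Route 2 latency: l2(f) = 1 (constant).
# Wardrop equilibrium: no driver can lower their own latency by switching routes,
# so traffic piles onto route 1 until l1 = 1 = l2, i.e. f1 = 1.
f1_eq = 1.0
total_cost_eq = f1_eq * f1_eq + (1 - f1_eq) * 1.0  # total latency = 1.0

# System optimum: minimize f1 * l1(f1) + (1 - f1) * l2 = f1^2 + (1 - f1)
f1_grid = np.linspace(0.0, 1.0, 100001)
costs = f1_grid**2 + (1 - f1_grid)
f1_so = f1_grid[np.argmin(costs)]       # 0.5
total_cost_so = costs.min()             # 0.75
poa = total_cost_eq / total_cost_so     # price of anarchy = 4/3
```

The 4/3 gap is exactly the inefficiency that a Stackelberg leader (here, a controlled AV fleet) can partially close by committing part of the flow to the under-used route.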
Computed Tomography (CT) is a widely used imaging modality in medical and industrial applications. To limit radiation exposure and measurement time, there is a growing interest in sparse-view CT, where the number of projection views is significantly reduced. Deep neural networks have shown great promise in improving reconstruction quality in sparse-view CT, especially generative diffusion models. However, these methods struggle to scale to large 3D volumes for several reasons: (i) the high memory and computational requirements of 3D models, (ii) the lack of large 3D training datasets, and (iii) the inconsistencies across slices when using 2D models independently on each slice. We overcome these limitations and scale diffusion-based sparse-view CT reconstruction to large 3D volumes by combining conditional diffusion with explicit data consistency. We propose Conditional Diffusion Posterior Alignment (CDPA) to enable scalable 3D sparse-view CT reconstruction. A 2D U-Net diffusion model is conditioned on an initial 3D reconstruction to improve inter-slice consistency, combined with data-consistency alignment to match measured projections. Experiments on synthetic and real Cone Beam CT (CBCT) data show state-of-the-art performance, with ablations that confirm the synergistic effects of the proposed pipeline. Finally, we show that the same principles also strengthen fast denoising U-Nets, yielding near-diffusion quality at a fraction of the computational cost.
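The explicit data-consistency idea, pulling a candidate reconstruction toward the measured projections, can be sketched as gradient (Landweber) iterations on the projection residual; the dense operator `A`, the sizes, and the step-size rule below are toy assumptions, not the paper's CBCT geometry or pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100))           # toy "projection" operator (40 views << 100 voxels)
x_true = rng.normal(size=100)
y = A @ x_true                           # measured projections

x = np.zeros(100)                        # stand-in for a network/diffusion output
step = 1.0 / np.linalg.norm(A, 2) ** 2   # step size from the largest singular value
for _ in range(500):
    x = x - step * A.T @ (A @ x - y)     # gradient step on ||Ax - y||^2

residual = np.linalg.norm(A @ x - y)     # driven toward zero: x now explains the data
```

In a diffusion pipeline such steps are interleaved with denoising updates, so the prior resolves the null-space ambiguity that the underdetermined system leaves open.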
Integrated sensing and communication (ISAC) is poised to be a defining feature of 6G networks, promising to transform cellular base stations (BSs) into ubiquitous radar sensors. However, a significant gap exists between the theoretical promise of ISAC and the commercial reality of legacy cellular communication infrastructure. Existing communication networks are constrained by fragmented spectrum, blockage-prone environments, and cost-prohibitive high-rate analog-to-digital converters (ADCs). These limitations stifle the high-resolution sensing required for emerging applications. This article advocates a shift from dependence on physical resources to computational synthesis and introduces a unified full-stack virtualization framework that upgrades legacy networks with minimal hardware changes, spanning signal generation, propagation, and acquisition. Specifically, we virtualize signal generation via space-time-frequency synthesis across distributed BSs to synthesize a larger effective aperture and a wider effective bandwidth. We then virtualize signal propagation by leveraging environmental multipath and digital maps to reinterpret reflections as massive virtual arrays. Finally, we virtualize signal acquisition using sub-Nyquist strategies to bypass sampling bottlenecks. We demonstrate that by trading computation for hardware, commercial networks can achieve fine-grained sensing without expensive retrofitting.
Accurate forecasting of electric load and renewable generation is essential for reliable and cost-effective power system operations. Recent advances in transformer-based and foundation machine-learning models, driven by large-scale pretraining, increased data and computation, and architectural innovations, have shown promise in time series forecasting across multiple domains. However, their application to power system forecasting tasks remains largely underexplored. This work presents a comprehensive empirical benchmark of state-of-the-art time series foundation models, transformer architectures, and deep learning baselines for solar, wind, and load forecasting using the high-resolution ARPA-E PERFORM dataset for the Electric Reliability Council of Texas (ERCOT) grid. Eight core capabilities are assessed, including zero-shot performance, fine-tuning efficiency, multivariate input and output handling, horizon sensitivity, generalization to unseen sites, probabilistic forecasting, and context window effects. Models evaluated include TimesFM, Chronos-Bolt, Moirai-L, MOMENT, Tiny Time Mixer, Temporal Fusion Transformer, PatchTST, TimeXer, LSTM, and CNN. The manuscript aims to provide clear guidance on when foundation models can provide enhanced renewable and load forecasting capabilities and when other approaches remain the more practical choice for power system operations.
We propose a hybrid reinforcement and self-supervised learning framework for accelerating generalized Benders decomposition (GBD). In this framework, a graph-based reinforcement learning agent operates on a bipartite representation of the master problem and, together with a verification mechanism, determines the integer variable assignments that solve the master problem. These assignments are then used as inputs to a KKT-informed neural network, trained via self-supervision to predict primal-dual solutions that approximately satisfy the Karush-Kuhn-Tucker conditions of the subproblem. The predicted solutions are used to construct Benders cuts directly. The framework is evaluated on a mixed-integer nonlinear programming case study, where it achieves a 57.5% reduction in solution time relative to classical GBD while consistently recovering optimal solutions across all test instances.
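The cut-generation loop underlying Benders-type decomposition can be sketched on a tiny toy problem; the instance, the closed-form subproblem duals, and the brute-force master below are illustrative assumptions, not the paper's MINLP case study or its learning components:

```python
import numpy as np

# Toy Benders loop:  min  0.5*y + x   s.t.  x >= 7 - y,  x >= 0,  y in {0,...,10}.
# Subproblem (y fixed): min x s.t. x >= 7 - y, x >= 0  ->  value max(7 - y, 0),
# with dual multiplier lam = 1 when the coupling constraint is active, else 0.
# Each dual yields an optimality cut  theta >= lam * (7 - y), valid for all y.

Y = range(11)
cuts = []                    # list of dual multipliers lam defining the cuts
UB, LB, best_y = np.inf, -np.inf, None
while UB - LB > 1e-9:
    def master_obj(y):       # master: cost plus tightest cut on theta (theta >= 0)
        theta = max([0.0] + [lam * (7 - y) for lam in cuts])
        return 0.5 * y + theta
    y = min(Y, key=master_obj)          # brute force over the small integer set
    LB = master_obj(y)                  # master value is a lower bound
    sub_val = max(7 - y, 0.0)           # solve the subproblem at this y
    lam = 1.0 if 7 - y > 0 else 0.0     # its dual solution
    UB = min(UB, 0.5 * y + sub_val)     # feasible solution -> upper bound
    if UB - LB > 1e-9:
        cuts.append(lam)                # add the Benders cut and iterate
    else:
        best_y = y                      # bounds met: y is optimal (cost 3.5 at y = 7)
```

The paper's framework accelerates exactly this loop: the agent proposes master assignments and the KKT-informed network predicts the primal-dual pair from which cuts are built.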
Sub-gram flapping-wing flying insect robots (FIRs) are challenging to model because of mechanical complexity in their wings, unsteady aerodynamic flow, and the difficulty of making precise measurements at a small scale. Coupling effects between roll and pitch torque actuation have not previously been measured because a two-axis sensor that is sensitive enough has not been realized. To address this shortcoming, we introduce a microfabricated gimbal design capable of precisely and simultaneously measuring roll and pitch torques as well as thrust. We then used it to measure the extent to which a pitch torque command affects roll torque and vice versa on a 180 mg piezo-actuated flapping-wing flying platform. Our results show a high coefficient of determination in the linear regression for both pitch (0.95) and roll (0.98) and low cross-correlation coefficients (-0.001 and -0.085, respectively) across the full range of simultaneous torque commands, indicating negligible cross-axis coupling. Similarly, thrust force deviates by a maximum of only 5.8% from the mean thrust value. These results validate the assumption that pitch and roll can be considered independently in control and will inform future models of how inputs affect the aerodynamics of resonant flapping-wing systems.
Independent component analysis (ICA) estimates a demixing matrix that can recover statistically independent sources from linear mixtures. FastICA is a popular ICA algorithm due to its efficiency, but its performance strongly depends on a user-chosen nonlinear function matched to the source distribution. When the source distribution is unknown, this function must be guessed at, and incorrect guesses can lead to significant drops in performance. We remove the need to guess by estimating a suitable function directly from the observed data. Our experiments show that the separation error stays close to the best fixed choice across synthetic mixtures comprising heavy-tailed or discrete sources while retaining a FastICA-like runtime.
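For reference, the fixed-nonlinearity baseline that the data-driven estimate replaces can be written in a few lines of NumPy: FastICA with the standard tanh contrast, run on a synthetic two-source mixture (the mixing matrix and source distributions below are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
# two non-Gaussian sources: sub-Gaussian (uniform) and super-Gaussian (Laplace)
s = np.vstack([rng.uniform(-1, 1, n), rng.laplace(0, 1, n)])
x = np.array([[1.0, 0.6], [0.4, 1.0]]) @ s       # observed linear mixture

# center and whiten
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(x @ x.T / n)
z = (E * d**-0.5) @ E.T @ x

# one-unit FastICA fixed-point updates with g = tanh, deflation across units
W = np.zeros((2, 2))
for i in range(2):
    w = rng.normal(size=2)
    w /= np.linalg.norm(w)
    for _ in range(200):
        u = w @ z
        w_new = (z * np.tanh(u)).mean(axis=1) - (1 - np.tanh(u) ** 2).mean() * w
        w_new -= W[:i].T @ (W[:i] @ w_new)       # Gram-Schmidt deflation
        w_new /= np.linalg.norm(w_new)
        if abs(abs(w_new @ w) - 1) < 1e-10:      # converged (up to sign)
            w = w_new
            break
        w = w_new
    W[i] = w
y = W @ z

# each recovered component should match one source up to sign and scale
corr = np.abs(np.corrcoef(np.vstack([y, s]))[:2, 2:])
```

The proposed method keeps this fixed-point structure but learns the contrast function from the observed data instead of committing to tanh in advance.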
Mispronunciation Detection and Diagnosis (MDD) requires modeling fine-grained acoustic deviations. However, current ASR-derived MDD systems often face inherent limitations. In particular, CTC-based models favor sequence-level alignments that neglect transient mispronunciation cues, while explicit canonical priors bias predictions toward intended targets. To address these bottlenecks, we propose a prompt-free framework decoupling acoustic fidelity from canonical guidance. First, we introduce CROTTC, an acoustic model enforcing monotonic, frame-level alignment to accurately capture pronunciation deviations. Second, we implicitly inject mispronunciation information via the IF strategy under the knowledge transfer principle. Experiments show CROTTC-IF achieves a 71.77% F1-score on L2-ARCTIC and 71.70% F1-score on the Iqra'Eval2 leaderboard. With empirical analysis, we demonstrate that decoupling acoustics from explicit priors provides highly robust MDD.
Ensuring safety is a critical requirement for autonomous systems, yet providing formal guarantees for nominal controllers remains a significant challenge. In this paper, we propose a modular sampling-based safety filter to ensure the safety of arbitrary nominal control inputs. At each timestep, the filter evaluates the safety of the nominal input by leveraging control sequence samples generated via Stein Variational Model Predictive Control (SV-MPC). This approach approximates a safety-conditioned posterior distribution over control sequences, enabling the filter to effectively capture multimodal safe regions in complex, non-convex environments. The filter guarantees safety by overriding the nominal input when all sampled control sequence candidates are deemed unsafe. By leveraging the scenario approach, the proposed method provides a probabilistic guarantee on its restrictiveness. We validate the filter through collision avoidance tasks in both single- and multi-vehicle settings, demonstrating its efficacy in navigating cluttered environments where nominal controllers may fail.
Affine frequency division multiplexing (AFDM) has emerged as a promising integrated sensing and communication (ISAC) waveform due to its intrinsic chirp signalling nature. Nevertheless, practical AFDM-based ISAC still faces two key obstacles, namely, high ambiguity function (AF) sidelobes and high peak-to-average power ratio (PAPR). By leveraging the reserved chirp-subcarrier (RCS) symbols, we develop a unified AFDM waveform design framework for AF shaping and/or PAPR control. The proposed framework supports three modes: AF shaping via weighted integrated sidelobe level (ISL) minimization, PAPR minimization, and joint AF shaping and PAPR control under a prescribed PAPR constraint. To solve the formulated nonconvex problem and to accommodate the discrete-phase constraints on the optionally optimized pre-chirp parameters, a joint ISL-PAPR-discrete-phase majorization-minimization (JIPD-MM) algorithm is developed. Simulation results verify the effectiveness of the proposed framework under all three design modes. The joint mode further demonstrates that the prescribed PAPR constraint can be effectively satisfied while still achieving meaningful ISL reduction. These gains are also reflected in improved weak-target detectability under multitarget scenarios and lower bit error rate (BER) under power-amplifier (PA) nonlinearity.
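The PAPR metric itself is straightforward to compute; the constant-envelope chirp and the random multicarrier signal below are generic illustrations of the low- and high-PAPR regimes, not the paper's AFDM waveform:

```python
import numpy as np

def papr_db(sig):
    """Peak-to-average power ratio of a discrete-time complex baseband signal, in dB."""
    p = np.abs(sig) ** 2
    return 10 * np.log10(p.max() / p.mean())

N = 64
n = np.arange(N)
chirp = np.exp(1j * np.pi * n * n / N)   # single chirp: constant envelope -> 0 dB PAPR

# multicarrier signal (random QPSK across subcarriers): peaks well above the average
rng = np.random.default_rng(0)
qpsk = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
multicarrier = np.fft.ifft(qpsk)
```

The contrast motivates the joint design: chirp signalling keeps PAPR naturally low per subcarrier, but superposing many data-bearing chirp subcarriers reintroduces peaks that the RCS symbols are optimized to suppress.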
Using self-supervised learning (SSL) models has significantly improved performance for downstream speech tasks, surpassing the capabilities of traditional hand-crafted features. This study investigates the amalgamation of SSL models, with the aim of leveraging their individual strengths and refining the extracted features to achieve improved speech recognition models for naturalistic scenarios. Our research centers on the massive naturalistic Fearless Steps (FS) APOLLO resource, with particular focus on the FS Challenge (FSC) Phase-4 corpus, providing the inaugural analysis of this dataset. Additionally, we incorporate the CHiME-6 dataset to evaluate performance across diverse naturalistic speech scenarios. While exploring previously proposed Feature Refinement Loss and fusion methods, we found these methods to be less effective on the FSC Phase-4 corpus. To address this, we introduce a novel deep cross-attention (DCA) fusion method, designed to elevate performance, especially for the FSC Phase-4 corpus. Our objective is to foster the creation of superior FS APOLLO community resources, catering to the diverse needs of researchers across various disciplines. The proposed solution achieves an absolute +1.1% improvement in WER, providing effective meta-data creation for the massive FS APOLLO community resource.
Generative audio modeling has largely been fragmented into specialized tasks: text-to-speech (TTS), text-to-music (TTM), and text-to-audio (TTA), each operating under heterogeneous control paradigms. Unifying these modalities remains a fundamental challenge due to the intrinsic dissonance between structured semantic representations (speech/music) and unstructured acoustic textures (sound effects). In this paper, we introduce UniSonate, a unified flow-matching framework capable of synthesizing speech, music, and sound effects through a standardized, reference-free natural language instruction interface. To reconcile structural disparities, we propose a novel dynamic token injection mechanism that projects unstructured environmental sounds into a structured temporal latent space, enabling precise duration control within a phoneme-driven Multimodal Diffusion Transformer (MM-DiT). Coupled with a multi-stage curriculum learning strategy, this approach effectively mitigates cross-modal optimization conflicts. Extensive experiments demonstrate that UniSonate achieves state-of-the-art performance in instruction-based TTS (WER 1.47%) and TTM (SongEval Coherence 3.18), while maintaining competitive fidelity in TTA. Crucially, we observe positive transfer, where joint training on diverse audio data significantly enhances structural coherence and prosodic expressiveness compared to single-task baselines. Audio samples are available at this https URL.
In spite of the utility of 3-D electron back-scattered diffraction (EBSD) microscopy, the data collection process can be time-consuming due to serial sectioning. Hence, it is natural to look at other modalities, such as polarized light (PL) data, to accelerate EBSD data collection, supplemented with shared information. Complementarily, features in chaotic PL data could even be enriched with a handful of EBSD measurements. To inherently learn the complex dynamics between EBSD and PL and solve these inverse problems, we use an unconditional multimodal diffusion model, motivated by progress in diffusion models for inverse problems. Although trained only once, solely on synthetic data, our model generalizes strongly to real data that may be low-resolution, noisy, corrupted, and misregistered. With inference-time scaling, we show gains in performance on a variety of objectives, including grain boundary prediction, super-resolution, and denoising. With our model, we demonstrate that performance differs little from the full-resolution case when using only 25% of the EBSD data (1/4 the resolution) together with corrupted PL data.
While Large Audio Language Models (LALMs) achieve strong performance on short audio, they degrade on long-form inputs. This degradation is more severe in temporal awareness tasks, where temporal alignment becomes increasingly inaccurate as audio duration grows. We attribute these limitations to the lack of data, benchmarks, and modeling approaches tailored for long-form temporal awareness. To bridge this gap, we first construct LAT-Chronicle, a 1.2k-hour long-form audio dataset with temporal annotations across real-world scenarios. We further develop LAT-Bench, the first human-verified benchmark supporting audio up to 30 minutes while covering three core tasks: Dense Audio Caption, Temporal Audio Grounding, and Targeted Audio Caption. Leveraging these resources, we propose LAT-Audio, formulating temporal awareness as a progressive global-to-local reasoning paradigm. A global timeline is first constructed as an aligned temporal-semantic context, and the Think-With-Audio Chain-of-Thought (TWA-CoT) is then introduced to perform iterative reasoning by incorporating local audio information via tool use. Experiments show that LAT-Audio surpasses existing models on long-form audio temporal awareness tasks and improves robustness to input duration. We release the dataset, benchmark, and model to facilitate future research at this https URL.
Reconfigurable antenna systems (RASs), such as fluid antennas and movable antennas, are poised to play a pivotal role in sixth-generation (6G) systems by dynamically adapting the antenna elements for system performance enhancement. However, unlocking their full potential requires channel models that accurately capture the influence of antenna configurations on the radiation, propagation, and reception of signals. Existing channel models suffer from several limitations, such as neglecting polarization effects, being restricted to specific antenna types, or relying on oversimplified assumptions. In this paper, we propose a general electromagnetic (EM)-based channel model grounded in spherical vector wave expansion (SVWE). The proposed EM-based channel model captures the impact of antenna position and orientation on the channel gain, thereby making it particularly well-suited for RASs. The effectiveness and accuracy are validated through comparisons with commercial simulation software, demonstrating excellent agreement in predicted channel gains. Moreover, it is shown that antenna orientation is a critical factor governing communication performance, and that dynamically adjusting the antenna orientation yields up to 70% improvement in achievable communication rate compared to a fixed-antenna configuration.
Audio effects play an essential role in sound design. This research addresses the task of audio effect estimation, which aims to estimate the configuration of applied effects from a wet signal. Existing approaches to this problem can be categorized into predictive approaches, which use models pre-trained in a data-driven manner, and search-based approaches, which are based on wet signal reconstruction. In this study, we propose a novel approach that integrates these approaches: first, DNNs predict the dry signal and effect configuration, and then a search is performed based on wet signal reconstruction using these predictions. By estimating the dry signal in the prediction stage, it becomes possible to complement or improve the predictions using reconstruction similarity as an objective function. The experimental evaluation showed that methods based on the proposed approach outperformed the method solely based on the predictive approach. Furthermore, the findings suggest that the task division of predicting the effect type combination followed by the search-based estimation of order and parameters was the most effective across various metrics.
This paper addresses decentralized control of large-scale heterogeneous multi-agent systems subject to bounded external disturbances and limited communication, with the objective of satisfying cooperative Signal Temporal Logic (STL) specifications. The considered specifications involve spatiotemporal tasks that require collaboration among multiple agents, including agents beyond direct communication neighborhoods. To address the communication constraints, a $k$-hop Prescribed Performance State Observer ($k$-hop PPSO) is designed to enable each agent to estimate the states of agents up to $k$ communication hops away using only information from $1$-hop neighbors, while guaranteeing predefined performance bounds on the estimation errors. The estimation error bounds are explicitly incorporated into a reformulation of the spatial robustness of the STL specifications, yielding robustness measures that account for worst-case estimation uncertainty. Based on the modified robustness, a decentralized continuous-time feedback control law is designed to guarantee satisfaction of the STL specifications in the presence of bounded disturbances and estimation errors. The proposed framework provides formal correctness guarantees using only local information and limited communication. Numerical simulations illustrate the theoretical results.
Wave-domain processing is an emerging paradigm where signal processing operations are partially shifted from the digital to the electromagnetic (EM) domain. Leveraging reconfigurable EM devices, this approach aims to reduce complexity, energy consumption, and latency in next-generation wireless systems employing holographic MIMO. This paper establishes fundamental theorems on the controllability of generic reconfigurable EM devices, where wave processing is achieved through the dynamic configuration of passive scatterers. Specifically, we derive necessary and sufficient conditions for controllability as a function of geometry and mutual coupling between elements. Finally, we provide a detailed discussion and numerical results characterizing the interplay between the number of elements, physical size, degrees of freedom, and directivity.
This paper presents a novel control strategy for multi-agent shepherding of non-cohesive targets in obstacle-rich environments. Unlike previous approaches that assume cohesive flocking behavior, our method handles targets that interact only with nearby herders through repulsive forces and exhibit no inter-target coordination. Each herder employs a hybrid control policy that combines direct goal-oriented steering with obstacle-tangent maneuvering, enabling targets to circumnavigate obstacles while being guided toward a goal region. The herder dynamics integrate three key behaviors: return-to-goal motion when idle, target steering with adaptive directional control, and obstacle avoidance using both normal and tangential force components. Numerical simulations demonstrate superior performance compared to existing shepherding methods, achieving higher target confinement rates in cluttered environments. Experimental validation using TurtleBot4 herders and Osoyoo target robots in an indoor arena confirms the practical effectiveness of the proposed approach.
Depthwise separable convolutional (DSConv) layers have been successfully applied to deep learning (DL)-based joint source-channel coding (JSCC) schemes to reduce computational complexity. However, a systematic investigation of the layerwise and ratio-wise replacement of standard convolutional (Conv) layers with DSConv layers in JSCC systems for wireless image transmission remains largely unexplored. In this letter, we propose a configurable lightweight JSCC framework that incorporates a selective replacement strategy, enabling flexible substitution of standard Conv layers with DSConv layers at various layer positions and replacement ratios. By adjusting the proportion of layers replaced, we achieve different model compression levels and analyze their impact on reconstruction performance. Furthermore, we investigate how replacements at different encoder and decoder depths influence reconstruction quality under a fixed replacement ratio. Our results show that Conv-to-DSConv replacement at intermediate layers achieves a favorable complexity-performance trade-off, revealing layer-wise redundancy in DL-based JSCC systems. Extensive experiments further demonstrate that the proposed framework achieves substantial parameter reduction with only slight performance degradation, enabling flexible complexity-performance trade-offs for resource-constrained edge devices.
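The parameter savings from a Conv-to-DSConv replacement follow directly from counting weights: a depthwise pass (one k x k filter per input channel) plus a 1 x 1 pointwise pass replaces the full k x k x C_in x C_out kernel. The layer sizes below are arbitrary examples, not the letter's JSCC architecture:

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (bias terms ignored)."""
    return k * k * c_in * c_out

def dsconv_params(k, c_in, c_out):
    """Depthwise k x k (one filter per input channel) plus 1 x 1 pointwise."""
    return k * k * c_in + c_in * c_out

k, c_in, c_out = 3, 64, 128
std = conv_params(k, c_in, c_out)      # 3*3*64*128 = 73728
ds = dsconv_params(k, c_in, c_out)     # 3*3*64 + 64*128 = 8768
ratio = std / ds                       # roughly 8.4x fewer parameters
```

The reduction factor approaches k^2 as C_out grows, which is why replacing even a subset of layers yields the substantial compression the letter reports.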
Multi-speaker automatic speech recognition (ASR) aims to transcribe conversational speech involving multiple speakers, requiring the model to capture not only what was said, but also who said it and sometimes when it was spoken. Recent Speech-LLM approaches have shown the potential of unified modeling for this task, but jointly learning speaker attribution, temporal structure, and lexical recognition remains difficult and data-intensive. At the current stage, leveraging reliable speaker diarization as an explicit structural prior provides a practical and efficient way to simplify this task. To effectively exploit such priors, we propose DM-ASR, a diarization-aware multi-speaker ASR framework that reformulates the task as a multi-turn dialogue generation process. Given an audio chunk and diarization results, DM-ASR decomposes transcription into a sequence of speaker- and time-conditioned queries, each corresponding to one speaker in one time segment. This formulation converts multi-speaker recognition into a series of structured sub-tasks, explicitly decoupling speaker-temporal structure from linguistic content and enabling effective integration of diarization cues with the reasoning capability of large language models. We further introduce an optional word-level timestamp prediction mechanism that interleaves word and timestamp tokens, yielding richer structured outputs and better transcription quality. Our analysis shows that diarization systems provide more reliable speaker identities and segment-level boundaries, while LLMs excel at modeling linguistic content and long-range dependencies, demonstrating their complementary strengths. Experiments on Mandarin and English benchmarks show that the proposed approach achieves strong performance with relatively small models and training data, while remaining competitive with or outperforming existing unified approaches.
This paper proposes and analyzes Riemannian optimization algorithms on the manifold of unitary and symmetric matrices, denoted ${\cal {U}}_s$, which naturally models the scattering matrices of passive and reciprocal devices such as beyond-diagonal reconfigurable intelligent surfaces (BD-RISs). Despite its relevance, the geometry of ${\cal {U}}_s$ has remained largely unexplored, and existing BD-RIS optimization methods either ignore the symmetry constraint or rely on costly Takagi-based parameterizations. We first provide a rigorous geometric characterization of ${\cal {U}}_s$, deriving its tangent space, a simple retraction, and closed-form expressions for geodesics. Building on these results, we develop two Riemannian manifold optimization (MO) algorithms tailored to ${\cal {U}}_s$: a line-search (LS) based scheme and a phase-optimization (PO) update along geodesics. We then apply the proposed framework to BD-RIS-assisted multiple-input multiple-output (MIMO) links, addressing sum-gain maximization, rate maximization, and minimum mean-square error problems, where they outperform existing approaches. Furthermore, we show that when the number of BD-RIS elements exceeds the total number of antennas, the optimal scattering matrix is low-rank, which motivates and enables efficient low-rank variants of the proposed algorithms.
We present a novel framework for line-of-sight (LoS) delay-Doppler (DD) estimation in dense scattering propagation environments. We propose two time-frequency (TF) domain pilot sequences, inspired by the Zadoff-Chu sequence, that exhibit desirable autocorrelation properties. Further, we develop a twisted convolution-based approach for LoS DD estimation directly from the TF-domain received signal, avoiding the additional TF-to-DD transformation commonly found in the literature. Numerical results from simulations demonstrate that the proposed framework significantly outperforms traditional single-carrier Zadoff-Chu sequences in both delay and Doppler estimation over a wide range of Rician fading factor and SNR values.
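For background, the Zadoff-Chu sequences that inspire the proposed pilots have constant amplitude and ideal periodic autocorrelation (a single peak at lag zero), which is easy to verify numerically; the root and length below are arbitrary choices, not the paper's parameters:

```python
import numpy as np

def zadoff_chu(u, N):
    """Root-u Zadoff-Chu sequence of odd length N (requires gcd(u, N) = 1)."""
    n = np.arange(N)
    return np.exp(-1j * np.pi * u * n * (n + 1) / N)

N, u = 31, 5
zc = zadoff_chu(u, N)

# constant envelope: every sample has unit magnitude
amp_dev = np.abs(np.abs(zc) - 1).max()

# ideal periodic autocorrelation: N at lag 0, (numerically) zero at all other lags
acf = np.array([np.vdot(zc, np.roll(zc, lag)) for lag in range(N)])
```

This delta-like autocorrelation is what makes matched filtering against the pilot resolve delay unambiguously, the property the proposed TF-domain sequences are designed to retain.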
Understanding social dominance in animal behavior is critical for neuroscience and behavioral studies. In this work, we explore the capability of Multimodal Large Language Models (MLLMs) to analyze raw behavioral video of mice and predict their dominance hierarchy. We introduce MTT-Bench, a novel benchmark comprising annotated videos of pairwise mouse interactions for Mouse Tube Test analysis. Building on existing MLLM architectures, we fine-tune these models to perform zero-shot inference on unseen behavioral sequences, predicting social dominance without explicit labels during testing. Our framework demonstrates promising results, showing high agreement with tube test rankings. This work opens a new direction for applying foundation models to ethology and social behavior analysis, without the need to design domain-specific models.
The emergence of large-scale pretrained foundation models has transformed computer vision, enabling strong performance across diverse downstream tasks. However, their potential for physics-based inverse problems, such as accelerated cardiac MRI reconstruction, remains largely underexplored. In this work, we investigate whether natural-domain foundation models can serve as effective image priors for accelerated cardiac MRI reconstruction, and compare the performance obtained against domain-specific counterparts such as BiomedCLIP. We propose an unrolled reconstruction framework that incorporates pretrained, frozen visual encoders, such as CLIP, DINOv2, and BiomedCLIP, within each cascade to guide the reconstruction process. Through extensive experiments, we show that while task-specific state-of-the-art reconstruction models such as E2E-VarNet achieve superior performance in standard in-distribution settings, foundation-model-based approaches remain competitive. More importantly, in challenging cross-domain scenarios, where models are trained on cardiac MRI and evaluated on anatomically distinct knee and brain datasets, foundation models exhibit improved robustness, particularly under high acceleration factors and limited low-frequency sampling. We further observe that natural-image-pretrained models, such as CLIP, learn highly transferable structural representations, while domain-specific pretraining (BiomedCLIP) provides modest additional gains in more ill-posed regimes. Overall, our results suggest that pretrained foundation models offer a promising source of transferable priors, enabling improved robustness and generalization in accelerated MRI reconstruction.
We study whether deep networks for medical imaging learn useful nonrobust features - predictive input patterns that are not human interpretable and highly susceptible to small adversarial perturbations - and how these features impact test performance. We show that models trained only on nonrobust features achieve well above chance accuracy across five MedMNIST classification tasks, confirming their predictive value in-distribution. Conversely, adversarially trained models that primarily rely on robust features sacrifice in-distribution accuracy but yield markedly better performance under controlled distribution shifts (MedMNIST-C). Overall, nonrobust features boost standard accuracy yet degrade out-of-distribution performance, revealing a practical robustness-accuracy trade-off in medical imaging classification; how this trade-off is resolved should be tailored to the requirements of the deployment setting.
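Why small perturbations can overturn predictions driven by such features is visible already in a toy linear classifier attacked with a one-step signed-gradient (FGSM-style) perturbation; the weights, input, and epsilon below are synthetic assumptions, not the paper's models or data:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 100
w = rng.normal(size=d)                   # toy linear classifier: predict sign(w @ x)
x = rng.normal(size=d)
x = x - ((w @ x) - 0.5) * w / (w @ w)    # shift x so its clean score is exactly +0.5
y = 1                                    # true label; currently classified correctly

eps = 0.05                               # tiny per-pixel budget (L-infinity)
delta = -eps * y * np.sign(w)            # FGSM: signed-gradient step on the input
score_clean = w @ x                      # +0.5
score_adv = w @ (x + delta)              # 0.5 - eps * sum(|w|): driven strongly negative
```

The attack moves every coordinate by only eps, yet the score shifts by eps times the L1 norm of the weights, so a model that spreads its reliance across many weakly predictive (nonrobust) coordinates is exactly the one that flips.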
Optical wireless communication (OWC) is a promising technology for supporting data-intensive services in indoor environments due to its large unregulated spectrum, high spatial reuse, and potential for multi-gigabit data rates. In particular, vertical-cavity surface-emitting laser (VCSEL) based systems enable highly directional transmission, allowing efficient spatial separation of users and improved link performance. However, the use of narrow optical beams also makes system performance highly sensitive to user mobility and device orientation, as movement directly affects beam alignment and optical channel gain. Consequently, power allocation strategies that ignore mobility dynamics often provision excess optical power to maintain reliable connectivity, resulting in inefficient energy use. In this work, a power control framework for dynamic indoor OWC networks that explicitly accounts for mobility-driven channel variation is developed. It uses a hybrid Gauss-Markov and learning-based approach that captures both user movement continuity and behaviour-driven orientation changes. The inferred mobility states are then used to guide power allocation decisions. Simulation results show that incorporating mobility-aware channel prediction enables more accurate power allocation and improves energy efficiency compared with conventional power control schemes in dynamic indoor environments.
Respiratory airflow signals provide critical insight into breathing mechanics, yet conventional analysis methods remain limited in their ability to characterize the internal structure of individual breaths. Traditional approaches treat airflow as a quasi-periodic signal and rely on global descriptors such as tidal volume or peak flow, obscuring sub-breath events that reflect neuromuscular coordination and compensatory breathing strategies. This study introduces a parametric framework for decomposing inspiratory airflow into a small number of time-localized components with explicit amplitude, onset time, and duration parameters. Unlike spectral or data-adaptive methods, the proposed approach employs physiologically grounded basis functions (half-sine, Gaussian, and beta) to represent intrabreath waveform morphology through constrained nonlinear optimization. Evaluation across 8,276 breaths demonstrates high reconstruction accuracy (mean squared error $<$ 0.001 for four-component models) and robust parameter precision under moderate noise. Component-derived features describing sub-breath timing and coordination improved classification of cognitive fatigue states arising from cognitive-respiratory competition by up to 30.7% in Matthews correlation coefficient compared with classical respiratory metrics. These results establish that modeling airflow as a sum of parameterized, time-localized primitives provides an interpretable and precise foundation for quantifying intrabreath organization, compensatory breathing dynamics, and respiratory motor control adaptation under cognitive-respiratory dual-task demands.
The Terahertz (THz) band (0.1-10 THz) has emerged as a critical frontier for future communication systems, offering ultra-wide bandwidths that enable Terabits-per-second (Tbps) wireless links and high-precision sensing and imaging. However, practical deployment of THz systems is hindered by unique challenges, including intricate channel characteristics, high-dimensional and large-scale optimization problems, and highly dynamic network environments. Artificial Intelligence (AI) serves as a transformative enabler to address these challenges, providing robust capabilities for precise modeling, advanced signal processing, complex optimization, real-time decision-making, and prediction, among others. Reciprocally, the unprecedented bandwidth and high-resolution sensing capabilities of THz networks provide a promising physical infrastructure for AI, facilitating training, inference, and data collection. This survey presents a systematic and comprehensive overview of AI-driven solutions across the entire THz communication network and the symbiosis of AI and THz networks. To begin with, a foundational overview of AI technologies tailored for wireless communications is presented. Subsequently, AI-based innovations are investigated, spanning from hardware design, channel modeling, and physical layer optimization, up to higher-layer network protocols and advanced THz services, including mobile edge computing and sensing-empowered applications. In parallel, the capacity of THz networks to serve AI is examined, underscoring a profound paradigm shift towards a mutual symbiosis where AI and THz co-evolve and empower each other. Finally, by synthesizing these state-of-the-art advancements and identifying open research directions, this survey highlights the potential of AI as a copilot in the development of THz communication systems.
In this paper, we present the Electric Mobility Dial-a-Ride Problem (EM-DARP), which extends the Electric Vehicle Dial-a-Ride Problem (EV-DARP) to better accommodate human-focused mobility services. The problem involves utilizing a fleet of heterogeneous Electric Vehicles (EVs) to fulfill a set of customer requests with DARP and mobility-related specifications, while incorporating visits to charging stations between requests. The problem is formulated as a Mixed-Integer Linear Program (MILP) and subsequently solved for a number of curated evaluation scenarios to demonstrate its practical applicability.
We investigate the problem of jointly testing a pair of composite hypotheses and, depending on the test result, estimating a random parameter under distributional uncertainties. Specifically, it is assumed that the distribution of the data given the parameter of interest is subject to uncertainty. Both a Bayesian formulation and a Neyman-Pearson-like formulation are considered. It is shown that the optimal policy induces an $f$-similarity that must be maximized to identify the least favorable distributions. Beyond the general results, the implementation is investigated using a band-type uncertainty model. For designing the minimax procedures, existing algorithms are modified to increase convergence speed while maintaining numerical stability. The proposed theory is supplemented by numerical results for both formulations.
Advances in technology are transforming sustainable cattle farming practices, with electronic feeding systems generating large longitudinal datasets on individual animal feed intake and offering the possibility of autonomous precision livestock systems. However, the literature still lacks a methodology that fully leverages these longitudinal big data to accurately predict feed intake while accounting for environmental conditions. To fill this gap, we developed an AI-based framework to accurately predict feed intake at the individual-animal and pen levels. Data from 19 experiments (>16.5M samples; 2013-2024) conducted at the Nancy M. Cummings Research Extension & Education Center (Carmen, ID) feedlot facility, together with environmental data from AgriMet Network weather stations, were used to develop two novel environmental indices: the InComfort-Index, based solely on meteorological variables, showed good predictive capability for thermal comfort but had limited ability to predict feed intake; the EASI-Index, a hybrid index integrating environmental variables with feed intake behavior, performed well in predicting feed intake but was less effective for thermal comfort. Together with the environmental indices, machine learning models were trained; the best-performing model (XGBoost) achieved an RMSE of 1.38 kg/day at the animal level and only 0.14 kg/(day-animal) at the pen level. This approach provides a robust AI-based framework for predicting feed intake in individual animals and pens, with potential applications in precision management of feedlot cattle through feed waste reduction, resource optimization, and climate-adaptive livestock management.
Recent works have demonstrated that attention-based transformer and large language model (LLM) architectures can achieve strong channel state prediction (CSP) performance by capturing long-range temporal dependencies across channel state information (CSI) sequences. However, these models suffer from quadratic scaling in sequence length, leading to substantial computational cost, memory consumption, and inference latency, which limits their applicability in real-time and resource-constrained wireless deployments. In this paper, we investigate whether selective state space models (SSMs) can serve as a hardware-efficient alternative for CSI prediction. We propose MambaCSP, a hybrid-attention SSM architecture that replaces LLM-based prediction backbones with a linear-time Mamba model. To overcome the local-only dependencies of pure SSMs, we introduce lightweight patch-mixer attention layers that periodically inject cross-token attention, supporting long-context CSI prediction. Extensive MISO-OFDM simulations show that MambaCSP improves prediction accuracy over LLM-based approaches by 9-12%, while delivering up to 3.0x higher throughput, 2.6x lower VRAM usage, and 2.9x faster inference. Our results demonstrate that hybrid state space architectures provide a promising direction for scalable and hardware-efficient AI-native CSI prediction in future wireless networks.
The problem of controlling hybrid dynamical systems using model predictive control (MPC) is formulated and sufficient conditions for asymptotic stability of a set are provided. Hybrid dynamical systems are modeled in terms of hybrid equations, involving a differential equation and a difference equation with inputs and constraints. The proposed hybrid MPC algorithm uses a suitable prediction and control horizon construction inspired by hybrid time domains. Structural properties of the hybrid optimization problem, its feasible set, and its value function are established. The stability conditions are checkable and are given in terms of properties of the stage cost, the terminal cost, and the existence of static state-feedback laws, related through a control Lyapunov function condition. Examples illustrate the results throughout the paper.
Accurate yet low-latency channel state information (CSI) acquisition is essential for multiple-input multiple-output (MIMO) communication systems. While advanced deep generative models, such as score-based and diffusion models, enable high-fidelity CSI reconstruction from limited pilot observations, they often suffer from high inference latency. To achieve accurate CSI estimation under stringent latency constraints, this paper proposes a null-space flow matching (FM) framework that decomposes pilot-limited MIMO channel estimation into a range-space reconstruction problem and a null-space generation problem. Specifically, the range-space component of the channel is directly recovered from noisy pilot observations, while only the ambiguous null-space component is iteratively refined using an FM-based generative prior. To further improve the robustness of the proposed framework, we introduce a power-law time schedule to better allocate the limited number of refinement steps, along with a noise-aware adaptive correction strategy to suppress channel noise on the refinement trajectory. Experimental results demonstrate that our method achieves a competitive normalized mean square error (NMSE) even under a strict latency budget of around 3 ms, while delivering superior estimation accuracy and faster inference than both model-based and generative baselines.
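As a minimal illustration of the range/null-space decomposition described above, consider a generic underdetermined linear pilot model $y = Ax$ (all names, sizes, and values here are hypothetical, not the paper's actual system): the pseudoinverse recovers the component of the channel that the pilots determine, while the orthogonal complement is exactly the part that a generative prior must supply.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 8, 16                          # fewer pilot observations than unknowns
A = rng.standard_normal((m, n))       # pilot observation matrix (hypothetical)
x_true = rng.standard_normal(n)       # vectorized channel
y = A @ x_true                        # noiseless pilot measurements

A_pinv = np.linalg.pinv(A)
P_range = A_pinv @ A                  # projector onto the row space of A
x_range = A_pinv @ y                  # range-space part: fixed by the pilots
x_null = (np.eye(n) - P_range) @ x_true  # null-space part: invisible to the pilots

# The measurements constrain only the range-space component; any null-space
# component (e.g., one proposed by a generative model) leaves y unchanged.
assert np.allclose(A @ (x_range + x_null), y)
assert np.allclose(A @ x_null, 0.0, atol=1e-9)
```

In the proposed framework, `x_null` is not available in closed form as above; it is the quantity the flow-matching prior iteratively generates, while `x_range` is recovered directly from the pilots.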
Several brain foundation models (FM) have recently been proposed to predict brain disorders by modelling dynamic functional connectivity (FC). While they demonstrate remarkable model performance and zero- or few-shot generalization, the salient features identified as potential biomarkers are yet to be thoroughly evaluated. We propose RE-CONFIRM, a framework for evaluating the robustness of potential biomarker candidates elucidated by deep learning (DL) models including FMs. From experiments on five large datasets of Autism Spectrum Disorder (ASD), Attention-deficit Hyperactivity Disorder (ADHD), and Alzheimer's Disease (AD), we found that although commonly used performance metrics provide an intuitive assessment of model predictions, they are insufficient for evaluating the robustness of biomarkers identified by these models. RE-CONFIRM metrics revealed that simply finetuning FMs leads to models that fail to capture regional hubs effectively, even in disorders where hubs are known to be implicated, such as ASD and ADHD. In view of this, we propose Hub-LoRA (Low-Rank Adaptation) as a fine-tuning technique that enables FMs to not only outperform customised DL models but also produce neurobiologically faithful biomarkers supported by meta-analyses. RE-CONFIRM is generalizable and can be easily applied to ascertain the robustness of DL models trained on functional MRI datasets. Code is available at: this https URL.
Portamento in string performance has been studied primarily as a binary presence-or-absence phenomenon, with existing research measuring frequency of occurrence and, less commonly, duration in milliseconds. This paper introduces a third quantitative descriptor: the spectrographic gradient of the portamento slide, measured in Hz/second, and demonstrates its measurement using a protocol combining Sonic Visualiser's melodic spectrogram layer, GIMP pixel analysis, and metric calibration against the spectrogram's known frequency axis. The gradient captures what duration alone cannot: the steepness of the pitch trajectory, which encodes the expressive character of the slide independently of its length. The method is applied to opening measures chosen specifically because their monophonic texture permits reliable spectrographic pitch tracking, and yields gradient values ranging from approximately 600~Hz/s in late-period recordings to over 4,000~Hz/s in early twentieth-century performances. The paper further documents a gain-recovery protocol that extends the analysable corpus to analogue recordings from the 1930s whose portamento traces are faint in digital transfer. Applying the method to a corpus of 22 recordings spanning 1930--2012, the paper tests the hypothesis that gradient steepness correlates negatively with tempo: slower performances produce steeper, longer slides, while faster performances produce shallower slides or none at all. The results support this hypothesis, suggesting that the widely documented decline of portamento across the twentieth century is not a binary transition from presence to absence but a continuous flattening of slide gradients.
Optimal wireless transmitter placement is a central task in radio-network planning, yet exhaustive search becomes prohibitively expensive at scale. This paper studies the single-transmitter setting under a fixed learned propagation surrogate, where exhaustive per-pixel evaluation remains tractable and provides surrogate-exact ground truth. We introduce a dataset of 167,525 urban scenarios (RadioMapSeer-Deployment) with dual surrogate-exact labels for coverage-optimal and power-optimal transmitter locations. Ground-truth analysis reveals an asymmetric coverage-power trade-off: coverage-optimal placement sacrifices 13.86% of received power, whereas power-optimal placement sacrifices only 5.50% of coverage; the best achievable balanced placement lies at $\bar{d}=2.60$ from the ideal point (100%,100%). We evaluate two learning formulations: indirect heatmap-based models that predict received-power radio maps, and direct score-map models that predict the objective landscape over feasible transmitter locations. Within the heatmap family, discriminative models deliver one-shot predictions 1350-2400x faster than exhaustive search, while diffusion models additionally support multi-sample inference that improves single-objective performance and, by reusing the same sample pool under a balanced criterion, recovers strong balanced placements without explicit multi-objective training. Dual score-map strategies combining power and coverage score maps match the exhaustive balanced optimum ($\bar{d}=2.60$) and remain close across smaller candidate budgets, at 14-22x speedups after candidate re-evaluation. Both formulations admit very fast one-shot inference; on this benchmark, dual score-map methods are strongest for balanced placement, whereas heatmap formulations remain attractive for their physically meaningful intermediate maps and, in the diffusion setting, for inference-time search.
Reliable visual perception under low illumination remains a core challenge for autonomous robotic systems, where degraded image quality directly compromises navigation, inspection, and various operations. A recent training-free approach showed that Bayesian optimisation with Gaussian Processes can adaptively select brightness, contrast, and denoising parameters on a per-image basis, achieving competitive enhancement without any learned model. However, that framework is limited to three parameters, applies no illumination decomposition or white balance correction, and relies on Non-Local Means denoising, which tends to over-smooth edges under noisy conditions. This paper proposes FLARE-BO (Fused Luminance and Adaptive Retinex Enhancement via Bayesian Optimisation), an extended framework that jointly optimises eight parameters spanning gamma correction, LIME-style illumination normalisation, chrominance denoising, bilateral filtering, NLM denoising, Grey-World automatic white balance, and adaptive post-smoothing. The search engine employs unit hypercube parameter normalisation, objective standardisation, Sobol quasi-random initialisation, and a Log Expected Improvement acquisition for principled exploration of the expanded space. Performance of the proposed method is benchmarked using the Low Light paired dataset (LOL), and the results show marked improvements over existing methods that were not specifically trained on this dataset.
Tinnitus is a prevalent auditory condition lacking objective biomarkers, motivating the search for reliable neural signatures. EEG, being a noninvasive brain-imaging method with high temporal resolution, provides a way to investigate the neural dynamics that may be associated with tinnitus. The generalizability of EEG-based tinnitus biomarkers across different datasets remains a critical challenge. Microstate theory has allowed for the characterization of quasi-stable topographic configurations in EEG, with some studies reporting altered microstate dynamics in tinnitus patients. This work seeks to improve upon existing dynamical systems analyses and their viability in identifying a robust biomarker. Dynamical features were extracted from two resting-state EEG datasets for the binary classification of tinnitus. Here, robustness is quantified as cross-dataset generalization, which is critical for clinical translation. We employ microstate analysis by identifying topographic states, from which transition probability and state duration features are derived. We also apply Koopman operator analysis through Dynamic Mode Decomposition (DMD) to dimensionality-reduced EEG to extract features from single windows. A linear SVM is trained on each feature set and evaluated in a cross-dataset generalization paradigm. PCA-based Koopman features yield the strongest discrimination metrics across both transfer directions, outperforming microstate-derived features. A Wasserstein-distance consistency analysis further reveals that Koopman eigenvalue \emph{magnitude}, encoding oscillation stability, generalizes across datasets ($\bar{\rho} = 0.685$), whereas eigenvalue \emph{phase}, encoding oscillation frequency, does not ($\bar{\rho} = 1.583$), providing interpretable evidence that altered oscillatory decay rates, rather than frequency shifts, constitute the more robust tinnitus biomarker.
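The DMD step behind the Koopman features can be illustrated on toy data (the signal, dimensions, and constants below are hypothetical, not the study's EEG): fit a one-step linear operator to a window of dimensionality-reduced signal, then read oscillation stability from the eigenvalue magnitudes and oscillation frequency from the eigenvalue phases.

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.01
t = np.arange(200) * dt
# Toy 2-component "EEG window": a damped 10 Hz oscillation plus small noise
z = np.exp((-0.5 + 2j * np.pi * 10.0) * t)   # decay rate -0.5 /s, 10 Hz
X = np.vstack([z.real, z.imag]) + 0.01 * rng.standard_normal((2, t.size))

# Exact DMD on a single window: least-squares one-step linear operator
X1, X2 = X[:, :-1], X[:, 1:]
A = X2 @ np.linalg.pinv(X1)
eigvals = np.linalg.eigvals(A)

magnitude = np.abs(eigvals)                          # per-step decay: stability
frequency = np.abs(np.angle(eigvals)) / (2 * np.pi * dt)  # oscillation frequency (Hz)
```

Here `magnitude` sits just below 1 (slow decay) and `frequency` recovers the 10 Hz oscillation; the abstract's finding is that features of the first kind transfer across datasets while features of the second kind do not.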
Conventional scalp-based EEG systems are cumbersome to use, requiring extensive setup, restrictive wiring, and conductive gels that can dry out and limit long-term monitoring, while also carrying social stigma. As a result, there is increasing interest in in-ear EEG technology to improve comfort, convenience, and discretion for users. This work presents a personalized in-ear EEG monitor (IEEM) that simultaneously captures EEG signals from the outer ear while delivering audio playback through the same device. The earpiece is custom-molded to precisely match the user's ear anatomy, providing effective sound isolation from the environment and enabling direct audio transmission into the ear canal. Testing of the assembled earpiece shows successful detection of electrooculography (EOG), eye blinks, jaw clenches, auditory steady-state responses (ASSR), and alpha modulation. Electrochemical impedance spectroscopy (EIS) measurements confirm stable electrode-skin contact, with impedance values similar to those of traditional dry electrodes. The integrated approach enables potential closed-loop neuromodulation applications entirely in the ear, where brain activity can be monitored in real time and corresponding acoustic stimulation delivered adaptively.
We study the convergence of model-based policy gradient for the deterministic, scalar, discounted linear-quadratic regulator when the controller is an overparameterized one-hidden-layer ReLU network without biases. Although the optimal LQR controller is linear, neural parameterization creates a redundant nonconvex weight space with a possibly asymmetric piecewise-linear controller. We show that this structure can still be analyzed exactly through the two effective gains induced on the positive and negative half-lines. Under suitable random initialization, sufficient width, and a small step size, the model-based policy gradient remains stable, decreases the cost geometrically, and drives the effective gains to the unique optimal scalar LQR gain with high probability.
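The "two effective gains" can be made concrete with a small sketch (the widths and random weights below are illustrative, not the paper's setup): with no biases, a one-hidden-layer ReLU controller of a scalar state is exactly linear on each half-line, with slopes determined by the neurons whose input weights are positive and negative, respectively.

```python
import numpy as np

rng = np.random.default_rng(2)
width = 64
w = rng.standard_normal(width)           # hidden-layer input weights (no biases)
c = rng.standard_normal(width) / width   # output-layer weights

def controller(x):
    """Overparameterized one-hidden-layer ReLU controller u(x) = c . relu(w x)."""
    return c @ np.maximum(w * x, 0.0)

# For x > 0 only neurons with w_i > 0 are active; for x < 0 only those with
# w_i < 0. Hence u is piecewise linear with two effective gains:
k_pos = c[w > 0] @ w[w > 0]              # slope of u on the positive half-line
k_neg = c[w < 0] @ w[w < 0]              # slope of u on the negative half-line

assert np.isclose(controller(3.0), k_pos * 3.0)
assert np.isclose(controller(-3.0), k_neg * (-3.0))
```

This is the reduction the analysis exploits: the redundant weight space collapses onto the pair $(k_+, k_-)$, and convergence can be tracked through these two induced gains.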
Here, we explore the problem of error propagation mitigation in modular digital twins as a sequential decision process. Building on a companion study that used a Hidden Markov Model (HMM) to infer latent error regimes from surrogate-physics residuals, we develop a Markov Decision Process (MDP) in which the inferred regimes serve as states, corrective interventions serve as actions, and a scalar reward captures the cost-benefit trade-off between system fidelity and maintenance expense. The baseline transition matrix is extracted from the HMM-learned parameters. We then extend the formulation to a Partially Observable MDP (POMDP) that accounts for the imperfect nature of regime classification by maintaining a belief distribution updated via Bayesian filtering, with the HMM confusion matrix serving as the observation model. Both formulations are solved via dynamic programming and validated through Gillespie stochastic simulation. We then benchmark two model-free reinforcement learning algorithms, Q-learning and REINFORCE, to assess whether effective policies can be learned without explicit model knowledge. A systematic comparison of different intervention policies demonstrates that the MDP policy achieves the highest cumulative reward and fraction of time in nominal operation, while the POMDP recovers approximately 95\% of MDP performance under realistic observation noise. Sensitivity analyses across observation quality, repair probability, and discount factor confirm the robustness of these conclusions, and the major gaps in the policy hierarchy are statistically significant at $p < 0.001$. The gap between MDP and POMDP performance quantifies the value of information, providing a principled criterion for investing in improved classification accuracy.
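The POMDP belief update described above, predict through the regime transition matrix and then condition on the imperfect classification via the confusion matrix, can be sketched as follows (a minimal three-regime example; all matrix values are hypothetical, not the HMM-learned parameters):

```python
import numpy as np

# Hypothetical 3-regime example: nominal, drift, fault
T = np.array([[0.90, 0.08, 0.02],     # transition matrix P(next | current)
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])
C = np.array([[0.85, 0.10, 0.05],     # confusion matrix P(observed label | true regime)
              [0.10, 0.80, 0.10],
              [0.05, 0.10, 0.85]])

def belief_update(b, T, obs):
    """One step of Bayesian filtering: propagate the belief through the
    dynamics, then reweight by the likelihood of the observed label."""
    predicted = b @ T                  # prior after the transition
    posterior = predicted * C[:, obs]  # condition on the classifier output
    return posterior / posterior.sum()

b = np.array([1.0, 0.0, 0.0])          # start certain of the nominal regime
b = belief_update(b, T, obs=1)         # classifier reports "drift"
```

The resulting belief vector is what the POMDP policy acts on, which is how the formulation absorbs observation noise that the fully observed MDP ignores.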
While generative text-to-speech (TTS) models approach human-level quality, monolithic metrics fail to diagnose fine-grained acoustic artifacts or explain perceptual collapse. To address this, we propose TTS-PRISM, a multi-dimensional diagnostic framework for Mandarin. First, we establish a 12-dimensional schema spanning stability to advanced expressiveness. Second, we design a targeted synthesis pipeline with adversarial perturbations and expert anchors to build a high-quality diagnostic dataset. Third, schema-driven instruction tuning embeds explicit scoring criteria and reasoning into an efficient end-to-end model. Experiments on a 1,600-sample Gold Test Set show TTS-PRISM outperforms generalist models in human alignment. Profiling six TTS paradigms establishes intuitive diagnostic flags that reveal fine-grained capability differences. TTS-PRISM is open-source, with code and checkpoints at this https URL.
Rhythm transcription is a key subtask of notation-level Automatic Music Transcription (AMT). While deep learning models have been extensively used for detecting the metrical grid in audio and MIDI performances, beat-based rhythm quantization remains largely unexplored. In this work, we introduce a novel deep learning approach for quantizing MIDI performances using a priori beat information. Our method leverages the transformer architecture to effectively process synchronized score and performance data for training a quantization model. Key components of our approach include dataset preparation, a beat-based pre-quantization method to align performance and score times within a unified framework, and a MIDI tokenizer tailored for this task. We adapt a transformer model based on the T5 architecture to meet the specific requirements of rhythm quantization. The model is evaluated using a set of score-level metrics designed for objective assessment of quantization performance. Through systematic evaluation, we optimize both data representation and model architecture. Additionally, we apply performance and score augmentations, such as transposition, note deletion, and performance-side time jitter, to enhance the model's robustness. Finally, a qualitative analysis compares our model's quantization performance against state-of-the-art probabilistic and deep-learning models on various example pieces. Our model achieves an onset F1-score of 97.3% and a note value accuracy of 83.3% on the ASAP dataset. It generalizes well across time signatures, including those not seen during training, and produces readable score output. Fine-tuning on instrument-specific datasets further improves performance by capturing characteristic rhythmic and melodic patterns. This work contributes a robust and flexible framework for beat-based MIDI quantization using transformer models.
We study linear quadratic dynamic games where players are uncertain about each other's control policies or goals and consequently seek to be strategically robust. Building on recent work on strategically robust and risk-averse game theory, we first formalize the problem of strategically robust linear quadratic dynamic games. We show that these can be rewritten as simple transformations of linear quadratic games in which each player chooses a controller in a fictitious game in which they are faced with an adversary who is penalized for deviating from the other players' policies. This formulation naturally induces a novel notion of dynamic equilibrium, which we call a strategically robust dynamic equilibrium. We establish existence and uniqueness of such equilibria and furthermore show that the equilibrium policies are Markovian, linear, and can be efficiently computed via coupled backward Riccati equations. Through numerical simulations, including experiments in a network game, we illustrate the benefits of strategic robustness in designing robust and resilient decentralized control schemes. Our experiments also expose a "free-lunch" phenomenon in games in which robustness does not incur a corresponding loss in performance but can yield improvements in players' utilities and social welfare.
This paper studies an integrated sensing and communication (ISAC) system where a multi-antenna base station (BS) communicates with multiple single-antenna users in the downlink and senses the unknown and random angle information of a target based on its prior distribution information and the received echo signals. We focus on a challenging scenario with heterogeneous unknown parameters where the target's reflection coefficient is also unknown with no prior information. We consider a general transmit beamforming structure with both communication beams and dedicated sensing beams, where the communication users can cancel the interference caused by the pre-determined sensing signals. By adopting the periodic posterior Cramér-Rao bound (PCRB) to quantify a lower bound of the mean-cyclic error (MCE) for sensing the periodic angle parameter, we optimize the transmit beamforming to minimize the periodic PCRB, subject to individual communication user rate constraints, which is a non-convex problem. By leveraging the semi-definite relaxation (SDR) technique and Lagrange duality theory, we derive the optimal solution and prove that at most one dedicated sensing beam is needed. Numerical results validate our analysis and the effectiveness of the proposed beamforming design.
Driver drowsiness is a major cause of traffic accidents worldwide, posing a serious threat to public safety. Vision-based driver monitoring systems often rely on fixed Eye Aspect Ratio (EAR) and Mouth Aspect Ratio (MAR) thresholds; however, such fixed values frequently fail to generalize across individuals due to variations in facial structure, illumination, and driving conditions. This paper proposes a personalized driver drowsiness detection system that monitors eyelid movements, head position, and yawning behavior in real time and provides warnings when signs of fatigue are detected. The system employs driver-specific EAR and MAR thresholds, calibrated before driving, to improve classical metric-based detection. In addition, deep learning-based Convolutional Neural Network (CNN) models are integrated to enhance accuracy in challenging scenarios. The system is evaluated using publicly available datasets as well as a custom dataset collected under diverse lighting conditions, head poses, and user characteristics. Experimental results show that personalized thresholding improves detection accuracy by 2-3% compared to fixed thresholds, while CNN-based classification achieves 99.1% accuracy for eye state detection and 98.8% for yawning detection, demonstrating the effectiveness of combining classical metrics with deep learning for robust real-time driver monitoring.
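The personalized-threshold idea can be sketched with the common six-landmark EAR definition (the landmark coordinates and the calibration factor below are hypothetical, not the paper's calibration procedure): compute the driver's open-eye baseline during calibration, then flag closure when the EAR falls a margin below it.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """EAR from six eye landmarks p1..p6 (standard six-landmark form):
    (|p2-p6| + |p3-p5|) / (2 |p1-p4|). Approaches 0 as the eye closes."""
    p1, p2, p3, p4, p5, p6 = np.asarray(eye, dtype=float)
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

# Hypothetical landmark positions for one driver
open_eye   = [(0, 0), (10, 8), (20, 8), (30, 0), (20, -8), (10, -8)]
closed_eye = [(0, 0), (10, 1), (20, 1), (30, 0), (20, -1), (10, -1)]

baseline = eye_aspect_ratio(open_eye)   # measured during pre-drive calibration
threshold = 0.7 * baseline              # hypothetical per-driver margin

assert eye_aspect_ratio(closed_eye) < threshold < baseline
```

A fixed global threshold would apply the same cutoff to every face; calibrating `threshold` against each driver's own `baseline` is what yields the reported 2-3% accuracy gain over fixed thresholds.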
This paper discusses the left and right ranks of quaternion matrices with Hankel structure. While they are in general different for arbitrary quaternion matrices, we show that the left and right ranks of quaternion Hankel matrices are equal. Moreover, we establish the relation between Hankel matrices and the existence of linear recurrence relations with quaternion coefficients.
Many engineered systems must balance competing objectives, such as performance and safety, cost and reliability, or efficiency and sustainability, and are naturally modeled as compositions of interacting subsystems. We study online multi-objective decision-making in monotone co-design, where functionalities and resources are partially ordered, and the goal is to identify the target-feasible antichain of non-dominated trade-offs using few expensive evaluations. We introduce optimistic evaluators: history-dependent bounds on functionality and resource mappings that enable safe elimination of implementations before full evaluation. Based on these evaluators, we develop an elimination-based rejection-sampling algorithm, prove its soundness, and show that the admissible region shrinks monotonically as information accumulates. We instantiate the framework under monotonicity, Lipschitz continuity, and linear-parametric structure. For compositional co-design problems modeled by multigraphs, we show how local optimistic certificates propagate through the tractable remainder of the graph to yield system-level optimistic feasibility and resource bounds. Experiments on multi-robot fleet design, intermodal mobility systems, and synthetic monotone and Lipschitz benchmarks show substantial sample-efficiency gains over uniform sampling, Bayesian optimization, and multi-objective evolutionary algorithms.
Imitation learning is a well-established approach for machine-learning-based control. However, its applicability depends on having access to demonstrations, which are often expensive to collect and/or suboptimal for solving the task. In this work, we present GCImOpt, an approach to learn efficient goal-conditioned policies by training on datasets generated by trajectory optimization. Our approach for dataset generation is computationally efficient, can generate thousands of optimal trajectories in minutes on a laptop computer, and produces high-quality demonstrations. Further, by means of a data augmentation scheme that treats intermediate states as goals, we are able to increase the training dataset size by an order of magnitude. Using our generated datasets, we train goal-conditioned neural network policies that can control the system towards arbitrary goals. To demonstrate the generality of our approach, we generate datasets and then train policies for various control tasks, namely cart-pole stabilization, planar and three-dimensional quadcopter stabilization, and point reaching using a 6-DoF robot arm. We show that our trained policies achieve high success rates and near-optimal control profiles, all while being small (fewer than 80,000 neural network parameters) and fast (in some cases more than 6,000 times faster than a trajectory optimization solver), so that they could be deployed onboard resource-constrained controllers. We provide videos, code, datasets and pre-trained policies under a free software license; see our project website this https URL.
This paper investigates backdoor attack planning in stochastic control systems modeled as Markov Decision Processes (MDPs). A backdoor attack involves an adversary deploying a policy that performs well in the original MDP to pass testing, but behaves maliciously at runtime when combined with a trigger that perturbs system dynamics. We consider a sophisticated attacker capable of jointly optimizing the backdoor policy and its trigger using only a blackbox simulator. During execution, the attacker has access only to partial observations of the system state and is restricted to introducing small perturbations to the system's transition dynamics. We formulate the attack planning problem as a constrained Markov game with an augmented state space and two players: Player 0 learns a backdoor policy that maximizes attack rewards when the trigger is active while remaining near-optimal in the original MDP when the trigger is inactive; Player 1 designs a finite-memory, observation-based trigger to activate the attack. We propose a switching gradient-based optimization algorithm to jointly solve for the backdoor policy and trigger. Experiments on a case study demonstrate the effectiveness of our method in achieving stealthy and successful backdoor attacks, and show how attack performance varies with parameters governing the stealthiness of the attack.
In this work, we propose an event-triggered moving horizon estimation (ET-MHE) scheme for the remote state estimation of general nonlinear systems. In the presented method, whenever an event is triggered, a single measurement is transmitted and the nonlinear MHE optimization problem is subsequently solved. If no event is triggered, the current state estimate is updated using an open-loop prediction based on the system dynamics. Moreover, we introduce a novel event-triggering rule under which we demonstrate robust global exponential stability of the ET-MHE scheme, assuming a suitable detectability condition is met. In addition, we show that with the adoption of a varying horizon length, a tighter bound on the estimation error can be achieved. Finally, we validate the effectiveness of the proposed method through two illustrative examples.
With the widespread adoption of AI, machine-to-machine communications are rapidly increasing, reshaping the requirements for optical networks. Recent advances in Gaussian noise modeling for digital coherent transmission have raised expectations for digital-twin-based operation. However, unlike digital twins in wireless communication, which are already well established, significant barriers remain for commercialization in optical networks. This paper discusses the evolving requirements of optical networks in the AI era and proposes a practical Optical Network Digital Twin architecture enabling dynamic and Quality of Transmission aware operation beyond conventional management. Representative use cases, including operator-driven optimization, user-operator collaboration, and multi-operator interconnection, are presented, along with the architectural framework and key challenges toward practical deployment.
We propose a Reinforcement Learning framework for sparse indirect control of large-scale multi-agent systems, where a few controlled agents shape the collective behavior of many uncontrolled agents. The approach addresses this multi-scale challenge by coupling ODEs (modeling controlled agents) with a PDE (describing the uncontrolled population density), capturing how microscopic control achieves macroscopic objectives. Our method combines model-free Reinforcement Learning with adaptive interaction strength compensation to overcome sparse actuation limitations. Numerical validation demonstrates effective density control, with the system achieving target distributions while maintaining robustness to disturbances and measurement noise, confirming that learning-based sparse control can replace computationally expensive online optimization.
Reachability analysis has been a prominent way to provide safety guarantees for neurally controlled autonomous systems, but its direct application to neural perception components is infeasible due to imperfect or intractable perception models. Typically, this issue has been bypassed by complementing reachability with statistical analysis of perception error, say with conformal prediction (CP). However, existing CP methods for time-series data often provide conservative bounds. The corresponding error accumulation over time has made it challenging to combine statistical bounds with symbolic reachability in a way that is provable, scalable, and minimally conservative. To reduce conservatism and improve scalability, our key insight is that perception error varies significantly with the system's dynamical state. This article proposes state-dependent conformal prediction, which exploits that dependency in constructing tight high-confidence bounds on perception error. Based on this idea, we provide an approach to partition the state space, using a genetic algorithm, so as to optimize the tightness of conformal bounds. Finally, since using these bounds in reachability analysis leads to additional uncertainty and branching in the resulting hybrid system, we propose a branch-merging reachability algorithm that trades off uncertainty for scalability so as to enable scalable and tight verification. The evaluation of our verification methodology on two complementary case studies demonstrates reduced conservatism compared to the state of the art.
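A minimal sketch of the split-conformal bound and a state-partitioned variant may clarify the idea. The paper optimizes the partition with a genetic algorithm; the fixed one-dimensional `edges` below are an illustrative stand-in, and the function names are assumptions:

```python
import numpy as np

def conformal_bound(scores, alpha=0.1):
    """Split-conformal quantile: smallest bound covering at least a
    (1 - alpha) fraction of calibration scores, with finite-sample correction."""
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    return float(np.sort(np.asarray(scores))[min(k, n) - 1])

def state_dependent_bounds(states, errors, edges, alpha=0.1):
    """One conformal bound per state-space cell (here a 1-D partition
    defined by `edges`), exploiting that perception error varies with state."""
    states, errors = np.asarray(states), np.asarray(errors)
    cells = np.digitize(states, edges)
    return {int(c): conformal_bound(errors[cells == c], alpha)
            for c in np.unique(cells)}
```

Partitioning lets low-error regions of the state space receive much tighter bounds than a single global quantile would give.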
This paper focuses on adaptive control of the discrete-time linear quadratic regulator (adaptive LQR). Recent literature has made significant contributions in proving non-asymptotic convergence rates, but existing approaches have a few drawbacks that pose barriers for practical implementation. These drawbacks include (i) a requirement of an initial stabilizing controller, (ii) a reliance on exploration for closed-loop stability, and/or (iii) computationally intensive algorithms. This paper proposes a new algorithm that overcomes these drawbacks for a particular class of discrete-time systems. This algorithm leverages direct model-reference adaptive control (direct MRAC) and combines it with an epoch-based approach in order to address the drawbacks (i)-(iii) with a provable high-probability regret bound comparable to existing literature. Simulations demonstrate that the proposed approach yields regrets that are comparable to those from existing methods when the conditions (i) and (ii) are met, and yields regrets that are significantly smaller when either of these two conditions is not met.
Many unmanned aerial vehicles (UAVs) can remain aerodynamically flyable after sustaining structural or control surface damage, yet insufficient robustness in conventional autopilots often leads to mission failure. This paper proposes a robust adaptive sliding mode controller (RASMC) for fixed-wing UAVs subject to aerodynamic coefficient perturbations and partial loss of control surface effectiveness. A damage-aware flight dynamics model is developed to systematically analyze the impact of such impairments on the closed-loop behavior. The RASMC is designed to ensure reliable tracking and stabilization, while a gain adaptation law maintains low control effort under nominal conditions and increases the gains as needed in the presence of aerodynamic damage. Lyapunov-based stability guarantees are derived, and assumptions on admissible uncertainty bounds are formulated to characterize the limits within which closed-loop stability and performance can be ensured. The proposed controller is implemented within an existing UAV autopilot framework, where outer-loop guidance and speed control modules provide reference commands to the RASMC for attitude stabilization. Simulations demonstrate that, despite significant damage, all closed-loop states remain stable with bounded tracking errors.
Microwave sounding is the leading driver of global numerical weather forecasting, but its impact is limited by the scalability of sounding instruments. With modern machining and commercial microwave components, it is now possible to design low size, weight, power, and cost (SWaP-C) microwave spectrometers while maintaining wide bandwidth performance. Here we report on the status of CubeSounder, a spectrometer tailored for water vapor radiometry that utilizes passive waveguide filter banks. After developing a prototype and a high-altitude balloon payload, we demonstrated CubeSounder on commercial stratospheric balloon flights. We report on our design process, especially the simulation and fabrication of the custom millimeter-wave filter banks, and present initial results from the data collected during the balloon flights.
The fading-memory (FM) property captures the progressive loss of influence of past inputs on a system's current output and was originally formalized by Boyd and Chua in an operator-theoretic framework. Despite its importance for systems approximation, reservoir computing, and recurrent neural networks, its connection with state-space notions of nonlinear stability, especially incremental ones, remains understudied. This paper introduces a state-space definition of FM. In state-space, FM can be interpreted as an extension of incremental input-to-output stability ($\delta$IOS) that explicitly incorporates a memory kernel upper-bounding the decay of past input differences. It is also closely related to Boyd and Chua's FM definition, with the sole difference of requiring uniform, instead of general, continuity of the memory functional with respect to an input-fading norm. We demonstrate that incremental input-to-state stability ($\delta$ISS) implies FM semi-globally for time-invariant systems under an equibounded input assumption. Notably, Boyd and Chua's approximation theorems apply to $\delta$ISS state-space models. As a closing application, we show that, under mild assumptions, the state-space model of current-driven memristors possesses the FM property.
Evaluating AI-generated dubbed content is inherently multi-dimensional, shaped by synchronization, intelligibility, speaker consistency, emotional alignment, and semantic context. Human Mean Opinion Scores (MOS) remain the gold standard but are costly and impractical at scale. We present a hierarchical multimodal architecture for perceptually meaningful dubbing evaluation, integrating complementary cues from audio, video, and text. The model captures fine-grained features such as speaker identity, prosody, and content from audio; facial expressions and scene-level cues from video; and semantic context from text, which are progressively fused through intra- and inter-modal layers. Lightweight LoRA adapters enable parameter-efficient fine-tuning across modalities. To overcome limited subjective labels, we derive proxy MOS by aggregating objective metrics with weights optimized via active learning. The proposed architecture was trained on 12k Hindi-English bidirectional dubbed clips, followed by fine-tuning with human MOS. Our approach achieves strong perceptual alignment (PCC > 0.75), providing a scalable solution for automatic evaluation of AI-dubbed content.
Evaluating the emotional intelligence (EI) of audio language models (ALMs) is critical. However, existing benchmarks mostly rely on synthesized speech, are limited to single-turn interactions, and depend heavily on open-ended scoring. This paper proposes HumDial-EIBench, a comprehensive benchmark for evaluating ALMs' EI. Using real-recorded human dialogues from the ICASSP 2026 HumDial Challenge, it reformulates emotional tracking and causal reasoning into multiple-choice questions with adversarial distractors, mitigating subjective scoring bias for cognitive tasks. It retains the generation of empathetic responses and introduces an acoustic-semantic conflict task to assess robustness against contradictory multimodal signals. Evaluations of eight ALMs reveal that most models struggle with multi-turn emotional tracking and implicit causal reasoning. Furthermore, all models exhibit decoupled textual and acoustic empathy, alongside a severe text-dominance bias during cross-modal conflicts.
The paper studies the optimal density steering problem for nonlinear continuous-time stochastic systems. To accurately capture nonlinear dynamics in high-uncertainty regions that deviate significantly from a nominal linearization point, we introduce the concept of Multiple Distribution-to-Distribution Linearization. The proposed approach first approximates the boundary distributions using Gaussian Mixture Models (GMMs), and decomposes the original nonlinear problem into a collection of Gaussian-to-Gaussian Optimal Covariance Steering (OCS) subproblems between pairs of mixture components. Each elementary OCS problem is solved via local linearization around the mean trajectory connecting the corresponding initial and terminal Gaussian components. The resulting elementary policies are then combined according to their associated conditional densities. We prove that the proposed multi-linearization approach yields tighter approximation error bounds than single-linearization for a broad class of problems. The effectiveness of the approach is demonstrated through numerical experiments on an Earth-to-Mars orbit transfer scenario.
Channel knowledge map (CKM) is a promising technique to achieve environment-aware wireless communication and sensing. Constructing the complete CKM based on channel knowledge observations at sparse locations is a fundamental problem for CKM-enabled wireless networks. However, most existing works on CKM construction only consider a special type of CKM, i.e., the channel gain map (CGM), which only records the channel gain value for each location. In this paper, we consider the channel spatial correlation map (SCM) construction, which records the location-specific spatial correlation matrix for multi-antenna systems. Unlike CGM construction, constructing SCM poses significant challenges due to its extremely high-dimensional structure. To address this issue, we first decompose the high-dimensional SCM into a lower-dimensional path gain map (PGM) and path angle map (PAM). We then propose a deep learning model termed E-SRResNet for constructing high-quality SCM from sparse samples, which incorporates multi-head attention (MHA) mechanisms and multi-scale feature fusion (MSFF) to accurately model both local and global spatial relationships of channel parameters and complex nonlinear mappings. Furthermore, we preprocess the dataset to provide priors including a line-of-sight (LoS) map, a binary building map, and a base station (BS) map for the model to reconstruct SCM more accurately. Simulations conducted on the CKMImageNet dataset demonstrate that the proposed E-SRResNet achieves significant performance improvements over baseline methods. Moreover, the cosine similarity between the constructed SCM and the ground truth exceeds 0.8 in most regions, validating the effectiveness of the proposed construction method.
The problem of online estimation of unknown parameters is considered for a linear regression equation affected by an additive perturbation, which may stem from measurement noise (corrupting both the regressor and the regressand) as well as from external disturbances. Known approaches to this problem typically suffer from one of the following disadvantages: 1) they only ensure convergence of the parametric error to a compact set with a non-adjustable bound, 2) all regressor elements must be independent of the perturbation/noise in order to annihilate it, or 3) an instrumental variable has to be selected. Based on a novel perturbation annihilation procedure, we propose three new estimation laws that are free of the above drawbacks and ensure exponential convergence of the parametric error to an arbitrarily small neighborhood of zero, in particular when more than half (but not all) of the regressor elements are independent of the additive perturbation. One of the proposed estimation laws is used to design a Generalized Parameter Estimation-Based Observer (GPEBO) for nonlinear affine systems, enhancing GPEBO performance when the measured system output is corrupted by noise. The theoretical results are supported by examples and mathematical modelling.
Full-duplex interaction, where speakers and listeners converse simultaneously, is a key element of human communication often missing from traditional spoken dialogue systems. These systems, based on rigid turn-taking paradigms, struggle to respond naturally in dynamic conversations. The Full-Duplex Interaction Track of ICASSP 2026 Human-like Spoken Dialogue Systems Challenge (HumDial Challenge) aims to advance the evaluation of full-duplex systems by offering a framework for handling real-time interruptions, speech overlap, and dynamic turn negotiation. We introduce a comprehensive benchmark for full-duplex spoken dialogue systems, built from the HumDial Challenge. We release a high-quality dual-channel dataset of real human-recorded conversations, capturing interruptions, overlapping speech, and feedback mechanisms. This dataset forms the basis for the HumDial-FDBench benchmark, which assesses a system's ability to handle interruptions while maintaining conversational flow. Additionally, we create a public leaderboard to compare the performance of open-source and proprietary models, promoting transparent, reproducible evaluation. These resources support the development of more responsive, adaptive, and human-like dialogue systems.
We consider the problem of operating a battery in a home connected to the grid to minimize electricity cost, which combines an energy charge and a tiered peak power charge based on the average of the $N$ largest daily peak powers in each billing month. With perfect foresight of loads and prices, the minimum cost is the solution of a mixed-integer linear program (MILP), which provides a lower bound on the cost of any implementable policy. We propose a model predictive control (MPC) policy that uses simple forecasts of loads and prices and solves a small MILP at each time step. Numerical experiments on one year of data from a home in Trondheim, Norway, show that the MPC policy attains a cost within $1.7\%$ of the prescient bound, and saves close to three times as much as the best rule-based policy we consider.
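The billing structure the MPC policy optimizes against can be sketched as follows. The single `peak_rate` simplifies the tiered peak charge described in the abstract, and all constants and names are illustrative rather than the actual Trondheim tariff:

```python
import numpy as np

def monthly_cost(load_kwh, price_per_kwh, daily_peaks_kw,
                 n_largest=3, peak_rate=50.0):
    """Energy charge (per-step consumption times price) plus a peak charge
    proportional to the average of the N largest daily peak powers.
    A tiered tariff would replace `peak_rate` with a piecewise rate."""
    energy_charge = float(np.dot(load_kwh, price_per_kwh))
    top = np.sort(np.asarray(daily_peaks_kw))[-n_largest:]
    peak_charge = peak_rate * float(np.mean(top))
    return energy_charge + peak_charge
```

The peak term is what makes the prescient problem a MILP: which days end up among the N largest is a discrete choice that the battery schedule can influence.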
In this paper, we study the task of subjective speech quality assessment (SSQA), which refers to predicting the perceptual quality of speech. Owing to the development of deep neural network models, SSQA has greatly advanced and has been widely applied in scientific papers to evaluate speech generation systems. Nonetheless, the insufficient out-of-domain (OOD) generalization ability of current SSQA models is underexplored and often overlooked by researchers. To study this problem systematically, we present MOS-Bench, a diverse SSQA dataset collection that currently contains 8 training sets and 17 test sets. Through extensive experiments, we first highlight the OOD generalization challenges of existing models. We then evaluate the efficacy of multiple-dataset training, comparing straightforward data pooling against AlignNet, an existing domain-aware method. We demonstrate that pooling multiple training sets provides a simple yet effective solution, and variation in the data is a key factor for robust generalization beyond training data size.
Robot swarm navigation through unknown obstacle environments is an emerging research area that poses significant challenges. Performing tasks in such environments requires swarms to achieve autonomous localization, perception, decision-making, control, and planning. The limited computational resources of onboard platforms make planning and control particularly demanding. Reactive planners offer low computational demands and high re-planning frequencies but lack predictive capabilities, often resulting in local minima. Multi-step planners can make multi-step predictions to reduce deadlocks, but they require substantial computation, resulting in a lower replanning frequency. This paper proposes a novel homotopic trajectory planning framework for a robot swarm that combines centralized homotopic trajectory planning (optimal virtual tube planning) with distributed control, enabling low-computation, high-frequency replanning and thereby uniting the strengths of multi-step and reactive planners. Based on multi-parametric programming, homotopic optimal trajectories are approximated by affine functions. The resulting approximate solutions have computational complexity $O(n_t)$, where $n_t$ is the number of trajectory parameters. This low complexity makes centralized planning of a large number of optimal trajectories practical and, when combined with distributed control, enables rapid, low-cost replanning. The effectiveness of the proposed method is validated through several simulations and experiments.
Tibetan is a low-resource language with minimal parallel speech corpora spanning its three major dialects (Ü-Tsang, Amdo, and Kham), limiting progress in speech modeling. To address this issue, we propose FMSD-TTS, a few-shot, multi-speaker, multi-dialect text-to-speech framework that synthesizes parallel dialectal speech from limited reference audio and explicit dialect labels. Our method features a novel speaker-dialect fusion module and a Dialect-Specialized Dynamic Routing Network (DSDR-Net) to capture fine-grained acoustic and linguistic variations across dialects while preserving speaker identity. Extensive objective and subjective evaluations demonstrate that FMSD-TTS significantly outperforms baselines in both dialectal expressiveness and speaker similarity. We further validate the quality and utility of the synthesized speech through a challenging speech-to-speech dialect conversion task. Our contributions include: (1) a novel few-shot TTS system tailored for Tibetan multi-dialect speech synthesis, (2) the public release of a large-scale synthetic Tibetan speech corpus generated by FMSD-TTS, and (3) an open-source evaluation toolkit for standardized assessment of speaker similarity, dialect consistency, and audio quality.
Electrocardiograms (ECGs) provide non-invasive measurements of heart activity and are established tools for detecting cardiac arrhythmias. Although supervised machine learning has emerged as a promising approach for automated heartbeat classification, substantial variations in ECG signals across individuals and leads, combined with inconsistent labeling standards and dataset biases, make it difficult to develop generalizable models. Dimensionality reduction maps high-dimensional data into a lower-dimensional space while preserving the underlying structure, enabling visualization and pattern discovery. Conventional methods, e.g., principal component analysis, prioritize large variances and typically overlook subtle yet clinically relevant patterns. Here, we show that nonlinear dimensionality reduction (NLDR) algorithms, e.g., t-SNE and UMAP, can identify medically relevant features in ECG signals without pretraining or prior information. Using the MIT-BIH Arrhythmia Database, we show that: a) applying NLDR to a mixed population of heartbeats reveals inter-individual morphological differences, as signals from the same person cluster together in latent spaces; and b) applying NLDR to heartbeats of a single individual separates normal beats from arrhythmias into distinct clusters, identifiable in an unsupervised manner. To our knowledge, this is the first systematic evaluation of NLDR for unsupervised arrhythmia detection. Both UMAP and t-SNE achieved trustworthiness scores >=0.95, indicating that local neighborhoods are well preserved in the embedding. Classification on 2D embeddings outperforms the original high-dimensional space, with a k-NN classifier discriminating individual recordings with >=80% accuracy and identifying arrhythmias with median accuracy >=98% and median F1-score >=85%. These results show that NLDR holds much promise for cardiac monitoring and personalized healthcare.
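A minimal sketch of applying an off-the-shelf NLDR method to beat morphologies may make the pipeline concrete. The actual study uses MIT-BIH beats; the two synthetic morphology classes below (a sine "normal" beat and a variant with an added bump) are purely illustrative:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
# Two synthetic "heartbeat" morphologies with additive noise
normal = np.sin(2 * np.pi * t)
ectopic = np.sin(2 * np.pi * t) + 1.5 * np.exp(-((t - 0.3) ** 2) / 0.002)
beats = np.vstack(
    [normal + 0.05 * rng.standard_normal(50) for _ in range(30)]
    + [ectopic + 0.05 * rng.standard_normal(50) for _ in range(30)]
)

# Unsupervised 2-D embedding; no labels or pretraining involved
emb = TSNE(n_components=2, perplexity=10, init="pca",
           random_state=0).fit_transform(beats)
```

In the embedding, the two morphology classes form separate clusters that a simple k-NN classifier can then discriminate, mirroring the paper's unsupervised arrhythmia-detection setup.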
We consider federated learning of linearly-parameterized nonlinear systems. We establish theoretical guarantees on the effectiveness of federated nonlinear system identification compared to centralized approaches, demonstrating that the convergence rate improves as the number of clients increases. Although the convergence rates in the linear and nonlinear cases differ only by a constant, this constant depends on the feature map $\phi$, which can be carefully chosen in the nonlinear setting to increase excitation and improve performance. We experimentally validate our theory in physical settings where client devices are driven by i.i.d. control inputs and control policies exhibiting i.i.d. random perturbations, ensuring non-active exploration. Experiments use trajectories from nonlinear dynamical systems characterized by real-analytic feature functions, including polynomial and trigonometric components, representative of physical systems such as pendulum and quadrotor dynamics. We analyze the convergence behavior of the proposed method under varying noise levels and data distributions. Results show that federated learning consistently improves convergence of any individual client as the number of participating clients increases.
Video sequences often contain structured noise and background artifacts that obscure dynamic content, posing challenges for accurate analysis and restoration. Robust principal component analysis (RPCA) methods address this by decomposing data into low-rank and sparse components. However, the sparsity assumption often fails to capture the rich variability present in real video data. To overcome this limitation, a hybrid framework that integrates low-rank temporal modeling with diffusion posterior sampling is proposed. The proposed method, Nuclear Diffusion, is evaluated on a real-world medical imaging problem, namely cardiac ultrasound dehazing, and demonstrates improved dehazing performance compared to traditional RPCA in terms of contrast enhancement (gCNR) and signal preservation (KS statistic). These results highlight the potential of combining model-based temporal modeling with deep generative priors for high-fidelity video restoration.
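The low-rank-plus-sparse baseline that the hybrid framework builds on can be sketched with a standard inexact-ALM principal component pursuit. This is the classical RPCA baseline, not the proposed diffusion method; hyperparameters follow common defaults from the RPCA literature:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: shrink singular values by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def rpca(D, lam=None, iters=200, rho=1.5):
    """Decompose D into low-rank L plus sparse S via inexact ALM
    (principal component pursuit); illustrative, not production code."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = 1.25 / np.linalg.norm(D, 2)
    S = np.zeros_like(D)
    Y = np.zeros_like(D)
    for _ in range(iters):
        L = svt(D - S + Y / mu, 1.0 / mu)          # low-rank update
        R = D - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)  # sparse update
        Y = Y + mu * (D - L - S)                   # dual ascent
        mu = min(mu * rho, 1e7)
    return L, S
```

For video, each column of D would hold one vectorized frame, so L captures the quasi-static background and S the dynamic content and haze.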
Hidden Markov Models (HMMs) are fundamental for modeling sequential data, yet learning their parameters from observations remains challenging. Classical methods like the Baum-Welch algorithm are computationally intensive and prone to local optima, while modern spectral algorithms offer provable guarantees but may produce probability outputs outside valid ranges. This work introduces Belief Net, a differentiable filtering framework that learns HMM parameters by formulating the forward filter as a structured neural network and optimizing it with stochastic gradient descent. This architecture recursively updates the belief state, which represents the posterior probability distribution over hidden states based on the observation history. Unlike black-box transformer models, Belief Net's learnable weights are explicitly the logits of the initial distribution, transition matrix, and emission matrix, ensuring full interpretability. The model processes observation sequences using a decoder-only (causal) architecture and is trained end-to-end with standard autoregressive next-observation prediction loss. On synthetic HMM data, Belief Net achieves faster convergence than Baum-Welch while successfully recovering parameters in both undercomplete and overcomplete settings, whereas spectral methods prove ineffective in the latter. Comparisons with transformer-based models are also presented on real-world language data.
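The forward-filter recursion that Belief Net parameterizes can be sketched as follows. In the actual model the three logit arrays are trainable weights optimized end-to-end by SGD with a next-observation prediction loss, which is omitted here; only the belief update itself is shown:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def forward_filter(obs, pi_logits, A_logits, B_logits):
    """HMM forward filter b_t ∝ (A^T b_{t-1}) * B[:, o_t], with the initial
    distribution pi, transition matrix A, and emission matrix B all stored
    as logits so every parameter stays a valid probability after softmax."""
    pi = softmax(pi_logits)
    A = softmax(A_logits, axis=1)   # rows: current state -> next state
    B = softmax(B_logits, axis=1)   # rows: state -> observation
    b = pi * B[:, obs[0]]
    b = b / b.sum()
    beliefs = [b]
    for o in obs[1:]:
        b = (A.T @ b) * B[:, o]
        b = b / b.sum()
        beliefs.append(b)
    return np.array(beliefs)
```

Because each belief is a normalized posterior over hidden states, the learned logits remain directly interpretable as HMM parameters, unlike a black-box sequence model.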
Real-time multi-spectral photoacoustic imaging (RT-mPAI) often suffers from synchronization instabilities when interfacing fast-tuning lasers with data acquisition platforms executing on non-real-time operating systems. To overcome this, we establish an open-source hardware-software architecture tailored for the widely adopted combination of the OPOTEK Phocus lasers and Verasonics Vantage systems. By employing an independent micro-controller for deterministic laser trigger counting alongside a decoupled client-server data streaming framework, the proposed system circumvents OS-induced timing deviations and local storage bottlenecks. By open-sourcing this pipeline and cultivating a collaborative environment to share both code and ideas, we aim to lower the technical and cost barriers for RT-mPAI, thereby democratizing access to stable RT-mPAI research and, more ambitiously, fostering a vibrant open-source community.