While the shift from cascaded dialogue systems to end-to-end (E2E) speech Large Language Models (LLMs) improves latency and paralinguistic modeling, E2E models often exhibit a significant performance degradation compared to their text-based counterparts. The standard Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) training methods fail to close this gap. To address this, we propose X-OPD, a novel Cross-Modal On-Policy Distillation framework designed to systematically align the capabilities of Speech LLMs with those of their text-based counterparts. X-OPD enables the Speech LLM to explore its own distribution via on-policy rollouts, where a text-based teacher model evaluates these trajectories and provides token-level feedback, effectively distilling the teacher's capabilities into the student's multi-modal representations. Extensive experiments across multiple benchmarks demonstrate that X-OPD significantly narrows the gap in complex tasks while preserving the model's inherent capabilities.
Stacked intelligent metasurfaces (SIMs) represent a breakthrough in wireless hardware by comprising multilayer, programmable metasurfaces capable of analog computing in the electromagnetic (EM) wave domain. By examining their architectural analogies, this article reveals a deeper connection between SIMs and artificial neural networks (ANNs). Leveraging this profound structural similarity, this work introduces a learnable SIM architecture and proposes a learnable SIM-based machine learning (ML) paradigm for sixth-generation (6G)-and-beyond systems. Then, we develop two SIM-empowered wireless signal processing schemes to effectively achieve multi-user signal separation and distinguish communication signals from jamming signals. The use cases highlight that the proposed SIM-enabled signal processing system can significantly enhance spectrum utilization efficiency and anti-jamming capability in a lightweight manner and pave the way for ultra-efficient and intelligent wireless infrastructures.
The study explores a hybrid centralized-federated approach for Human Activity Recognition (HAR) using a Transformer-based architecture. With the increasing ubiquity of edge devices, such as smartphones and wearables, a significant amount of private data from wearable and inertial sensors is generated, facilitating discreet monitoring of human activities, including resting, sleeping, and walking. This research focuses on deploying HAR technologies using mobile sensor data and leveraging Federated Learning within the Flower framework to evaluate the training of a federated model derived from a centralized baseline. The experimental results demonstrate the effectiveness of the proposed hybrid approach in improving the accuracy and robustness of HAR models while preserving data privacy in a non-IID data scenario. The federated learning setup demonstrated comparable performance to centralized models, highlighting the potential of federated learning to strike a balance between data privacy and model performance in real-world applications.
Virtual sensing aims to infer hard-to-measure quantities from accessible measurements and is central to perception and control in physical systems. Despite rapid progress from first-principle and hybrid models to modern data-driven methods, research remains siloed, leaving no established default approach that transfers across processes, modalities, and sensing configurations. We introduce MuViS, a domain-agnostic benchmarking suite for multimodal virtual sensing that consolidates diverse datasets into a unified interface for standardized preprocessing and evaluation. Using this framework, we benchmark established approaches spanning gradient-boosted decision trees and deep neural network (NN) architectures, and show that none of these provides a universal advantage, underscoring the need for generalizable virtual sensing architectures. MuViS is released as an open-source, extensible platform for reproducible comparison and future integration of new datasets and model classes.
Analog computing with microwave signals can enable exceptionally fast computations, potentially surpassing the limits of conventional digital computing. For example, by letting some input signals propagate through a linear microwave network and reading the corresponding output signals, we can instantly compute a matrix-vector product without any digital operations. In this paper, we investigate the computational capabilities of linear microwave networks made exclusively of two low-cost and fundamental components: hybrid couplers and phase shifters, which are both implementable in microstrip. We derive a necessary and sufficient condition characterizing the class of linear transformations that can be computed in the analog domain using these two components. Within this class, we identify three transformations of particular relevance to signal processing, namely the discrete Fourier transform (DFT), the Hadamard transform, and the Haar transform. For each of these, we provide a systematic design method to construct networks of hybrid couplers and phase shifters capable of computing the transformation for any size that is a power of two. To validate our theoretical results, a hardware prototype was designed and fabricated, integrating hybrid couplers and phase shifters to implement the $4\times4$ DFT. A systematic calibration procedure was subsequently developed to characterize the prototype and compensate for fabrication errors. Measured results from the prototype demonstrate successful DFT computation in the analog domain, showing high correlation with theoretical expectations. By realizing an analog computer through standard microwave components, this work demonstrates a practical pathway toward low-latency, real-time analog signal processing.
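The radix-2 structure behind such DFT networks can be checked numerically. The sketch below is an illustration, not the paper's design procedure: it builds the unitary DFT entirely from 2x2 butterfly blocks (the role played by hybrid couplers), diagonal phase shifters (twiddle factors), and fixed even/odd wiring permutations, then compares against NumPy's FFT.

```python
import numpy as np

def butterfly_dft(N):
    """Unitary N-point DFT assembled only from 2x2 butterfly blocks
    (coupler-like 2-point DFTs) and diagonal phase shifts, via radix-2
    decimation-in-time recursion. N must be a power of two."""
    if N == 1:
        return np.ones((1, 1), dtype=complex)
    half = butterfly_dft(N // 2)
    # even/odd interleave permutation: fixed wiring, no computation
    P = np.zeros((N, N))
    P[np.arange(N // 2), np.arange(0, N, 2)] = 1       # evens to top half
    P[np.arange(N // 2, N), np.arange(1, N, 2)] = 1    # odds to bottom half
    # twiddle-factor phase shifters
    D = np.diag(np.exp(-2j * np.pi * np.arange(N // 2) / N))
    # bank of 2x2 coupler blocks acting across the two halves
    I = np.eye(N // 2)
    B = np.block([[I, D], [I, -D]]) / np.sqrt(2)
    return B @ np.block([[half, 0 * I], [0 * I, half]]) @ P

F4 = butterfly_dft(4)
ref = np.fft.fft(np.eye(4)) / 2.0   # unitary 4x4 DFT for comparison
print(np.allclose(F4, ref))          # True
```

The same recursion scales to any power-of-two size, mirroring how a physical coupler/phase-shifter network grows with the transform dimension.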
Coronary artery calcification (CAC) is a strong predictor of cardiovascular risk but remains underutilized in clinical routine thoracic imaging due to the need for dedicated imaging protocols and manual annotation. We present DeepCAC2, a publicly available dataset containing automated CAC segmentations, coronary artery calcium scores, and derived risk categories generated from low-dose chest CT scans of the National Lung Screening Trial (NLST). Using a fully automated deep learning pipeline trained on expert-annotated cardiac CT data, we processed 127,776 CT scans from 26,228 individuals and generated standardized CAC segmentations and risk estimates for each acquisition. We already provide a public dashboard as a simple tool for visually inspecting a random subset of 200 NLST patients from the dataset. The dataset will be released with DICOM-compatible segmentation objects and structured metadata to support reproducible downstream analysis. The deep learning pipeline will be made publicly available as a DICOM-compatible this http URL container. DeepCAC2 provides a transparent, large-scale, public, fully reproducible resource for research in cardiovascular risk assessment, opportunistic screening, and imaging biomarker development.
Precision agriculture promises higher yields and sustainability, but adoption is slowed by the high cost of cyber-physical systems (CPS) and the lack of systematic design methods. We present a cost-aware design space exploration (DSE) framework for multimodal drone-rover platforms that integrates budget, energy, sensing, payload, computation, and communication constraints. Using integer linear programming (ILP) with SAT-based verification, our approach trades off among cost, coverage, and payload while ensuring constraint compliance and producing a multitude of design alternatives. We conduct case studies on smaller and larger-sized farms to show that our method consistently achieves full coverage within budget while maximizing payload efficiency, outperforming state-of-the-art CPS DSE approaches.
We propose Uni-ArrayDPS, a novel diffusion-based refinement framework for unified multi-channel speech enhancement and separation. Existing methods for multi-channel speech enhancement/separation are mostly discriminative and are highly effective at producing high-SNR outputs. However, they can still generate unnatural speech with non-linear distortions caused by the neural network and regression-based objectives. To address this issue, we propose Uni-ArrayDPS, which refines the outputs of any strong discriminative model using a speech diffusion prior. Uni-ArrayDPS is generative, array-agnostic, and training-free, and supports both enhancement and separation. Given a discriminative model's enhanced/separated speech, we use it, together with the noisy mixtures, to estimate the noise spatial covariance matrix (SCM). We then use this SCM to compute the likelihood required for diffusion posterior sampling of the clean speech source(s). Uni-ArrayDPS requires only a pre-trained clean-speech diffusion model as a prior and does not require additional training or fine-tuning, allowing it to generalize directly across tasks (enhancement/separation), microphone array geometries, and discriminative model backbones. Extensive experiments show that Uni-ArrayDPS consistently improves a wide range of discriminative models for both enhancement and separation tasks. We also report strong results on a real-world dataset. Audio demos are provided at \href{this https URL}{this https URL}.
Reachability computations that rely on learned or estimated models require calibration in order to uphold confidence about their guarantees. Calibration generally involves sampling scenarios inside the reachable set. However, producing reasonable probabilistic guarantees may require many samples, which can be costly. To remedy this, we propose that calibration of reachable sets be performed using active learning strategies. In order to produce a probabilistic guarantee on the active learning, we adapt the Pick-to-Learn algorithm, which produces generalization bounds for standard supervised learning, to the active learning setting. Our method, Approximate Pick-to-Learn, treats the process of choosing data samples as maximizing an approximate error function. We can then use conformal prediction to ensure that the approximate error is close to the true model error. We demonstrate our technique for a simulated drone racing example in which learning is used to provide an initial guess of the reachable tube. Our method requires fewer samples to calibrate the model and provides more accurate sets than the baselines. We simultaneously provide tight generalization bounds.
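The conformal-prediction step that this kind of calibration relies on can be illustrated with a toy split-conformal example. Everything below is a hypothetical setup, not the paper's drone-racing experiment: a learned model predicts a scalar "reachable-set radius", and the calibration quantile inflates it so that fresh samples are covered with probability at least 1 - alpha.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical calibration data: model predictions and true values
n_cal, alpha = 200, 0.1
pred = rng.uniform(1.0, 2.0, n_cal)           # predicted radii
truth = pred + rng.normal(0.0, 0.1, n_cal)    # true radii (illustrative noise)

# Split conformal prediction: nonconformity scores and their
# finite-sample-corrected quantile
scores = np.abs(truth - pred)
q_level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
q = np.quantile(scores, q_level, method="higher")

# On fresh data, the interval [pred - q, pred + q] covers the truth
# with probability >= 1 - alpha (marginally, by exchangeability)
new_pred = rng.uniform(1.0, 2.0, 5000)
new_truth = new_pred + rng.normal(0.0, 0.1, 5000)
coverage = np.mean(np.abs(new_truth - new_pred) <= q)
print(round(float(coverage), 3))   # close to 0.9
```

The same mechanism, applied to an approximate error function over actively chosen samples, is what lets a calibrated bound stand in for the true model error.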
In this paper, we investigate the performance of a fluid antenna relay (FAR)-assisted downlink communication system utilizing non-orthogonal multiple access (NOMA). The FAR, which integrates a fluid antenna system (FAS), is equipped on an autonomous aerial vehicle (AAV), and introduces extra degrees of freedom to improve the performance of the system. The transmission is divided into a first phase from the base station (BS) to the users and the FAR, and a second phase where the FAR forwards the signal using amplify-and-forward (AF) or decode-and-forward (DF) relaying to reduce the outage probability (OP) for the user with weaker channel conditions. To analyze the OP performance of the weak user, Copula theory and the Gaussian copula function are employed to model the statistical distribution of the FAS channels. Analytical expressions for the weak user's OP are derived for both the AF and the DF schemes. Simulation results validate the effectiveness of the proposed scheme, showing that it consistently outperforms benchmark schemes without the FAR. In addition, numerical simulations also demonstrate the values of the relaying scheme selection parameter under different FAR positions and communication outage thresholds.
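The Gaussian-copula modeling step can be sketched in isolation: draw correlated Gaussians, push them through the normal CDF to obtain uniforms with the copula's dependence, then map those through the inverse Rayleigh CDF to get correlated fading gains. The correlation value and Rayleigh marginals here are illustrative assumptions, not the paper's FAS channel statistics.

```python
import numpy as np
from scipy.stats import norm, rayleigh

rng = np.random.default_rng(2)

rho, n = 0.7, 100_000                         # illustrative copula correlation
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
z = rng.standard_normal((n, 2)) @ L.T         # correlated Gaussian pairs
u = norm.cdf(z)                               # uniform marginals, Gaussian-copula dependence
g = rayleigh.ppf(u, scale=1.0)                # correlated Rayleigh channel gains

c = float(np.corrcoef(g[:, 0], g[:, 1])[0, 1])
print(round(c, 2))                            # slightly below rho, as expected
```

Because the copula separates marginals from dependence, the same construction works unchanged if the Rayleigh marginals are swapped for any other fading distribution.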
For extremely large-scale arrays (XL-arrays), the discrete Fourier transform (DFT) codebook, conventionally used in the far-field, has recently been employed for near-field beam training. However, most existing methods rely on the line-of-sight (LoS) dominant channel assumption, which may suffer from degraded communication performance when applied to the general multi-path scenario due to the more complex received signal power pattern at the user. To address this issue, we propose in this paper a new hybrid learning-and-optimization-based beam training method that first leverages deep learning (DL) to obtain coarse channel parameter estimates, and then refines them via a model-based optimization algorithm, hence achieving high-accuracy estimation with low computational complexity. Specifically, in the first stage, a tailored U-Net architecture is developed to learn the non-linear mapping from the received power pattern to coarse estimates of the angles and ranges of multi-path components. In particular, the inherent permutation ambiguity in multi-path parameter matching is effectively resolved by a permutation invariant training (PIT) strategy, while the unknown number of paths is estimated based on defined path existence logits. In the second stage, we further propose an efficient particle swarm optimization method to refine the angular and range parameters within a confined search region; meanwhile, a Gerchberg-Saxton algorithm is used to retrieve multi-path channel gains from the received power pattern. Last, numerical results demonstrate that the proposed hybrid design significantly outperforms various benchmarks in terms of parameter estimation accuracy and achievable rate, yet with low computational complexity.
Low-field (LF) magnetic resonance imaging (MRI) improves accessibility and reduces costs but generally has lower signal-to-noise ratios and degraded contrast compared to high-field (HF) MRI, limiting its clinical utility. Simulating LF MRI from HF MRI enables virtual evaluation of novel imaging devices and development of LF algorithms. Existing low-field simulators rely on noise injection and smoothing, which fail to capture the contrast degradation seen in LF acquisitions. To this end, we introduce an end-to-end LF-MRI synthesis framework that learns HF to LF image degradation directly from a small number of paired HF-LF MRIs. Specifically, we introduce a novel HF to LF coordinate-image decoupled neural operator (H2LO) to model the underlying degradation process, and tailor it to capture high-frequency noise textures and image structure. Experimental results in T1w and T2w MRI demonstrate that H2LO produces more faithful simulated low-field images than existing parameterized noise synthesis models and popular image-to-image translation models. Furthermore, it improves performance in downstream image enhancement tasks, showcasing its potential to enhance LF MRI diagnostic capabilities.
Hamilton-Jacobi (HJ) reachability provides formal safety guarantees for nonlinear systems. However, it becomes computationally intractable in high-dimensional settings, motivating learning-based approximations that may introduce unsafe errors or overly optimistic safe sets. In this work, we propose a hierarchical probabilistic verification framework for reachability learning that bridges offline global certification and online local refinement. We first construct a coarse safe set using scenario optimization, providing an efficient global probabilistic certificate. We then introduce an online local refinement module that expands the certified safe set near its boundary by solving a sequence of convex programs, recovering regions excluded by the global verification. This refinement reduces conservatism while focusing computation on critical regions of the state space. We provide probabilistic safety guarantees for both the global and locally refined sets. Integrated with a switching mechanism between a learned reachability policy and a model-based controller, the proposed framework improves success rates in goal-reaching tasks with safety constraints, as demonstrated in simulation experiments of two drones racing to a goal with complex safety constraints.
Integrating Automatic Speech Recognition (ASR) into Speech Emotion Recognition (SER) enhances modeling by providing linguistic context. However, conventional feature fusion faces performance bottlenecks, and multi-task learning often suffers from optimization conflicts. While task vectors and model merging have addressed such conflicts in NLP and CV, their potential in speech tasks remains largely unexplored. In this work, we propose an Adaptive Layer-wise Task Vector Merging (AdaLTM) framework based on WavLM-Large. Instead of joint optimization, we extract task vectors from in-domain ASR and SER models fine-tuned on emotion datasets. These vectors are integrated into a frozen base model using layer-wise learnable coefficients. This strategy enables depth-aware balancing of linguistic and paralinguistic knowledge across transformer layers without gradient interference. Experiments on the MSP-Podcast corpus demonstrate that the proposed approach effectively mitigates conflicts between ASR and SER.
This paper develops a direct data-driven framework for constructing reduced-order models (ROMs) of discrete-time linear dynamical systems with unknown dynamics and process disturbances. The proposed scheme enables controller synthesis on the ROM and its refinement to the original system by an interface function designed using noisy data. To achieve this, the notion of simulation functions (SFs) is employed to establish a formal relation between the original system and its ROM, yielding a quantitative bound on the mismatch between their output trajectories. To construct such relations and interface functions, we rely on data collected from the unknown system. In particular, using noise-corrupted input-state data gathered along a single trajectory of the system, and without identifying the original dynamics, we propose data-dependent conditions, cast as a semidefinite program, for the simultaneous construction of ROMs, SFs, and interface functions. Through a case study, we demonstrate that data-driven controller synthesis on the ROM, combined with controller refinement via the interface function, enables the enforcement of complex specifications beyond stability.
This paper proposes a distributed event-triggered control method that not only guarantees consensus of multi-agent systems but also satisfies a prescribed LQ performance constraint. Taking the standard distributed control scheme with all-time communication as a baseline, we consider the problem of designing an event-triggered communication rule such that the resulting LQ cost satisfies a performance constraint with respect to the baseline cost while consensus is achieved. For general linear agents over an undirected graph, we employ local state predictors and a local triggering condition based only on information available to each agent. We then derive a sufficient condition for the proposed method to satisfy the performance constraint and guarantee consensus. In addition, we develop a tractable parameter design method for selecting the triggering parameters offline. Numerical examples demonstrate the effectiveness of the proposed method.
Mechanical vibration monitoring often requires high sampling rates and generates large data volumes, posing challenges for storage, transmission, and power efficiency. Compressive Sensing (CS) offers a promising approach to overcome these constraints by exploiting signal sparsity to enable sub-Nyquist acquisition and efficient reconstruction. This study presents a comprehensive comparative analysis of the key components of the CS framework, namely the sparse basis, the measurement matrix, and the reconstruction algorithm, for machinery vibration signals. In addition, a hardware-efficient measurement matrix, the Wang matrix, originally developed for image compression, is introduced and evaluated for the first time in this context. Experimental assessment using the HUMS2023 and the CETIM gearbox datasets demonstrates that this matrix achieves superior reconstruction quality, with higher SNR, compared to conventional Gaussian and Bernoulli matrices, especially at high compression ratios.
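The measure-then-reconstruct pipeline being compared can be illustrated with a minimal Orthogonal Matching Pursuit sketch on a synthetic k-sparse signal and a Gaussian measurement matrix. The parameters are illustrative, not the HUMS2023/CETIM setup, and this uses a conventional Gaussian matrix rather than the Wang matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                  # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)  # k-sparse signal
Phi = rng.standard_normal((m, n)) / np.sqrt(m)               # Gaussian measurement matrix
y = Phi @ x                                                   # sub-Nyquist measurements

# Orthogonal Matching Pursuit: greedily pick the atom most correlated
# with the residual, then re-fit by least squares on the selected support.
support, r = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(Phi.T @ r))))
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    r = y - Phi[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(err)   # relative reconstruction error, typically near machine precision
```

Swapping `Phi` for a different measurement matrix (Bernoulli, or a structured one) while holding the basis and solver fixed is exactly the kind of controlled comparison the study performs.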
In traditional views, the build-up of accelerating energy during faults can cause the well-known first-swing angle instability in synchronous generators (SGs). Interestingly, this letter presents a new insight that the accumulation of decelerating energy due to the low voltage ride-through (LVRT) and recovery control of grid-following inverter-based resources (GFL-IBRs) might also result in transient angle instability in SGs. The transient energy accumulated during the angle-decreasing swing transforms into the accelerating energy of the subsequent swing, hence such phenomena often manifest as multi-swing instability. Both theoretical analysis and simulation support these findings.
Deploying synchronous condensers (SynCons) near grid-following renewable energy sources (GFLRs) is an effective and increasingly adopted strategy for grid support. However, the potential transient instability risks in such configurations remain an open research question. This study investigates the mechanism by which the dominant synchronization instability source transitions upon SynCon integration and proposes a straightforward approach to enhance system stability by leveraging their interactive characteristics. Firstly, a dual-timescale decoupling model is established, partitioning the system into a fast subsystem representing phase-locked loop (PLL) dynamics and a slow subsystem characterizing SynCon rotor dynamics. The study then examines the influence of SynCons on the transient stability of nearby PLLs and their own inherent stability. The study shows that SynCon's voltage-source characteristics and its time-scale separation from PLL dynamics can significantly enhance the PLL's stability boundary and mitigate non-coherent coupling effects among multiple GFLRs. However, the dominant instability source shifts from the fast-time-scale PLL to the slow-time-scale SynCon after SynCon integration. Crucially, this paper demonstrates that the damping effect of PLL control can also be transferred from the fast to the slow time scale, allowing well-tuned PLL damping to suppress SynCon rotor acceleration. Consequently, by utilizing SynCon's inherent support capability and a simple PLL damping loop, the transient stability of the co-located system can be significantly enhanced. These conclusions are validated using a converter controller-based Hardware-in-the-Loop (CHIL) platform.
This paper presents a port-Hamiltonian formulation of hysteretic energy storage elements. First, we revisit the passivity property of backlash-driven storage elements by presenting a family of storage functions associated with the dissipativity property of such elements. We explicitly derive the corresponding available storage and required supply functions à la Willems [1], and show the interlacing property of the aforementioned family of storage functions sandwiched between the available storage and required supply functions. Second, using the proposed family of storage functions, we present a port-Hamiltonian formulation of hysteretic inductors as prototypical storage elements in port-Hamiltonian systems. In particular, we show how a Hamiltonian function can be chosen from the family of storage functions and how the hysteretic elements can be expressed as a port-Hamiltonian system with a feedthrough term, where the feedthrough term represents energy dissipation. Correspondingly, we illustrate its applicability in describing an RLC circuit (in parallel and in series) containing a hysteretic inductor element.
Most Rate-Splitting Multiple Access (RSMA) implementations rely on successive interference cancellation (SIC) at the receiver, whose performance is inherently limited by error propagation during common-stream decoding. This paper addresses this issue by developing a SIC-free RSMA receiver based on joint demapping (JD), which directly evaluates bit vectors over a composite constellation. Using a two-user Multiple-Input Single-Output (MISO) prototype, we conduct over-the-air measurements to systematically compare SIC and JD-based receivers. The results show that the proposed SIC-free receiver provides stronger reliability and better practicality over a wider operating range, with all observations being consistent with theoretical expectations.
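The idea of joint demapping over a composite constellation can be sketched for two users: instead of decoding and cancelling a common stream first (SIC), the receiver searches over all symbol pairs on the superposed constellation at once. The QPSK symbols, channel values, and noise level below are hypothetical, and a practical receiver would compute soft bit metrics rather than this hard minimum-distance search.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)

# Unit-energy QPSK alphabet shared by both users
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

h1, h2 = 1.0, 0.4 * np.exp(1j * 0.3)      # hypothetical per-user channels
s1, s2 = qpsk[2], qpsk[1]                 # transmitted symbols
noise = 0.02 * (rng.standard_normal() + 1j * rng.standard_normal())
y = h1 * s1 + h2 * s2 + noise             # superposed received sample

# Joint demapping: evaluate every candidate pair on the composite
# constellation h1*s1 + h2*s2 and pick the closest, with no SIC stage.
pairs = list(product(qpsk, qpsk))
best = min(pairs, key=lambda p: abs(y - (h1 * p[0] + h2 * p[1])))
print(best[0] == s1 and best[1] == s2)    # True
```

The search cost grows with the product of the alphabet sizes, but there is no common-stream decoding step whose errors could propagate, which is the reliability advantage the measurements examine.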
While conventional power system protection isolates faulty components only after a fault has occurred, fault prediction approaches try to detect faults before they can cause significant damage. Although initial studies have demonstrated successful proofs of concept, development is hindered by scarce field data and ineffective feature selection. To address these limitations, this paper proposes a surrogate task that uses simulation data for feature selection. This task exhibits a strong correlation (r = 0.92) with real-world fault prediction performance. We generate a large dataset containing 20000 simulations with 34 event classes and diverse grid configurations. From 1556 candidate features, we identify 374 optimal features. A case study on three substations demonstrates the effectiveness of the selected features, achieving an F1-score of 0.80 and outperforming baseline approaches that use frequency-domain and wavelet-based features.
In scenarios where synchronous generators (SGs) and grid-following renewable energy sources (GFLRs) are co-located, existing research, which mainly focuses on the first-swing stability of SGs, often overlooks ongoing dynamic interactions between GFLRs and SGs throughout the entire rotor swing period. To address this gap, this study first reveals that the angle oscillations of SGs can cause periodic grid voltage fluctuations, potentially triggering low-voltage ride-through (LVRT) control switching of GFLRs repeatedly. Then, the periodic energy changes of SGs under "circular" and "rectangular" LVRT limits are analyzed. The results indicate that circular limits are detrimental to the SG's first-swing stability, while rectangular limits and their slow recovery strategies can lead to the SG's multi-swing instability. Conservative stability criteria are also proposed for these phenomena. Furthermore, an additional controller based on feedback linearization is introduced to enhance the entire-period transient stability of SGs by adjusting the post-fault GFLR output current. Finally, the efficacy of the analysis is validated through electromagnetic transient simulations and controller hardware-in-the-loop (CHIL) tests.
Beyond-diagonal reconfigurable intelligent surface (BD-RIS) generalizes the conventional diagonal RIS (D-RIS) by introducing tunable inter-element connections, offering enhanced wave manipulation capabilities. However, realizing the advantages of BD-RIS requires accurate channel state information (CSI), whose acquisition becomes significantly more challenging due to the increased number of channel coefficients, leading to prohibitively large pilot training overhead in BD-RIS-aided multi-user multiple-input multiple-output (MU-MIMO) systems. Existing studies reduce pilot overhead by exploiting the channel correlations induced by the Kronecker-product or multi-linear structure of BD-RIS-aided channels, but neglect the spatial correlation among antennas and the statistical correlation across RIS-user channels. In this paper, we propose a learning-based channel estimation framework, namely the joint training scattering matrix learning and channel estimation framework (JTSMLCEF), which jointly optimizes the BD-RIS training scattering matrix and estimates the cascaded channels in an end-to-end manner to achieve accurate channel estimation and reduce the pilot overhead. The proposed JTSMLCEF follows a two-phase channel estimation protocol to enable adaptive training scattering matrix optimization with a training scattering matrix optimizer (TSMO) and cascaded channel estimation with a dual-attention channel estimator (DACE). Specifically, the DACE is designed with intra-user and inter-user attention modules to capture the multi-dimensional correlations in multi-user cascaded channels. Simulation results demonstrate the superiority of JTSMLCEF. Compared with the current state-of-the-art method, it reduces the pilot overhead by $80\%$ while further reducing the normalized mean squared error (NMSE) by $82.6\%$ and $92.5\%$ in indoor and urban micro-cell (UMi) scenarios, respectively.
This paper investigates a smart spectrum-sharing framework for reconfigurable intelligent surface (RIS)-aided local high-quality wireless networks (LHQWNs) within a mobile network operator (MNO) ecosystem. Although RISs are often considered potentially harmful due to interference, this work shows that properly controlled RISs can enhance the quality of service (QoS). The proposed system enables temporary spectrum access for multiple vertical service providers (VSPs) by dynamically allocating radio resources according to traffic demand. The spectrum is divided into dedicated subchannels assigned to individual VSPs and reusable subchannels shared among multiple VSPs, while RIS is employed to improve propagation conditions. We formulate a multi-VSP utility maximization problem that jointly optimizes subchannel assignment, transmit power, and RIS phase configuration while accounting for spectrum access costs, RIS leasing costs, and QoS constraints. The resulting mixed-integer non-linear program (MINLP) is intractable using conventional optimization methods. To address this challenge, the problem is modeled as a Markov decision process (MDP) and solved using deep reinforcement learning (DRL). Specifically, deep deterministic policy gradient (DDPG) and soft actor-critic (SAC) algorithms are developed and compared. Simulation results show that SAC outperforms DDPG in convergence speed, stability, and achievable utility, reaching up to 96% of the exhaustive search benchmark and demonstrating the potential of RIS to improve overall utility in multi-VSP scenarios.
As most optical satellites remotely acquire multispectral images (MSIs) with limited spatial resolution, multispectral unmixing (MU) becomes a critical signal processing technology for analyzing the pure material spectra for high-precision classification and identification. Unlike the widely investigated hyperspectral unmixing (HU) problem, MU is much more challenging as it corresponds to the underdetermined blind source separation (BSS) problem, where the number of sources is larger than the number of available multispectral bands. In this article, we transform MU into its overdetermined counterpart (i.e., HU) by inventing a radically new quantum deep image prior (QDIP), which relies on the virtual band-splitting task conducted on the observed MSI for generating the virtual hyperspectral image (HSI). Then, we perform HU on the virtual HSI to obtain the virtual hyperspectral sources. Though HU is overdetermined, it still suffers from the ill-posed issue, for which we employ the convex geometry structure of the HSI pixels to customize a weighted simplex shrinkage (WSS) regularizer to mitigate the ill-posedness. Finally, the virtual hyperspectral sources are spectrally downsampled to obtain the desired multispectral sources. The proposed geometry/quantum-empowered MU (GQ-$\mu$) algorithm can also effectively obtain the spatial abundance distribution map for each source, where the geometric WSS regularization is adaptively and automatically controlled based on the sparsity pattern of the abundance tensor. Simulation and real-world data experiments demonstrate the practicality of our unsupervised GQ-$\mu$ algorithm for the challenging MU task. An ablation study demonstrates the strength of QDIP, not achieved by classical DIP, and validates the mechanics-inspired WSS geometry regularizer.
Static synchronous compensators (STATCOMs) manage reactive power compensation in modern power grids and have become essential for the integration of renewable energy sources such as wind farms. Cascaded H-bridges have become the preferred topology for high-power STATCOMs, but balancing module capacitor voltages remains a persistent challenge. Conventional solutions equip every module with a voltage sensor -- a component that is costly, temperature-sensitive, and prone to aging-related failures. Recent parallel-capable module topologies can balance voltage through switched-capacitor operation. The latest developments reduced the sensor requirement from one per module to one per arm. However, these implementations require twice as many individual transistors compared to series-only topologies. We present a STATCOM solution based on the four-transistor four-diode (4T4D) series/parallel chopper cell. This topology achieves bidirectional parallelization with only four transistors per module -- exactly as many as a conventional full bridge. Furthermore, we propose a dual-loop control strategy that fully eliminates module voltage sensors by inferring voltage levels from the modulation index. This scheme also improves output quality by regulating the modulation depth. We validated our proposal through simulation and experiments. We built a prototype to interface with the grid. The prototype further passed robustness tests with step change, current direction reversal, and grid disturbance. This work demonstrates the first modular STATCOM implementation that combines minimum transistor count with complete elimination of module voltage sensors.
Text-guided diffusion models have advanced image editing by enabling intuitive control through language. However, despite their strong capabilities, we surprisingly find that SOTA methods struggle with simple, everyday transformations such as adding rain or blur. We attribute this limitation to weak and inconsistent textual supervision during training, which leads to poor alignment between language and vision. Existing solutions often rely on extra finetuning or stronger text conditioning, but suffer from high data and computational requirements. We argue that diffusion-based editing capabilities are not lost but merely hidden from text. The door to cost-efficient visual editing remains open, and the key lies in a vision-centric paradigm that perceives and reasons about visual change as humans do, beyond words. Inspired by this, we introduce Visual Diffusion Conditioning (VDC), a training-free framework that learns conditioning signals directly from visual examples for precise, language-free image editing. Given a paired example (one image with and one without the target effect), VDC derives a visual condition that captures the transformation and steers generation through a novel condition-steering mechanism. An accompanying inversion-correction step mitigates reconstruction errors during DDIM inversion, preserving fine detail and realism. Across diverse tasks, VDC outperforms both training-free and fully fine-tuned text-based editing methods. The code and models are open-sourced at this https URL
This paper investigates an uplink multi-user covert communication system with spatially distributed users. Unlike prior works that approximate channel statistics using averaged parameters and homogeneous assumptions, this study explicitly models each user's geometric position and the corresponding user-to-Willie and user-to-Bob channel variances. This approach enables an accurate characterization of spatially heterogeneous covert environments. We mathematically prove that a generalized on-off power control scheme, which jointly accounts for both Bob's and Willie's channels, constitutes the optimal transmission strategy in heterogeneous user configurations. Leveraging the optimal strategy, we derive closed-form expressions for the minimum detection error probability and the minimum number of cooperative users required to satisfy a covert constraint. With the closed-form expressions, comprehensive theoretical analyses are conducted, which are validated by Monte Carlo simulations. One important insight obtained from the analysis is that user spatial heterogeneity can enhance covert communication performance. Building on these findings, a piecewise search algorithm is proposed to achieve exact optimality with significantly reduced computational complexity. We demonstrate that optimization accounting for the users' spatial heterogeneity achieves substantially better covert communication performance than optimization based on the assumption of spatial homogeneity.
In user-generated content (UGC) transcoding, source videos typically suffer various degradations due to prior compression, editing, or suboptimal capture conditions. Consequently, existing video compression paradigms that solely optimize for fidelity relative to the reference become suboptimal, as they force the codec to replicate the inherent artifacts of the non-pristine source. To address this, we propose a novel perceptually inspired loss function for learning-based UGC video transcoding that redefines the role of the reference video, shifting it from a ground-truth pixel anchor to an informative contextual guide. Specifically, we train a lightweight neural quality model based on a Selective Structured State-Space Model (Mamba) optimized using a weakly-supervised Siamese ranking strategy. The proposed model is then integrated into the rate-distortion optimization (RDO) process of two neural video codecs (DCVC and HiNeRV) as a loss function, aiming to generate reconstructed content with improved perceptual quality. Our experiments demonstrate that this framework achieves substantial coding gains over both autoencoder and implicit neural representation-based baselines, with 8.46% and 12.89% BD-rate savings, respectively.
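The weakly-supervised Siamese ranking strategy mentioned above is not spelled out in the abstract; a minimal NumPy sketch of the pairwise margin ranking objective such strategies typically optimize (the scores, margin value, and function name are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def margin_ranking_loss(score_better, score_worse, margin=0.1):
    """Pairwise margin ranking loss: penalize the model whenever the
    quality score of the 'better' clip does not exceed the 'worse'
    clip's score by at least the margin."""
    return np.maximum(0.0, margin - (score_better - score_worse))

# Toy quality scores for two video pairs from a hypothetical quality model.
better = np.array([0.8, 0.4])
worse = np.array([0.3, 0.45])
loss = margin_ranking_loss(better, worse)
print(loss)  # first pair satisfies the ranking (zero loss); second violates it
```

Only the ordering of the two scores within a pair is supervised, which is what makes the scheme weakly supervised: no absolute quality labels are required.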
This paper proposes a physics-informed learning framework for a class of recurrent neural networks tailored to large-scale and networked systems. The approach aims to learn control-oriented models that preserve the structural and stability properties of the plant. The learning algorithm is formulated as a convex optimisation problem, allowing the inclusion of linear matrix inequality constraints to enforce desired system features. Furthermore, when the plant exhibits structural modularity, the resulting optimisation problem can be parallelised, requiring communication only among neighbouring subsystems. Simulation results show the effectiveness of the proposed approach.
The number of low Earth orbit (LEO) satellite constellations has grown rapidly in recent years, bringing a major change to global wireless communications. As LEO satellite links take on a growing role in critical services such as emergency communications, navigation, wide-area data collection, and military operations, keeping these links secure has become an important concern. In particular, verifying the identity of a satellite transmitter is now a basic requirement for protecting the services that rely on satellite access. In this article, we propose an active challenge-response authentication framework in which the verifier checks the satellite at randomly chosen times that are not known in advance, removing the fixed measurement window that existing passive methods expose to adversaries. The proposed framework uses the deterministic yet unpredictably sampled nature of orbital observables to establish a physics-based root of trust for satellite identity authentication. This approach transforms satellite authentication from static feature matching into a spatiotemporal consistency verification problem inherently constrained by orbital dynamics, providing robust protection even against trajectory-aware spoofing attacks.
Reflecting intelligent surface (RIS) is a promising technology for 6G mobile communications. However, identifying the niche of RIS within mobile networks is a challenging task. To mitigate the escalating system complexity of mobile networks, we propose the concept of Intelligent Reflection as a Service (IRaaS), and discuss its system architecture, enabling technologies, and deployment strategy. By leveraging technologies such as resource pooling, service-based architecture (SBA), cloud infrastructure, and model-free signal processing, IRaaS empowers telecom operators to deliver on-demand intelligent reflection services without a radical update of current communication protocols. In addition, IRaaS brings a novel deployment strategy that creates new opportunities for the vendors of intelligent reflection services and balances the interests of both telecom operators and property owners. IRaaS is expected to speed up the rollout of RIS from both technical and commercial perspectives, fostering an authentic smart radio environment for future mobile communications.
Reactive sputtering is a plasma-based technique to deposit a thin film on a substrate. This contribution presents a novel parameter-interval estimation method for a well-established model that describes the uncertain and nonlinear reactive sputtering process behaviour. Building on a proposed monotonicity-based model classification, the method guarantees that all parameterizations within the parameter interval yield output trajectories and static characteristics consistent with the enclosure induced by the parameter interval. Correctness and practical applicability of the new method are demonstrated by an experimental validation, which also reveals inherent structural limitations of the well-established process model for state-estimation tasks.
The evolution toward 6G communication systems is expected to rely on integrated three-dimensional network architectures where terrestrial infrastructures coexist with non-terrestrial stations such as satellites, enabling ubiquitous connectivity and service continuity. In this context, accurate channel models for satellite-to-ground propagation in urban environments are essential, particularly for user equipment located at street level where obstruction and multipath effects are significant. This work investigates satellite-to-urban propagation through deterministic ray-tracing simulations. Three representative urban layouts are considered, namely dense urban, urban, and suburban. Multiple use cases are investigated, including handheld devices, vehicular terminals, and fixed rooftop receivers operating across several frequency bands. The analysis focuses on the relative importance of competing propagation mechanisms and on two key channel parameters, namely the Rician K-factor and the delay spread, which are relevant for the calibration of channel models to be used in link- and system-level simulations. Results highlight the strong, and in some cases unconventional, dependence of channel dispersion and fading characteristics on satellite elevation, antenna placement, and urban morphology.
Early screening via colonoscopy is critical for colon cancer prevention, yet developing robust AI systems for this domain is hindered by the lack of densely annotated, long-sequence video datasets. Existing datasets predominantly focus on single-class polyp detection and lack the rich spatial, temporal, and linguistic annotations required to evaluate modern Multimodal Large Language Models (MLLMs). To address this critical gap, we introduce Colon-Bench, generated via a novel multi-stage agentic workflow. Our pipeline seamlessly integrates temporal proposals, bounding-box tracking, AI-driven visual confirmation, and human-in-the-loop review to scalably annotate full-procedure videos. The resulting verified benchmark is unprecedented in scope, encompassing 528 videos, 14 distinct lesion categories (including polyps, ulcers, and bleeding), over 300,000 bounding boxes, 213,000 segmentation masks, and 133,000 words of clinical descriptions. We utilize Colon-Bench to rigorously evaluate state-of-the-art MLLMs across lesion classification, Open-Vocabulary Video Object Segmentation (OV-VOS), and video Visual Question Answering (VQA). The MLLM results demonstrate surprisingly high localization performance in medical domains compared to SAM-3. Finally, we analyze common VQA errors from MLLMs to introduce a novel "colon-skill" prompting strategy, improving zero-shot performance by up to 9.7% across most MLLMs. The dataset and the code are available at this https URL.
The limited availability of dysarthric speech data makes cross-lingual detection an important but challenging problem. A key difficulty is that speech representations often encode language-dependent structure that can confound dysarthria detection. We propose a representation-level language shift (LS) that aligns source-language self-supervised speech representations with the target-language distribution using centroid-based vector adaptation estimated from healthy-control speech. We evaluate the approach on oral DDK recordings from Parkinson's disease speech datasets in Czech, German, and Spanish under both cross-lingual and multilingual settings. LS substantially improves sensitivity and F1 in cross-lingual settings, while yielding smaller but consistent gains in multilingual settings. Representation analysis further shows that LS reduces language identity in the embedding space, supporting the interpretation that LS removes language-dependent structure.
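The centroid-based vector adaptation underlying the language shift can be sketched in a few lines of NumPy. This is a schematic illustration under the assumption that LS translates source-language embeddings by the difference of healthy-control centroids (the array shapes and function name are illustrative):

```python
import numpy as np

def language_shift(src_embs, src_hc_centroid, tgt_hc_centroid):
    """Shift source-language embeddings by the difference between the
    target- and source-language healthy-control (HC) centroids."""
    return src_embs + (tgt_hc_centroid - src_hc_centroid)

rng = np.random.default_rng(0)
src = rng.normal(loc=0.0, size=(5, 8))   # toy source-language features
tgt_hc = np.full(8, 1.0)                 # toy target-language HC centroid
src_hc = src.mean(axis=0)                # source-language HC centroid
shifted = language_shift(src, src_hc, tgt_hc)
# After the shift, the source centroid coincides with the target centroid,
# while within-language structure (e.g., dysarthria cues) is preserved.
print(np.allclose(shifted.mean(axis=0), tgt_hc))  # True
```

Because the shift is a pure translation, distances between embeddings within a language are unchanged; only the language-dependent offset is removed.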
This paper proposes a unified control framework based on Response-Aware Risk-Constrained Control Barrier Function for dynamic safety boundary control of vehicles. Addressing the problem of physical model parameter mismatch, the framework constructs an uncertainty propagation model that fuses nominal dynamics priors with direct vehicle body responses. Utilizing simplified single-track dynamics to provide a baseline direction for control gradients and covering model deviations through statistical analysis of body response signals, the framework eliminates the dependence on accurate online estimation of road surface adhesion coefficients. By introducing Conditional Value at Risk (CVaR) theory, the framework reformulates traditional deterministic safety constraints into probabilistic constraints on the tail risk of barrier function derivatives. Combined with a Bayesian online learning mechanism based on inverse Wishart priors, it identifies environmental noise covariance in real-time, adaptively tuning safety margins to reduce performance loss under prior parameter mismatch. Finally, based on Control Lyapunov Function (CLF), a unified Second-Order Cone Programming (SOCP) controller is constructed. Theoretical analysis establishes convergence of Sequential Convex Programming to local Karush-Kuhn-Tucker points and provides per-step probabilistic safety bounds. High-fidelity dynamics simulations demonstrate that under extreme conditions, the method not only eliminates the output divergence phenomenon of traditional methods but also achieves Pareto improvement in both safety and tracking performance. For the chosen risk level, the per-step safety violation probability is theoretically bounded by approximately 2%, validated through high-fidelity simulations showing zero boundary violations across all tested scenarios.
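The tail-risk constraint above rests on the Conditional Value at Risk; an empirical CVaR estimator (a generic sketch, not the paper's formulation, with illustrative sample values) looks like:

```python
import numpy as np

def cvar(samples, alpha=0.95):
    """Empirical Conditional Value at Risk: the mean of the worst
    (1 - alpha) tail of the samples (larger values = riskier)."""
    var = np.quantile(samples, alpha)     # Value at Risk threshold
    tail = samples[samples >= var]        # worst-case tail
    return tail.mean()

# Toy samples of a barrier-function-derivative violation measure.
losses = np.array([0.1, 0.2, 0.3, 0.4, 5.0])
print(cvar(losses, alpha=0.8))  # mean of the worst 20% tail
```

Unlike a plain expectation, CVaR focuses the constraint on the rare, dangerous tail events, which is why it yields probabilistic rather than deterministic safety guarantees.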
The classical asymptotic gain (AG) is a concept known from the input-to-state stability theory. Given a uniform input bound, AG estimates the asymptotic bound of the output. Sometimes, however, more information is known about the input than just a bound. In this paper we consider the case of a periodic input. Under the assumption that the system converges to a periodic solution, we introduce a new gain, called period-aware asymptotic gain (PAG), which employs periodicity to enable a sharper asymptotic estimation of the output. Since the PAG can distinguish between short-period ("high-frequency") and long-period ("low-frequency") signals, it is able to rigorously quantify such properties as bandwidth, resonant behavior, and high-frequency damping. We discuss how the PAG can be computed and illustrate it with a numerical example from the field of power electronics.
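The sharpening that periodicity buys can be illustrated on a toy first-order system x' = -x + u. This is an illustrative analogy, not the paper's PAG definition: the classical AG for |u| <= 1 is 1 (attained by constant inputs), while a sinusoid of period T sees a much smaller steady-state gain when T is short:

```python
import numpy as np

def classical_ag():
    """Asymptotic gain of x' = -x + u over all inputs with |u| <= 1."""
    return 1.0

def period_aware_gain(period):
    """Steady-state amplitude gain of x' = -x + u for a unit sinusoid
    of the given period: |1 / (1 + j*w)| with w = 2*pi / period."""
    w = 2 * np.pi / period
    return 1.0 / np.sqrt(1.0 + w**2)

for T in (100.0, 1.0):
    print(T, period_aware_gain(T))  # long period: near 1; short period: far below 1
```

The period-aware gain thus captures exactly the low-pass behavior (bandwidth, high-frequency damping) that the classical AG, which only sees the input bound, cannot express.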
The fast motion of Low Earth Orbit (LEO) satellites causes the propagation channel to vary rapidly, and its behavior is strongly shaped by the surrounding environment, especially at low elevation angles where signals are highly susceptible to terrain blockage and other environmental effects. Existing studies mostly rely on assumed statistical channel distributions and therefore ignore the influence of the actual geographic environment. In this paper, we propose an environment-aware channel modeling method for air-to-ground wireless links. We leverage real environmental data, including digital elevation models (DEMs) and land cover information, together with ray tracing (RT) to determine whether a link is line-of-sight (LOS) or non-line-of-sight (NLOS) and to identify possible reflection paths of the signal. The resulting obstruction and reflection profiles are then combined with models of diffraction loss, vegetation absorption, and atmospheric attenuation to quantitatively characterize channel behavior in realistic geographic environments. Since RT is computationally intensive, we use RT-generated samples and environmental features to train a scalable diffusion model that can efficiently predict channel performance for arbitrary satellite and ground terminal positions, thereby supporting real-time decision-making. In the experiments, we validate the proposed model with measurement data from both cellular and LEO satellite links, demonstrating its effectiveness in realistic environments.
Converting process sketches into executable simulation models remains a major bottleneck in process systems engineering, requiring substantial manual effort and simulator-specific expertise. Recent advances in generative AI have improved both engineering-diagram interpretation and LLM-assisted flowsheet generation, but these remain largely disconnected: diagram-understanding methods often stop at extracted graphs, while text-to-simulation workflows assume structured inputs rather than raw visual artifacts. To bridge this gap, we present an end-to-end multi-agent large language model system that converts process diagrams directly into executable Aspen HYSYS flowsheets. The framework decomposes the task into three coordinated layers: diagram parsing and interpretation, simulation model synthesis, and multi-level validation. Specialized agents handle visual interpretation, graph-based intermediate representation construction, code generation for the HYSYS COM interface, execution, and structural verification. We evaluate the framework on four chemical engineering case studies of increasing complexity, from a simple desalting process to an industrial aromatic production flowsheet with multiple recycle loops. The system produces executable HYSYS models in all cases, achieving complete structural fidelity on the two simpler cases and strong performance on the more complex ones, with connection consistency above 0.93 and stream consistency above 0.96. These results demonstrate a viable end-to-end sketch-to-simulation workflow while highlighting remaining challenges in dense recycle structures, implicit diagram semantics, and simulator-interface constraints.
Automatic depression detection from doctor-patient conversations has gained momentum thanks to the availability of public corpora and advances in language modeling. However, interpretability remains limited: strong performance is often reported without revealing what drives predictions. We analyze three datasets (ANDROIDS, DAIC-WOZ, and E-DAIC) and identify a systematic bias from interviewer prompts in semi-structured interviews. Models trained on interviewer turns exploit fixed prompts and positions to distinguish depressed from control subjects, often achieving high classification scores without using participant language. Restricting models to participant utterances distributes decision evidence more broadly and reflects genuine linguistic cues. While semi-structured protocols ensure consistency, including interviewer prompts inflates performance by leveraging script artifacts. Our results highlight a cross-dataset, architecture-agnostic bias and emphasize the need for analyses that localize decision evidence by time and speaker to ensure models learn from participants' language.
AI assistants can decompose multi-step workflows, but they do not natively speak industrial protocols such as Modbus, MQTT/Sparkplug B, or OPC UA. This paper presents INDUSTRICONNECT, a prototype suite of Model Context Protocol (MCP) adapters that expose industrial operations as schema-discoverable AI tools while preserving protocol-specific connectivity and safety controls. The system uses a common response envelope and a mock-first workflow so adapter behavior can be exercised locally before connecting to plant equipment. A deterministic benchmark covering normal, fault-injected, stress, and recovery scenarios evaluates the flagship adapters, comprising 870 runs (480 normal, 210 fault-injected, 120 stress, and 60 recovery trials) and 2820 tool calls across 7 fault scenarios and 12 stress scenarios. The normal suite achieved full success, the fault suite confirmed structured error handling with adapter-level uint16 range validation, the stress suite identified concurrency boundaries, and same-session recovery after endpoint restart was demonstrated for all three protocols. The results provide evidence spanning adapter correctness, concurrency behavior, and structured error handling for AI-assisted industrial operations.
Analog design often slows down because even small changes to device sizes or biases require expensive simulation cycles, and high-quality solutions typically occupy only a narrow part of a very large search space. While existing optimizers reduce some of this burden, they largely operate without the kind of judgment designers use when deciding where to search next. This paper presents an actor-critic optimization framework (ACOF) for analog sizing that brings that form of guidance into the loop. Rather than treating optimization as a purely black-box search problem, ACOF separates the roles of proposal and evaluation: an actor suggests promising regions of the design space, while a critic reviews those choices, enforces design legality, and redirects the search when progress is hampered. This structure preserves compatibility with standard simulator-based flows while making the search process more deliberate, stable, and interpretable. Across our test circuits, ACOF improves the top-10 figure of merit by an average of 38.9% over the strongest competing baseline and reduces regret by an average of 24.7%, with peak gains of 70.5% in FoM and 42.2% lower regret on individual circuits. By combining iterative reasoning with simulation-driven search, the framework offers a more transparent path toward automated analog sizing across challenging design spaces.
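The proposal/evaluation split can be sketched as a simple loop in which the actor samples candidate sizings and the critic enforces legality and redirects the search. This is a heavily simplified sketch under assumed toy objectives; the simulator stub, bounds, and search rule are illustrative stand-ins, not ACOF's actual components:

```python
import numpy as np

def simulate_fom(x):
    """Stand-in for a circuit simulator returning a figure of merit
    (toy landscape with its optimum at x = 0.3 in every dimension)."""
    return -np.sum((x - 0.3) ** 2)

def is_legal(x):
    """Critic-side design-rule check (toy bounds on device sizes)."""
    return np.all((x >= 0.0) & (x <= 1.0))

rng = np.random.default_rng(1)
center, best, best_fom = np.full(2, 0.5), None, -np.inf
for _ in range(50):
    proposal = center + rng.normal(scale=0.1, size=2)  # actor: propose a region
    if not is_legal(proposal):                         # critic: reject illegal designs
        continue
    fom = simulate_fom(proposal)                       # simulator-based evaluation
    if fom > best_fom:
        best, best_fom = proposal, fom
        center = proposal                              # critic: redirect the search
print(best, best_fom)
```

The point of the structure is that every simulator call is vetted and interpreted, rather than consumed by an opaque black-box optimizer.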
Quantifying human movement (kinematics) and musculoskeletal forces (kinetics) at scale, such as estimating quadriceps force during a sit-to-stand movement, could transform prediction, treatment, and monitoring of mobility-related conditions. However, quantifying kinematics and kinetics traditionally requires costly, time-intensive analysis in specialized laboratories, limiting clinical translation. Scalable, accurate tools for biomechanical assessment are needed. We introduce OpenCap Monocular, an algorithm that estimates 3D skeletal kinematics and kinetics from a single smartphone video. The method refines 3D human pose estimates from a monocular pose estimation model (WHAM) via optimization, computes kinematics of a biomechanically constrained skeletal model, and estimates kinetics via physics-based simulation and machine learning. We validated OpenCap Monocular against marker-based motion capture and force plate data for walking, squatting, and sit-to-stand tasks. OpenCap Monocular achieved low kinematic error (4.8° mean absolute error for rotational degrees of freedom; 3.4 cm for pelvis translations), outperforming a regression-only computer vision baseline by 48% in rotational accuracy (p = 0.036) and 69% in translational accuracy (p < 0.001). OpenCap Monocular also estimated ground reaction forces during walking with accuracy comparable to, or better than, our prior two-camera OpenCap system. We demonstrate that the algorithm estimates important kinetic outcomes with clinically meaningful accuracy in applications related to frailty and knee osteoarthritis, including estimating knee extension moment during sit-to-stand transitions and knee adduction moment during walking. OpenCap Monocular is deployed via a smartphone app, web app, and secure cloud computing (this https URL), enabling free, accessible single-smartphone biomechanical assessments.
Optimization algorithms can be interpreted through the lens of dynamical systems as the interconnection of linear systems and a set of subgradient nonlinearities. This dynamical systems formulation allows for the analysis and synthesis of optimization algorithms by solving robust control problems. In this work, we use the celebrated internal model principle in control theory to structurally factorize convergent composite optimization algorithms into suitable network-dependent internal models and core subcontrollers. As the key benefit, we reveal that this permits us to synthesize optimization algorithms even if information is transmitted over networks featuring dynamical phenomena such as time delays, channel memory, or crosstalk. Design of these algorithms is achieved under bisection in the exponential convergence rate either through a nonconvex local search or by alternation of convex semidefinite programs. We demonstrate factorization of existing optimization algorithms and the automated synthesis of new optimization algorithms in the networked setting.
Biohybrid robots integrate living tissues with engineered artificial structures to achieve organism-inspired actuation and behavior. A persistent challenge is delivering stimulation and control signals without relying on tethered wiring or bulky hardware immersed in cell-culture media. Wireless bioelectronics addresses this limitation by enabling the remote transfer of control signals, typically via radio-frequency magnetic fields, to locally stimulate muscle tissues at tissue-electrode interfaces. In parallel, wireless optoelectronics enables remote control of optogenetically modified, muscle-based robots by embedding light emitters that initiate muscle actuation through light-gated ion channels. Further advances incorporate neuromuscular junctions, leveraging biological signal transduction to enable selective control of multiple actuators through wireless frequency- and time-division multiplexing. This perspective article summarizes recent advances in control strategies for biohybrid robots, namely, wireless electrical stimulation, wireless optical stimulation, and neuromuscular integration. It then describes cross-cutting design principles and highlights a future direction, namely, the co-integration of neural organoids with bioelectronics toward autonomous, closed-loop biohybrid robots.
Accurate forecasting of future solar irradiance is essential for the effective control of solar thermal power plants. Although various kriging-based methods have been proposed to address the prediction problem, these methods typically do not provide an appropriate sampling strategy to dynamically position mobile sensors for optimizing prediction accuracy in real time, which is critical for achieving accurate forecasts with a minimal number of sensors. This paper introduces a dissimilarity map derived from a kriging model and proposes a persistent coverage control algorithm that effectively guides agents toward regions where additional observations are required to improve prediction performance. By means of experiments using mobile robots, the proposed approach was shown to obtain more accurate predictions than the considered baselines under various emulated irradiance fields.
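The idea of steering sensors toward regions where the kriging model is least informative can be sketched with a predictive-variance surrogate. This is an illustrative sketch only: it uses a simple-kriging (Gaussian-process) variance on a 1-D field as a stand-in for the paper's dissimilarity map, and the kernel, length scale, and candidate grid are assumptions:

```python
import numpy as np

def rbf(a, b, ell=0.5):
    """Squared-exponential kernel between two sets of 1-D locations."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / ell**2)

def kriging_variance(x_obs, x_cand, noise=1e-6):
    """Simple-kriging predictive variance at candidate locations."""
    K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    k = rbf(x_obs, x_cand)
    return 1.0 - np.sum(k * np.linalg.solve(K, k), axis=0)

x_obs = np.array([0.0, 1.0])           # current sensor locations
x_cand = np.linspace(0.0, 1.0, 101)    # candidate positions on a 1-D field
var = kriging_variance(x_obs, x_cand)
next_pos = x_cand[np.argmax(var)]      # steer a sensor to highest uncertainty
print(next_pos)                        # the midpoint, farthest from both observations
```

A coverage controller would use such a map as a density, continuously driving agents toward high-variance regions rather than jumping to a single argmax.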
Integrating AI into the physical layer is a cornerstone of 6G networks. However, current data-driven approaches struggle to generalize across dynamic environments because they lack an intrinsic understanding of electromagnetic wave propagation. We introduce the Wireless World Model (WWM), a multi-modal foundation framework predicting the spatiotemporal evolution of wireless channels by internalizing the causal relationship between 3D geometry and signal dynamics. Pre-trained on a massive ray-traced multi-modal dataset, WWM overcomes the data authenticity gap, further validated under real-world measurement data. Using a joint-embedding predictive architecture with a multi-modal mixture-of-experts Transformer, WWM fuses channel state information, 3D point clouds, and user trajectories into a unified representation. Across the five key downstream tasks supported by WWM, it achieves remarkable performance in seen environments, unseen generalization scenarios, and real-world measurements, consistently outperforming SOTA uni-modal foundation models and task-specific models. This paves the way for physics-aware 6G intelligence that adapts to the physical world.
Research on mobile manipulation systems that physically interact with humans has expanded rapidly in recent years, opening the way to tasks which could not be performed using fixed-base manipulators. Within this context, developing suitable control methodologies is essential since mobile manipulators introduce additional degrees of freedom, making the design of control approaches more challenging but also more amenable to performance optimization. This paper proposes a control approach for a mobile manipulator, composed of a mobile base equipped with a robotic arm mounted on top, with the objective of minimizing the overall kinetic energy stored in the whole-body mobile manipulator in physical human-robot interaction applications. The approach is experimentally tested on a peg-in-hole task, and the results demonstrate that the proposed approach reduces the overall kinetic energy stored in the whole-body robotic system and improves the system performance compared with the benchmark method.
In this paper we study the stability properties of the equilibrium point for an age-structured chemostat model with renewal boundary condition and coupled substrate dynamics under constant dilution rate. This is a complex infinite-dimensional feedback system with two nonlinear feedback loops: a positive static loop due to reproduction at the age-zero boundary of the PDE, counteracted and dominated by a negative dynamic loop through the substrate dynamics. The derivation of explicit sufficient conditions that guarantee global stability estimates is carried out by using an appropriate Lyapunov functional. The constructed Lyapunov functional guarantees global exponential decay estimates and uniform global asymptotic stability with respect to a measure related to the Lyapunov functional. From a biological perspective, stability arises because reproduction is constrained by substrate availability, while dilution, mortality, and substrate depletion suppress transient increases in biomass before age-structure effects can amplify them. The obtained results are applied to a chemostat model from the literature, where the derived stability condition is compared with existing results that are based on (necessarily local) linearization methods.
Low-altitude communications can promote the integration of aerial and terrestrial wireless resources, expand network coverage, and enhance transmission quality, thereby empowering the development of sixth-generation (6G) mobile communications. As an enabler for low-altitude transmission, 3D channel fingerprints (3D-CF), also referred to as the 3D radio map or 3D channel knowledge map, are expected to enhance the understanding of communication environments and assist in the acquisition of channel state information (CSI), thereby avoiding repeated estimations and reducing computational complexity. In this paper, we propose a modularized multimodal framework to construct 3D-CF. Specifically, we first establish the 3D-CF model as a collection of CSI-tuples based on Rician fading channels, with each tuple comprising the low-altitude vehicle's (LAV) positions and its corresponding statistical CSI. In consideration of the heterogeneous structures of different prior data, we formulate the 3D-CF construction problem as a multimodal regression task, where the target channel information in the CSI-tuple can be estimated directly by its corresponding LAV positions, together with communication measurements and geographic environment maps. Then, a high-efficiency multimodal framework is proposed accordingly, which includes a correlation-based multimodal fusion (Corr-MMF) module, a multimodal representation (MMR) module, and a CSI regression (CSI-R) module. Numerical results show that our proposed framework can efficiently construct 3D-CF and achieve at least 27.5% higher accuracy than the state-of-the-art algorithms under different communication scenarios, demonstrating its competitive performance and excellent generalization ability. We also analyze the computational complexity and illustrate its superiority in terms of the inference time.
Multiphase flows occur frequently both in nature and in manufactured devices. Controlling such phenomena is extremely challenging due to the strongly non-linear dynamics, rapid phase transitions, and the limited spatial and temporal resolution of available sensors, which can lead to significant inaccuracies in predicting and managing these flows. In most cases, numerical models are the only way to access high spatial and temporal resolution data to an extent that allows for fine control. While embedding numerical models in control algorithms could enable fine control of multiphase processes, the significant computational burden currently limits their practical application. This work proposes a surrogate-assisted model predictive control (MPC) framework for regulating multiphase processes using learned operators. A Fourier Neural Operator (FNO) is trained to forecast the spatiotemporal evolution of a phase-indicator field (the volume fraction) over a finite horizon from a short history of recent states and a candidate actuation signal. The neural operator surrogate is then iteratively called during the optimisation process to identify the optimal control variable. To illustrate the approach, we solve an optimal control problem (OCP) on a two-phase Eulerian bubble column, where the controller tracks piecewise-constant liquid level setpoints by adjusting the gas flow rate introduced into the system. Our results indicate that field-level forecasting with FNOs is well suited for closed-loop optimization owing to its relatively low evaluation cost. This provides a practical route toward MPC for fast multiphase unit operations and a foundation for future extensions to partial observability and physics-informed operator learning.
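The surrogate-in-the-loop MPC idea can be condensed into a few lines. This sketch replaces the FNO with a toy first-order response of the liquid level to the gas flow rate and uses a brute-force candidate search instead of a gradient-based optimiser; both substitutions are illustrative assumptions:

```python
import numpy as np

def surrogate_forecast(level, gas_flow, horizon=10):
    """Toy stand-in for the FNO surrogate: first-order response of the
    liquid level to the gas flow rate (illustrative dynamics only)."""
    traj = []
    for _ in range(horizon):
        level = level + 0.1 * (gas_flow - level)
        traj.append(level)
    return np.array(traj)

def mpc_step(level, setpoint, candidates):
    """Pick the actuation whose surrogate forecast tracks the setpoint
    best over the horizon (brute-force candidate search)."""
    costs = [np.sum((surrogate_forecast(level, u) - setpoint) ** 2)
             for u in candidates]
    return candidates[int(np.argmin(costs))]

u = mpc_step(level=0.0, setpoint=1.0, candidates=np.linspace(0.0, 2.0, 21))
print(u)
```

The controller applies the chosen actuation, advances the real (or simulated) plant one step, and repeats, which is exactly where the surrogate's low evaluation cost pays off.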
Automatic image rotation estimation is a key preprocessing step in many vision pipelines. This task is challenging because angles have circular topology, creating boundary discontinuities that hinder standard regression methods. We present a comprehensive study of five circular-aware methods for global orientation estimation: direct angle regression with circular loss, classification via angular binning, unit-vector regression, phase-shifting coder, and circular Gaussian distribution. Using transfer learning from ImageNet-pretrained models, we systematically evaluate these methods across sixteen modern architectures by adapting their output heads for rotation-specific predictions. Our results show that probabilistic methods, particularly the circular Gaussian distribution, are the most robust across architectures, while classification achieves the best accuracy on well-matched backbones but suffers training instabilities on others. The best configuration (classification with EfficientViT-B3) achieves a mean absolute error (MAE) of 1.23° (mean across five independent runs) on the DRC-D dataset, while the circular Gaussian distribution with MambaOut Base achieves a virtually identical 1.24° with greater robustness across backbones. Training and evaluating our top-performing method-architecture combinations on COCO 2014, the best configuration reaches 3.71° MAE, improving substantially over prior work, with further improvement to 2.84° on the larger COCO 2017 dataset.
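The boundary-discontinuity problem and one circular-aware remedy (unit-vector regression) mentioned in this abstract can be made concrete with a short sketch; this is illustrative only and not the paper's implementation:

```python
import numpy as np

def encode_angle(theta_deg):
    """Encode an angle as a unit vector, removing the 0/360 boundary."""
    t = np.deg2rad(theta_deg)
    return np.stack([np.cos(t), np.sin(t)], axis=-1)

def decode_angle(vec):
    """Decode a (possibly non-unit) 2-vector back to degrees in [0, 360)."""
    return np.rad2deg(np.arctan2(vec[..., 1], vec[..., 0])) % 360.0

def circular_mae(pred_deg, true_deg):
    """Mean absolute angular error respecting circular topology:
    the error between 359 deg and 1 deg is 2 deg, not 358 deg."""
    d = np.abs(np.asarray(pred_deg) - np.asarray(true_deg)) % 360.0
    return float(np.mean(np.minimum(d, 360.0 - d)))
```

A network head regressing the `(cos, sin)` pair and decoded with `atan2` avoids the wrap-around penalty that breaks naive angle regression.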
The use of hyperspectral imaging (HSI) in autonomous driving (AD), while promising, faces many challenges related to the specifics and requirements of this application domain. On the one hand, there are non-controlled and variable lighting conditions, wide depth-of-field ranges, and dynamic scenes with fast-moving objects; on the other hand, the requirements of real-time operation and the limited computational resources of embedded platforms. The combination of these factors determines both the criteria for selecting appropriate HSI technologies and the development of custom vision algorithms that leverage the spectral and spatial information obtained from the sensors. In this article, we analyse several techniques explored in the research of HSI-based vision systems with application to AD, illustrated with results obtained from experiments using data from the most recent version of the HSI-Drive dataset.
Non-fixed flexible antenna architectures, such as fluid antenna system (FAS), movable antenna (MA), and pinching antenna, have garnered significant interest in recent years. Among them, rotatable antenna (RA) has emerged as a promising technology for enhancing wireless communication and sensing performance through flexible antenna orientation/boresight rotation. By enabling mechanical or electronic boresight adjustment without altering physical antenna positions, RA introduces additional spatial degrees of freedom (DoFs) beyond conventional beamforming. In this paper, we provide a comprehensive tutorial on the fundamentals, architectures, and applications of RA-empowered wireless networks. Specifically, we begin by reviewing the historical evolution of RA-related technologies and clarifying the distinctive role of RA among flexible antenna architectures. Then, we establish a unified mathematical framework for RA-enabled systems, including general antenna/array rotation models, as well as channel models that cover near- and far-field propagation characteristics, wideband frequency selectivity, and polarization effects. Building upon this foundation, we investigate antenna/array rotation optimization in representative communication and sensing scenarios. Furthermore, we examine RA channel estimation/acquisition strategies encompassing orientation scheduling mechanisms and signal processing methods that exploit multi-view channel observations. Beyond theoretical modeling and algorithmic design, we discuss practical RA configurations and deployment strategies. We also present recent RA prototypes and experimental results that validate the practical performance gains enabled by antenna rotation. Finally, we highlight promising extensions of RA to emerging wireless paradigms and outline open challenges to inspire future research.
Induced seismicity caused by fluid extraction or injection in underground reservoirs is a major challenge for safe energy production and storage. This paper presents a robust output-feedback controller for induced seismicity mitigation in geological reservoirs described by a coupled 3D PDE-ODE model. The controller is nonlinear and robust (MIMO Super-Twisting design), producing a continuous control signal and requiring minimal model information, while accommodating parameter uncertainties and spatial heterogeneity. Two operational outputs are regulated simultaneously: regional pressures and seismicity rates computed over reservoir sub-regions. Closed-loop properties are established via explicit bounds on the solution and its time derivative for both the infinite-dimensional dynamics and the nonlinear ODE system, yielding finite-time or exponential convergence of the tracking errors. The method is evaluated on the Groningen gas-field case study in two scenarios: gas production while not exceeding the intrinsic seismicity of the region, and combined production with CO$_2$ injection toward net-zero carbon operation. Simulations demonstrate accurate tracking of pressure and seismicity targets across regions under significant parameter uncertainty, supporting safer reservoir operation while preserving production objectives.
The growing availability of second-life batteries (SLBs) from electric vehicles is reshaping future microgrid design, requiring planning frameworks that explicitly account for reduced capacity and efficiency over time. However, traditional microgrid planning models often neglect degradation effects or rely on highly simplified formulations, leading to unreliable sizing decisions and increased long-term costs. This paper proposes a degradation-aware iterative optimization framework for long-term microgrid planning that incorporates photovoltaic efficiency fading, battery capacity and efficiency degradation, and SLB characteristics. A cumulative multi-year optimization model is first solved to obtain an initial investment and operational strategy under simplified degradation assumptions, ensuring computational tractability. Subsequently, a yearly validation model evaluates degradation impacts on photovoltaic and battery assets, updating efficiencies and available capacity to assess reliability. An iterative refinement process then adjusts resource allocation to eliminate load shedding while minimizing total system cost. Sensitivity analyses on photovoltaic degradation rates, SLB capital costs, and grid tariffs are conducted to evaluate robustness under varying technical and economic conditions. Results demonstrate that neglecting degradation can compromise reliability and increase blackout risk, while SLBs offer meaningful cost-saving opportunities. The proposed framework provides a scalable and practical tool for planning future microgrids in degradation-constrained environments.
We introduce a U-Net model for 360° acoustic source localization formulated as a spherical semantic segmentation task. Rather than regressing discrete direction-of-arrival (DoA) angles, our model segments beamformed audio maps (azimuth and elevation) into regions of active sound presence. Using delay-and-sum (DAS) beamforming on a custom 24-microphone array, we generate signals aligned with drone GPS telemetry to create binary supervision masks. A modified U-Net, trained on frequency-domain representations of these maps, learns to identify spatially distributed source regions while addressing class imbalance via the Tversky loss. Because the network operates on beamformed energy maps, the approach is inherently array-independent and can adapt to different microphone configurations without retraining from scratch. The segmentation outputs are post-processed by computing centroids over activated regions, enabling robust DoA estimates. Our dataset includes real-world open-field recordings of a DJI Air 3 drone, synchronized with 360° video and flight logs across multiple dates and locations. Experimental results show that the U-Net generalizes across environments, providing improved angular precision and offering a new paradigm for dense spatial audio understanding beyond traditional Sound Source Localization (SSL).
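The Tversky loss used here to handle class imbalance has a compact closed form; the sketch below is a generic binary version (not the paper's code), with the conventional weighting in which false negatives are penalized more than false positives for small active regions:

```python
import numpy as np

def tversky_loss(pred, target, fp_weight=0.3, fn_weight=0.7, eps=1e-7):
    """Binary Tversky loss on probability maps.

    With fn_weight > fp_weight, missing a small active region costs
    more than a spurious activation, countering class imbalance.
    Reduces to the Dice loss when fp_weight = fn_weight = 0.5.
    """
    p, t = np.ravel(pred), np.ravel(target)
    tp = np.sum(p * t)            # soft true positives
    fp = np.sum(p * (1.0 - t))    # soft false positives
    fn = np.sum((1.0 - p) * t)    # soft false negatives
    return 1.0 - (tp + eps) / (tp + fp_weight * fp + fn_weight * fn + eps)
```

The same expression carries over to a differentiable framework by replacing the NumPy reductions with tensor operations.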
The Linear Quadratic Gaussian (LQG) regulator is a cornerstone of optimal control theory, yet its performance can degrade significantly when the noise distributions deviate from the assumed Gaussian model. To address this limitation, this work proposes a distributionally robust generalization of the finite-horizon LQG control problem. Specifically, we assume that the noise distributions are unknown and belong to ambiguity sets defined in terms of an entropy-regularized Wasserstein distance centered at a nominal Gaussian distribution. By deriving novel bounds on this Sinkhorn discrepancy and proving structural and topological properties of the resulting ambiguity sets, we establish global optimality of linear policies. Numerical experiments showcase improved distributional robustness of our control policy.
Magnetic levitation is poised to revolutionize industrial automation by integrating flexible in-machine product transport and seamless manipulation. It is expected to become the standard drive technology for automated manufacturing. However, controlling such systems is inherently challenging due to their complex, unstable dynamics. Traditional control approaches, which rely on hand-crafted control engineering, typically yield robust but conservative solutions, with their performance closely tied to the expertise of the engineering team. In contrast, learning-based neural control presents a promising alternative. This paper presents the first neural controller for 6D magnetic levitation. Trained end-to-end on interaction data from a proprietary controller, it directly maps raw sensor data and 6D reference poses to coil current commands. The neural controller can effectively generalize to previously unseen situations while maintaining accurate and robust control. These results underscore the practical feasibility of learning-based neural control in complex physical systems and suggest a future where such a paradigm could enhance or even substitute traditional engineering approaches in demanding real-world applications. The trained neural controller, source code, and demonstration videos are publicly available at this https URL.
Local bifurcation analysis plays a central role in understanding qualitative transitions in networked nonlinear dynamical systems, including dynamic neural network and opinion dynamics models. In this article we establish explicit bounds of validity for the classification of bifurcation diagrams in two classes of continuous-time networked dynamical systems, analogous in structure to the Hopfield and the Firing Rate dynamic neural network models. Our approach leverages recent advances in computing the bounds for the validity of Lyapunov-Schmidt reduction, a reduction method widely employed in nonlinear systems analysis. Using these bounds we rigorously characterize neighbourhoods around bifurcation points where predictions from reduced-order bifurcation equations remain reliable. We further demonstrate how these bounds can be applied to an illustrative family of nonlinear opinion dynamics on k-regular graphs, which emerges as a special case of the general framework. These results provide new analytical tools for quantifying the robustness of bifurcation phenomena in dynamics over networked systems and highlight the interplay between network structure and nonlinear dynamical behaviour.
This work proposes a two-layered control scheme for constrained nonlinear systems represented by a class of recurrent neural networks and affected by additive disturbances. In particular, a base controller ensures global or regional closed-loop l_p-stability of the error in tracking a desired equilibrium and the satisfaction of input and output constraints within a robustly positive invariant set. An additional control contribution, derived by combining the internal model control principle with a stable operator, is introduced to improve system performance. This operator, implemented as a stable neural network, can be trained via unconstrained optimisation on a chosen performance metric, without compromising closed-loop equilibrium tracking or constraint satisfaction, even if the optimisation is stopped prematurely. In addition, we characterise the class of closed-loop stable behaviours that can be achieved with the proposed architecture. Simulation results on a pH-neutralisation benchmark demonstrate the effectiveness of the proposed approach.
Despite the growing number of automated vehicles on public roads, operating such systems in open contexts inevitably involves incidents. Developing a defensible case that the residual risk is reduced to a reasonable (societally acceptable) level is hence a prerequisite to be prepared for potential liability cases. A "safety argumentation" is a common means to represent this case. In this paper, we contribute to the state of the art in terms of process guidance on argumentation creation and maintenance - aiming to promote a safety-argumentation-by-design paradigm, which mandates co-developing both the system and argumentation from the earliest stages. Initially, we extend a systematic design model for automated driving functions with an argumentation layer to address prevailing misconceptions regarding the development of safety arguments in a process context. Identified limitations of this extension motivate our complementary design of a dedicated argumentation life cycle that serves as an additional process viewpoint. Correspondingly, we define literature- and expert-based process requirements. To illustrate the safety argumentation life cycle that we propose as a result of implementing these consolidated requirements, we demonstrate principles of the introduced process phases (baselining, evolution, continuous maintenance) by an argumentation example on an operational design domain exit response.
In a practical multi-antenna receiver, each element of the receive antenna array has a directive antenna pattern, an aspect that has not yet been fully explored and investigated in academia and industry. When the emitter deviates greatly from the normal direction of an antenna element or is close to a null-point direction, the energy sensed by the array is severely attenuated, so that the direction-sensing performance degrades significantly. To address this issue, a rotatable array system is established that takes the directive antenna pattern of each element into account, where each element has the same antenna pattern. Then, the corresponding Cramér-Rao lower bound (CRLB) is derived. Finally, a recursive rotation Root-MUSIC (RR-Root-MUSIC) direction-sensing method is proposed and its root-mean-squared-error (RMSE) performance is evaluated against the derived CRLB. Simulation results show that the proposed rotation method converges rapidly within about ten iterations and significantly enhances the direction-sensing accuracy in terms of RMSE when the target direction deviates far from the normal vector of the array. Compared with conventional Root-MUSIC, the sensing performance of the proposed RR-Root-MUSIC method is much closer to the CRLB.
Dynamic obstacle avoidance (DOA) for unmanned aerial vehicles (UAVs) requires fast reaction under limited onboard resources. We introduce the distributionally robust acceleration control barrier function (DR-ACBF) as an efficient collision avoidance method maintaining safety regions. The method constructs a second-order control barrier function as linear half-space constraints on commanded acceleration. Latency, actuator limits, and obstacle accelerations are handled through an effective clearance that considers dynamics and delay. Uncertainty is mitigated using Cantelli tightening with per-obstacle risk. A DR-conditional value at risk (DR-CVaR)based early trigger expands margins near violations to improve DOA. Real-time execution is ensured via constant-time Gauss-Southwell projections. Simulation studies achieve similar avoidance performance at substantially lower computational effort than state-of-the-art baseline approaches. Experiments with Crazyflie drones demonstrate the feasibility of our approach.
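The Cantelli tightening referenced in this abstract is a distribution-free inequality; a minimal sketch of the resulting margin inflation (illustrative only, not the paper's formulation) is:

```python
import math

def cantelli_tightened_clearance(nominal_clearance, sigma, risk):
    """Inflate a required clearance via Cantelli's one-sided inequality.

    For zero-mean uncertainty with standard deviation sigma, enforcing
    the inflated clearance guarantees the nominal clearance is violated
    with probability at most `risk`, for ANY distribution:
        P(X >= k * sigma) <= 1 / (1 + k^2),  k = sqrt((1 - risk) / risk).
    """
    k = math.sqrt((1.0 - risk) / risk)
    return nominal_clearance + k * sigma
```

Smaller per-obstacle risk budgets yield larger `k` and hence wider safety margins, which matches the intuition that a stricter guarantee must be more conservative.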
High-resolution range profiles (HRRPs) play a critical role in automatic target recognition (ATR) due to their rich information regarding target scattering centers (SCs), which encapsulate the geometric and electromagnetic characteristics of targets. In few-shot circumstances, traditional learning-based methods often suffer from overfitting and struggle to generalize effectively. The recently proposed HRRPLLM, which leverages the in-context learning (ICL) capabilities of large language models (LLMs) for one-shot HRRP ATR, is limited in few-shot scenarios. This limitation arises because it primarily utilizes the distribution of SCs for recognition while neglecting the variance of the samples caused by aspect sensitivity. This paper proposes a straightforward yet effective Aspect-Distributed Prototype (ADP) strategy for LLM-based ATR under few-shot conditions to enhance aspect robustness. Experiments conducted on both simulated and measured aircraft electromagnetic datasets demonstrate that the proposed method significantly outperforms current benchmarks.
Conventional robust H2/H-infinity control minimizes the worst-case performance, often leading to a conservative design driven by very rare parametric configurations. To reduce this conservatism while taking advantage of the stochastic properties of Monte Carlo sampling and its compatibility with parallel computing, we introduce an alternative paradigm that optimizes the controller with respect to a stochastic criterion, namely the conditional value at risk. We present the problem formulation and discuss several open challenges toward a general synthesis framework. The potential of this approach is illustrated on a mechanical system, where it significantly improves overall performance by tolerating some degradation in very rare worst-case scenarios.
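The conditional value at risk (CVaR) criterion at the heart of this approach has a simple empirical estimator over Monte Carlo samples; the following sketch (not taken from the paper) shows the quantity a sample-based synthesis would optimize:

```python
import numpy as np

def empirical_cvar(samples, alpha=0.95):
    """Empirical CVaR_alpha of a cost: the mean of the worst
    (1 - alpha) fraction of Monte Carlo cost samples.

    Unlike the worst case (alpha -> 1), intermediate alpha tolerates
    degradation on very rare parametric configurations.
    """
    s = np.sort(np.asarray(samples, dtype=float))
    k = max(1, int(np.ceil((1.0 - alpha) * len(s))))
    return float(np.mean(s[-k:]))
```

Because each cost sample (one closed-loop evaluation per sampled parameter configuration) is independent, the estimator is trivially parallelizable, which is the compatibility with parallel computing the abstract alludes to.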
ASVspoof 5 is the fifth edition in a series of challenges which promote the study of speech spoofing and deepfake detection solutions. A significant change from previous challenge editions is a new crowdsourced database collected from a substantially greater number of speakers under diverse recording conditions, and a mix of cutting-edge and legacy generative speech technology. With the new database described elsewhere, we provide in this paper an overview of the ASVspoof 5 challenge results for the submissions of 53 participating teams. While many solutions perform well, performance degrades under adversarial attacks and the application of neural encoding/compression schemes. Together with a review of post-challenge results, we also report a study of calibration in addition to other principal challenges and outline a road-map for the future of ASVspoof.
The integration of artificial intelligence into next-generation wireless networks necessitates the accurate construction of radio maps (RMs) as a foundational prerequisite for electromagnetic digital twins. An RM provides the digital representation of the wireless propagation environment, mapping complex geographical and topological boundary conditions to critical spatial-spectral metrics that range from received signal strength to full channel state information matrices. This tutorial presents a comprehensive survey of learning-based RM construction, systematically addressing three intertwined dimensions: data, paradigms, and physics-awareness. From the data perspective, we review physical measurement campaigns, ray tracing simulation engines, and publicly available benchmark datasets, identifying their respective strengths and fundamental limitations. From the paradigm perspective, we establish a core taxonomy that categorizes RM construction into source-aware forward prediction and source-agnostic inverse reconstruction, and examine five principal neural architecture families spanning convolutional neural networks, vision transformers, graph neural networks, generative adversarial networks, and diffusion models. We further survey optics-inspired methods adapted from neural radiance fields and 3D Gaussian splatting for continuous wireless radiation field modeling. From the physics-awareness perspective, we introduce a three-level integration framework encompassing data-level feature engineering, loss-level partial differential equation regularization, and architecture-level structural isomorphism. Open challenges including foundation model development, physical hallucination detection, and amortized inference for real-time deployment are discussed to outline future research directions.
Radio maps (RMs) provide spatially continuous propagation characterizations essential for 6G network planning, but high-fidelity RM construction remains challenging. Rigorous electromagnetic solvers incur prohibitive computational latency, while data-driven models demand massive labeled datasets and generalize poorly from simplified simulations to complex multipath environments. This paper proposes RadioDiff-FS, a few-shot diffusion framework that adapts a pretrained main-path generator to multipath-rich target domains with only a small number of high-fidelity samples. The adaptation is grounded in a theoretical decomposition of the multipath RM into a dominant main-path component and a directionally sparse residual. This decomposition shows that the cross-domain shift corresponds to a bounded and geometrically structured feature translation rather than an arbitrary distribution change. A direction-consistency loss (DCL) is then introduced to constrain diffusion score updates along physically plausible propagation directions, thereby suppressing phase-inconsistent artifacts that arise in the low-data regime. Experiments show that RadioDiff-FS reduces NMSE by 59.5% on static RMs and by 74.0% on dynamic RMs relative to the vanilla diffusion baseline, achieving an SSIM of 0.9752 and a PSNR of 36.37 dB under severely limited supervision. Even in a one-shot setting with a single target-domain sample per scene, RadioDiff-FS outperforms all fully supervised baselines, confirming that the directional constraint provides an effective inductive bias under extreme data scarcity. Code is available at this https URL.
This paper studies the problem of finite-time convergence to a prescribed safe set for nonlinear systems whose initial states violate the safety constraints. Existing Control Lyapunov-Barrier Functions (CLBFs) can enforce recovery to the safe set but may suffer from chattering and do not explicitly consider control bounds. To address these limitations, we propose a new Control Barrier Function (CBF) formulation that guarantees finite-time convergence to the safe set while ensuring feasibility under control constraints. Specifically, we strengthen the initially violated safety constraint by introducing a parameter that enables the exploitation of the asymptotic property of a CBF to converge to the safe set in finite time. Furthermore, the conditions for the existence of such a CBF under control bounds to achieve finite-time convergence are derived via reachability analysis and constraint comparison, providing a systematic approach for parameter design. A case study on 2D obstacle avoidance is presented to demonstrate the effectiveness and advantages of the proposed method.
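For readers unfamiliar with CBF-based safety filters, the standard single-constraint case has a closed-form min-norm solution; the sketch below is the textbook construction (not the paper's finite-time formulation), with `a` and `b` standing for the drift and input terms of the CBF condition:

```python
import numpy as np

def cbf_filter(u_nom, a, b):
    """Closed-form min-norm QP filter for one affine CBF constraint
        a + b . u >= 0,
    where, in the usual notation, a = Lf h(x) + alpha(h(x)) and
    b = Lg h(x). Returns the smallest modification of u_nom that
    satisfies the constraint (u_nom itself when already safe)."""
    b = np.atleast_1d(np.asarray(b, dtype=float))
    u = np.atleast_1d(np.asarray(u_nom, dtype=float))
    slack = a + float(b @ u)
    if slack >= 0.0:
        return u                       # nominal input already safe
    return u - slack * b / float(b @ b)  # project onto the half-space
```

The projected input satisfies the constraint with equality, which is where chattering can originate when the constraint itself switches, motivating formulations like the one proposed in the paper.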
Discriminative features extracted from the sparse coding model have been shown to perform well for classification. Recent deep learning architectures have further improved reconstruction in inverse problems by considering new dense priors learned from data. We propose a novel dense and sparse coding model that integrates both representation capability and discriminative features. The model studies the problem of recovering a dense vector $\mathbf{x}$ and a sparse vector $\mathbf{u}$ given measurements of the form $\mathbf{y} = \mathbf{A}\mathbf{x}+\mathbf{B}\mathbf{u}$. Our first analysis relies on a geometric condition, specifically the minimal angle between the spanning subspaces of matrices $\mathbf{A}$ and $\mathbf{B}$, which ensures a unique solution to the model. The second analysis shows that, under some conditions on $\mathbf{A}$ and $\mathbf{B}$, a convex program recovers the dense and sparse components. We validate the effectiveness of the model on simulated data and propose a dense and sparse autoencoder (DenSaE) tailored to learning the dictionaries from the dense and sparse model. We demonstrate that (i) DenSaE denoises natural images better than architectures derived from the sparse coding model ($\mathbf{B}\mathbf{u}$), (ii) in the presence of noise, training the biases in the latter amounts to implicitly learning the $\mathbf{A}\mathbf{x} + \mathbf{B}\mathbf{u}$ model, (iii) $\mathbf{A}$ and $\mathbf{B}$ capture low- and high-frequency contents, respectively, and (iv) compared to the sparse coding model, DenSaE offers a balance between discriminative power and representation.
The application of supervised learning techniques in combination with model predictive control (MPC) has recently generated significant interest, particularly in the area of approximate explicit MPC, where function approximators like deep neural networks are used to learn the MPC policy via optimal state-action pairs generated offline. While the aim of approximate explicit MPC is to closely replicate the MPC policy, substituting online optimization with a trained neural network, the performance guarantees that come with solving the online optimization problem are typically lost. This paper considers an alternative strategy, where supervised learning is used to learn the optimal value function offline instead of learning the optimal policy. This can then be used as the cost-to-go function in a myopic MPC with a very short prediction horizon, such that the online computation burden reduces significantly without affecting the controller performance. This approach differs from existing work on value function approximations in the sense that it learns the cost-to-go function by using offline-collected state-value pairs, rather than closed-loop performance data. The cost of generating the state-value pairs used for training is addressed using a sensitivity-based data augmentation scheme.
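The idea of using a learned value function as the cost-to-go of a very short-horizon MPC can be illustrated on a scalar linear-quadratic toy problem; everything below is a hypothetical sketch (the exact Riccati value function stands in for the offline-trained approximator, and all numbers are illustrative):

```python
import numpy as np

# Toy scalar system x+ = a*x + b*u with stage cost q*x^2 + r*u^2
# (assumed values, unstable open loop since a > 1).
a, b, q, r = 1.1, 1.0, 1.0, 1.0

# "Learned" cost-to-go: here the exact V(x) = P*x^2 from the discrete
# Riccati fixed point, standing in for an offline-trained network.
P = q
for _ in range(500):
    P = q + a * a * P - (a * b * P) ** 2 / (r + b * b * P)

def myopic_mpc(x, candidates):
    """Horizon-1 MPC: minimize stage cost plus the cost-to-go V(x+)."""
    costs = [q * x * x + r * u * u + P * (a * x + b * u) ** 2
             for u in candidates]
    return candidates[int(np.argmin(costs))]

# Closed loop: with the right cost-to-go, the one-step controller
# recovers infinite-horizon performance and stabilizes the plant.
x = 5.0
candidates = np.linspace(-10, 10, 2001)
for _ in range(40):
    x = a * x + b * myopic_mpc(x, candidates)
```

With an exact cost-to-go the one-step argmin equals the infinite-horizon optimal input, which is precisely why shortening the horizon need not degrade performance.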
Physics-informed Neural Networks (PINNs) show that embedding physical laws directly into the learning objective can significantly enhance the efficiency and physical consistency of neural network solutions. Similar to optimizing loss functions in machine learning, evolutionary algorithms iteratively optimize objective functions by simulating natural selection processes. Inspired by this principle, we ask a natural question: can physical information be similarly embedded into the fitness function of evolutionary algorithms? In this work, we propose Physics-informed Evolution (PIE), a novel framework that incorporates physical information derived from governing physical laws into the evolutionary fitness landscape, thereby extending Physics-informed artificial intelligence methods from machine learning to the broader domain of evolutionary computation. As a concrete instantiation, we apply PIE to quantum control problems governed by the Schrödinger equation, where the goal is to find optimal control fields that drive quantum systems from initial states to desired target states. We validate PIE on three representative quantum control benchmarks: state preparation in V-type three-level systems, entangled state generation in superconducting quantum circuits, and two-atom cavity QED systems. Within the PIE framework, we systematically compare the performance of ten single-objective and five multi-objective evolutionary algorithms. Experimental results demonstrate that by embedding physical information into the fitness function, PIE effectively guides evolutionary search, yielding control fields with high fidelity, low state deviation, and robust performance across different scenarios. Our findings further suggest that the Physics-informed principle extends naturally beyond neural network training to the broader domain of evolutionary computation.
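The core PIE idea, adding a physics-derived penalty to the evolutionary fitness, can be sketched in a few lines; the "physics" constraint, the target vector, and the (1+1) evolution strategy below are all hypothetical illustrations, not the paper's quantum-control setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def physics_informed_fitness(params, lam=1.0):
    """Toy PIE-style fitness: task objective plus a physics penalty.

    Here the assumed 'physics' is a conservation-style constraint
    sum(params) = 1; the task term rewards matching a target vector.
    """
    target = np.array([0.2, 0.3, 0.5])
    task = np.sum((params - target) ** 2)
    physics = (np.sum(params) - 1.0) ** 2
    return task + lam * physics

# Minimal (1+1) evolution strategy descending the combined landscape.
x = rng.normal(size=3)
f = physics_informed_fitness(x)
for _ in range(5000):
    cand = x + 0.1 * rng.normal(size=3)
    fc = physics_informed_fitness(cand)
    if fc < f:          # greedy selection: keep only improvements
        x, f = cand, fc
```

The penalty reshapes the fitness landscape so that evolutionary search is guided toward physically admissible candidates, mirroring how PINN loss terms guide gradient descent.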
This paper presents a new approach for jointly calibrating magnetometers and inertial measurement units, focusing on improving calibration accuracy and computational efficiency. The proposed method formulates the calibration problem as a maximum a posteriori estimation problem, treating both the calibration parameters and orientation trajectory of the sensors as unknowns. This formulation enables efficient optimization with closed-form derivatives. The method is compared against two state-of-the-art approaches in terms of computational complexity and estimation accuracy. Simulation results demonstrate that the proposed method achieves lower root mean square error in calibration parameters while maintaining competitive computational efficiency. Further validation through real-world experiments confirms the practical benefits of our approach: it effectively reduces position drift in a magnetic field-aided inertial navigation system by more than a factor of two on most datasets. Moreover, the proposed method calibrated 30 magnetometers in less than 2 minutes. The contributions include a new calibration method, an analysis of existing methods, and a comprehensive empirical evaluation. Datasets and algorithms are made publicly available to promote reproducible research.
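One small, self-contained piece of magnetometer calibration, hard-iron bias estimation by sphere fitting, admits a linear least-squares solution; the sketch below is a generic simplification (the paper's joint MAP formulation estimates far more), included only to make the sensor model concrete:

```python
import numpy as np

def fit_hard_iron_bias(m):
    """Estimate a hard-iron bias b by linear least-squares sphere fitting.

    Measurements m_i (N x 3) of a constant-magnitude field satisfy
    ||m_i - b||^2 = r^2, which expands to the linear system
    2 m_i . b + (r^2 - ||b||^2) = ||m_i||^2 in unknowns (b, r^2 - ||b||^2).
    """
    m = np.asarray(m, dtype=float)
    A = np.hstack([2.0 * m, np.ones((m.shape[0], 1))])
    y = np.sum(m * m, axis=1)
    sol, *_ = np.linalg.lstsq(A, y, rcond=None)
    return sol[:3]  # the bias vector b
```

Well-spread orientations are required for the system to be well conditioned, which is one reason orientation trajectories and calibration parameters benefit from joint estimation.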
Designing satellite constellation systems involves complex multidisciplinary optimization in which coverage serves as a primary driver of overall system cost and performance. Among the various design considerations, constellation configuration, which dictates how satellites are placed and distributed in space relative to each other, predominantly determines the resulting coverage. In constellation configuration design, coverage may be treated either as an optimization objective or as a constraint, depending on mission goals. State-of-the-art literature addresses each mission scenario on a case-by-case basis, employing distinct assumptions, modeling techniques, and solution methods. While such problem-specific approaches yield valuable insights, users often face implementation challenges when performing trade-off studies across different mission scenarios, as each scenario must be handled distinctly. In this paper, we propose a collection of five mixed-integer linear programs that are of practical significance, extensible to more complex mission narratives through additional constraints, and capable of obtaining provably optimal constellation configurations. The framework can handle various metrics and mission scenarios, such as percent coverage, average or maximum revisit times, a fixed number of satellites, spatiotemporally varying coverage requirements, and static or dynamic targets. The paper presents several case studies and comparative analyses to demonstrate the versatility of the proposed framework.
This paper explores the outcome of training state-of-the-art dereverberation models with supervision settings ranging from weakly-supervised to virtually unsupervised, relying solely on reverberant signals and an acoustic model for training. Most of the existing deep learning approaches typically require paired dry and reverberant data, which are difficult to obtain in practice. We develop instead a sequential learning strategy motivated by a maximum-likelihood formulation of the dereverberation problem, wherein acoustic parameters and dry signals are estimated from reverberant inputs using deep neural networks, guided by a reverberation matching loss. Our most data-efficient variant requires only 100 reverberation-parameter-labeled samples to outperform an unsupervised baseline, demonstrating the effectiveness and practicality of the proposed method in low-resource scenarios.
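The reverberation-matching idea, scoring an estimated dry signal and room impulse response by how well their convolution reproduces the observed reverberant signal, can be sketched in the time domain; this is a simplified illustration, not the paper's (likely spectral, network-based) loss:

```python
import numpy as np

def reverb_matching_loss(observed, dry_est, rir_est):
    """Unsupervised reverberation-matching loss: mean squared error
    between the observed reverberant signal and the re-synthesis
    obtained by convolving the estimated dry signal with the
    estimated room impulse response (RIR)."""
    resynth = np.convolve(dry_est, rir_est)[: len(observed)]
    return float(np.mean((observed - resynth) ** 2))
```

Because the loss needs only the reverberant observation itself, it supplies a training signal without paired dry recordings, which is what enables the weakly and virtually unsupervised settings studied here.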
Current approaches for large audio language models (LALMs) often rely on closed data sources or proprietary models, limiting their generalization and accessibility. This paper introduces MiDashengLM, a novel open audio-language model designed for efficient and comprehensive audio understanding, trained with general audio captions from our novel ACAVCaps dataset. MiDashengLM relies exclusively on publicly available pretraining and supervised fine-tuning (SFT) datasets, ensuring full transparency and reproducibility. At its core, MiDashengLM integrates Dasheng, an open-source audio encoder specifically engineered to process diverse auditory information effectively. Unlike previous works that focus primarily on Automatic Speech Recognition (ASR)-based audio-text alignment, our strategy centers on general audio captions, fusing speech, sound, and music information into a single holistic textual representation of complex audio scenes. Finally, MiDashengLM achieves up to a 4x speedup in time-to-first-token (TTFT) and up to 20x higher throughput than comparable models. Checkpoints are available online at this https URL and this https URL.

We investigate the performance of a lightweight tracking controller, based on a flow version of the Newton-Raphson method, applied to a miniature blimp and a mid-size quadrotor. This tracking technique admits theoretical performance guarantees for certain classes of systems and has been successfully applied in simulation studies and on mobile robots with simplified motion models. We evaluate the technique through real-world flight experiments on aerial hardware platforms subject to realistic deployment and onboard computational constraints. The technique's performance is assessed in comparison with established baseline control frameworks of feedback linearization for the blimp, and nonlinear model predictive control for both the quadrotor and the blimp. The performance metrics under consideration are (i) root mean square error of flight trajectories with respect to target trajectories, (ii) algorithms' computation times, and (iii) CPU energy consumption associated with the control algorithms. The experimental findings show that the Newton-Raphson-based tracking controller achieves competitive or superior tracking performance to the baseline methods with substantially reduced computation time and energy expenditure.
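The "flow version of the Newton-Raphson method" behind this controller can be seen in a scalar toy problem: the input u is driven continuously along the Newton direction so that an output map g(u) tracks a reference. The map g, the gain, and the step size below are illustrative choices, not the paper's blimp or quadrotor models:

```python
# Scalar sketch of a Newton-Raphson flow tracking law:
# u_dot = alpha * (r - g(u)) / g'(u), integrated with forward Euler.
g = lambda u: u**3 + u            # monotone output map (toy)
dg = lambda u: 3 * u**2 + 1       # its derivative
alpha, dt = 20.0, 1e-3            # flow gain and Euler step

u, errors = 0.0, []
for _ in range(2000):
    r = 2.0                                  # constant reference
    u += dt * alpha * (r - g(u)) / dg(u)     # Newton-Raphson flow update
    errors.append(abs(r - g(u)))

print(round(errors[-1], 9))       # tracking error decays toward zero
```

Each step costs one function and one derivative evaluation, which is the source of the low computation time and energy figures reported in the experiments.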
We present a model predictive control (MPC) framework for efficient navigation of mobile robots in cluttered environments. The proposed approach integrates a finite-segment shortest path planner into the finite-horizon trajectory optimization of the MPC. This formulation ensures convergence to dynamically selected targets and guarantees collision avoidance, even under general nonlinear dynamics and cluttered environments. The approach is validated through hardware experiments on a small ground robot, where a human operator dynamically assigns target locations that the robot must reach while avoiding obstacles. The robot reached new targets within 2-3 seconds and responded to new commands within 50 ms to 100 ms, immediately adjusting its motion even while still moving at high speeds toward a previous target.
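The receding-horizon structure of such a controller, optimize a short input sequence, apply only the first input, then replan, can be sketched for a 1D point robot. This is a minimal toy (brute-force search, no shortest-path planner or collision constraints, made-up dynamics and costs), only the loop shape matches the paper's scheme:

```python
from itertools import product

def plan(pos, vel, target, horizon=4, dt=0.1):
    """Pick the acceleration sequence minimizing accumulated distance-to-
    target cost over a short horizon (toy brute force over 3^4 options)."""
    best, best_cost = None, float("inf")
    for seq in product((-1.0, 0.0, 1.0), repeat=horizon):
        p, v, cost = pos, vel, 0.0
        for a in seq:
            v += a * dt
            p += v * dt
            cost += (p - target) ** 2 + 0.01 * a**2
        if cost < best_cost:
            best, best_cost = seq, cost
    return best[0]                    # apply only the first input

pos, vel, target = 0.0, 0.0, 1.0
for _ in range(100):                  # closed-loop simulation
    a = plan(pos, vel, target)
    vel += a * 0.1
    pos += vel * 0.1
print(round(pos, 2))                  # settles near the target
```

Re-solving at every step is what lets the real system absorb new operator commands within tens of milliseconds while already in motion.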
Understanding the robustness of deep learning models for multivariate long-term time series forecasting (M-LTSF) remains challenging, as evaluations typically rely on real-world datasets with unknown noise properties. We propose a simulation-based evaluation framework that generates parameterizable synthetic datasets, where each dataset instance corresponds to a different configuration of signal components, noise types, signal-to-noise ratios, and frequency characteristics. These configurable components aim to model real-world multivariate time series data without the ambiguity of unknown noise. This framework enables fine-grained, systematic evaluation of M-LTSF models under controlled and diverse scenarios. We benchmark four representative architectures: S-Mamba (state-space), iTransformer (transformer-based), R-Linear (linear), and Autoformer (decomposition-based). Our analysis reveals that all models degrade severely when lookback windows cannot capture complete periods of the seasonal patterns in the data. S-Mamba and Autoformer perform best on sawtooth patterns, while R-Linear and iTransformer favor sinusoidal signals. White and Brownian noise universally degrade performance as the signal-to-noise ratio decreases, while S-Mamba is specifically vulnerable to trend noise and iTransformer to seasonal noise. Further spectral analysis shows that S-Mamba and iTransformer achieve superior frequency reconstruction. This controlled approach, based on our synthetic and principle-driven testbed, offers deeper insights into model-specific strengths and limitations through the aggregation of MSE scores and provides concrete guidance for model selection based on signal characteristics and noise conditions.
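A generator of this kind boils down to composing a seasonal waveform, a trend, and noise scaled to a target signal-to-noise ratio. The sketch below shows one plausible minimal version (parameter names and defaults are illustrative; the paper's generator also varies noise types and frequency content):

```python
import numpy as np

def make_series(n=512, period=24, snr_db=10.0, trend=0.01,
                waveform="sine", seed=0):
    """Parameterizable synthetic series: seasonal component + linear
    trend + white noise at a chosen SNR. Returns (noisy, clean)."""
    rng = np.random.default_rng(seed)
    t = np.arange(n)
    phase = (t % period) / period
    season = np.sin(2 * np.pi * phase) if waveform == "sine" \
        else 2 * phase - 1                       # sawtooth in [-1, 1]
    signal = season + trend * t
    noise_power = np.mean(signal**2) / 10 ** (snr_db / 10)
    noise = rng.standard_normal(n) * np.sqrt(noise_power)
    return signal + noise, signal

noisy, clean = make_series(snr_db=20.0)
print(noisy.shape)  # → (512,)
```

Because the clean signal is known exactly, model error can be attributed to a specific component (season, trend, noise type), which is precisely what real-world benchmarks cannot do.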
Trajectory design in cislunar space under a High-Fidelity Ephemeris Model (HFEM) is pursued through a nonlinear optimization perspective anchored on the transition of solutions from lower fidelity models, namely the Circular Restricted Three-Body Problem (CR3BP). The optimization problem is posed in the likeness of a multiple-shooting approach, aiming for segment-to-segment continuity while tracking proximity to the original CR3BP structures. The analysis of various formulations leads to the selection of an unconstrained least-squares problem for further investigation. The nonlinear optimization problem is convexified and the use of the Levenberg-Marquardt algorithm, as an alternative to the minimum-norm update equation found in most literature, is investigated for its control over the update step and inherent robustness. Additional techniques, such as adaptive weighting, are employed to further consolidate the behavior of the proposed algorithm in challenging scenarios. Numerical trials evaluate the adequacy of the methodology presented and compare it to the minimum-norm baseline over various application cases, including the generation of quasi-periodic trajectories and orbital transfers between them. The proposed technique is found to be a suitable alternative to the minimum-norm scheme, generally retaining better proximity to the original CR3BP trajectories and providing benefits in numerical robustness and stability. Moreover, the ease of including proximity objectives in a relaxed manner is shown to facilitate control over the shape of the final converged solution.
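The contrast between the minimum-norm update used in most of the literature and the Levenberg-Marquardt alternative is a one-liner each for an underdetermined residual system. The toy Jacobian and residual below are random stand-ins for the segment-continuity equations:

```python
import numpy as np

rng = np.random.default_rng(1)
J = rng.standard_normal((3, 5))   # 3 continuity residuals, 5 free variables
F = rng.standard_normal(3)        # current residual vector

def min_norm_step(J, F):
    """Minimum-norm update common in multiple-shooting literature:
    dx = -J^T (J J^T)^{-1} F (solves J dx = -F exactly)."""
    return -J.T @ np.linalg.solve(J @ J.T, F)

def levenberg_marquardt_step(J, F, lam=1e-2):
    """Damped LM update: dx = -(J^T J + lam I)^{-1} J^T F; the damping
    lam bounds the step size and adds robustness near ill-conditioning."""
    n = J.shape[1]
    return -np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ F)

dx_mn = min_norm_step(J, F)
dx_lm = levenberg_marquardt_step(J, F)
# The damped LM step is never longer than the minimum-norm step.
print(np.linalg.norm(dx_lm) < np.linalg.norm(dx_mn))  # → True
```

The shorter, controllable step is what keeps the converged trajectory closer to the original CR3BP structures, at the price of needing several damped iterations instead of one exact correction.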
Reliable downlink communication in satellite-to-underground networks remains challenging due to severe signal attenuation caused by underground soil and refraction at the air-soil interface. To address this, we propose a novel cooperative rate-splitting (CRS)-aided transmission framework, where an aboveground relay decodes and forwards the common stream to underground devices (UDs). Based on this framework, we formulate a max-min fairness optimization problem that jointly optimizes power allocation, message splitting, and time slot scheduling to maximize the minimum achievable rate across UDs. To solve this high-dimensional non-convex problem under uncertain channels, we develop a deep reinforcement learning solution framework based on the proximal policy optimization (PPO) algorithm that integrates distribution-aware action modeling and a multi-branch actor network. Simulation results under a realistic underground pipeline monitoring scenario demonstrate that the proposed approach achieves average max-min rate gains exceeding $167\%$ over conventional benchmark strategies across various numbers of UDs and underground conditions.
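At the heart of any PPO-based solver is the standard clipped surrogate objective. The sketch below shows that generic objective only, not the paper's distribution-aware, multi-branch actor; the ratios and advantages are made-up numbers:

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Standard PPO clipped surrogate (to be maximized):
    mean of min(r * A, clip(r, 1-eps, 1+eps) * A)."""
    clipped = np.clip(ratio, 1 - eps, 1 + eps)
    return float(np.mean(np.minimum(ratio * advantage,
                                    clipped * advantage)))

ratio = np.array([0.9, 1.5, 1.0])   # new-policy / old-policy probability
adv = np.array([1.0, 1.0, -2.0])    # advantage estimates
print(round(ppo_clip_objective(ratio, adv), 4))  # → 0.0333
```

The second sample's gain is clipped from 1.5 to 1.2, illustrating how the clip keeps policy updates conservative, useful when the reward is a brittle max-min rate over uncertain underground channels.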
Optimal control problems with discrete-valued inputs are challenging due to the mixed-integer nature of the resulting optimization problems, which are generally intractable for real-time, safety-critical applications. Lossless convexification offers an alternative by reformulating mixed-integer programs as convex programs that can be solved efficiently. This paper develops a lossless convexification for optimal control problems of linear systems. We extend existing results by showing that system normality is preserved when reformulating Lagrange-form problems into Mayer-form via an epigraph transformation, and that, under simple geometric conditions on the input set, the solution to the relaxed convex problem is also the solution to the original non-convex problem. These results enable real-time computation of optimal discrete-valued controls without resorting to mixed-integer optimization. Numerical results from Monte Carlo simulations confirm that the proposed algorithm consistently yields discrete-valued control inputs with computation times compatible with safety-critical real-time applications.
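The mechanism can be seen in a toy instance: relax a binary on/off schedule u_k in {0,1} to its convex hull [0,1]; with a linear cost and the right constraint geometry, the LP optimum is an extreme point and therefore automatically binary, so no integer solver is needed. This is an illustrative instance only (the paper treats general linear systems with normality conditions):

```python
import numpy as np
from scipy.optimize import linprog

# Toy lossless convexification: minimize total firing cost subject to
# exactly two firings, with u_k relaxed from {0,1} to [0,1].
cost = np.array([3.0, 1.0, 2.0, 5.0])   # per-interval firing cost (made up)
A_eq = np.ones((1, 4))                  # sum of u_k must equal 2
res = linprog(cost, A_eq=A_eq, b_eq=[2.0], bounds=[(0, 1)] * 4)
print(res.x)  # → [0. 1. 1. 0.]: a vertex, i.e. a binary schedule
```

The relaxation is "lossless" precisely because the optimum lands on such a vertex; the paper's geometric conditions characterize when that is guaranteed.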
In many countries, declining demand in energy-intensive industries such as cement, steel, and aluminum is leading to industrial overcapacity. Although industrial overcapacity is traditionally envisioned as problematic and resource-wasteful, it could unlock energy-intensive industries' flexibility in electricity use. Here, using China's aluminum smelting industry as a case study, we evaluate the system-level cost-benefit of retaining energy-intensive industries' overcapacity for flexible electricity use in decarbonized energy systems. We find that overcapacity can enable aluminum smelters to adopt a seasonal operation paradigm, ceasing production during winter load peaks that are exacerbated by heating electrification and renewable seasonality. This seasonal operation paradigm could reduce the investment and operational costs of China's decarbonized electricity system by 23-32 billion CNY/year (11-15% of the aluminum smelting industry's product value), sufficient to offset the increased smelter maintenance and product storage costs associated with overcapacity. It may also provide an opportunity for seasonally complementary labor deployment across the aluminum smelting and thermal power generation sectors, offering a potential pathway for mitigating socio-economic disruptions caused by industrial restructuring and energy decarbonization.
This paper introduces DashengTokenizer, a continuous audio tokenizer engineered for joint use in both understanding and generation tasks. Unlike conventional approaches, which train acoustic tokenizers and subsequently integrate frozen semantic knowledge, our method inverts this paradigm: we leverage frozen semantic features and inject acoustic information. In linear evaluation across 22 diverse tasks, our method outperforms previous audio codec and audio encoder baselines by a significant margin while maintaining competitive audio reconstruction quality. Notably, we demonstrate that this acoustic injection improves performance for tasks such as speech emotion recognition, music understanding, and acoustic scene classification. We further evaluate the tokenizer's generative performance on text-to-audio (TTA), text-to-music (TTM), and speech enhancement (SE). Our approach surpasses standard variational autoencoder (VAE)-based methods on TTA and TTM tasks, while its effectiveness on SE underscores its capabilities as a general-purpose audio encoder. Finally, our results challenge the prevailing assumption that VAE-based architectures are a prerequisite for audio synthesis. Checkpoints are available at this https URL.