In urban areas, signal reception conditions are often poor due to reflections from buildings, resulting in inaccurate global navigation satellite system (GNSS)-based positioning. Various 3D-mapping-aided (3DMA) GNSS techniques, including shadow matching, have been proposed to address this issue. However, conventional shadow matching estimates positions in a discretized manner. The accuracy of this approach is limited by the resolution of the grid points representing the candidate receiver positions, making it difficult to achieve robust urban positioning and to ensure that the position estimate satisfies user-specified protection levels or safety bounds. To overcome these limitations, zonotope shadow matching (ZSM) has been proposed, which utilizes a set-based position estimate rather than grid-based estimates. ZSM calculates the GNSS shadow--an area on the ground where the line-of-sight (LOS) is blocked and only non-line-of-sight (NLOS) signals can be received--to estimate the receiver's position set. ZSM distinguishes between LOS and NLOS satellites, determining that the receiver is inside the GNSS shadow if the satellite is NLOS and outside if the satellite is LOS. However, relying solely on GNSS shadows limits the ability to sufficiently reduce the size of the receiver position set and to precisely estimate the receiver's location. To address this, we propose zonotope shadow and reflection matching (ZSRM), which augments GNSS shadows with reflection information to enhance positioning accuracy in urban areas. The proposed ZSRM technique is validated through field tests using GNSS signals collected in an urban environment. In these tests, the RMS horizontal position error of ZSRM improved by 10.0% to 53.6% compared with ZSM, while the RMS cross-street and along-street position bounds improved by 18.0% to 50.1% and 30.7% to 59.3%, respectively.
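As a point of reference, the conventional grid-based shadow matching that the abstract contrasts with set-based ZSM/ZSRM can be sketched as a simple scoring loop: each candidate grid position is scored by how well the LOS/NLOS visibility predicted from a 3D city model agrees with the measured satellite classification. All arrays below are hypothetical placeholders, not data or code from the paper.

```python
# Minimal sketch of conventional grid-based shadow matching (the baseline contrasted
# with set-based ZSM/ZSRM). `predicted_los[i, j]` would come from ray-casting a 3D
# city model; `measured_los[j]` from an NLOS classifier on the received signals.
import numpy as np

rng = np.random.default_rng(0)
n_candidates, n_sats = 500, 12          # candidate grid points, visible satellites

# True/False = LOS/NLOS predicted at each candidate position for each satellite.
predicted_los = rng.random((n_candidates, n_sats)) > 0.4
# Observed LOS/NLOS classification of each satellite at the (unknown) receiver.
measured_los = rng.random(n_sats) > 0.4

# Score = number of satellites whose predicted visibility matches the measurement;
# the estimate is the best-scoring grid point (accuracy limited by grid resolution).
scores = (predicted_los == measured_los).sum(axis=1)
best_candidate = int(np.argmax(scores))
print(f"best grid point: {best_candidate}, score: {scores[best_candidate]}/{n_sats}")
```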
Most Integrated Sensing and Communications (ISAC) systems require dividing airtime between their two modes. However, the specific impact of this decision on sensing performance remains unclear and underexplored. In this paper, we therefore investigate the impact of reduced sensing airtime on a gesture recognition system built on a Millimeter-Wave (mmWave) ISAC system. Using a dataset of per-beam-pair power gathered with two mmWave devices performing constant beam sweeps while test subjects performed distinct gestures, we train a gesture classifier based on Convolutional Neural Networks. We then subsample these measurements, emulating reduced sensing airtime, and show that a sensing airtime of 25% reduces classification accuracy by only 0.15 percentage points compared with full-time sensing. Alongside this high-quality sensing at low airtime, mmWave systems are known to provide extremely high data throughputs, making mmWave ISAC a prime enabler for applications such as truly wireless Extended Reality.
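A minimal sketch of the airtime-subsampling step described above, assuming only that the recording is a frame-by-frame matrix of per-beam-pair power; shapes, names, and the subsampling rule are illustrative rather than the paper's actual pipeline.

```python
# Emulate reduced sensing airtime by keeping only a fraction of beam-sweep frames
# before they are fed to the gesture classifier. Shapes and names are illustrative.
import numpy as np

n_frames, n_beam_pairs = 200, 64
power_per_beam_pair = np.random.randn(n_frames, n_beam_pairs)   # one gesture recording

def subsample_airtime(frames: np.ndarray, airtime_fraction: float) -> np.ndarray:
    """Keep every k-th frame so that roughly `airtime_fraction` of frames remain."""
    step = max(1, int(round(1.0 / airtime_fraction)))
    return frames[::step]

reduced = subsample_airtime(power_per_beam_pair, airtime_fraction=0.25)
print(reduced.shape)   # ~25% of the frames would be fed to the CNN classifier
```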
Congenital Heart Disease (CHD) remains a significant global health concern, affecting approximately 1\% of births worldwide. Phonocardiography has emerged as a cost-effective supplementary tool for diagnosing CHD. However, the performance of these diagnostic models depends heavily on the quality of the phonocardiogram; thus, noise reduction is particularly critical. A supervised UNet effectively improves noise reduction, but the scarcity of clean data hinders its application. The complex time-frequency characteristics of phonocardiograms further complicate the balance between effectively removing noise and preserving pathological features. In this study, we propose a self-supervised phonocardiogram noise reduction model based on Noise2Noise that can be trained without clean data. Augmentation and contrastive learning are applied to enhance its performance. We obtained an average SNR of 12.98 dB after filtering under 10~dB of hospital noise. Classification sensitivity after filtering improved from 27\% to 88\%, indicating promising retention of pathological features in practical noisy environments.
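To make the Noise2Noise idea concrete, the core of the self-supervised objective is a single training step in which the target is another independently noisy observation of the same segment rather than a clean recording. The tiny network, noise levels, and data below are placeholders, not the paper's architecture or dataset.

```python
# Minimal Noise2Noise-style training step: the denoiser is trained to map one noisy
# observation toward another noisy observation of the same underlying signal, so no
# clean data is required. Clean data is synthesized here only to build the toy pairs.
import torch
import torch.nn as nn

model = nn.Sequential(                       # stand-in for the denoising network (e.g., a 1-D UNet)
    nn.Conv1d(1, 16, 9, padding=4), nn.ReLU(),
    nn.Conv1d(16, 1, 9, padding=4),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

clean = torch.randn(8, 1, 4000)                     # placeholder heart-sound segments
noisy_a = clean + 0.3 * torch.randn_like(clean)     # two independent noise realizations
noisy_b = clean + 0.3 * torch.randn_like(clean)

optimizer.zero_grad()
loss = loss_fn(model(noisy_a), noisy_b)             # Noise2Noise: noisy target, no clean label
loss.backward()
optimizer.step()
```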
A key challenge in topology optimization (TopOpt) is that manufacturable structures, being inherently binary, are non-differentiable, creating a fundamental tension with gradient-based optimization. The subpixel-smoothed projection (SSP) method addresses this issue by smoothing sharp interfaces at the subpixel level through a first-order expansion of the filtered field. However, SSP does not guarantee differentiability under topology changes, such as the merging of two interfaces, and therefore violates the convergence guarantees of many popular gradient-based optimization algorithms. We overcome this limitation by regularizing SSP with the Hessian of the filtered field, resulting in a twice-differentiable projected density during such transitions, while still guaranteeing an almost-everywhere binary structure. We demonstrate the effectiveness of our second-order SSP (SSP2) methodology on both thermal and photonic problems, showing that SSP2 has faster convergence than SSP for connectivity-dominant cases -- where frequent topology changes occur -- while exhibiting comparable performance otherwise. Beyond improving convergence guarantees for CCSA optimizers, SSP2 enables the use of a broader class of optimization algorithms with stronger theoretical guarantees, such as interior-point methods. Since SSP2 adds minimal complexity relative to SSP or traditional projection schemes, it can be used as a drop-in replacement in existing TopOpt codes.
Sensor node localization in wireless Internet of Things (IoT) sensor networks is crucial for the effective operation of diverse applications, such as smart cities and smart agriculture. Existing sensor node localization approaches rely heavily on anchor nodes within wireless sensor networks (WSNs). Anchor nodes are sensor nodes equipped with global positioning system (GPS) receivers and thus have known locations. These anchor nodes serve as references for localizing other sensor nodes. However, the presence of anchor nodes may not always be feasible in real-world IoT scenarios. Additionally, localization accuracy can be compromised by fluctuations in the Received Signal Strength Indicator (RSSI), particularly under non-line-of-sight (NLOS) conditions. To address these challenges, we propose UBiGTLoc, a Unified Bidirectional Long Short-Term Memory (BiLSTM)-Graph Transformer Localization framework. The proposed UBiGTLoc framework effectively localizes sensor nodes in both anchor-free and anchor-based WSNs. The framework leverages BiLSTM networks to capture temporal variations in RSSI data and employs Graph Transformer layers to model spatial relationships between sensor nodes. Extensive simulations demonstrate that UBiGTLoc consistently outperforms existing methods and provides robust localization across both dense and sparse WSNs while relying solely on cost-effective RSSI data.
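A simplified sketch of the two ingredients named above: a BiLSTM over each node's RSSI time series and an attention layer across nodes, the latter used here only as a stand-in for the paper's Graph Transformer layers. Dimensions, the 2-D position head, and all names are illustrative assumptions, not UBiGTLoc itself.

```python
# Sketch: BiLSTM encodes per-node RSSI sequences; multi-head attention across nodes
# mimics (in simplified form) message passing between spatially related sensors.
import torch
import torch.nn as nn

class RSSILocalizer(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.bilstm = nn.LSTM(input_size=1, hidden_size=hidden,
                              batch_first=True, bidirectional=True)
        self.node_attn = nn.MultiheadAttention(embed_dim=2 * hidden,
                                               num_heads=4, batch_first=True)
        self.head = nn.Linear(2 * hidden, 2)           # (x, y) position per node

    def forward(self, rssi):                           # rssi: (batch, nodes, time)
        b, n, t = rssi.shape
        seq = rssi.reshape(b * n, t, 1)
        _, (h, _) = self.bilstm(seq)                   # h: (2, b*n, hidden)
        node_emb = h.permute(1, 0, 2).reshape(b, n, -1)
        node_emb, _ = self.node_attn(node_emb, node_emb, node_emb)
        return self.head(node_emb)                     # (batch, nodes, 2)

positions = RSSILocalizer()(torch.randn(4, 20, 50))    # 4 graphs, 20 nodes, 50 RSSI samples
print(positions.shape)
```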
India is the second largest producer of onions in the world, contributing over 26 million tonnes annually. However, during storage, approximately 30-40% of onions are lost to rotting, sprouting, and weight loss. Despite India's status as a major producer, conventional storage methods are either low-cost but ineffective (traditional storage with 40% spoilage) or highly effective but prohibitively expensive for small farmers (cold storage). This paper presents a low-cost IoT-based smart onion storage system that monitors and automatically regulates environmental parameters, including temperature, humidity, and spoilage gases, using an ESP32 microcontroller, a DHT22 sensor, an MQ-135 gas sensor, and UV-C disinfection technology. The proposed system aims to reduce onion spoilage from the current 40-45% wastage rate to 15-20% while remaining affordable for the small and marginal farmers who constitute the majority in India. The system is designed to be cost-effective (estimated 60k-70k INR), energy-efficient, farmer-friendly, and solar-powered.
This document develops a method to solve for the periodic operating point of the Dual-Active-Bridge (DAB) converter.
Data on citywide street-segment traffic volumes are essential for urban planning and sustainable mobility management. Yet such data are available only for a limited subset of streets due to the high costs of sensor deployment and maintenance. Traffic volumes on the remaining network are therefore interpolated based on existing sensor measurements. However, current sensor locations are often determined by administrative priorities rather than by data-driven optimization, leading to biased coverage and reduced estimation performance. This study provides a large-scale, real-world benchmarking of easily implementable, data-driven strategies for optimizing the placement of permanent and temporary traffic sensors, using segment-level data from Berlin (Strava bicycle counts) and Manhattan (taxi counts). It compares spatial placement strategies based on network centrality, spatial coverage, feature coverage, and active learning. In addition, the study examines temporal deployment schemes for temporary sensors. The findings highlight that spatial placement strategies that emphasize even spatial coverage and employ active learning achieve the lowest prediction errors. With only 10 sensors, they reduce the mean absolute error by over 60% in Berlin and 70% in Manhattan compared to alternatives. Temporal deployment choices further improve performance: distributing measurements evenly across weekdays reduces error by an additional 7% in Berlin and 21% in Manhattan. Together, these spatial and temporal principles allow temporary deployments to closely approximate the performance of optimally placed permanent deployments. From a policy perspective, the results indicate that cities can substantially improve data usefulness by adopting data-driven sensor placement strategies, while retaining flexibility in choosing between temporary and permanent deployments.
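A minimal sketch of one spatial-coverage placement heuristic of the kind benchmarked above: greedily pick the street segment farthest from all already-instrumented segments (a k-center-style rule). The coordinates are random placeholders, and this is only one plausible instantiation of an "even spatial coverage" strategy, not the study's exact procedure.

```python
# Greedy farthest-point (k-center style) sensor placement over street-segment centroids.
import numpy as np

rng = np.random.default_rng(1)
segments = rng.random((1000, 2))          # (x, y) centroid of each street segment

def greedy_coverage_placement(coords: np.ndarray, n_sensors: int) -> list[int]:
    chosen = [0]                          # seed with an arbitrary segment
    dist = np.linalg.norm(coords - coords[0], axis=1)
    for _ in range(n_sensors - 1):
        nxt = int(np.argmax(dist))        # farthest segment from the current sensor set
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(coords - coords[nxt], axis=1))
    return chosen

print(greedy_coverage_placement(segments, n_sensors=10))
```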
Background: Artificial intelligence-enabled electrocardiography (AI-ECG) has demonstrated the ability to detect diverse pathologies, but most existing models focus on single-disease identification, neglecting comorbidities and future risk prediction. Although ECGFounder expanded cardiac disease coverage, a holistic health profiling model is still needed. Methods: We constructed a large multicenter dataset comprising 13.3 million ECGs from 2.98 million patients. Using transfer learning, ECGFounder was fine-tuned to develop AnyECG, a foundation model for holistic health profiling. Performance was evaluated using external validation cohorts and a 10-year longitudinal cohort for current diagnosis, future risk prediction, and comorbidity identification. Results: AnyECG demonstrated systemic predictive capability across 1172 conditions, achieving an AUROC greater than 0.7 for 306 diseases. The model revealed novel disease associations, robust comorbidity patterns, and future disease risks. Representative examples included high diagnostic performance for hyperparathyroidism (AUROC 0.941), type 2 diabetes (0.803), Crohn disease (0.817), lymphoid leukemia (0.856), and chronic obstructive pulmonary disease (0.773). Conclusion: The AnyECG foundation model provides substantial evidence that AI-ECG can serve as a systemic tool for concurrent disease detection and long-term risk prediction.
Rotating bearings play an important role in modern industry but are prone to defects because they operate at high speeds, under high loads, and in harsh environments. Delays in diagnosing a bearing defect can therefore cause economic losses and even loss of life. Moreover, since the vibration sensor from which the signal is collected is strongly affected by the operating environment and surrounding noise, accurate defect diagnosis in noisy environments is also important. In this paper, we propose a lightweight and strong robustness network (LSR-Net) that is accurate in noisy environments and enables real-time fault diagnosis. To this end, first, a denoising and feature enhancement module (DFEM) was designed to create a 3-channel 2D matrix by applying several nonlinearities to the feature map that has passed through the denoising module (DM) block, which is composed of convolution-based denoising (CD) blocks. Moreover, adaptive pruning was applied to the DM to improve its denoising ability when the noise power is high. Second, for a lightweight model design, a convolution-based efficiency shuffle (CES) block was designed using group convolution (GConv), group pointwise convolution (GPConv), and channel splitting, keeping the parameter count low. In addition, the trade-off between accuracy and computational complexity arising from the lightweight design was mitigated using attention mechanisms and channel shuffle. The fault diagnosis performance of the proposed model was verified on vibration signals in a noisy environment. The results confirm that the proposed model has the best anti-noise ability among the benchmark models while also having the lowest computational complexity.
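An illustrative PyTorch sketch of the generic building blocks named above (channel split, group convolutions, and channel shuffle), in the style of ShuffleNet-type blocks. It is not the exact CES block of LSR-Net; channel counts and group sizes are assumptions.

```python
# Channel split -> grouped 3x3 conv (GConv) -> grouped 1x1 conv (GPConv) -> concat
# -> channel shuffle, which mixes information across groups at negligible cost.
import torch
import torch.nn as nn

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    b, c, h, w = x.shape
    return x.view(b, groups, c // groups, h, w).transpose(1, 2).reshape(b, c, h, w)

class ShuffleBlock(nn.Module):
    def __init__(self, channels: int, groups: int = 2):
        super().__init__()
        half = channels // 2
        self.branch = nn.Sequential(
            nn.Conv2d(half, half, 3, padding=1, groups=groups, bias=False),  # GConv
            nn.BatchNorm2d(half), nn.ReLU(inplace=True),
            nn.Conv2d(half, half, 1, groups=groups, bias=False),             # GPConv
            nn.BatchNorm2d(half), nn.ReLU(inplace=True),
        )
        self.groups = groups

    def forward(self, x):
        left, right = x.chunk(2, dim=1)                 # channel split
        out = torch.cat([left, self.branch(right)], dim=1)
        return channel_shuffle(out, self.groups)        # mix channels across groups

print(ShuffleBlock(32)(torch.randn(1, 32, 64, 64)).shape)
```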
The Crack Topology Score (CTS) is a recently proposed metric that focuses on evaluating the topological correctness of crack segmentation outputs. While pixel-wise metrics such as IoU or F1-score fail to capture structural validity, CTS offers a skeleton-based matching framework to measure the preservation of connectivity. This paper presents a faithful implementation of the CTS metric, along with optional preprocessing extensions designed to handle common prediction artifacts (e.g., small holes and edge noise) found in deep learning outputs. All extensions are disabled by default to ensure strict comparability with the original definition. The implementation supports PyTorch-based workflows and includes visualization tools for transparency. Code and archival resources will be made available at this https URL.
Sparse recovery methods are essential for channel estimation and localization in modern communication systems, but their reliability depends on accurate physical models, which are rarely perfectly known. Their computational complexity also grows rapidly with the dictionary dimensions in large MIMO systems. In this paper, we propose MOMPnet, a novel unfolded sparse recovery framework that addresses both the reliability and complexity challenges of traditional methods. By integrating deep unfolding with data-driven dictionary learning, MOMPnet mitigates hardware impairments while preserving interpretability. Instead of a single large dictionary, multiple smaller, independent dictionaries are employed, enabling a low-complexity multidimensional Orthogonal Matching Pursuit algorithm. The proposed unfolded network is evaluated on realistic channel data against multiple baselines, demonstrating its strong performance and potential.
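For reference, classical single-dictionary Orthogonal Matching Pursuit (OMP) is the model-based building block that such unfolded networks start from; the multidimensional and learned-dictionary aspects of MOMPnet are not reproduced in this sketch.

```python
# Classical OMP: greedily pick the dictionary atom most correlated with the residual,
# then re-fit the coefficients on the selected support by least squares.
import numpy as np

def omp(A: np.ndarray, y: np.ndarray, n_atoms: int) -> np.ndarray:
    """Recover a sparse x with y ~= A @ x by greedy atom selection."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1], dtype=A.dtype)
    for _ in range(n_atoms):
        correlations = np.abs(A.conj().T @ residual)
        support.append(int(np.argmax(correlations)))         # best-matching atom
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs                 # re-fit on the support
    x[support] = coeffs
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))
x_true = np.zeros(256); x_true[[10, 100, 200]] = [1.0, -2.0, 0.5]
x_hat = omp(A, A @ x_true, n_atoms=3)
print(np.nonzero(x_hat)[0])          # should recover indices 10, 100, 200
```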
Traditional sensor data collection produces large volumes of data, which leads to constant power consumption and requires more storage space. This study proposes a data acquisition and processing algorithm based on the discrete Fourier transform (DFT), which extracts dominant frequency components using harmonic analysis (HA) to identify frequency peaks. This algorithm allows sensors to activate only when an event occurs, while preserving the critical information needed to detect defects, such as those in the surface structures of buildings, and ensuring accuracy for further predictions.
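A minimal sketch of the dominant-frequency extraction step described above: take the DFT of a sensor window and keep the strongest spectral peaks. The signal, sampling rate, and number of retained components are illustrative only.

```python
# Extract the k strongest frequency components of a sensor window via the DFT.
import numpy as np

fs = 1000.0                                        # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)
signal = (np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 120 * t)
          + 0.1 * np.random.randn(t.size))

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)

k = 3                                              # keep the k strongest components
dominant = freqs[np.argsort(spectrum)[-k:][::-1]]
print("dominant frequencies (Hz):", dominant)      # an 'event' could be flagged when these shift
```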
In this paper, we address the radar detection of low observable targets with the assistance of a reconfigurable intelligent surface (RIS). Instead of using a multistatic radar network as a counter-stealth strategy, with its synchronization, cost, phase-coherence, and energy-consumption issues, we exploit a RIS to form a joint monostatic and bistatic configuration that can intercept the energy backscattered by the target along directions away from the radar line of sight. This energy is then redirected towards the radar, which exploits all of the backscattered energy to detect the low observable target. To this end, five different detection architectures are devised that jointly process monostatic and bistatic echoes and exhibit the constant false alarm rate property, at least with respect to the clutter power. To support practical implementation, we also provide a guideline for the design of a RIS that satisfies the operating requirements of the considered application. The performance analysis is carried out in comparison with conventional detectors and shows that the proposed strategy provides effective solutions for the detection of low observable targets.
The transition to electric vehicles (EVs) depends heavily on the reliability of charging infrastructure, yet approximately 1 in 5 drivers report being unable to charge during station visits due to inoperable equipment. While regulatory efforts such as the National Electric Vehicle Infrastructure (NEVI) program have established uptime requirements, these metrics are often simplistic, delayed, and fail to provide the diagnostic granularity needed by Charging Site Operators (CSOs). Despite their pivotal role in maintaining and improving site performance, CSOs have been largely overlooked by existing reporting standards. In this paper, we propose a suite of readily computable, actionable performance metrics (Fault Time, Fault-Reason Time, and Unreachable Time) that decompose charger behavior into operationally meaningful states. Unlike traditional uptime, these metrics are defined over configurable periods and distinguish between hardware malfunctions and network connectivity issues. We demonstrate the implementation of these metrics via an open-source tool that derives performance data from existing infrastructure without requiring hardware modifications. A case study involving 98 chargers at a California academic institution spanning 2018-2024 demonstrates that these metrics reveal persistent "zombie chargers" and high-frequency network instability that remain hidden in standard annual reporting.
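A sketch of how metrics like Fault Time and Unreachable Time could be derived from charger status intervals over a configurable reporting period. The status values and record layout are hypothetical and are not the schema of the open-source tool mentioned above.

```python
# Accumulate time spent in a given status, clipped to a configurable reporting period.
from datetime import datetime, timedelta

# (start, end, status) intervals reported for one charger (hypothetical data)
intervals = [
    (datetime(2024, 1, 1, 0, 0), datetime(2024, 1, 1, 6, 0), "available"),
    (datetime(2024, 1, 1, 6, 0), datetime(2024, 1, 1, 9, 0), "faulted"),
    (datetime(2024, 1, 1, 9, 0), datetime(2024, 1, 1, 10, 0), "unreachable"),
    (datetime(2024, 1, 1, 10, 0), datetime(2024, 1, 2, 0, 0), "available"),
]

def time_in_status(records, status, period_start, period_end) -> timedelta:
    """Total time spent in `status`, clipped to the reporting period."""
    total = timedelta()
    for start, end, s in records:
        if s != status:
            continue
        lo, hi = max(start, period_start), min(end, period_end)
        if hi > lo:
            total += hi - lo
    return total

day = (datetime(2024, 1, 1), datetime(2024, 1, 2))
print("Fault Time:", time_in_status(intervals, "faulted", *day))
print("Unreachable Time:", time_in_status(intervals, "unreachable", *day))
```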
This paper develops a generalized finite horizon recursive solution to the discrete time signal bound disturbance attenuation regulator (SiDAR) for state feedback control. This problem addresses linear dynamical systems subject to signal bound disturbances, i.e., disturbance sequences whose squared signal two-norm is bounded by a fixed budget. The term generalized indicates that the results accommodate arbitrary initial states. By combining game theory and dynamic programming, we derive a recursive solution for the optimal state feedback policy valid for arbitrary initial states. The optimal policy is nonlinear in the state and requires solving a tractable convex scalar optimization for the Lagrange multiplier at each stage; the control is then explicit. For fixed disturbance budget $\alpha$, the state space partitions into two distinct regions: $\mathcal{X}_L(\alpha)$, where the optimal control policy is linear and coincides with the standard linear $H_{\infty}$ state feedback control, and $\mathcal{X}_{NL}(\alpha)$, where the optimal control policy is nonlinear. We establish monotonicity and boundedness of the associated Riccati recursions and characterize the geometry of the solution regions. A numerical example illustrates the theoretical properties. This work provides a complete feedback solution to the finite horizon SiDAR for arbitrary initial states. Companion papers address the steady-state problem and convergence properties for the signal bound case, and the stage bound disturbance attenuation regulator (StDAR).
This paper establishes convergence and steady-state properties for the signal bound disturbance attenuation regulator (SiDAR). Building on the finite horizon recursive solution developed in a companion paper, we introduce the steady-state SiDAR and derive its tractable linear matrix inequality (LMI) with $O(n^3)$ complexity. Systems are classified as degenerate or nondegenerate based on steady-state solution properties. For nondegenerate systems, the finite horizon solution converges to the steady-state solution for all states as the horizon approaches infinity. For degenerate systems, convergence holds in one region of the state space, while a turnpike arises in the complementary region. When convergence holds, the optimal multiplier and control gain are obtained directly from the LMI solution. Numerical examples illustrate convergence behavior and turnpike phenomena. Companion papers address the finite horizon SiDAR solution and the stage bound disturbance attenuation regulator (StDAR).
This paper develops a generalized finite horizon recursive solution to the discrete time stage bound disturbance attenuation regulator (StDAR) for state feedback control. This problem addresses linear dynamical systems subject to stage bound disturbances, i.e., disturbance sequences constrained independently at each time step through stagewise squared two-norm bounds. The term generalized indicates that the results accommodate arbitrary initial states. By combining game theory and dynamic programming, this work derives a recursive solution for the optimal state feedback policy. The optimal policy is nonlinear in the state and requires solving a tractable convex optimization for the Lagrange multiplier vector at each stage; the control is then explicit. For systems with constant stage bound, the problem admits a steady-state optimization expressed as a tractable linear matrix inequality (LMI) with $O(n^3)$ complexity. Numerical examples illustrate the properties of the solution. This work provides a complete feedback solution to the StDAR for arbitrary initial states. Companion papers address the signal bound disturbance attenuation regulator (SiDAR): the finite horizon solution in Part~I-A and convergence properties in Part~I-B.
The rapid growth of radio access networks (RANs) is increasing energy consumption and challenging the sustainability of future systems. We consider a dense-urban vertical heterogeneous network (vHetNet) comprising a high-altitude platform station (HAPS) acting as a super macro base station, a terrestrial macro base station (MBS), and multiple small base stations (SBSs). We propose a HAPS-enhanced cell-switching algorithm that selectively deactivates SBSs based on their traffic load and the capacity and channel conditions of both the MBS and HAPS. The resulting energy-minimization problem, subject to an outage-based quality-of-service (QoS) constraint, is formulated as a mixed-integer nonlinear program and reformulated into a mixed-integer program for efficient solution. Using realistic 3GPP channel models, simulations show substantial energy savings versus All-ON, terrestrial cell switching, and sorting benchmarks. Relative to All-ON, the proposed method reduces power consumption by up to 77% at low loads and about 40% at high loads; a NoQoS variant achieves up to 90% and 47%, respectively. The approach maintains high served-traffic levels and provides a tunable trade-off between power efficiency and outage-based QoS, supporting scalable and sustainable 6G deployments.
AI-communication integration is widely regarded as a core enabling technology for 6G. Most existing AI-based physical-layer designs rely on task-specific models that are separately tailored to individual modules, resulting in poor generalization. In contrast, communication systems are inherently general-purpose and should support broad applicability and robustness across diverse scenarios. Foundation models offer a promising solution through strong reasoning and generalization, yet wireless-system constraints hinder a direct transfer of large language model (LLM)-style success to the wireless domain. Therefore, we introduce the concept of large wireless foundation models (LWFMs) and present a novel framework for empowering the physical layer with foundation models under wireless constraints. Specifically, we propose two paradigms for realizing LWFMs, including leveraging existing general-purpose foundation models and building novel wireless foundation models. Based on recent progress, we distill two roadmaps for each paradigm and formulate design principles under wireless constraints. We further provide case studies of LWFM-empowered wireless systems to intuitively validate their advantages. Finally, we characterize the notion of "large" in LWFMs through a multidimensional analysis of existing work and outline promising directions for future research.
Wi-Fi tracking technology demonstrates promising potential for future smart home and intelligent family care. Currently, accurate Wi-Fi tracking methods rely primarily on fine-grained velocity features. However, such velocity-based approaches suffer from accumulative errors, making it challenging to stably track users' trajectories over long periods of time. This paper presents DuTrack, a fusion-based tracking system for stable human tracking. The fundamental idea is to leverage the ubiquitous acoustic signals in households to rectify the accumulative Wi-Fi tracking error. Theoretically, Wi-Fi sensing in line-of-sight (LoS) and non-line-of-sight (NLoS) scenarios can be modeled as elliptical Fresnel zones and hyperbolic zones, respectively. By designing acoustic sensing signals, we are able to model the acoustic sensing zones as a series of hyperbolic clusters. We reveal how to fuse the fields of electromagnetic waves and mechanical waves, and establish the corresponding optimization equation. Next, we design a data-driven architecture to solve the aforementioned optimization equation. Experimental results show that the proposed multimodal tracking scheme exhibits superior performance. We achieve an 89.37% reduction in median tracking error compared to model-based methods and a 65.02% reduction compared to data-driven methods.
Soft, stretchable organic field-effect transistors (OFETs) can provide powerful on-skin signal conditioning, but current fabrication methods are often material-specific: each new polymer semiconductor (PSC) requires a tailored process. The challenge is even greater for complementary OFET circuits, where two PSCs must be patterned sequentially, which often leads to device degradation. Here, we introduce a universal, monolithic photolithography process that enables high-yield, high-resolution stretchable complementary OFETs and circuits. This approach is enabled by a process-design framework that includes (i) a direct, photopatternable, solvent-resistant, crosslinked dielectric/semiconductor interface, (ii) broadly applicable crosslinked PSC blends that preserve high mobility, and (iii) a patterning strategy that provides simultaneous etch masking and encapsulation. Using this platform, we achieve record integration density for stretchable OTFTs (55,000 cm^-2), channel lengths down to 2 um, and low-voltage operation at 5 V. We demonstrate photopatterning across multiple PSC types and realize complementary circuits, including 3 kHz stretchable ring oscillators, the first to exceed 1 kHz and representing more than a 60-fold increase in stage switching speed over the state of the art. Finally, we demonstrate the first stretchable complementary OTFT neuron circuit, where the output frequency is modulated by the input current to mimic neuronal signal processing. This scalable approach can be readily extended to diverse high-performance stretchable materials, accelerating the development and manufacturing of skin-like electronics.
Thermal energy storage (TES) systems coupled with heat pumps offer significant potential for improving building energy efficiency by shifting electricity demand to off-peak hours. However, conventional operating strategies maintain conservatively low chilled water temperatures throughout the cooling season, a practice that results in suboptimal heat pump performance. This study proposes a physics-based integrated simulation framework to determine the maximum feasible chilled water supply temperature while ensuring cooling stability. The framework integrates four submodels: relative humidity prediction, dynamic cooling load estimation, cooling coil performance prediction, and TES discharge temperature prediction. Validation against measured data from an office building demonstrates reliable accuracy across all submodels (e.g., CVRMSE of 9.3% for cooling load and R2 of 0.91 for peak-time discharge temperature). The integrated simulation reveals that the proposed framework can increase the daily initial TES charging temperature by an average of 2.55 °C compared to conventional fixed-temperature operation, enabling the heat pump to operate at a higher coefficient of performance. This study contributes a practical methodology for optimizing TES charging temperatures in building heating, ventilation, and air conditioning (HVAC) systems while maintaining indoor setpoint temperatures.
Vehicular fog computing (VFC) is a promising paradigm for reducing the computation burden of vehicles, thus supporting delay-sensitive services in next-generation transportation networks. However, traditional VFC schemes rely on radio frequency (RF) communications, which limits their adaptability for dense vehicular environments. In this paper, a heterogeneous visible light communication (VLC)-RF architecture is designed for VFC systems to facilitate efficient task offloading. Specifically, computing tasks are dynamically partitioned and offloaded to idle vehicles via both VLC and RF links, thereby fully exploiting the interference resilience of VLC and the coverage advantage of RF. To minimize the average task processing delay (TPD), an optimization problem of task offloading and computing resource allocation is formulated, and then solved by the developed residual-based majorization-minimization (RBMM) algorithm. Simulation results confirm that the heterogeneous VLC-RF architecture with the proposed algorithm achieves a 15% average TPD reduction compared to VFC systems relying solely on VLC or RF.
Wi-Fi sensing technology enables non-intrusive, continuous monitoring of user locations and activities, which supports diverse smart home applications. Since different sensing tasks exhibit contextual relationships, their integration can enhance individual module performance. However, integrating sensing tasks across different research efforts faces challenges due to the absence of two key elements. The first is a unified architecture that captures the fundamental nature shared across diverse sensing tasks. The second is an extensible pipeline that can integrate sensing methodologies proposed in potential future research. This paper presents Uni-Fi, an extensible framework for multi-task Wi-Fi sensing integration. This paper makes the following contributions. First, we propose a unified theoretical framework that reveals the fundamental differences between single-task and multi-task sensing. Second, we develop a scalable sensing pipeline that automatically generates multi-task sensing solvers, enabling seamless integration of multiple sensing models. Experimental results show that Uni-Fi achieves robust performance across tasks, with a localization error of approximately 0.54 meters, 98.34 percent accuracy for activity classification, and 98.57 percent accuracy for presence detection.
No-reference video quality assessment (NR-VQA) estimates perceptual quality without a reference video, which is often challenging. While recent techniques leverage saliency or transformer attention, they address the global context of the video signal only by using static maps as auxiliary inputs rather than embedding context fundamentally within the feature extraction of the video sequence. We present Dynamic Attention with Global Registers for Video Quality Assessment (DAGR-VQA), the first framework to integrate register tokens directly into a convolutional backbone for spatio-temporal, dynamic saliency prediction. By embedding learnable register tokens as global context carriers, our model enables dynamic, HVS-inspired attention, producing temporally adaptive saliency maps that track salient regions over time without explicit motion estimation. Our model integrates dynamic saliency maps with RGB inputs, capturing spatial data and analyzing it through a temporal transformer to deliver a perceptually consistent video quality assessment. Comprehensive tests conducted on the LSVQ, KonVid-1k, LIVE-VQC, and YouTube-UGC datasets show that the performance is highly competitive, surpassing the majority of top baselines. Ablation studies demonstrate that the integration of register tokens promotes stable and temporally consistent attention mechanisms. Achieving an efficiency of 387.7 FPS at 1080p, DAGR-VQA demonstrates computational performance suitable for real-time applications such as multimedia streaming systems.
Interpretation of imaging findings based on morphological characteristics is important for diagnosing pulmonary nodules on chest computed tomography (CT) images. In this study, we constructed a visual question answering (VQA) dataset from structured data in an open dataset and investigated an image-finding generation method for chest CT images, with the aim of enabling interactive diagnostic support that presents findings based on questions that reflect physicians' interests rather than fixed descriptions. Chest CT images from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset were used. Regions of interest surrounding the pulmonary nodules were extracted from these images, and image findings and questions were defined based on morphological characteristics recorded in the database. A dataset comprising pairs of cropped images, corresponding questions, and image findings was constructed, and the VQA model was fine-tuned on it. Language evaluation metrics such as BLEU were used to evaluate the generated image findings. The VQA dataset constructed using the proposed method contained image findings with natural expressions as radiological descriptions. In addition, the generated image findings showed a high CIDEr score of 3.896, and high agreement with the reference findings was obtained through evaluation based on morphological characteristics. We constructed a VQA dataset for chest CT images using structured information on the morphological characteristics from the LIDC-IDRI dataset. Methods for generating image findings in response to these questions were also investigated. Based on the generated results and evaluation metric scores, the proposed method was effective as an interactive diagnostic support system that can present image findings according to physicians' interests.
Recently, computer-aided diagnosis systems have been developed to support diagnosis, but their performance depends heavily on the quality and quantity of training data. However, in clinical practice, it is difficult to collect a large number of CT images for specific cases, such as small cell carcinoma with low epidemiological incidence or benign tumors that are difficult to distinguish from malignant ones. This leads to the challenge of data imbalance. In this study, to address this issue, we proposed a method to automatically generate chest CT nodule images that capture target features using latent diffusion models (LDM) and verified its effectiveness. Using the LIDC-IDRI dataset, we created pairs of nodule images and finding-based text prompts based on physician evaluations. For the image generation models, we used Stable Diffusion version 1.5 (SDv1) and 2.0 (SDv2), which are types of LDM. Each model was fine-tuned using the created dataset. During the generation process, we adjusted the guidance scale (GS), which indicates the fidelity to the input text. Both quantitative and subjective evaluations showed that SDv2 (GS = 5) achieved the best performance in terms of image quality, diversity, and text consistency. In the subjective evaluation, no statistically significant differences were observed between the generated images and real images, confirming that the quality was equivalent to that of real clinical images. We proposed a method for generating chest CT nodule images based on input text using LDM. Evaluation results demonstrated that the proposed method could generate high-quality images that successfully capture specific medical features.
Bistatic integrated sensing and communication (ISAC) enables efficient reuse of the existing cellular infrastructure and is likely to play an important role in future sensing networks. In this context, ISAC using the data channel is a promising approach to improve the bistatic sensing performance compared to relying solely on pilots. One of the challenges associated with this approach is resource allocation: the communication link aims to transmit higher modulation order (MO) symbols to maximize the throughput, whereas a lower MO is preferable for sensing to achieve a higher signal-to-noise ratio in the radar image. To address this conflict, this paper introduces a hybrid resource allocation scheme. By placing lower MO symbols as pseudo-pilots on a suitable sensing grid, we enhance the bistatic sensing performance while only slightly reducing the spectral efficiency of the communication link. Simulation results validate our approach against different baselines and provide practical insights into how decoding errors affect the sensing performance.
We propose Comprehensive Robust Dynamic Mode Decomposition (CR-DMD), a novel framework that robustifies the entire DMD process, from mode extraction to dimensional reduction, against mixed noise. Although standard DMD is widely used for uncovering spatio-temporal patterns and constructing low-dimensional models of dynamical systems, it suffers from significant performance degradation under noise due to its reliance on least-squares estimation for computing the linear time evolution operator. Existing robust variants typically modify the least-squares formulation, but they remain unstable and fail to ensure faithful low-dimensional representations. First, we introduce a convex optimization-based preprocessing method designed to effectively remove mixed noise, achieving accurate and stable mode extraction. Second, we propose a new convex formulation for dimensional reduction that explicitly links the robustly extracted modes to the original noisy observations, constructing a faithful representation of the original data via a sparse weighted sum of the modes. Both stages are efficiently solved by a preconditioned primal-dual splitting method. Experiments on fluid dynamics datasets demonstrate that CR-DMD consistently outperforms state-of-the-art robust DMD methods in terms of mode accuracy and fidelity of low-dimensional representations under noisy conditions.
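For context, the standard (exact) DMD that CR-DMD robustifies is a short SVD-based least-squares computation; the sketch below shows where noise enters through the pseudo-inverse. Data and rank are illustrative.

```python
# Standard exact DMD: fit a rank-r linear operator between consecutive snapshots.
import numpy as np

def dmd(X: np.ndarray, rank: int):
    """X: snapshots as columns, shape (n_states, n_snapshots)."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, V = U[:, :rank], s[:rank], Vh.conj().T[:, :rank]
    A_tilde = U.conj().T @ X2 @ V / s            # reduced linear evolution operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = X2 @ V / s @ W                       # exact DMD modes
    return eigvals, modes

# Toy data: two oscillating spatial patterns plus noise
t = np.linspace(0, 4 * np.pi, 200)
x = np.linspace(-5, 5, 100)[:, None]
data = np.sin(x) * np.cos(t) + 0.5 * np.cos(2 * x) * np.sin(2 * t) + 0.05 * np.random.randn(100, 200)
eigvals, modes = dmd(data, rank=4)
print(np.abs(eigvals))                            # |eigenvalue| close to 1 for oscillatory modes
```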
In many scenarios, it is natural to model a plant's dynamical behavior using a hybrid dynamical system influenced by exogenous continuous-time inputs. While solution concepts and analytical tools for existence and completeness are well established for autonomous hybrid systems, corresponding results for hybrid dynamical systems involving continuous-time inputs are generally lacking. This work aims to address this gap. We first formalize notions of a solution for such systems. We then provide conditions that guarantee the existence and forward completeness of solutions. Moreover, we leverage results and ideas from viability theory to present more explicit conditions in terms of various tangent cone formulations. Variants are provided that depend on the regularity of the exogenous input signals.
This study investigates the application of modern control theory to improve the precision of spacecraft orbit maneuvers in low Earth orbit (LEO) under the influence of solar radiation pressure (SRP). A full-order observer-based feedback control framework is developed to estimate system states and compensate for external disturbances during the trajectory correction phase following main engine cutoff. The maneuver trajectory is generated using Lambert guidance, while the observer-based controller ensures accurate tracking of the target orbit despite SRP perturbations. The effectiveness of the proposed design is assessed through stability, observability, and controllability analyses. Stability is validated by step-response simulations and eigenvalue distributions of the system dynamics. Observability is demonstrated through state-matrix rank analysis, confirming complete state estimation. Controllability is verified using state feedback rank conditions and corresponding control performance plots. Comparative simulations highlight that, in contrast to uncontrolled or conventional control cases, the observer-based controller achieves improved trajectory accuracy and robust disturbance rejection with moderate control effort. These findings indicate that observer-based feedback control offers a reliable and scalable solution for precision orbital maneuvering in LEO missions subject to environmental disturbances.
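To illustrate the full-order observer idea in its simplest form, the sketch below runs a discrete-time Luenberger observer on a double-integrator stand-in; the paper's actual plant is the orbital dynamics with SRP and Lambert guidance, and the gain below is hand-placed for this toy system only.

```python
# Discrete-time full-order (Luenberger) observer: a model copy driven by the output error.
# The gain L places the eigenvalues of (A - L C) at 0.5 and 0.6 for this toy system.
import numpy as np

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.9], [2.0]])                  # hand-placed observer gain

x = np.array([[1.0], [0.0]])                  # true state (position, velocity)
x_hat = np.zeros((2, 1))                      # observer estimate
for k in range(50):
    u = np.array([[0.1]])                     # placeholder control input
    y = C @ x                                 # measured output
    x_hat = A @ x_hat + B @ u + L @ (y - C @ x_hat)
    x = A @ x + B @ u
print("estimation error:", (x - x_hat).ravel())   # converges toward zero
```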
This paper presents the design, fabrication, and characterization of broadband liquid crystal (LC) reconfigurable intelligent surfaces (RISs) operating around 60 GHz and scaling up to 750 radiating elements. The RISs employ a delay-line architecture (DLA) that decouples the phase-shifting and radiating layers, enabling wide bandwidth, continuous phase control exceeding 360°, and fast response times with an LC layer only 4.6 micrometers thick. Two prototypes with 120 and 750 elements are realized using identical unit cells and column-wise biasing. Measurements demonstrate beam steering over ±60° and -3 dB bandwidths exceeding 9% for both apertures, confirming the scalability of the proposed architecture. On top of a measured nanowatt power consumption per unit cell, aperture efficiencies above 20% are predicted by simulations. While the measured efficiencies are reduced to 9.2% and 2.6%, a detailed analysis verifies that this reduction can be attributed to technological challenges in a laboratory environment. Finally, a comprehensive comparison between the applied DLA-based LC-RIS and a conventional approach highlights the superior potential of the applied architecture.
This paper considers data-based solutions of linear-quadratic nonzero-sum differential games. Two cases are considered. First, the deterministic game is solved and Nash equilibrium strategies are obtained by using persistently exciting data from the multiagent system. Then, a stochastic formulation of the game is considered, where each agent measures a different noisy output signal and state observers must be designed for each player. It is shown that the proposed data-based solutions of these games are equivalent to known model-based procedures. The resulting data-based solutions are validated in a numerical experiment.
Multi-hop collaboration offers new perspectives for enhancing task execution efficiency by increasing the number of distributed collaborators available for resource sharing. Consequently, selecting trustworthy collaborators becomes critical for realizing effective multi-hop collaboration. However, evaluating device trust requires the consideration of multiple factors, including relatively stable factors, such as historical interaction data, and dynamic factors, such as varying resources and network conditions. This differentiation makes it challenging to accurately evaluate such composite trust factors with a single evaluation approach. To address this challenge, this paper proposes a composite and staged trust evaluation (CSTE) mechanism, in which stable and dynamic factors are separately evaluated at different stages and then integrated for a final trust decision. First, a device interaction graph is constructed from stable historical interaction data to represent direct trust relationships between devices. A graph neural network framework is then used to propagate and aggregate these trust relationships to produce the historical trustworthiness of devices. In addition, a task-specific trust evaluation method is developed to assess the dynamic resources of devices based on task requirements, which yields the task-specific resource trustworthiness of devices. After these evaluations, CSTE integrates their results to identify devices within the network topology that satisfy the minimum trust thresholds of tasks. These identified devices then form a trusted topology. Finally, within this trusted topology, an A* search algorithm is employed to construct a multi-hop collaboration path that satisfies the task requirements. Experimental results demonstrate that CSTE outperforms the compared algorithms in identifying paths with the highest average trust values.
The Internet of Things (IoT) has become integral to modern technology, enhancing daily life and industrial processes through seamless connectivity. However, the rapid expansion of IoT systems presents significant sustainability challenges, such as high energy consumption and inefficient resource management. Addressing these issues is critical for the long-term viability of IoT networks. Machine learning (ML), with its proven success across various domains, offers promising solutions for optimizing IoT operations. ML algorithms can learn directly from raw data, uncovering hidden patterns and optimizing processes in dynamic environments. Executing ML at the edge of IoT networks can further enhance sustainability by reducing bandwidth usage, enabling real-time decision-making, and improving data privacy. Additionally, testing ML models on actual hardware is essential to ensure satisfactory performance under real-world conditions, as it captures the complexities and constraints of real-world IoT deployments. Combining ML at the edge and actual hardware testing, therefore, increases the reliability of ML models to effectively improve the sustainability of IoT systems. The present systematic literature review explores how ML can be utilized to enhance the sustainability of IoT networks, examining current methodologies, benefits, challenges, and future opportunities. Through our analysis, we aim to provide insights that will drive future innovations in making IoT networks more sustainable.
This paper studies microfluidic molecular communication receivers with finite-capacity Langmuir adsorption driven by an effective surface concentration. In the reaction-limited regime, we derive a closed-form single-pulse response kernel and a symbol-rate recursion for on-off keying that explicitly exposes channel memory and inter-symbol interference. We further develop short-pulse and long-pulse approximations, revealing an interference asymmetry in the long-pulse regime due to saturation. To account for stochasticity, we adopt a finite-receptor binomial counting model, employ pulse-end sampling, and propose a low-complexity midpoint-threshold detector that reduces to a fixed threshold when interference is negligible. Numerical results corroborate the proposed characterization and quantify detection performance versus pulse and symbol durations.
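For readers unfamiliar with the receiver model, the reaction-limited Langmuir kinetics referenced above admit a standard closed-form response under a constant effective surface concentration; the equations below use generic textbook notation, which need not match the paper's symbols.

% Standard Langmuir kinetics in the reaction-limited regime (textbook notation).
% \theta is the occupied fraction of the finite receptor population and c the
% effective surface concentration during a pulse.
\begin{align}
  \frac{d\theta}{dt} &= k_{\mathrm{on}}\, c\,\bigl(1-\theta(t)\bigr) - k_{\mathrm{off}}\,\theta(t),\\
  \theta(t) &= \theta_{\infty} + \bigl(\theta(0) - \theta_{\infty}\bigr) e^{-t/\tau},
  \qquad
  \theta_{\infty} = \frac{k_{\mathrm{on}} c}{k_{\mathrm{on}} c + k_{\mathrm{off}}},
  \quad
  \tau = \frac{1}{k_{\mathrm{on}} c + k_{\mathrm{off}}}.
\end{align}

The saturation of $\theta_{\infty}$ below one is the finite-capacity effect that produces the long-pulse interference asymmetry mentioned in the abstract.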
We develop a learning-based framework for constructing shrinking disturbance-invariant tubes under state- and input-dependent uncertainty, intended as a building block for tube Model Predictive Control (MPC), and certify safety via a lifted, isotone (order-preserving) fixed-point map. Gaussian Process (GP) posteriors become $(1-\alpha)$ credible ellipsoids, then polytopic outer sets for deterministic set operations. A two-time-scale scheme separates learning epochs, where these polytopes are frozen, from an inner, outside-in iteration that converges to a compact fixed point $Z^\star\!\subseteq\!\mathcal G$; its state projection is RPI for the plant. As data accumulate, disturbance polytopes tighten, and the associated tubes nest monotonically, resolving the circular dependence between the set to be verified and the disturbance model while preserving hard constraints. A double-integrator study illustrates shrinking tube cross-sections in data-rich regions while maintaining invariance.
Microwave linear analog computers (MiLACs) have recently emerged as a promising solution for future gigantic multiple-input multiple-output (MIMO) systems, enabling beamforming with greatly reduced hardware and computational cost. However, channel estimation for MiLAC-aided systems remains an open problem. Conventional least squares (LS) and minimum mean square error (MMSE) estimation rely on intensive digital computation, which undermines the benefits offered by MiLACs. In this letter, we propose efficient LS and MMSE channel estimation schemes for MiLAC-aided MIMO systems. By designing training precoders and combiners implemented by MiLACs, both LS and MMSE estimation are performed fully in the analog domain, achieving identical performance to their digital counterparts while significantly reducing computational complexity, transmit RF chains, analog-to-digital/digital-to-analog converters (ADCs/DACs) resolution requirements, and peak-to-average power ratio (PAPR). Numerical results verify the effectiveness and advantages of the proposed schemes.
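As background, the digital LS and linear MMSE estimators that the analog MiLAC implementation is benchmarked against take the standard textbook form for a pilot model $\mathbf{y} = \mathbf{X}\mathbf{h} + \mathbf{n}$; the MiLAC-specific training precoders and combiners of the proposed schemes are not shown here.

% Textbook LS and linear MMSE channel estimates for y = X h + n, with X the known
% pilot matrix, R_h the (zero-mean) channel covariance, and sigma^2 the noise power.
\begin{align}
  \hat{\mathbf{h}}_{\mathrm{LS}}
    &= \left(\mathbf{X}^{\mathsf{H}}\mathbf{X}\right)^{-1}\mathbf{X}^{\mathsf{H}}\mathbf{y},\\
  \hat{\mathbf{h}}_{\mathrm{MMSE}}
    &= \mathbf{R}_{h}\mathbf{X}^{\mathsf{H}}
       \left(\mathbf{X}\mathbf{R}_{h}\mathbf{X}^{\mathsf{H}} + \sigma^{2}\mathbf{I}\right)^{-1}\mathbf{y}.
\end{align}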
The shift from traditional synchronous generator (SG) based power generation to generation driven by power electronic devices introduces new dynamic phenomena and considerations for the control of large-scale power systems. In this paper, two aspects of all-inverter power systems are investigated: greater localization of system disturbance response and greater system controllability. The prevalence of both of these aspects is shown to be related to the lower effective inertia of inverters and has implications for future wide-area control system design. Greater disturbance localization implies the need for feedback measurement placement close to generator nodes to properly reject disturbances in the system, while increased system controllability implies that wide-area control systems should preferentially actuate inverters to most efficiently control the system. This investigation utilizes reduced-order linear time-invariant models of both SGs and inverters that are shown to capture the frequency dynamics of interest in both all-SG and all-inverter systems, allowing for the efficient use of both frequency and time domain analysis methods.
Speech tokenizers serve as the cornerstone of discrete Speech Large Language Models (Speech LLMs). Existing tokenizers either prioritize semantic encoding, fuse semantic content with acoustic style inseparably, or achieve only incomplete semantic-acoustic disentanglement. To achieve better disentanglement, we propose DSA-Tokenizer, which explicitly disentangles speech into discrete semantic and acoustic tokens via distinct optimization constraints. Specifically, semantic tokens are supervised by ASR to capture linguistic content, while acoustic tokens focus on mel-spectrogram restoration to encode style. To eliminate rigid length constraints between the two sequences, we introduce a hierarchical Flow-Matching decoder that further improves speech generation quality. Furthermore, we employ a joint reconstruction-recombination training strategy to enforce this separation. DSA-Tokenizer enables high-fidelity reconstruction and flexible recombination through robust disentanglement, facilitating controllable generation in speech LLMs. Our analysis highlights disentangled tokenization as a pivotal paradigm for future speech modeling. Audio samples are available at this https URL. The code and model will be made publicly available after the paper has been accepted.
Neuromorphic vision has made significant progress in recent years, thanks to the natural match between spiking neural networks and event data in terms of biological inspiration, energy savings, latency, and memory use for dynamic visual data processing. However, optimising its energy requirements remains a challenge within the community, especially for embedded applications. One solution may reside in preprocessing events to reduce data quantity, thus lowering the energy cost on neuromorphic hardware, which is proportional to the number of synaptic operations. To this end, we extend an end-to-end neuromorphic line detection mechanism to introduce line-based event data preprocessing. Our results on three benchmark event-based datasets demonstrate that preprocessing leads to an advantageous trade-off between energy consumption and classification performance. Depending on the line-based preprocessing strategy and the complexity of the classification task, we show that one can maintain or increase the classification accuracy while significantly reducing the theoretical energy consumption. Our approach systematically leads to a significant improvement in neuromorphic classification efficiency, laying the groundwork for a more frugal neuromorphic computer vision thanks to event preprocessing.
Traditional speech systems typically rely on separate, task-specific models for text-to-speech (TTS), automatic speech recognition (ASR), and voice conversion (VC), resulting in fragmented pipelines that limit scalability, efficiency, and cross-task generalization. In this paper, we present General-Purpose Audio (GPA), a unified audio foundation model that integrates multiple core speech tasks within a single large language model (LLM) architecture. GPA operates on a shared discrete audio token space and supports instruction-driven task induction, enabling a single autoregressive model to flexibly perform TTS, ASR, and VC without architectural modifications. This unified design combines a fully autoregressive formulation over discrete speech tokens, joint multi-task training across speech domains, and a scalable inference pipeline that achieves high concurrency and throughput. The resulting model family supports efficient multi-scale deployment, including a lightweight 0.3B-parameter variant optimized for edge and resource-constrained environments. Together, these design choices demonstrate that a unified autoregressive architecture can achieve competitive performance across diverse speech tasks while remaining viable for low-latency, practical deployment.
If we consider human manipulation, it is clear that contact-rich manipulation (CRM), the ability to use any surface of the manipulator to make contact with objects, can be far more efficient and natural than relying solely on end-effectors (i.e., fingertips). However, state-of-the-art model-based planners for CRM are still focused on feasibility rather than optimality, limiting their ability to fully exploit CRM's advantages. We introduce a new paradigm that computes approximately optimal manipulator plans. This approach has two phases. Offline, we construct a graph of mutual reachable sets, where each set contains all object orientations reachable from a starting object orientation and grasp. Online, we plan over this graph, effectively computing and sequencing local plans for globally optimized motion. On a challenging, representative contact-rich task, our approach outperforms a leading planner, reducing task cost by 61%. It also achieves a 91% success rate across 250 queries and maintains sub-minute query times, ultimately demonstrating that globally optimized contact-rich manipulation is now practical for real-world tasks.
We study d-way balanced allocation, which assigns each incoming job to the least loaded among d randomly chosen servers. While prior work has extensively studied the performance of the basic scheme, there has been less published work on adapting this technique to many aspects of large-scale systems. Based on our experience in building and running planet-scale cloud applications, we extend the understanding of d-way balanced allocation along the following dimensions: (i) Bursts: Events such as breaking news can produce bursts of requests that may temporarily exceed the servicing capacity of the system. Thus, we explore what happens during a burst and how long it takes for the system to recover from such bursts. (ii) Priorities: Production systems need to handle jobs with a mix of priorities (e.g., user-facing requests may be high priority while other requests may be low priority). We extend d-way balanced allocation to handle multiple priorities. (iii) Noise: Production systems are typically distributed, so d-way balanced allocation must work with stale or incorrect information. Thus we explore the impact of noisy information and its interactions with bursts and priorities. We explore the above using both extensive simulations and analytical arguments. Specifically we show, (i) using simulations, that d-way balanced allocation quickly recovers from bursts and can gracefully handle priorities and noise; and (ii) that analysis of the underlying generative models complements our simulations and provides insight into our simulation results.
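A quick simulation of the basic d-way balanced allocation (power-of-d-choices) scheme described above; parameters are illustrative and the bursts/priorities/noise extensions of the paper are not modeled.

```python
# Each job probes d random servers and joins the least loaded one; report the max load.
import random

def simulate(n_servers: int, n_jobs: int, d: int, seed: int = 0) -> int:
    rng = random.Random(seed)
    load = [0] * n_servers
    for _ in range(n_jobs):
        candidates = rng.sample(range(n_servers), d)       # d distinct random servers
        target = min(candidates, key=lambda s: load[s])    # pick the least loaded
        load[target] += 1
    return max(load)                                       # max load across servers

for d in (1, 2, 4):
    print(f"d={d}: max load = {simulate(n_servers=1000, n_jobs=1000, d=d)}")
# d=1 is purely random assignment; already d=2 sharply reduces the maximum load
```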
The quantum Fourier transform and quantum wavelet transform have been cornerstones of quantum information processing. However, for non-stationary signals and anomaly detection, the Hilbert transform can be a more powerful tool, yet no prior work has provided efficient quantum implementations for the discrete Hilbert transform. This letter presents a novel construction for a quantum Hilbert transform in polylogarithmic size and logarithmic depth for a signal of length $N$, exponentially fewer operations than classical algorithms for the same mapping. We generalize this algorithm to create any $d$-dimensional Hilbert transform in depth $O(d\log N)$. Simulations demonstrate effectiveness for tasks such as power systems control and image processing, with exact agreement with classical results.
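For reference, the classical discrete Hilbert transform that the quantum construction targets can be computed via the FFT in $O(N \log N)$ operations. The sketch below is the standard analytic-signal construction, not the quantum circuit itself; it only shows the mapping that the quantum algorithm is reported to realize with polylogarithmic resources.

```python
import numpy as np

def discrete_hilbert(x):
    """Classical discrete Hilbert transform via the FFT.
    Builds the analytic signal and returns its imaginary part,
    which is the Hilbert transform of the input sequence."""
    x = np.asarray(x, dtype=float)
    N = x.size
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
        h[1:N // 2] = 2.0
    else:
        h[1:(N + 1) // 2] = 2.0
    analytic = np.fft.ifft(X * h)
    return analytic.imag
```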
Estimating brain age (BA) from T1-weighted magnetic resonance images (MRIs) provides a useful approach to map the anatomic features of brain senescence. Whereas global BA (GBA) summarizes overall brain health, local BA (LBA) can reveal spatially localized patterns of aging. Although previous studies have examined anatomical contributors to GBA, no framework has been established to compute LBA using cortical morphology. To address this gap, we introduce a novel graph neural network (GNN) that uses morphometric features (cortical thickness, curvature, surface area, gray/white matter intensity ratio, and sulcal depth) to estimate LBA across the cortical surface at high spatial resolution (mean inter-vertex distance = 1.37 mm). Trained on cortical surface meshes extracted from the MRIs of cognitively normal adults (N = 14,250), our GNN identifies prefrontal and parietal association cortices as early sites of morphometric aging, in concordance with biological theories of brain aging. Feature comparison using integrated gradients reveals that morphological aging is driven primarily by changes in surface area (gyral crowns and highly folded regions) and cortical thickness (occipital lobes), with additional contributions from gray/white matter intensity ratio (frontal lobes and sulcal troughs) and curvature (sulcal troughs). In Alzheimer's disease (AD), as expected, the model identifies widespread, excessive morphological aging in parahippocampal gyri and related temporal structures. Significant associations are found between regional LBA gaps and neuropsychological measures descriptive of AD-related cognitive impairment, suggesting an intimate relationship between morphological cortical aging and cognitive decline. These results highlight the ability of GNN-derived gero-morphometry to provide insights into local brain aging.
The increased connectivity of industrial networks has led to a surge in cyberattacks, emphasizing the need for cybersecurity measures tailored to the specific requirements of industrial systems. Modern Industry 4.0 technologies, such as OPC UA, offer enhanced resilience against these threats. However, widespread adoption remains limited due to long installation times, proprietary technology, restricted flexibility, and formal process requirements (e.g. safety certifications). Consequently, many systems do not yet implement these technologies, or only partially. This leads to the challenge of dealing with so-called brownfield systems, which are often placed in isolated security zones to mitigate risks. However, the need for data exchange between secure and insecure zones persists. This paper reviews existing solutions to address this challenge by analysing their approaches, advantages, and limitations. Building on these insights, we identify three key concepts, evaluate their suitability and compatibility, and ultimately introduce the SigmaServer, a novel TCP-level aggregation method. The developed proof-of-principle implementation is evaluated in an operational technology (OT) testbed, demonstrating its applicability and effectiveness in bridging secure and insecure zones.
Quantum architecture search (QAS) has emerged to automate the design of high-performance quantum circuits under specific tasks and hardware constraints. We propose a noise-aware quantum architecture search (NA-QAS) framework based on variational quantum circuit design. By incorporating a noise model into the training of parameterized quantum circuits (PQCs), the proposed framework identifies noise-robust architectures. We introduce a hybrid Hamiltonian $\varepsilon$-greedy strategy to optimize evaluation costs and circumvent local optima. Furthermore, an enhanced variable-depth NSGA-II algorithm is employed to navigate the vast search space, enabling an automated trade-off between architectural expressibility and quantum hardware overhead. The effectiveness of the framework is validated through binary classification and iris multi-classification tasks under noisy conditions. Compared to existing approaches, our framework can search for quantum architectures with superior performance and greater resource efficiency under noisy conditions.
Restoring critical loads after extreme events demands adaptive control to maintain distribution-grid resilience, yet uncertainty in renewable generation, limited dispatchable resources, and nonlinear dynamics make effective restoration difficult. Reinforcement learning (RL) can optimize sequential decisions under uncertainty, but standard RL often generalizes poorly and requires extensive retraining for new outage configurations or generation patterns. We propose a meta-guided gradient-free RL (MGF-RL) framework that learns a transferable initialization from historical outage experiences and rapidly adapts to unseen scenarios with minimal task-specific tuning. MGF-RL couples first-order meta-learning with evolutionary strategies, enabling scalable policy search without gradient computation while accommodating nonlinear, constrained distribution-system dynamics. Experiments on IEEE 13-bus and IEEE 123-bus test systems show that MGF-RL outperforms standard RL, MAML-based meta-RL, and model predictive control across reliability, restoration speed, and adaptation efficiency under renewable forecast errors. MGF-RL generalizes to unseen outages and renewable patterns while requiring substantially fewer fine-tuning episodes than conventional RL. We also provide sublinear regret bounds that relate adaptation efficiency to task similarity and environmental variation, supporting the empirical gains and motivating MGF-RL for real-time load restoration in renewable-rich distribution grids.
The active impedance is a fundamental parameter for characterizing the behavior of large, uniform phased array antennas. However, its conventional calculation via the mutual impedance matrix (or the scattering matrix) offers limited physical intuition and can be computationally intensive. This paper presents a novel derivation of the active impedance directly from the radiated beam pattern of such arrays. This approach maps the scan-angle variation of the active impedance directly to the intrinsic angular variation of the beam, providing a more intuitive physical interpretation. The theoretical derivation is straightforward and rigorous. The validity of the proposed equation is conclusively confirmed through full-wave simulations of a prototype array. This work establishes a new and more intuitive framework for understanding, analyzing, and accurately measuring scan-dependent variations in phased arrays, one of the main challenges in modern phased array design. Consequently, this novel formalism is expected to expedite and simplify the overall design and optimization process for next-generation, large-scale uniform phased arrays.
With 5G deployment and the evolution toward 6G, mobile networks must make decisions in highly dynamic environments under strict latency, energy, and spectrum constraints. Achieving this goal, however, depends on prior knowledge of spatial-temporal variations in wireless channels and traffic demands. This motivates a joint, site-specific representation of radio propagation and user demand that is queryable at low online overhead. In this work, we propose the perception embedding map (PEM), a localized framework that embeds fine-grained channel statistics together with grid-level spatial-temporal traffic patterns over a base station's coverage. PEM is built from standard-compliant measurements -- such as measurement report and scheduling/quality-of-service logs -- so it can be deployed and maintained at scale with low cost. Integrated into PEM, this joint knowledge supports enhanced environment-aware optimization across PHY, MAC, and network layers while substantially reducing training overhead and signaling. Compared with existing site-specific channel maps and digital-twin replicas, PEM distinctively emphasizes (i) joint channel-traffic embedding, which is essential for network optimization, and (ii) practical construction using standard measurements, enabling network autonomy while striking a favorable fidelity-cost balance.
Recent end-to-end spoken dialogue systems leverage speech tokenizers and neural audio codecs to enable LLMs to operate directly on discrete speech representations. However, these models often exhibit limited speaker identity preservation, hindering personalized voice interaction. In this work, we present Chroma 1.0, the first open-source, real-time, end-to-end spoken dialogue model that achieves both low-latency interaction and high-fidelity personalized voice cloning. Chroma achieves sub-second end-to-end latency through an interleaved text-audio token schedule (1:2) that supports streaming generation, while maintaining high-quality personalized voice synthesis across multi-turn conversations. Our experimental results demonstrate that Chroma achieves a 10.96% relative improvement in speaker similarity over the human baseline, with a Real-Time Factor (RTF) of 0.43, while maintaining strong reasoning and dialogue capabilities. Our code and models are publicly available at this https URL and this https URL.
We study the impact of imperfect line-of-sight (LoS) phase tracking on the performance of cell-free massive MIMO networks. Unlike prior works that assume perfectly known or completely unknown phases, we consider a realistic regime where LoS phases are estimated with residual uncertainty due to hardware impairments, mobility, and synchronization errors. To this end, we propose a Rician fading model where LoS components are rotated by imperfect phase estimates and attenuated by a deterministic phase-error penalty factor. We derive a linear MMSE channel estimator that captures statistical phase errors and unifies prior results, reducing to the Bayesian MMSE estimator with perfect phase knowledge and to a zero-mean model in the absence of phase knowledge. To address the non-Gaussian setting, we introduce a virtual uplink model that preserves second-order statistics of channel estimation, enabling the derivation of tractable centralized and distributed MMSE beamformers. To ensure fair assessment of the network performance, we apply these beamformers to the true uplink model and compute the spectral efficiency bounds available in the literature. Numerical results show that our framework bridges idealized assumptions and practical tracking limitations, providing rigorous performance benchmarks and design insights for 6G cell-free networks.
We consider the problem of adaptively monitoring a wildfire front using a mobile agent (e.g., a drone), whose trajectory determines where sensor data is collected and thus influences the accuracy of fire propagation estimation. This is a challenging problem, as the stochastic nature of wildfire evolution requires the seamless integration of sensing, estimation, and control, often treated separately in existing methods. State-of-the-art methods either impose linear-Gaussian assumptions to establish optimality or rely on approximations and heuristics, often without providing explicit performance guarantees. To address these limitations, we formulate the fire front monitoring task as a stochastic optimal control problem that integrates sensing, estimation, and control. We derive an optimal recursive Bayesian estimator for a class of stochastic nonlinear elliptical-growth fire front models. Subsequently, we transform the resulting nonlinear stochastic control problem into a finite-horizon Markov decision process and design an information-seeking predictive control law obtained via a lower confidence bound-based adaptive search algorithm with asymptotic convergence to the optimal policy.
Digital twins are virtual replicas of physical entities and are poised to transform personalized medicine through the real-time simulation and prediction of human physiology. Translating this paradigm from engineering to biomedicine requires overcoming profound challenges, including anatomical variability, multi-scale biological processes, and the integration of multi-physics phenomena. This survey systematically reviews methodologies for building digital twins of human organs, structured around a pipeline decoupled into anatomical twinning (capturing patient-specific geometry and structure) and functional twinning (simulating multi-scale physiology from cellular to organ-level function). We categorize approaches both by organ-specific properties and by technical paradigm, with particular emphasis on multi-scale and multi-physics integration. A key focus is the role of artificial intelligence (AI), especially physics-informed AI, in enhancing model fidelity, scalability, and personalization. Furthermore, we discuss the critical challenges of clinical validation and translational pathways. This study not only charts a roadmap for overcoming current bottlenecks in single-organ twins but also outlines the promising, albeit ambitious, future of interconnected multi-organ digital twins for whole-body precision healthcare.
Collision avoidance in heterogeneous fleets of uncrewed vessels is challenging because the decision-making processes and controllers often differ between platforms, and it is further complicated by the limitations on sharing trajectories and control values in real-time. This paper presents a pragmatic approach that addresses these issues by adding a control filter on each autonomous vehicle that assumes worst-case behavior from other contacts, including crewed vessels. This distributed safety control filter is developed using control barrier function (CBF) theory and the application is clearly described to ensure explainability of these safety-critical methods. This work compares the worst-case CBF approach with a Collision Regulations (COLREGS) behavior-based approach in simulated encounters. Real-world experiments with three different uncrewed vessels and a human-operated vessel were performed to confirm the approach is effective across a range of platforms and is robust to uncooperative behavior from human operators. Results show that combining both CBF methods and COLREGS behaviors achieves the best safety and efficiency.
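To make the control-filter idea concrete, the sketch below shows a single-constraint CBF safety filter in closed form for a simplified single-integrator vessel model. The obstacle geometry and gain are illustrative, and this is not the paper's worst-case multi-contact formulation.

```python
import numpy as np

def cbf_safety_filter(x, u_nom, obstacle, radius, alpha=1.0):
    """Minimal CBF safety filter for a single integrator x_dot = u.
    The barrier h(x) = ||x - obstacle||^2 - radius^2 must satisfy
    h_dot >= -alpha * h; the filter minimally modifies u_nom to enforce it."""
    h = np.dot(x - obstacle, x - obstacle) - radius ** 2
    grad_h = 2.0 * (x - obstacle)
    slack = grad_h @ u_nom + alpha * h
    if slack >= 0.0:              # nominal command is already safe
        return u_nom
    # Closed-form solution of the single-constraint CBF quadratic program.
    return u_nom - slack / (grad_h @ grad_h) * grad_h

# Example: steer toward the origin while avoiding a disc of radius 1 centered at (2, 0).
x = np.array([3.5, 0.2])
u_nominal = -0.5 * x
print(cbf_safety_filter(x, u_nominal, obstacle=np.array([2.0, 0.0]), radius=1.0))
```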
Energy efficiency has become an integral aspect of modern computing infrastructure design, impacting the performance, cost, scalability, and durability of production systems. The incorporation of power actuation and sensing capabilities in CPU designs is indicative of this, enabling the deployment of system software that can actively monitor and adjust energy consumption and performance at runtime. While reinforcement learning (RL) would seem ideal for the design of such energy efficiency control systems, online training presents challenges ranging from the lack of proper models for setting up an adequate simulated environment, to perturbation (noise) and reliability issues, if training is deployed on a live system. In this paper we discuss the use of offline reinforcement learning as an alternative approach for the design of an autonomous CPU power controller, with the goal of improving the energy efficiency of parallel applications at runtime without unduly impacting their performance. Offline RL sidesteps the issues incurred by online RL training by leveraging a dataset of state transitions collected from arbitrary policies prior to training. Our methodology applies offline RL to a gray-box approach to energy efficiency, combining online application-agnostic performance data (e.g., heartbeats) and hardware performance counters to ensure that the scientific objectives are met with limited performance degradation. Evaluating our method on a variety of compute-bound and memory-bound benchmarks and controlling power on a live system through Intel's Running Average Power Limit, we demonstrate that such an offline-trained agent can substantially reduce energy consumption at a tolerable performance degradation cost.
The development of robust learning-based control algorithms for unstable systems requires high-quality, real-world data, yet access to specialized robotic hardware remains a significant barrier for many researchers. This paper introduces a comprehensive dynamics dataset for the Mini Wheelbot, an open-source, quasi-symmetric balancing reaction wheel unicycle. The dataset provides 1 kHz synchronized data encompassing all onboard sensor readings, state estimates, ground-truth poses from a motion capture system, and third-person video logs. To ensure data diversity, we include experiments across multiple hardware instances and surfaces using various control paradigms, including pseudo-random binary excitation, nonlinear model predictive control, and reinforcement learning agents. We include several example applications in dynamics model learning, state estimation, and time-series classification to illustrate common robotics algorithms that can be benchmarked on our dataset.
This paper investigates how end-to-end (E2E) channel autoencoders (AEs) can achieve energy-efficient wideband communications by leveraging Walsh-Hadamard (WH) interleaved converters. WH interleaving enables high sampling rate analog-digital conversion with reduced power consumption using an analog WH transformation. We demonstrate that E2E-trained neural coded modulation can transparently adapt to the WH-transceiver hardware without requiring algorithmic redesign. Focusing on the short block length regime, we train WH-domain AEs and benchmark them against standard neural and conventional baselines, including 5G Polar codes. We quantify the system-level energy tradeoffs among baseband compute, channel signal-to-noise ratio (SNR), and analog converter power. Our analysis shows that the proposed WH-AE system can approach conventional Polar code SNR performance within 0.14 dB while consuming comparable or lower system power. Compared to the best neural baseline, WH-AE achieves, on average, 29% higher energy efficiency (in bit/J) for the same reliability. These findings establish WH-domain learning as a viable path to energy-efficient, high-throughput wideband communications by explicitly balancing compute complexity, SNR, and analog power consumption.
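For reference, the Walsh-Hadamard transform underlying the interleaved converters admits a simple butterfly implementation in $O(N \log N)$ operations. The sketch below is a generic digital version; in the system above the transform is realized in the analog domain.

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform of a length-2^k vector, illustrating
    the butterfly structure of the WH transformation (unnormalized)."""
    x = np.array(x, dtype=float)
    n = x.size
    assert n and (n & (n - 1)) == 0, "length must be a power of two"
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x  # divide by sqrt(n) for an orthonormal transform

print(fwht([1, 0, 1, 0, 0, 1, 1, 0]))
```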
We address discrete-time consensus on the Euclidean unit sphere. For this purpose, we consider a distributed algorithm comprising the iterative projection of a conical combination of neighboring states. Neighborhoods are represented by a strongly connected directed graph, and the conical combinations are represented by a (non-negative) weight matrix with a zero structure corresponding to the graph. A first result mirrors earlier results for gradient flows. Under the assumptions that each diagonal element of the weight matrix is more than $\sqrt{2}$ larger than the sum of the other elements in the corresponding row, the sphere dimension is greater than or equal to 2, and the graph, as well as the weight matrix, is symmetric, we show that the algorithm comprises gradient ascent, stable fixed points are consensus points, and the set of initial points for which the algorithm converges to a non-consensus fixed point has measure zero. The second result is that for the unit circle and a strongly connected graph, or for any unit sphere with dimension greater than or equal to $1$ and the complete graph, only for a measure-zero set of weight matrices does the algorithm have fixed points that are neither consensus nor antipodal configurations.
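A minimal numerical sketch of the iteration just described is given below; the three-agent example and the symmetric, diagonally dominant weight matrix are illustrative choices, not taken from the paper.

```python
import numpy as np

def sphere_consensus_step(X, W):
    """One iteration: each agent forms a conical (non-negative) combination of
    neighboring states and projects it back onto the unit sphere by normalization.
    X: (n_agents, dim) array of unit vectors; W: non-negative weight matrix whose
    zero pattern matches the graph."""
    Y = W @ X                                    # conical combinations
    norms = np.linalg.norm(Y, axis=1, keepdims=True)
    return Y / norms                             # projection onto the sphere

# Example: three agents on the circle converging to a common direction.
X = np.array([[1.0, 0.0], [0.0, 1.0], [np.sqrt(0.5), np.sqrt(0.5)]])
W = np.array([[4.0, 1.0, 1.0], [1.0, 4.0, 1.0], [1.0, 1.0, 4.0]])
for _ in range(50):
    X = sphere_consensus_step(X, W)
print(X)  # rows approach a common unit vector (consensus)
```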
An important part of the information theory folklore has been about the output statistics of codes that achieve the capacity and how the empirical distributions compare to the output distributions induced by the optimal input in the channel capacity problem. Results for a variety of such empirical output distributions of good codes are known in the literature, such as comparisons of the output distribution of the code to the optimal output distribution in the vanishing and non-vanishing error probability cases. Motivated by these, we aim to achieve similar results for quantum codes used for classical communication, that is, the setting in which classical messages are communicated through quantum codewords that pass through a noisy quantum channel. We first show the uniqueness of the optimal output distribution, so that we can speak concretely about it. Then, we extend the vanishing error probability results to the quantum case, using techniques that are close in spirit to the classical case. We also extend non-vanishing error probability results to the quantum case for block codes, using second-order converses for such codes based on hypercontractivity results for the quantum generalized depolarizing semigroups.
Progress in Type 1 Diabetes (T1D) algorithm development is limited by the fragmentation and lack of standardization across existing T1D management datasets. Current datasets differ substantially in structure and are time-consuming to access and process, which impedes data integration and reduces the comparability and generalizability of algorithmic developments. This work aims to establish a unified and accessible data resource for T1D algorithm development. Multiple publicly available T1D datasets were consolidated into a unified resource, termed the MetaboNet dataset. Inclusion required the availability of both continuous glucose monitoring (CGM) data and corresponding insulin pump dosing records. Additionally, auxiliary information such as reported carbohydrate intake and physical activity was retained when present. The MetaboNet dataset comprises 3135 subjects and 1228 patient-years of overlapping CGM and insulin data, making it substantially larger than existing standalone benchmark datasets. The resource is distributed as a fully public subset available for immediate download at this https URL, and as a Data Use Agreement (DUA)-restricted subset accessible through the respective application processes. For the datasets in the latter subset, processing pipelines are provided to automatically convert the data into the standardized MetaboNet format. A consolidated public dataset for T1D research is presented, and the access pathways for both its unrestricted and DUA-governed components are described. The resulting dataset covers a broad range of glycemic profiles and demographics and thus can yield more generalizable algorithmic performance than individual datasets.
Visual Simultaneous Localisation and Mapping (VSLAM) is a well-known problem in robotics with a large range of applications. This paper presents a novel approach to VSLAM by lifting the observer design to a novel Lie group on which the system output is equivariant. The perspective gained from this analysis facilitates the design of a non-linear observer with almost semi-globally asymptotically stable error dynamics. Simulations are provided to illustrate the behaviour of the proposed observer and experiments on data gathered using a fixed-wing UAV flying outdoors demonstrate its performance.
The kinematics of many systems encountered in robotics, mechatronics, and avionics are naturally posed on homogeneous spaces; that is, their state lies in a smooth manifold equipped with a transitive Lie group symmetry. This paper proposes a novel filter, the Equivariant Filter (EqF), by posing the observer state on the symmetry group, linearising global error dynamics derived from the equivariance of the system, and applying extended Kalman filter design principles. We show that equivariance of the system output can be exploited to reduce linearisation error and improve filter performance. Simulation experiments of an example application show that the EqF significantly outperforms the extended Kalman filter and that the reduced linearisation error leads to a clear improvement in performance.
Inertial Velocity-Aided Attitude (VAA), the estimation of the velocity and attitude of a vehicle using gyroscope, accelerometer, and inertial-frame velocity (e.g. GPS velocity) measurements, is an important problem in the control of Remotely Piloted Aerial Systems (RPAS). Existing solutions provide limited stability guarantees, relying on local linearisation, high gain design, or assuming specific trajectories such as constant acceleration of the vehicle. This paper proposes a novel non-linear observer for inertial VAA with almost globally asymptotically and locally exponentially stable error dynamics. The approach exploits Lie group symmetries of the system dynamics to construct a globally valid correction term. To the authors' knowledge, this construction is the first observer to provide almost global convergence for the inertial VAA problem. The observer performance is verified in simulation, where it is shown that the estimation error converges to zero even with an extremely poor initial condition.
The rise of HDR-WCG display devices has highlighted the need to convert SDRTV to HDRTV, as most video sources are still in SDR. Existing methods primarily focus on designing neural networks to learn a single-style mapping from SDRTV to HDRTV. However, the limited information in SDRTV and the diversity of styles in real-world conversions render this process an ill-posed problem, thereby constraining the performance and generalization of these methods. Inspired by generative approaches, we propose a novel method for SDRTV to HDRTV conversion guided by real HDRTV priors. Despite the limited information in SDRTV, introducing real HDRTV as reference priors significantly constrains the solution space of the originally high-dimensional ill-posed problem. This shift transforms the task from solving an unreferenced prediction problem to making a referenced selection, thereby markedly enhancing the accuracy and reliability of the conversion process. Specifically, our approach comprises two stages: the first stage employs a Vector Quantized Generative Adversarial Network to capture HDRTV priors, while the second stage matches these priors to the input SDRTV content to recover realistic HDRTV outputs. We evaluate our method on public datasets, demonstrating its effectiveness with significant improvements in both objective and subjective metrics across real and synthetic datasets.
The pursuit of carbon-neutral wireless networks is increasingly constrained by the escalating energy demands of deep learning-based signal processing. Here, we introduce SpikACom (Spiking Adaptive Communications), a neuromorphic computing framework that synergizes brain-inspired spiking neural networks (SNNs) with wireless signal processing to deliver sustainable intelligence. SpikACom advances the paradigm shift from energy-intensive, continuous-valued processing to event-driven sparse computation. Moreover, it supports continual learning in dynamic wireless environments via a dual-scale mechanism that integrates channel distribution-aware context modulation with a synaptic consolidation rule using SNN-specific statistics, mitigating catastrophic forgetting. Evaluations across critical wireless communication tasks, including semantic communication, multiple-input multiple-output (MIMO) beamforming, and channel estimation demonstrate that SpikACom matches full-precision deep learning baselines while achieving an order-of-magnitude improvement in computational energy efficiency. Our results position SNNs as a promising pathway toward green wireless intelligence, providing evidence that neuromorphic computing can empower the sustainability of modern digital systems.
Effective epidemic modeling is essential for managing public health crises, requiring robust methods to predict disease spread and optimize resource allocation. This study introduces a novel deep learning framework that advances time series forecasting for infectious diseases, with its application to COVID-19 data as a critical case study. Our hybrid approach integrates Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) models to capture spatial and temporal dynamics of disease transmission across diverse regions. The CNN extracts spatial features from raw epidemiological data, while the LSTM models temporal patterns, yielding precise and adaptable predictions. To maximize performance, we employ a hybrid optimization strategy combining the Whale Optimization Algorithm (WOA) and Gray Wolf Optimization (GWO) to fine-tune hyperparameters such as learning rates, batch sizes, and training epochs, enhancing model efficiency and accuracy. Applied to COVID-19 case data from 24 countries across six continents, our method outperforms established benchmarks, including ARIMA and standalone LSTM models, with statistically significant gains in predictive accuracy (e.g., reduced RMSE). This framework demonstrates its potential as a versatile method for forecasting epidemic trends, offering insights for resource planning and decision making in both historical contexts, like the COVID-19 pandemic, and future outbreaks.
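A hedged sketch of such a hybrid CNN-LSTM forecaster is shown below. The layer sizes, window length, and single-feature input are illustrative rather than the authors' tuned configuration, and the WOA/GWO hyperparameter search is omitted.

```python
import torch
import torch.nn as nn

class CNNLSTMForecaster(nn.Module):
    """A 1D convolution extracts local features from a window of past case
    counts, and an LSTM models their temporal evolution for one-step forecasts."""
    def __init__(self, n_features=1, conv_channels=32, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, conv_channels, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(conv_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):  # x: (batch, seq_len, n_features)
        z = self.conv(x.transpose(1, 2)).transpose(1, 2)  # (batch, seq_len, channels)
        out, _ = self.lstm(z)
        return self.head(out[:, -1])                      # one-step-ahead forecast

# Dummy batch of 7-day windows of daily case counts.
model = CNNLSTMForecaster()
window = torch.randn(8, 7, 1)
print(model(window).shape)  # torch.Size([8, 1])
```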
We develop delay-compensating feedback laws for linear switched systems with time-dependent switching. Because the future values of the switching signal, which are needed for constructing an exact predictor-feedback law, may be unavailable at the current time, the key design challenge is how to construct a proper predictor state. We resolve this challenge by constructing two alternative, average predictor-based feedback laws. The first is viewed as a predictor-feedback law for a particular average system, properly modified to provide exact state predictions over a horizon that depends on a minimum dwell time of the switching signal (when it is available). The second is, essentially, a modification of an average of predictor feedbacks, each one corresponding to a fixed-mode predictor-feedback law. We establish that, under the control laws introduced, the closed-loop systems are (uniformly) exponentially stable, provided that the differences among the system matrices and among the (nominal stabilizing) controller gains are sufficiently small, with a size that is inversely proportional to the delay length. Since no restriction is imposed on the delay, such a limitation is inherent to the problem considered (in which the future switching signal values are unavailable), and thus it cannot be removed. The stability proof relies on multiple Lyapunov functionals constructed via backstepping and on the derivation of solution estimates for quantifying the difference between average and exact predictor states. We present consistent numerical simulation results, which illustrate the necessity of employing the average predictor-based laws and demonstrate the performance improvement when knowledge of a minimum dwell time is properly utilized for improving state prediction accuracy.
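As background, the exact predictor-feedback law for a single, non-switched mode can be sketched as follows; the average predictor-based laws above replace this construction when the future switching signal is unknown. The system matrices, gain, delay, and discretization are illustrative choices, not the paper's examples.

```python
import numpy as np
from scipy.linalg import expm

def simulate_predictor_feedback(A, B, K, tau, x0, T=8.0, dt=0.005):
    """Nominal predictor feedback u(t) = K x(t + tau) for x_dot = A x + B u(t - tau).
    Predictor: P(t) = e^{A tau} x(t) + int_{t - tau}^{t} e^{A (t - s)} B u(s) ds."""
    n_delay = int(round(tau / dt))
    steps = int(round(T / dt))
    x = np.array(x0, dtype=float)
    u_hist = np.zeros(n_delay + steps)     # u_hist[m] holds u((m - n_delay) * dt)
    eAtau = expm(A * tau)
    # Precompute the integral kernel e^{A (t - s)} B on the discretization grid.
    kernels = [expm(A * (n_delay - j) * dt) @ B.flatten() * dt for j in range(n_delay)]
    traj = []
    for k in range(steps):
        integral = sum(kernels[j] * u_hist[k + j] for j in range(n_delay))
        u = (K @ (eAtau @ x + integral)).item()          # feedback on the predicted state
        u_hist[k + n_delay] = u
        x = x + dt * (A @ x + B.flatten() * u_hist[k])   # the plant sees u(t - tau)
        traj.append(x.copy())
    return np.array(traj)

# Double integrator with a 0.5 s input delay and a nominal stabilizing gain.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[-1.0, -1.5]])
print(simulate_predictor_feedback(A, B, K, tau=0.5, x0=[1.0, 0.0])[-1])  # near the origin
```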
This paper investigates a heterogeneous multi-vehicle, multi-modal sensing (H-MVMM) aided online precoding problem. The proposed H-MVMM scheme utilizes a vertical federated learning (VFL) framework to minimize pilot sequence length and optimize the sum rate. This offers a promising solution for reducing latency in frequency division duplexing systems. To achieve this, three preprocessing modules are designed to transform raw sensory data into informative representations relevant to precoding. The approach effectively addresses local data heterogeneity arising from diverse on-board sensor configurations through a well-structured VFL training procedure. Additionally, a label-free online model updating strategy is introduced, enabling the H-MVMM scheme to adapt its weights flexibly. This strategy features a pseudo downlink channel state information label simulator (PCSI-Simulator), which is trained using a semi-supervised learning (SSL) approach alongside an online loss function. Numerical results show that the proposed method can closely approximate the performance of traditional optimization techniques with perfect channel state information, achieving a significant 90.6\% reduction in pilot sequence length.
Optimal sensor placement is essential for state estimation and effective network monitoring. As known in the literature, this problem becomes particularly challenging in large-scale undirected or bidirected cyclic networks with parametric uncertainties, such as water distribution networks (WDNs), where pipe resistance and demand patterns are often unknown. Motivated by the challenges of cycles, parametric uncertainties, and scalability, this paper proposes a sensor placement algorithm that guarantees structural observability for cyclic and acyclic networks with parametric uncertainties. By leveraging a graph-based strategy, the proposed method efficiently addresses the computational complexities of large-scale networks. To demonstrate the algorithm's effectiveness, we apply it to several EPANET benchmark WDNs. Most notably, the developed algorithm solves the sensor placement problem with guaranteed structured observability for the L-town WDN with 1694 nodes and 124 cycles in under 0.1 seconds.
The increasing integration of renewable energy sources has introduced complex dynamic behavior in power systems that challenges the adequacy of traditional continuous-time modeling approaches. These developments call for modeling frameworks that can capture the intricate interplay between continuous dynamics and discrete events characterizing modern grid operations. Hybrid dynamical systems offer a rigorous foundation for representing such mixed dynamics and have emerged as a valuable tool in power system analysis. Despite their potential, existing studies remain focused on isolated applications or case-specific implementations, offering limited generalizability and guidance for model selection. This paper addresses that gap by providing a comprehensive overview of hybrid modeling approaches relevant to power systems. It critically examines key formalisms, including hybrid automata, switched systems, and piecewise affine models, evaluating their respective strengths, limitations, and suitability across control, stability, and system design tasks. In doing so, the paper identifies open challenges and outlines future research directions to support the systematic application of hybrid methods in renewable-rich, converter-dominated power systems.
This paper proposes a novel approach to design analog electronic circuits that implement Model Predictive Control (MPC) policies for dynamical systems described by affine models. Effective approaches to define a reduced-complexity Explicit MPC form are combined and applied to realize an analog circuit comprising a limited set of low-latency, commercially available components. The practical feasibility and effectiveness of the proposed approach are demonstrated through its application in the design of a novel MPC-based controller for DC-DC Buck converters. We formally analyze the stability of the resulting system and conduct extensive numerical simulations to demonstrate the control system's performance in rejecting line and load disturbances.
Traditional reinforcement learning lacks the ability to provide stability guarantees. More recent algorithms learn Lyapunov functions alongside the control policies to ensure stable learning. However, the current self-learned Lyapunov functions are sample inefficient due to their on-policy nature. This paper introduces a method for learning Lyapunov functions off-policy and incorporates the proposed off-policy Lyapunov function into the Soft Actor Critic and Proximal Policy Optimization algorithms to provide them with a data efficient stability certificate. Simulations of an inverted pendulum and a quadrotor illustrate the improved performance of the two algorithms when endowed with the proposed off-policy Lyapunov function.
We propose a novel symbolic control framework for enforcing temporal logic specifications in Euler-Lagrange systems that addresses the key limitations of traditional abstraction-based approaches. Unlike existing methods that require exact system models and provide guarantees only at discrete sampling instants, our approach relies only on bounds on system parameters and input constraints, and ensures correctness for the full continuous-time trajectory. The framework combines scalable abstraction of a simplified virtual system with a closed-form, model-free controller that guarantees trajectories satisfy the original specification while respecting input bounds and remaining robust to unknown but bounded disturbances. We provide feasibility conditions for the construction of confinement regions and analyze the trade-off between efficiency and conservatism. Case studies on pendulum dynamics, a two-link manipulator, and multi-agent systems, including hardware experiments, demonstrate that the proposed approach ensures both correctness and safety while significantly reducing computational time and memory requirements. These results highlight its scalability and practicality for real-world robotic systems where precise models are unavailable and continuous-time guarantees are essential.
Reconfigurable Intelligent Surface (RIS) technology has emerged as a key enabler for future wireless communications. However, its potential is constrained by the difficulty of acquiring accurate user-to-RIS channel state information (CSI), due to the cascaded channel structure and the high pilot overhead of non-parametric methods. Unlike a passive RIS, where the reflected signal suffers from multiplicative path loss, an active RIS amplifies the signal, improving its practicality in real deployments. In this letter, we propose a parametric channel estimation method tailored for active RISs. The proposed approach integrates an active RIS model with an adaptive Maximum Likelihood Estimator (MLE) to recover the main channel parameters using a minimal number of pilots. To further enhance performance, an adaptive active RIS configuration strategy is employed, which refines the beam direction based on an initial user location estimate. Moreover, an orthogonal angle-pair codebook is used instead of the conventional Discrete Fourier Transform (DFT) codebook, significantly reducing the codebook size and ensuring reliable operation for both far-field and near-field users. Extensive simulations demonstrate that the proposed method achieves near-optimal performance with very few pilots compared to non-parametric approaches. Its performance is also benchmarked against that of a traditional passive RIS under the same total power budget to ensure fairness. Results show that active RIS yields higher spectral efficiency (SE) by eliminating the multiplicative fading inherent in passive RISs and allocating more resources to data transmission.
The massive scale of Wireless Foundation Models (FMs) hinders their real-time deployment on edge devices. This letter moves beyond standard knowledge distillation by introducing a novel Multi-Component Adaptive Knowledge Distillation (MCAKD) framework. Key innovations include a Cross-Attention-Based Knowledge Selection (CA-KS) module that selectively identifies critical features from the teacher model, and an Autonomous Learning-Passive Learning (AL-PL) strategy that balances knowledge transfer with independent learning to achieve high training efficiency at a manageable computational cost. When applied to the WiFo FM, the distilled Tiny-WiFo model, with only 5.5M parameters, achieves a 1.6 ms inference time while retaining over 98% of WiFo's performance and its crucial zero-shot generalization capability, making real-time FM deployment viable.
We study downlink multi-group multicast (MGM) transmission in overloaded millimeter-wave (mmWave) systems, where the number of users exceeds the number of transmit antennas. We first show that, under realistic line-of-sight (LoS)-dominant user geometries, the conventional single-slot MGM scheme suffers from a fundamental collapse of the max-min fairness degrees of freedom (MMF-DoF), regardless of beamforming optimization. Although this collapse can in principle be avoided via aggressive time-division scheduling, it requires excessive time sharing and results in severe throughput loss in overloaded regimes. To address this limitation, we propose a CSIT-free multi-group multicast framework (CF-MGM) that does not rely on instantaneous channel state information at the transmitter (CSIT) and is based on a deterministic multi-slot transmission structure. By exploiting structured precoding and receiver-side combining across multiple slots, the proposed framework eliminates inter-group interference by construction. We show that CF-MGM guarantees a strictly positive MMF-DoF in overloaded LoS mmWave systems, in sharp contrast to the DoF collapse of conventional single-slot MGM. Simulation results demonstrate that CF-MGM significantly outperforms state-of-the-art CSIT-based MGM schemes while substantially reducing signaling overhead.
The rapid development of 6G systems demands advanced technologies to boost network capacity and spectral efficiency, particularly in the context of intelligent reflecting surfaces (IRS)-aided millimeter-wave (mmWave) communications. A key challenge here is obtaining accurate channel state information (CSI), especially with extremely large IRS (XL-IRS), due to near-field propagation, high-dimensional wideband cascaded channels, and the passive nature of the XL-IRS. In addition, most existing CSI acquisition methods fail to leverage the spatio-temporal sparsity inherent in the channel, resulting in suboptimal estimation performance. To address these challenges, we consider an XL-IRS-aided wideband multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) system and propose an efficient channel estimation and tracking (CET) algorithm. Specifically, a unified near-field cascaded channel representation model is presented first, and a hierarchical spatio-temporal sparse prior is then constructed to capture two-dimensional (2D) block sparsity in the polar domain, one-dimensional (1D) clustered sparsity in the angle-delay domain, and temporal correlations across different channel estimation frames. Based on these priors, a tensor-based sparse CET (TS-CET) algorithm is proposed that integrates tensor-based orthogonal matching pursuit (OMP) with particle-based variational Bayesian inference (VBI) and message passing. Simulation results demonstrate that the TS-CET framework significantly improves the estimation accuracy and reduces the pilot overhead as compared to existing benchmark methods.
We study state estimation for discrete-time linear stochastic systems under distributional ambiguity in the initial state, process noise, and measurement noise. We propose a noise-centric distributionally robust Kalman filter (DRKF) based on Wasserstein ambiguity sets imposed directly on these distributions. This formulation excludes dynamically unreachable priors and yields a Kalman-type recursion driven by least-favorable covariances computed via semidefinite programs (SDP). In the time-invariant case, the steady-state DRKF is obtained from a single stationary SDP, producing a constant gain with Kalman-level online complexity. We establish the convergence of the DR Riccati covariance iteration to the stationary SDP solution, together with an explicit sufficient condition for a prescribed convergence rate. We further show that the proposed noise-centric model induces a priori spectral bounds on all feasible covariances and a Kalman filter sandwiching property for the DRKF covariances. Finally, we prove that the steady-state error dynamics are Schur stable, and the steady-state DRKF is asymptotically minimax optimal with respect to worst-case mean-square error.
Restoring natural and intuitive hand function requires simultaneous and proportional control (SPC) of multiple degrees of freedom (DoFs). This study systematically evaluated the multichannel linear descriptors-based block field method (MLD-BFM) for continuous decoding of five finger-joint DoFs by leveraging the rich spatial information of high-density surface electromyography (HD sEMG). Twenty-one healthy participants performed dynamic sinusoidal finger movements while HD sEMG signals were recorded from the extensor digitorum communis (EDC) and flexor digitorum superficialis (FDS) muscles. MLD-BFM extracted region-specific spatial features, including effective field strength ($\Sigma$), field-strength variation rate ($\Phi$), and spatial complexity ($\Omega$). Model performance was optimized (block size: $2 \times 2$; window: 0.15 s) and compared with conventional time-domain features and dimensionality reduction approaches when applied to multi-output regression models. MLD-BFM consistently achieved the highest $\mathrm{R}^2_{\mathrm{vw}}$ values across all models. The multilayer perceptron (MLP) combined with MLD-BFM yielded the best performance ($\mathrm{R}^2_{\mathrm{vw}} = 86.68\% \pm 0.33$). Time-domain features also showed strong predictive capability and were statistically comparable to MLD-BFM in some models, whereas dimensionality reduction techniques exhibited lower accuracy. Decoding accuracy was higher for the middle and ring fingers than for the thumb. Overall, MLD-BFM improved continuous finger movement decoding accuracy, underscoring the importance of taking advantage of the spatial richness of HD sEMG. These findings suggest that spatially structured features enhance SPC and provide practical guidance for designing robust, real-time, and responsive myoelectric interfaces.
The deployment of extremely large aperture arrays (ELAAs) in sixth-generation (6G) networks could shift communication into the near-field communication (NFC) regime. In this regime, signals exhibit spherical wave propagation, unlike the planar waves in conventional far-field systems. Reconfigurable intelligent surfaces (RISs) can dynamically adjust phase shifts to support NFC beamfocusing, concentrating signal energy at specific spatial coordinates. However, effective RIS utilization depends on both rapid channel state information (CSI) estimation and proactive blockage mitigation, which occur on inherently different timescales. CSI varies at millisecond intervals due to small-scale fading, while blockage events evolve over seconds, posing challenges for conventional single-level control algorithms. To address this issue, we propose a dual-transformer (DT) hierarchical framework that integrates two specialized transformer models within a hierarchical deep reinforcement learning (HDRL) architecture, referred to as the DT-HDRL framework. A fast-timescale transformer processes ray-tracing data for rapid CSI estimation, while a vision transformer (ViT) analyzes visual data to predict impending blockages. In HDRL, the high-level controller selects line-of-sight (LoS) or RIS-assisted non-line-of-sight (NLoS) transmission paths and sets goals, while the low-level controller optimizes base station (BS) beamfocusing and RIS phase shifts using instantaneous CSI. This dual-timescale coordination maximizes spectral efficiency (SE) while ensuring robust performance under dynamic conditions. Simulation results demonstrate that our approach improves SE by approximately 18% compared to single-timescale baselines, while the proposed blockage predictor achieves an F1-score of 0.92, providing a 769 ms advance warning window in dynamic scenarios.
Prediction error and maximum likelihood methods are powerful tools for identifying linear dynamical systems and, in particular, enable the joint estimation of model parameters and the Kalman filter used for state estimation. A key limitation, however, is that these methods require solving a generally non-convex optimization problem to global optimality. This paper analyzes the statistical behavior of local minimizers in the special case where only the Kalman gain is estimated. We prove that these local solutions are statistically consistent estimates of the true Kalman gain. This follows from asymptotic unimodality: as the dataset grows, the objective function converges to a limit with a unique local (and therefore global) minimizer. We further provide guidelines for designing the optimization problem for Kalman filter tuning and discuss extensions to the joint estimation of additional linear parameters and noise covariances. Finally, the theoretical results are illustrated using three examples of increasing complexity. The main practical takeaway of this paper is that difficulties caused by local minimizers in system identification are, at least, not attributable to the tuning of the Kalman gain.
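A minimal sketch of Kalman-gain tuning by prediction error minimization is given below, for the special case discussed above in which only the gain is estimated. The system matrices, noise levels, and data length are illustrative assumptions, and the full maximum likelihood machinery is omitted.

```python
import numpy as np
from scipy.optimize import minimize

def prediction_error_cost(K_flat, A, C, y):
    """Sum of squared one-step prediction errors for a candidate steady-state
    gain K, using the predictor form x_hat[k+1] = A x_hat[k] + K e[k],
    with innovation e[k] = y[k] - C x_hat[k] (A and C assumed known)."""
    n = A.shape[0]
    K = K_flat.reshape(n, -1)
    x_hat = np.zeros(n)
    cost = 0.0
    for yk in y:
        e = yk - C @ x_hat
        cost += float(e @ e)
        x_hat = A @ x_hat + K @ e
    return cost if np.isfinite(cost) else 1e12  # guard against unstable candidates

# Generate data from a simple second-order system and fit the gain.
rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1], [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
x, y = np.zeros(2), []
for _ in range(2000):
    y.append(C @ x + 0.1 * rng.standard_normal(1))
    x = A @ x + 0.05 * rng.standard_normal(2)
res = minimize(prediction_error_cost, x0=np.zeros(2), args=(A, C, np.array(y)))
print("estimated Kalman gain:", res.x)
```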
We propose a novel piecewise smooth image model with piecewise constant local parameters that are automatically adapted to each image. Technically, the model is formulated in terms of factor graphs with NUP (normal with unknown parameters) priors, and the pertinent computations amount to iterations of conjugate-gradient steps and Gaussian message passing. The proposed model and algorithms are demonstrated with applications to denoising and contrast enhancement.
Visual-Inertial Odometry (VIO) is the problem of estimating a robot's trajectory by combining information from an inertial measurement unit (IMU) and a camera, and is of great interest to the robotics community. This paper develops a novel Lie group symmetry for the VIO problem and applies the recently proposed equivariant filter. The proposed symmetry is compatible with the invariance of the VIO reference frame, leading to improved filter consistency. The bias-free IMU dynamics are group-affine, ensuring that filter linearisation errors depend only on the bias estimation error and measurement noise. Furthermore, visual measurements are equivariant with respect to the symmetry, enabling the application of the higher-order equivariant output approximation to reduce approximation error in the filter update equation. As a result, the equivariant filter (EqF) based on this Lie group is a consistent estimator for VIO with lower linearisation error in the propagation of state dynamics and a higher order equivariant output approximation than standard formulations. Experimental results on the popular EuRoC and UZH FPV datasets demonstrate that the proposed system outperforms other state-of-the-art VIO algorithms in terms of both speed and accuracy.
The purpose of this paper is to study the delay-dependent coherent feedback dynamics by focusing on one typical realization, i.e., a two-atom quantum network whose feedback loop is closed by a semi-infinite waveguide. In this set-up, an initially excited two-level atom can emit a photon into the waveguide, where the propagating photon can be reflected by the terminal mirror of the waveguide or absorbed by the other atom, thus constructing various coherent feedback loops. We show that there can be two-photon, one-photon or zero-photon states in the waveguide, which can be controlled by the feedback loop length and the coupling strengths between the atoms and waveguide. The photonic states in the waveguide are analyzed in both the frequency domain and the spatial domain, and the transient process of photon emissions is better understood based on a comprehensive analysis using both domains. Interestingly, we clarify that this quantum coherent feedback network can be mathematically modeled as a linear control system with multiple delays, which are determined by the distances between atoms and the terminal mirror of the semi-infinite waveguide. Therefore, based on time-delayed linear control system theory, the influence of delays on the stability of the quantum state evolution and the steady-state atomic and photonic states is investigated, for both small and large delays.
Complex conjugate matrix equations (CCME) are important in computation and antilinear systems. Existing research mainly focuses on the time-invariant version, while studies on the time-variant version and its solution using artificial neural networks are still lacking. This paper introduces zeroing neural dynamics (ZND) to solve time-variant CCME for the first time. Firstly, vectorization and the Kronecker product in the complex field are defined uniformly. Secondly, the Con-CZND1 and Con-CZND2 models are proposed, and their convergence and effectiveness are theoretically proved. Thirdly, numerical experiments confirm their effectiveness and highlight their differences. The results show the advantages of ZND in the complex field compared with the real field, and further refine the related theory.
Acoustic eavesdropping is a privacy risk, but existing attacks rarely work in real outdoor situations where people make phone calls on the move. We present SuperEar, the first portable system that uses acoustic metamaterials to reliably capture conversations in these scenarios. We show that the threat is real: a practical prototype can be built to enhance faint signals, cover the full range of speech with a compact design, and reduce noise and distortion to produce clear audio. We show that SuperEar can be implemented from low-cost 3D-printed parts and off-the-shelf hardware. Experimental results show that SuperEar can recover phone call audio with a success rate of over 80% at distances of up to 4.6 m, more than twice the range of previous approaches. Our findings highlight a new class of privacy threats enabled by metamaterial technology that requires attention.
Objective: This study addresses conceptual issues around data standardisation in audiology, and outlines steps toward achieving it. It reports a survey of the computational audiology community on their current understanding, needs, and preferences concerning data standards. Based on survey findings and a panel discussion, recommendations are made concerning moving forward with standardisation in audiology. Design: Mixed-methods: 1) review of existing standardisation efforts; 2) a survey of the computational audiology community; 3) expert panel discussion in a dedicated session at the 2024 Virtual Conference of Computational Audiology. Sample: Survey: 82 members of the global community; Panel discussion: five experts. Results: A prerequisite for any global audiology database is an agreed set of data standards. Although many are familiar with the general idea, few know of existing initiatives or have actively participated in them. Ninety percent of respondents expressed willingness to follow or contribute to standardisation efforts. The panel discussed relevant initiatives (e.g. OMOP, openEHR, Noah) and explored both challenges (around harmonisation) and opportunities (alignment with other medical fields and conversion among approaches). Conclusions: Combining conceptual discussion with stakeholder views, the study offers guidance for implementing interoperable data standards in audiology. It highlights community support, key issues to address, and suggests paths for future work.
Speech-language models (SLMs) offer a promising path toward unifying speech and text understanding and generation. However, challenges remain in achieving effective cross-modal alignment and high-quality speech generation. In this work, we systematically investigate the role of speech tokenizer designs in LLM-centric SLMs, augmented by speech heads and speaker modeling. We compare coupled, semi-decoupled, and fully decoupled speech tokenizers under a fair SLM framework and find that decoupled tokenization significantly improves alignment and synthesis quality. To address the information density mismatch between speech and text, we introduce multi-token prediction (MTP) into SLMs, enabling each hidden state to decode multiple speech tokens. This leads to up to 12$\times$ faster decoding and a substantial drop in word error rate (from 6.07 to 3.01). Furthermore, we propose a speaker-aware generation paradigm and introduce RoleTriviaQA, a large-scale role-playing knowledge QA benchmark with diverse speaker identities. Experiments demonstrate that our methods enhance both knowledge understanding and speaker consistency.
Study Objectives: Fetal sleep is a vital yet underexplored aspect of prenatal neurodevelopment. Its cyclic organization reflects the maturation of central neural circuits, and disturbances in these patterns may offer some of the earliest detectable signs of neurological compromise. This is the first review to integrate more than seven decades of research into a unified, cross-species synthesis of fetal sleep. We examine: (i) Physiology and Ontogeny, comparing human fetuses with animal models; and (ii) Methodological Evolution, the transition from invasive neurophysiology to non-invasive monitoring and deep learning frameworks. Methods: A structured narrative synthesis was guided by a systematic literature search across four databases (PubMed, Scopus, IEEE Xplore, and Google Scholar). Of 2,925 identified records, 171 studies involving fetal sleep-related physiology, sleep-state classification, or signal-based monitoring were included in this review. Results: Across the 171 studies, fetal sleep states become clearly observable as the brain matures. In fetal sheep and baboons, organized cycling between active and quiet sleep emerges at approximately 80%-90% of gestation. In humans, this differentiation occurs later, at around 95% of gestation, with full maturation reached near term. Despite extensive animal research, no unified, clinically validated framework exists for defining fetal sleep states, limiting translation into routine obstetric practice. Conclusions: By integrating evidence across species, methodologies, and clinical contexts, this review provides the scientific foundation for developing objective, multimodal, and non-invasive fetal sleep monitoring technologies, tools that may ultimately support earlier detection of neurological compromise and guide timely prenatal intervention.
High-resolution imagery plays a critical role in improving the performance of visual recognition tasks such as classification, detection, and segmentation. In many domains, including remote sensing and surveillance, low-resolution images limit the accuracy of automated analysis. To address this, super-resolution (SR) techniques have been widely adopted to reconstruct high-resolution images from low-resolution inputs. However, traditional approaches focus solely on enhancing image quality as measured by pixel-level metrics, leaving the relationship between super-resolved image fidelity and downstream classification performance largely underexplored. This raises a key question: can integrating classification objectives directly into the super-resolution process further improve classification accuracy? In this paper, we address this question by investigating the relationship between super-resolution and classification. We propose a novel methodology that increases the resolution of synthetic aperture radar imagery by optimising loss functions that account for both image quality and classification performance. Our approach improves image quality, as measured by established image-quality metrics, while also enhancing classification accuracy.
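A minimal sketch of such a joint objective is shown below, assuming an L1 pixel-fidelity term combined with a cross-entropy classification term computed on the super-resolved image; the weighting and the specific loss choices are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def sr_classification_loss(sr_img, hr_img, cls_logits, labels, alpha=0.8):
    """Illustrative joint objective: pixel fidelity plus a downstream
    classification term evaluated on the super-resolved output."""
    pixel_loss = F.l1_loss(sr_img, hr_img)           # image-quality term
    cls_loss = F.cross_entropy(cls_logits, labels)   # classification term
    return alpha * pixel_loss + (1.0 - alpha) * cls_loss
```

Lowering alpha shifts the optimisation toward features that help the classifier, possibly at the expense of pixel-level fidelity, which is exactly the trade-off such a study probes.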
The rise of parallel computing hardware has made it increasingly important to understand which nonlinear state space models can be efficiently parallelized. Recent advances like DEER (arXiv:2309.12252) and DeepPCR (arXiv:2309.16318) recast sequential evaluation as a parallelizable optimization problem, sometimes yielding dramatic speedups. However, the factors governing the difficulty of these optimization problems remained unclear, limiting broader adoption. In this work, we establish a precise relationship between a system's dynamics and the conditioning of its corresponding optimization problem, as measured by its Polyak-Lojasiewicz (PL) constant. We show that the predictability of a system, defined as the degree to which small perturbations in state influence future behavior and quantified by the largest Lyapunov exponent (LLE), impacts the number of optimization steps required for evaluation. For predictable systems, the state trajectory can be computed in at worst $O((\log T)^2)$ time, where $T$ is the sequence length: a major improvement over the conventional sequential approach. In contrast, chaotic or unpredictable systems exhibit poor conditioning, with the consequence that parallel evaluation converges too slowly to be useful. Importantly, our theoretical analysis shows that predictable systems always yield well-conditioned optimization problems, whereas unpredictable systems lead to severe conditioning degradation. We validate our claims through extensive experiments, providing practical guidance on when nonlinear dynamical systems can be efficiently parallelized. We highlight predictability as a key design principle for parallelizable models.
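The sketch below illustrates the underlying idea in plain Python/NumPy: the whole trajectory is treated as the unknown and updated in parallel sweeps, a simplified Jacobi-style stand-in for the Gauss-Newton schemes of DEER and DeepPCR. For contractive (predictable) dynamics the per-sweep error contraction is independent of the sequence length, which is the intuition behind fast parallel evaluation; the dynamics function and iteration count here are purely illustrative.

```python
import numpy as np

def parallel_rollout(f, x0, T, num_iters=30):
    """Recast sequential evaluation x_t = f(x_{t-1}) as a fixed-point
    problem over the full trajectory and refine all time steps at once."""
    xs = np.zeros((T, x0.size))                    # initial guess for x_1..x_T
    for _ in range(num_iters):
        prev = np.vstack([x0[None, :], xs[:-1]])   # trajectory shifted by one step
        xs = f(prev)                               # update every time step in parallel
    return xs

# Example: contractive (predictable) dynamics converge in a few sweeps.
step = lambda x: 0.9 * np.tanh(x)
traj = parallel_rollout(step, np.ones(3), T=100)
```

For chaotic dynamics the same iteration loses its geometric contraction, mirroring the paper's finding that unpredictable systems yield poorly conditioned problems.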
Recent advances in spoken language processing have led to substantial progress in phonetic tasks such as automatic speech recognition (ASR), phone recognition (PR), grapheme-to-phoneme conversion (G2P), and phoneme-to-grapheme conversion (P2G). Despite their conceptual similarity, these tasks have largely been studied in isolation, each relying on task-specific architectures and datasets. In this paper, we introduce POWSM (Phonetic Open Whisper-style Speech Model), the first unified framework capable of jointly performing multiple phone-related tasks. POWSM enables seamless conversion between audio, text (graphemes), and phones, opening up new possibilities for universal and low-resource speech processing. Our model outperforms or matches specialized PR models of similar size (Wav2Vec2Phoneme and ZIPA) while jointly supporting G2P, P2G, and ASR. Our training data, code and models are released to foster open science.
Approximate model-predictive control (AMPC) aims to imitate an MPC's behavior with a neural network, removing the need to solve an expensive optimization problem at runtime. However, during deployment, the parameters of the underlying MPC must usually be fine-tuned. This often renders AMPC impractical as it requires repeatedly generating a new dataset and retraining the neural network. Recent work addresses this problem by adapting AMPC without retraining using approximated sensitivities of the MPC's optimization problem. Currently, this adaptation must be done by hand, which is labor-intensive and can be unintuitive for high-dimensional systems. To solve this issue, we propose using Bayesian optimization to tune the parameters of AMPC policies based on experimental data. By combining model-based control with direct and local learning, our approach achieves superior performance to nominal AMPC on hardware, with minimal experimentation. This allows automatic and data-efficient adaptation of AMPC to new system instances and fine-tuning to cost functions that are difficult to directly implement in MPC. We demonstrate the proposed method in hardware experiments for the swing-up maneuver on an inverted cartpole and yaw control of an under-actuated balancing unicycle robot, a challenging control problem.
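As a sketch of how Bayesian optimization can drive such tuning, the snippet below uses scikit-optimize's gp_minimize over a small box of MPC cost weights; the closed-loop cost is replaced by a synthetic stand-in, since the real objective would be measured from a hardware or simulation rollout of the adapted AMPC policy. The bounds, weights, and cost model are assumptions, not the paper's setup.

```python
import numpy as np
from skopt import gp_minimize  # assumes scikit-optimize is installed

def closed_loop_cost(theta):
    """Stand-in for one experimental rollout with the AMPC policy adapted
    by parameters theta; a noisy quadratic replaces the measured cost."""
    q, r = theta
    return float((q - 2.0) ** 2 + (r - 0.3) ** 2 + 0.01 * np.random.randn())

# Illustrative bounds on two tunable MPC cost weights.
bounds = [(0.1, 10.0), (0.01, 1.0)]

result = gp_minimize(closed_loop_cost, bounds, n_calls=25, random_state=0)
print("best parameters:", result.x, "estimated cost:", result.fun)
```

Because each evaluation corresponds to one physical experiment, the sample efficiency of the Gaussian-process surrogate is what keeps the required experimentation minimal.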
Attitude control is essential for many satellite missions. Classical controllers, however, are time-consuming to design and sensitive to model uncertainties and variations in operational boundary conditions. Deep Reinforcement Learning (DRL) offers a promising alternative by learning adaptive control strategies through autonomous interaction with a simulation environment. Overcoming the Sim2Real gap, which involves deploying an agent trained in simulation onto the real physical satellite, remains a significant challenge. In this work, we present the first successful in-orbit demonstration of an AI-based attitude controller for inertial pointing maneuvers. The controller was trained entirely in simulation and deployed to the InnoCube 3U nanosatellite, which was developed by the Julius-Maximilians-Universität Würzburg in cooperation with the Technische Universität Berlin, and launched in January 2025. We present the AI agent design, the methodology of the training procedure, the discrepancies between the simulation and the observed behavior of the real satellite, and a comparison of the AI-based attitude controller with the classical PD controller of InnoCube. Steady-state metrics confirm the robust performance of the AI-based controller during repeated in-orbit maneuvers.
This paper presents a novel learning-based framework for predicting power outages caused by extreme events. The proposed approach targets low-probability, high-consequence outage scenarios and leverages a comprehensive set of features derived from publicly available data sources. We integrate EAGLE-I outage records from 2014 to 2024 with weather, socioeconomic, infrastructure, and seasonal event data. Incorporating social and demographic indicators reveals patterns of community vulnerability and improves understanding of outage risk during extreme conditions. Four machine learning models are evaluated: Random Forest (RF), Graph Neural Network (GNN), Adaptive Boosting (AdaBoost), and Long Short-Term Memory (LSTM). Experimental validation is performed on a large-scale dataset covering counties in the lower peninsula of Michigan. Among all models tested, the LSTM network achieves the highest accuracy.
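For concreteness, the following is a minimal sketch of the kind of LSTM sequence classifier evaluated in such a study, operating on per-county daily feature vectors; the feature count, layer sizes, and binary outage label are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class OutageLSTM(nn.Module):
    """Illustrative outage classifier: an LSTM over a window of daily
    feature vectors (weather, socioeconomic, infrastructure, seasonal
    indicators) predicting an extreme-outage label for a county."""
    def __init__(self, num_features: int = 32, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(num_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)   # outage vs. no-outage logits

    def forward(self, x):                  # x: (batch, time, num_features)
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])          # classify from the final hidden state
```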