To address the imperative for covert underwater swarm coordination, this paper introduces "Hydrodynamic Whispering," a near-field silent communication paradigm utilizing Artificial Lateral Line (ALL) arrays. Grounded in potential flow theory, we model the transmitter as an oscillating dipole source. The resulting pressure field exhibits steep near-field attenuation (scaling with 1/r^2), naturally delimiting a secure "communication bubble" with intrinsic Low Probability of Interception (LPI) properties. We propose a transceiver architecture featuring a Binary Phase Shift Keying (BPSK) modulation scheme adapted for mechanical actuator inertia, coupled with a bio-inspired 24-sensor conformal array. To mitigate low Signal-to-Noise Ratio (SNR) in turbulent environments, a Spatio-Temporal Joint Processing framework incorporating Spatial Matched-Field Beamforming is developed. Simulation results demonstrate that the system achieves an array gain of approximately 13.8 dB and maintains a near-zero Bit Error Rate (BER) within the effective range. This study validates the feasibility of utilizing localized hydrodynamic pressure fluctuations for reliable and secure short-range underwater networking.
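As a brief illustration of the stated scaling (standard potential-flow theory for a translating-sphere dipole, not the paper's full transmitter model): for a sphere of radius $a$ oscillating with velocity $U(t)$, the velocity potential and unsteady pressure are approximately

$$\phi(r,\theta,t) = -\frac{a^{3}}{2r^{2}}\,U(t)\cos\theta, \qquad p(r,\theta,t) \approx -\rho\,\frac{\partial \phi}{\partial t} = \frac{\rho a^{3}}{2r^{2}}\,\dot{U}(t)\cos\theta \;\propto\; \frac{1}{r^{2}},$$

so the pressure signal decays quadratically with range, which is what bounds the secure "communication bubble."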
Background: Deep learning superresolution (SR) may enhance musculoskeletal MR image quality, but its diagnostic value in knee imaging at 7T is unclear. Objectives: To compare image quality and diagnostic performance of SR, low-resolution (LR), and high-resolution (HR) 7T knee MRI. Methods: In this prospective study, 42 participants underwent 7T knee MRI with LR (0.8×0.8×2 mm³) and HR (0.4×0.4×2 mm³) sequences. SR images were generated from LR data using a Hybrid Attention Transformer model. Three radiologists assessed image quality, anatomic conspicuity, and detection of knee pathologies. Arthroscopy served as reference in 10 cases. Results: SR images showed higher overall quality than LR (median score 5 vs 4, P<.001) and lower noise than HR (5 vs 4, P<.001). Visibility of cartilage, menisci, and ligaments was superior in SR and HR compared to LR (P<.001). Detection rates and diagnostic performance (sensitivity, specificity, AUC) for intra-articular pathology were similar across image types (P>=.095). Conclusions: Deep learning superresolution improved subjective image quality in 7T knee MRI but did not increase diagnostic accuracy compared with standard LR imaging.
This paper presents robust position control strategies for the novel variable stiffness series elastic actuator (VSSEA). By employing a constructed state-space model, two control schemes are developed in a unified framework: a state-feedback controller and a sliding mode controller, both integrated with a second-order disturbance observer (DOb). The proposed framework achieves high-performance motion control by precisely estimating and compensating for internal and external disturbances, while preserving the nominal dynamic response. Simulation results demonstrate that pole-placement-based controllers are highly sensitive to disturbances, whereas LQR-based controllers offer improved robustness at the expense of slower dynamics. By incorporating the DOb, robustness is significantly enhanced without degrading the time response, and the LQR controller can be tuned solely for performance optimization. Experimental results confirm that the proposed robust position controllers can be implemented in real-world applications. These results highlight the effectiveness of the proposed approach and lay the foundation for future investigations on robust stability and performance under different stiffness settings.
This paper presents a new HPDOb that significantly improves disturbance estimation accuracy and robustness in motion control systems, surpassing the capabilities of conventional disturbance observers (DObs). The proposed observer is analysed and synthesised in the discrete-time domain, providing a realistic representation of its dynamic behaviour and enabling enhanced controller design for practical applications. The core contribution of the HPDOb is a novel synthesis method that incorporates higher-order truncation error dynamics into disturbance estimation. Unlike conventional DObs, which are limited to zero-order truncation error, the HPDOb achieves first-order truncation error, yielding markedly improved estimation accuracy and robustness against disturbances in motion control systems. Simulations and experiments verify the stability and performance of the HPDOb.
In this study, we evaluated four binarization methods: Locality-Sensitive Hashing (LSH), Iterative Quantization (ITQ), Kernel-based Supervised Hashing (KSH), and Supervised Discrete Hashing (SDH), on the ODIR dataset using deep feature embeddings. Experimental results show that SDH achieved the best performance, with an mAP@100 of 0.9184 using only 32-bit codes, outperforming LSH, ITQ, and KSH. Compared with prior studies, our method proved highly competitive: Fang et al. reported 0.7528 (Fundus-iSee, 48 bits) and 0.8856 (ASOCT-Cataract, 48 bits), while Wijesinghe et al. achieved 94.01 (KVASIR, 256 bits). Despite using significantly fewer bits, our SDH-based framework reached retrieval accuracy close to the state of the art. These findings demonstrate that SDH is the most effective approach among those tested, offering a practical balance of accuracy, storage, and efficiency for medical image retrieval and device inventory management.
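For readers unfamiliar with the retrieval metric, the following sketch (illustrative code with hypothetical data, not the paper's implementation; AP@k conventions vary slightly across works) shows how mAP@100 is typically computed over binary codes with Hamming-distance ranking:

import numpy as np

def hamming_rank(query_code, db_codes):
    # Rank database items by Hamming distance to the query (codes are 0/1 arrays).
    dists = np.count_nonzero(db_codes != query_code, axis=1)
    return np.argsort(dists, kind="stable")

def average_precision_at_k(ranked_labels, query_label, k=100):
    # AP@k for one query: mean precision at the positions of relevant hits in the top k.
    topk = ranked_labels[:k]
    hits = (topk == query_label)
    if hits.sum() == 0:
        return 0.0
    precisions = np.cumsum(hits) / (np.arange(k) + 1)
    return float((precisions * hits).sum() / hits.sum())

def map_at_k(query_codes, query_labels, db_codes, db_labels, k=100):
    # mAP@k over all queries, e.g. the mAP@100 figure reported for the 32-bit SDH codes.
    aps = [average_precision_at_k(db_labels[hamming_rank(q, db_codes)], y, k)
           for q, y in zip(query_codes, query_labels)]
    return float(np.mean(aps))

# Toy usage with random 32-bit codes and 8 classes (illustrative only).
rng = np.random.default_rng(0)
db_codes = rng.integers(0, 2, size=(1000, 32))
db_labels = rng.integers(0, 8, size=1000)
print(map_at_k(db_codes[:10], db_labels[:10], db_codes, db_labels, k=100))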
The growing demand for mid-band spectrum necessitates efficient Dynamic Spectrum Sharing (DSS) to ensure coexistence between cellular networks and incumbent radar systems. Existing Spectrum Access System (SAS) frameworks rely on fixed Environmental Sensing Capability (ESC) sensors, which are latency-prone and inflexible. This paper introduces O-DSS, an O-RAN-compliant, Machine Learning (ML)-driven DSS framework that enables real-time cellular-radar coexistence in mid-band frequencies with shipborne and fast-moving airborne radars. O-DSS integrates radar detection from low-overhead Key Performance Metrics (KPMs) with spectrogram-based localization to drive fine-grained RAN control, including PRB blanking and radar-aware MCS adaptation. Deployed as a modular xApp, O-DSS achieves 60~ms detection and 700~ms evacuation latencies, outperforming existing baselines. Evaluations across simulations and Over-The-Air (OTA) testbeds show that O-DSS ensures robust incumbent protection while maintaining cellular performance by achieving radar detection of $\geq 99\%$ at SINR $\geq -4$~dB and localization recall of $\geq 95\%$ at SINR $\geq 8$~dB.
Artificial intelligence systems are increasingly embedded as persistent, closed-loop components within cyber-physical, social, and institutional processes. Rather than producing isolated outputs, such systems operate continuously under feedback, adaptation, and scale, reshaping physical flows, human behavior, and institutional practice over time. In these settings, socially unacceptable outcomes rarely arise from singular faults or explicit policy violations. Instead, they emerge through cumulative execution trajectories enabled by repetition, concurrency, and feedback. This paper advances the formal foundation of the Social Responsibility Stack (SRS) by making its central requirement explicit: responsibility is fundamentally a reachability property of system execution. A system is responsible iff its execution semantics prevent entry into inadmissible global configurations, regardless of local performance gains or optimization objectives. Responsibility failures are therefore not objective-level errors, but execution-level failures of trajectory control. To operationalize this perspective, we introduce Petri nets as an execution-level formalism for responsible autonomous systems. We show how SRS value commitments correspond to forbidden markings, safeguards to structural constraints on transition firing, auditing to monitoring of reachability pressure, and governance to legitimate modification of execution structure. Embedding Petri-net reachability within the SRS architecture internalizes responsibility as a structural invariant rather than an external objective or post-hoc mechanism. These results establish the Social Responsibility Stack as an executable responsibility architecture and position reachability-based execution semantics as a necessary foundation for responsible autonomy in feedback-rich cyber-physical and socio-technical systems.
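To make the "responsibility as reachability" idea concrete, here is a minimal, self-contained toy sketch (our own example, not the SRS implementation): it enumerates the reachable markings of a small Petri net by breadth-first search and checks whether any forbidden marking, i.e., an inadmissible global configuration, can be entered.

from collections import deque

# A toy Petri net: each transition consumes and produces tokens on named places.
transitions = {
    "dispatch": {"consume": {"idle": 1}, "produce": {"active": 1}},
    "finish":   {"consume": {"active": 1}, "produce": {"idle": 1}},
}

def enabled(marking, t):
    return all(marking.get(p, 0) >= n for p, n in t["consume"].items())

def fire(marking, t):
    m = dict(marking)
    for p, n in t["consume"].items():
        m[p] -= n
    for p, n in t["produce"].items():
        m[p] = m.get(p, 0) + n
    return m

def forbidden(marking):
    # Example inadmissible configuration: more than two concurrently active agents.
    return marking.get("active", 0) > 2

def reaches_forbidden(initial_marking):
    # BFS over the reachability graph; True if some forbidden marking is reachable.
    seen, queue = set(), deque([initial_marking])
    while queue:
        m = queue.popleft()
        key = tuple(sorted(m.items()))
        if key in seen:
            continue
        seen.add(key)
        if forbidden(m):
            return True
        for t in transitions.values():
            if enabled(m, t):
                queue.append(fire(m, t))
    return False

print(reaches_forbidden({"idle": 3, "active": 0}))  # True: all three agents can become active

Adding a capacity place holding two tokens that "dispatch" consumes and "finish" returns would play the role of a structural safeguard on transition firing, making the forbidden marking unreachable by construction.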
Solving inverse problems in imaging requires models that support efficient inference, uncertainty quantification, and principled probabilistic reasoning. Energy-Based Models (EBMs), with their interpretable energy landscapes and compositional structure, are well-suited for this task but have historically suffered from high computational costs and training instability. To overcome these shortcomings, we introduce a fast distillation strategy to transfer the strengths of pre-trained diffusion models into multi-scale EBMs. These distilled EBMs enable efficient sampling and preserve the interpretability and compositionality inherent to potential-based frameworks. Leveraging EBM compositionality, we propose the Annealed Langevin Posterior Sampling (ALPS) algorithm to obtain Maximum-A-Posteriori (MAP), Minimum Mean Square Error (MMSE), and uncertainty estimates for inverse problems in imaging. Unlike diffusion models that use complex guidance strategies for latent variables, we perform annealing on static posterior distributions that are well-defined and composable. Experiments on image inpainting and MRI reconstruction demonstrate that our method matches or surpasses diffusion-based baselines in both accuracy and efficiency, while also supporting MAP recovery. Overall, our framework offers a scalable and principled solution for inverse problems in imaging, with potential for practical deployment in scientific and clinical settings. ALPS code is available at the GitHub repository \href{this https URL}{ALPS}.
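As a hedged illustration of the sampling idea only (a toy Gaussian posterior in NumPy, not the distilled multi-scale EBMs of the paper), annealed Langevin dynamics runs noisy gradient steps on a composed energy, prior term plus likelihood term, while gradually shrinking the step size:

import numpy as np

def energy_grad(x, y, A, sigma):
    # Gradient of the composed posterior energy E(x) = ||x||^2/2 + ||y - A x||^2/(2 sigma^2)
    # (Gaussian prior energy + linear-Gaussian likelihood energy).
    return x + A.T @ (A @ x - y) / sigma**2

def annealed_langevin(y, A, sigma, steps_per_level=200, step_sizes=(5e-2, 2e-2, 1e-2, 5e-3), seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[1])
    for eta in step_sizes:  # anneal: progressively smaller Langevin steps
        for _ in range(steps_per_level):
            x = x - eta * energy_grad(x, y, A, sigma) + np.sqrt(2 * eta) * rng.standard_normal(x.shape)
    return x

# Toy linear inverse problem: recover x from y = A x + noise.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 10)) / np.sqrt(20)
x_true = rng.standard_normal(10)
sigma = 0.5
y = A @ x_true + sigma * rng.standard_normal(20)
x_hat = annealed_langevin(y, A, sigma)
print("reconstruction error:", np.linalg.norm(x_hat - x_true))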
This paper presents a measurement-based framework for characterizing altitude-dependent spectral behavior of signals received by a tethered Helikite unmanned aerial vehicle (UAV). Using a multi-year spectrum measurement campaign in an outdoor urban environment, power spectral density snapshots are collected over the 89 MHz--6 GHz range. Three altitude-dependent spectral metrics are extracted: band-average power, spectral entropy, and spectral sparsity. We introduce the Altitude-Dependent Spectral Structure Model (ADSSM) to characterize the spectral power and entropy using first-order altitude-domain differential equations, and spectral sparsity using a logistic function, yielding closed-form expressions with physically consistent asymptotic behavior. The model is fitted to altitude-binned measurements from three annual campaigns at the AERPAW testbed across six licensed and unlicensed sub-6 GHz bands. Across all bands and years, the ADSSM achieves low root-mean-square error and high coefficients of determination. Results indicate that power transitions occur over narrow low-altitude regions, while entropy and sparsity evolve over broader, band-dependent altitude ranges, demonstrating that altitude-dependent spectrum behavior is inherently multidimensional. By explicitly modeling altitude-dependent transitions in spectral structure beyond received power, the proposed framework enables spectrum-aware UAV sensing and band selection decisions that are not achievable with conventional power- or threshold-based occupancy models.
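Purely as an illustrative sketch of the stated functional forms (a first-order altitude relaxation plus a logistic transition; the ADSSM's exact parameterization is given in the paper and may differ), one can write

$$\frac{dP(h)}{dh} = \kappa\bigl(P_\infty - P(h)\bigr) \;\Rightarrow\; P(h) = P_\infty + (P_0 - P_\infty)e^{-\kappa h}, \qquad s(h) = s_{\min} + \frac{s_{\max}-s_{\min}}{1 + e^{-(h-h_0)/w}},$$

where $P(h)$ stands for band-average power (and analogously entropy), $\kappa$ sets the transition rate, and $h_0$ and $w$ set the center and width of the sparsity transition; both forms are closed-form and asymptotically flat, consistent with the behavior described above.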
Smart grid infrastructure needs improved resilience and preventive maintenance through more accurate predictions. Current methodologies fail to accurately represent spatio-temporal-causal interdependencies and to handle class imbalance in failure prediction tasks. This study introduces a Topology-Aware Spatio-Temporal Graph Transformer (ST-GT) architecture that overcomes these limitations through three main innovations: (1) directly incorporating the physical transmission network topology into the transformer attention mechanism to identify spatial failure propagation patterns; (2) unified processing of static topological descriptors; and (3) end-to-end integration of temporal Phasor Measurement Unit (PMU) sequences. The ST-GT model exhibited outstanding performance in five-fold cross-validation across 10 substations, attaining perfect recall (1.000 $\pm$ 0.001) and an F1-score of 0.858 $\pm$ 0.009, markedly surpassing XGBoost baselines (0.683 accuracy/F1). Perfect recall guarantees that no critical failures are overlooked, which is essential for grid safety; however, it may lead to an increase in false alarm rates. This framework integrates temporal dynamics modeling with spatial graph awareness for critical infrastructure monitoring. It offers interpretable insights into failure propagation pathways and enhances maintenance strategies. Future research focuses on developing cost-weighted loss functions to improve the precision-recall trade-off, implementing real-time monitoring systems with uncertainty quantification, and creating cost-sensitive frameworks that balance false-alarm expenditures against failure consequences. The methodology's success suggests its potential for wider application in critical infrastructure domains requiring spatio-temporal failure prediction.
AV2 is the successor to the AV1 royalty-free video coding standard developed by the Alliance for Open Media (AOMedia). Its primary objective is to deliver substantial compression gains and subjective quality improvements while maintaining low-complexity encoder and decoder operations. This paper describes the transform, quantization, and entropy coding design in AV2, including redesigned transform kernels and data-driven transforms, expanded transform partitioning, and mode- and coefficient-dependent transform signaling. AV2 introduces several new coding tools, including Intra/Inter Secondary Transforms (IST), Trellis Coded Quantization (TCQ), Adaptive Transform Coding (ATC), Probability Adaptation Rate Adjustment (PARA), Forward Skip Coding (FSC), Cross Chroma Component Transforms (CCTX), Parity Hiding (PH) tools, and improved lossless coding. These advances enable AV2 to deliver the highest-quality video experience for video applications at a significantly reduced bitrate.
This paper discusses the task of face-based speech synthesis, a kind of personalized speech synthesis in which the synthesized voices are constrained to perceptually match a reference face image. Due to the lack of TTS-quality audio-visual corpora, previous approaches suffer from either low synthesis quality or domain mismatch induced by a knowledge transfer scheme. This paper proposes a new approach called Vclip that utilizes the facial-semantic knowledge of the CLIP encoder on noisy audio-visual data to learn the association between face and voice efficiently, achieving an 89.63% cross-modal verification AUC score on the VoxCeleb test set. The proposed method then uses a retrieval-based strategy, combined with a GMM-based speaker generation module for a downstream TTS system, to produce probable target speakers given reference images. Experimental results demonstrate that the proposed Vclip system, in conjunction with the retrieval step, can bridge the gap between face and voice features for face-based speech synthesis, and that feedback information distilled from the downstream TTS helps synthesize voices that closely match the reference faces. Demos are available at this http URL.
The coordination of Embodied Multi-Agent Systems in constrained physical environments requires a rigorous balance between safety, scalability, and efficiency. Traditional decentralized approaches, e.g., reactive collision avoidance, are prone to local minima or reciprocal yielding standoffs due to the lack of future intent awareness. In contrast, centralized planning suffers from intractable computational complexity and single-point-of-failure vulnerabilities. To address these limitations, we propose the Hierarchical Preemptive Holistic Collaborative (Prollect) framework, which generalizes the Preemptive Holistic Collaborative System (PHCS) by decomposing the global coordination problem into topologically connected subspace optimizations. We formalize the system as a Hybrid Automaton and introduce a three-stage receding horizon mechanism (frozen execution, preliminary planning, proactive look-ahead windows) with explicit padding to prevent races between coordination dissemination and intent updates. Notably, we design a robust timing protocol with a mandatory Idle Buffer that acts as a dwell-time constraint to eliminate Zeno behaviors and ensure computational stability under jitter. Furthermore, we formalize a Shadow Agent protocol to guarantee seamless trajectory consistency across subspace boundaries, which we treat as an Input-to-State Stability (ISS) problem.
Blockchain plays a crucial role in ensuring the security and integrity of decentralized systems, with the proof-of-work (PoW) mechanism being fundamental for achieving distributed consensus. As PoW blockchains see broader adoption, an increasingly diverse set of miners with varying computing capabilities participate in the network. In this paper, we consider PoW blockchain mining in which the miners are subject to resource uncertainties. To characterize the uncertain computing resources of different mining participants, we establish an ambiguity set representing the uncertainty of the resource distributions. The networked mining is then formulated as a non-cooperative game, where a distributionally robust performance is calculated for each individual miner to tackle the resource uncertainties. We prove the existence of the equilibrium of the distributionally robust mining game. To derive the equilibrium, we propose a conditional value-at-risk (CVaR)-based reinterpretation of the best response of each miner. We then solve the individual strategy with alternating optimization, which facilitates the iteration among miners towards the game equilibrium. Furthermore, we consider the case in which the ambiguity of the resource distribution reduces to a Gaussian distribution and the case in which the other uncertainties vanish, and characterize the properties of the equilibrium therein along with a distributed algorithm to achieve it. Simulation results show that the proposed approaches converge to the equilibrium and effectively tackle the uncertainties in blockchain mining to achieve a robust performance guarantee.
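For reference, the CVaR used in the best-response reinterpretation is commonly written in the Rockafellar-Uryasev form (the standard definition; the paper's exact notation and risk level may differ):

$$\mathrm{CVaR}_{\alpha}(X) \;=\; \min_{t\in\mathbb{R}}\Bigl\{\, t + \frac{1}{1-\alpha}\,\mathbb{E}\bigl[(X-t)_{+}\bigr] \Bigr\},$$

i.e., for a loss $X$ and confidence level $\alpha$, roughly the average of the worst $(1-\alpha)$ fraction of loss outcomes, which is a natural risk functional for a miner that is distributionally robust over an ambiguity set of resource distributions.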
In this article, a framework for an AI-native, cross-module optimized physical layer with cooperative control agents is proposed, which involves optimization across global AI/ML modules of the physical layer with an innovative design of multiple enhancement mechanisms and control strategies. Specifically, it achieves simultaneous optimization across global modules of uplink AI/ML-based joint source-channel coding with modulation, and downlink AI/ML-based modulation with precoding and corresponding data detection, reducing traditional inter-module information barriers to facilitate end-to-end optimization toward global objectives. Moreover, multiple enhancement mechanisms are also proposed, including i) an AI/ML-based cross-layer modulation approach with theoretical analysis for downlink transmission that breaks the isolation of inter-layer features to expand the solution space for determining improved constellations, ii) a utility-oriented precoder construction method that shifts the role of the AI/ML-based CSI feedback decoder from recovering the original CSI to directly generating precoding matrices aimed at improving end-to-end performance, and iii) incorporating modulation into AI/ML-based CSI feedback to bypass bit-level bottlenecks that introduce quantization errors, non-differentiable gradients, and limitations in constellation solution spaces. Furthermore, AI/ML-based control agents for optimized transmission schemes are proposed that leverage AI/ML to perform model switching according to the channel state, thereby enabling integrated control for global throughput optimization. Finally, simulation results demonstrate the superiority of the proposed solutions in terms of BLER and throughput. These extensive simulations employ more practical assumptions aligned with 3GPP requirements, which we hope will provide valuable insights for future standardization discussions.
Accurate and automated lesion segmentation in Positron Emission Tomography / Computed Tomography (PET/CT) imaging is essential for cancer diagnosis and therapy planning. This paper presents a Swin Transformer UNet 3D (SwinUNet3D) framework for lesion segmentation in Fluorodeoxyglucose Positron Emission Tomography / Computed Tomography (FDG-PET/CT) scans. By combining shifted window self-attention with U-Net style skip connections, the model captures both global context and fine anatomical detail. We evaluate SwinUNet3D on the AutoPET III FDG dataset and compare it against a baseline 3D U-Net. Results show that SwinUNet3D achieves a Dice score of 0.88 and IoU of 0.78, surpassing 3D U-Net (Dice 0.48, IoU 0.32) while also delivering faster inference times. Qualitative analysis demonstrates improved detection of small and irregular lesions, reduced false positives, and more accurate PET/CT fusion. While the framework is currently limited to FDG scans and trained under modest GPU resources, it establishes a strong foundation for future multi-tracer, multi-center evaluations and benchmarking against other transformer-based architectures. Overall, SwinUNet3D represents an efficient and robust approach to PET/CT lesion segmentation, advancing the integration of transformer-based models into oncology imaging workflows.
Distributed radar sensors enable robust human activity recognition. However, scaling the number of coordinated nodes introduces challenges in feature extraction from large datasets and in transparent data fusion. We propose an end-to-end framework that operates directly on raw radar data. Each radar node employs a lightweight 2D Convolutional Neural Network (CNN) to extract local features. A self-attention fusion block then models inter-node relationships and performs adaptive information fusion. Local feature extraction reduces the input dimensionality by up to 480x, which significantly lowers communication overhead and latency. The attention mechanism provides inherent interpretability by quantifying the contribution of each radar node. A hybrid supervised contrastive loss further improves feature separability, especially for fine-grained and imbalanced activity classes. Experiments on real-world distributed Ultra Wide Band (UWB) radar data demonstrate that the proposed method reduces model complexity by 70.8\%, while achieving higher average accuracy than baseline approaches. Overall, the framework enables transparent, efficient, and low-overhead distributed radar sensing.
Defining agency is an extremely important challenge for cognitive science and artificial intelligence. Physics generally describes mechanical happenings, but there remains an unbridgeable gap between them and the acts of agents. To discuss the morality and responsibility of agents, it is necessary to model acts; whether such responsible acts can be fully explained by physical determinism has been debated. Although we have already proposed a physical "agent determinism" model that appears to go beyond mere mechanical happenings, we have not yet established a strict mathematical formalism that eliminates ambiguity. Here, we explain why a physical system can follow coarse-grained agent-level determination without violating physical laws by formulating supervenient causation. Generally, a supervenient property, including a coarse-grained one, does not change without a change in its lower base; therefore, a single supervenience relation alone cannot define supervenient causation. We define supervenient causation as the causal efficacy from the supervenience level to its lower base level. Although an algebraic expression composed of multiple supervenient functions does supervene on the base, the sequence of indices that determines the algebraic expression does not supervene on the base. Therefore, the sequence can possess unique dynamical laws that are independent of the lower base level. This independent dynamics creates the possibility for temporally preceding changes at the supervenience level to cause changes at the lower base level. Such a dual-laws system is considered useful for modeling self-determining agents such as humans.
Next-generation wireless systems aim to enable on-demand connectivity through dynamic spectrum utilization. Motivated by this vision, this paper investigates the propagation characteristics and MIMO performance of the upper mid-band, spanning approximately 7-24 GHz and unofficially referred to as FR3. Using site-specific ray-tracing (RT) simulations based on the Sionna framework, we analyze indoor and outdoor environments at representative frequencies across FR1, FR3, and FR2, including 3.5, 7, 10, 14, 20, 24, and 28 GHz, under both single-antenna and multi-antenna configurations. The results show that FR3 exhibits intermediate propagation behavior between sub-6 GHz and millimeter-wave bands while sustaining effective spatial multiplexing and favorable spectral efficiency. Furthermore, large-array analysis indicates that performance gains in FR3 are closely tied to antenna scaling, highlighting the importance of large-size or large-aperture MIMO architectures for practical deployments.
Advanced speech synthesis technologies have enabled highly realistic speech generation, posing security risks that motivate research into audio deepfake detection (ADD). While state space models (SSMs) offer linear complexity, purely causal SSM architectures often struggle with the content-based retrieval required to capture global frequency-domain artifacts. To address this, we explore the scaling properties of hybrid architectures by proposing XLSR-MamBo, a modular framework integrating an XLSR front-end with synergistic Mamba-Attention backbones. We systematically evaluate four topological designs using advanced SSM variants: Mamba, Mamba2, Hydra, and Gated DeltaNet. Experimental results demonstrate that the MamBo-3-Hydra-N3 configuration achieves competitive performance compared to other state-of-the-art systems on the ASVspoof 2021 LA, DF, and In-the-Wild benchmarks. This performance benefits from Hydra's native bidirectional modeling, which captures holistic temporal dependencies more efficiently than the heuristic dual-branch strategies employed in prior works. Furthermore, evaluations on the DFADD dataset demonstrate robust generalization to unseen diffusion- and flow-matching-based synthesis methods. Crucially, our analysis reveals that scaling backbone depth effectively mitigates the performance variance and instability observed in shallower models. These results demonstrate the hybrid framework's ability to capture artifacts in spoofed speech signals, providing an effective method for ADD.
Extreme events such as earthquakes pose significant threats to integrated electricity-gas distribution systems (IEGDS) by causing widespread damage. Existing restoration approaches typically assume full awareness of damage, which may not be true if monitoring and communication infrastructures are impaired. In such circumstances, field inspection is necessary. This paper presents a novel adaptive restoration framework for IEGDS, considering dynamic damage assessment and repair. The restoration problem is formulated as a partially observable Markov decision process (POMDP), capturing the gradually revealed contingency and the evolving impact of field crew actions. To address the computational challenges of POMDPs in real-time applications, an advanced belief tree search (BTS) algorithm is introduced. This algorithm enables crew members to continuously update their actions based on evolving belief states, leveraging comprehensive simulations to evaluate potential future trajectories and identify optimal inspection and repair strategies. Based on the BTS algorithm, a unified real-time decision-making framework is developed for IEGDS restoration. Case studies on two distinct IEGDS systems demonstrate the effectiveness and scalability of the proposed method. The results indicate that the proposed approach achieves an outage cost comparable to the ideal solution, and reduces the total outage cost by more than 15% compared to strategies based on stochastic programming and heuristic methods.
Reliable and energy-efficient Bluetooth Low Energy (BLE) communication is crucial for Internet of Things (IoT) applications in dynamic environments. However, the Received Signal Strength Indicator (RSSI) and data throughput in BLE are highly susceptible to environmental variability, which degrades communication performance. In this work, we systematically analyze the interdependence among RSSI, throughput, transmission power (TXP), and the peripheral device's system power consumption under diverse real-world conditions. We observe that adjusting the TXP effectively influences both RSSI and throughput. Building on this observation, we propose a robust closed-loop TXP control framework using Proportional-Integral-Derivative (PID) controllers. Two initial control strategies are investigated: an RSSI-based approach and a throughput-based approach, each exhibiting distinct advantages and limitations. The RSSI-based method provides rapid responsiveness to signal fluctuations but lacks a direct correlation with data throughput, whereas the throughput-based method offers more accurate feedback on effective throughput at the cost of slower response. To address these limitations, a hybrid RSSI-throughput control strategy is developed, combining the responsiveness of RSSI feedback with the accuracy of throughput measurements. Experimental results demonstrate that the proposed hybrid approach maintains data throughput close to the target level with minimal variance, even under rapidly changing environmental conditions.
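As a hedged sketch of the closed-loop idea (illustrative gains, setpoints, and TXP limits; not the paper's tuned controller), a discrete PID acting on a hybrid RSSI/throughput error could adjust TXP as follows:

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def hybrid_error(rssi_dbm, throughput_kbps, rssi_target=-70.0, tput_target=800.0, w=0.5):
    # Blend fast RSSI feedback with slower but more meaningful throughput feedback.
    e_rssi = (rssi_target - rssi_dbm) / 10.0          # normalized RSSI error
    e_tput = (tput_target - throughput_kbps) / 100.0  # normalized throughput error
    return w * e_rssi + (1.0 - w) * e_tput

pid = PID(kp=1.0, ki=0.2, kd=0.05, dt=0.1)
txp_dbm = 0.0
for rssi, tput in [(-80.0, 600.0), (-75.0, 700.0), (-72.0, 780.0)]:  # measurements per control interval
    txp_dbm += pid.step(hybrid_error(rssi, tput))
    txp_dbm = max(-20.0, min(8.0, txp_dbm))  # clamp to the radio's supported TXP range
    print(f"TXP -> {txp_dbm:.1f} dBm")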
Battery Energy Storage Systems (BESSs) are increasingly critical to power-system stability, yet their operation and maintenance remain dominated by reactive, expert-dependent diagnostics. While cell-level inconsistencies provide early warning signals of degradation and safety risks, the lack of scalable and interpretable decision-support frameworks prevents these signals from being effectively translated into operational actions. Here we introduce an inconsistency-driven operation and maintenance paradigm for large-scale BESSs that systematically transforms routine monitoring data into explainable, decision-oriented guidance. The proposed framework integrates multi-dimensional inconsistency evaluation with large language model-based semantic reasoning to bridge the gap between quantitative diagnostics and practical maintenance decisions. Using eight months of field data from an in-service battery system comprising 3,564 cells, we demonstrate how electrical, thermal, and aging-related inconsistencies can be distilled into structured operational records and converted into actionable maintenance insights through a multi-agent framework. The proposed approach enables accurate and explainable responses to real-world operation and maintenance queries, reducing response time and operational cost by over 80% compared with conventional expert-driven practices. These results establish a scalable pathway for intelligent operation and maintenance of battery energy storage systems, with direct implications for reliability, safety, and cost-effective integration of energy storage into modern power systems.
The rapid proliferation of wireless devices makes robust identity authentication essential. Radio Frequency Fingerprinting (RFF) exploits device-specific, hard-to-forge physical-layer impairments for identification, and is promising for IoT and unmanned systems. In practice, however, new devices continuously join deployed systems while per-class training data are limited. Conventional static training and naive replay of stored exemplars are impractical due to growing class cardinality, storage cost, and privacy concerns. We propose an exemplar-free class-incremental learning framework tailored to RFF recognition. Starting from a pretrained feature extractor, we freeze the backbone during incremental stages and train only a classifier together with lightweight Adapter modules that perform small task-specific feature adjustments. For each class, we fit a diagonal Gaussian Mixture Model (GMM) to the backbone features and sample pseudo-features from these fitted distributions to rehearse past classes without storing raw signals. To improve robustness under few-shot conditions, we introduce a time-domain random-masking augmentation and adopt a multi-teacher distillation scheme to compress stage-wise Adapters into a single inference Adapter, trading off accuracy and runtime efficiency. We evaluate the method on large, self-collected ADS-B datasets: the backbone is pretrained on 2,175 classes and incremental experiments are run on a disjoint set of 669 classes with multiple rounds and step sizes. Against several representative baselines, our approach consistently yields higher average accuracy and lower forgetting, while using substantially less storage and avoiding raw-data retention. The proposed pipeline is reproducible and provides a practical, low-storage solution for RFF deployment in resource- and privacy-constrained environments.
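The rehearsal step can be sketched as follows (our own simplified illustration using scikit-learn; the class counts, feature dimension, and single-component diagonal GMMs are assumptions made for brevity):

import numpy as np
from sklearn.mixture import GaussianMixture

def fit_class_gmms(features, labels, n_components=1):
    # Fit one diagonal GMM per class on frozen backbone features.
    gmms = {}
    for c in np.unique(labels):
        gmm = GaussianMixture(n_components=n_components, covariance_type="diag", random_state=0)
        gmm.fit(features[labels == c])
        gmms[c] = gmm
    return gmms

def sample_pseudo_features(gmms, n_per_class=64):
    # Draw pseudo-features to rehearse old classes without storing raw signals.
    xs, ys = [], []
    for c, gmm in gmms.items():
        x, _ = gmm.sample(n_per_class)
        xs.append(x)
        ys.append(np.full(n_per_class, c))
    return np.concatenate(xs), np.concatenate(ys)

# Toy usage: 3 old classes, 128-dim backbone features.
rng = np.random.default_rng(0)
feats = rng.standard_normal((300, 128)) + np.repeat(np.arange(3), 100)[:, None]
labels = np.repeat(np.arange(3), 100)
gmms = fit_class_gmms(feats, labels)
pseudo_x, pseudo_y = sample_pseudo_features(gmms)
print(pseudo_x.shape, pseudo_y.shape)  # (192, 128) (192,)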
Modeling fine-grained speaking styles remains challenging for language-speech representation pre-training, as existing speech-text models are typically trained with coarse captions or task-specific supervision, and scalable fine-grained style annotations are unavailable. We present FCaps, a large-scale dataset with fine-grained free-text style descriptions, encompassing 47k hours of speech and 19M fine-grained captions annotated via a novel end-to-end pipeline that directly grounds detailed captions in audio, thereby avoiding the error propagation caused by LLM-based rewriting in existing cascaded pipelines. Evaluations using LLM-as-a-judge demonstrate that our annotations surpass existing cascaded annotations in terms of correctness, coverage, and naturalness. Building on FCaps, we propose CLSP, a contrastive language-speech pre-trained model that integrates global and fine-grained supervision, enabling unified representations across multiple granularities. Extensive experiments demonstrate that CLSP learns fine-grained and multi-granular speech-text representations that perform reliably across global and fine-grained speech-text retrieval, zero-shot paralinguistic classification, and speech style similarity scoring, with strong alignment to human judgments. All resources will be made publicly available.
This paper proposes a machine learning (ML)-based method for channel prediction in high-mobility orthogonal time frequency space (OTFS) channels. In these scenarios, rapid variations caused by Doppler spread and time-varying multipath propagation lead to fast channel decorrelation, making conventional pilot-based channel estimation methods prone to outdated channel state information (CSI) and excessive overhead. Therefore, reliable channel prediction methods become essential to support robust detection and decoding in OTFS systems. In this paper, we propose the conditional variational autoencoder for channel prediction (CVAE4CP) method, which learns the conditional distribution of OTFS delay-Doppler channel coefficients given physical system and mobility parameters. By incorporating these parameters as conditioning information, the proposed method enables the prediction of future channel coefficients before their actual realization, while accounting for inherent channel uncertainty through a low-dimensional latent representation. The proposed framework is evaluated through extensive simulations under high-mobility conditions. Numerical results demonstrate that CVAE4CP consistently outperforms a competing learning-based baseline in terms of normalized mean squared error (NMSE), particularly at high Doppler frequencies and extended prediction horizons. These results confirm the effectiveness and robustness of the proposed approach for channel prediction in rapidly time-varying OTFS systems.
We develop a structure-aware reinforcement learning (RL) approach for delay- and energy-aware flow allocation in 5G User Plane Functions (UPFs). We consider a dynamic system with $K$ heterogeneous UPFs of varying capacities that handle stochastic arrivals of $M$ flow types, each with distinct rate requirements. We model the system as a Markov decision process (MDP) to capture the stochastic nature of flow arrivals and departures (possibly unknown), as well as the impact of flow allocation in the system. To solve this problem, we propose a post-decision state (PDS) based value iteration algorithm that exploits the underlying structure of the MDP. By separating action-controlled dynamics from exogenous factors, PDS enables faster convergence and efficient adaptive flow allocation, even in the absence of statistical knowledge about exogenous variables. Simulation results demonstrate that the proposed method converges faster and achieves lower long-term cost than standard Q-learning, highlighting the effectiveness of PDS-based RL for resource allocation in wireless networks.
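A minimal tabular sketch of the post-decision-state idea (a toy admission-control example with illustrative costs, not the paper's UPF model): the PDS value function is learned from observed exogenous departures, so the action-controlled part of the transition needs no statistical model:

import numpy as np

n_states, n_actions, gamma, alpha = 10, 2, 0.95, 0.1
V_pds = np.zeros(n_states)  # value of post-decision states
rng = np.random.default_rng(0)

def deterministic_step(s, a):
    # Known, action-controlled part: admitting (a=1) adds one flow, capped at capacity.
    return min(s + a, n_states - 1)

def cost(s, a):
    # Stage cost: delay/holding cost grows with load; admitting a flow earns a reward (negative cost).
    return 0.1 * s - 1.0 * a

def exogenous_step(pds):
    # Unknown, exogenous part: random departures observed from the environment.
    return max(pds - rng.integers(0, 3), 0)

s = 0
for _ in range(20000):
    # Greedy action uses only the known deterministic map and the learned PDS values.
    a = int(np.argmin([cost(s, a) + gamma * V_pds[deterministic_step(s, a)] for a in range(n_actions)]))
    pds = deterministic_step(s, a)
    s_next = exogenous_step(pds)  # observed next state; no arrival/departure statistics required
    target = min(cost(s_next, b) + gamma * V_pds[deterministic_step(s_next, b)] for b in range(n_actions))
    V_pds[pds] = (1 - alpha) * V_pds[pds] + alpha * target
    s = s_next

print(np.round(V_pds, 2))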
Generative joint source-channel coding (GJSCC) has emerged as a new Deep JSCC paradigm for achieving high-fidelity and robust image transmission under extreme wireless channel conditions, such as ultra-low bandwidth and low signal-to-noise ratio. Recent studies commonly adopt diffusion models as generative decoders, but they frequently produce visually realistic results with limited semantic consistency. This limitation stems from a fundamental mismatch between reconstruction-oriented JSCC encoders and generative decoders, as the former lack explicit semantic discriminability and fail to provide reliable conditional cues. In this paper, we propose DiT-JSCC, a novel GJSCC backbone that jointly learns a semantics-prioritized representation encoder and a diffusion transformer (DiT)-based generative decoder; our open-source project aims to promote future research in GJSCC. Specifically, we design a semantics-detail dual-branch encoder that aligns naturally with a coarse-to-fine conditional DiT decoder, prioritizing semantic consistency under extreme channel conditions. Moreover, a training-free adaptive bandwidth allocation strategy inspired by Kolmogorov complexity is introduced to further improve transmission efficiency, thereby redefining the notion of information value in the era of generative decoding. Extensive experiments demonstrate that DiT-JSCC consistently outperforms existing JSCC methods in both semantic consistency and visual quality, particularly in extreme regimes.
We study finite memory belief approximation for partially observable (PO) stochastic optimal control (SOC) problems. While belief states are sufficient for SOC in partially observable Markov decision processes (POMDPs), they are generally infinite-dimensional and impractical. We interpret truncated input-output (IO) histories as inducing a belief approximation and develop a metric-based theory that directly relates information loss to control performance. Using the Wasserstein metric, we derive policy-conditional performance bounds that quantify value degradation induced by finite memory along typical closed-loop trajectories. Our analysis proceeds via a fixed-policy comparison: we evaluate two cost functionals under the same closed-loop execution and isolate the effect of replacing the true belief by its finite memory approximation inside the belief-level cost. For linear quadratic Gaussian (LQG) systems, we provide closed-form belief mismatch evaluation and empirically validate the predicted mechanism, demonstrating that belief mismatch decays approximately exponentially with memory length and that the induced performance mismatch scales accordingly. Together, these results provide a metric-aware characterization of what finite memory belief approximation can and cannot achieve in PO settings.
Electromagnetic Formation Flight is a technology that uses electromagnetic forces and torques to control multiple satellites without conventional fuel-based propulsion. In this paper, the controllability of the system is discussed based on the conservation of the entire system's angular momentum, which constitutes a nonholonomic constraint. This paper designs a new controller for multiple satellites without an additional attitude actuator.
In this paper, we propose a spectral-efficient LoRa (SE-LoRa) modulation scheme with a low-complexity successive interference cancellation (SIC)-based detector. The proposed communication scheme significantly improves the spectral efficiency of LoRa modulation while achieving acceptable error performance compared to conventional LoRa modulation, especially at higher spreading factor (SF) settings. We derive the joint maximum likelihood (ML) detection rule for the SE-LoRa transmission scheme, which turns out to be of high computational complexity. To overcome this issue, and by exploiting the frequency-domain characteristics of the dechirped SE-LoRa signal, we propose a low-complexity SIC-based detector with a computational complexity on the order of conventional LoRa detection. Through computer simulations, we show that the proposed SE-LoRa scheme with the low-complexity SIC-based detector can improve the spectral efficiency of LoRa modulation by up to $445.45\%$, $1011.11\%$, and $1071.88\%$ for SF values of $7$, $9$, and $11$, respectively, while maintaining the error performance within less than $3$ dB of conventional LoRa at a symbol error rate (SER) of $10^{-3}$ under Rician channel conditions.
Accurate aircraft trajectory prediction (TP) in air traffic management systems is confounded by a number of epistemic uncertainties, dominated by uncertain meteorological conditions and operator-specific procedures. Handling this uncertainty necessitates the use of probabilistic, machine-learned models for generating trajectories. However, the trustworthiness of such models is limited if the generated trajectories are not physically plausible. For this reason, we propose a physics-informed approach in which aircraft thrust and airspeed are learned from data and used to condition the existing Base of Aircraft Data (BADA) model, which is physics-based and enforces energy-based constraints on generated trajectories. A set of informative features is identified and used to condition a probabilistic model of aircraft thrust and airspeed, with the proposed scheme demonstrating a 20% improvement in skilfulness across a set of six metrics, compared against a baseline probabilistic model that ignores contextual information such as meteorological conditions.
This paper studies the transferability of altitude-dependent spectrum activity models and measurements across years. We introduce a physics-informed, mean-only stochastic-geometry model of aggregate interference to altitude-binned received power, yielding three interpretable parameters for a given band and campaign: 1) line-of-sight transition slope, 2) transition altitude, and 3) effective activity constant. Analysis of aerial spectrum measurements collected from 2023 to 2025 across multiple sub-6 GHz bands reveals that downlink (DL) and shared-access bands preserve a persistent geometry-driven altitude structure that is stable across years. In contrast, uplink (UL) bands exhibit weak altitude dependence with no identifiable transition, indicating that interference is dominated by activity dynamics rather than propagation geometry. To quantify the practical limits of model reuse, we evaluate a minimal-calibration method in which the transition altitude is fixed from a reference year and the remaining parameters are estimated from only two altitude bins in the target year. The results further indicate that the proposed approach provides accurate predictions for DL and CBRS bands, suggesting the feasibility of low-cost model transfer in stable environments, while highlighting the reduced applicability of mean-field models for UL scenarios.
Wearable devices such as AI glasses are transforming voice assistants into always-available, hands-free collaborators that integrate seamlessly with daily life, but they also introduce challenges such as egocentric audio affected by motion and noise, rapid micro-interactions, and the need to distinguish device-directed speech from background conversations. Existing benchmarks largely overlook these complexities, focusing instead on clean or generic conversational audio. To bridge this gap, we present WearVox, the first benchmark designed to rigorously evaluate voice assistants in realistic wearable scenarios. WearVox comprises 3,842 multi-channel, egocentric audio recordings collected via AI glasses across five diverse tasks: Search-Grounded QA, Closed-Book QA, Side-Talk Rejection, Tool Calling, and Speech Translation, spanning a wide range of indoor and outdoor environments and acoustic conditions. Each recording is accompanied by rich metadata, enabling nuanced analysis of model performance under real-world constraints. We benchmark leading proprietary and open-source speech Large Language Models (SLLMs) and find that most real-time SLLMs achieve accuracies on WearVox ranging from 29% to 59%, with substantial performance degradation on noisy outdoor audio, underscoring the difficulty and realism of the benchmark. Additionally, we conduct a case study with two new SLLMs that perform inference with single-channel and multi-channel audio, demonstrating that multi-channel audio inputs significantly enhance model robustness to environmental noise and improve discrimination between device-directed and background speech. Our results highlight the critical importance of spatial audio cues for context-aware voice assistants and establish WearVox as a comprehensive testbed for advancing wearable voice AI research.
Speech-based machine learning systems are sensitive to noise, complicating reliable deployment in emotion recognition and voice pathology detection. We evaluate the robustness of a hybrid quantum machine learning model, quanvolutional neural networks (QNNs), against classical convolutional neural networks (CNNs) under four acoustic corruptions (Gaussian noise, pitch shift, temporal shift, and speed variation) in a clean-train/corrupted-test regime. Using AVFAD (voice pathology) and TESS (speech emotion), we compare three QNN models (Random, Basic, Strongly) to a simple CNN baseline (CNN-Base), ResNet-18, and VGG-16 using accuracy and corruption metrics (CE, mCE, RCE, RmCE), and analyze architectural factors (circuit complexity or depth, convergence) alongside per-emotion robustness. QNNs generally outperform the CNN-Base under pitch shift, temporal shift, and speed variation (up to 22% lower CE/RCE at severe temporal shift), while the CNN-Base remains more resilient to Gaussian noise. Among quantum circuits, QNN-Basic achieves the best overall robustness on AVFAD, and QNN-Random performs strongest on TESS. Emotion-wise, fear is most robust (80-90% accuracy under severe corruptions), neutral can collapse under strong Gaussian noise (5.5% accuracy), and happy is most vulnerable to pitch, temporal, and speed distortions. QNNs also converge up to six times faster than the CNN-Base. To our knowledge, this is the first systematic study of QNN robustness for speech under common non-adversarial acoustic corruptions, indicating that shallow entangling quantum front-ends can improve noise resilience while sensitivity to additive noise remains a challenge.
Multimodal large language models (MLLMs) show promising performance on medical visual question answering (VQA) and report generation, but these generation and explanation abilities do not reliably transfer to disease-specific classification. We evaluated MLLM architectures on knee osteoarthritis (OA) radiograph classification, which remains underrepresented in existing medical MLLM benchmarks, even though knee OA affects an estimated 300 to 400 million people worldwide. Through systematic ablation studies manipulating the vision encoder, the connector, and the large language model (LLM) across diverse training strategies, we measured each component's contribution to diagnostic accuracy. In our classification task, a trained vision encoder alone could outperform full MLLM pipelines in classification accuracy, and fine-tuning the LLM provided no meaningful improvement over prompt-based guidance. Moreover, LoRA fine-tuning on a small, class-balanced dataset (500 images) gave better results than training on a much larger but class-imbalanced set (5,778 images), indicating that data balance and quality can matter more than raw scale for this task. These findings suggest that for domain-specific medical classification, LLMs are more effective as interpreters and report generators than as primary classifiers. Therefore, the MLLM architecture appears less suitable for medical image diagnostic classification tasks that demand high certainty. We recommend prioritizing vision encoder optimization and careful dataset curation when developing clinically applicable systems.
The rapid advancement of speech synthesis technologies, including text-to-speech (TTS) and voice conversion (VC), has intensified security and privacy concerns related to voice cloning. Recent defenses attempt to prevent unauthorized cloning by embedding protective perturbations into speech to obscure speaker identity while maintaining intelligibility. However, adversaries can apply advanced purification techniques to remove these perturbations, recover authentic acoustic characteristics, and regenerate cloneable voices. Despite the growing realism of such attacks, the robustness of existing defenses under adaptive purification remains insufficiently studied. Most existing purification methods are designed to counter adversarial noise in automatic speech recognition (ASR) systems rather than speaker verification or voice cloning pipelines. As a result, they fail to suppress the fine-grained acoustic cues that define speaker identity and are often ineffective against speaker verification attacks (SVA). To address these limitations, we propose Diffusion-Bridge (VocalBridge), a purification framework that learns a latent mapping from perturbed to clean speech in the EnCodec latent space. Using a time-conditioned 1D U-Net with a cosine noise schedule, the model enables efficient, transcript-free purification while preserving speaker-discriminative structure. We further introduce a Whisper-guided phoneme variant that incorporates lightweight temporal guidance without requiring ground-truth transcripts. Experimental results show that our approach consistently outperforms existing purification methods in recovering cloneable voices from protected speech. Our findings demonstrate the fragility of current perturbation-based defenses and highlight the need for more robust protection mechanisms against evolving voice-cloning and speaker verification threats.
Running Automatic Speech Recognition (ASR) models on memory-constrained edge devices requires efficient compression. While layer-wise post-training quantization is effective, it suffers from error accumulation, especially in encoder-decoder architectures. Existing solutions like Quantization Error Propagation (QEP) are suboptimal for ASR due to the model's heterogeneity, processing acoustic features in the encoder while generating text in the decoder. To address this, we propose Fine-grained Alpha for Dynamic Quantization Error Propagation (FADE), which adaptively controls the trade-off between cross-layer error correction and local quantization. Experiments show that FADE significantly improves stability by reducing performance variance across runs, while simultaneously surpassing baselines in mean WER.
Purpose: Annotation of medical breast images is an essential step toward better diagnostics but is a time-consuming task. This research focuses on different sample selection strategies within deep active learning for Breast Region Segmentation (BRS) to reduce the computational cost of training and make effective use of resources. Methods: The Stavanger breast MRI dataset containing 59 patients was used in this study, with FCN-ResNet50 adopted as a sustainable deep learning (DL) model. A novel sample selection approach based on Breast Anatomy Geometry (BAG) analysis was introduced to group data with similar informative features for DL. Patient positioning and breast size were considered the key selection criteria in this process. Four selection strategies, namely Random Selection, Nearest Point, Breast Size, and a hybrid of all three, were evaluated using an active learning framework. Four training data proportions of 10%, 20%, 30%, and 40% were used for model training, with the remaining data reserved for testing. Model performance was assessed using Dice score, Intersection over Union, precision, and recall, along with 5-fold cross-validation to enhance generalizability. Results: Increasing the training data proportion from 10% to 40% improved segmentation performance for nearly all strategies, except for Random Selection. The Nearest Point strategy consistently achieved the lowest carbon footprint at the 30% and 40% data proportions. Overall, combining the Nearest Point strategy with 30% of the training data provided the best balance between segmentation performance, efficiency, and environmental sustainability. Keywords: Deep Active Learning, Breast Region Segmentation, Human-centered analysis
The rapid growth of dermatological imaging and mobile diagnostic tools calls for systems that not only demonstrate empirical performance but also provide strong theoretical guarantees. Deep learning models have shown high predictive accuracy; however, they are often criticized for lacking well-calibrated uncertainty estimates, without which these models are hardly deployable in a clinical setting. To this end, we present the Conformal Bayesian Dermatological Classifier (CBDC), a well-founded framework that combines Statistical Learning Theory, Topological Data Analysis (TDA), and Bayesian Conformal Inference. CBDC offers distribution-dependent generalization bounds that reflect dermatological variability, proves a topological stability theorem that guarantees the invariance of convolutional neural network embeddings under photometric and morphological perturbations, and provides finite conformal coverage guarantees for trustworthy uncertainty quantification. Through exhaustive experiments on the HAM10000, PH2, and ISIC 2020 datasets, we show that CBDC not only attains high classification accuracy but also generates calibrated predictions that are interpretable from a clinical perspective. This research constitutes a theoretical and practical leap for deep dermatological diagnostics, thereby bridging machine learning theory and clinical applicability.
This paper deals with gradient-based extremum seeking control (ESC) with actuation dynamics governed by distributed wave partial differential equations (PDEs). To achieve the control objective of real-time optimization for this class of infinite-dimensional systems, we first solve the trajectory generation problem to re-design the additive perturbation signal of the ESC system. Then, we develop a boundary control law through the backstepping method to compensate for the wave PDE with distributed effects, which ensures the exponential stability of the average closed-loop system by means of a Lyapunov-based analysis. Finally, by employing the averaging theory for infinite-dimensional systems, we prove that the closed-loop trajectories converge to a small neighborhood of the optimal point. Numerical simulations are presented to illustrate the effectiveness of the proposed method.
Optimal power flow (OPF) is one of the fundamental tasks for power system operations. While machine learning (ML) approaches such as deep neural networks (DNNs) have been widely studied to enhance OPF solution speed and performance, their practical deployment faces two critical scaling questions: What is the minimum training data volume required for reliable results? How should ML model complexity balance accuracy against real-time computational limits? Existing studies evaluate discrete scenarios without quantifying these scaling relationships, leading to trial-and-error-based ML development in real-world applications. This work presents the first systematic scaling study for ML-based OPF across two dimensions: data scale (0.1K-40K training samples) and compute scale (multiple NN architectures with varying FLOPs). Our results reveal consistent power-law relationships, for both DNNs and physics-informed NNs (PINNs), between each resource dimension and three core performance metrics: prediction error (MAE), constraint violations, and speed. We find that for ACOPF, the accuracy metric scales with dataset size and training compute. These scaling laws enable predictable and principled ML pipeline design for OPF. We further identify the divergence between prediction accuracy and constraint feasibility and characterize the compute-optimal frontier. This work provides quantitative guidance for ML-OPF design and deployment.
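As a sketch of what such a scaling law looks like (generic constants and exponents; the fitted values are reported in the paper):

$$\mathrm{MAE}(N) \approx a\,N^{-b}, \qquad \mathrm{MAE}(C) \approx c\,C^{-d},$$

where $N$ is the number of training samples, $C$ the training compute (e.g., FLOPs), and $a, b, c, d > 0$ are problem- and architecture-specific constants; on log-log axes such relationships appear as straight lines, which is what makes data and compute budgeting predictable.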
Accurate radio map (RM) construction is essential to enabling environment-aware and adaptive wireless communication. However, in future 6G scenarios characterized by high-speed network entities and fast-changing environments, meeting real-time requirements is very challenging. Although generative diffusion models (DMs) can achieve state-of-the-art accuracy with second-level delay, their iterative nature leads to prohibitive inference latency in delay-sensitive scenarios. In this paper, we uncover a key structural property of diffusion processes, namely that latent midpoints remain highly consistent across semantically similar scenes, and propose RadioDiff-Flux, a novel two-stage latent diffusion framework that decouples static environmental modeling from dynamic refinement, enabling the reuse of precomputed midpoints to bypass redundant denoising. In particular, the first stage generates a coarse latent representation using only static scene features, which can be cached and shared across similar scenarios. The second stage adapts this representation to dynamic conditions and transmitter locations using a pre-trained model, thereby avoiding repeated early-stage computation. The proposed RadioDiff-Flux significantly reduces inference time while preserving fidelity. Experimental results show that RadioDiff-Flux achieves up to 50x acceleration with less than 0.15% accuracy loss, demonstrating its practical utility for fast, scalable RM generation in future 6G networks.
The first XACLE Challenge (x-to-audio alignment challenge) addresses the critical need for automatic evaluation metrics that correlate with human perception of audio-text semantic alignment. In this paper, we describe the "Takano_UTokyo_03" system submitted to the XACLE Challenge. Our approach leverages a CLAPScore-based architecture integrated with a novel training method called Standardized Preference Optimization (SPO). SPO standardizes the raw alignment scores provided by each listener, enabling the model to learn relative preferences and mitigating the impact of individual scoring biases. Additionally, we employ listener screening to exclude listeners with inconsistent ratings. Experimental evaluations demonstrate that both SPO and listener screening effectively improve the correlation with human judgment. Our system achieved 6th place in the challenge with a Spearman's rank correlation coefficient (SRCC) of 0.6142, demonstrating competitive performance with only a marginal gap to the top-ranked systems. The code is available at this https URL.
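A plausible reading of the SPO standardization step, as described above, is a per-listener z-scoring of raw ratings before preference learning; the sketch below shows that step only, with illustrative column names (the paper's exact preprocessing may differ).

```python
import pandas as pd

def standardize_per_listener(df, score_col="raw_score", listener_col="listener_id"):
    """Z-score each listener's raw alignment ratings so the model learns
    relative preferences rather than absolute (listener-biased) scales."""
    grouped = df.groupby(listener_col)[score_col]
    mean = grouped.transform("mean")
    std = grouped.transform("std").replace(0, 1.0).fillna(1.0)  # guard constant/single raters
    out = df.copy()
    out["spo_target"] = (out[score_col] - mean) / std
    return out

# ratings = pd.DataFrame({"listener_id": [...], "raw_score": [...], "audio": [...], "text": [...]})
# ratings = standardize_per_listener(ratings)   # 'spo_target' becomes the training target
```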
Extending the input modality of Large Language Models~(LLMs) to the audio domain is essential for achieving comprehensive multimodal perception. However, it is well known that acoustic information is intrinsically \textit{heterogeneous}, entangling attributes such as speech, music, and environmental context. Existing research is largely limited to a dense, parameter-shared adapter for modeling these diverse patterns, which induces \textit{gradient conflict} during optimization, as the parameter updates required for distinct attributes contradict each other. To address this limitation, we introduce the \textit{\textbf{MoE-Adapter}}, a sparse Mixture-of-Experts~(MoE) architecture designed to decouple acoustic information. Specifically, it employs a dynamic gating mechanism that routes audio tokens to specialized experts capturing complementary feature subspaces while retaining shared experts for global context, thereby mitigating gradient conflicts and enabling fine-grained feature learning. Comprehensive experiments show that the MoE-Adapter achieves superior performance on both audio semantic and paralinguistic tasks, consistently outperforming dense linear baselines at comparable computational cost. We will release the related code and models to facilitate future research.
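The following PyTorch sketch illustrates the routing idea described above: a top-k gate dispatches each audio token to specialized experts while a shared expert preserves global context. Layer sizes, expert count, and k are assumptions for illustration, not the released configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEAdapter(nn.Module):
    """Sketch of a sparse MoE adapter: a router sends each audio token to its
    top-k specialized experts and always adds a shared expert for global context."""
    def __init__(self, d_audio, d_llm, n_experts=8, k=2):
        super().__init__()
        self.router = nn.Linear(d_audio, n_experts)
        self.experts = nn.ModuleList([nn.Linear(d_audio, d_llm) for _ in range(n_experts)])
        self.shared = nn.Linear(d_audio, d_llm)
        self.k = k

    def forward(self, x):                                   # x: (batch, tokens, d_audio)
        gate = F.softmax(self.router(x), dim=-1)            # routing probabilities
        topv, topi = gate.topk(self.k, dim=-1)              # keep k experts per token
        topv = topv / topv.sum(dim=-1, keepdim=True)        # renormalize kept weights
        out = self.shared(x)                                # shared (dense) pathway
        for slot in range(self.k):
            idx = topi[..., slot]                           # (batch, tokens)
            w = topv[..., slot].unsqueeze(-1)
            # Dispatch tokens to their selected expert (loop form kept for clarity).
            for e, expert in enumerate(self.experts):
                mask = (idx == e).unsqueeze(-1)
                out = out + mask * w * expert(x)
        return out
```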
We propose a learning-based trajectory tracking controller for autonomous robotic platforms whose motion can be described kinematically on $\mathrm{SE}(3)$. The controller is formulated in the dual quaternion framework and operates at the velocity level, assuming direct command of angular and linear velocities, as is standard in many aerial vehicles and omnidirectional mobile robots. Gaussian Process (GP) regression is integrated into a geometric feedback law to learn and compensate online for unknown, state-dependent disturbances and modeling imperfections affecting both attitude and position, while preserving the algebraic structure and coupling properties inherent to rigid-body motion. The proposed approach does not rely on explicit parametric models of the unknown effects, making it well-suited for robotic systems subject to sensor-induced disturbances, unmodeled actuation couplings, and environmental uncertainties. A Lyapunov-based analysis establishes probabilistic ultimate boundedness of the pose tracking error under bounded GP uncertainty, providing formal stability guarantees for the learning-based controller. Simulation results demonstrate accurate and smooth trajectory tracking in the presence of realistic, localized disturbances, including correlated rotational and translational effects arising from magnetometer perturbations. These results illustrate the potential of combining geometric modeling and probabilistic learning to achieve robust, data-efficient pose control for autonomous robotic systems.
Emotion is a central dimension of spoken communication, yet we still lack a mechanistic account of how modern large audio-language models (LALMs) encode it internally. We present the first neuron-level interpretability study of emotion-sensitive neurons (ESNs) in LALMs and provide causal evidence that such units exist in Qwen2.5-Omni, Kimi-Audio, and Audio Flamingo 3. Across these three widely used open-source models, we compare frequency-, entropy-, magnitude-, and contrast-based neuron selectors on multiple emotion recognition benchmarks. Using inference-time interventions, we reveal a consistent emotion-specific signature: ablating neurons selected for a given emotion disproportionately degrades recognition of that emotion while largely preserving other classes, whereas gain-based amplification steers predictions toward the target emotion. These effects arise with modest identification data and scale systematically with intervention strength. We further observe that ESNs exhibit non-uniform layer-wise clustering with partial cross-dataset transfer. Taken together, our results offer a causal, neuron-level account of emotion decisions in LALMs and highlight targeted neuron interventions as an actionable handle for controllable affective behaviors.
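An inference-time ablation of selected neurons, as used in the interventions described above, can be implemented with a forward hook that zeroes chosen hidden units. The sketch below is generic: the layer name, neuron indices, and the assumption that the hooked module returns a single tensor are illustrative, not taken from the paper.

```python
import torch

def ablate_neurons(model, layer_name, neuron_ids):
    """Register a forward hook that zeroes selected hidden units of one layer
    at inference time (assumes the hooked module returns a tensor)."""
    layer = dict(model.named_modules())[layer_name]

    def hook(_module, _inputs, output):
        output = output.clone()
        output[..., neuron_ids] = 0.0       # silence the selected neurons
        return output

    return layer.register_forward_hook(hook)

# handle = ablate_neurons(lalm, "model.layers.20.mlp", emotion_neuron_ids)  # hypothetical names
# ... run the emotion-recognition evaluation ...
# handle.remove()                           # restore the original behavior
```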
Indoor localization systems face a fundamental trade-off between energy efficiency and responsiveness, which is especially important for emerging use cases such as mobile robots operating in GPS-denied environments. Traditional real-time locating systems (RTLS) either require continuously powered infrastructure, limiting their scalability, or suffer from poor responsiveness. This work presents Eco-WakeLoc, designed to achieve centimeter-level ultra-wideband (UWB) localization while remaining energy-neutral by combining ultra-low-power wake-up radios (WuRs) with solar energy harvesting. By activating anchor nodes only on demand, the proposed system eliminates constant energy consumption while achieving centimeter-level positioning accuracy. To reduce coordination overhead and improve scalability, Eco-WakeLoc employs cooperative localization in which active tags initiate ranging exchanges (trilateration), while passive tags opportunistically reuse these messages for time-difference-of-arrival (TDOA) positioning. An additive-increase/multiplicative-decrease (AIMD) energy-aware scheduler adapts localization rates according to the harvested energy, thereby maximizing the overall performance of the sensor network while ensuring long-term energy neutrality. The measured energy consumption is only 3.22 mJ per localization for active tags, 951 uJ for passive tags, and 353 uJ for anchors. Real-world deployment on a quadruped robot with nine anchors confirms the practical feasibility, achieving an average accuracy of 43 cm in dynamic indoor environments. Year-long simulations show that tags achieve an average of 2031 localizations per day while retaining over 7% battery capacity after one year, demonstrating that the RTLS achieves sustained energy-neutral operation. Eco-WakeLoc demonstrates that high-accuracy indoor localization can be achieved at scale without continuous infrastructure operation, combining energy neutrality, cooperative positioning, and adaptive scheduling.
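As a sketch of the AIMD scheduling idea above: the localization rate is increased additively while the harvested-energy budget is met and cut multiplicatively when it is not. All constants below are illustrative rather than the deployed parameters.

```python
def aimd_update(rate_hz, energy_ok, add=0.2, mult=0.5,
                rate_min=0.1, rate_max=10.0):
    """One AIMD step for the localization rate: additive increase while the
    harvested-energy budget is met, multiplicative back-off otherwise.
    All constants are illustrative, not the deployed configuration."""
    rate_hz = rate_hz + add if energy_ok else rate_hz * mult
    return min(max(rate_hz, rate_min), rate_max)

# rate = 1.0
# for epoch in telemetry:                       # e.g., once per scheduling epoch
#     rate = aimd_update(rate, epoch.harvested_mj >= epoch.spent_mj)
```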
Foundation vision, audio, and language models enable zero-shot performance on downstream tasks via their latent representations. Recently, unsupervised learning of data group structure with deep learning methods has gained popularity. TURTLE, a state-of-the-art deep clustering algorithm, uncovers data labelings without supervision by alternating label and hyperplane updates that maximize the hyperplane margin, in a fashion similar to support vector machines (SVMs). However, TURTLE assumes clusters are balanced; when data is imbalanced, it yields non-ideal hyperplanes that cause higher clustering error. We propose PET-TURTLE, which generalizes the cost function to handle imbalanced data distributions via a power-law prior. Additionally, by introducing sparse logits in the labeling process, PET-TURTLE optimizes over a simpler search space, which in turn improves accuracy for balanced datasets. Experiments on synthetic and real data show that PET-TURTLE improves accuracy for imbalanced sources, prevents over-prediction of minority clusters, and enhances overall clustering.
We provide new insights into an open problem recently posed by Yuan-Sun [ISIT 2025], concerning the minimum individual key rate required in the vector linear secure aggregation problem. Consider a distributed system with $K$ users, where each user $k\in [K]$ holds a data stream $W_k$ and an individual key $Z_k$. A server aims to compute a linear function $\mathbf{F}[W_1;\ldots;W_K]$ without learning any information about another linear function $\mathbf{G}[W_1;\ldots;W_K]$, where $[W_1;\ldots;W_K]$ denotes the row stack of $W_1,\ldots,W_K$. The open problem is to determine the minimum required length of $Z_k$, denoted as $R_k$, $k\in [K]$. In this paper, we characterize a new achievable region for the rate tuple $(R_1,\ldots,R_K)$. The region is polyhedral, with vertices characterized by a binary rate assignment $(R_1,\ldots,R_K) = (\mathbf{1}(1 \in \mathcal{I}),\ldots,\mathbf{1}(K\in \mathcal{I}))$, where $\mathcal{I}\subseteq [K]$ satisfies the \textit{rank-increment condition}: $\mathrm{rank}\left(\bigl[\mathbf{F}_{\mathcal{I}};\mathbf{G}_{\mathcal{I}}\bigr]\right) =\mathrm{rank}\bigl(\mathbf{F}_{\mathcal{I}}\bigr)+N$. Here, $\mathbf{F}_\mathcal{I}$ and $\mathbf{G}_\mathcal{I}$ are the submatrices formed by the columns indexed by $\mathcal{I}$. Our results uncover the novel fact that it is not necessary for every user to hold a key, thereby strictly enlarging the best-known achievable region in the literature. Furthermore, we provide a converse analysis to demonstrate its optimality when minimizing the number of users that hold keys.
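To make the rank-increment condition concrete, the following numpy sketch enumerates the subsets $\mathcal{I}$ that satisfy it for given matrices $\mathbf{F}$ and $\mathbf{G}$; it is a brute-force illustration of the condition only, not the paper's achievability or converse construction.

```python
import numpy as np
from itertools import combinations

def keyed_user_sets(F, G):
    """Enumerate nonempty subsets I of [K] satisfying the rank-increment condition
    rank([F_I; G_I]) = rank(F_I) + N, where N is the number of rows of G and
    F_I, G_I are the column submatrices indexed by I. Each such I corresponds to
    a vertex with R_k = 1 for k in I and R_k = 0 otherwise."""
    K, N = F.shape[1], G.shape[0]
    hits = []
    for r in range(1, K + 1):
        for I in combinations(range(K), r):
            FI, GI = F[:, I], G[:, I]
            if np.linalg.matrix_rank(np.vstack([FI, GI])) == \
               np.linalg.matrix_rank(FI) + N:
                hits.append(I)
    return hits
```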
Many important problems in science and engineering involve inferring a signal from noisy and/or incomplete observations, where the observation process is known. Historically, this problem has been tackled using hand-crafted regularization (e.g., sparsity, total-variation) to obtain meaningful estimates. Recent data-driven methods often offer better solutions by directly learning a solver from examples of ground-truth signals and associated observations. However, in many real-world applications, obtaining ground-truth references for training is expensive or impossible. Self-supervised learning methods offer a promising alternative by learning a solver from measurement data alone, bypassing the need for ground-truth references. This manuscript provides a comprehensive summary of different self-supervised methods for inverse problems, with a special emphasis on their theoretical underpinnings, and presents practical applications in imaging inverse problems.
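As one concrete example of the family of methods surveyed above, the sketch below implements a measurement-splitting self-supervised loss for a linear forward operator: the network reconstructs from a random subset of measurements and is penalized on the held-out subset, so no ground-truth signal is ever needed. The operator shape and the zero-filled backprojection input are assumptions for illustration, not a prescription from the manuscript.

```python
import torch

def measurement_splitting_loss(net, y, A, keep_frac=0.8):
    """Classical measurement-splitting self-supervised loss for y = A x + n.

    net: maps a crude backprojection (batch, n) to an image estimate (batch, n)
    y:   (batch, m) noisy measurements
    A:   (m, n) known linear forward operator
    """
    m = y.shape[-1]
    keep = torch.rand(m, device=y.device) < keep_frac      # random measurement split
    A_in, y_in = A[keep], y[..., keep]
    A_out, y_out = A[~keep], y[..., ~keep]
    x_hat = net(y_in @ A_in)                                # input: A_in^T y_in backprojection
    return torch.mean((x_hat @ A_out.T - y_out) ** 2)       # penalize held-out measurements
```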
Artificial muscles are essential for compliant musculoskeletal robotics but complicate control due to nonlinear multiphysics dynamics. Hydraulically amplified electrostatic (HASEL) actuators, a class of soft artificial muscles, offer high performance but exhibit memory effects and hysteresis. Here we present a data-driven reduction and control strategy grounded in spectral submanifold (SSM) theory. In the adiabatic regime, where inputs vary slowly relative to intrinsic transients, trajectories rapidly converge to a low-dimensional slow manifold. We learn an explicit input-to-output map on this manifold from forced-response trajectories alone, avoiding decay experiments that can trigger hysteresis. We deploy the SSM-based model for real-time control of an antagonistic HASEL-clutch joint. This approach yields a substantial reduction in tracking error compared to feedback-only and feedforward-only baselines under identical settings. This record-and-control workflow enables rapid characterization and high-performance control of soft muscles and muscle-driven joints without detailed physics-based modeling.
We present TEyeD, the world's largest unified public dataset of eye images taken with head-mounted devices. TEyeD was acquired with seven different head-mounted eye trackers, two of which were integrated into virtual reality (VR) or augmented reality (AR) devices. The images in TEyeD were obtained from various tasks, including car rides, simulator rides, outdoor sports activities, and daily indoor activities. The dataset includes 2D and 3D landmarks, semantic segmentation, 3D eyeball annotations, gaze vectors, and eye movement types for all images. Landmarks and semantic segmentation are provided for the pupil, iris, and eyelids. Video lengths vary from a few minutes to several hours. With more than 20 million carefully annotated images, TEyeD provides a unique, coherent resource and a valuable foundation for advancing research in computer vision, eye tracking, and gaze estimation in modern VR and AR applications. Download: this https URL Alternative Download: this https URL
This paper addresses the design of safety certificates for stochastic systems, with a focus on ensuring long-term safety through fast real-time control. In stochastic environments, set-invariance-based methods that restrict the probability of risk events in infinitesimal time intervals may still exhibit significant long-term risks due to cumulative uncertainties, whereas reachability-based approaches that account for the long-term future may require prohibitive computation in real-time decision making. To overcome this stringent trade-off between long-term safety and computation, we first introduce a novel technique termed 'probabilistic invariance', which characterizes invariance conditions for the probability of interest. When the target probability is defined over long-term trajectories, this technique can be used to design myopic conditions and controllers with assured long-term safe probability. We then integrate this technique into safe control and learning. The proposed control methods efficiently assure long-term safety using neural networks or model predictive controllers with short outlook horizons, and the proposed learning methods can be used to guarantee long-term safety during and after training. Finally, we demonstrate the performance of the proposed techniques in numerical simulations.
We consider a robust and self-reliant (or "egoistic") variation of the rigid body localization (RBL) problem, in which a primary rigid body seeks to estimate the pose (i.e., location and orientation) of another rigid body (or "target") relative to its own, without the assistance of external infrastructure, without prior knowledge of the shape of the target, and taking into account the possibility that the available observations are incomplete. Three complementary contributions are offered for this scenario. The first is a method to estimate the translation vector between the center points of both rigid bodies, which, unlike existing techniques, does not require that both objects have the same shape or even the same number of landmark points. This technique is shown to significantly outperform the state-of-the-art (SotA) under complete information, but to be sensitive to data erasures, even when enhanced by matrix completion methods. The second contribution, designed for improved performance in the presence of incomplete information, provides a robust alternative to the first, at the expense of a slight relative loss under complete information. Finally, the third contribution is a scheme for estimating the rotation matrix describing the relative orientation of the target rigid body with respect to the primary. Comparisons of the proposed schemes against SotA techniques demonstrate the advantage of the contributed methods in terms of root mean square error (RMSE) performance under both complete and incomplete information.
Base station (BS) deployment remains a critical task as wireless communication generations succeed one another and data rate demands increase, while electromagnetic field (EMF) exposure is often overlooked despite its potential health implications. This paper therefore proposes a workflow that adjusts BS deployment and radiated power in a 3D urban scenario to jointly account for EMF exposure and coverage. To this end, a novel least-time shoot-and-bounce ray (SBR) ray-launching (RL) tool is first developed to improve computational efficiency and simultaneously enhance diffraction modeling for accurate EMF exposure calculation, validated with real-world measurements. To efficiently extend the computation across the target urban area, an adaptive grid refinement (AGR) algorithm is designed based on the spatial stability of the effective channel while accounting for BS beamforming, enabling global estimation of EMF exposure and signal coverage. Subsequently, to better represent real-world network behavior, the actual maximum transmit power, intercell interference, and channel state information imperfections are incorporated on the BS side, while mobility over the EMF exposure averaging interval is captured on the user equipment side. Building on these components, the coverage-guaranteed EMF exposure minimization problem is formulated in a realistic and accurate manner and solved by a geometry-aware algorithm adapted to deterministic channel models, yielding the optimal BS deployment and power configuration. Compared with a baseline that relies on an empirical channel model, the proposed workflow delivers more reliable estimation of EMF exposure and provides practical guidance for BS construction and operation.
Self-supervised automatic speech recognition (SSL-ASR) is an ASR approach that uses speech encoders pretrained on large amounts of unlabeled audio (e.g., wav2vec2.0 or HuBERT) and then fine-tunes them with limited labeled data to perform transcription. Decoding is usually performed with a CTC decoder, whose hypotheses are scored and refined using an external language model (LM), typically an n-gram or neural LM, which guides beam search to produce the final transcription. Using Large Language Models (LLMs) as external LMs remains a challenge, as their word probabilities are overly confident. The proposed method integrates an LLM with an SSL acoustic model by using the LLM's decoding mechanism to generate a set of candidate next tokens. For each candidate, the SSL model provides an acoustic score by aligning it to the input acoustics of the SSL model. A combined acoustic and LLM score is then calculated based on decomposing the MAP estimator of words given the acoustic signal. The tokens with the highest combined scores are maintained in a beam, which is then used to proceed to the next decoding step. We illustrate the effectiveness of our method through a comprehensive comparison with the current state-of-the-art LLM-based decoding, post-processing, and error-correcting methods across multiple datasets. Our approach proves particularly effective when processing challenging inputs such as complex speech sentences, acronyms, and domain-specific vocabulary.
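A minimal sketch of the score combination described above, assuming a simple interpolation weight between the acoustic and LLM log-probabilities; the weight, beam size, and data structure are illustrative assumptions, not the paper's exact formulation.

```python
def rescore_candidates(candidates, lam=0.6, beam=5):
    """Beam-step scoring in the spirit of the MAP decomposition
    log P(W|X) ~ lam * log P_acoustic(X|W) + (1 - lam) * log P_LLM(W).

    candidates: list of (token, acoustic_logp, llm_logp) triples for the next step.
    Returns the top `beam` candidates by combined score.
    """
    scored = [(tok, lam * ac + (1.0 - lam) * lm) for tok, ac, lm in candidates]
    scored.sort(key=lambda item: item[1], reverse=True)
    return scored[:beam]

# next_beam = rescore_candidates([("HELLO", -2.1, -0.4), ("HALO", -3.7, -2.9)])
```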
Early detection of lung cancer is critical to improving survival outcomes. We present a deep learning framework for automated lung cancer screening from chest computed tomography (CT) images with integrated explainability. Using the IQ-OTH/NCCD dataset (1,197 scans across Normal, Benign, and Malignant classes), we evaluate a custom convolutional neural network (CNN) and three fine-tuned transfer learning backbones: DenseNet121, ResNet152, and VGG19. Models are trained with cost-sensitive learning to mitigate class imbalance and evaluated via accuracy, precision, recall, F1-score, and ROC-AUC. While ResNet152 achieved the highest accuracy (97.3%), DenseNet121 provided the best overall balance in precision, recall, and F1 (up to 92%, 90%, 91%, respectively). We further apply Shapley Additive Explanations (SHAP) to visualize evidence contributing to predictions, improving clinical transparency. Results indicate that CNN-based approaches augmented with explainability can provide fast, accurate, and interpretable support for lung cancer screening, particularly in resource-limited settings.
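A minimal sketch of the cost-sensitive learning step, using inverse-frequency class weights in a standard cross-entropy loss; the class counts shown are placeholders for illustration, not the actual IQ-OTH/NCCD split statistics.

```python
import torch
import torch.nn as nn

# Placeholder class counts for the three classes (Normal, Benign, Malignant);
# the real counts come from the dataset split used for training.
counts = torch.tensor([400.0, 120.0, 560.0])

# Inverse-frequency weights, normalized so the average weight is 1,
# implement cost-sensitive learning for the imbalanced classes.
weights = counts.sum() / (len(counts) * counts)
criterion = nn.CrossEntropyLoss(weight=weights)

# loss = criterion(model(ct_batch), labels)   # used inside the training loop
```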
Polarimetric phased arrays (PPAs) enhance radar target detection and anti-jamming capabilities, but their conventional dual transmit/receive (T/R) channel architecture leads to high cost and system complexity. To address these limitations, this paper proposes a polarization-coding reconfigurable phased array (PCRPA) and associated pattern synthesis techniques, which reduce the channel count while preserving key performance. In the PCRPA, each antenna element connects to a single T/R channel and is equipped with a two-level RF switch, enabling real-time control of its polarization state and subarray grouping. By optimizing both the element polarization codes and the excitation weights, the array can synthesize arbitrarily polarized and dual-polarized beams. Simulation results show that the proposed approach achieves suppressed cross-polarization and comparable sidelobe levels compared to conventional PPAs across a wide scan range, with performance improvements being more pronounced in larger arrays. The inherent channel reduction does, however, incur a trade-off in terms of radiated power and directivity. Experimental validation using an $8\times 8$ X-band array antenna confirms the feasibility and effectiveness of the proposed system. The PCRPA architecture and the accompanying synthesis methods offer a cost-effective solution for large-scale PPA systems, maintaining sidelobe and polarization control with significantly reduced hardware complexity.
This paper proposes a novel orthogonal-by-construction parametrization for augmenting physics-based input-output models with an additive learning component. The parametrization allows joint optimization of the parameters of the physics-based model and the learning component. Unlike the commonly applied additive (parallel) augmentation structure, the proposed formulation eliminates overlap in the representation of the system dynamics, thereby preserving the uniqueness of the estimated physical parameters and ultimately enhancing model interpretability. Through theoretical analysis, we show that, under mild conditions, the method is statistically consistent and guarantees recovery of the true physical parameters. By further analyzing the asymptotic covariance matrix of the identified parameters, we also prove that the proposed structure provides a clear separation between the physics-based and learning components. The effectiveness of the proposed approach is demonstrated through simulation studies, showing accurate reproduction of the data-generating dynamics without sacrificing consistent estimation of the physical parameters.
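To illustrate the orthogonality-by-construction principle in the simplest (linear-in-parameters) setting, the sketch below projects a learned basis onto the orthogonal complement of the physics regressors before a joint least-squares fit, so the two components cannot overlap in explaining the output. This is an illustration of the idea under simplifying assumptions, not the paper's parametrization.

```python
import numpy as np

def orthogonal_residual_fit(Phi, Psi, y):
    """Fit y with physics regressors Phi and a learned basis Psi, after removing
    from Psi everything already explainable by span(Phi).

    Phi: (n, p) physics regressor matrix
    Psi: (n, q) learned/black-box regressor matrix
    y:   (n,)   output samples
    Returns (physical parameters, residual-component parameters).
    """
    Q, _ = np.linalg.qr(Phi)                       # orthonormal basis of span(Phi)
    Psi_perp = Psi - Q @ (Q.T @ Psi)               # remove overlap with the physics part
    theta, *_ = np.linalg.lstsq(np.hstack([Phi, Psi_perp]), y, rcond=None)
    n_phys = Phi.shape[1]
    return theta[:n_phys], theta[n_phys:]
```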
Accurate timing and synchronization, typically enabled by GPS, are essential for modern wireless communication systems. However, many emerging applications must operate in GPS-denied environments where signals are unreliable or disrupted, resulting in oscillator drift and carrier frequency impairments. To address these challenges, we present BenchLink, a System-on-Chip (SoC)-based benchmark for resilient communication links that functions without GPS and supports adaptive pilot density and modulation. Unlike traditional General Purpose Processor (GPP)-based software-defined radios (e.g. USRPs), the SoC-based design allows for more precise latency control. We implement and evaluate BenchLink on Zynq UltraScale+ MPSoCs, and demonstrate its effectiveness in both ground and aerial environments. A comprehensive dataset has also been collected under various conditions. We will make both the SoC-based link design and dataset available to the wireless community. BenchLink is expected to facilitate future research on data-driven link adaptation, resilient synchronization in GPS-denied scenarios, and emerging applications that require precise latency control, such as integrated radar sensing and communication.
Smartphone-based tele-dermatology assumes that colorimetric calibration ensures clinical reliability, yet this remains untested for underrepresented skin phototypes. We investigated whether standard calibration translates to reliable clinical biomarkers using 43,425 images from 965 Korean subjects (Fitzpatrick III-IV) across DSLR, tablet, and smartphone devices. While Linear Color Correction Matrix (CCM) normalization reduced color error by 67-77%, achieving near-clinical accuracy (Delta E < 2.3), this success did not translate to biomarker reliability. We identify a phenomenon termed "color-clinical decoupling": despite perceptual accuracy, the Individual Typology Angle (ITA) showed poor inter-device agreement (ICC = 0.40), while the Melanin Index achieved good agreement (ICC = 0.77). This decoupling is driven by the ITA formula's sensitivity to b* channel noise and is further compounded by anatomical variance. Facial region accounts for 25.2% of color variance, 3.6 times more than device effects (7.0%), challenging the efficacy of single-patch calibration. Our results demonstrate that current colorimetric standards are insufficient for clinical-grade biomarker extraction, necessitating region-aware protocols for mobile dermatology.
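For reference, the ITA biomarker discussed above is computed from CIELAB values as ITA = arctan((L* - 50)/b*) * 180/pi. The sketch below evaluates it and shows how a one-unit b* error already moves ITA by roughly two degrees at made-up example values, illustrating the b* sensitivity mentioned in the abstract.

```python
import numpy as np

def ita_degrees(L_star, b_star):
    """Individual Typology Angle from CIELAB values:
    ITA = arctan((L* - 50) / b*) * 180 / pi.
    arctan2 is used to avoid division by zero; it equals the standard form for b* > 0."""
    return np.degrees(np.arctan2(L_star - 50.0, b_star))

# Made-up example values: a one-unit perturbation of b* shifts ITA by ~2 degrees here.
L, b = 62.0, 14.0
print(ita_degrees(L, b), ita_degrees(L, b + 1.0))
```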
Semantic communication (SemCom) improves communication efficiency by transmitting task-relevant information instead of raw bits and is expected to be a key technology for 6G networks. Recent advances in generative AI (GenAI) further enhance SemCom by enabling robust semantic encoding and decoding under limited channel conditions. However, these efficiency gains also introduce new security and privacy vulnerabilities. Due to the broadcast nature of wireless channels, eavesdroppers can also use powerful GenAI-based semantic decoders to recover private information from intercepted signals. Moreover, rapid advances in agentic AI enable eavesdroppers to perform long-term and adaptive inference through the integration of memory, external knowledge, and reasoning capabilities. This allows eavesdroppers to further infer user private behavior and intent beyond the transmitted content. Motivated by these emerging challenges, this paper comprehensively rethinks the security and privacy of SemCom systems in the age of generative and agentic AI. We first present a systematic taxonomy of eavesdropping threat models in SemCom systems. Then, we provide insights into how GenAI and agentic AI can enhance eavesdropping threats. Meanwhile, we also highlight potential opportunities for leveraging GenAI and agentic AI to design privacy-preserving SemCom systems.
In decentralized optimization, the choice of stepsize plays a critical role in algorithm performance. A common approach is to use a shared stepsize across all agents to ensure convergence. However, selecting an optimal stepsize often requires careful tuning, which can be time-consuming and may lead to slow convergence, especially when there is significant variation in the smoothness (L-smoothness) of local objective functions across agents. Individually tuning stepsizes per agent is also impractical, particularly in large-scale networks. To address these limitations, we propose AdGT, an adaptive gradient tracking method that enables each agent to adjust its stepsize based on the smoothness of its local objective. We prove that AdGT achieves linear convergence to the global optimal solution. Through numerical experiments, we compare AdGT with fixed-stepsize gradient tracking methods and demonstrate its superior performance. Additionally, we compare AdGT with adaptive gradient descent (AdGD) in a centralized setting and observe that fully adaptive stepsizes offer greater benefits in decentralized networks than in centralized ones.
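A rough sketch of decentralized gradient tracking with per-agent adaptive stepsizes is given below. Since the abstract does not state AdGT's exact stepsize rule, the sketch substitutes an AdGD-style local smoothness estimate as a stand-in, so it should be read as illustrative rather than as the proposed algorithm.

```python
import numpy as np

def adaptive_gradient_tracking(W, grads, x0, iters=200, alpha0=1e-3):
    """Decentralized gradient tracking with per-agent adaptive stepsizes.

    W:     (n, n) doubly stochastic mixing matrix of the network
    grads: list of n local gradient callables, each mapping (d,) -> (d,)
    x0:    (n, d) initial local iterates
    The stepsize rule below (local smoothness estimated from successive
    gradients, AdGD-style) is a stand-in for AdGT's actual rule."""
    n = len(grads)
    x = x0.copy()
    g = np.stack([grads[i](x[i]) for i in range(n)])
    y = g.copy()                                    # gradient-tracking variable
    alpha = np.full(n, alpha0)
    for _ in range(iters):
        x_new = W @ x - alpha[:, None] * y
        g_new = np.stack([grads[i](x_new[i]) for i in range(n)])
        # Per-agent local smoothness estimate L_i ~ ||grad change|| / ||iterate change||.
        dx = np.linalg.norm(x_new - x, axis=1) + 1e-12
        dg = np.linalg.norm(g_new - g, axis=1) + 1e-12
        alpha = np.minimum(dx / (2.0 * dg), 2.0 * alpha)   # cautious stepsize growth
        y = W @ y + g_new - g
        x, g = x_new, g_new
    return x
```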
Sensor systems are ubiquitous today and vulnerable to sensor data attacks. Given the potentially devastating consequences, counteracting sensor data attacks is an important topic that has not seen sufficient study. This paper develops the first methods that accurately identify and eliminate only the attacked sensor data presented to a sequence estimation/regression algorithm, under a powerful attack model constructed from known and observed attacks. The approach does not assume a known form for the statistical model of the sensor data, allowing data-driven and machine learning sequence estimation/regression algorithms to be protected. We first develop a simple protection approach for attackers without knowledge of the details of our protection scheme, followed by additional processing for attacks that exploit knowledge of the protection system. In the cases for which it was designed, experimental results show that the simple approach achieves performance indistinguishable, to two decimal places, from that of an approach which knows which sensors are attacked. When the attacker does know the protection approach, experimental results indicate that the additional processing can be configured so that its worst-case degradation, even with a large number of attacked sensors, is significantly smaller than that of the simple approach and close to that of an approach which knows which sensors are attacked, at the cost of only a slight degradation when no attack is present. Mathematical descriptions of the worst-case attacks are used to demonstrate that the additional processing provides similar advantages in cases for which we do not have numerical results. All the data-driven processing used in our approaches employs only unattacked training data.
Spoken language models (SLMs) have advanced rapidly in recent years, accompanied by a growing number of evaluation benchmarks. However, most existing benchmarks emphasize task completion and capability scaling, while remaining poorly aligned with how users interact with SLMs in real-world spoken conversations. Effective spoken interaction requires not only accurate understanding of user intent and content, but also the ability to respond with appropriate interactional strategies. In this paper, we present TELEVAL, a dynamic, user-centered benchmark for evaluating SLMs in realistic Chinese spoken interaction scenarios. TELEVAL consolidates evaluation into two core aspects. Reliable Content Fulfillment assesses whether models can comprehend spoken inputs and produce semantically correct responses. Interactional Appropriateness evaluates whether models act as socially capable interlocutors, requiring them not only to generate human-like, colloquial responses, but also to implicitly incorporate paralinguistic cues for natural interaction. Experiments reveal that, despite strong performance on semantic and knowledge-oriented tasks, current SLMs still struggle to produce natural and interactionally appropriate responses, highlighting the need for more interaction-faithful evaluation.
The ability to reason from audio, including speech, environmental sounds, and music, is essential for AI agents to interact effectively in real-world scenarios. Existing benchmarks mainly focus on static or single-scene settings and English audio data, and do not fully capture scenarios in which multiple speakers, unfolding events, and heterogeneous audio sources interact. To address these gaps, we introduce CMDAR, a Chinese benchmark for evaluating models on complex, multi-scene, and dynamically evolving audio reasoning tasks. CMDAR comprises 3,000 carefully curated question-answer pairs linked to diverse audio clips, covering five categories of complex reasoning and spanning three question types. We benchmark 26 state-of-the-art audio language models on CMDAR and observe that they exhibit clear limitations on complex reasoning tasks. On CMDAR-main, Qwen2.5-Omni achieves 76.67% accuracy, whereas GPT-4o Audio reaches 68.47%; however, GPT-4o Audio substantially outperforms Qwen2.5-Omni on the more challenging multiple-choice-with-multiple-audios and open-ended tasks. We also provide detailed analysis and corresponding suggestions for the future development of large audio language models.