Cross-process root-cause analysis of wafer defects is among the most critical yet challenging tasks in semiconductor manufacturing due to the heterogeneity and combinatorial nature of processes along the processing route. This paper presents a new framework for wafer defect root cause analysis, called Potential Loss Analysis (PLA), as a significant enhancement of the previously proposed partial trajectory regression approach. The PLA framework attributes observed high wafer defect densities to upstream processes by comparing the best possible outcomes generated by partial processing trajectories. We show that the task of identifying the best possible outcome can be reduced to solving a Bellman equation. Remarkably, the proposed framework can simultaneously solve the prediction problem for defect density as well as the attribution problem for defect scores. We demonstrate the effectiveness of the proposed framework using real wafer history data.
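The reduction to a Bellman equation lends itself to a standard backward recursion. Below is a minimal sketch of that idea for a finite processing route, assuming a hypothetical cost table `defect_cost[(step, tool)]` and candidate-tool lists `succ[step]`; the paper's actual PLA formulation is richer than this illustration.

```python
# Minimal backward-recursion sketch for the Bellman reduction described
# above. `succ[s]` (candidate tools at step s) and `defect_cost[(s, t)]`
# (defect-density cost of running tool t at step s) are hypothetical
# structures used for illustration only.
def best_possible_outcome(num_steps, succ, defect_cost):
    # V[s] is the best achievable cost-to-go from step s to the end.
    V = [0.0] * (num_steps + 1)
    for s in range(num_steps - 1, -1, -1):
        # Bellman recursion: pick the candidate minimizing cost-to-go.
        V[s] = min(defect_cost[(s, t)] + V[s + 1] for t in succ[s])
    return V  # V[0] is the best possible outcome for the full route
```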
Stochastic models in biomolecular contexts can have a state-dependent process noise covariance. The process noise covariance is an important parameter in the design of a Kalman Filter for state estimation, and the theoretical guarantees of updating it as the state estimate changes are unclear. Here we investigated this issue using the Minimum Mean Square Error estimator framework and an interpretation of the Kalman Filter as minimizing a weighted least squares cost using Newton's method. We found that a Kalman Filter-like algorithm with a process noise covariance update is the best linear unbiased estimator for a class of systems with linear process dynamics and a square-root dependence of the process noise covariance on the state. We proved the result for discrete-time system dynamics and then extended it to continuous-time dynamics using a limiting procedure. For nonlinear dynamics with a general dependence of the process noise covariance on the state, we showed that this algorithm minimizes a quadratic approximation to a least squares cost weighted by the noise covariance. The algorithm is illustrated with an example.
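To make the idea concrete, here is a minimal sketch of a Kalman-filter-like recursion in which the process noise covariance is re-evaluated at the current state estimate on every predict step. All symbols (`A`, `C`, `R`, `q_of_x`) are generic placeholders rather than the paper's notation.

```python
import numpy as np

def kf_state_dependent_Q(A, C, R, q_of_x, x0, P0, ys):
    """Kalman-filter-like sketch where the process noise covariance Q is
    recomputed from the current state estimate at every step, mirroring
    the square-root (state-proportional) dependence discussed above.
    `q_of_x` is an assumed callable returning Q(x)."""
    x, P = x0.copy(), P0.copy()
    estimates = []
    for y in ys:
        # Predict, with Q evaluated at the latest state estimate.
        Q = q_of_x(x)
        x = A @ x
        P = A @ P @ A.T + Q
        # Update.
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        x = x + K @ (y - C @ x)
        P = (np.eye(len(x)) - K @ C) @ P
        estimates.append(x.copy())
    return estimates

# Example Q(x) for a birth-death-like system: variance grows with |x|.
# q_of_x = lambda x: np.diag(np.abs(x) + 1e-6)
```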
This paper addresses the critical challenge of estimating the reliability of Electric Vehicle (EV) charging systems when facing risks such as overheating, unpredictable weather, and cyberattacks. Traditional methods for predicting failures often rely on past data or limiting assumptions, making them ineffective for new or less common threats that result in failure. To solve this issue, we utilize the Principle of Maximum Entropy (PME), a statistical tool that estimates risks even with limited information. PME works by balancing known constraints to create unbiased predictions without guessing missing details. Using the EV charging ecosystem as a case study, we show how PME models stress factors responsible for failure. Our findings reveal a critical insight: even minor, localized stress events can trigger disproportionately large drops in overall system reliability, similar to a domino effect. Our PME model demonstrates how high-impact components, such as the power grid, are more likely to fail as stress accumulates, creating network-wide tipping points. Beyond EVs, this approach applies to any complex system with incomplete data, such as smart grids, healthcare devices, or logistics networks. By mathematically establishing an inverse relationship between uncertainty (entropy) and reliability, our work quantifies how greater system unpredictability directly degrades robustness. This offers a universal tool to improve decision-making under unpredictable conditions. This work bridges advanced mathematics with real-world engineering, providing actionable insights for policymakers and industries to build safer, more efficient systems in our increasingly connected world.
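As a toy illustration of how PME balances known constraints, the sketch below fits a discrete maximum-entropy distribution over assumed stress levels subject to a single known mean; the exponential-family solution p_i ∝ exp(-λ x_i) is recovered by minimizing the dual. The support and the constraint are hypothetical stand-ins for the paper's EV stress model.

```python
import numpy as np
from scipy.optimize import minimize

# Discrete maximum-entropy sketch: find the distribution over a set of
# stress levels that matches a known mean stress while making no further
# assumptions. Support and moment constraint are illustrative only.
def maxent_distribution(support, target_mean):
    support = np.asarray(support, dtype=float)

    def neg_dual(lam):
        # Dual of the maxent problem: log-partition plus lam * target mean.
        logZ = np.log(np.sum(np.exp(-lam[0] * support)))
        return logZ + lam[0] * target_mean

    res = minimize(neg_dual, x0=[0.0])  # solve for the Lagrange multiplier
    p = np.exp(-res.x[0] * support)
    return p / p.sum()

# p = maxent_distribution(support=[0, 1, 2, 3], target_mean=1.2)
```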
Automated vehicles (AVs) will allow occupants to engage in non-driving tasks, but limited visual cues will make them vulnerable to unexpected movements. These unpredictable perturbations create a "surprise factor," forcing the central nervous system to rely on compensatory postural adjustments, which are less effective and more likely to trigger sensory conflicts. Since the head is a key reference for sensory input (vestibular and visual), models that accurately capture head-neck postural stabilization are essential for assessing AV comfort. This study extends an existing model predictive control-based framework to simulate head-neck postural control under lateral perturbations. Experimental validation against human data demonstrates that the model can accurately reproduce dynamic responses during lateral trunk perturbations. The results show that muscle effort combined with partial somatosensory feedback provides the best overall dynamic fit without requiring corrective relative and global head orientation integrators for posture.
The recently proposed System Identification via Validation and Adaptation (SIVA) method allows system identification, uncertainty quantification, and model validation directly from data. Inspired by generative modeling, SIVA employs a neural network that converts random noise to physically meaningful parameters. The known equation of motion utilizes these parameters to generate fake accelerations, which are compared to real training data using a mean square error loss. For concurrent parameter validation, independent datasets are passed through the model, and the resulting signals are classified as real or fake by a discriminator network, which guides the parameter-generator network. In this work, we apply SIVA to simulated vibration data from a cantilever beam that contains a lumped mass and a nonlinear end attachment, demonstrating accurate parameter estimation and model updating on complex, highly nonlinear systems.
Due to their high flexibility and versatility, unmanned aerial vehicles (UAVs) are leveraged in various fields including surveillance and disaster relief. However, in UAV networks, routing is vulnerable to malicious damage due to distributed topologies and high dynamics. Hence, ensuring the routing security of UAV networks is challenging. In this paper, we characterize the routing process in a time-varying UAV network with malicious nodes. Specifically, we formulate the routing problem to minimize the total delay, which is an integer linear program and intractable to solve. Then, to tackle the network security issue, a blockchain-based trust management mechanism (BTMM) is designed to dynamically evaluate trust values and identify low-trust UAVs. To improve traditional practical Byzantine fault tolerance algorithms in the blockchain, we propose a consensus UAV update mechanism. Besides, considering the local observability, the routing problem is reformulated as a decentralized partially observable Markov decision process. Further, a multi-agent double deep Q-network based routing algorithm is designed to minimize the total delay. Finally, simulations are conducted with attacked UAVs, and numerical results show that the delay of the proposed mechanism decreases by 13.39$\%$, 12.74$\%$, and 16.6$\%$ compared to multi-agent proximal policy optimization algorithms, multi-agent deep Q-network algorithms, and methods without BTMM, respectively.
Millimeter-wave (mmWave) communications have gained attention as a key technology for high-capacity wireless systems, owing to the wide available bandwidth. However, mmWave signals suffer from their inherent characteristics such as severe path loss, poor scattering, and limited diffraction, which necessitate the use of large antenna arrays and directional beamforming, typically implemented through massive MIMO architectures. Accurate channel estimation is critical in such systems, but its computational complexity increases proportionally with the number of antennas. This may become a significant burden in mmWave systems where channels exhibit rapid fluctuations and require frequent updates. In this paper, we propose a low-complexity channel denoiser based on Bayesian binary hypothesis testing and beamspace sparsity. By modeling each sparse beamspace component as a mixture of signal and noise under a Bernoulli-complex Gaussian prior, we formulate a likelihood ratio test to detect signal-relevant elements. Then, a hard-thresholding rule is applied to suppress noise-dominant components in the noisy channel vector. Despite its extremely low computational complexity, the proposed method achieves channel estimation accuracy that is comparable to that of complex iterative or learning-based approaches. This effectiveness is supported by both theoretical analysis and numerical evaluation, suggesting that the method can be a viable option for mmWave systems with strict resource constraints.
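Under the Bernoulli-complex-Gaussian prior, the likelihood ratio test reduces to a closed-form threshold on each beamspace entry's energy. The sketch below shows this reduction with assumed variances `sigma_n2`, `sigma_s2` and activity probability `eps`; in practice these parameters would be estimated rather than known.

```python
import numpy as np

# Sketch of the Bayesian binary hypothesis test for beamspace denoising.
# Each beamspace entry y[i] is modeled as noise-only (H0 ~ CN(0, v0)) or
# signal-plus-noise (H1 ~ CN(0, v1)) with prior activity probability eps.
# The LRT on |y[i]|^2 collapses to a hard threshold tau.
def beamspace_denoise(y, sigma_n2, sigma_s2, eps):
    v0, v1 = sigma_n2, sigma_n2 + sigma_s2
    # eps*p1(y) > (1-eps)*p0(y)  <=>  |y|^2 > tau, with:
    tau = (v0 * v1 / (v1 - v0)) * np.log((v1 / v0) * (1 - eps) / eps)
    keep = np.abs(y) ** 2 > tau
    return np.where(keep, y, 0.0)  # zero out noise-dominant components
```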
This retrospective study evaluated five VLMs (Qwen2.5, Phi-4, Gemma3, Llama3.2, and Mistral3.1) using the MedFMC dataset. This dataset includes 22,349 images from 7,461 patients encompassing chest radiography (19 disease multi-label classifications), colon pathology (tumor detection), endoscopy (colorectal lesion identification), neonatal jaundice assessment (skin color-based treatment necessity), and retinal fundoscopy (5-point diabetic retinopathy grading). Diagnostic accuracy was compared in three experimental settings: visual input only, multimodal input, and chain-of-thought reasoning. Model accuracy was assessed against ground truth labels, with statistical comparisons using bootstrapped confidence intervals (p<.05). Qwen2.5 achieved the highest accuracy for chest radiographs (90.4%) and endoscopy images (84.2%), significantly outperforming the other models (p<.001). In colon pathology, Qwen2.5 (69.0%) and Phi-4 (69.6%) performed comparably (p=.41), both significantly exceeding other VLMs (p<.001). Similarly, for neonatal jaundice assessment, Qwen2.5 (58.3%) and Phi-4 (58.1%) showed comparable leading accuracies (p=.93) significantly exceeding their counterparts (p<.001). All models struggled with retinal fundoscopy; Qwen2.5 and Gemma3 achieved the highest, albeit modest, accuracies at 18.6% (comparable, p=.99), significantly better than other tested models (p<.001). Unexpectedly, multimodal input reduced accuracy for some models and modalities, and chain-of-thought reasoning prompts also failed to improve accuracy. The open-source VLMs demonstrated promising diagnostic capabilities, particularly in chest radiograph interpretation. However, performance in complex domains such as retinal fundoscopy was limited, underscoring the need for further development and domain-specific adaptation before widespread clinical application.
Fake speech detection systems have become a necessity to combat speech deepfakes. Current systems exhibit poor generalizability on out-of-domain speech samples due to a lack of diverse training data. In this paper, we attempt to address the domain generalization issue by proposing a novel speech representation using self-supervised learning (SSL) speech embeddings and the Modulation Spectrogram (MS) feature. A fusion strategy is used to combine both speech representations to introduce a new front-end for the classification task. The proposed SSL+MS fusion representation is passed to the AASIST back-end network. Experiments are conducted on monolingual and multilingual fake speech datasets to evaluate the efficacy of the proposed model architecture in cross-dataset and multilingual cases. The proposed model achieves a relative performance improvement of 37% and 20% on the ASVspoof 2019 and MLAAD datasets, respectively, in in-domain settings compared to the baseline. In the out-of-domain scenario, the model trained on ASVspoof 2019 shows a 36% relative improvement when evaluated on the MLAAD dataset. Across all evaluated languages, the proposed model consistently outperforms the baseline, indicating enhanced domain generalization.
Integrated Sensing and Communication (ISAC) is emerging as a key enabler for 6G wireless networks, allowing the joint use of spectrum and infrastructure for both communication and sensing. While prior ISAC solutions have addressed resource optimization, including power allocation, beamforming, and waveform design, they often rely on centralized architectures with full network knowledge, limiting their scalability in distributed systems. In this paper, we propose two coordinated decentralized optimization algorithms for beamforming and power allocation tailored to cell-free ISAC networks. The first algorithm employs locally designed fixed beamformers at access points (APs), combined with a centralized power allocation scheme computed at a central server (CS). The second algorithm jointly optimizes beamforming and power control through a fully decentralized consensus ADMM framework. Both approaches rely on local information at APs and limited coordination with the CS. Simulation results obtained using our proposed Python-based simulation framework evaluate their fronthaul overhead and system-level performance, demonstrating their practicality for scalable ISAC deployment in decentralized, cell-free architectures.
An increasing share of consumers care about the carbon footprint of their electricity. This paper proposes to integrate consumer carbon preferences in the electricity market-clearing through consumer-based carbon costs. Specifically, consumers can submit not only bids for power but also assign a cost to the carbon emissions incurred by their electricity use. We start from a centralized market clearing that maximizes social welfare under consideration of generation costs, consumer utility and consumer carbon costs. We then derive an equivalent equilibrium formulation which incorporates a carbon allocation problem and gives rise to a set of carbon-adjusted electricity prices for both consumers and generators. We prove that the carbon-adjusted prices are higher for low-emitting generators and consumers with high carbon costs. Further, we prove that this new paradigm satisfies the same desirable market properties as standard electricity markets based on locational marginal prices, namely revenue adequacy and individual rationality, and demonstrate that a carbon tax on generators is equivalent to imposing a uniform carbon cost on consumers. Using a simplified three-bus system and the RTS-GMLC system, we illustrate that consumer-based carbon costs contribute to greener electricity market clearing both through generation redispatch and reductions in demand.
We develop a system for real-time public transportation data, adopting the GTFS-RT (GTFS Realtime) standard, an open data format for public transit data. We give an overview of the design of a physical GPS sensor device, its firmware, and processes. Next, we give the algorithms used to translate raw sensor data into a public GTFS-RT data feed. Finally, we deploy this feed over a cluster spanning multiple regions to maintain high availability.
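For readers unfamiliar with GTFS-RT, a feed entity can be assembled with the `gtfs-realtime-bindings` Python package roughly as follows; the trip ID, vehicle ID, and coordinates are placeholders, and the serialized bytes would be served by the clustered feed endpoint described above.

```python
import time
from google.transit import gtfs_realtime_pb2

# Build a minimal GTFS-RT vehicle-position feed message.
feed = gtfs_realtime_pb2.FeedMessage()
feed.header.gtfs_realtime_version = "2.0"
feed.header.timestamp = int(time.time())

entity = feed.entity.add()
entity.id = "vehicle-1"                         # placeholder entity ID
entity.vehicle.trip.trip_id = "trip-42"         # placeholder trip
entity.vehicle.position.latitude = 40.7128      # GPS fix after filtering
entity.vehicle.position.longitude = -74.0060
entity.vehicle.timestamp = int(time.time())

binary_feed = feed.SerializeToString()          # bytes served at the endpoint
```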
The recently proposed orthogonal delay-Doppler division multiplexing (ODDM) modulation has been demonstrated to enjoy excellent reliability over doubly-dispersive channels. However, most of the prior analysis tends to ignore the interactive dispersion caused by the wideband property of ODDM signal, which possibly leads to performance degradation. To solve this problem, we investigate the input-output relation of ODDM systems considering the wideband effect, which is also known as the Doppler squint effect (DSE) in the literature. The extra delay-Doppler (DD) dispersion caused by the DSE is first explicitly explained by employing the time-variant frequency response of multipath channels. Its characterization is then derived for both reduced cyclic prefix (RCP) and zero padded (ZP)-based wideband ODDM systems, where the extra DD spread and more complicated power leakage outside the peak region are presented theoretically. Numerical results are finally provided to confirm the significance of DSE. The derivations in this paper are beneficial for developing accurate signal processing techniques in ODDM-based integrated sensing and communication systems.
Synthesizing amyloid PET scans from the more widely available and accessible structural MRI modality offers a promising, cost-effective approach for large-scale Alzheimer's Disease (AD) screening. This is motivated by evidence that, while MRI does not directly detect amyloid pathology, it may nonetheless encode information correlated with amyloid deposition that can be uncovered through advanced modeling. However, the high dimensionality and structural complexity of 3D neuroimaging data pose significant challenges for existing MRI-to-PET translation methods. Modeling the cross-modality relationship in a lower-dimensional latent space can simplify the learning task and enable more effective translation. As such, we present CoCoLIT (ControlNet-Conditioned Latent Image Translation), a diffusion-based latent generative framework that incorporates three main innovations: (1) a novel Weighted Image Space Loss (WISL) that improves latent representation learning and synthesis quality; (2) a theoretical and empirical analysis of Latent Average Stabilization (LAS), an existing technique used in similar generative models to enhance inference consistency; and (3) the introduction of ControlNet-based conditioning for MRI-to-PET translation. We evaluate CoCoLIT's performance on publicly available datasets and find that our model significantly outperforms state-of-the-art methods on both image-based and amyloid-related metrics. Notably, in amyloid-positivity classification, CoCoLIT outperforms the second-best method with improvements of +10.5% on the internal dataset and +23.7% on the external dataset. The code and models of our approach are available at this https URL.
This work addresses the critical challenge of guaranteeing safety for complex dynamical systems where precise mathematical models are uncertain and data measurements are corrupted by noise. We develop a physics-informed, direct data-driven framework for synthesizing robust safety controllers (R-SCs) for both discrete- and continuous-time nonlinear polynomial systems that are subject to unknown-but-bounded disturbances. To do so, we introduce a notion of safety through robust control barrier certificates (R-CBCs), which ensure avoidance of (potentially multiple) unsafe regions, offering a less conservative alternative to existing methods based on robust invariant sets. Our core innovation lies in integrating the fundamental physical principles with observed noisy data which drastically reduces data requirements, enabling robust safety analysis with significantly shorter trajectories, compared to purely data-driven methods. To achieve this, the proposed synthesis procedure is formulated as a sum-of-squares (SOS) optimization program that systematically designs the R-CBC and its associated R-SC by leveraging both collected data and underlying physical laws. The efficacy of our framework is demonstrated on four benchmark systems, three discrete-time and one continuous-time nonlinear polynomial systems, confirming its ability to offer robust safety guarantees with reduced data demands.
Infrared small target detection (IRSTD) is critical in both civilian and military applications. This study addresses the challenge of precise IRSTD in complex backgrounds. Recent methods rely fundamentally on conventional convolution operations, which primarily capture local spatial patterns and struggle to distinguish the unique frequency-domain characteristics of small targets from intricate background clutter. To overcome these limitations, we propose the Synergistic Wavelet-Attention Network (SWAN), a novel framework designed to perceive targets from both the spatial and frequency domains. SWAN leverages a Haar Wavelet Convolution (HWConv) for a deep, cross-domain fusion of the frequency energy and spatial details of small targets. Furthermore, a Shifted Spatial Attention (SSA) mechanism efficiently models long-range spatial dependencies with linear computational complexity, enhancing contextual awareness. Finally, a Residual Dual-Channel Attention (RDCA) module adaptively calibrates channel-wise feature responses to suppress background interference while amplifying target-pertinent signals. Extensive experiments on benchmark datasets demonstrate that SWAN surpasses existing state-of-the-art methods, showing significant improvements in detection accuracy and robustness, particularly in complex challenging scenarios.
The use of Convolutional Neural Networks (CNNs) has greatly improved the interpretation of medical images. However, conventional CNNs typically demand extensive computational resources and large training datasets. To address these limitations, this study applied transfer learning to achieve strong classification performance using fewer training samples. Specifically, the study compared EfficientNetV2 with its predecessor, EfficientNet, and with ResNet50 in classifying brain tumors into three types: glioma, meningioma, and pituitary tumors. Results showed that EfficientNetV2 delivered superior performance compared to the other models. However, this improvement came at the cost of increased training time, likely due to the model's greater complexity.
Lung adenocarcinoma (LUAD) is a subtype of non-small cell lung cancer (NSCLC). LUAD with a mutation in the EGFR gene accounts for approximately 46% of LUAD cases. Patients carrying EGFR mutations can be treated with specific tyrosine kinase inhibitors (TKIs). Hence, predicting EGFR mutation status can help in clinical decision making. H&E-stained whole slide imaging (WSI) is a routinely performed screening procedure for cancer staging and subtyping; predicting mutation status from it is especially relevant for Southeast Asian populations, which show a significantly higher incidence of the mutation than Caucasians (39-64% vs 7-22%). Recent progress in AI models has shown promising results in cancer detection and classification. In this study, we propose a deep learning (DL) framework built on a vision transformer (ViT)-based pathology foundation model and an attention-based multiple instance learning (ABMIL) architecture to predict EGFR mutation status from H&E WSI. The developed pipeline was trained using data from an Indian cohort (170 WSI) and evaluated across two independent datasets: an internal test set (30 WSI from the Indian cohort) and an external test set from TCGA (86 WSI). The model shows consistent performance across both datasets, with AUCs of 0.933 (+/-0.010) and 0.965 (+/-0.015) for the internal and external test sets respectively. This proposed framework can be efficiently trained on small datasets, achieving superior performance compared to several prior studies irrespective of training domain. The current study demonstrates the feasibility of accurately predicting EGFR mutation status using routine pathology slides, particularly in resource-limited settings, using foundation models and attention-based multiple instance learning.
The plug-and-play (PnP) method uses a deep denoiser within a proximal algorithm for model-based image reconstruction (IR). Unlike end-to-end IR, PnP allows the same pretrained denoiser to be used across different imaging tasks, without the need for retraining. However, black-box networks can make the iterative process in PnP unstable. A common issue observed across architectures like CNNs, diffusion models, and transformers is that the visual quality and PSNR often improve initially but then degrade in later iterations. Previous attempts to ensure stability usually impose restrictive constraints on the denoiser. However, standard denoisers, which are freely trained for single-step noise removal, need not satisfy such constraints. We propose a simple data-driven stabilization mechanism that adaptively averages the potentially unstable PnP operator with a contractive IR operator. This acts as a form of viscosity regularization, where the contractive component progressively dampens updates in later iterations, helping to suppress oscillations and prevent divergence. We validate the effectiveness of our stabilization mechanism across different proximal algorithms, denoising architectures, and imaging tasks.
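Schematically, the stabilization amounts to iterating a convex combination of the two operators. The sketch below uses a simple linear schedule for the averaging weight and treats both operators as abstract callables; the paper's actual data-driven adaptation of the weight is not reproduced here.

```python
# Minimal sketch of the stabilization idea: adaptively average a possibly
# unstable PnP operator T_pnp with a contractive IR operator T_c (e.g., a
# small-step gradient map on the data-fidelity term). The linear schedule
# for theta is an illustrative assumption, not the paper's learned rule.
def stabilized_pnp(x0, T_pnp, T_c, num_iters=100):
    x = x0
    for k in range(num_iters):
        theta = k / (num_iters - 1)  # weight shifts toward the contractive map
        # Later iterations are progressively dampened, suppressing the
        # oscillations and divergence observed with pure PnP updates.
        x = (1.0 - theta) * T_pnp(x) + theta * T_c(x)
    return x
```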
Suppose there is an adversarial UAV network being tracked by a radar. How can the radar determine whether the UAVs are coordinating, in some well-defined sense? How can the radar infer the objectives of the individual UAVs and the network as a whole? We present an abstract interpretation of such a strategic interaction, allowing us to conceptualize coordination as a linearly constrained multi-objective optimization problem. Then, we present some tools from microeconomic theory that allow us to detect coordination and reconstruct individual UAV objective functions, from radar tracking signals. This corresponds to performing inverse multi-objective optimization. We present details for how the abstract microeconomic interpretation corresponds to, and naturally arises from, physical-layer radar waveform modulation and multi-target filtering. This article serves as a tutorial, bringing together concepts from several established research contributions in an expository style.
The choice of parameterization in Nonlinear (NL) system models greatly affects the quality of the estimated model. Overly complex models can be impractical and hard to interpret, necessitating data-driven methods for simpler and more accurate representations. In this paper, we propose a data-driven approach to simplify a class of continuous-time NL system models using linear approximations around varying operating points. Specifically, for sparse additive NL models, our method identifies the number of NL subterms and their corresponding input spaces. Under small-signal operation, we approximate the unknown NL system as a trajectory-scheduled Linear Parameter-Varying (LPV) system, with LPV coefficients representing the gradient of the NL function and indicating input sensitivity. Using this sensitivity measure, we determine the NL system's structure through LPV model reduction by identifying non-zero LPV coefficients and selecting scheduling parameters. We introduce two computationally tractable sparse estimators within a vector-valued Reproducing Kernel Hilbert Space (RKHS) framework to estimate the LPV coefficients while preserving their structural relationships. The structure of the sparse additive NL model is then determined by detecting non-zero elements in the gradient vector (the LPV coefficients) and the Hessian matrix (the Jacobian of the LPV coefficients). The sparsified Hessian matrix reveals the NL model's structure, with numerical simulations confirming the approach's effectiveness.
The rise of highly convincing synthetic speech poses a growing threat to audio communications. Although existing Audio Deepfake Detection (ADD) methods have demonstrated good performance under clean conditions, their effectiveness drops significantly under degradations such as packet losses and speech codec compression in real-world communication environments. In this work, we propose the first unified framework for robust ADD under such degradations, which is designed to effectively accommodate multiple types of Time-Frequency (TF) representations. The core of our framework is a novel Multi-Granularity Adaptive Attention (MGAA) architecture, which employs a set of customizable multi-scale attention heads to capture both global and local receptive fields across varying TF granularities. A novel adaptive fusion mechanism subsequently adjusts and fuses these attention branches based on the saliency of TF regions, allowing the model to dynamically reallocate its focus according to the characteristics of the degradation. This enables the effective localization and amplification of subtle forgery traces. Extensive experiments demonstrate that the proposed framework consistently outperforms state-of-the-art baselines across various real-world communication degradation scenarios, including six speech codecs and five levels of packet losses. In addition, comparative analysis reveals that the MGAA-enhanced features significantly improve separability between real and fake audio classes and sharpen decision boundaries. These results highlight the robustness and practical deployment potential of our framework in real-world communication environments.
Hydrogen Purchase Agreements (HPAs) guarantee revenue streams that mitigate the financial risks inherent in the long-term production of green hydrogen from renewable energy sources. However, the intermittency of renewable electricity and the availability of parallel revenue opportunities in both the electricity and hydrogen markets complicate the scheduling of green hydrogen production. The scheduling should maximise the total revenue from short-term sales of electricity and hydrogen against the long-term HPA delivery obligations. This challenge is addressed by developing a Bounded Fuzzy Logic Control (BFLC) scheme that determines the daily HPA delivery target based on day-ahead forecasts of electricity and hydrogen prices, as well as wind capacity factors. Subsequently, the daily target is imposed as a constraint in dispatch optimisation, which allocates energy and hydrogen flows for each hour of the day. Revenue comparisons over several years demonstrate that the BFLC achieves total annual revenues within 9% of the optimal revenues based on perfect foresight. The BFLC revenues consistently exceed those of steady control, with the largest differences observed under conditions of elevated price levels and variability. The BFLC provides effective long-term scheduling of green hydrogen production, enabling realistic revenue quantification that mitigates economic risks without overlooking economically viable projects.
A fully customisable chip-on-board (COB) LED design to evoke two brain responses simultaneously (steady-state visual evoked potential (SSVEP) and transient evoked potential, P300) is discussed in this paper. Considering the different possible modalities in brain-computer interfacing (BCI), SSVEP is widely accepted as it requires fewer electroencephalogram (EEG) electrodes and minimal training time. The aim of this work was to produce a hybrid BCI hardware platform to evoke SSVEP and P300 precisely with reduced fatigue and improved classification performance. The system comprises four independent radial green visual stimuli controlled individually by a 32-bit microcontroller platform to evoke SSVEP and four red LEDs flashing at random intervals to generate P300 events. The system can also record the P300 event timestamps, which can be used in classification to improve accuracy and reliability. The hybrid stimulus was tested for real-time classification accuracy by controlling a LEGO robot to move in four directions.
Change detection encompasses a variety of task types, and the goal of building change detection (BCD) tasks is to accurately locate buildings and distinguish changed building areas. In recent years, various deep learning-based BCD methods have achieved significant success in detecting difference regions by using different change information enhancement techniques, effectively improving the precision of BCD tasks. To address the issue of BCD with special colors, we propose the change-guided cross correlation enhancement network (CGCCE-Net). We design the change-guided residual refinement (CGRR) branch, which focuses on extending shallow texture features to the multi-scale features obtained from PVT, enabling early attention to and acquisition of special colors. Then, channel-spatial attention is used in the deep features to achieve independent information enhancement. Additionally, we construct the global cross correlation module (GCCM) to facilitate semantic information interaction between bi-temporal images, establishing building and target recognition relationships between different images. Further semantic feature enhancement is achieved through the semantic cognitive enhancement module (SCEM), and finally, the cross fusion decoder (CFD) is used for change information fusion and image reconstruction. Extensive experiments on three public datasets demonstrate that our CGCCE-Net outperforms mainstream BCD methods with outstanding performance.
With the advancement of remote sensing satellite technology and the rapid progress of deep learning, remote sensing change detection (RSCD) has become a key technique for regional monitoring. Traditional change detection (CD) methods and deep learning-based approaches have made significant contributions to change analysis and detection; however, many outstanding methods still face limitations in the exploration and application of multimodal data. To address this, we propose the multimodal graph-conditioned vision-language reconstruction network (MGCR-Net) to further explore the semantic interaction capabilities of multimodal data. Multimodal large language models (MLLMs) have attracted widespread attention for their outstanding performance in computer vision, particularly due to their powerful visual-language understanding and dialogic interaction capabilities. Specifically, we design an MLLM-based optimization strategy to generate multimodal textual data from the original CD images, which serve as textual input to MGCR. Visual and textual features are extracted through a dual encoder framework. For the first time in the RSCD task, we introduce a multimodal graph-conditioned vision-language reconstruction mechanism, which is integrated with graph attention to construct a semantic graph-conditioned reconstruction module (SGCM). This module generates vision-language (VL) tokens through graph-based conditions and enables cross-dimensional interaction between visual and textual features via multi-head attention. The reconstructed VL features are then deeply fused using the language vision transformer (LViT), achieving fine-grained feature alignment and high-level semantic interaction. Experimental results on four public datasets demonstrate that MGCR achieves superior performance compared to mainstream CD methods. Our code is available on this https URL
Accurate estimation of biological brain age from three dimensional (3D) T$_1$-weighted magnetic resonance imaging (MRI) is a critical imaging biomarker for identifying accelerated aging associated with neurodegenerative diseases. Effective brain age prediction necessitates training 3D models to leverage comprehensive insights from volumetric MRI scans, thereby fully capturing spatial anatomical context. However, optimizing deep 3D models remains challenging due to problems such as vanishing gradients. Furthermore, brain structural patterns differ significantly between sexes, which impacts aging trajectories and vulnerability to neurodegenerative diseases, thereby making sex classification crucial for enhancing the accuracy and generalizability of predictive models. To address these challenges, we propose a Deeply Supervised Multitask Autoencoder (DSMT-AE) framework for brain age estimation. DSMT-AE employs deep supervision, which involves applying supervisory signals at intermediate layers during training, to stabilize model optimization, and multitask learning to enhance feature representation. Specifically, our framework simultaneously optimizes brain age prediction alongside auxiliary tasks of sex classification and image reconstruction, thus effectively capturing anatomical and demographic variability to improve prediction accuracy. We extensively evaluate DSMT-AE on the Open Brain Health Benchmark (OpenBHB) dataset, the largest multisite neuroimaging cohort combining ten publicly available datasets. The results demonstrate that DSMT-AE achieves state-of-the-art performance and robustness across age and sex subgroups. Additionally, our ablation study confirms that each proposed component substantially contributes to the improved predictive accuracy and robustness of the overall architecture.
We study a pursuit-evasion game between a double integrator-driven pursuer with bounded velocity and bounded acceleration and a single integrator-driven evader with bounded velocity in a two-dimensional plane. The pursuer's goal is to capture the evader in the shortest time, while the evader attempts to delay the capture. We analyze two scenarios based on whether the capture can happen before the pursuer's speed reaches its maximum. For the case when the pursuer can capture the evader before its speed reaches its maximum, we use geometric methods to obtain the strategies for the pursuer and the evader; for the case when it cannot, we use numerical methods instead. In both cases, we demonstrate that the proposed strategies are optimal in the sense of Nash equilibrium through the Hamilton-Jacobi-Isaacs equation, and the pursuer can capture the evader as long as its maximum speed is larger than that of the evader. Simulation experiments illustrate the effectiveness of the strategies.
According to the World Health Organization, 430 million people experience disabling hearing loss. For them, recognizing spoken commands such as one's name is difficult. To address this issue, Lumename, a real-time smartwatch, utilizes on-device machine learning to detect a user-customized name before generating a haptic-visual alert. During training, to overcome the need for large datasets, Lumename uses novel audio modulation techniques to augment samples from one user and generate additional samples to represent diverse genders and ages. Constrained random iterations were used to find optimal parameters within the model architecture. This approach resulted in a low-resource and low-power TinyML model that could quickly infer various keyword samples while remaining 91.67\% accurate on a custom-built smartwatch based on an Arduino Nano 33 BLE Sense.
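A rough sketch of the style of audio modulation described, using `librosa` pitch shifts and time stretches to expand one user's recordings into variants approximating different voices and speaking rates; the specific shift and stretch grids are illustrative assumptions, not Lumename's actual augmentation parameters.

```python
import librosa

# Augment a single user's keyword recording: pitch shifts approximate
# different speakers (gender/age), mild time stretches approximate
# speaking-rate variation. Grids below are illustrative only.
def augment_keyword(path, sr=16000):
    y, _ = librosa.load(path, sr=sr)
    variants = []
    for n_steps in (-4, -2, 2, 4):          # semitone pitch shifts
        variants.append(librosa.effects.pitch_shift(y, sr=sr, n_steps=n_steps))
    for rate in (0.9, 1.1):                 # slower / faster delivery
        variants.append(librosa.effects.time_stretch(y, rate=rate))
    return variants
```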
The parcellation of Cranial Nerves (CNs) serves as a crucial quantitative methodology for evaluating the morphological characteristics and anatomical pathways of specific CNs. Multi-modal CN parcellation networks, which combine structural Magnetic Resonance Imaging (MRI) and diffusion MRI, have achieved promising segmentation performance. However, insufficient exploration of diffusion MRI information has limited the performance of existing multi-modal fusion. In this work, we propose a tractography-guided Dual-label Collaborative Learning Network (DCLNet) for multi-modal CN parcellation. The key contribution of our DCLNet is the introduction of coarse labels of CNs obtained from fiber tractography through the CN atlas, and collaborative learning with precise labels annotated by experts. Meanwhile, we introduce a Modality-adaptive Encoder Module (MEM) to achieve soft information swapping between structural MRI and diffusion MRI. Extensive experiments conducted on the publicly available Human Connectome Project (HCP) dataset demonstrate performance improvements compared to single-label networks. This systematic validation underscores the effectiveness of dual-label strategies in addressing inherent ambiguities in CN parcellation tasks.
Most existing robust control barrier functions (CBFs) can only handle matched disturbances, restricting their applications in real-world scenarios. While some recent advances extend robust CBFs to unmatched disturbances, they rely heavily on the differentiability of disturbances and fail to accommodate the non-differentiable case for high-relative-degree safety constraints. To address these limitations, this paper proposes a class of disturbance rejection CBFs (DRCBFs), including DRCBFs and adaptive DRCBFs (aDRCBFs). This class of DRCBFs can strictly guarantee safety under general bounded disturbances, which include matched or unmatched, differentiable or non-differentiable disturbances as special cases. Moreover, no information about the disturbance bound is needed in aDRCBFs. Simulation results illustrate that this class of DRCBFs outperforms existing robust CBFs.
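For context, the standard (non-robust) CBF safety filter behind such constructions solves a small QP that minimally alters a nominal input; for a single constraint the solution is a half-space projection, shown below. The paper's DRCBFs additionally tighten this constraint against bounded matched/unmatched disturbances, which this sketch does not include.

```python
import numpy as np

# Min-norm CBF safety filter: minimally modify a nominal input u_nom so
# that Lfh + Lgh·u + alpha·h >= 0 holds. With one affine constraint the QP
# has a closed-form solution: project u_nom onto the safe half-space.
def cbf_safety_filter(u_nom, Lfh, Lgh, h, alpha=1.0):
    a = Lfh + alpha * h          # constant term of the constraint
    b = np.asarray(Lgh, float)   # constraint gradient w.r.t. u
    if a + b @ u_nom >= 0.0:     # nominal input is already safe
        return u_nom
    # Projection onto {u : a + b^T u >= 0}.
    return u_nom - (a + b @ u_nom) / (b @ b) * b
```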
In speaker verification (SV), the acoustic mismatch between children's and adults' speech leads to suboptimal performance when adult-trained SV systems are applied to children's speaker verification (C-SV). While domain adaptation techniques can enhance performance on C-SV tasks, they often do so at the expense of significant degradation in performance on adults' SV (A-SV) tasks. In this study, we propose an Age Agnostic Speaker Verification (AASV) system that achieves robust performance across both C-SV and A-SV tasks. Our approach employs a domain classifier to disentangle age-related attributes from speech and subsequently expands the embedding space using the extracted domain information, forming a unified speaker representation that is robust and highly discriminative across age groups. Experiments on the OGI and VoxCeleb datasets demonstrate the effectiveness of our approach in bridging SV performance disparities, laying the foundation for inclusive and age-adaptive SV systems.
Global Positioning System (GPS) satellites are essential for providing accurate navigation and timing information worldwide. Operating in medium Earth orbit (MEO), these satellites must maintain precise Earth-pointing attitudes to transmit signals effectively. This paper presents a comprehensive review of the operational dynamics, attitude determination and control systems (ADCS), and orbital insertion techniques for GPS satellites. We explore the integration of sensors and actuators, control algorithms, stabilization strategies, and the launch procedures required to deploy these satellites. Key equations related to orbital mechanics and attitude control are discussed, and references to recent technical literature are included.
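As one of the key orbital-mechanics equations discussed, Kepler's third law, T = 2π√(a³/μ), recovers the characteristic GPS orbital period from the constellation's nominal semi-major axis of roughly 26,560 km:

```python
import math

# Kepler's third law applied to the GPS constellation: with Earth's
# gravitational parameter mu and the nominal GPS semi-major axis, the
# orbital period comes out near half a sidereal day (~11 h 58 min),
# which is why GPS ground tracks repeat daily.
MU_EARTH = 3.986004418e14      # m^3/s^2, Earth's gravitational parameter
a = 26_560e3                   # m, nominal GPS MEO semi-major axis

T = 2 * math.pi * math.sqrt(a**3 / MU_EARTH)   # orbital period in seconds
print(f"Period: {T / 3600:.2f} h")             # ~11.97 h
```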
The ability to predict the attention of expert pathologists could lead to decision support systems for better pathology training. We developed methods to predict the spatio-temporal (where and when) movements of pathologists' attention as they grade whole slide images (WSIs) of prostate cancer. We characterize a pathologist's attention trajectory by their x, y, and m (magnification) movements of a viewport as they navigate WSIs using a digital microscope. This information was obtained from 43 pathologists across 123 WSIs, and we consider the task of predicting the pathologist attention scanpaths constructed from the viewport centers. We introduce a fixation extraction algorithm that simplifies an attention trajectory by extracting fixations in the pathologist's viewing while preserving semantic information, and we use these pre-processed data to train and test a two-stage model to predict the dynamic (scanpath) allocation of attention during WSI reading via intermediate attention heatmap prediction. In the first stage, a transformer-based sub-network predicts the attention heatmaps (static attention) across different magnifications. In the second stage, we predict the attention scanpath by sequentially modeling the next fixation points in an autoregressive manner using a transformer-based approach, starting at the WSI center and leveraging multi-magnification feature representations from the first stage. Experimental results show that our scanpath prediction model outperforms chance and baseline models. Tools developed from this model could assist pathology trainees in learning to allocate their attention during WSI reading like an expert.
This paper addresses the challenge of large model (LM)-embedded wireless networks in handling the trade-off between model accuracy and network latency. To guarantee a high quality of service for users, the network latency should be minimized while maintaining an acceptable inference accuracy. To meet this requirement, LM quantization is proposed to reduce the latency. However, excessive quantization may severely degrade the accuracy of LM inference. To this end, the promising fluid antenna (FA) technology is investigated for enhancing the transmission capacity, leading to a lower network latency in the LM-embedded multiple-input multiple-output (MIMO) network. To design the FA-assisted LM-embedded network under the low-latency and high-accuracy requirements, the latency and peak signal-to-noise ratio (PSNR) are considered in the objective function. Then, an efficient optimization algorithm is proposed under the block coordinate descent framework. Simulation results are provided to show the convergence behavior of the proposed algorithm and the performance gains of the proposed FA-assisted LM-embedded network over the other benchmark networks in terms of network latency and PSNR.
The growth in the number of connected devices and network densification is driving an increasing demand for radio network resources, particularly Radio Frequency (RF) spectrum. Given the dynamic and complex nature of contemporary wireless environments, characterized by a wide variety of devices and multiple Radio Access Technologies (RATs), spectrum sensing is envisioned to become a building block of future 6G networks, including as a component within O-RAN or digital twins. However, current state-of-the-art (SotA) research on RAT classification predominantly revolves around supervised Convolutional Neural Network (CNN)-based approaches that require extensive labeled datasets. As a result, it is unclear how existing models behave in environments for which training data is unavailable, leaving open questions regarding their generalization capabilities. In this paper, we propose a new spectrum sensing workflow in which model training does not require any prior knowledge of the RATs transmitting in the area (i.e., no labeled data) and class assignment can be easily done through manual mapping. Furthermore, we adapt a self-supervised learning (SSL) deep clustering architecture capable of autonomously extracting spectrum features from raw 1D Fast Fourier Transform (FFT) data. We evaluate the proposed architecture on three real-world datasets from three European cities, in the 868 MHz, 2.4 GHz, and 5.9 GHz bands, containing over 10 RATs, and show that the developed model achieves superior performance by up to 35 percentage points with 22% fewer trainable parameters and 50% fewer floating-point operations per second (FLOPS) compared to a SotA autoencoder (AE)-based reference architecture.
Automatic modulation classification (AMC) is essential for wireless communication systems in both military and civilian applications. However, existing deep learning-based AMC methods often require large amounts of labeled signals and struggle with non-fixed signal lengths, distribution shifts, and limited labeled data. To address these challenges, we propose modulation-driven feature fusion via diffusion model (ModFus-DM), a novel unsupervised AMC framework that leverages the generative capacity of diffusion models for robust modulation representation learning. We design a modulated signal diffusion generation model (MSDGM) to implicitly capture structural and semantic information through a progressive denoising process. Additionally, we propose the diffusion-aware feature fusion (DAFFus) module, which adaptively aggregates multi-scale diffusion features to enhance discriminative representation. Extensive experiments on the RML2016.10A, RML2016.10B, RML2018.01A and RML2022 datasets demonstrate that ModFus-DM significantly outperforms existing methods in various challenging scenarios, such as limited-label settings, distribution shifts, variable-length signal recognition and channel fading. Notably, ModFus-DM achieves over 88.27% accuracy in 24-type recognition tasks at SNR $\geq$ 12 dB with only 10 labeled signals per type.
This letter presents an innovative scheme to enhance the communication rate and energy efficiency (EE) of an Unmanned Aerial Vehicle (UAV) in wireless powered communication networks (WPCNs) by deploying the emerging fluid antenna system (FAS) technology onto the UAV. Our proposed approach leverages the dynamic port-switching capability of FAS, enabling the UAV to adaptively select the optimal antenna location that maximizes channel gain for both downlink wireless power transfer (WPT) and uplink wireless data transfer (WDT). We derive both the exact analytical expression of the ergodic spectral rate and its asymptotic expression in the high signal-to-noise ratio (SNR) regime under Nakagami-m correlated fading channels. Monte Carlo simulation results confirm the accuracy of the analytical expressions and demonstrate the substantial increase in the energy efficiency of the UAV with FAS compared to fixed antenna systems.
Aneurysmal subarachnoid hemorrhage (SAH) is a life-threatening neurological emergency with mortality rates exceeding 30%. Transfer learning from related hematoma types represents a potentially valuable but underexplored approach. Although Unet architectures remain the gold standard for medical image segmentation due to their effectiveness on limited datasets, Low-Rank Adaptation (LoRA) methods for parameter-efficient transfer learning have been rarely applied to convolutional neural networks in medical imaging contexts. We implemented a Unet architecture pre-trained on computed tomography scans from 124 traumatic brain injury patients across multiple institutions, then fine-tuned on 30 aneurysmal SAH patients from the University of Michigan Health System using 3-fold cross-validation. We developed a novel CP-LoRA method based on tensor CP-decomposition and introduced DoRA variants (DoRA-C, convDoRA, CP-DoRA) that decompose weight matrices into magnitude and directional components. We compared these approaches against existing LoRA methods (LoRA-C, convLoRA) and standard fine-tuning strategies across different modules on a multi-view Unet model. LoRA-based methods consistently outperformed standard Unet fine-tuning. Performance varied by hemorrhage volume, with all methods showing improved accuracy for larger volumes. CP-LoRA achieved comparable performance to existing methods while using significantly fewer parameters. Over-parameterization with higher ranks consistently yielded better performance than strictly low-rank adaptations. This study demonstrates that transfer learning between hematoma types is feasible and that LoRA-based methods significantly outperform conventional Unet fine-tuning for aneurysmal SAH segmentation.
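To illustrate the general conv-LoRA idea (not the paper's CP-decomposition variant), the sketch below freezes a pretrained convolution and trains a rank-r parallel path; CP-LoRA would instead factor the kernel tensor itself, which is not reproduced here.

```python
import torch
import torch.nn as nn

# Generic low-rank adaptation of a convolutional layer: the pretrained conv
# is frozen and a rank-r path (k x k conv into r channels, then 1x1 conv
# back out) is trained. Zero-initializing the up-projection makes the
# wrapped module exactly match the frozen conv at the start of fine-tuning.
class ConvLoRA(nn.Module):
    def __init__(self, frozen_conv: nn.Conv2d, rank: int = 4, scale: float = 1.0):
        super().__init__()
        self.frozen = frozen_conv
        for p in self.frozen.parameters():
            p.requires_grad = False
        self.down = nn.Conv2d(frozen_conv.in_channels, rank,
                              kernel_size=frozen_conv.kernel_size,
                              stride=frozen_conv.stride,
                              padding=frozen_conv.padding, bias=False)
        self.up = nn.Conv2d(rank, frozen_conv.out_channels, 1, bias=False)
        nn.init.zeros_(self.up.weight)   # adapter starts as an identity update
        self.scale = scale

    def forward(self, x):
        return self.frozen(x) + self.scale * self.up(self.down(x))
```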
This Letter fills the research gap on physics-consistent optimization for reconfigurable intelligent surfaces (RISs) with mutual coupling (MC) and 1-bit-tunable elements, a common hardware constraint in existing RIS prototypes. We compare a model-based method (temperature-annealed back-propagation) and model-agnostic methods (coordinate descent, genetic algorithm), and evaluate potential benefits of intelligently initializing these methods. To facilitate our evaluation, we introduce a technique for generating statistical ensembles of multiport-network model parameters, wherein a single hyper-parameter adjusts the MC strength. The technique is a generalization of Rayleigh fading to radio environments with deterministic programmability, and it accounts for passivity constraints as well as the coherent-backscattering effect. We find that, except when MC is negligible, coordinate descent with random initialization yields the most favorable trade-off in terms of performance, execution time and memory usage. We expect our findings to extend to beyond-diagonal RIS, stacked intelligent metasurfaces, dynamic metasurface antennas, and wave-domain physical neural networks.
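The coordinate-descent baseline is straightforward to sketch: sweep the 1-bit elements, keep any flip that improves the physics-consistent objective, and stop when a full sweep yields no improvement. The `objective` callable standing in for the multiport-network model (including mutual coupling) is an assumption of this illustration.

```python
import numpy as np

# Coordinate descent over a 1-bit-tunable RIS configuration. `objective`
# wraps the physics-consistent forward model and returns the quantity to
# maximize (e.g., received power) for a given binary configuration.
def coordinate_descent_1bit(objective, num_elements, rng=None):
    rng = rng or np.random.default_rng()
    config = rng.integers(0, 2, size=num_elements)  # random initialization
    best = objective(config)
    improved = True
    while improved:
        improved = False
        for i in range(num_elements):
            config[i] ^= 1                  # flip element i
            val = objective(config)
            if val > best:
                best = val                  # keep the improving flip
                improved = True
            else:
                config[i] ^= 1              # revert the flip
    return config, best
```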
Recently, large language models (LLMs) have driven promising progress in lossless image compression. However, directly adopting existing paradigms for medical images suffers from an unsatisfactory trade-off between compression performance and efficiency. Moreover, existing LLM-based compressors often overlook the security of the compression process, which is critical in modern medical scenarios. To this end, we propose a novel joint lossless compression and steganography framework. Inspired by bit plane slicing (BPS), we find it feasible to securely embed privacy messages into medical images in an invisible manner. Based on this insight, an adaptive modalities decomposition strategy is first devised to partition the entire image into two segments, providing global and local modalities for subsequent dual-path lossless compression. During this dual-path stage, we innovatively propose a segmented message steganography algorithm within the local modality path to ensure the security of the compression process. Coupled with the proposed anatomical priors-based low-rank adaptation (A-LoRA) fine-tuning strategy, extensive experimental results demonstrate the superiority of our proposed method in terms of compression ratios, efficiency, and security. The source code will be made publicly available.
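The bit-plane-slicing insight can be illustrated with plain least-significant-bit embedding, where a message hides in the lowest plane of the image; the paper's segmented steganography within the dual-path compressor is considerably more elaborate than this sketch.

```python
import numpy as np

# LSB embedding via bit-plane slicing: the least significant plane of a
# medical image carries little diagnostic structure, so overwriting it
# with message bits is visually imperceptible.
def embed_lsb(image: np.ndarray, message_bits: np.ndarray) -> np.ndarray:
    flat = image.astype(np.uint8).ravel().copy()
    assert message_bits.size <= flat.size, "message longer than capacity"
    # Clear the LSB of the carrier pixels, then write the message bits.
    flat[:message_bits.size] = (
        (flat[:message_bits.size] & 0xFE) | message_bits.astype(np.uint8)
    )
    return flat.reshape(image.shape)

def extract_lsb(image: np.ndarray, n_bits: int) -> np.ndarray:
    # Recover the message by reading back the least significant plane.
    return image.astype(np.uint8).ravel()[:n_bits] & 1
```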
This work proposes a hybrid, explicit-implicit temporal buffering scheme for conditional residual video coding. Recent conditional coding methods propagate implicit temporal information for inter-frame coding, demonstrating superior coding performance to those relying exclusively on previously decoded frames (i.e. the explicit temporal information). However, these methods require substantial memory to store a large number of implicit features. This work presents a hybrid buffering strategy. For inter-frame coding, it buffers one previously decoded frame as the explicit temporal reference and a small number of learned features as implicit temporal reference. Our hybrid buffering scheme for conditional residual coding outperforms the single use of explicit or implicit information. Moreover, it allows the total buffer size to be reduced to the equivalent of two video frames with a negligible performance drop on 2K video sequences. The ablation experiment further sheds light on how these two types of temporal references impact the coding performance.
Alzheimer's disease (AD) progression follows a complex continuum from normal cognition (NC) through mild cognitive impairment (MCI) to dementia, yet most deep learning approaches oversimplify this into discrete classification tasks. This study introduces M$^3$AD, a novel multi-task multi-gate mixture of experts framework that jointly addresses diagnostic classification and cognitive transition modeling using structural MRI. We incorporate three key innovations: (1) an open-source T1-weighted sMRI preprocessing pipeline, (2) a unified learning framework capturing NC-MCI-AD transition patterns with demographic priors (age, gender, brain volume) for improved generalization, and (3) a customized multi-gate mixture of experts architecture enabling effective multi-task learning with structural MRI alone. The framework employs specialized expert networks for diagnosis-specific pathological patterns while shared experts model common structural features across the cognitive continuum. A two-stage training protocol combines SimMIM pretraining with multi-task fine-tuning for joint optimization. Comprehensive evaluation across six datasets comprising 12,037 T1-weighted sMRI scans demonstrates superior performance: 95.13% accuracy for three-class NC-MCI-AD classification and 99.15% for binary NC-AD classification, representing improvements of 4.69% and 0.55% over state-of-the-art approaches. The multi-task formulation simultaneously achieves 97.76% accuracy in predicting cognitive transition. Our framework outperforms existing methods using fewer modalities and offers a clinically practical solution for early intervention. Code: this https URL.
This paper presents a heuristic method for simplifying resource allocation in access systems, leveraging the concept of comparative advantage to reduce computational complexity while maintaining near-optimal performance. Using power-division non-orthogonal multiple access (PD-NOMA) as an example, we demonstrate how this approach mitigates the challenge of power allocation in multi-cell networks. Our method reduces the search space for optimization, significantly decreasing computational overhead while ensuring efficient spectrum utilization. In principle, the method halves the dimension of the search space. Extensive analysis and simulations validate its effectiveness, highlighting its potential for practical deployment in next-generation wireless networks. The proposed framework can help streamline resource allocation in complex communication environments, enhancing system performance and scalability.
The integration of reconfigurable intelligent surfaces (RIS) with extremely large multiple-input multiple-output (MIMO) arrays at the base station has emerged as a key enabler for enhancing wireless network performance. However, this setup introduces high-dimensional channel matrices, leading to increased computational complexity and pilot overhead in channel estimation. Mutual coupling (MC) effects among densely packed unit cells, spatial correlation, and near-field propagation conditions further complicate the estimation process. Conventional estimators, such as linear minimum mean square error (MMSE), require channel statistics that are challenging to acquire for high-dimensional arrays, while least squares (LS) estimators suffer from performance limitations. To address these challenges, the reduced-subspace least squares (RS-LS) estimator leverages array geometry to enhance estimation accuracy. This work advances the promising RS-LS estimation algorithm by explicitly incorporating MC effects into the more realistic and challenging near-field propagation environment within the increasingly relevant generalized RIS-aided MIMO framework. Additionally, we investigate the impact of MC on the spatial degrees of freedom (DoF). Our analysis reveals that accounting for MC effects provides a significant performance gain of approximately 5 dB at an SNR of 5 dB, compared to conventional methods that ignore MC.
Accurate breast tumor segmentation in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is important for downstream tasks such as pathological complete response (pCR) assessment. In this work, we address both segmentation and pCR classification using the large-scale MAMA-MIA DCE-MRI dataset. We employ a large-kernel MedNeXt architecture with a two-stage training strategy that expands the receptive field from $3\times3\times3$ to $5\times5\times5$ kernels using the UpKern algorithm. This approach allows stable transfer of learned features to larger kernels, improving segmentation performance on the unseen validation set. An ensemble of large-kernel models achieved a Dice score of 0.67 and a normalized Hausdorff Distance (NormHD) of 0.24. For pCR classification, we trained a self-normalizing network (SNN) on radiomic features extracted from the predicted segmentations and first post-contrast DCE-MRI, reaching an average balanced accuracy of 57\%, and up to 75\% in some subgroups. Our findings highlight the benefits of combining larger receptive fields and radiomics-driven classification while motivating future work on advanced ensembling and the integration of clinical variables to further improve performance and generalization. Code: this https URL
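A sketch of the UpKern-style kernel-growing step as we understand it, assuming PyTorch and illustrative channel counts: the larger kernel is initialized by trilinearly upsampling the trained smaller kernel's weights, so training can resume rather than restart.

    # UpKern-style initialization sketch: grow a trained 3x3x3 conv into a
    # 5x5x5 conv by trilinear upsampling of its weights (shapes illustrative;
    # biases and all other layers are copied unchanged).
    import torch
    import torch.nn.functional as F

    small = torch.nn.Conv3d(32, 64, kernel_size=3, padding=1)
    large = torch.nn.Conv3d(32, 64, kernel_size=5, padding=2)

    with torch.no_grad():
        w = small.weight                              # (64, 32, 3, 3, 3)
        w_up = F.interpolate(w, size=(5, 5, 5),
                             mode="trilinear", align_corners=True)
        large.weight.copy_(w_up)
        large.bias.copy_(small.bias)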
This paper introduces a novel application of Test-Time Training (TTT) for Speech Enhancement, addressing the challenges posed by unpredictable noise conditions and domain shifts. The method combines a main speech enhancement task with a self-supervised auxiliary task in a Y-shaped architecture. The model dynamically adapts to new domains at inference time by optimizing the proposed self-supervised tasks, such as noise-augmented signal reconstruction or masked spectrogram prediction, bypassing the need for labeled data. We further introduce various TTT strategies offering a trade-off between adaptation and efficiency. Evaluations across synthetic and real-world datasets show consistent improvements across speech quality metrics, outperforming the baseline model. This work highlights the effectiveness of TTT in speech enhancement, providing insights for future research in adaptive and robust speech processing.
This work presents the results of a methodological transfer from remote sensing to healthcare, adapting AMBER -- a transformer-based model originally designed for multiband images, such as hyperspectral data -- to the task of 3D medical datacube segmentation. In this study, we use the AMBER architecture with Adaptive Fourier Neural Operators (AFNO) in place of the multi-head self-attention mechanism. While existing models rely on various forms of attention to capture global context, AMBER-AFNO achieves this through frequency-domain mixing, enabling a drastic reduction in model complexity. This design reduces the number of trainable parameters by over 80% compared to UNETR++, while maintaining a FLOPs count comparable to other state-of-the-art architectures. Model performance is evaluated on two benchmark 3D medical datasets -- ACDC and Synapse -- using standard metrics such as Dice Similarity Coefficient (DSC) and Hausdorff Distance (HD), demonstrating that AMBER-AFNO achieves competitive or superior accuracy with significant gains in training efficiency, inference speed, and memory usage.
Research in semantic communication has garnered considerable attention, particularly in the area of image transmission, where joint source-channel coding (JSCC)-based neural network (NN) modules are frequently employed. However, these systems often experience performance degradation over time due to an outdated knowledge base, highlighting the need for periodic updates. To address this challenge in the context of training JSCC modules for image transmission, we propose a federated learning (FL) algorithm with semantic feature reconstruction (FR), named FedSFR. This algorithm more efficiently utilizes the available communication capacity by allowing some of the selected FL participants to transmit smaller feature vectors instead of local update information. Unlike conventional FL methods, our approach integrates FR at the parameter server (PS), stabilizing training and enhancing image transmission quality. Experimental results demonstrate that the proposed scheme significantly enhances both the stability and effectiveness of the FL process compared to other algorithms. Furthermore, we mathematically derive the convergence rate to validate the improved performance.
Most frame-based learned video codecs can be interpreted as recurrent neural networks (RNNs) propagating reference information along the temporal dimension. This work revisits the limitations of the current approaches from an RNN perspective. The output-recurrence methods, which propagate decoded frames, are intuitive but impose dual constraints on the output decoded frames, leading to suboptimal rate-distortion performance. In contrast, the hidden-to-hidden connection approaches, which propagate latent features within the RNN, offer greater flexibility but require large buffer sizes. To address these issues, we propose HyTIP, a learned video coding framework that combines both mechanisms. Our hybrid buffering strategy uses explicit decoded frames and a small number of implicit latent features to achieve competitive coding performance. Experimental results show that our HyTIP outperforms the sole use of either output-recurrence or hidden-to-hidden approaches. Furthermore, it achieves comparable performance to state-of-the-art methods but with a much smaller buffer size, and outperforms VTM 17.0 (Low-delay B) in terms of PSNR-RGB and MS-SSIM-RGB. The source code of HyTIP is available at this https URL.
Cyber attacks are unavoidable in networked discrete event systems, where the plant and the supervisor communicate with each other via networks. Because of the nondeterminism in observation and control caused by cyber attacks, the language generated by the supervised system becomes nondeterministic. The small language is defined as the lower bound on all possible languages that can be generated by the supervised system, which is needed for a supervised system to perform some required tasks under cyber attacks. In this paper, we investigate supervisory control for the small language. After introducing CA-S-controllability and CA-S-observability, we prove that the supervisory control problem of achieving a required small language is solvable if and only if the given language is CA-S-controllable and CA-S-observable. If the given language is not CA-S-controllable and/or CA-S-observable, we derive conditions under which the infimal CA-S-controllable and CA-S-observable superlanguage exists and can be used to design a supervisor satisfying the given requirement.
As power systems evolve with increased integration of renewable energy sources, they become more complex and vulnerable to both cyber and physical threats. This study validates a centralized Dynamic State Estimation (DSE) algorithm designed to enhance the protection of power systems, particularly focusing on microgrids with substantial renewable energy integration. The algorithm, utilizing a structured hypothesis testing framework, systematically identifies and differentiates anomalies caused by cyberattacks from those resulting from physical faults. The algorithm was evaluated through four case studies: a False Data Injection Attack (FDIA) via manipulation of Current Transformer (CT) ratios, a single line-to-ground (SLG) fault, and two combined scenarios involving both anomalies. Results from real-time simulations demonstrate that the algorithm effectively distinguishes between cyber-induced anomalies and physical faults, thereby significantly enhancing the reliability and security of energy systems. This research underscores the critical role of advanced diagnostic tools in protecting power systems against the growing prevalence of cyber-physical threats, enhancing the resilience of the grid and preventing potential blackouts by avoiding the mis-operation of protection relays.
Reliable and interpretable tumor classification from clinical imaging remains a core challenge due to heterogeneous modality quality, limited annotations, and the lack of structured anatomical guidance. We introduce REACT-KD, a Region-Aware Cross-modal Topological Knowledge Distillation framework that transfers rich supervision from high-fidelity multi-modal sources into a lightweight CT-based student model. The framework uses a dual teacher design: one branch captures structure-function relationships using dual-tracer PET/CT, and the other models dose-aware features through synthetically degraded low-dose CT data. These branches jointly guide the student model through two complementary objectives. The first focuses on semantic alignment via logits distillation, while the second models anatomical topology using region graph distillation. A shared CBAM-3D module is employed to maintain consistent attention across modalities. To improve reliability for deployment, REACT-KD introduces modality dropout during training, allowing inference under partial or noisy inputs. The staging task for hepatocellular carcinoma (HCC) is conducted as a case study. REACT-KD achieves an average AUC of 93.4% on an internal PET/CT cohort and maintains 76.6% to 81.5% AUC across varying dose levels in external CT testing. Decision curve analysis shows that REACT-KD consistently provides the highest clinical benefit across decision thresholds, supporting its potential in real-world diagnostics. Code is available at this https URL.
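As one concrete piece of the teacher-student setup, the following is a generic temperature-scaled logits-distillation loss, a standard knowledge-distillation recipe shown for illustration; REACT-KD's exact objective may differ.

    # Generic temperature-scaled logits distillation (standard KD recipe).
    import torch
    import torch.nn.functional as F

    def distill_loss(student_logits, teacher_logits, T=2.0):
        log_p_s = F.log_softmax(student_logits / T, dim=-1)
        p_t = F.softmax(teacher_logits / T, dim=-1)
        # The T^2 factor keeps gradient magnitudes comparable across temperatures.
        return F.kl_div(log_p_s, p_t, reduction="batchmean") * T * T

    s = torch.randn(4, 3)   # student logits, e.g., 3 HCC stages (illustrative)
    t = torch.randn(4, 3)   # teacher logits from the PET/CT branch (illustrative)
    loss = distill_loss(s, t)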
Reversible image conversion (RIC) suffers from ill-posedness because its forward conversion process constitutes an underdetermined system. Despite employing invertible neural networks (INN), existing RIC methods intrinsically remain ill-posed, as they inevitably introduce uncertainty by incorporating randomly sampled variables. To tackle the ill-posedness dilemma, we focus on developing a reliable approximate left inverse for the underdetermined system by constructing an overdetermined system with a non-zero Gram determinant, thus ensuring a well-posed solution. Based on this principle, we propose a well-posed invertible $1\times1$ convolution (WIC), which eliminates the reliance on random variable sampling and enables the development of well-posed invertible networks. Furthermore, we design two innovative networks, WIN-Naïve and WIN, with the latter incorporating advanced skip-connections to enhance long-term memory. Our methods are evaluated across diverse RIC tasks, including reversible image hiding, image rescaling, and image decolorization, consistently achieving state-of-the-art performance. Extensive experiments validate the effectiveness of our approach, demonstrating its ability to overcome the bottlenecks of existing RIC solutions and setting a new benchmark in the field. Codes are available at this https URL.
The predominant metric for evaluating speech recognizers, the Word Error Rate (WER), has been extended in different ways to handle transcripts produced by long-form multi-talker speech recognizers. These systems process long recordings containing multiple speakers and complex speaking patterns, so the classical WER cannot be applied. There are speaker-attributed approaches that count speaker confusion errors, such as the concatenated minimum-permutation WER (cpWER) and the time-constrained cpWER (tcpWER), and speaker-agnostic approaches, which aim to ignore speaker confusion errors, such as the Optimal Reference Combination WER (ORC-WER) and the MIMO-WER. These WERs evaluate different aspects and error types (e.g., temporal misalignment), but a detailed comparison has not yet been made. We therefore present a unified description of the existing WERs and highlight when to use which metric. To further analyze how many errors are caused by speaker confusion, we propose the Diarization-invariant cpWER (DI-cpWER). It ignores speaker attribution errors, and its difference from the cpWER reflects the impact of speaker confusions on the WER. Since error types cannot reliably be classified automatically, we discuss ways to visualize sequence alignments between the reference and hypothesis transcripts to facilitate the spotting of errors by a human judge. Since some WER definitions have high computational complexity, we introduce a greedy algorithm to approximate the ORC-WER and DI-cpWER with high precision ($<0.1\%$ deviation in our experiments) and polynomial instead of exponential complexity. To improve the plausibility of the metrics, we also incorporate the time constraint from the tcpWER into the ORC-WER and MIMO-WER, also significantly reducing their computational complexity.
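For reference, the classical single-speaker WER that all of these metrics extend is a word-level Levenshtein distance normalized by the reference length; a minimal self-contained implementation:

    # Classical WER: (substitutions + insertions + deletions) / reference words.
    def wer(reference: str, hypothesis: str) -> float:
        ref, hyp = reference.split(), hypothesis.split()
        # d[i][j] = edit distance between ref[:i] and hyp[:j]
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
                d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
        return d[len(ref)][len(hyp)] / max(len(ref), 1)

    print(wer("the cat sat", "the cat sat down"))  # 1 insertion / 3 words ~ 0.33

The multi-talker variants above differ in how they assign hypothesis segments to reference speakers before this edit distance is accumulated.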
Integrated sensing and communications (ISAC) is a key enabler for next-generation wireless systems, aiming to support both high-throughput communication and high-accuracy environmental sensing using shared spectrum and hardware. Theoretical performance metrics, such as mutual information (MI), minimum mean squared error (MMSE), and Bayesian Cramér--Rao bound (BCRB), play a key role in evaluating ISAC system performance limits. However, in practice, hardware impairments, multipath propagation, interference, and scene constraints often result in nonlinear, multimodal, and non-Gaussian distributions, making it challenging to derive these metrics analytically. Recently, there has been a growing interest in applying score-based generative models to characterize these metrics from data, although not discussed for ISAC. This paper provides a tutorial-style summary of recent advances in score-based performance evaluation, with a focus on ISAC systems. We refer to the summarized framework as scoring ISAC, which not only reflects the core methodology based on score functions but also emphasizes the goal of scoring (i.e., evaluating) ISAC systems under realistic conditions. We present the connections between classical performance metrics and the score functions and provide the practical training techniques for learning score functions to estimate performance metrics. Proof-of-concept experiments on target detection and localization validate the accuracy of score-based performance estimators against ground-truth analytical expressions, illustrating their ability to replicate and extend traditional analyses in more complex, realistic settings. This framework demonstrates the great potential of score-based generative models in ISAC performance analysis, algorithm design, and system optimization.
Contactless cardiac monitoring has vast potential to replace contact-based monitoring in various future scenarios such as smart home and in-cabin monitoring. Various contactless sensors can potentially be employed for cardiac monitoring, such as cameras, acoustic sensors, Wi-Fi routers, and radars. Among these sensors, radar can achieve unobtrusive monitoring with high accuracy and robustness at the same time. Research on radar-based cardiac monitoring can be generally divided into radar architecture design and signal processing, where the former has been thoroughly reviewed in the literature but not the latter. To the best of the authors' knowledge, this is the first review paper that focuses on the algorithms for extracting cardiac features from the received radar signal. In addition, a new taxonomy is proposed to reveal the core feature of each algorithm, with the pros and cons evaluated in detail. Furthermore, the public datasets containing the received radar signal and ground-truth cardiac feature signal are listed with detailed configurations, and the corresponding evaluations may help researchers select a suitable dataset. Finally, several unsolved challenges and future directions are suggested and discussed in detail to encourage future research on overcoming the main obstacles in this field. In summary, this review can serve as a guide for researchers and practitioners to quickly understand the research trends and recent developments in cardiac feature extraction algorithms, and the related areas are worth further investigation based on the proposed challenges and future directions.
Broad beam beamforming (BF) design in multiple-input multiple-output (MIMO) systems can be convenient for initial access, synchronization, and sensing in cellular networks by avoiding the overheads of sweeping methods while making efficient use of resources. Phase-only BF is key to maximizing power efficiency across antennas. A successful method for producing broad beams is phase-only dual-polarization BF (DPBF). However, its efficiency has not been demonstrated under non-line-of-sight (NLoS) conditions. Therefore, this paper contributes by evaluating DPBF in collocated and distributed MIMO configurations under both line-of-sight (LoS) and NLoS channel conditions. We model the reflection coefficients for different materials in NLoS conditions and propose the use of an orthogonal space-time block code to improve coverage compared to DPBF in collocated MIMO (C-MIMO). We further propose a DPBF method for distributed MIMO and show that it achieves better coverage than C-MIMO with DPBF.
Maximum likelihood factor analysis has been used for direction of arrival estimation in unknown nonuniform noise and some iterative approaches have been developed. In particular, the Factor Analysis for Anisotropic Noise (FAAN) method proposed by Stoica and Babu has excellent convergence properties. In this letter, the Expectation/Conditional Maximization Either (ECME) algorithm, an extension of the expectation-maximization algorithm, is designed, which has almost the same computational complexity at each iteration as the FAAN method. However, numerical results show that the ECME algorithm yields faster stable convergence and is computationally more efficient.
An Automatic Speech Recognition (ASR) system consists of an acoustic model (AM) and a language model (LM). The AM estimates the probability of an acoustic signal given a sequence of linguistic units, typically phones, characters, or tokens, while the LM assesses the likelihood of a specific sequence of words or tokens. Although Large Language Models (LLMs) have demonstrated significant potential across various tasks, integrating them into ASR remains an open challenge. By decomposing the maximum a posteriori (MAP) estimator of words (or tokens) given the acoustic signal, we derive an iterative procedure that facilitates a novel integration of the AM and LLM while maintaining their separability. This approach enables each component to be independently trained and improved using its own data, thereby maximizing the system's performance by leveraging the strengths of both models without requiring joint optimization. We illustrate the effectiveness of our method in comparison to three language models: N-gram, GCNN, and TransformerLM, across multiple datasets spanning various speech styles, including ALLSSTAR, WSJ0, and TED-LIUM 3. Our experiments involved two acoustic models (wav2vec 2.0 and HuBERT) and three LLMs (GPT-2, LLaMA 2, and Falcon). Notably, our method demonstrates particular efficacy in addressing complex speech sentences, acronyms, and domain-specific vocabulary.
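As a simplified illustration of combining separable AM and LM scores, here is plain log-linear hypothesis rescoring; the paper's iterative MAP procedure is more involved, and every score below is a made-up placeholder.

    # Log-linear rescoring of ASR hypotheses (simple shallow fusion, shown
    # only to illustrate AM/LM separability; scores are fabricated).
    # am_score(h): log p(acoustics | h) from the acoustic model
    # lm_score(h): log p(h) from the language model; lam is a tunable weight.
    def rescore(hypotheses, am_score, lm_score, lam=0.5):
        return max(hypotheses, key=lambda h: am_score(h) + lam * lm_score(h))

    hyps = ["i sea the ship", "i see the ship"]
    best = rescore(hyps,
                   am_score=lambda h: {"i sea the ship": -10.1,
                                       "i see the ship": -10.3}[h],
                   lm_score=lambda h: {"i sea the ship": -25.0,
                                       "i see the ship": -12.0}[h])
    print(best)  # "i see the ship": the LM overrides the small acoustic gap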
Distributed formation maneuver control refers to the problem of maneuvering a group of agents to change their formation shape by adjusting the motions of partial agents, where the controller of each agent only requires local information measured from its neighbors. Although this problem has been extensively investigated, existing approaches are mostly limited to uniform scaling transformations. This article proposes a new type of local matrix-valued constraints, via which non-uniform scaling control of position formation can be achieved by tuning the positions of only two agents (i.e., leaders). Here, the non-uniform scaling transformation refers to scaling the position formation with different ratios along different orthogonal coordinate directions. Moreover, by defining scaling and translation of attitude formation, we propose a distributed control scheme for scaling and translation maneuver control of joint position-attitude formations. It is proven that the proposed controller achieves global convergence, provided that the sensing graph among agents is a 2-rooted bidirectional graph. Compared with the affine formation maneuver control approach, the proposed approach leverages a sparser sensing graph, requires fewer leaders, and additionally enables scaling transformations of the attitude formation. A simulation example is proposed to demonstrate our theoretical results.
Sex conversion in speech involves privacy risks from data collection and often leaves residual sex-specific cues in outputs, even when target speaker references are unavailable. We introduce RASO for Reference-free Adversarial Sex Obfuscation. Innovations include a sex-conditional adversarial learning framework to disentangle linguistic content from sex-related acoustic markers and explicit regularisation to align fundamental frequency distributions and formant trajectories with sex-neutral characteristics learned from sex-balanced training data. RASO preserves linguistic content and, even when assessed under a semi-informed attack model, it significantly outperforms a competing approach to sex obfuscation.
In uplink integrated sensing and communication (ISAC) systems, pilot signal design is crucial for enabling accurate channel estimation and reliable radar sensing. In orthogonal frequency-division multiple access (OFDMA)-based frameworks, conventional pilot allocation schemes face a trade-off between spectral efficiency (SE) and sensing performance. Interleaved pilots improve user equipment (UE) multiplexing through sparse allocation but reduce the maximum unambiguous range. Conversely, orthogonal block-based pilots reduce range ambiguity but degrade sensing resolution due to limited delay granularity. To address this trade-off, the phase-shifted ISAC (PS-ISAC) scheme was recently proposed for uplink multiple access in ISAC systems. However, PS-ISAC suffers from spectral inefficiency due to fixed cyclic prefix (CP) constraints. To overcome these limitations, we propose adaptive phase-shifted ISAC (APS-ISAC), an enhanced pilot scheme that employs an overlapped block-pilot structure with UE-specific phase shifts determined by the maximum excess delay of each UE. This design enables UEs to share the same time-frequency resources while preserving separable and contiguous channel impulse responses (CIRs) at the base station (BS). Simulation results show that APS-ISAC significantly outperforms conventional pilot allocation methods in terms of SE, approximately doubling the number of multiplexed UEs. It also achieves lower mean square error (MSE) under power constraints with reduced complexity. Furthermore, it yields maximum range resolution and unambiguous sensing performance. These results establish APS-ISAC as a scalable, spectrally efficient, ambiguity-resilient, and low-complexity pilot design paradigm for future uplink ISAC systems.
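A small NumPy check of the delay-domain mechanism that phase-shifted pilots exploit (FFT size and delay offset are illustrative, not the paper's parameters): a linear phase ramp across subcarriers circularly shifts a UE's channel impulse response, so UEs sharing the same resources can land in disjoint delay regions.

    # Multiplying subcarrier k by exp(-j 2 pi k d / N) shifts the CIR by d taps.
    import numpy as np

    N, d = 64, 16                        # FFT size, UE-specific delay offset
    h = np.zeros(N, complex)
    h[:4] = [1.0, 0.6, 0.3, 0.1]         # short CIR near delay 0
    H = np.fft.fft(h)
    k = np.arange(N)
    H_shifted = H * np.exp(-2j * np.pi * k * d / N)   # per-subcarrier phase shift
    h_shifted = np.fft.ifft(H_shifted)
    print(np.argmax(np.abs(h_shifted)))  # 16: CIR moved into the UE's delay slot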
Monitoring respiration parameters such as respiratory rate could be beneficial to understand the impact of training on equine health and performance and ultimately improve equine welfare. In this work, we compare deep learning-based methods to an adapted signal processing method to automatically detect cyclic respiratory events and extract the dynamic respiratory rate from microphone recordings during high intensity exercise in Standardbred trotters. Our deep learning models are able to detect exhalation sounds (median F1 score of 0.94) in noisy microphone signals and show promising results on unlabelled signals at lower exercising intensity, where the exhalation sounds are less recognisable. Temporal convolutional networks were better at detecting exhalation events and estimating dynamic respiratory rates (median F1: 0.94, Mean Absolute Error (MAE) $\pm$ Confidence Intervals (CI): 1.44$\pm$1.04 bpm, Limits Of Agreements (LOA): 0.63$\pm$7.06 bpm) than long short-term memory networks (median F1: 0.90, MAE$\pm$CI: 3.11$\pm$1.58 bpm) and signal processing methods (MAE$\pm$CI: 2.36$\pm$1.11 bpm). This work is the first to automatically detect equine respiratory sounds and automatically compute dynamic respiratory rates in exercising horses. In the future, our models will be validated on lower exercising intensity sounds and different microphone placements will be evaluated in order to find the best combination for regular monitoring.
This paper offers a data-driven approach for designing adaptive suboptimal second-order sliding mode (ASSOSM) controllers for single-input nonlinear systems, characterized by perturbed strict-feedback structures with unknown dynamics. The proposed approach is recursive, in which the system dynamics are first decomposed into two parts, referred to as the upper and lower dynamics. The control design task is then divided into two stages, that is, designing a virtual controller for the upper dynamics, followed by synthesizing the actual controller for the full-order system. To this end, we start by collecting noisy data from the system through a finite-time experiment, referred to as a single trajectory. We then formulate a data-dependent condition as a semidefinite program, whose feasibility enables the design of a virtual controller that ensures global asymptotic stability of the origin for the upper dynamics. Building upon this virtual controller, we subsequently propose a data-driven sliding variable that facilitates the design of an ASSOSM controller for the unknown full-order system. This controller guarantees semi-global asymptotic stability of the origin in the presence of disturbances. Specifically, for any prescribed bounded set--no matter how large--the controller's design parameters can be chosen to ensure asymptotic stability of the origin. The effectiveness of the proposed method is demonstrated through three case studies, reflecting different aspects of the approach.
Steady state visual evoked response (SSVEP) is widely used in visual-based diagnosis and applications such as brain-computer interfacing due to its high information transfer rate and the capability to activate commands through simple gaze control. However, one major impediment to using a flashing visual stimulus to obtain SSVEP is eye fatigue, which precludes continued long-term use and hence practical deployment. This, combined with the difficulty of establishing precise pulse-width modulation (PWM), which results in poorer accuracy, warrants the development of an appropriate approach to solve these issues. Various studies have suggested using high-frequency visual stimuli to reduce visual fatigue, but this results in poor response performance. Here, the authors study the use of extremely high duty-cycles in the stimulus in the hope of resolving these constraints. Electroencephalogram data were recorded with PWM duty-cycles of 50 to 95% generated by precise custom-made light-emitting diode hardware and tested on ten subjects. The subjects reported that increasing duty-cycles caused less visual strain at all tested frequencies, and the SSVEP exhibited a subject-independent peak response at a duty-cycle of 85%. This could pave the way for increased usage of SSVEP in practical applications.
3D Gaussian Splatting (3DGS) has emerged as a promising approach for CT reconstruction. However, existing methods rely on the average gradient magnitude of points within the view, often leading to severe needle-like artifacts under sparse-view conditions. To address this challenge, we propose GR-Gaussian, a graph-based 3D Gaussian Splatting framework that suppresses needle-like artifacts and improves reconstruction accuracy under sparse-view conditions. Our framework introduces two key innovations: (1) a Denoised Point Cloud Initialization Strategy that reduces initialization errors and accelerates convergence; and (2) a Pixel-Graph-Aware Gradient Strategy that refines gradient computation using graph-based density differences, improving splitting accuracy and density representation. Experiments on X-3D and real-world datasets validate the effectiveness of GR-Gaussian, achieving PSNR improvements of 0.67 dB and 0.92 dB, and SSIM gains of 0.011 and 0.021. These results highlight the applicability of GR-Gaussian for accurate CT reconstruction under challenging sparse-view conditions.
The reliability of affective state estimation using EEG data is in question, given the variability in reported performance and the lack of standardized evaluation protocols. To investigate this, we reviewed 101 studies, focusing on the widely used DEAP dataset for emotion recognition. Our analysis revealed widespread methodological issues that include data leakage from improper segmentation, biased feature selection, flawed hyperparameter optimization, neglect of class imbalance, and insufficient methodological reporting. Notably, we found that nearly 87% of the reviewed papers contained one or more of these errors. Moreover, through experimental analysis, we observed that such methodological flaws can inflate the classification accuracy by up to 46%. These findings reveal fundamental gaps in standardized evaluation practices and highlight critical deficiencies in the peer review process for machine learning applications in neuroscience, emphasizing the urgent need for stricter methodological standards and evaluation protocols.
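A sketch of the segmentation-leakage pitfall and its fix, assuming scikit-learn; synthetic data stands in for EEG features, and the sizes are illustrative. Segments from the same recording are highly correlated, so splitting them across train and test inflates accuracy.

    import numpy as np
    from sklearn.model_selection import train_test_split, GroupShuffleSplit

    n_subjects, segs_per_subject, n_feat = 32, 40, 16
    X = np.random.randn(n_subjects * segs_per_subject, n_feat)
    y = np.random.randint(0, 2, len(X))
    subject = np.repeat(np.arange(n_subjects), segs_per_subject)

    # Wrong: correlated neighboring segments leak into the test set.
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2, random_state=0)

    # Right: all segments of a held-out subject stay together.
    gss = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
    train_idx, test_idx = next(gss.split(X, y, groups=subject))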
Identifying actionable driver mutations in non-small cell lung cancer (NSCLC) can impact treatment decisions and significantly improve patient outcomes. Despite guideline recommendations, broader adoption of genetic testing remains challenging due to limited availability and lengthy turnaround times. Machine Learning (ML) methods for Computational Pathology (CPath) offer a potential solution; however, research often focuses on only one or two common mutations, limiting the clinical value of these tools and the pool of patients who can benefit from them. This study evaluates various Multiple Instance Learning (MIL) techniques to detect six key actionable NSCLC driver mutations: ALK, BRAF, EGFR, ERBB2, KRAS, and MET ex14. Additionally, we introduce an Asymmetric Transformer Decoder model that employs queries and key-values of varying dimensions to maintain a low query dimensionality. This approach efficiently extracts information from patch embeddings and minimizes overfitting risks, proving highly adaptable to the MIL setting. Moreover, we present a method to directly utilize tissue type in the model, addressing a typical MIL limitation where either all regions or only some specific regions are analyzed, neglecting biological relevance. Our method outperforms top MIL models by an average of 3%, and over 4% when predicting rare mutations such as ERBB2 and BRAF, moving ML-based tests closer to being practical alternatives to standard genetic testing.
This article establishes the relationship between the Koopman eigenfunctions of a first-order nonlinear ODE system and its symmetries. Specifically, a bijective mapping is obtained between i) a commuting local frame of symmetry generators, and ii) a set of Koopman eigenfunctions whose gradients form a local frame. This equivalence is used to convert the problem of finding Koopman eigenfunctions into the equivalent problem of finding commuting vector fields, which are readily given by the flow map of the system. The implication for stability is also studied in terms of a contraction metric that is associated with the commuting local frame of symmetries. The theoretical result is verified with the van der Pol oscillator whose eigenfunctions are computed to the precision of finite numerical integration.
Model predictive control (MPC) is widely used in process control due to its interpretability and ability to handle constraints. As a parametric policy in reinforcement learning (RL), MPC offers strong initial performance and low data requirements compared to black-box policies like neural networks. However, most RL methods rely on first-order updates, which scale well to large parameter spaces but converge at most linearly, making them inefficient when each policy update requires solving an optimal control problem, as is the case with MPC. While MPC policies are typically sparsely parameterized and thus amenable to second-order approaches, existing second-order methods demand second-order policy derivatives, which can be computationally and memory-wise intractable. This work introduces a Gauss-Newton approximation of the deterministic policy Hessian that eliminates the need for second-order policy derivatives, enabling superlinear convergence with minimal computational overhead. To further improve robustness, we propose a momentum-based Hessian averaging scheme for stable training under noisy estimates. We demonstrate the effectiveness of the approach on a nonlinear continuously stirred tank reactor (CSTR), showing faster convergence and improved data efficiency over state-of-the-art first-order methods.
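A minimal NumPy sketch of the Gauss-Newton idea with an averaged curvature estimate; the exponential moving average below is a simple stand-in for the paper's momentum-based Hessian averaging, and the curve-fitting problem is purely illustrative, not an MPC policy.

    # Gauss-Newton for J(th) = 0.5*||r(th)||^2: curvature J^T J needs only
    # first derivatives, which is the property exploited above.
    import numpy as np

    def residual(th, t, y):            # fit y ~ th0 * exp(-th1 * t)
        return th[0] * np.exp(-th[1] * t) - y

    def jacobian(th, t):
        e = np.exp(-th[1] * t)
        return np.stack([e, -th[0] * t * e], axis=1)   # (n, 2)

    rng = np.random.default_rng(0)
    t = np.linspace(0, 4, 50)
    y = 2.0 * np.exp(-0.7 * t) + 0.02 * rng.standard_normal(t.size)

    th, H_avg, beta = np.array([1.5, 0.5]), None, 0.5
    for _ in range(15):
        r, Jm = residual(th, t, y), jacobian(th, t)
        g = Jm.T @ r
        H = Jm.T @ Jm                  # Gauss-Newton curvature, no 2nd derivs
        H_avg = H if H_avg is None else beta * H_avg + (1 - beta) * H
        th = th - np.linalg.solve(H_avg + 1e-6 * np.eye(2), g)
    print(np.round(th, 2))             # should approach ~[2.0, 0.7]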
In this paper, a joint optimal allocation of transmit power at the source and jamming power at the destination is proposed to maximize the average secrecy energy efficiency (SEE) of a wireless network within a finite time duration. The destination transmits the jamming signal to improve secrecy by utilizing its full-duplex capability. The source and destination both have energy harvesting (EH) capability with limited battery capacity. Due to the Markov nature of the system, the problem is formulated as a finite-horizon reinforcement learning (RL) problem. We propose the finite-horizon joint power allocation (FHJPA) algorithm for the finite-horizon RL problem and compare it with a low-complexity greedy algorithm (GA). An infinite-horizon joint power allocation (IHJPA) algorithm is also proposed for the corresponding infinite-horizon problem. A comparative analysis of these algorithms is carried out in terms of SEE, expected total transmitted secure bits, and computational complexity. The results show that the FHJPA algorithm outperforms the GA and IHJPA algorithms due to its appropriate modelling of finite-horizon transmission. When the source node's battery has sufficient energy, the GA can yield performance close to that of the FHJPA algorithm despite its low complexity. As the transmission time horizon increases, the accuracy of the infinite-horizon model improves, reducing the performance gap between the FHJPA and IHJPA algorithms. The computational time comparison shows that the FHJPA algorithm takes $16.6$ percent less time than the IHJPA algorithm.
In this work, we consider the problem of multi-pitch estimation, i.e., identifying super-imposed truncated harmonic series from noisy measurements. We phrase this as recovering a harmonically-structured measure on the unit circle, where the structure is enforced using regularizers based on optimal transport theory. In the resulting framework, a signal's spectral content is simultaneously inferred and assigned, or transported, to a small set of harmonic series defined by their corresponding fundamental frequencies. In contrast to existing methods from the compressed sensing paradigm, the proposed framework decouples regularization and dictionary design and mitigates coherency problems. As a direct consequence, this also introduces robustness to the phenomenon of inharmonicity. From this framework, we derive two estimation methods, one for stochastic and one for deterministic signals, and propose efficient numerical algorithms implementing them. In numerical studies on both synthetic and real data, the proposed methods are shown to achieve better estimation performance as compared to other methods from statistical signal processing literature. Furthermore, they perform comparably or better than network-based methods, except when the latter are specially trained on the data-type considered and are given access to considerably more data during inference.
While audio recordings in real life provide insights into social dynamics and conversational behavior, they also raise concerns about the privacy of personal, sensitive data. This article explores the effectiveness of restricting recordings to low-frequency audio to protect spoken content. When resampling the audio signals to different sampling rates, we compare the effect of employing anti-aliasing filters. Privacy enhancement is measured by an increased word error rate of automatic speech recognition models, while the impact on utility is measured with voice activity detection models. Our experimental results show that for clean recordings, models trained with a sampling rate of up to 800 Hz transcribe the majority of words correctly. For both models, we analyzed the impact of the speaker's sex and pitch, and we demonstrated that omitting anti-aliasing filters more strongly compromises speech privacy.
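A minimal numerical illustration of why the anti-aliasing choice matters, assuming NumPy/SciPy; the tone frequency and rates are illustrative, not the paper's setup. Naive slicing folds energy above the new Nyquist back into band, while a filtered decimator removes it first.

    import numpy as np
    from scipy import signal

    fs, q = 16000, 10                        # new rate 1.6 kHz, Nyquist 800 Hz
    t = np.arange(fs) / fs
    x = np.sin(2 * np.pi * 3000 * t)         # 3 kHz tone, above the new Nyquist

    naive = x[::q]                           # no filter: tone aliases to 200 Hz
    filtered = signal.decimate(x, q, ftype="fir")  # FIR anti-aliasing filter

    print(np.abs(naive).max(), np.abs(filtered).max())
    # ~1.0 vs ~0.0: the filtered version suppresses the out-of-band tone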
Autonomous systems operating in unknown environments often rely heavily on visual sensor data, yet making safe and informed control decisions based on these measurements remains a significant challenge. To facilitate the integration of perception and control in autonomous vehicles, we propose a novel perception-based control approach that incorporates road estimation, quantification of its uncertainty, and uncertainty-aware control based on this estimate. At the core of our method is a parametric road curvature model, optimized using visual measurements of the road through a constrained nonlinear optimization problem. This process ensures adherence to constraints on both model parameters and curvature. By leveraging the Frenet frame formulation, we embed the estimated track curvature into the system dynamics, allowing the controller to explicitly account for perception uncertainty and enhancing robustness to estimation errors based on visual input. We validate our approach in a simulated environment, using a high-fidelity 3D rendering engine, and demonstrate its effectiveness in achieving reliable and uncertainty-aware control for autonomous racing.
Causal analysis helps us understand the variables responsible for system failures. This improves fault detection and makes systems more reliable. In this work, we present a new method that combines causal inference with machine learning to classify faults in electrical distribution systems (EDS) using graph-based models. We first build causal graphs using transfer entropy (TE). Each fault case is represented as a graph, where the nodes are features such as voltage and current, and the edges encode how these features influence each other. The graphs are then classified using GraphSAGE, where the model learns from both the node values and the structure of the graph to predict the type of fault. To make the predictions understandable, we further develop an integrated approach using GNNExplainer and Captum's Integrated Gradients to highlight the nodes (features) that most influence the final prediction. This gives us clear insights into the possible causes of the fault. Our experiments show high accuracy: 99.44% on the EDS fault dataset, which is better than state-of-the-art models. By combining causal graphs with machine learning, our method not only predicts faults accurately but also helps understand their root causes. This makes it a strong and practical tool for improving system reliability.
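For concreteness, here is a textbook plug-in transfer entropy estimator for discretized series (the paper's TE estimator and discretization may differ); TE(X→Y) measures how much knowing x_t improves prediction of y_{t+1} beyond y_t alone, and binary toy data is used for illustration.

    import numpy as np
    from collections import Counter

    def transfer_entropy(x, y):
        triples = Counter(zip(y[1:], y[:-1], x[:-1]))     # (y_{t+1}, y_t, x_t)
        n = sum(triples.values())
        pairs_yy = Counter(zip(y[1:], y[:-1]))
        pairs_yx = Counter(zip(y[:-1], x[:-1]))
        marg_y = Counter(y[:-1])
        te = 0.0
        for (y1, y0, x0), c in triples.items():
            p = c / n
            p_joint = c / pairs_yx[(y0, x0)]              # p(y1 | y0, x0)
            p_self = pairs_yy[(y1, y0)] / marg_y[y0]      # p(y1 | y0)
            te += p * np.log2(p_joint / p_self)
        return te

    rng = np.random.default_rng(1)
    x = rng.integers(0, 2, 5000)
    y = np.roll(x, 1)                      # y copies x with a one-step delay
    y[0] = 0
    print(transfer_entropy(x, y))          # ~1 bit: x strongly drives y
    print(transfer_entropy(y, x))          # ~0 bits: no reverse influence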
Hematoxylin and eosin (H&E) staining is the clinical standard for assessing tissue morphology, but it lacks molecular-level diagnostic information. In contrast, immunohistochemistry (IHC) provides crucial insights into biomarker expression, such as HER2 status for breast cancer grading, but remains costly and time-consuming, limiting its use in time-sensitive clinical workflows. To address this gap, virtual staining from H&E to IHC has emerged as a promising alternative, yet it faces two core challenges: (1) the lack of fair evaluation of synthetic images against misaligned IHC ground truths, and (2) preserving structural integrity and biological variability during translation. To this end, we present an end-to-end framework encompassing both generation and evaluation. We introduce Star-Diff, a structure-aware staining restoration diffusion model that reformulates virtual staining as an image restoration task. By combining residual and noise-based generation pathways, Star-Diff maintains tissue structure while modeling realistic biomarker variability. To evaluate the diagnostic consistency of the generated IHC patches, we propose the Semantic Fidelity Score (SFS), a clinical-grading-task-driven metric that quantifies class-wise semantic degradation based on biomarker classification accuracy. Unlike pixel-level metrics such as SSIM and PSNR, SFS remains robust under spatial misalignment and classifier uncertainty. Experiments on the BCI dataset demonstrate that Star-Diff achieves state-of-the-art (SOTA) performance in both visual fidelity and diagnostic relevance. With rapid inference and strong clinical alignment, it presents a practical solution for applications such as intraoperative virtual IHC synthesis.
Accurate whole-heart segmentation is a critical component in the precise diagnosis and interventional planning of cardiovascular diseases. Integrating complementary information from modalities such as computed tomography (CT) and magnetic resonance imaging (MRI) can significantly enhance segmentation accuracy and robustness. However, existing multi-modal segmentation methods face several limitations: severe spatial inconsistency between modalities hinders effective feature fusion; fusion strategies are often static and lack adaptability; and the processes of feature alignment and segmentation are decoupled and inefficient. To address these challenges, we propose a dual-branch U-Net architecture enhanced by reinforcement learning for feature alignment, termed RL-U$^2$Net, designed for precise and efficient multi-modal 3D whole-heart segmentation. The model employs a dual-branch U-shaped network to process CT and MRI patches in parallel, and introduces a novel RL-XAlign module between the encoders. The module employs a cross-modal attention mechanism to capture semantic correspondences between modalities and a reinforcement-learning agent learns an optimal rotation strategy that consistently aligns anatomical pose and texture features. The aligned features are then reconstructed through their respective decoders. Finally, an ensemble-learning-based decision module integrates the predictions from individual patches to produce the final segmentation result. Experimental results on the publicly available MM-WHS 2017 dataset demonstrate that the proposed RL-U$^2$Net outperforms existing state-of-the-art methods, achieving Dice coefficients of 93.1% on CT and 87.0% on MRI, thereby validating the effectiveness and superiority of the proposed approach.
Although direct position estimation (DPE) has been demonstrated to offer enhanced robustness in GNSS receivers, its theoretical limits and performance in OFDM based positioning systems remain largely unexplored. In this paper, the Cramér-Rao bound (CRB) for DPE using OFDM based cellular signals is derived and benchmarked against the conventional two-step positioning method to assess their relative performance in non-line-of-sight (NLOS) dominated multipath environments. Numerical results reveal that 1) the DPE method consistently outperforms the two-step approach in OFDM systems under all evaluated conditions; 2) a large bandwidth is crucial in both methods, and increasing subcarrier spacing is more beneficial for a fixed bandwidth; 3) utilizing multiple OFDM symbols for positioning leads to substantial improvements in localization accuracy compared to relying on a single symbol. However, further increasing the number of symbols yields marginal improvements while significantly increasing computational complexity.
Dynamic mode decomposition (DMD) has become a powerful data-driven method for analyzing the spatiotemporal dynamics of complex, high-dimensional systems. However, conventional DMD methods are limited to matrix-based formulations, which might be inefficient or inadequate for modeling inherently multidimensional data including images, videos, and higher-order networks. In this letter, we propose tensor dynamic mode decomposition (TDMD), a novel extension of DMD to third-order tensors based on the recently developed T-product framework. By incorporating tensor factorization techniques, TDMD achieves more efficient computation and better preservation of spatial and temporal structures in multiway data for tasks such as state reconstruction and dynamic component separation, compared to standard DMD with data flattening. We demonstrate the effectiveness of TDMD on both synthetic and real-world datasets.
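For contrast with the tensor formulation, a compact NumPy sketch of standard exact DMD on flattened snapshots follows; the toy data and truncation rank are illustrative, not from the letter.

    import numpy as np

    def dmd(X, r):
        X0, X1 = X[:, :-1], X[:, 1:]                     # snapshot pairs
        U, s, Vh = np.linalg.svd(X0, full_matrices=False)
        U, s, Vh = U[:, :r], s[:r], Vh[:r]               # rank-r truncation
        A_tilde = U.conj().T @ X1 @ Vh.conj().T / s      # projected operator
        eigvals, W = np.linalg.eig(A_tilde)
        modes = X1 @ Vh.conj().T / s @ W                 # exact DMD modes
        return eigvals, modes

    # Toy data: two spatial patterns, each rotating at its own frequency.
    t = np.linspace(0, 4 * np.pi, 50)
    space = np.linspace(-5, 5, 100)[:, None]
    X = ((1 / np.cosh(space)) @ np.exp(2.3j * t)[None]
         + (space * np.exp(-space**2)) @ np.exp(1.1j * t)[None])
    eigvals, modes = dmd(X, r=2)
    print(np.angle(eigvals) / (t[1] - t[0]))             # recovers 2.3 and 1.1

TDMD replaces the flattening step: the snapshots stay third-order tensors and the operator is fit under the T-product, which is what preserves the spatial structure the matrix version destroys.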
Multi-agent shepherding represents a challenging distributed control problem where herder agents must coordinate to guide independently moving targets to desired spatial configurations. Most existing control strategies assume cohesive target behavior, which frequently fails in practical applications where targets exhibit stochastic autonomous behavior. This paper presents a hierarchical learning-based control architecture that decomposes the shepherding problem into a high-level decision-making module and a low-level motion control component. The proposed distributed control system synthesizes effective control policies directly from closed-loop experience without requiring explicit inter-agent communication or prior knowledge of target dynamics. The decentralized architecture achieves cooperative control behavior through emergent coordination without centralized supervision. Experimental validation demonstrates superior closed-loop performance compared to state-of-the-art heuristic control methods, achieving 100\% success rates with improved settling times and control efficiency. The control architecture scales beyond its design conditions, adapts to time-varying goal regions, and demonstrates practical implementation feasibility through real-time experiments on the Robotarium platform.
This paper presents a novel type of mobile rolling robot designed as a modular platform for non-prehensile manipulation, highlighting the associated control challenges in achieving balancing control of the robotic system. The developed rolling disk modules incorporate an innovative internally actuated magnetic-pendulum coupling mechanism, which introduces a compelling control problem due to the frictional and sliding interactions, as well as the magnetic effects, between the modules. In this paper, we derive the nonlinear dynamics of the robot using the Euler-Lagrange formulation. Then, through simulation, the motion behavior of the system is studied and analyzed, providing critical insights for future investigations into control methods for complex non-prehensile motion between robotic modules. We also study the balancing of this new platform and introduce a new lifting motion pattern. This research aims to enhance the understanding and implementation of modular self-reconfigurable robots in various scenarios for future applications.
Engineering design optimization seeks to automatically determine the shapes, topologies, or parameters of components that maximize performance under given conditions. This process often depends on physics-based simulations, which are difficult to install, computationally expensive, and require domain-specific expertise. To mitigate these challenges, we introduce EngiBench, the first open-source library and datasets spanning diverse domains for data-driven engineering design. EngiBench provides a unified API and a curated set of benchmarks -- covering aeronautics, heat conduction, photonics, and more -- that enable fair, reproducible comparisons of optimization and machine learning algorithms, such as generative or surrogate models. We also release EngiOpt, a companion library offering a collection of such algorithms compatible with the EngiBench interface. Both libraries are modular, letting users plug in novel algorithms or problems, automate end-to-end experiment workflows, and leverage built-in utilities for visualization, dataset generation, feasibility checks, and performance analysis. We demonstrate their versatility through experiments comparing state-of-the-art techniques across multiple engineering design problems, an undertaking that was previously prohibitively time-consuming to perform. Finally, we show that these problems pose significant challenges for standard machine learning methods due to highly sensitive and constrained design manifolds.
Monitoring sleep posture and behavior is critical for diagnosing sleep disorders and improving overall sleep quality. However, traditional approaches, such as wearable devices, cameras, and pressure sensors, often compromise user comfort, fail under obstructions like blankets, and raise privacy concerns. To overcome these limitations, we present RestAware, a non-invasive, contactless sleep monitoring system based on a 24GHz frequency-modulated continuous wave (FMCW) radar. Our system is evaluated on 25 participants across eight common sleep postures, achieving 92% classification accuracy and an F1-score of 0.91 using a K-Nearest Neighbors (KNN) classifier. In addition, we integrate instruction-tuned large language models (Mistral, Llama, and Falcon) to generate personalized, human-readable sleep summaries from radar-derived posture data. This low-cost (\$35), privacy-preserving solution offers a practical alternative for real-time deployment in smart homes and clinical environments.
Recent developments in AI techniques for space applications mirror the success achieved in terrestrial applications. Machine learning, which excels in data-rich environments, is particularly well suited to space-based computer vision applications, such as space optical attitude sensing. Of these sensors, digital sun sensors (DSS) are among the most common and important sensors for spacecraft attitude determination. The main challenge in using the DSS for attitude estimation is sensor error, which limits the overall achievable estimation accuracy. However, the traditional sun sensor calibration process is costly, slow, labor-intensive, and inefficient. These limitations motivate the use of AI techniques to enable more accurate and efficient DSS calibration. The objective of this work is to develop an end-to-end predictive calibration methodology for digital sun sensors that solves for two-axis state estimates utilizing a sparse submanifold convolutional neural network (SSCNN). We find that the proposed framework can achieve state-of-the-art performance on synthetic data with a mean accuracy of 0.005° for the two sun angle estimates. Furthermore, the model is highly capable of implicitly learning complex noise patterns and handling mixed noise types, thereby greatly improving the model's robustness and accuracy in real-world applications. The main contributions of this work are: (1) the first application (to our knowledge) of a CNN regression model to the problem of DSS predictive calibration, (2) the introduction of a fused end-to-end training approach for DSS calibration, (3) the creation of a publicly available physics-informed synthetic dataset and simulation for DSS training images, and (4) the evaluation of the performance of the deep learning approach for various mask configurations.
SmartDate is an AI-powered system for automated sorting and quality control of date fruits. It combines deep learning, genetic algorithms, and reinforcement learning to improve classification accuracy and predict shelf life. The system uses high-resolution imaging and Visible-Near-Infrared (VisNIR) spectral sensors to evaluate key features such as moisture, sugar content, and texture. Reinforcement learning enables real-time adaptation to production conditions, while genetic algorithms optimize model parameters. SmartDate achieved 94.5 percent accuracy, 93.1 percent F1-score, and an AUC-ROC of 0.96. The system reduces waste and ensures that only high-quality dates reach the market, setting a new benchmark in smart agriculture.
This paper presents a systematic literature review of music technology tailored for blind and low vision (BLV) individuals. Music activities can be particularly beneficial for BLV people. However, a systematic approach to organizing knowledge on designing accessible technology for BLV people has yet to be attempted. We categorize the existing studies based on the type of technology and the extent of BLV people's involvement in the research. We identify six main categories of BLV people-oriented music technology and highlight four key trends in design goals. Based on these categories, we propose four general insights focusing on (1) spatial awareness, (2) access to information, (3) (non-verbal) communication, and (4) memory. The identified trends suggest that more empirical studies involving BLV people in real-world scenarios are needed to ensure that technological advancements can enhance musical experiences and social inclusion. This research proposes collaborative music technology and inclusive real-world testing with the target group as two key areas missing in current research. They serve as a foundational step in shifting the focus from ``accessible technology'' to ``inclusive technology'' for BLV individuals within the broader field of accessibility research.
Non-prehensile manipulation is challenging due to complex contact interactions between objects, the environment, and robots. Model-based approaches can efficiently generate complex trajectories of robots and objects under contact constraints. However, they tend to be sensitive to model inaccuracies and require access to privileged information (e.g., object mass, size, pose), making them less suitable for novel objects. In contrast, learning-based approaches are typically more robust to modeling errors but require large amounts of data. In this paper, we bridge these two approaches to propose a framework for learning closed-loop pivoting manipulation. By leveraging computationally efficient Contact-Implicit Trajectory Optimization (CITO), we design demonstration-guided deep Reinforcement Learning (RL), leading to sample-efficient learning. We also present a sim-to-real transfer approach using a privileged training strategy, enabling the robot to perform pivoting manipulation using only proprioception, vision, and force sensing without access to privileged information. Our method is evaluated on several pivoting tasks, demonstrating that it can successfully perform sim-to-real transfer.
Autonomous drone racing presents a challenging control problem, requiring real-time decision-making and robust handling of nonlinear system dynamics. While iterative learning model predictive control~(LMPC) offers a promising framework for iterative performance improvement, its direct application to drone racing faces challenges like real-time compatibility or the trade-off between time-optimal and safe traversal. In this paper, we enhance LMPC with three key innovations:~(1) an adaptive cost function that dynamically weights time-optimal tracking against centerline adherence,~(2)~a shifted local safe set to prevent excessive shortcutting and enable more robust iterative updates, and~(3) a Cartesian-based formulation that accommodates safety constraints without the singularities or integration errors associated with Frenet-frame transformations. Results from extensive simulation and real-world experiments demonstrate that our improved algorithm can optimize initial trajectories generated by a wide range of controllers with varying levels of tuning, improving lap times by up to 60.85\%. Even when applied to the most aggressively tuned state-of-the-art model-based controller, MPCC++, on a real drone, a 6.05\% improvement is still achieved. Overall, the proposed method pushes the drone toward faster traversal and avoids collisions in both simulation and real-world experiments, making it a practical solution for improving the peak performance of drone racing.
Parameter estimation is a fundamental problem in science and engineering. In many safety-critical applications, one is not only interested in a {\it point} estimator, but also in an uncertainty bound that can self-assess the accuracy of the estimator. In this regard, the Cramér-Rao lower bound (CRLB) is of great importance, as it provides a lower bound on the variance of {\it any} unbiased estimator. In many cases, it is the only way of evaluating, without recourse to simulations, the expected accuracy of numerically obtainable estimates. For the existence of the CRLB, there are widely accepted regularity conditions, one of which is that the support of the likelihood function (LF) -- the pdf of the observations conditioned on the parameter of interest -- should be independent of the parameter to be estimated. This paper begins by reviewing the derivations of the classical CRLB under the condition that the LF has parameter-independent support. To cope with the case of parameter-dependent support, we generalize the CRLB to the {\it Cramér-Rao-Leibniz lower bound (CRLLB)} by leveraging the general Leibniz integral rule. Notably, the existing results on the CRLLB and CRLB are unified under the framework of the CRLLB with multidimensional parameters. Then, we survey existing examples of LFs to illustrate the usefulness of the CRLLB in providing valid covariance bounds.
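A quick Monte Carlo sanity check of the classical bound in a regular case (parameter-independent support), with illustrative numbers: for n i.i.d. N(theta, sigma^2) samples the Fisher information is n/sigma^2, so Var >= sigma^2/n, which the sample mean attains.

    import numpy as np

    rng = np.random.default_rng(0)
    theta, sigma, n, trials = 3.0, 2.0, 25, 200000
    samples = rng.normal(theta, sigma, size=(trials, n))
    var_hat = samples.mean(axis=1).var()
    print(var_hat, sigma**2 / n)   # both ~0.16: the CRLB is tight here
    # For U(0, theta) the support depends on theta, the regularity condition
    # fails, and this classical bound no longer applies -- the CRLLB case.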
This paper introduces Q8bot, an open-source, miniature quadruped designed for robotics research and education. We present the robot's novel zero-wire design methodology, which leads to its superior form factor, robustness, replicability, and high performance. With a size and weight similar to a modern smartphone, this standalone robot can walk for over an hour on a single battery charge and survive meter-high drops with simple repairs. Its 300-dollar bill of materials includes minimal off-the-shelf components, readily available custom electronics from online vendors, and structural parts that can be manufactured on hobbyist 3D printers. A preliminary user assembly study confirms that Q8bot can be easily replicated, with an average assembly time of under one hour by a single person. With heuristic open-loop control, Q8bot achieves a stable walking speed of 5.4 body lengths per second and a turning speed of 5 radians per second, along with other dynamic movements such as jumping and climbing moderate slopes.
This paper proposes a novel towed movable antenna (ToMA) array architecture to enhance the physical layer security of airborne communication systems. Unlike conventional onboard arrays with fixed-position antennas (FPAs), the ToMA array employs multiple subarrays mounted on flexible cables and towed by distributed drones, enabling agile deployment in three-dimensional (3D) space surrounding the central aircraft. This design significantly enlarges the effective array aperture and allows dynamic geometry reconfiguration, offering superior spatial resolution and beamforming flexibility. We consider a secure transmission scenario where an airborne transmitter communicates with multiple legitimate users in the presence of potential eavesdroppers. To ensure security, zero-forcing beamforming is employed to nullify signal leakage toward eavesdroppers. Based on the statistical distributions of locations of users and eavesdroppers, the antenna position vector (APV) of the ToMA array is optimized to maximize the users' ergodic achievable rate. Analytical results for the case of a single user and a single eavesdropper reveal the optimal APV structure that minimizes their channel correlation. For the general multiuser scenario, we develop a low-complexity alternating optimization algorithm by leveraging Riemannian manifold optimization. Simulation results confirm that the proposed ToMA array achieves significant performance gains over conventional onboard FPA arrays, especially in scenarios where eavesdroppers are closely located to users under line-of-sight (LoS)-dominant channels.
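A minimal numpy sketch of the zero-forcing nulling idea used here: project the legitimate users' channels onto the null space of the eavesdroppers' channels so that leakage is exactly zero. Matrix shapes and names are illustrative, not the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, E = 16, 2, 2            # antennas, legitimate users, eavesdroppers
H_u = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))
H_e = rng.standard_normal((E, N)) + 1j * rng.standard_normal((E, N))

# Projector onto the null space of the eavesdroppers' channels.
P_null = np.eye(N) - H_e.conj().T @ np.linalg.pinv(H_e.conj().T)

# Zero-forcing precoder for the users, restricted to that null space.
W = P_null @ np.linalg.pinv(H_u @ P_null)

print(np.linalg.norm(H_e @ W))   # ~0: no leakage toward eavesdroppers
print(np.abs(H_u @ W))           # ~identity: users are decoupled
```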
Functional MRI is a neuroimaging technique that analyzes the functional activity of the brain by measuring blood-oxygen-level-dependent signals throughout the brain. The derived functional features can be used to investigate brain alterations in neurological and psychiatric disorders. In this work, we employed the hypergraph structure to model high-order functional relations across brain regions, introducing algebraic connectivity (a(G)) for estimating the hyperedge weights. The hypergraph structure was derived from healthy controls to build a common topological structure across individuals. The cohort covered the Alzheimer's disease (AD) continuum, comprising both mild cognitive impairment and AD patients. Statistical analysis was performed using the hyperedges' weights as features to assess the differences across the three groups. Additionally, a mediation analysis was performed to evaluate the effectiveness and reliability of the a(G) values, representing the functional information, as the mediator between tau-PET levels, a key biomarker of AD, and cognitive scores. The proposed approach outperformed state-of-the-art methods in identifying a larger number of hyperedges statistically different across groups. Among these, two hyperedges belonging to the salience ventral attention and somatomotor networks showed a partial mediation effect between the tau biomarkers and cognitive decline. These results suggest that a(G) is an effective measure for extracting hyperedge weights, capturing important functional information that resides in the brain areas forming the hyperedges.
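A short sketch of the algebraic connectivity computation, a(G) being the second-smallest eigenvalue of the graph Laplacian (the Fiedler value); how the paper maps each hyperedge's regional connectivity to a weight is only mirrored schematically here.

```python
import numpy as np

def algebraic_connectivity(A):
    """a(G): second-smallest eigenvalue of the Laplacian L = D - A.
    A would be the (functional) adjacency among the regions forming
    one hyperedge; the mapping to hyperedge weights in the paper is
    summarized here only schematically."""
    L = np.diag(A.sum(axis=1)) - A
    eigvals = np.linalg.eigvalsh(L)     # sorted ascending
    return eigvals[1]

# Toy example: 4 fully connected regions -> a(G) = n = 4.
A = np.ones((4, 4)) - np.eye(4)
print(algebraic_connectivity(A))        # 4.0
```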
Automated bioacoustic analysis is essential for biodiversity monitoring and conservation, requiring advanced deep learning models that can adapt to diverse bioacoustic tasks. This article presents a comprehensive review of large-scale pretrained bioacoustic foundation models and systematically investigates their transferability across multiple bioacoustic classification tasks. We first survey bioacoustic representation learning, including major pretraining data sources and benchmarks. On this basis, we review bioacoustic foundation models by thoroughly analysing design decisions such as model architecture, pretraining scheme, and training paradigm. Additionally, we evaluate selected foundation models on classification tasks from the BEANS and BirdSet benchmarks, comparing the generalisability of learned representations under both linear and attentive probing strategies. Our comprehensive experimental analysis reveals that BirdMAE, trained on large-scale bird song data with a self-supervised objective, achieves the best performance on the BirdSet benchmark. On BEANS, BEATs$_{NLM}$, the extracted encoder of the NatureLM-audio large audio model, is slightly better. Both transformer-based models require attentive probing to extract the full performance of their representations. ConvNext$_{BS}$ and Perch models trained with supervision on large-scale bird song data remain competitive for passive acoustic monitoring classification tasks of BirdSet in linear probing settings. Training a new linear classifier has clear advantages over evaluating these models without further training. On BEANS, however, the baseline model BEATs, trained with self-supervision on AudioSet, outperforms bird-specific models when evaluated with attentive probing. These findings provide valuable guidance for practitioners selecting appropriate models to adapt to new bioacoustic classification tasks via probing.
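To make the probing distinction concrete, here is a minimal PyTorch sketch of an attentive probe over a frozen encoder's token sequence; the head layout and initialization are our assumptions, not details of the evaluated models, and the embedding dimension is assumed divisible by the head count.

```python
import torch
import torch.nn as nn

class AttentiveProbe(nn.Module):
    """Attentive probing head (sketch): a learnable query attends over
    the frozen encoder's token sequence before a linear classifier.
    Linear probing would instead mean-pool the tokens and apply only
    the final Linear layer."""
    def __init__(self, dim, n_classes, n_heads=8):
        super().__init__()
        self.query = nn.Parameter(torch.zeros(1, 1, dim))
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, tokens):                  # tokens: (B, T, dim), frozen
        q = self.query.expand(tokens.size(0), -1, -1)
        pooled, _ = self.attn(q, tokens, tokens)
        return self.head(pooled.squeeze(1))
```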
This work analyzes accelerating and decelerating wall-driven flows by quantifying the upper bound of transient energy growth using a Lyapunov-type approach. By formulating the linearized Navier-Stokes equations as a linear time-varying system and constructing a time-dependent Lyapunov function, we obtain a rigorous upper bound on transient energy growth by solving linear matrix inequalities (LMIs). The resulting bound closely matches the transient growth computed via singular value decomposition of the state-transition matrix of the linear time-varying system. Our analysis shows that decelerating base flows exhibit significantly larger transient growth than accelerating flows. Our approach offers the advantage of providing a rigorous certificate of uniform stability and an invariant ellipsoid bounding the solution trajectory. This Lyapunov-based analysis also has the potential to be extended to input-output analysis and nonlinear analysis.
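A minimal time-invariant analogue of the LMI certificate, written in cvxpy; the paper's analysis uses a time-dependent Lyapunov function for time-varying dynamics, which this sketch does not capture.

```python
import cvxpy as cp
import numpy as np

# LTI analogue of the LMI bound: if I <= P <= gamma*I and
# A'P + PA <= 0, then ||x(t)||^2 <= x(t)'P x(t) <= x(0)'P x(0)
# <= gamma * ||x(0)||^2, so gamma certifies the energy growth.
A = np.array([[-1.0, 50.0],
              [ 0.0, -2.0]])   # stable but non-normal: transient growth
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
gamma = cp.Variable(nonneg=True)
constraints = [P >> np.eye(n),
               gamma * np.eye(n) - P >> 0,
               A.T @ P + P @ A << 0]
prob = cp.Problem(cp.Minimize(gamma), constraints)
prob.solve(solver=cp.SCS)
print("certified bound on transient energy growth:", gamma.value)
```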
We present VWAttacker, the first systematic testing framework for analyzing the security of Voice over WiFi (VoWiFi) User Equipment (UE) implementations. VWAttacker includes a complete VoWiFi network testbed that communicates with Commercial-Off-The-Shelf (COTS) UEs through a simple interface to test the behavior of diverse VoWiFi UE implementations, and uses property-guided adversarial testing to systematically uncover security issues in different UEs. To reduce the manual effort in extracting and testing properties, we introduce an LLM-based, semi-automatic, and scalable approach for property extraction and testcase (TC) generation. These TCs are systematically mutated by two domain-specific transformations. Furthermore, we introduce two deterministic oracles to detect property violations automatically. Coupled with these techniques, VWAttacker extracts 63 properties from 11 specifications, evaluates 1,116 testcases, and detects 13 issues in 21 UEs. The issues range from forcing a DH shared secret to 0 to supporting weak algorithms. These issues result in attacks that expose the victim UE's identity or establish weak channels, thus severely hampering the security of cellular networks. We responsibly disclosed the findings to all the related vendors. At the time of writing, one of the vulnerabilities has been acknowledged by MediaTek with high severity.
In this paper, we propose an Optimal Transport objective for learning one-dimensional translation-equivariant systems and demonstrate its applicability to single pitch estimation. Our method provides a theoretically grounded, more numerically stable, and simpler alternative for training state-of-the-art self-supervised pitch estimators.
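For intuition, in one dimension the Wasserstein-1 distance has a closed form as the L1 distance between CDFs, which makes an OT loss vary smoothly and linearly under translations of the target distribution; the sketch below is illustrative and not the authors' exact objective.

```python
import numpy as np

def wasserstein1_1d(p, q, grid):
    """W1 between two 1-D probability histograms p, q on a uniform grid:
    the L1 distance between their CDFs (standard 1-D closed form).
    Shifting a distribution along the grid changes the loss linearly
    in the shift, matching translation-equivariant targets."""
    dx = grid[1] - grid[0]
    return np.abs(np.cumsum(p) - np.cumsum(q)).sum() * dx

# A one-bin shift of a delta histogram costs exactly one grid spacing:
p = np.zeros(100); p[40] = 1.0
q = np.zeros(100); q[41] = 1.0
grid = np.linspace(0.0, 1.0, 100)
print(wasserstein1_1d(p, q, grid))   # ~= grid spacing
```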
Gradient-based optimization of neural differential equations and other parameterized dynamical systems fundamentally relies on the ability to differentiate numerical solutions with respect to model parameters. In stiff systems, it has been observed that sensitivities to parameters controlling fast-decaying modes become vanishingly small during training, leading to optimization difficulties. In this paper, we show that this vanishing gradient phenomenon is not an artifact of any particular method, but a universal feature of all A-stable and L-stable stiff numerical integration schemes. We analyze the rational stability function for general stiff integration schemes and demonstrate that the relevant parameter sensitivities, governed by the derivative of the stability function, decay to zero for large stiffness. Explicit formulas for common stiff integration schemes are provided, which illustrate the mechanism in detail. Finally, we rigorously prove that the slowest possible rate of decay for the derivative of the stability function is $O(|z|^{-1})$, revealing a fundamental limitation: all A-stable time-stepping methods inevitably suppress parameter gradients in stiff regimes, posing a significant barrier for training and parameter identification in stiff neural ODEs.
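As a concrete instance (standard stability functions, not expressions taken from the paper): for the test equation $y' = \lambda y$ with step size $h$ and $z = h\lambda$, backward Euler gives

\[
R(z) = \frac{1}{1-z}, \qquad R'(z) = \frac{1}{(1-z)^{2}},
\]

and the trapezoidal (Crank--Nicolson) scheme gives

\[
R(z) = \frac{1+z/2}{1-z/2}, \qquad R'(z) = \frac{1}{(1-z/2)^{2}},
\]

so in both cases the parameter sensitivity, proportional to $R'(z)$, decays like $O(|z|^{-2})$ as stiffness grows, even faster than the $O(|z|^{-1})$ rate proved to be the slowest possible.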
The battlefield of information warfare has moved to online social networks, where influence campaigns operate at unprecedented speed and scale. As with any strategic domain, success requires understanding the terrain, modeling adversaries, and executing interventions. This tutorial introduces a formal optimization framework for social media information operations (IO), where the objective is to shape opinions through targeted actions. This framework is parameterized by quantities such as network structure, user opinions, and activity levels - all of which must be estimated or inferred from data. We discuss analytic tools that support this process, including centrality measures for identifying influential users, clustering algorithms for detecting community structure, and sentiment analysis for gauging public opinion. These tools either feed directly into the optimization pipeline or help defense analysts interpret the information environment. With the landscape mapped, we highlight threats such as coordinated bot networks, extremist recruitment, and viral misinformation. Countermeasures range from content-level interventions to mathematically optimized influence strategies. Finally, the emergence of generative AI transforms both offense and defense, democratizing persuasive capabilities while enabling scalable defenses. This shift calls for algorithmic innovation, policy reform, and ethical vigilance to protect the integrity of our digital public sphere.
Geometry-based point cloud compression (G-PCC), an international standard designed by MPEG, provides a generic framework for compressing diverse types of point clouds while ensuring interoperability across applications and devices. However, G-PCC underperforms compared to recent deep learning-based PCC methods despite its lower computational power consumption. To enhance the efficiency of G-PCC without sacrificing its interoperability or computational flexibility, we propose a novel preprocessing framework that integrates a compression-oriented voxelization network with a differentiable G-PCC surrogate model, jointly optimized in the training phase. The surrogate model mimics the rate-distortion behaviour of the non-differentiable G-PCC codec, enabling end-to-end gradient propagation. The versatile voxelization network adaptively transforms input point clouds using learning-based voxelization and effectively manipulates point clouds via global scaling, fine-grained pruning, and point-level editing for rate-distortion trade-offs. During inference, only the lightweight voxelization network is appended to the G-PCC encoder, requiring no modifications to the decoder, thus introducing no computational overhead for end users. Extensive experiments demonstrate a 38.84% average BD-rate reduction over G-PCC. By bridging classical codecs with deep learning, this work offers a practical pathway to enhance legacy compression standards while preserving their backward compatibility, making it ideal for real-world deployment.
Anonymous communication networks have emerged as crucial tools for obfuscating communication pathways and concealing user identities. However, their practical deployments face significant challenges, including susceptibility to artificial intelligence (AI)-powered metadata analysis, difficulties in decentralized architectures, and the absence of provable security guarantees. To address these issues, this paper proposes a novel decentralized anonymous routing protocol that resists tracing and traffic analysis. The protocol eliminates dependencies on the threshold model and trusted third-party setups. Departing from the traditional empirical security analysis of anonymous networks, we rigorously prove that users' identity privacy remains indistinguishable even in highly adversarial environments. Furthermore, simulations confirm the protocol's practical feasibility, demonstrating both security and efficiency. By achieving information sharing with privacy preservation, the proposed protocol offers a provably secure solution for privacy-preserving communication in digital environments.
This paper addresses the challenge of enhancing the realism of vocoder-generated singing voice audio by mitigating the distinguishable disparities between synthetic and real-life recordings, particularly in high-frequency spectrogram components. Our proposed approach combines two innovations: an explicit linear spectrogram estimation step using a denoising diffusion process with a DiT-based neural network architecture optimized for time-frequency data, and a redesigned vocoder based on Vocos specialized in handling large linear spectrograms with increased frequency bins. This integrated method produces audio with high-fidelity spectrograms that are challenging for both human listeners and machine classifiers to differentiate from authentic recordings. Objective and subjective evaluations demonstrate that our streamlined approach maintains high audio quality while achieving this realism. This work presents a substantial advancement in overcoming the limitations of current vocoding techniques, particularly in the context of adversarial attacks on fake spectrogram detection.
In this paper, we investigate reconfigurable intelligent surface (RIS)-aided multiple-input multiple-output (MIMO) over-the-air computation (OAC) systems designed to emulate the fully-connected (FC) layer of a neural network (NN) via analog OAC, where the RIS and the transceivers are jointly adjusted to engineer the ambient wireless propagation environment to emulate the weights of the target FC layer. We refer to this novel computational paradigm as AirFC. We first study the case in which the precoder, combiner, and RIS phase shift matrices are jointly optimized to minimize the mismatch between the OAC system and the target FC layer. To solve this non-convex optimization problem, we propose a low-complexity alternating optimization algorithm, where semi-closed-form/closed-form solutions for all optimization variables are derived. Next, we consider training of the system parameters using two distinct learning strategies, namely centralized training and distributed training. In the centralized training approach, training is performed at either the transmitter or the receiver, whichever possesses the channel state information (CSI), and the trained parameters are provided to the other terminal. In the distributed training approach, the transmitter and receiver iteratively update their parameters through back and forth transmissions by leveraging channel reciprocity, thereby avoiding CSI acquisition and significantly reducing computational complexity. Subsequently, we extend our analysis to a multi-RIS scenario by exploiting its spatial diversity gain to enhance the system performance. Simulation results show that the AirFC system realized by the RIS-aided MIMO configuration achieves satisfactory classification accuracy.
Video caching can significantly improve delivery efficiency and enhance the quality of video streaming, which constitutes the majority of wireless communication traffic. Due to limited cache size, caching strategies must be designed to adapt to dynamic user demand in order to maximize system revenue. The system revenue depends on the benefits of delivering the requested videos and the costs of (a) transporting the files to the users and (b) cache replacement. Since the cache content at any point in time impacts the replacement costs in the future, demand predictions over multiple cache placement slots become an important prerequisite for efficient cache planning. Motivated by this, we introduce a novel two-stage privacy-preserving solution for revenue optimization in wireless video caching networks. First, we train a Transformer using privacy-preserving federated learning (FL) to predict multi-slot future demands. Given that prediction results are never entirely accurate, especially for longer horizons, we further combine global content popularity with per-user prediction results to estimate the content demand distribution. Then, in the second stage, we leverage these estimation results to find caching strategies that maximize the long-term system revenue. This latter problem takes the form of a multi-stage knapsack problem, which we then transform into an integer linear program. Our extensive simulation results demonstrate that (i) our FL solution delivers nearly identical performance to that of the ideal centralized solution and outperforms other existing caching methods, and (ii) our novel revenue optimization approach provides deeper system performance insights than traditional cache hit ratio (CHR)-based optimization approaches.
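The per-slot flavor of the resulting optimization can be illustrated with a tiny knapsack-style ILP in scipy; the revenue terms, file sizes, and single-slot restriction below are simplifications of the paper's multi-stage formulation.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Illustrative single-slot slice of the caching ILP (names and the
# revenue model are simplified; the paper solves a multi-stage version).
# x_f = 1 if video f is cached. Maximize expected delivery benefit
# minus replacement cost, subject to cache capacity.
rng = np.random.default_rng(1)
F = 20                                   # catalog size
benefit = rng.uniform(0.0, 1.0, F)       # predicted demand * delivery gain
replace_cost = rng.uniform(0.0, 0.2, F)  # cost of swapping a file in
size = rng.uniform(0.5, 2.0, F)          # file sizes
capacity = 10.0

c = -(benefit - replace_cost)            # milp minimizes c @ x
res = milp(c=c,
           constraints=LinearConstraint(size, ub=capacity),
           integrality=np.ones(F),       # all variables integer
           bounds=Bounds(0, 1))          # binary via [0, 1] + integrality
print("cached files:", np.flatnonzero(res.x > 0.5))
```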
Modern photon-counting sensors are increasingly dominated by Poisson noise, yet conventional Feature-Specific Imaging (FSI) is optimized for additive Gaussian noise, leading to suboptimal performance and a loss of its advantages under Poisson noise. To address this, we introduce DeepFSI, a novel end-to-end optical-electronic framework. DeepFSI "unfreezes" traditional FSI masks, enabling a deep neural network to learn globally optimal measurement masks by computing gradients directly under realistic Poisson and additive noise conditions. Our simulations demonstrate DeepFSI's superior feature fidelity and task performance compared to conventional FSI with predefined masks, especially in Poisson-noise-dominant environments. DeepFSI also exhibits enhanced robustness to design choices and performs well under additive Gaussian noise, representing a significant advance for noise-robust computational imaging in photon-limited applications.
Audio-visual temporal deepfake localization under content-driven partial manipulation remains a highly challenging task. In this scenario, the deepfake regions usually span only a few frames, with the majority of the rest remaining identical to the original. To tackle this, we propose a Hierarchical Boundary Modeling Network (HBMNet), which includes three modules: an Audio-Visual Feature Encoder that extracts discriminative frame-level representations, a Coarse Proposal Generator that predicts candidate boundary regions, and a Fine-grained Probabilities Generator that refines these proposals using bidirectional boundary-content probabilities. From the modality perspective, we enhance audio-visual learning through dedicated encoding and fusion, reinforced by frame-level supervision to boost discriminability. From the temporal perspective, HBMNet integrates multi-scale cues and bidirectional boundary-content relationships. Experiments show that encoding and fusion primarily improve precision, while frame-level supervision boosts recall. Each module (audio-visual fusion, temporal scales, bi-directionality) contributes complementary benefits, collectively enhancing localization performance. HBMNet outperforms BA-TFD and UMMAFormer and shows promising scalability with more training data.
This paper presents a multifunctional speech synthesis system that integrates voice cloning and emotion-controlled speech synthesis within a unified framework. The goal of this work is to address longstanding challenges in achieving highly expressive, controllable, and natural speech generation that faithfully preserves speaker identity across diverse linguistic and emotional contexts. Our approach introduces an effective speaker-emotion disentanglement mechanism with in-batch contrastive learning, enabling independent manipulation of speaker identity and emotional style, as well as a rotational emotional embedding integration method for smooth emotion control. To support comprehensive training and evaluation, we construct CSEMOTIONS, a high-quality emotional speech dataset containing 10 hours of Mandarin speech from six professional speakers across seven emotional categories. Extensive experiments demonstrate that our system, Marco-Voice, achieves substantial improvements in both objective and subjective metrics. Comprehensive evaluations and analyses show that Marco-Voice delivers competitive performance in terms of speech clarity and emotional richness, representing a substantial advance in the field of expressive neural speech synthesis.
Visualising complex polarimetry optical axis fields is challenging. We introduce density-encoded line integral convolution (DELIC), a novel approach that builds on the classic line integral convolution algorithm by incorporating the principles of centroidal Voronoi tessellation, enabling clearer and more interpretable representations of complex optical axis fields.
We consider the problem of multi-channel single-speaker blind dereverberation, where multi-channel mixtures are used to recover the clean anechoic speech. To solve this problem, we propose USD-DPS, {U}nsupervised {S}peech {D}ereverberation via {D}iffusion {P}osterior {S}ampling. USD-DPS uses an unconditional clean speech diffusion model as a strong prior to solve the problem by posterior sampling. At each diffusion sampling step, we estimate all microphone channels' room impulse responses (RIRs), which are further used to enforce a multi-channel mixture consistency constraint for diffusion guidance. For multi-channel RIR estimation, we estimate the reference-channel RIR by optimizing the parameters of a sub-band RIR signal model with the Adam optimizer, and estimate the non-reference channels' RIRs analytically using forward convolutive prediction (FCP). We find that this combination provides a good balance between sampling efficiency and RIR prior modeling, yielding superior performance among unsupervised dereverberation approaches. An audio demo page is provided in this https URL.
Lens flare removal remains challenging because the complex optical interactions between light sources and the camera lens confound information from the underlying image background and the optical flares. While recent solutions have shown promise in decoupling the flare corruption from the image, they often fail to maintain contextual consistency, leading to incomplete and inconsistent flare removal. To eliminate this limitation, we propose DeflareMamba, which leverages the efficient sequence modeling capabilities of state space models while maintaining the ability to capture local-global dependencies. In particular, we design a hierarchical framework that establishes long-range pixel correlations through varied stride sampling patterns, and utilize local-enhanced state space models that simultaneously preserve local details. To the best of our knowledge, this is the first work to introduce state space models to the flare removal task. Extensive experiments demonstrate that our method effectively removes various types of flare artifacts, including scattering and reflective flares, while maintaining the natural appearance of non-flare regions. Further downstream applications demonstrate the capacity of our method to improve visual object recognition and cross-modal semantic understanding. Code is available at this https URL.
Large-scale models (LSMs) can be an effective framework for semantic representation and understanding, thereby providing a suitable tool for designing semantic communication (SC) systems. However, their direct deployment is often hindered by high computational complexity and resource requirements. In this paper, a novel robust knowledge distillation based semantic communication (RKD-SC) framework is proposed to enable efficient and channel-noise-robust LSM-powered SC. The framework addresses two key challenges: determining optimal compact model architectures and effectively transferring knowledge while maintaining robustness against channel noise. First, a knowledge distillation-based lightweight differentiable architecture search (KDL-DARTS) algorithm is proposed. This algorithm integrates knowledge distillation loss and a complexity penalty into the neural architecture search process to identify high-performance, lightweight semantic encoder architectures. Second, a novel two-stage robust knowledge distillation (RKD) algorithm is developed to transfer semantic capabilities from an LSM (teacher) to a compact encoder (student) and subsequently enhance system robustness. To further improve resilience to channel impairments, a channel-aware transformer (CAT) block is introduced as the channel codec, trained under diverse channel conditions with variable-length outputs. Extensive simulations on image classification tasks demonstrate that the RKD-SC framework significantly reduces model parameters while preserving a high degree of the teacher model's performance and exhibiting superior robustness compared to existing methods.
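For orientation, the distillation step can be grounded in the classic knowledge distillation loss (Hinton et al.); the paper's two-stage RKD algorithm builds robustness-oriented training on top of transfers of this kind, so this sketch is only the generic ingredient.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Classic knowledge distillation loss: a hard cross-entropy term
    plus a temperature-softened KL term toward the teacher, scaled by
    T^2 to keep gradient magnitudes comparable."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * T * T
    return alpha * hard + (1.0 - alpha) * soft
```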
Convolutional sparse representation (CSR) has recently attracted increasing attention in image processing due to its translation invariance. CSR comprises convolutional sparse coding (CSC) and convolutional dictionary learning (CDL), and many studies focus on solving the corresponding optimization problems. At present, the most efficient optimization scheme for CSC is based on the alternating direction method of multipliers (ADMM). However, the ADMM-based approach involves a penalty parameter that must be carefully selected: improper selection may result in slow convergence or no convergence at all. In this paper, a fast and efficient method based on the Chambolle-Pock (CP) framework is proposed, which requires no extra manually selected parameters during solving and converges faster. Furthermore, we propose an anisotropic total variation penalty on the coefficient maps for CSC and apply the CP algorithm to solve it. In addition, we also apply the CP framework to the corresponding CDL problem. Experiments show that for noise-free images the proposed CSC algorithms achieve results rivaling the latest ADMM-based approach, while outperforming it in removing Gaussian noise from corrupted images.
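A compact sketch of the generic Chambolle-Pock iteration applied to a lasso-type problem of the CSC form; an explicit matrix stands in for the convolutional dictionary, and the step sizes are set from the operator norm rather than hand-tuned, which is the practical advantage over ADMM's penalty parameter noted above.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def chambolle_pock_lasso(D, b, lam, n_iter=500):
    """Chambolle-Pock for min_x 0.5*||D x - b||^2 + lam*||x||_1.
    Steps tau, sigma are set from ||D|| so that tau*sigma*||D||^2 <= 1;
    no hand-tuned penalty parameter as in ADMM."""
    L = np.linalg.norm(D, 2)
    tau = sigma = 1.0 / L
    x = np.zeros(D.shape[1]); x_bar = x.copy(); y = np.zeros(D.shape[0])
    for _ in range(n_iter):
        # Dual step: prox of sigma*F* with F(u) = 0.5*||u - b||^2.
        y = (y + sigma * (D @ x_bar - b)) / (1.0 + sigma)
        # Primal step: prox of tau*G with G = lam*||.||_1 (soft threshold).
        x_new = soft_threshold(x - tau * (D.T @ y), tau * lam)
        # Over-relaxation of the primal iterate.
        x_bar = 2.0 * x_new - x
        x = x_new
    return x
```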
This paper considers distributed resource allocation problems (DRAPs) with a coupled constraint for real-time systems. Based on primal-dual methods, we adopt a control perspective for optimization algorithm design by synthesizing a safe feedback controller using control barrier functions to enforce constraint satisfaction. On this basis, a distributed anytime-feasible resource allocation (DanyRA) algorithm is proposed. It is shown that the DanyRA algorithm converges to the exact optimal solution of DRAPs while ensuring feasibility of the coupled inequality constraint at all time steps. Considering that constraint violations may arise from external interferences, a virtual queue with minimum buffer is incorporated to restore constraint satisfaction before the pre-defined deadlines. We characterize the trade-off between convergence accuracy and robustness to violations for maintaining or recovering feasibility. The DanyRA algorithm is further extended to address DRAPs with a coupled equality constraint, and its linear convergence rate is theoretically established. Finally, a numerical example is provided for verification.
There has been significant research effort developing neural-network-based predictors of speech quality (SQ) in recent years. While a primary objective has been to develop non-intrusive, i.e.~reference-free, metrics to assess the performance of speech enhancement (SE) systems, recent work has also investigated the direct inference of neural SQ predictors within the loss function of downstream speech tasks. To aid in the training of SQ predictors, several large datasets of audio with corresponding human labels of quality have been created. Recent work in this area has shown that speech representations derived from large unsupervised or semi-supervised foundational speech models are useful input feature representations for neural SQ prediction. In this work, a novel and robust SQ predictor is proposed based on feature representations extracted from an automatic speech recognition (ASR) model, found to be a powerful input feature for the SQ prediction task. The proposed system achieves higher correlation with human mean opinion score (MOS) ratings than recent approaches on all NISQA test sets and shows significantly better domain adaptation compared to the commonly used DNSMOS metric.
Recent advances in split learning (SL) have established it as a promising framework for privacy-preserving, communication-efficient distributed learning at the network edge. However, SL's sequential update process is vulnerable to even a single malicious client, which can significantly degrade model accuracy. To address this, we introduce Pigeon-SL, a novel scheme grounded in the pigeonhole principle that guarantees at least one entirely honest cluster among M clients, even when up to N of them are adversarial. In each global round, the access point partitions the clients into N+1 clusters, trains each cluster independently via vanilla SL, and evaluates their validation losses on a shared dataset. Only the cluster with the lowest loss advances, thereby isolating and discarding malicious updates. We further enhance training and communication efficiency with Pigeon-SL+, which repeats training on the selected cluster to match the update throughput of standard SL. We validate the robustness and effectiveness of our approach under three representative attack models -- label flipping, activation and gradient manipulation -- demonstrating significant improvements in accuracy and resilience over baseline SL methods in future intelligent wireless networks.
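A schematic of one Pigeon-SL global round; train_cluster and val_loss are stand-ins for vanilla-SL training and shared-dataset evaluation, and the clustering is a plain random partition for illustration.

```python
import numpy as np

def pigeon_sl_round(clients, n_adversarial, train_cluster, val_loss):
    """One Pigeon-SL global round (schematic). With the clients split
    into N+1 clusters, the pigeonhole principle guarantees at least
    one cluster contains no adversary when at most N clients are
    malicious; the lowest-validation-loss cluster advances."""
    rng = np.random.default_rng()
    perm = rng.permutation(len(clients))
    clusters = np.array_split(perm, n_adversarial + 1)
    models = [train_cluster([clients[i] for i in c]) for c in clusters]
    losses = [val_loss(m) for m in models]
    return models[int(np.argmin(losses))]   # isolate and discard the rest
```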
Detecting and segmenting dysfluencies is crucial for effective speech therapy and real-time feedback. However, most methods only classify dysfluencies at the utterance level. We introduce StutterCut, a semi-supervised framework that formulates dysfluency segmentation as a graph partitioning problem, where speech embeddings from overlapping windows are represented as graph nodes. We refine the connections between nodes using a pseudo-oracle classifier trained on weak (utterance-level) labels, with its influence controlled by an uncertainty measure from Monte Carlo dropout. Additionally, we extend the weakly labelled FluencyBank dataset by incorporating frame-level dysfluency boundaries for four dysfluency types. This provides a more realistic benchmark compared to synthetic datasets. Experiments on real and synthetic datasets show that StutterCut outperforms existing methods, achieving higher F1 scores and more precise stuttering onset detection.
This paper proposes an adaptive lattice-based motion planning solution for generating feasible trajectories for systems represented by a linearly parameterizable nonlinear model with uncertain parameters, operating within a cluttered environment. The key idea is to use input/output data online to update both the model set containing the uncertain system parameter and a dynamic parameter estimate, so that the associated model estimation error reduces over time. This in turn improves the quality of the motion primitives generated by the lattice-based motion planner using a nominal estimated model selected on the basis of suitable criteria. The motion primitives are also equipped with tubes to account for the model mismatch between the nominal estimated model and the true system model, guaranteeing collision-free overall motion. The tubes are of uniform size, directly proportional to the size of the model set containing the uncertain system parameter. The adaptive learning module guarantees a reduction in the diameter of the model set as well as in the estimation error between the dynamic parameter estimate and the true system parameter. This directly implies a reduction in the size of the implemented tubes and guarantees that the utilized motion primitives approach arbitrarily close to the resolution-optimal motion primitives associated with the true system model, significantly improving the overall motion planning performance over time. The efficiency of the motion planner is demonstrated by a simulation example featuring a drone model represented by Euler-Lagrange dynamics with uncertain parameters, operating within a cluttered environment.
Panoramic cameras, capturing comprehensive 360-degree environmental data, are suitable for quadruped robots in surrounding perception and interaction with complex environments. However, the scarcity of high-quality panoramic training data, caused by inherent kinematic constraints and complex sensor calibration challenges, fundamentally limits the development of robust perception systems tailored to these embodied platforms. To address this issue, we propose QuaDreamer, the first panoramic data generation engine specifically designed for quadruped robots. QuaDreamer focuses on mimicking the motion paradigm of quadruped robots to generate highly controllable, realistic panoramic videos, providing a data source for downstream tasks. Specifically, to effectively capture the unique vertical vibration characteristics exhibited during quadruped locomotion, we introduce Vertical Jitter Encoding (VJE). VJE extracts controllable vertical signals through frequency-domain feature filtering and provides high-quality prompts. To facilitate high-quality panoramic video generation under jitter signal control, we propose a Scene-Object Controller (SOC) that effectively manages object motion and boosts background jitter control through the attention mechanism. To address panoramic distortions in wide-FoV video generation, we propose the Panoramic Enhancer (PE), a dual-stream architecture that synergizes frequency-texture refinement for local detail enhancement with spatial-structure correction for global geometric consistency. We further demonstrate that the generated video sequences can serve as training data for the quadruped robot's panoramic visual perception model, enhancing the performance of multi-object tracking in 360-degree scenes. The source code and model weights will be publicly available at this https URL.
The ability of modern telecommunication systems to locate users and objects in the radio environment raises justified privacy concerns. To prevent unauthorized localization, single-antenna transmitters can obfuscate the signal by convolving it with a randomized sequence prior to transmission, which alters the channel state information (CSI) estimated at the receiver. However, this strategy is only effective against CSI-based localization systems deploying single-antenna receivers. Inspired by the concept of blind multichannel identification, we propose a simple CSI recovery method for multi-antenna receivers to extract channel features that ensure reliable user localization regardless of the transmitted signal. We comparatively evaluate the impact of signal obfuscation and the proposed recovery method on the localization performance of CSI fingerprinting, channel charting, and classical triangulation using real-world channel measurements. This work aims to demonstrate the necessity for further efforts to protect the location privacy of users from adversarial radio-based localization systems.
Trustworthy interpretation of deep learning models is critical for neuroimaging applications, yet commonly used Explainable AI (XAI) methods lack rigorous validation, risking misinterpretation. We performed the first large-scale, systematic comparison of XAI methods on ~45,000 structural brain MRIs using a novel XAI validation framework. This framework establishes verifiable ground truth by constructing prediction tasks with known signal sources - from localized anatomical features to subject-specific clinical lesions - without artificially altering input images. Our analysis reveals systematic failures in two of the most widely used methods: GradCAM consistently failed to localize predictive features, while Layer-wise Relevance Propagation generated extensive, artifactual explanations that suggest incompatibility with neuroimaging data characteristics. Our results indicate that these failures stem from a domain mismatch, where methods with design principles tailored to natural images require substantial adaptation for neuroimaging data. In contrast, the simpler, gradient-based method SmoothGrad, which makes fewer assumptions about data structure, proved consistently accurate, suggesting its conceptual simplicity makes it more robust to this domain shift. These findings highlight the need for domain-specific adaptation and validation of XAI methods, suggest that interpretations from prior neuroimaging studies using standard XAI methodology warrant re-evaluation, and provide urgent guidance for practical application of XAI in neuroimaging.
Robotic cutting is a challenging contact-rich manipulation task where the robot must simultaneously negotiate unknown object mechanics, large contact forces, and precise motion requirements. We introduce a new virtual-model control scheme that enables knife rocking motion for robot manipulators, without pre-planned trajectories or precise information of the environment. Motion is generated through interconnection with virtual mechanisms, given by virtual springs, dampers, and masses arranged in a suitable way. Through analysis and experiments, we demonstrate that the controlled robot behavior settles into a periodic motion. Experiments with a Franka manipulator demonstrate robust cuts with five different vegetables, and sub-millimeter slice accuracy from 1 mm to 6 mm at nearly one cut per second. The same controller survives changes in knife shape and cutting board height, and adaptation to a different humanoid manipulator, demonstrating robustness and platform independence.
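A minimal instance of the virtual-model idea: a virtual spring-damper attached between the knife frame and an anchor point produces the commanded task-space wrench. The paper's interconnection of virtual springs, dampers, and masses that yields the rocking motion is more elaborate; names and gains here are our assumptions.

```python
import numpy as np

def virtual_model_wrench(x, x_dot, anchor, k=400.0, d=40.0):
    """Task-space force from a virtual spring-damper attached between
    the knife frame position x and a virtual anchor point (a minimal
    instance of virtual-model control, not the paper's full design)."""
    return k * (anchor - x) - d * x_dot

# The wrench would then be mapped to joint torques via the manipulator
# Jacobian: tau = J(q).T @ F.
```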
Attention is not monolithic; rather, it operates in multiple forms to facilitate efficient cognitive processing. In the auditory domain, attention enables the prioritization of relevant sounds in an auditory scene and can be either attracted by elements in the scene in a bottom-up fashion or directed towards features, objects, or the entire scene in a top-down fashion. How these modes of attention interact and whether their neural underpinnings are distinct remains unclear. In this work, we investigate the perceptual and neural correlates of different attentional modes in a controlled "cocktail party" paradigm, where listeners hear the same stimuli and attend to either a spatial location (feature-based), a speaker (object-based), or the entire scene (global or free-listening) while detecting deviations in the pitch of a voice in the scene. Our findings indicate that object-based attention is more perceptually effective than feature-based or global attention. Furthermore, object-based and spatial-based attention engage distinct neural mechanisms and are differentially modulated by bottom-up salience. Notably, while bottom-up salience aids in the initial segregation of auditory objects, it plays a reduced role in object tracking once attention has been voluntarily allocated. In addition, decoding the stimulus envelope from the EEG data revealed a source-sampling scheme in the global attention mode that is not present in the object or spatial modes. Overall, the study shows that the perception of the same acoustic scene differs according to the listening task, guided by an interaction between top-down and bottom-up processes.
We demonstrate that a single 3x3 convolutional kernel can produce emergent audio effects when trained on 200 samples from a personalized corpus. We achieve this through two key techniques: (1) Conditioning Aware Kernels (CAK), where output = input + (learned_pattern x control), with a soft-gate mechanism supporting identity preservation at zero control; and (2) AuGAN (Audit GAN), which reframes adversarial training from "is this real?" to "did you apply the requested value?" Rather than learning to generate or detect forgeries, our networks cooperate to verify control application, discovering unique transformations. The learned kernel exhibits a diagonal structure creating frequency-dependent temporal shifts that are capable of producing musical effects based on input characteristics. Our results show the potential of adversarial training to discover audio transformations from minimal data, enabling new approaches to effect design.
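A direct PyTorch reading of the stated formula, output = input + (learned_pattern x control), with a soft gate that preserves identity at zero control; choices beyond the formula (gate nonlinearity, tensor shapes) are our assumptions.

```python
import torch
import torch.nn as nn

class CAK(nn.Module):
    """Conditioning-Aware Kernel as described in the abstract: a single
    3x3 conv whose output is scaled by a soft-gated control value and
    added residually, so zero control returns the input unchanged.
    Gate choice and shapes beyond the formula are assumptions."""
    def __init__(self):
        super().__init__()
        self.kernel = nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False)

    def forward(self, x, control):          # x: (B, 1, F, T) spectrogram
        gate = torch.tanh(control).view(-1, 1, 1, 1)  # 0 control -> identity
        return x + self.kernel(x) * gate
```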
A clustered gossip network is considered in which a source updates its information over time, and end-nodes, organized into clusters through clusterheads, keep track of it. The goal for the nodes is to remain as fresh as possible, i.e., to hold the same information as the source, which we assess by the long-term average binary freshness metric. We introduce a smart information dissemination mechanism, which we coin rate-changing gossip (RC-Gossip). Its main idea is that gossiping is directed toward the nodes that need it most, and hence the gossiping rate changes based on the number of fresh nodes in the network at a given time. While Stochastic Hybrid System (SHS) analysis has been the norm in studying the freshness of gossip networks, we present an equivalent way to analyze freshness using a renewal-reward-based approach. Using it, we show that RC-Gossip significantly increases the freshness of nodes in different clustered networks with optimal cluster sizes, compared to traditional gossiping techniques.
Learning-based control has attracted significant attention in recent years, especially for plants that are difficult to model based on first-principles. A key issue in learning-based control is how to make efficient use of data as the abundance of data becomes overwhelming. To address this issue, this work proposes an information-triggered learning framework and a corresponding learning-based controller design approach with guaranteed stability. Specifically, we consider a linear time-invariant system with unknown dynamics. A set-membership approach is introduced to learn a parametric uncertainty set for the unknown dynamics. Then, a data selection mechanism is proposed by evaluating the incremental information in a data sample, where the incremental information is quantified by its effects on shrinking the parametric uncertainty set. Next, after introducing a stability criterion using the set-membership estimate of the system dynamics, a robust learning-based predictive controller (LPC) is designed by minimizing a worst-case cost function. The closed-loop stability of the LPC equipped with the information-triggered learning protocol is discussed within a high-probability framework. Finally, comparative numerical experiments are performed to verify the validity of the proposed approach.
The abundance of information present in Whole Slide Images (WSIs) renders them an essential tool for survival analysis. Several Multiple Instance Learning (MIL) frameworks proposed for this task utilize a ResNet50 backbone pre-trained on natural images. By leveraging recently released histopathological foundation models such as UNI and Hibou, the predictive prowess of existing MIL networks can be enhanced. Furthermore, deploying an ensemble of digital pathology foundation models yields higher baseline accuracy, although the benefits appear to diminish with more complex MIL architectures. Our code will be made publicly available upon acceptance.
With the rising demand for high-rate and timely communications, fog radio access networks (F-RANs) offer a promising solution. This work investigates age of information (AoI) performance in F-RANs, consisting of multiple content users (CUs), enhanced remote radio heads (eRRHs), and content providers (CPs). Time-critical CUs need rapid content updates from CPs but cannot communicate directly with them; instead, eRRHs act as intermediaries. CUs decide whether to request content from a CP and which eRRH to send the request to, while eRRHs decide whether to command CPs to update content or use cached content. We study two general classes of policies: (i) oblivious policies, where decision-making is independent of historical information, and (ii) non-oblivious policies, where decisions are influenced by historical information. First, we obtain closed-form expressions for the average AoI of eRRHs under both policy types. Due to the complexity of calculating closed-form expressions for CUs, we then derive general upper bounds for their average AoI. Next, we identify optimal policies for both types. Under both optimal policies, each CU requests content from each CP at an equal rate, consolidating all requests to a single eRRH when demand is low or resources are limited, and distributing requests evenly among eRRHs when demand is high and resources are ample. eRRHs command content from each CP at an equal rate under an optimal oblivious policy, while prioritizing the CP with the highest age under an optimal non-oblivious policy. Our numerical results validate these theoretical findings.
This paper investigates the limits to which a passive Reconfigurable Intelligent Surface (RIS) can reshape a point-to-point Multiple-Input Multiple-Output (MIMO) channel in terms of singular values and their functions (e.g., achievable rate and harvestable power) for improved wireless performance. We depart from the Diagonal (D) scattering model and adopt a Beyond-Diagonal (BD) model that exploits element-wise connections for passive signal amplitude and phase manipulation. Specifically, analytical tight bounds are derived under typical RIS deployment scenarios to unveil the channel shaping potentials of BD-RIS regarding communication Degrees of Freedom (DoF), singular value spread, power gain, and capacity. An efficient numerical method is then proposed to optimize BD-RIS for any locally Lipschitz function of channel singular values, and showcased to characterize the achievable singular value region. As a side product, we tackle BD-RIS-aided MIMO rate maximization problem by a local-optimal Alternating Optimization (AO) approach and a low-complexity shaping approach. Results show that BD-RIS significantly improves the dynamic range of channel singular values and the tradeoff in manipulating them, thus offering enhanced data rate, harvestable power, and physical-layer security. These advantages become more pronounced when the number of RIS elements, group size, or MIMO dimensions increase. Of particular interest, BD-RIS is shown to activate multi-stream transmission and achieve the asymptotic DoF at much lower transmit power than D-RIS thanks to its proficiency in channel shaping.
In this paper, we address the problem of joint allocation of transmit and jamming power at the source and destination, respectively, to enhance the long-term cumulative secrecy performance of an energy-harvesting wireless communication system until it stops functioning in the presence of an eavesdropper. The source and destination have energy-harvesting devices with limited battery capacities. The destination also has a full-duplex transceiver to transmit jamming signals for secrecy. We frame the problem as an infinite-horizon Markov decision process (MDP) problem and propose a reinforcement learning (RL)-based optimal joint power allocation (OJPA) algorithm that employs a policy iteration (PI) algorithm. Since the optimal algorithm is computationally expensive, we develop a low-complexity sub-optimal joint power allocation (SJPA) algorithm, namely, reduced state joint power allocation (RSJPA). Two other SJPA algorithms, the greedy algorithm (GA) and the naive algorithm (NA), are implemented as benchmarks. In addition, the OJPA algorithm outperforms the individual power allocation (IPA) algorithms termed individual transmit power allocation (ITPA) and individual jamming power allocation (IJPA), where the transmit and jamming powers, respectively, are optimized individually. The results show that the OJPA algorithm is also more energy efficient and significantly improves the secrecy performance compared to all SJPA algorithms. The OJPA algorithm also outperforms the secrecy performance of a genetic algorithm-based RL algorithm and a finite-horizon RL algorithm. The proposed RSJPA algorithm achieves nearly optimal performance with significantly less computational complexity, making it a balanced choice between complexity and performance.
Electroencephalography (EEG) has emerged as a cost-effective and efficient tool to support neurologists in the detection of Alzheimer's Disease (AD). However, most existing approaches rely heavily on manual feature engineering or data transformation. While such techniques may provide benefits when working with small-scale datasets, they often lead to information loss and distortion when applied to large-scale data, ultimately limiting model performance. Moreover, the limited subject scale and demographic diversity of datasets used in prior studies hinder comprehensive evaluation of model robustness and generalizability, thus restricting their applicability in real-world clinical settings. To address these challenges, we propose ADformer, a novel multi-granularity spatial-temporal transformer designed to capture both temporal and spatial features from raw EEG signals, enabling effective end-to-end representation learning. Our model introduces multi-granularity embedding strategies across both spatial and temporal dimensions, leveraging a two-stage intra-inter granularity self-attention mechanism to learn both local patterns within each granularity and global dependencies across granularities. We evaluate ADformer on 4 large-scale datasets comprising a total of 1,713 subjects, representing one of the largest corpora for EEG-based AD detection to date, under a cross-validated, subject-independent setting. Experimental results demonstrate that ADformer consistently outperforms existing methods, achieving subject-level F1 scores of 92.82%, 89.83%, 67.99%, and 83.98% on the 4 datasets, respectively, in distinguishing AD from healthy control (HC) subjects.
Photoacoustic computed tomography (PACT) is a non-invasive imaging modality, similar to ultrasound, with wide-ranging medical applications. Conventional PACT images are degraded by wavefront distortion caused by the heterogeneous speed of sound (SOS) in tissue. Accounting for these effects can improve image quality and provide medically useful information, but measuring the SOS directly is burdensome and the existing joint reconstruction method is computationally expensive. Traditional supervised learning techniques are currently inaccessible in this data-starved domain. In this work, we introduce an efficient, self-supervised joint reconstruction method that recovers SOS and high-quality images for ring array PACT systems. To solve this semi-blind inverse problem, we parametrize the SOS using either a pixel grid or a neural field (NF) and update it directly by backpropagating the gradients through a differentiable imaging forward model. Our method removes SOS aberrations more accurately and 35x faster than the current SOTA. We demonstrate the success of our method quantitatively in simulation and qualitatively on experimentally-collected and in vivo data. Our code and synthetic numerical phantoms are available on our project page: this https URL.
In this paper, we propose a new task called audio moment retrieval (AMR). Unlike conventional language-based audio retrieval tasks that search for short audio clips from an audio database, AMR aims to predict relevant moments in untrimmed long audio based on a text query. Given the lack of prior work in AMR, we first build a dedicated dataset, Clotho-Moment, consisting of large-scale simulated audio recordings with moment annotations. We then propose a DETR-based model, named Audio Moment DETR (AM-DETR), as a fundamental framework for AMR tasks. This model captures temporal dependencies within audio features, inspired by similar video moment retrieval tasks, thus surpassing conventional clip-level audio retrieval methods. Additionally, we provide manually annotated datasets to properly measure the effectiveness and robustness of our methods on real data. Experimental results show that AM-DETR, trained with Clotho-Moment, outperforms a baseline model that applies a clip-level audio retrieval method with a sliding window on all metrics, particularly improving Recall1@0.7 by 9.00 points. Our datasets and code are publicly available in this https URL.
This study introduces an innovative approach for adaptive power allocation in Non-Orthogonal Multiple Access (NOMA) systems, enhanced by the integration of spaceborne and terrestrial signals through a Reconfigurable Intelligent Surface (RIS). We develop an adaptive mechanism to adjust the power distribution between spaceborne and terrestrial signals according to variations in environmental conditions and elevation angles. This mechanism employs a sophisticated transition model that combines Gaussian Mixture Models with Log-Normal distributions to adaptively counteract the detrimental impacts of atmospheric attenuation and urban shadowing. These adaptive power adjustments significantly enhance system capacity, particularly improving the Signal-to-Interference-plus-Noise Ratio under diverse operational scenarios. Simulation studies confirm the efficacy of our method within an RIS-enhanced framework, showing an approximate 20\% increase in system capacity through optimized power management between spaceborne and terrestrial signals.
The successful deployment of deep learning-based acoustic echo and noise reduction (AENR) methods in consumer devices has spurred interest in developing low-complexity solutions, while emphasizing the need for robust performance in real-life applications. In this work, we propose a hybrid approach to enhance the state-of-the-art (SOTA) ULCNet model by integrating time alignment and parallel encoder blocks for the model inputs, resulting in better echo reduction and comparable noise reduction performance to existing SOTA methods. We also propose a channel-wise sampling-based feature reorientation method, ensuring robust performance across many challenging scenarios, while maintaining overall low computational and memory requirements.
Electromagnetically programmable information metasurfaces, as dynamically controllable 2D metamaterials, hold significant promise as low-profile hardware enabling passive wave control and signal generation for backscatter systems. However, current metasurface-based transmitter architectures fundamentally suffer from hardware non-modularization, forcing all transmitter functions onto nonlinear switch-based unit cells, which introduces symbol mapping inconsistency via phase coupling. Moreover, both temporal coding (limited by unit cell diodes) and space-time coding (impaired by symbol anisotropy) exhibit irreducible harmonic interference and entangled control of amplitude, phase, and beam direction. This paper proposes a metasurface-enabled superheterodyne architecture (MSA), comprising a digital up-conversion (DUC) module performing baseband-to-intermediate frequency (IF) conversion, filtering, and digital-to-analog conversion (DAC), and a reconfigurable metasurface featuring programmable unit cells that independently control both the magnitude and phase of the reflection coefficient. Systematically, the architecture leverages a dual-stage up-conversion process, typical of superheterodyne systems, but uniquely employs the metasurface for the final RF conversion stage. Building upon this framework, a proof-of-concept prototype featuring a 5.8 GHz magnitude-phase decoupled (MPD) metasurface (<15 degree phase deviation per state) and a DAC-based DUC module is presented. Extensive validation confirms the metasurface's capability for distortion-free mixing with arbitrary IF signals while maintaining consistent radiation patterns. The prototype successfully implements diverse QAM modulation schemes (4QAM to 256QAM) in mono-static and bi-static configurations, demonstrating symbol isotropy for spatially separated receivers and achieving a data rate of approximately 20 Mbps (at 5 MHz IF)...
5G cellular networks are now a reality and promise to improve key performance indicators (KPIs), such as Gbps data rates and latencies on the order of milliseconds. While some of these KPIs are achievable in urban scenarios, rural areas often face challenging connectivity conditions due to the lack of terrestrial network (TN) infrastructure. To solve this problem, non-terrestrial networks (NTNs), such as satellite-based solutions, have been introduced to provide coverage in remote regions. Therefore, a multi-connectivity approach can be integrated to simultaneously serve an end-user by merging satellite and cellular links in a joint approach. This study explores, using experimental data, the benefits of both TN-TN and TN-NTN multi-connectivity in rural environments. The results obtained demonstrate that a traditional single-connectivity approach may not be sufficient to provide service to rural environments, given the KPI requirements of several use cases within these rural areas. The multi-connectivity strategy, which jointly integrates 5G and satellite networks, meets the network availability requirements for latency, downlink throughput, and uplink throughput KPIs at least 98%, 99%, and 95% of the time, respectively, for several use cases, such as precision agriculture, livestock monitoring, and forest management. These include applications like microclimate monitoring, remote operational support, early pest detection, and real-time tracking of livestock transport.
Real-world system control requires both high-performing and interpretable controllers. Model-based control policies have gained popularity by using historical data to learn system costs and dynamics before implementation. However, this two-phase approach prevents these policies from achieving optimal control, as the metrics on which we train these models (e.g., mean squared error) often differ from the actual control system cost. In this paper, we present DiffOP, a Differentiable Optimization-based Policy for optimal control. In the proposed framework, control actions are derived by solving an optimization problem, where the control cost function and system dynamics can be parameterized as neural networks. Our key technical innovation lies in developing a hybrid optimization algorithm that combines policy gradients with implicit differentiation through the optimization layer, enabling end-to-end training with the actual cost feedback. Under standard regularity conditions, we prove DiffOP converges to stationary points at a rate of $O(1/K)$. Empirically, DiffOP achieves state-of-the-art performance in both nonlinear control tasks and real-world building control.
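The core mechanism, differentiating the task cost through the optimization layer's solution, can be shown on a toy quadratic inner problem where the implicit function theorem gives the layer's Jacobian in closed form. The following is a minimal NumPy sketch under assumed toy cost matrices and an assumed task target, not the paper's implementation:

```python
import numpy as np

# Hedged sketch: implicit differentiation through a quadratic optimization
# layer, in the spirit of (but not identical to) the DiffOP policy. The
# matrices Q, A, b and the task cost below are illustrative assumptions.

Q = np.array([[2.0, 0.3], [0.3, 1.5]])   # fixed PD curvature of the inner cost
rng = np.random.default_rng(0)
A = rng.normal(size=(2, 3))              # learnable parameter (state -> linear term)
b = np.zeros(2)                          # learnable offset
x = np.array([0.5, -1.0, 0.2])           # current state
u_ref = np.array([0.2, -0.1])            # assumed target for the toy task cost

def policy(A, b, x):
    # Inner problem: u* = argmin_u 0.5 u^T Q u + u^T (A x + b)
    return -np.linalg.solve(Q, A @ x + b)

def task_cost(u):
    # Actual control cost observed after acting (assumed known here)
    return 0.5 * np.sum((u - u_ref) ** 2)

u_star = policy(A, b, x)
dJ_du = u_star - u_ref                   # gradient of task cost at u*

# Implicit function theorem on the stationarity condition
#   g(u, theta) = Q u + A x + b = 0  =>  du/db = -Q^{-1}
dJ_db = -np.linalg.solve(Q, dJ_du)       # chain rule through the layer
dJ_dA = np.outer(dJ_db, x)               # since g depends on A only via (A x)

# One end-to-end gradient step on the layer parameters
lr = 0.1
A -= lr * dJ_dA
b -= lr * dJ_db
print("task cost before/after:", task_cost(u_star), task_cost(policy(A, b, x)))
```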
Automatic segmentation of brain tumors in intra-operative ultrasound (iUS) images could facilitate localization of tumor tissue during resection surgery. The lack of large annotated datasets limits current models' performance. In this paper, we investigated the use of tumor annotations in magnetic resonance imaging (MRI) scans, which are more accessible than annotations in iUS images, for training deep learning models for iUS brain tumor segmentation. We used 180 annotated MRI scans with corresponding unannotated iUS images, and 29 annotated iUS images. Image registration was performed to transfer the MRI annotations to the corresponding iUS images before training the nnU-Net model with different configurations of the data and label origins. The results showed no significant difference in Dice score between a model trained with only MRI-annotated tumors and models trained with only iUS annotations or both, or compared to expert annotations, indicating that MRI tumor annotations can be used as a substitute for iUS tumor annotations to train a deep learning model for automatic brain tumor segmentation in iUS images. The best model obtained an average Dice score of $0.62\pm0.31$, compared to $0.67\pm0.25$ for an expert neurosurgeon; performance on larger tumors was similar, but lower for the models on smaller tumors. In addition, the results showed that removing smaller tumors from the training sets improved the results. The main models are available here: this https URL
We study the distributed consensus of state vectors in a discrete-time multi-agent network with matrix edge weights using stochastic matrix convergence theory. We present a distributed asynchronous time update model wherein one randomly selected agent updates its state vector at a time by interacting with its neighbors. We prove that all agents converge to the same state vector almost surely when every edge weight matrix is positive definite. We study vector consensus in cooperative-competitive networks with edge weights being either positive or negative definite matrices and present a necessary and sufficient condition to achieve bipartite vector consensus in such networks. We study the network structures on which agents achieve zero consensus. We also present a convergence result on nonhomogeneous matrix products which is of independent interest in matrix convergence theory. All the results hold true for the synchronous time update model as well, in which all agents update their states simultaneously.
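The asynchronous update model is easy to state concretely: at each tick one random agent moves toward its neighbors through the positive-definite edge matrices. Below is a minimal NumPy sketch on a ring graph, with an assumed step size and update rule that the paper's exact model may not use:

```python
import numpy as np

# Hedged sketch: asynchronous vector consensus with positive-definite matrix
# edge weights on a small undirected ring graph. Step size and graph are
# illustrative assumptions.

rng = np.random.default_rng(1)
n, d = 4, 2                                # 4 agents, 2-dimensional states
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # ring graph

def random_pd(d):
    M = rng.normal(size=(d, d))
    return M @ M.T + d * np.eye(d)         # symmetric positive definite

W = {e: random_pd(d) for e in edges}       # one PD weight matrix per edge
x = rng.normal(size=(n, d))                # initial state vectors

eps = 0.05                                 # small step size for stability
for _ in range(5000):
    i = rng.integers(n)                    # randomly selected agent wakes up
    for (a, c), Wac in W.items():
        if i in (a, c):
            j = c if i == a else a
            x[i] = x[i] + eps * Wac @ (x[j] - x[i])  # pull toward neighbor

print("disagreement:", np.max(np.linalg.norm(x - x.mean(axis=0), axis=1)))
```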
Ride-hailing systems often suffer from spatiotemporal supply-demand imbalances, largely due to the independent and uncoordinated actions of drivers. While existing fleet rebalancing methods offer repositioning recommendations to idle drivers to improve service efficiency, they typically assume full driver compliance: an unrealistic premise in practice. We propose an Adherence-Aware Vehicle Rebalancing (AAVR) framework that explicitly models and addresses uncertainties in driver adherence, stemming from individual behavioral preferences and dynamic trust in the recommender system. Our approach integrates (i) region-specific XGBoost models for demand forecasting, (ii) a network-level XGBoost model for inter-region travel time prediction, (iii) driver-specific logit models to capture repositioning preferences, and (iv) driver-specific Beta-Bernoulli Bandit models with Thompson Sampling to track and update each driver's confidence in the system over time. These elements are incorporated into a novel optimization framework that generates adherence-aware repositioning recommendations. To enable real-time implementation, we further develop a linearized version of the AAVR model. Extensive simulations on the NYC taxi dataset demonstrate that AAVR significantly outperforms four state-of-the-art adherence-agnostic baselines, achieving a 28% increase in served demand, a 22.7% reduction in customer wait times, a 28.6% increase in platform earnings, and a 26% gain in driver profits on average.
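Component (iv) admits a compact illustration: each driver's adherence is tracked with a Beta posterior updated from accept/reject outcomes, and Thompson Sampling draws plausible adherence rates when issuing recommendations. A hedged sketch with invented adherence probabilities and an invented decision rule:

```python
import numpy as np

# Hedged sketch: per-driver Beta-Bernoulli adherence tracking with Thompson
# Sampling, one plausible reading of component (iv) of AAVR. The "true"
# adherence rates and the recommendation rule are illustrative assumptions.

rng = np.random.default_rng(2)
true_adherence = {"driver_A": 0.8, "driver_B": 0.3}
alpha = {d: 1.0 for d in true_adherence}   # Beta prior: successes + 1
beta = {d: 1.0 for d in true_adherence}    # Beta prior: failures + 1

for step in range(200):
    # Thompson Sampling: draw a plausible adherence rate for each driver
    sampled = {d: rng.beta(alpha[d], beta[d]) for d in true_adherence}
    # Recommend repositioning to the driver most likely to comply
    chosen = max(sampled, key=sampled.get)
    complied = rng.random() < true_adherence[chosen]
    # Posterior update from the observed accept/reject outcome
    alpha[chosen] += complied
    beta[chosen] += 1 - complied

for d in true_adherence:
    print(d, "posterior mean adherence:", alpha[d] / (alpha[d] + beta[d]))
```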
Domain shifts in medical image segmentation, particularly when data comes from different centers, pose significant challenges. Intra-center variability, such as differences in scanner models or imaging protocols, can cause domain shifts as large as, or even larger than, those between centers. To address this, we propose the "one image as one domain" (OIOD) hypothesis, which treats each image as a unique domain, enabling flexible and robust domain generalization. Based on this hypothesis, we develop a unified disentanglement-based domain generalization (UniDDG) framework, which simultaneously handles both multi-source and single-source domain generalization without requiring explicit domain labels. This approach simplifies training with a fixed architecture, independent of the number of source domains, reducing complexity and enhancing scalability. We decouple each input image into content representation and style code, then exchange and combine these within the batch for segmentation, reconstruction, and further disentanglement. By maintaining distinct style codes for each image, our model ensures thorough decoupling of content representations and style codes, improving domain invariance of the content representations. Additionally, we enhance generalization with expansion mask attention (EMA) for boundary preservation and style augmentation (SA) to simulate diverse image styles, improving robustness to domain shifts. Extensive experiments show that our method achieves Dice scores of 84.43% and 88.91% for multi-source to single-center and single-center generalization in optic disc and optic cup segmentation, respectively, and 86.96% and 88.56% for prostate segmentation, outperforming current state-of-the-art domain generalization methods and offering superior performance and adaptability across clinical settings.
The attention mechanism is the key to the success of transformers in different machine learning tasks. However, the quadratic complexity with respect to sequence length of the vanilla softmax-based attention mechanism becomes the major bottleneck for long-sequence applications, such as vision tasks. Although various efficient linear attention mechanisms have been proposed, they need to sacrifice performance to achieve high efficiency. Moreover, memory-efficient methods, such as FlashAttention-1 through FlashAttention-3, still have quadratic computational complexity, which can be further improved. In this paper, we propose a novel efficient linear fast attention (ELFATT) mechanism to achieve low memory input/output operations, linear computational complexity, and high performance at the same time. ELFATT offers 4-7x speedups over the vanilla softmax-based attention mechanism in high-resolution vision tasks without losing performance. ELFATT is FlashAttention friendly. Using FlashAttention-2 acceleration, ELFATT still offers 2-3x speedups over the vanilla softmax-based attention mechanism on high-resolution vision tasks without losing performance. Even in some non-vision tasks of the long-range arena benchmark, ELFATT still achieves leading performance and offers 1.2-2.3x speedups over FlashAttention-2. Even on edge GPUs, ELFATT still offers 1.6x to 2.0x speedups compared to state-of-the-art attention mechanisms in various power modes from 5W to 60W. Furthermore, ELFATT can be used to enhance and accelerate diffusion tasks directly without training.
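To make the complexity claim concrete, the sketch below contrasts a generic kernelized linear attention, which runs in O(N d^2) and never forms the N x N matrix, with vanilla softmax attention. This is a standard linear-attention baseline shown for illustration only, not the ELFATT mechanism itself:

```python
import numpy as np

# Hedged sketch: generic kernelized linear attention vs. softmax attention.
# Illustrates only how linear mechanisms avoid the O(N^2) score matrix; the
# ELFATT design described in the paper is different and not reproduced here.

def linear_attention(Q, K, V, eps=1e-6):
    # Feature map phi(x) = elu(x) + 1 keeps values positive (a common choice
    # from linear-transformer literature); other maps are possible.
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))
    Qp, Kp = phi(Q), phi(K)                   # (N, d)
    KV = Kp.T @ V                             # (d, d_v): computed once
    Z = Qp @ Kp.sum(axis=0) + eps             # (N,): per-query normalization
    return (Qp @ KV) / Z[:, None]             # (N, d_v), no N x N matrix

def softmax_attention(Q, K, V):
    S = Q @ K.T / np.sqrt(Q.shape[1])         # (N, N): the quadratic bottleneck
    S = np.exp(S - S.max(axis=1, keepdims=True))
    return (S / S.sum(axis=1, keepdims=True)) @ V

rng = np.random.default_rng(3)
N, d = 1024, 32
Q, K, V = rng.normal(size=(3, N, d)) * 0.1
out_lin, out_sm = linear_attention(Q, K, V), softmax_attention(Q, K, V)
print("shapes:", out_lin.shape, out_sm.shape)  # same interface, different cost
```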
We study prescribed-time extremum seeking (PT-ES) for scalar maps in the presence of time delay. The PT-ES problem has been solved by Yilmaz and Krstic in 2023 using chirpy probing and time-varying singular gains. To alleviate the gain singularity, we present an alternative approach, employing delays with bounded time-periodic gains, for achieving prescribed-time convergence to the extremum. Our results are not extensions or refinements but a new methodological direction, even in the absence of delay on the map. The main result we present compensates the map's delay and uses a perturbation-based, Newton (rather than gradient) approach. With the help of averaging theorems in infinite dimension, specifically for retarded functional differential equations (RFDEs), we conduct a prescribed-time convergence analysis on a suitable perturbation-averaged target ES system, which contains the time-periodic gains of the map and feedback delays. We further extend our method to multi-variable static maps and illustrate our results through numerical simulations.
To guarantee global coverage and ubiquitous connectivity, Non-terrestrial Network (NTN) technology has been regarded as a key enabler of the Sixth Generation (6G) network, encompassing unmanned aerial vehicles (UAVs), high-altitude platforms (HAPs), and satellites. It is noted that the unique characteristics of the various NTN platforms directly impact the design and implementation of NTNs, which results in highly dynamic and heterogeneous networks. Even within the same tier, such as the space tier, NTNs are developed on different platforms, including Low Earth Orbit (LEO), Medium Earth Orbit (MEO), and Geostationary Earth Orbit (GEO). Therefore, distributed coordination among heterogeneous NTNs remains an important challenge. Although distributed learning frameworks find a wide range of applications by leveraging rich distributed data and computation resources, an explicit and systematic analysis of the individual layers' challenges and the corresponding distributed coordination solutions in heterogeneous NTNs has not yet been provided. In this article, we first summarize the unique characteristics of each NTN platform and analyze the corresponding impact on the design and implementation of the NTN. We then identify the communication challenges of heterogeneous NTNs in individual layers, where potential coordinated solutions are identified. We further illustrate the multi-agent deep reinforcement learning (MADRL) algorithms tailored for coordinated solutions in heterogeneous NTNs. Last but not least, we present a case study of the user scheduling optimization problem in heterogeneous UAV-based cellular networks, where the multi-agent deep deterministic policy gradient (MADDPG) technique is developed to validate the effectiveness of distributed coordination in heterogeneous NTNs.
In this work, we develop novel MRI reconstruction approaches that are accurate, fast, and low-latency for a large number of dynamic MRI applications, sampling schemes, and sampling rates, without any problem-specific parameter tuning. We refer to this property of a single algorithm, without parameter tuning, being accurate and fast across many settings as generalizability. Generalizability is possible only for simple (few-parameter) models such as low-rank (LR) or LR-plus-sparse (L+S), and for simple few-parameter algorithms based on these models, which is what we develop and evaluate in this work.
Due to domain shifts across diverse medical imaging modalities, learned segmentation models often suffer significant performance degradation during deployment. These domain shifts, typically caused by variations in imaging systems, generally comprise two principal components: 1) \textbf{"style" shifts}, referring to global disparities in image properties such as illumination, contrast, and color; and 2) \textbf{"content" shifts}, which involve local discrepancies in anatomical structures. To address domain shifts in medical image segmentation, a core challenge arises: how can we decouple the factors within images that determine their "style" and "content" components? To this end, we first propose a linear style-content decomposition method that factorizes an image into style codes and content maps, explicitly modeling the "style" and "content" components. Building on this, we introduce a \textbf{Sty}le-\textbf{Con}tent decomposition-based data \textbf{a}ugmentation algorithm (StyCona), which leverages this decomposition strategy to guide augmentation of both the global style and local content of source-domain images, enabling the training of a well-generalized model for domain-generalizable medical image segmentation. StyCona is a simple yet effective plug-and-play module that substantially improves model generalization without requiring additional training parameters or modifications to segmentation model architectures. Experiments on cardiac magnetic resonance imaging and fundus photography segmentation tasks, with single and multiple target domains respectively, demonstrate the effectiveness of StyCona and its superiority over state-of-the-art domain generalization methods. The code will be released at this https URL.
The Fluid Antenna System (FAS) overcomes the spatial degree-of-freedom limitations of conventional static antenna arrays in wireless communications. This capability critically depends on acquiring full Channel State Information across all accessible ports. Existing studies focus exclusively on narrowband FAS, performing channel estimation solely in the spatial domain. This work proposes a channel estimation and spatial equalization (SE) framework for wideband FAS, revealing for the first time an inherent group-sparse structure in aperture-limited FAS channels. First, we establish a group-sparse recovery framework for the frequency-space characteristics (FSC) of FAS, formally characterizing leakage-induced sparsity degradation from limited aperture and bandwidth as a structured group-sparsity problem. By deriving a dictionary-adapted group restricted isometry property, we prove tight recovery bounds for a convex $\ell_1/\ell_2$-mixed norm optimization formulation that preserves leakage-aware sparsity patterns. Second, we develop a descending-correlation group orthogonal matching pursuit algorithm that systematically relaxes leakage constraints to reduce subcoherence. This approach enables FSC recovery with accelerated convergence and superior performance compared to conventional compressive sensing methods like OMP or GOMP. Third, we formulate spatial equalization as a mixed-integer linear programming problem, complemented by a greedy algorithm that maintains near-optimal performance. Simulation results demonstrate that the proposed channel estimation algorithm effectively resolves energy misallocation and enables recovery of weak details, achieving superior recovery accuracy and convergence rate. The SE framework suppresses deep fading phenomena and largely reduces time consumption overhead while maintaining equivalent link reliability.
Identifying controlled safety invariant sets (CSISs) is essential in safety-critical applications. This paper tackles the problem of identifying CSISs for black-box discrete-time systems, where the model is unknown and only limited simulation data is accessible. Traditionally, a CSIS is defined as a subset of a safe set, encompassing initial states for which a control input exists that keeps the system within the set at the next time step; this is referred to as the one-step invariance property. However, the requirement for one-step invariance can be equivalently translated into a stricter condition of ``always-invariance'', meaning that there exist control inputs capable of keeping the system within this set indefinitely. Such a condition may prove overly stringent or impractical for black-box systems, where predictions can become unreliable beyond a single time step or a limited number of finite time steps. To overcome the challenges posed by black-box systems, we reformulate the one-step invariance property in a ``Probably Approximately Correct'' (PAC) sense. This approach allows us to assess the probability that a control input exists to keep the system within the CSIS at the next time step, with a predefined level of confidence. If the system successfully remains within the set at the next time step, we can then reapply the invariance evaluation to the new state, thereby facilitating a recursive assurance of invariance. Our method employs barrier functions and scenario optimization, resulting in a linear programming method for estimating PAC CSISs. Finally, the effectiveness of our approach is demonstrated on several examples.
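The flavor of the barrier-function-plus-scenario-optimization step can be conveyed by a toy LP: fit a quadratic barrier from sampled one-step transitions so that sampled states and their observed successors satisfy B >= 0 while sampled unsafe states sit below a maximized margin. The dynamics, feature map, and constraint set below are illustrative assumptions, and the paper's PAC confidence accounting is omitted:

```python
import numpy as np
from scipy.optimize import linprog

# Hedged toy sketch of a scenario LP for a barrier B(x) = a1*x^2 + a2*x + a3.
# This is a simplified illustration of the mechanics only, not the paper's
# exact PAC formulation.

rng = np.random.default_rng(4)
f = lambda x, u: 0.9 * x + 0.2 * u                   # black-box dynamics (only sampled)

X = rng.uniform(-1.0, 1.0, size=200)                 # sampled candidate-safe states
U = -0.5 * X                                         # controls observed to work
Xn = f(X, U)                                         # observed next states
Z = rng.uniform(1.5, 2.0, size=50) * rng.choice([-1, 1], size=50)  # unsafe samples

phi = lambda x: np.stack([x**2, x, np.ones_like(x)], axis=1)  # linear in coeffs

# Decision variables [a1, a2, a3, margin]; maximize margin <=> minimize -margin.
A_ub = np.vstack([
    np.hstack([-phi(X),  np.zeros((len(X), 1))]),    # B(x_i)        >= 0
    np.hstack([-phi(Xn), np.zeros((len(Xn), 1))]),   # B(f(x_i,u_i)) >= 0
    np.hstack([ phi(Z),  np.ones((len(Z), 1))]),     # B(z_j)        <= -margin
])
res = linprog(c=[0, 0, 0, -1], A_ub=A_ub, b_ub=np.zeros(len(A_ub)),
              bounds=[(-10, 10)] * 3 + [(0, 10)])
print("barrier coeffs:", res.x[:3], "certified margin:", res.x[3])
```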
Optical Wireless Communication (OWC) has gained significant attention due to its high-speed data transmission and throughput. Optical wireless channels are often assumed to be flat, but we evaluate frequency-selective channels to account for high-data-rate optical wireless links or very dispersive environments. To address this, the paper presents a robust, low-complexity channel estimation framework that mitigates frequency-selective effects and thereby improves system reliability and performance. The framework contains a neural network that can estimate general optical wireless channels without prior channel information about the environment. Based on this estimate and the corresponding delay spread, one of several candidate offline-trained neural networks is activated to predict the channel. Simulation results demonstrate that the proposed method achieves improved and robust normalized mean square error (NMSE) and bit error rate (BER) performance compared to conventional estimation methods while maintaining computational efficiency. These findings highlight the potential of neural network solutions in enhancing the performance of OWC systems under indoor channel conditions.
The effectiveness of data-driven techniques heavily depends on the input signal used to generate the estimation data. However, a significant research gap exists in the field of input design for nonlinear dynamic system identification. In particular, existing methods largely overlook the minimization of the generalization error, i.e., model inaccuracies in regions not covered by the estimation dataset. This work addresses this gap by proposing an input design method that embeds a novel optimality criterion within a receding horizon control (RHC)-based optimization framework. The distance-based optimality criterion induces a space-filling design within a user-defined region of interest in a surrogate model's input space, requiring only minimal prior knowledge. Additionally, the method is applicable both online, where model parameters are continuously updated based on process observations, and offline, where a fixed model is employed. The space-filling performance of the proposed strategy is evaluated on an artificial example and compared to state-of-the-art methods, demonstrating superior efficiency in exploring process operating regions.
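A minimal version of the distance-based, space-filling criterion is greedy maximin selection: repeatedly pick the candidate input farthest from everything collected so far. The sketch below omits the surrogate model and the RHC embedding described in the abstract; the region of interest and candidate pool are illustrative assumptions:

```python
import numpy as np

# Hedged sketch: greedy maximin (space-filling) design over a region of
# interest in a model's input space. At each step the candidate that
# maximizes the minimum distance to already-collected points is chosen.

rng = np.random.default_rng(5)
region = rng.uniform(-1, 1, size=(2000, 2))   # candidate inputs in the ROI
collected = [np.zeros(2)]                      # data gathered so far

for _ in range(20):
    # Distance from every candidate to every collected point
    D = np.linalg.norm(region[:, None, :] - np.array(collected)[None], axis=2)
    best = np.argmax(D.min(axis=1))            # most "unexplored" candidate
    collected.append(region[best])

pts = np.array(collected)
print("min pairwise distance of design:",
      min(np.linalg.norm(p - q) for i, p in enumerate(pts) for q in pts[:i]))
```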
Infrared small target detection (IRSTD) is widely recognized as a challenging task due to the inherent limitations of infrared imaging, including low signal-to-noise ratios, lack of texture details, and complex background interference. Most existing methods model IRSTD as a semantic segmentation task, but they suffer from two critical drawbacks: (1) blurred target boundaries caused by long-distance imaging dispersion; and (2) excessive computational overhead due to indiscriminate feature stacking. To address these issues, we propose Lightweight Efficiency Infrared Small Target Detection (LE-IRSTD), a lightweight and efficient framework based on YOLOv8n, with the following key innovations. First, we identify that the multiple bottleneck structures within the C2f component of the YOLOv8n backbone contribute to an increased computational burden. Therefore, we implement the Mobile Inverted Bottleneck Convolution block (MBConvblock) and Bottleneck Structure block (BSblock) in the backbone, effectively balancing the trade-off between computational efficiency and the extraction of deep semantic information. Second, we introduce the Attention-based Variable Convolution Stem (AVCStem) structure, substituting the final convolution with Variable Kernel Convolution (VKConv), which allows for adaptive convolutional kernels that can take various shapes, enlarging the receptive field for target extraction. Finally, we employ Global Shuffle Convolution (GSConv) to shuffle the channel-dimension features obtained from different convolutional approaches, thereby enhancing the robustness and generalization capabilities of our method. Experimental results demonstrate that our LE-IRSTD method achieves compelling results in both accuracy and lightweight performance, outperforming several state-of-the-art deep learning methods.
Terahertz single-pixel imaging (THz SPI) has garnered widespread attention for its potential to overcome challenges associated with THz focal plane arrays. However, the inherently long wavelength of THz waves limits imaging resolution, while achieving subwavelength resolution requires harsh experimental conditions and time-consuming processes. Here, we propose a sub-diffraction THz backpropagation SPI technique. We illuminate the object with continuous-wave 0.36-THz radiation ($\lambda_0 = 833.3~\mu$m). The transmitted THz wave is modulated by prearranged patterns generated on a 500-$\mu$m-thick silicon wafer and subsequently recorded by a far-field single-pixel detector. An untrained neural network constrained by the physical SPI process iteratively reconstructs the THz images with an ultralow sampling ratio of 1.5625%, significantly reducing the long sampling times. To further suppress THz diffraction-field effects, backpropagation SPI from near field to far field is implemented by integrating a THz physical propagation model into the output layer of the network. Notably, using the thick wafer, where the THz evanescent field cannot be fully recorded, we achieve a spatial resolution of 118 $\mu$m ($\sim\lambda_0/7$) through backpropagation SPI, thus eliminating the need for ultrathin photomodulators. This approach provides an efficient solution for advancing THz microscopic imaging and addressing other inverse imaging challenges.
Accurately tracking particles and determining their coordinate along the optical axis is a major challenge in optical microscopy, especially when extremely high precision is needed. In this study, we introduce a deep learning approach using convolutional neural networks (CNNs) that can determine axial coordinates from dual-focal-plane images without relying on predefined models. Our method achieves an axial localization precision of 40 nanometers, six times better than traditional single-focal-plane techniques. The model's simple design and strong performance make it suitable for a wide range of uses, including dark matter detection, proton therapy for cancer, and radiation protection in space. It also shows promise in fields like biological imaging, materials science, and environmental monitoring. This work highlights how machine learning can turn complex image data into reliable, precise information, offering a flexible and powerful tool for many scientific applications.
Accurate placental segmentation is essential for quantitative analysis of the placenta. However, this task is particularly challenging in T2*-weighted placental imaging due to: (1) weak and inconsistent boundary contrast across individual echoes; (2) the absence of manual ground truth annotations for all echo times; and (3) motion artifacts across echoes caused by fetal and maternal movement. In this work, we propose a contrast-augmented segmentation framework that leverages complementary information across multi-echo T2*-weighted MRI to learn robust, contrast-invariant representations. Our method integrates: (i) masked autoencoding (MAE) for self-supervised pretraining on unlabeled multi-echo slices; (ii) masked pseudo-labeling (MPL) for unsupervised domain adaptation across echo times; and (iii) global-local collaboration to align fine-grained features with global anatomical context. We further introduce a semantic matching loss to encourage representation consistency across echoes of the same subject. Experiments on a clinical multi-echo placental MRI dataset demonstrate that our approach generalizes effectively across echo times and outperforms both single-echo and naive fusion baselines. To our knowledge, this is the first work to systematically exploit multi-echo T2*-weighted MRI for placental segmentation.
The decarbonization goals worldwide drive the energy transition of power distribution grids, which operate under increasingly volatile conditions and closer to their technical limits. In this context, localized operational data with high temporal and spatial resolution is essential for their effective planning and regulation. Nevertheless, information on grid-connected distributed energy resources, such as electric vehicles, photovoltaic systems, and heat pumps, is often fragmented, inconsistent, and unavailable. This work introduces a comprehensive database of distributed energy resources and non-controllable loads allocated in Switzerland's medium- and low-voltage distribution grid models, covering over 2 million points of connection. Remarkably, this data specifies the flexibility capabilities of the controllable devices, with a set of projections aligned with national forecasts for 2030, 2040, and 2050. The database supports studies on flexibility provision of distributed energy resources, distribution grid resilience, and national energy policy, among other topics. Importantly, its modular structure allows users to extract national- and local-scale information across medium- and low-voltage systems, enabling broad applicability across locations.
This paper proposes an analog orthogonal frequency division multiplexing (OFDM) architecture based on the real-time Fourier transform (RTFT). The core enabling component is a linear-chirp phaser with engineered group velocity dispersion (GVD), which realizes RTFT and performs frequency-to-time mapping in the analog domain. In this architecture, conventional digital fast Fourier transform (FFT) and inverse FFT (IFFT) processors are replaced by two linear-chirp phasers with opposite group delay dispersions, respectively. Theoretical analysis demonstrates that, under specific phaser conditions, the OFDM signal generated by the RTFT-based analog system is mathematically equivalent to that of a conventional digital OFDM system. This equivalence is further supported by simulation results, which confirm accurate symbol transmission and recovery, as well as robustness to multipath fading when a prefix is applied. Benefiting from the use of passive microwave components, the analog OFDM system offers ultra-fast processing with reduced power consumption. Overall, this work establishes a foundation for fully analog or hybrid analog-digital OFDM systems, offering a promising solution for next-generation high-speed, wideband, and energy-efficient wireless communication platforms.
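Since the claim is mathematical equivalence between the RTFT-based analog chain and digital IFFT/FFT processing, a short NumPy sketch of the digital baseline (the processing the phasers replace) makes the equivalence target concrete, including the multipath robustness from the prefix. The channel taps and prefix length here are illustrative assumptions:

```python
import numpy as np

# Hedged sketch: baseline digital OFDM with IFFT/FFT, i.e., the processing
# the paper's chirp phasers perform in the analog domain. Demonstrates only
# exact symbol recovery through a multipath channel; the phaser hardware
# itself is not modeled.

rng = np.random.default_rng(6)
N, cp = 64, 16                                 # subcarriers and cyclic prefix
qpsk = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)

tx = np.fft.ifft(qpsk) * np.sqrt(N)            # "IFFT processor" (analog: phaser 1)
tx = np.concatenate([tx[-cp:], tx])            # prefix guards against multipath

h = np.array([1.0, 0.4, 0.2])                  # simple multipath channel (assumed)
rx = np.convolve(tx, h)[:len(tx)]

rx = rx[cp:cp + N]                             # strip prefix
Y = np.fft.fft(rx) / np.sqrt(N)                # "FFT processor" (analog: phaser 2)
H = np.fft.fft(h, N)                           # per-subcarrier channel response
symbols_hat = Y / H                            # one-tap frequency-domain equalizer

print("max symbol error:", np.max(np.abs(symbols_hat - qpsk)))
```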
\textit{DPLib} is an open-source MATLAB-based benchmark library created to support research and development in distributed and decentralized power system analysis and optimization. Distributed and decentralized methods offer scalability, privacy preservation, and resilience to single points of failure, making them increasingly important for modern power systems. However, unlike centralized tools such as MATPOWER, no general-purpose, reproducible data library currently exists for distributed power system studies. DPLib, available at \href{this https URL}{GitHub}, fills this gap by providing a standard power system library featuring over 20 multi-region benchmark test cases of varying sizes, along with a graph-based partitioning toolkit that decomposes any MATPOWER test system into multiple electrically coherent regions. The partitioning toolkit, an easy-to-use MATLAB code, generates standardized \texttt{.mat} and \texttt{.m} files, along with region visualizations for intuitive understanding. We also provide modular, easy-to-use distributed optimal power flow (OPF) solvers: an alternating direction method of multipliers (ADMM)-based DC-OPF solver implemented in YALMIP, and an ADMM-based AC-OPF solver leveraging IPOPT. These solvers validate the generated test systems for distributed optimization applications. Numerical results confirm the correctness of the generated test cases, establishing DPLib as a foundation for reproducible distributed power system research.
Colorectal polyp segmentation is critical for early detection of colorectal cancer, yet weak and low contrast boundaries significantly limit automated accuracy. Existing deep models either blur fine edge details or rely on handcrafted filters that perform poorly under variable imaging conditions. We propose MEGANet-W, a Wavelet Driven Edge Guided Attention Network that injects directional, parameter free Haar wavelet edge maps into each decoder stage to recalibrate semantic features. The key novelties of MEGANet-W include a two-level Haar wavelet head for multi-orientation edge extraction; and Wavelet Edge Guided Attention (W-EGA) modules that fuse wavelet cues with boundary and input branches. On five public polyp datasets, MEGANet-W consistently outperforms existing methods, improving mIoU by up to 2.3% and mDice by 1.2%, while introducing no additional learnable parameters. This approach improves reliability in difficult cases and offers a robust solution for medical image segmentation tasks requiring precise boundary detection.
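The parameter-free Haar edge maps at the heart of MEGANet-W can be sketched directly: a single 2x2 Haar step yields horizontal, vertical, and diagonal details, and applying it again to the approximation gives the second level. The toy image below is an assumption, and the fusion of these maps into decoder features (W-EGA) follows the paper, not this snippet:

```python
import numpy as np

# Hedged sketch: two-level Haar wavelet edge extraction of the kind the
# W-EGA modules consume. Subband naming conventions vary; the comments give
# one common reading.

def haar_level(img):
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    LL = (a + b + c + d) / 4.0                  # approximation
    LH = (a - b + c - d) / 4.0                  # responds to vertical edges
    HL = (a + b - c - d) / 4.0                  # responds to horizontal edges
    HH = (a - b - c + d) / 4.0                  # responds to diagonal edges
    return LL, np.sqrt(LH**2 + HL**2 + HH**2)   # (approx, edge magnitude)

img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0   # toy "polyp" blob
LL1, edges1 = haar_level(img)                   # level-1 edge map (32x32)
LL2, edges2 = haar_level(LL1)                   # level-2 edge map (16x16)
print("edge energy per level:", edges1.sum(), edges2.sum())
```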
A procedure for determining the optimal group delay of a Linearly-Constrained Minimum-Variance (LCMV) beamformer is proposed. Two ways of selecting the optimal delay are recommended: the first is the solution that minimizes the noise power; the second is the solution that minimizes the processing delay. The potential of this hitherto unexplored degree of design freedom is investigated using simulated Very-High-Frequency (VHF) communication and Ultra-High-Frequency (UHF) bistatic radar applications.
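The underlying algebra is the standard LCMV solution: for constraints $C^H w = f$, the minimum-variance weights are $w = R^{-1}C(C^H R^{-1}C)^{-1}f$ with output noise power $f^H(C^H R^{-1}C)^{-1}f$; scanning the group delay that parameterizes the linear-phase response $f$ and keeping the minimizer corresponds to the first recommended selection rule. A hedged NumPy sketch with an illustrative array geometry, band, and noise model, not the paper's simulation setup:

```python
import numpy as np

# Hedged sketch: scan candidate group delays of a broadband LCMV beamformer
# and pick the one minimizing output noise power. All parameters are
# illustrative assumptions.

M, L = 4, 8                                   # sensors, FIR taps per sensor
freqs = np.linspace(0.05, 0.45, 5)            # normalized in-band frequencies
R = np.eye(M * L) + 0.1 * np.ones((M * L, M * L))   # assumed noise covariance

def constraint_matrix():
    cols = []
    for f in freqs:
        d = np.ones(M, dtype=complex)          # broadside steering (no delay)
        e = np.exp(-2j * np.pi * f * np.arange(L))   # FIR tap phasing
        cols.append(np.kron(d, e))
    return np.stack(cols, axis=1)              # (M*L, n_freqs)

C = constraint_matrix()
G = np.linalg.inv(C.conj().T @ np.linalg.solve(R, C))   # (C^H R^-1 C)^-1

best = None
for tau in range(L):                           # candidate group delays (samples)
    f_resp = np.exp(-2j * np.pi * freqs * tau)          # linear-phase target
    noise_power = np.real(f_resp.conj() @ G @ f_resp)   # w^H R w at the optimum
    if best is None or noise_power < best[1]:
        best = (tau, noise_power)
print("optimal group delay %d taps, noise power %.4f" % best)
```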
Device sizing is a critical yet challenging step in analog and mixed-signal circuit design, requiring careful optimization to meet diverse performance specifications. This challenge is further amplified under process, voltage, and temperature (PVT) variations, which cause circuit behavior to shift across different corners. While reinforcement learning (RL) has shown promise in automating sizing for fixed targets, training a generalized policy that can adapt to a wide range of design specifications under PVT variations requires many more training samples and resources. To address these challenges, we propose a \textbf{Goal-conditioned RL framework} that enables efficient policy training for analog device sizing across PVT corners, with strong generalization capability. To improve sample efficiency, we introduce Pareto-front Dominance Goal Sampling, which constructs an automatic curriculum by sampling goals from the Pareto frontier of previously achieved goals. This strategy is further enhanced by integrating Conservative Hindsight Experience Replay to stabilize training and accelerate convergence. To reduce simulation overhead, our framework incorporates a Skip-on-Fail simulation strategy. Experiments on benchmark circuits demonstrate $\sim$1.6$\times$ improvement in sample efficiency and $\sim$4.1$\times$ improvement in simulation efficiency compared to existing sizing methods. Code and benchmarks are publicly available at this https URL
Speech enhancement in audio-only settings remains challenging, particularly in the presence of interfering speakers. This paper presents a simple yet effective real-time audio-visual speech enhancement (AVSE) system, RAVEN, which isolates and enhances the on-screen target speaker while suppressing interfering speakers and background noise. We investigate how visual embeddings learned from audio-visual speech recognition (AVSR) and active speaker detection (ASD) contribute to AVSE across different SNR conditions and numbers of interfering speakers. Our results show that concatenating embeddings from AVSR and ASD models provides the greatest improvement in low-SNR, multi-speaker environments, while AVSR embeddings alone perform best in noise-only scenarios. In addition, we develop a real-time streaming system that operates on a standard computer CPU, and we provide a video demonstration and code repository. To our knowledge, this is the first open-source implementation of a real-time AVSE system.
This work was completed on a whim after discussions with my junior colleague. The motion direction angle affects the micro-Doppler spectrum width, so determining the human motion direction can provide important prior information for downstream tasks such as gait recognition. However, Doppler-Time map (DTM)-based methods still have room for improvement in achieving feature augmentation and motion determination simultaneously. In response, a low-cost but accurate radar-based human motion direction determination (HMDD) method is explored in this paper. In detail, radar-based human gait DTMs are first generated, and feature augmentation is then achieved using a feature linking model. Subsequently, the HMDD is implemented through a lightweight and fast Vision Transformer-Convolutional Neural Network hybrid model structure. The effectiveness of the proposed method is verified on an open-source dataset. The open-source code of this work is released at: this https URL.
High-resolution satellite imagery is essential for geospatial analysis, yet differences in spatial resolution across satellite sensors present challenges for data fusion and downstream applications. Super-resolution techniques can help bridge this gap, but existing methods rely on artificially downscaled images rather than real sensor data and are not well suited for heterogeneous satellite sensors with differing spectral and temporal characteristics. In this work, we develop a preliminary framework to align and upscale Harmonized Landsat Sentinel 30 m (HLS30) imagery using Harmonized Landsat Sentinel 10 m (HLS10) imagery from the HLS dataset as a reference. Our approach aims to bridge the resolution gap between these sensors and improve the quality of super-resolved Landsat imagery. Quantitative and qualitative evaluations demonstrate the effectiveness of our method, showing its potential for enhancing satellite-based sensing applications. This study provides insights into the feasibility of heterogeneous satellite image super-resolution and highlights key considerations for future advancements in the field.
While large audio-language models have advanced open-ended audio understanding, they still fall short of nuanced human-level comprehension. This gap persists largely because current benchmarks, limited by data annotations and evaluation metrics, fail to reliably distinguish between generic and highly detailed model outputs. To this end, this work introduces MECAT, a Multi-Expert Constructed Benchmark for Fine-Grained Audio Understanding Tasks. Generated via a pipeline that integrates analysis from specialized expert models with Chain-of-Thought large language model reasoning, MECAT provides multi-perspective, fine-grained captions and open-set question-answering pairs. The benchmark is complemented by a novel metric: DATE (Discriminative-Enhanced Audio Text Evaluation). This metric penalizes generic terms and rewards detailed descriptions by combining single-sample semantic similarity with cross-sample discriminability. A comprehensive evaluation of state-of-the-art audio models is also presented, providing new insights into their current capabilities and limitations. The data and code are available at this https URL
In this paper, we present a Geometry-aware Neural Interpolation (Geo-NI) framework for light field rendering. Previous learning-based approaches either rely on the capability of neural networks to perform direct interpolation, which we dub Neural Interpolation (NI), or explore scene geometry for novel view synthesis, also known as Depth Image-Based Rendering (DIBR). Instead, we incorporate the ideas behind these two kinds of approaches by launching the NI with a novel DIBR pipeline. Specifically, the proposed Geo-NI first performs NI using the input light field sheared by a set of depth hypotheses. The DIBR is then implemented by assigning a novel reconstruction cost volume to the sheared light fields according to the reconstruction quality under different depth hypotheses. The reconstruction cost is interpreted as a blending weight to render the final output light field by blending the reconstructed light fields along the dimension of the depth hypothesis. By combining the superiorities of NI and DIBR, the proposed Geo-NI is able to render views with large disparity with the help of scene geometry while also reconstructing non-Lambertian effects when depth is ambiguous. Extensive experiments on various datasets demonstrate the superior performance of the proposed geometry-aware light field rendering framework.
Purpose: To develop and evaluate a new pulse sequence for highly accelerated distortion-free diffusion MRI (dMRI) by inserting additional echoes without prolonging TR, when generalized slice dithered enhanced resolution (gSlider) radiofrequency encoding is used for volumetric acquisition. Methods: A phase-reversed interleaved multi-echo acquisition (PRIME) was developed for rapid, high-resolution, and distortion-free dMRI, which includes several echoes where the first echo is for target diffusion-weighted imaging (DWI) acquisition at high resolution and additional echoes are acquired with either lower resolution for 1) high-fidelity field map estimation, 2) phase navigation for shot-to-shot phase correction, 3) motion navigation across diffusion directions, or with high resolution to enable 4) high-fidelity diffusion relaxometry acquisitions. The sequence was evaluated on in vivo data acquired from healthy volunteers on clinical and Connectome 2.0 scanners. Results: In vivo experiments demonstrated that 1) high in-plane acceleration (Rin-plane of 5-fold with 2D partial Fourier) was achieved using the high-fidelity field maps estimated from the second echo, which was acquired at a lower resolution/acceleration to increase its SNR while matching the effective echo spacing of the first readout, 2) high-resolution diffusion relaxometry parameters were estimated from triple-echo PRIME data using a white matter model of multi-TE spherical mean technique (MTE-SMT), and 3) high-fidelity mesoscale DWI at 490 um isotropic resolution was obtained in vivo by capitalizing on the high-performance gradients of the Connectome 2.0 scanner. Conclusion: The proposed PRIME sequence enabled highly accelerated, high-resolution, and distortion-free dMRI using additional echoes without prolonging scan time when gSlider encoding is utilized.
Automatic speech recognition (ASR) models often experience performance degradation due to data domain shifts introduced at test time, a challenge that is further amplified for child speakers. Test-time adaptation (TTA) methods have shown great potential in bridging this domain gap. However, the use of TTA to adapt ASR models to the individual differences in each child's speech has not yet been systematically studied. In this work, we investigate the effectiveness of two widely used TTA methods, SUTA and SGEM, in adapting off-the-shelf ASR models and their fine-tuned versions for child speech recognition, with the goal of enabling continuous, unsupervised adaptation at test time. Our findings show that TTA significantly improves the performance of both off-the-shelf and fine-tuned ASR models, both on average and across individual child speakers, compared to unadapted baselines. However, while TTA helps adapt to individual variability, it may still be limited with non-linguistic child speech.
Artificial intelligence (AI) has demonstrated significant potential in ECG analysis and cardiovascular disease assessment. Recently, foundation models have played a remarkable role in advancing medical AI. The development of an ECG foundation model holds the promise of elevating AI-ECG research to new heights. However, building such a model faces several challenges, including insufficient database sample sizes and inadequate generalization across multiple domains. Additionally, there is a notable performance gap between single-lead and multi-lead ECG analyses. We introduce an ECG Foundation Model (ECGFounder), a general-purpose model that leverages real-world ECG annotations from cardiology experts to broaden the diagnostic capabilities of ECG analysis. ECGFounder was trained on over 10 million ECGs with 150 label categories from the Harvard-Emory ECG Database, enabling comprehensive cardiovascular disease diagnosis through ECG analysis. The model is designed to be an effective out-of-the-box solution as well as fine-tunable for downstream tasks, maximizing usability. Importantly, we extended its application to lower-rank ECGs, and arbitrary single-lead ECGs in particular. ECGFounder supports various downstream tasks in mobile monitoring scenarios. Experimental results demonstrate that ECGFounder achieves expert-level performance on internal validation sets, with AUROC exceeding 0.95 for eighty diagnoses. It also shows strong classification performance and generalization across various diagnoses on external validation sets. When fine-tuned, ECGFounder outperforms baseline models in demographic analysis, clinical event detection, and cross-modality cardiac rhythm diagnosis. The trained model and data will be publicly released upon publication through this http URL. Our code is available at this https URL
I report on the experimental observation of DC instability and self-amplification through stimulated emission of 0.2 and 1.63 THz radiation using an InGaAs/GaAs HEMT operating in the deep saturation regime at room temperature. I demonstrate, both theoretically and experimentally, that the sub-THz and THz response of FETs is attributable to the rectification of the nonlinear dependence of the device's current-voltage characteristics. FETs function as nonlinear THz mixers and rectifiers, with their open-drain responsivity described by an expression analogous to that of a zero-bias Schottky diode detector. However, operating FETs in the deep saturation regime permits precise tuning of the device to the quantum localized resonance condition and the negative resistance mode at room temperature. Consequently, FETs can be adjusted in the deep saturation regime to facilitate tunable sub-THz and THz detection and generation, as well as a tunable sub-THz and THz lasing effect. These results are anticipated to significantly impact technological advancements across various fields in the near future.
The high computational cost and slow inference time are major obstacles to deploying Video Diffusion Models (VDMs). To overcome this, we introduce a new Video Diffusion Model compression approach using individual-content- and motion-dynamics-preserving pruning and a consistency loss. First, we empirically observe that deeper VDM layers are crucial for maintaining the quality of \textbf{motion dynamics} (\textit{e.g.,} coherence of the entire video), while shallower layers are more focused on \textbf{individual content} (\textit{e.g.,} individual frames). Therefore, we prune redundant blocks from the shallower layers while preserving more of the deeper layers, resulting in a lightweight VDM variant called VDMini. Moreover, we propose an \textbf{Individual Content and Motion Dynamics (ICMD)} Consistency Loss so that VDMini attains generation performance comparable to the larger VDM. In particular, we first use the Individual Content Distillation (ICD) Loss to preserve the consistency in the features of each generated frame between the teacher and student models. Next, we introduce a Multi-frame Content Adversarial (MCA) Loss to enhance the motion dynamics across the generated video as a whole. This method significantly accelerates inference time while maintaining high-quality video generation. Extensive experiments demonstrate the effectiveness of our VDMini on two important video generation tasks, Text-to-Video (T2V) and Image-to-Video (I2V), where we respectively achieve an average 2.5 $\times$, 1.4 $\times$, and 1.25 $\times$ speed up for the I2V method SF-V, the T2V method T2V-Turbo-v2, and the T2V method HunyuanVideo, while maintaining the quality of the generated videos on several benchmarks including UCF101, VBench-T2V, and VBench-I2V.
Channel state information (CSI) is a fundamental component in both wireless communication and sensing systems, enabling critical functions such as radio resource optimization and environmental perception. In wireless sensing, data scarcity and packet loss hinder efficient model training, while in wireless communication, high-dimensional CSI matrices and short coherence times caused by high mobility present challenges in CSI prediction. To address these issues, we propose a unified framework named CSI-BERT2 for CSI prediction and classification tasks. Building on CSI-BERT, we introduce a two-stage training method that first uses a mask language model (MLM) to enable the model to learn general feature extraction from scarce datasets in an unsupervised manner, followed by fine-tuning for specific downstream tasks. Specifically, we extend MLM into a mask prediction model (MPM), which efficiently addresses the CSI prediction task. We also introduce an adaptive re-weighting layer (ARL) to enhance subcarrier representation and a multi-layer perceptron (MLP) based temporal embedding module to mitigate permutation invariance issues in time-series CSI data. This significantly improves the CSI classification performance of the original CSI-BERT model. Extensive experiments on both real-world collected and simulated datasets demonstrate that CSI-BERT2 achieves state-of-the-art performance across all tasks. Our results further show that CSI-BERT2 generalizes effectively across varying sampling rates and robustly handles discontinuous CSI sequences caused by packet loss, challenges that conventional methods fail to address.
Video tokenizers are essential for latent video diffusion models, converting raw video data into spatiotemporally compressed latent spaces for efficient training. However, extending state-of-the-art video tokenizers to achieve a temporal compression ratio beyond 4x without increasing channel capacity poses significant challenges. In this work, we propose an alternative approach to enhance temporal compression. We find that the reconstruction quality of temporally subsampled videos from a low-compression encoder surpasses that of high-compression encoders applied to original videos. This indicates that high-compression models can leverage representations from lower-compression models. Building on this insight, we develop a bootstrapped high-temporal-compression model that progressively trains high-compression blocks atop well-trained lower-compression models. Our method includes a cross-level feature-mixing module to retain information from the pretrained low-compression model and guide higher-compression blocks to capture the remaining details from the full video sequence. Evaluation on video benchmarks shows that our method significantly improves reconstruction quality while increasing temporal compression compared to directly training the full model. Furthermore, the resulting compact latent space effectively trains a video diffusion model for high-quality video generation with a significantly reduced token budget.
We introduce distributional dynamic programming (DP) methods for optimizing statistical functionals of the return distribution, with standard reinforcement learning as a special case. Previous distributional DP methods could optimize the same class of expected utilities as classic DP. To go beyond, we combine distributional DP with stock augmentation, a technique previously introduced for classic DP in the context of risk-sensitive RL, where the MDP state is augmented with a statistic of the rewards obtained since the first time step. We find that a number of recently studied problems can be formulated as stock-augmented return distribution optimization, and we show that we can use distributional DP to solve them. We analyze distributional value and policy iteration, with bounds and a study of what objectives these distributional DP methods can or cannot optimize. We describe a number of applications outlining how to use distributional DP to solve different stock-augmented return distribution optimization problems, for example maximizing conditional value-at-risk, and homeostatic regulation. To highlight the practical potential of stock-augmented return distribution optimization and distributional DP, we introduce an agent that combines DQN and the core ideas of distributional DP, and empirically evaluate it for solving instances of the applications discussed.
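The basic backup that distributional value iteration repeats can be made concrete with a categorical (C51-style) projection. The sketch below is a minimal single-state illustration with an assumed support and reward; it omits the paper's stock augmentation, which would add a reward-statistic coordinate to the state:

```python
import numpy as np

# Hedged sketch: one categorical distributional Bellman backup with
# projection onto a fixed support, iterated to a fixed point. Support,
# reward, and discount are illustrative assumptions.

z = np.linspace(-10, 10, 51)                  # fixed support atoms
dz = z[1] - z[0]
gamma = 0.9

def project(probs_next, reward):
    # Shift/shrink atoms through the Bellman equation, then project back
    tz = np.clip(reward + gamma * z, z[0], z[-1])
    new_probs = np.zeros_like(z)
    b = (tz - z[0]) / dz                      # fractional atom positions
    lo, hi = np.floor(b).astype(int), np.ceil(b).astype(int)
    for j in range(len(z)):                   # distribute mass to neighbors
        new_probs[lo[j]] += probs_next[j] * (hi[j] - b[j] + (lo[j] == hi[j]))
        if hi[j] != lo[j]:
            new_probs[hi[j]] += probs_next[j] * (b[j] - lo[j])
    return new_probs

probs = np.ones_like(z) / len(z)              # uniform initial return belief
for _ in range(100):                          # repeated backup with reward 1.0
    probs = project(probs, reward=1.0)
print("mean return:", probs @ z, "(geometric series 1/(1-0.9) = 10, clipped)")
```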
The IMT-2030 framework provides the vision and conceptual foundation for the next-generation of mobile broadband systems, colloquially known as Sixth-Generation (6G) cellular networks. Academic circles, industry players, and Standard Developing Organizations (SDOs) are already engaged in early standardization discussions for the system, providing key insights for future technical specifications. In this context, a structured thematic synthesis aligned with IMT-2030 is essential to inform the discussions and assist collaboration among 6G stakeholders -- including scholars, professionals, regulators, and SDO officials. This review intends to offer a concise yet informative synthesis of well-established 6G literature, viewed through the IMT-2030 lens, for both specialists and generalists engaged in shaping future standards and advancing 6G research.
The rampdown phase of a tokamak pulse is difficult to simulate and often exacerbates multiple plasma instabilities. To reduce the risk of disrupting operations, we leverage advances in Scientific Machine Learning (SciML) to combine physics with data-driven models, developing a neural state-space model (NSSM) that predicts plasma dynamics during Tokamak à Configuration Variable (TCV) rampdowns. The NSSM efficiently learns dynamics from a modest dataset of 311 pulses with only five pulses in a reactor-relevant high-performance regime. The NSSM is parallelized across uncertainties, and reinforcement learning (RL) is applied to design trajectories that avoid instability limits. High-performance experiments at TCV show statistically significant improvements in relevant metrics. A predict-first experiment, increasing plasma current by 20% from baseline, demonstrates the NSSM's ability to make small extrapolations. The developed approach paves the way for designing tokamak controls with robustness to considerable uncertainty and demonstrates the relevance of SciML for fusion experiments.
Modular robotics enables the development of versatile and adaptive robotic systems with autonomous reconfiguration. This paper presents a modular robotic system in which each module has independent actuation, battery power, and control, allowing both individual mobility and coordinated locomotion. A hierarchical Central Pattern Generator (CPG) framework governs motion, with a low-level CPG controlling individual modules and a high-level CPG synchronizing inter-module coordination, enabling smooth transitions between independent and collective behaviors. To validate the system, we conduct simulations in MuJoCo and hardware experiments, evaluating locomotion across different configurations. We first analyze single-module motion, followed by two-module cooperative locomotion. Results demonstrate the effectiveness of the CPG-based control framework in achieving robust, flexible, and scalable locomotion. The proposed modular architecture has potential applications in search and rescue, environmental monitoring, and autonomous exploration, where adaptability and reconfigurability are essential.
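The two-tier structure can be sketched with coupled phase oscillators: each module integrates its own low-level oscillator, while a high-level coupling term locks the modules to a desired phase offset. Gains, frequency, and the sine joint mapping below are illustrative assumptions, not the paper's tuned controller:

```python
import numpy as np

# Hedged sketch: a two-module hierarchical CPG as Kuramoto-style coupled
# phase oscillators converging to an anti-phase gait.

dt, T = 0.01, 10.0
omega = 2 * np.pi * 1.0                        # 1 Hz intrinsic frequency
k_high = 2.0                                   # inter-module coupling gain
phase_offset = np.pi                           # desired offset: anti-phase gait

phi = np.array([0.0, 2.5])                     # module phases (rad), arbitrary init
log = []
for _ in range(int(T / dt)):
    # High-level CPG: phase-difference coupling between the two modules
    dphi0 = omega + k_high * np.sin(phi[1] - phi[0] - phase_offset)
    dphi1 = omega + k_high * np.sin(phi[0] - phi[1] + phase_offset)
    phi += dt * np.array([dphi0, dphi1])
    # Low-level CPG: map each module's phase to a joint angle command
    log.append(np.sin(phi))

log = np.array(log)
err = np.abs(((phi[1] - phi[0] - phase_offset + np.pi) % (2 * np.pi)) - np.pi)
print("joint command range:", log.min(), log.max())
print("final phase-offset error (rad):", err)
```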
We investigate a novel finite-horizon linear-quadratic (LQ) feedback dynamic potential game with a priori unknown cost matrices played between two players. The cost matrices are revealed to the players sequentially, with the potential for future values to be previewed over a short time window. We propose an algorithm that enables the players to predict and track a feedback Nash equilibrium trajectory, and we measure the quality of their resulting decisions by introducing the concept of \emph{price of uncertainty}. We show that under the proposed algorithm, the price of uncertainty is bounded by horizon-invariant constants. The constants are the sum of three terms; the first and second terms decay exponentially as the preview window grows, and another depends on the magnitude of the differences between the cost matrices for each player. Through simulations, we illustrate that the resulting price of uncertainty initially decays at an exponential rate as the preview window lengthens, then remains constant for large time horizons.
Vision Transformers (ViTs) have achieved state-of-the-art performance across various computer vision tasks, but their high computational cost remains a challenge. Token pruning has been proposed to reduce this cost by selectively removing less important tokens. While effective in vision tasks by discarding non-object regions, applying this technique to audio tasks presents unique challenges, as distinguishing relevant from irrelevant regions in time-frequency representations is less straightforward. In this study, for the first time, we applied token pruning to ViT-based audio classification models using Mel-spectrograms and analyzed the trade-offs between model performance and computational cost: TopK token pruning can reduce MAC operations of AudioMAE and AST by 30-40%, with less than a 1% drop in accuracy. Our analysis reveals that while high-intensity or high-variation tokens contribute significantly to model accuracy, low-intensity or low-variation tokens also remain important when token pruning is applied; pruning solely based on the intensity or variation of signals in a patch leads to a noticeable drop in accuracy. We support our claim by measuring high correlation between attention scores and these statistical features and by showing that retained tokens consistently receive distinct attention compared to pruned ones. We also show that AudioMAE retains more low-intensity tokens than AST. This can be explained by AudioMAE's self-supervised reconstruction objective, which encourages attention to all patches, whereas AST's supervised training focuses on label-relevant tokens.
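The TopK scheme itself is compact enough to sketch: score patch tokens by the attention a [CLS] query pays them, keep the K highest, and drop the rest before the next block. The toy tensors and single-query scoring below are illustrative assumptions, not the AudioMAE/AST internals:

```python
import numpy as np

# Hedged sketch: TopK token pruning driven by [CLS] attention scores.

rng = np.random.default_rng(8)
N, d, keep = 256, 64, 160                      # patch tokens, dim, tokens kept
tokens = rng.normal(size=(N, d))
cls_q = rng.normal(size=d)                     # [CLS] query vector
Wk = rng.normal(size=(d, d)) / np.sqrt(d)      # toy key projection

scores = tokens @ Wk @ cls_q / np.sqrt(d)      # attention logits from [CLS]
attn = np.exp(scores - scores.max())
attn /= attn.sum()

kept_idx = np.argsort(attn)[-keep:]            # TopK by attention mass
pruned_tokens = tokens[np.sort(kept_idx)]      # keep original token order

print("kept %d/%d tokens; attention mass retained: %.3f"
      % (keep, N, attn[kept_idx].sum()))
```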
Sampling-based model predictive control (MPC) offers strong performance in nonlinear and contact-rich robotic tasks, yet often suffers from poor exploration due to locally greedy sampling schemes. We propose \emph{Model Tensor Planning} (MTP), a novel sampling-based MPC framework that introduces high-entropy control trajectory generation through structured tensor sampling. By sampling over randomized multipartite graphs and interpolating control trajectories with B-splines and Akima splines, MTP ensures smooth and globally diverse control candidates. We further propose a simple $\beta$-mixing strategy that blends local exploitative and global exploratory samples within the modified Cross-Entropy Method (CEM) update, balancing control refinement and exploration. Theoretically, we show that MTP achieves asymptotic path coverage and maximum entropy in the control trajectory space in the limit of infinite tensor depth and width. Our implementation is fully vectorized using JAX and compatible with MuJoCo XLA, supporting \emph{Just-in-time} (JIT) compilation and batched rollouts for real-time control with online domain randomization. Through experiments on various challenging robotic tasks, ranging from dexterous in-hand manipulation to humanoid locomotion, we demonstrate that MTP outperforms standard MPC and evolutionary strategy baselines in task success and control robustness. Design and sensitivity ablations confirm the effectiveness of MTP's tensor sampling structure, spline interpolation choices, and mixing strategy. Altogether, MTP offers a scalable framework for robust exploration in model-based planning and control.
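The sketch below isolates only the $\beta$-mixing step: a fraction $\beta$ of candidates comes from a global high-entropy proposal and the rest from the local CEM distribution. MTP's tensor sampler and spline interpolation are not reproduced here.

```python
import numpy as np

# Beta-mixing sketch: blend local exploitative samples around the CEM mean
# with global exploratory samples. The uniform global proposal stands in
# for MTP's tensor-structured sampler (an illustrative simplification).
def beta_mixed_samples(mu, sigma, n, beta, rng, lo=-1.0, hi=1.0):
    """mu, sigma: (H, d) CEM statistics; beta in [0, 1] = exploratory share."""
    n_glob = int(beta * n)
    local = mu + sigma * rng.standard_normal((n - n_glob, *mu.shape))
    glob = rng.uniform(lo, hi, size=(n_glob, *mu.shape))
    return np.concatenate([local, glob], axis=0)

rng = np.random.default_rng(0)
mu, sigma = np.zeros((20, 7)), 0.1 * np.ones((20, 7))  # horizon 20, 7 DoF
candidates = beta_mixed_samples(mu, sigma, n=256, beta=0.2, rng=rng)
```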
Non-intrusive load monitoring (NILM) aims to disaggregate total electricity consumption into individual appliance usage, thus enabling more effective energy management. While deep learning has advanced NILM, it remains limited by its dependence on labeled data, restricted generalization, and lack of explainability. This paper introduces the first prompt-based NILM framework that leverages large language models (LLMs) with in-context learning. We design and evaluate prompt strategies that integrate appliance features, contextual information, and representative time-series examples through extensive case studies. Extensive experiments on the REDD and UK-DALE datasets show that LLMs guided solely by prompts deliver only basic NILM capabilities, with performance that lags behind traditional deep-learning models in complex scenarios. However, the experiments also demonstrate strong generalization across different houses and even regions by simply adapting the injected appliance features. The framework also provides clear, human-readable explanations for the inferred appliance states. Our findings define the capability boundaries of prompt-only LLMs for NILM tasks. Their strengths in generalization and explainability present a promising new direction for the field.
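A hypothetical prompt-construction helper of the kind such a framework might use; every field name and phrasing below is invented for illustration and is not the paper's prompt design.

```python
# Hypothetical NILM prompt builder (all wording and fields are invented
# for illustration; the paper's actual prompt strategies differ).
def build_nilm_prompt(appliance, features, example_window, query_window):
    return (
        "You are a non-intrusive load monitoring assistant.\n"
        f"Appliance: {appliance}. Typical power draw: {features['power_w']} W; "
        f"typical run time: {features['runtime_min']} min.\n"
        f"Example aggregate readings while ON: {example_window}\n"
        f"Aggregate readings to analyze (W, 1-min steps): {query_window}\n"
        "Is the appliance ON in this window? Answer ON or OFF and explain."
    )

prompt = build_nilm_prompt(
    "kettle",
    {"power_w": 2000, "runtime_min": 3},
    [80, 95, 2105, 2110, 90],
    [85, 90, 2090, 2120, 2100, 95],
)
```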
Diffusion Models (DMs) have achieved remarkable success in realistic voice cloning (VC), but they also increase the risk of malicious misuse. Existing proactive defenses designed for traditional VC models aim to disrupt the forgery process, yet they have proven incompatible with DMs due to the intricate generative mechanisms of diffusion. To bridge this gap, we introduce VoiceCloak, a multi-dimensional proactive defense framework with the goal of obfuscating speaker identity and degrading perceptual quality in potential unauthorized VC. To achieve these goals, we conduct a focused analysis to identify specific vulnerabilities within DMs, allowing VoiceCloak to disrupt the cloning process by introducing adversarial perturbations into the reference audio. Specifically, to obfuscate speaker identity, VoiceCloak distorts representation-learning embeddings to maximize identity variation, guided by auditory perception principles. Additionally, VoiceCloak disrupts crucial conditional guidance processes, particularly attention context, thereby preventing the alignment of vocal characteristics that are essential for convincing cloning. Then, to address the second objective, VoiceCloak introduces score magnitude amplification to actively steer the reverse trajectory away from the generation of high-quality speech. Noise-guided semantic corruption is further employed to disrupt structural speech semantics captured by DMs, degrading output quality. Extensive experiments highlight VoiceCloak's outstanding defense success rate against unauthorized diffusion-based voice cloning. Audio samples of VoiceCloak are available at this https URL.
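A generic projected-gradient perturbation loop of the kind such a defense builds on; `identity_loss` is a placeholder for VoiceCloak's identity-obfuscation and quality-degradation objectives, which are not reproduced here.

```python
import torch

# Generic PGD-style perturbation of a reference waveform. `identity_loss`
# is a placeholder for the defense objectives (identity variation, score
# magnitude amplification, etc.); this is a sketch, not VoiceCloak itself.
def perturb(waveform, identity_loss, eps=2e-3, alpha=2e-4, steps=50):
    delta = torch.zeros_like(waveform, requires_grad=True)
    for _ in range(steps):
        loss = identity_loss(waveform + delta)  # objective to maximize
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # gradient ascent step
            delta.clamp_(-eps, eps)             # keep perturbation subtle
            delta.grad.zero_()
    return (waveform + delta).detach()
```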
Recent advances in neural audio codec-based speech generation (CoSG) models have produced remarkably realistic audio deepfakes. We refer to deepfake speech generated by CoSG systems as codec-based deepfake, or CodecFake. Although existing anti-spoofing research on CodecFake predominantly focuses on verifying the authenticity of audio samples, almost no attention has been given to tracing the CoSG systems used in generating these deepfakes. In CodecFake generation, processes such as speech-to-unit encoding, discrete unit modeling, and unit-to-speech decoding are fundamentally based on neural audio codecs. Motivated by this, we introduce source tracing for CodecFake via a neural audio codec taxonomy, which dissects neural audio codecs to trace CoSG. Our experimental results on the CodecFake+ dataset provide promising initial evidence for the feasibility of CodecFake source tracing while also highlighting several challenges that warrant further investigation.
1) Biological collections house millions of specimens with digital images increasingly available through open-access platforms. However, most imaging protocols were developed for human interpretation without considering automated analysis requirements. As computer vision applications revolutionize taxonomic identification and trait extraction, a critical gap exists between current digitization practices and computational analysis needs. This review provides the first comprehensive practical framework for optimizing biological specimen imaging for computer vision applications. 2) Through interdisciplinary collaboration between taxonomists, collection managers, ecologists, and computer scientists, we synthesized evidence-based recommendations addressing fundamental computer vision concepts and practical imaging considerations. We provide immediately actionable implementation guidance while identifying critical areas requiring community standards development. 3) Our framework encompasses ten interconnected considerations for optimizing image capture for computer vision-powered taxonomic identification and trait extraction. We translate these into practical implementation checklists, equipment selection guidelines, and a roadmap for community standards development including filename conventions, pixel density requirements, and cross-institutional protocols. 4) By bridging biological and computational disciplines, this approach unlocks automated analysis potential for millions of existing specimens and guides future digitization efforts toward unprecedented analytical capabilities.
This paper proposes a novel theoretical framework for guaranteeing and evaluating the resilience of long short-term memory (LSTM) networks in control systems. We introduce "recovery time" as a new metric of resilience to quantify the time required for an LSTM to return to its normal state after anomalous inputs. By mathematically refining incremental input-to-state stability ($\delta$ISS) theory for LSTMs, we derive a practical, data-independent upper bound on recovery time. This upper bound enables resilience-aware training. Experimental validation on simple models demonstrates the effectiveness of our resilience estimation and control methods, establishing a foundation for rigorous quality assurance in safety-critical AI applications.
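Schematically, a $\delta$ISS-style bound and the induced recovery time take the following generic form; the paper's refined, LSTM-specific constants are not reproduced here.

```latex
% Generic \delta ISS bound (schematic; not the paper's refined constants):
% trajectories from different initial states and inputs converge, and the
% recovery time is when the transient term falls below a tolerance.
\|\xi_1(t) - \xi_2(t)\| \le
  \beta\big(\|\xi_1(0) - \xi_2(0)\|,\, t\big)
  + \gamma\Big(\sup_{0 \le s \le t} \|u_1(s) - u_2(s)\|\Big),
\qquad
T_{\mathrm{rec}}(\varepsilon) := \inf\{\, t \ge 0 : \beta(\Delta_0, t) \le \varepsilon \,\}.
```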
The rapid advancement of generative AI has enabled the creation of highly photorealistic visual content, offering practical substitutes for real images and videos in scenarios where acquiring real data is difficult or expensive. However, reliably substituting real visual content with AI-generated counterparts requires robust assessment of the perceived realness of AI-generated visual content, a challenging task due to its inherently subjective nature. To address this, we conducted a comprehensive human study evaluating the perceptual realness of both real and AI-generated images, resulting in a new dataset of images paired with subjective realness scores, which we introduce as RAISE. Further, we develop and train multiple models on RAISE to establish baselines for realness prediction. Our experimental results demonstrate that features derived from deep foundation vision models can effectively capture subjective realness. RAISE thus provides a valuable resource for developing robust, objective models of perceptual realness assessment.
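As a baseline sketch, realness prediction from frozen foundation-model features can be as simple as a linear probe; the dimensions and random data below are placeholders, not the RAISE pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Linear probe sketch: ridge regression from frozen vision-model embeddings
# to subjective realness scores. Data and dimensions are placeholders.
X_train = np.random.randn(800, 1024)   # e.g., foundation-model embeddings
y_train = np.random.rand(800)          # mean-opinion realness scores
probe = Ridge(alpha=1.0).fit(X_train, y_train)
pred = probe.predict(np.random.randn(10, 1024))
```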
Federated Learning (FL) offers a privacy-preserving framework for training audio classification (AC) models across decentralized clients without sharing raw data. However, Federated Audio Classification (FedAC) faces three major challenges: data heterogeneity, model heterogeneity, and data poisoning, which degrade performance in real-world settings. While existing methods often address these issues separately, a unified and robust solution remains underexplored. We propose FedMLAC, a mutual learning-based FL framework that tackles all three challenges simultaneously. Each client maintains a personalized local AC model and a lightweight, globally shared Plug-in model. These models interact via bidirectional knowledge distillation, enabling global knowledge sharing while adapting to local data distributions, thus addressing both data and model heterogeneity. To counter data poisoning, we introduce a Layer-wise Pruning Aggregation (LPA) strategy that filters anomalous Plug-in updates based on parameter deviations during aggregation. Extensive experiments on four diverse audio classification benchmarks, including both speech and non-speech tasks, show that FedMLAC consistently outperforms state-of-the-art baselines in classification accuracy and robustness to noisy data.
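A minimal sketch of the layer-wise filtering idea behind LPA, assuming outlier clients are detected per layer by the deviation of their Plug-in updates from the median; the z-score threshold is an illustrative choice, not the paper's exact rule.

```python
import numpy as np

# Layer-wise pruning sketch (illustrative rule, not the exact LPA criterion):
# per layer, drop client updates whose deviation from the median update is
# anomalously large, then average the survivors.
def lpa_aggregate(client_layer_updates, z_thresh=2.0):
    """client_layer_updates: list of (D,) flattened updates for one layer."""
    U = np.stack(client_layer_updates)                   # (n_clients, D)
    dev = np.linalg.norm(U - np.median(U, axis=0), axis=1)
    z = (dev - dev.mean()) / (dev.std() + 1e-8)
    keep = z < z_thresh                                  # prune outliers
    return U[keep].mean(axis=0)

updates = [np.random.randn(256) for _ in range(9)]
updates.append(np.random.randn(256) * 50.0)              # poisoned client
aggregated = lpa_aggregate(updates)
```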
The challenge in WiFi-based cross-domain behavior recognition lies in the significant interference of domain-specific signals on gesture variation. Previous methods alleviate this interference by mapping the phase from multiple domains into a common feature space. Instead, we use the Doppler Frequency Shift (DFS) signal to dynamically supplement the phase features for better generalization, enabling the model not only to explore a wider feature space but also to avoid potential degradation of gesture semantic information. Specifically, we propose Salient-aware Adaptive WiFi Sensing for Cross-domain Behavior Recognition (Wi-CBR), which constructs a dual-branch self-attention module that captures temporal features from phase information reflecting dynamic path-length variations, while extracting spatial features from DFS correlated with motion velocity. Moreover, we design a Saliency Guidance Module that employs group attention mechanisms to mine critical activity features, and utilizes gating mechanisms to optimize information entropy, facilitating feature fusion and enabling effective interaction between salient and non-salient behavior characteristics. Extensive experiments on two large-scale public datasets (Widar3.0 and XRF55) demonstrate the superior performance of our method in both in-domain and cross-domain scenarios.
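A gated-fusion sketch of the kind the Saliency Guidance Module might use to mix the phase and DFS branches; the dimensions and gating form are assumptions for illustration.

```python
import torch
import torch.nn as nn

# Gated fusion sketch: a learned sigmoid gate mixes phase-branch and
# DFS-branch features (illustrative form, not Wi-CBR's exact module).
class GatedFusion(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, phase_feat, dfs_feat):
        g = torch.sigmoid(self.gate(torch.cat([phase_feat, dfs_feat], dim=-1)))
        return g * phase_feat + (1.0 - g) * dfs_feat

fusion = GatedFusion(dim=128)
fused = fusion(torch.randn(8, 128), torch.randn(8, 128))
```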
An abstract sound is defined as a sound that does not disclose identifiable real-world sound events to a listener. Sound fusion aims to combine an original sound and a reference sound to generate a novel sound that exhibits auditory features beyond the mere additive superposition of its constituents. To achieve this fusion, we employ inversion techniques that preserve essential features of the original sample while enabling controllable synthesis. We propose novel SDE and ODE inversion models based on DPMSolver++ samplers that reverse the sampling process by configuring model outputs as constants, eliminating the circular dependencies incurred by noise prediction terms. Our inversion approach requires no prompt conditioning while maintaining flexible guidance during sampling.
Conventional voice conversion modifies voice characteristics from a source speaker to a target speaker, relying on audio input from both sides. However, this process becomes infeasible when clean audio is unavailable, such as in silent videos or noisy environments. In this work, we focus on the task of Silent Face-based Voice Conversion (SFVC), which performs voice conversion entirely from visual inputs; i.e., given images of a target speaker and a silent video of a source speaker containing lip motion, SFVC generates speech aligned with the identity of the target speaker while preserving the speech content of the source silent video. As this task requires generating intelligible speech and converting identity using only visual cues, it is particularly challenging. To address this, we introduce MuteSwap, a novel framework that employs contrastive learning to align cross-modality identities and mutual information minimization to separate shared visual features. Experimental results show that MuteSwap achieves impressive performance in both speech synthesis and identity conversion, especially under noisy conditions where methods dependent on audio input fail to produce intelligible results, demonstrating both the effectiveness of our training approach and the feasibility of SFVC.
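For the contrastive alignment step, a standard InfoNCE formulation over paired face-derived and speech-derived identity embeddings looks as follows; this is a generic instance, not necessarily MuteSwap's exact loss.

```python
import torch
import torch.nn.functional as F

# Generic InfoNCE over paired cross-modal identity embeddings: row i of
# each batch is the same speaker (a standard formulation, used here only
# to illustrate the contrastive alignment idea).
def infonce(face_emb, speech_emb, temperature=0.07):
    f = F.normalize(face_emb, dim=-1)
    s = F.normalize(speech_emb, dim=-1)
    logits = f @ s.T / temperature           # (B, B) similarities
    targets = torch.arange(f.shape[0])       # positives on the diagonal
    return F.cross_entropy(logits, targets)

loss = infonce(torch.randn(16, 256), torch.randn(16, 256))
```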
The integration of artificial intelligence into hearing assistance marks a paradigm shift from traditional amplification-based systems to intelligent, context-aware audio processing. This systematic literature review evaluates advances in AI-driven selective noise cancellation (SNC) for hearing aids, highlighting technological evolution, implementation challenges, and future research directions. We synthesize findings across deep learning architectures, hardware deployment strategies, clinical validation studies, and user-centric design. The review traces progress from early machine learning models to state-of-the-art deep networks, including Convolutional Recurrent Networks for real-time inference and Transformer-based architectures for high-accuracy separation. Key findings include significant gains over traditional methods, with recent models achieving up to 18.3 dB SI-SDR improvement on noisy-reverberant benchmarks, alongside sub-10 ms real-time implementations and promising clinical outcomes. Yet, challenges remain in bridging lab-grade models with real-world deployment, particularly around power constraints, environmental variability, and personalization. Identified research gaps include hardware-software co-design, standardized evaluation protocols, and regulatory considerations for AI-enhanced hearing devices. Future work must prioritize lightweight models, continual learning, context-aware classification, and clinical translation to realize transformative hearing solutions for millions globally.
Recent advances in Automatic Speech Recognition (ASR) have demonstrated remarkable accuracy and robustness in diverse audio applications, such as live transcription and voice command processing. However, deploying these models on resource-constrained edge devices (e.g., IoT devices, wearables) still presents substantial challenges due to strict limits on memory, compute, and power. Quantization, particularly Post-Training Quantization (PTQ), offers an effective way to reduce model size and inference cost without retraining. Despite its importance, the performance implications of various advanced quantization methods and bit-width configurations on ASR models remain unclear. In this work, we present a comprehensive benchmark of eight state-of-the-art (SOTA) PTQ methods applied to two leading edge-ASR model families, Whisper and Moonshine. We systematically evaluate model performance (i.e., accuracy, memory I/O, and bit operations) across seven diverse datasets from the open ASR leaderboard, analyzing the impact of quantization and various configurations on both weights and activations. Built on an extension of the LLM compression toolkit, our framework integrates edge-ASR models, diverse advanced quantization algorithms, a unified calibration and evaluation data pipeline, and detailed analysis tools. Our results characterize the trade-offs between efficiency and accuracy, demonstrating that even $3$-bit quantization can succeed on high-capacity models when using advanced PTQ techniques. These findings provide valuable insights for optimizing ASR models on low-power, always-on edge devices.
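For reference, the simplest symmetric uniform quantizer, the baseline that the benchmarked PTQ methods improve upon with per-channel scales, rotations, or error compensation:

```python
import numpy as np

# Minimal symmetric per-tensor "fake quantization": quantize weights to
# n_bits and dequantize, so the error can be measured. Advanced PTQ
# methods refine the scale choice and compensate the rounding error.
def quantize_dequantize(w, n_bits=3):
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale

w = np.random.randn(1024, 1024).astype(np.float32)
w_q = quantize_dequantize(w, n_bits=3)
mse = float(np.mean((w - w_q) ** 2))  # quantization error proxy
```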
False Data Injection Attacks (FDIAs) pose severe security risks to smart grids by manipulating measurement data collected from spatially distributed devices such as SCADA systems and PMUs. These measurements typically exhibit Non-Independent and Identically Distributed (Non-IID) characteristics across different regions, which significantly challenges the generalization ability of detection models. Traditional centralized training approaches not only face privacy risks and data sharing constraints but also incur high transmission costs, limiting their scalability and deployment feasibility. To address these issues, this paper proposes a privacy-preserving federated learning framework, termed Federated Cluster Average (FedClusAvg), designed to improve FDIA detection in Non-IID and resource-constrained environments. FedClusAvg incorporates cluster-based stratified sampling and hierarchical communication (client-subserver-server) to enhance model generalization and reduce communication overhead. By enabling localized training and weighted parameter aggregation, the algorithm achieves accurate model convergence without centralizing sensitive data. Experimental results on benchmark smart grid datasets demonstrate that FedClusAvg not only improves detection accuracy under heterogeneous data distributions but also significantly reduces communication rounds and bandwidth consumption. This work provides an effective solution for secure and efficient FDIA detection in large-scale distributed power systems.
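A sketch of hierarchical, cluster-weighted aggregation over the client-subserver-server topology; the size-proportional weighting is an illustrative assumption, not the exact FedClusAvg rule.

```python
import numpy as np

# Hierarchical aggregation sketch: subservers average client updates within
# each cluster; the server combines cluster models weighted by sample
# counts (illustrative weighting, not the exact FedClusAvg rule).
def fed_clus_avg(cluster_params, cluster_sizes):
    """cluster_params: list (per cluster) of lists of (D,) client vectors."""
    sub_models = np.stack([np.mean(np.stack(c), axis=0) for c in cluster_params])
    w = np.asarray(cluster_sizes, dtype=float)
    return (w / w.sum()) @ sub_models

clusters = [[np.random.randn(64) for _ in range(5)],   # region A clients
            [np.random.randn(64) for _ in range(3)]]   # region B clients
global_model = fed_clus_avg(clusters, cluster_sizes=[5000, 3000])
```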
Autonomous parking (AP) represents a critical yet complex subset of intelligent vehicle automation, characterized by tight spatial constraints, frequent close-range obstacle interactions, and stringent safety margins. However, conventional rule-based and model-predictive methods often lack the adaptability and generalization needed to handle the nonlinear and environment-dependent complexities of AP. To address these limitations, we propose a reward-augmented learning framework for AP (RARLAP) that mitigates the inherent complexities of continuous-domain control by leveraging structured reward design to induce smooth and adaptable policy behavior, trained entirely within a high-fidelity Unity-based custom 3D simulation environment. We systematically design and assess three structured reward strategies: goal-only reward (GOR), dense proximity reward (DPR), and milestone-augmented reward (MAR), each integrated with both on-policy and off-policy optimization paradigms. Empirical evaluations demonstrate that the on-policy MAR achieves a 91\% success rate, yielding smoother trajectories and more robust behavior, while GOR and DPR fail to guide effective learning. Convergence and trajectory analyses demonstrate that the proposed framework enhances policy adaptability, accelerates training, and improves safety in continuous control. Overall, RARLAP establishes that reward augmentation effectively addresses complex autonomous parking challenges, enabling scalable and efficient policy optimization with both on- and off-policy methods. To support reproducibility, the code accompanying this paper is publicly available.
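A hypothetical milestone-augmented reward of the kind MAR describes; the numeric bonuses and milestone distances are invented for illustration.

```python
# Hypothetical milestone-augmented reward (MAR-style): sparse terminal
# rewards plus one-time bonuses when the vehicle crosses distance
# milestones. All numbers are illustrative, not the paper's values.
def mar_reward(dist_to_goal, milestones_left, collided, reached):
    if collided:
        return -100.0
    if reached:
        return 100.0
    reward = -0.1                              # small per-step time penalty
    for m in list(milestones_left):
        if dist_to_goal <= m:                  # milestone crossed
            milestones_left.remove(m)
            reward += 10.0
    return reward

milestones = {4.0, 2.0, 1.0}                   # meters from the parking slot
r = mar_reward(1.8, milestones, collided=False, reached=False)  # +19.9
```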
In the user-centric cell-free massive MIMO (UC-mMIMO) network scheme, user mobility necessitates updating the set of serving access points to maintain the user-centric clustering. Such updates are typically performed through handoff (HO) operations; however, frequent HOs lead to overheads associated with the allocation and release of resources. This paper presents a deep reinforcement learning (DRL)-based solution to predict and manage these connections for mobile users. Our solution employs the Soft Actor-Critic algorithm, with a continuous action space representation, to train a deep neural network to serve as the HO policy. We present a novel proposition for a reward function that integrates a HO penalty in order to balance the attainable rate and the overhead associated with HOs. We develop two variants of our system: the first uses mobility direction-assisted (DA) observations based on the user movement pattern, while the second uses history-assisted (HA) observations based on the history of the large-scale fading (LSF). Simulation results show that our DRL-based continuous action space approach is more scalable than its discrete-space counterpart, and that our derived HO policy automatically learns to gather HOs in specific time slots to minimize the overhead of initiating HOs. Our solution can also operate in real time with a response time of less than 0.4 ms.
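Schematically, such a reward trades the attainable rate against handoff overhead; the linear form and penalty weight below are illustrative assumptions, not the paper's exact proposition.

```python
# Illustrative rate-vs-handoff reward (linear form and weight assumed,
# not the paper's exact proposition).
def ho_reward(rate_bps_hz, n_handoffs, lam=0.5):
    return rate_bps_hz - lam * n_handoffs  # penalize each HO initiation

r = ho_reward(rate_bps_hz=6.2, n_handoffs=3)  # 6.2 - 1.5 = 4.7
```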
Cell-Free (CF) Massive Multiple-Input Multiple-Output (MaMIMO) is considered one of the leading candidates for enabling next-generation wireless communication. With the growing interest in the Internet of Things (IoT), the Grant-Free (GF) access scheme has emerged as a promising solution to support massive device connectivity. The integration of GF and CF-MaMIMO introduces significant challenges, particularly in designing distributed algorithms for activity detection and pilot contamination mitigation. In this paper, we propose a distributed algorithm that addresses these challenges. Our method first employs a component-wise iterative distributed Maximum Likelihood (ML) approach for activity detection, which considers both the pilot and data portions of the received signal. This is followed by a Pseudo-Prior Hybrid Variational Bayes and Expectation Propagation (PP-VB-EP) algorithm for joint data detection and channel estimation. Compared to conventional VB-EP, the proposed PP-VB-EP demonstrates improved convergence behavior and reduced sensitivity to initialization, especially when data symbols are drawn from a finite alphabet. The pseudo prior used in PP-VB-EP acts as an approximated posterior and serves as a regularization term that prevents the Message Passing (MP) algorithm from diverging. To compute the pseudo prior in a distributed fashion, we further develop a distributed version of the Variable-Level Expectation Propagation (VL-EP) algorithm.
Sequence modeling is crucial for AI to understand temporal data and detect complex time-dependent patterns. While recurrent neural networks (RNNs), convolutional neural networks (CNNs), and Transformers have advanced in capturing long-range dependencies, they struggle to achieve high accuracy on very long sequences due to limited memory retention (fixed context window). State-Space Models (SSMs) leverage exponentially decaying memory, enabling a long context window, and thus process very long data sequences more efficiently than recurrent and Transformer-based models. Unlike traditional neural models such as CNNs and RNNs, SSM-based models require solving differential equations through continuous integration, making training and inference both compute- and memory-intensive on conventional CPUs and GPUs. In this paper, we introduce a specialized hardware accelerator, EpochCore, for accelerating SSMs. EpochCore is based on systolic arrays (SAs) and is designed to enhance the energy efficiency and throughput of inference of SSM-based models for long-range sequence tasks. Within the SA, we propose a versatile processing element (PE), called LIMA-PE, that performs traditional and specialized MAC operations to support both traditional DNNs and SSMs. To complement the EpochCore microarchitecture, we propose a novel dataflow, ProDF, which enables highly efficient execution of SSM-based models. By leveraging the LIMA-PE microarchitecture and ProDF, EpochCore achieves on average 250x gains in performance and 45x improvement in energy efficiency, at the expense of a 2x increase in area cost, over traditional SA-based accelerators, and a ~2,000x improvement in latency per inference on LRA datasets compared to GPU kernel operations.
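For context, the canonical linear SSM and its zero-order-hold discretization; the resulting discrete recurrence is what an accelerator like EpochCore evaluates, rather than a generic ODE solve at inference time.

```latex
% Canonical linear SSM and its zero-order-hold discretization (standard
% formulation; step size \Delta, with A assumed invertible).
\dot{x}(t) = A\,x(t) + B\,u(t), \qquad y(t) = C\,x(t)
\;\;\Longrightarrow\;\;
x_k = \bar{A}\,x_{k-1} + \bar{B}\,u_k, \qquad
\bar{A} = e^{\Delta A}, \quad
\bar{B} = (\Delta A)^{-1}\big(e^{\Delta A} - I\big)\,\Delta B .
```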
This paper analyzes the stochastic security performance of a multiple-input multiple-output (MIMO) integrated sensing and communication (ISAC) system in a downlink scenario. A base station (BS) transmits a multi-functional signal to simultaneously communicate with a user, sense a target's angular location, and counteract eavesdropping threats. The attack model considers a passive single-antenna communication eavesdropper intercepting communication data, as well as a multi-antenna sensing eavesdropper attempting to infer the target's location. We also consider a malicious target scenario where the target plays the role of the communication eavesdropper. The BS-user and BS-eavesdropper channels follow Rayleigh fading, while the target's azimuth angle is uniformly distributed. To evaluate the performance of this random network, we derive the ergodic secrecy rate (ESR) and the ergodic Cramer-Rao lower bound (CRB) for target localization at both the BS and the sensing eavesdropper. This involves computing the probability density functions (PDFs) of the signal-to-noise ratio (SNR) and the CRB, leveraging the central limit theorem for tractability. We characterize the boundary of the CRB-secrecy rate region, and interpret the performance tradeoffs between communication and sensing while guaranteeing a level of security and privacy in random ISAC networks.
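For reference, the generic ergodic secrecy rate underlying the closed-form derivations, with $\gamma_u$ and $\gamma_e$ denoting the user's and the eavesdropper's SNRs:

```latex
% Generic ergodic secrecy rate (standard definition; the paper's
% closed forms plug in the derived SNR distributions).
R_{\mathrm{sec}}^{\mathrm{erg}}
= \mathbb{E}\left[\big(\log_2(1+\gamma_u) - \log_2(1+\gamma_e)\big)^{+}\right].
```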
In human dialogue, nonverbal information such as nodding and facial expressions is as crucial as verbal information, and spoken dialogue systems are also expected to express such nonverbal behaviors. We focus on nodding, which is critical in an attentive listening system, and propose a model that predicts both its timing and type in real time. The proposed model builds on the voice activity projection (VAP) model, which predicts voice activity from both listener and speaker audio. Unlike conventional models, we extend it to predict various types of nodding in a continuous, real-time manner. In addition, the proposed model incorporates multi-task learning with verbal backchannel prediction and pretraining on general dialogue data. Multi-task learning yielded significant improvements in the timing and type prediction task. We confirmed that reducing the processing rate enables real-time operation without a substantial drop in accuracy, and integrated the model into an avatar attentive listening system. Subjective evaluations showed that it outperformed the conventional method, which always nods in sync with verbal backchannels. The code and trained models are available at this https URL.
We investigate the integration of beyond-diagonal reconfigurable intelligent surfaces (BDRISs) into cell-free massive multiple-input multiple-output (CFmMIMO) systems to enhance simultaneous wireless information and power transfer (SWIPT). To simultaneously support two groups of users, energy receivers (ERs) and information receivers (IRs), without sacrificing time-frequency resources, a subset of access points (APs) is dedicated to serving ERs with the aid of a BDRIS, while the remaining APs focus on supporting IRs. A protective partial zero-forcing precoding technique is implemented at the APs to manage the non-coherent interference between the ERs and IRs. Subsequently, closed-form expressions for the spectral efficiency of the IRs and the average sum of harvested energy at the ERs are leveraged to formulate a comprehensive optimization problem. This problem jointly optimizes the AP selection, AP power control, and scattering matrix design at the BDRIS, all based on long-term statistical channel state information. This challenging problem is then effectively transformed into more tractable forms. To solve these subproblems, efficient algorithms are proposed, including a heuristic search for the scattering matrix design, as well as successive convex approximation and deep reinforcement learning methods for the joint AP mode selection and power control design. Numerical results show that a BDRIS with a group- or fully connected architecture achieves significant energy harvesting gains over the conventional diagonal RIS, notably delivering up to a seven-fold increase in the average sum of harvested energy when a heuristic-based scattering matrix design is employed.