New articles on Electrical Engineering and Systems Science


[1] 2501.06223

Interpretable Auto Window Setting for Deep-Learning-Based CT Analysis

From its early days of popularization to the present, the window setting in Computed Tomography (CT) has always been an indispensable part of the CT analysis process. Although research has investigated the capabilities of CT multi-window fusion in enhancing neural networks, there remains a paucity of domain-invariant, intuitively interpretable methodologies for Auto Window Setting. In this work, we propose a plug-and-play module originating from the Tanh activation function, which is compatible with mainstream deep learning architectures. Starting from the physical principles of CT, we adhere to the principle of interpretability to ensure the module's reliability for medical implementations. The domain-invariant design facilitates observation of the preference decisions rendered by the adaptive mechanism from a clinically intuitive perspective. This enables the proposed method not only to be understood by experts in neural networks but also to garner higher trust from clinicians. We confirm the effectiveness of the proposed method on multiple open-source datasets, yielding 10%~200% Dice improvements on hard segmentation targets.
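
The abstract describes a learnable, Tanh-based windowing module that can be prepended to standard architectures. As a rough illustration only (the exact parameterization is not given in the abstract), a minimal PyTorch-style sketch of such a module with a trainable window center and width might look like the following; the class and parameter names are hypothetical.

    import torch
    import torch.nn as nn

    class TanhWindow(nn.Module):
        """Hypothetical learnable CT windowing layer built on the Tanh activation.

        Maps raw Hounsfield units (HU) to [-1, 1] with a trainable window center
        and width, so the network can adapt the window setting end-to-end.
        """
        def __init__(self, init_center=40.0, init_width=400.0):
            super().__init__()
            self.center = nn.Parameter(torch.tensor(init_center))
            self.width = nn.Parameter(torch.tensor(init_width))

        def forward(self, hu):
            # Smooth, differentiable analogue of clipping HU values to
            # [center - width/2, center + width/2]
            return torch.tanh(2.0 * (hu - self.center) / self.width)

    # Usage sketch: prepend to any segmentation backbone,
    # e.g. x = TanhWindow()(ct_volume_in_hu)

Because the layer is differentiable, the learned center and width can afterwards be read off and interpreted as a conventional CT window, which is in the spirit of the clinically intuitive inspection described above.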


[2] 2501.06273

Underwater Image Enhancement using Generative Adversarial Networks: A Survey

In recent years, there has been a surge of research focused on underwater image enhancement using Generative Adversarial Networks (GANs), driven by the need to overcome the challenges posed by underwater environments. Issues such as light attenuation, scattering, and color distortion severely degrade the quality of underwater images, limiting their use in critical applications. GANs have emerged as a powerful tool for enhancing underwater photos due to their ability to learn complex transformations and generate realistic outputs. These advancements have been applied to real-world applications, including marine biology and ecosystem monitoring, coral reef health assessment, underwater archaeology, and autonomous underwater vehicle (AUV) navigation. This paper explores all major approaches to underwater image enhancement, from physical and physics-free models to Convolutional Neural Network (CNN)-based models and state-of-the-art GAN-based methods. It provides a comprehensive analysis of these methods, evaluation metrics, datasets, and loss functions, offering a holistic view of the field. Furthermore, the paper delves into the limitations and challenges faced by current methods, such as generalization issues, high computational demands, and dataset biases, while suggesting potential directions for future research.


[3] 2501.06320

TTS-Transducer: End-to-End Speech Synthesis with Neural Transducer

This work introduces TTS-Transducer - a novel architecture for text-to-speech, leveraging the strengths of audio codec models and neural transducers. Transducers, renowned for their superior quality and robustness in speech recognition, are employed to learn monotonic alignments, avoiding the need for explicit duration predictors. Neural audio codecs efficiently compress audio into discrete codes, revealing the possibility of applying text modeling approaches to speech generation. However, the complexity of predicting multiple tokens per frame from several codebooks, as necessitated by audio codec models with residual quantizers, poses a significant challenge. The proposed system first uses a transducer architecture to learn monotonic alignments between tokenized text and speech codec tokens for the first codebook. Next, a non-autoregressive Transformer predicts the remaining codes using the alignment extracted from the transducer loss. The proposed system is trained end-to-end. We show that TTS-Transducer is a competitive and robust alternative to contemporary TTS systems.


[4] 2501.06335

A Comparison of Strategies to Embed Physics-Informed Neural Networks in Nonlinear Model Predictive Control Formulations Solved via Direct Transcription

This study aims to benchmark candidate strategies for embedding neural network (NN) surrogates in nonlinear model predictive control (NMPC) formulations that are subject to systems described with partial differential equations and that are solved via direct transcription (i.e., simultaneous methods). This study focuses on the use of physics-informed NNs and physics-informed convolutional NNs as the internal (surrogate) models within the NMPC formulation. One strategy embeds NN models as explicit algebraic constraints, leveraging the automatic differentiation (AD) of an algebraic modelling language (AML) to evaluate the derivatives. Alternatively, the solver can be provided with derivatives computed external to the AML via the AD routines of the machine learning environment the NN is trained in. The three numerical experiments considered in this work reveal that replacing mechanistic models with NN surrogates may not always offer computational advantages when smooth activation functions are used in conjunction with a local nonlinear solver (e.g., Ipopt), even with highly nonlinear systems. Moreover, in this context, the external function evaluation of the NN surrogates often outperforms the embedding strategies that rely on explicit algebraic constraints, likely due to the difficulty in initializing the auxiliary variables and constraints introduced by explicit algebraic reformulations.


[5] 2501.06355

Low-Complexity Detection of Multiple Preambles in the Presence of Mobility and Delay Spread

Current wireless infrastructure is optimized to support downlink applications. This paper anticipates the emergence of applications where engineering focus shifts from downlink to uplink. The current paradigm of scheduling users on reserved uplink resources is not able to deal efficiently with unpredictable traffic patterns. As a result, 3GPP introduced the 2-step RACH as a mechanism to enable grant-free (random) initial access. The first of the two steps is preamble detection in a RACH slot, and in this paper we describe a low-complexity algorithm for simultaneous detection of multiple preambles in the presence of mobility and delay spread. We provide a pathway to standards adoption by choosing ZC sequences as preambles, as ZC sequences already appear in 5G standards. We construct preambles by using the discrete Zak transform to pass from a ZC sequence of length MN in the time domain (TD) to a quasi-periodic MxN array in the delay-Doppler (DD) domain. There are MN quasi-periodic Dirac pulses, each corresponding to a Zak-OTFS carrier waveform, and the ZC preamble is simply the corresponding sum of Zak-OTFS carrier waveforms. We detect multiple preambles in the presence of mobility and delay spread by sampling the received signal on the MxN period grid in the DD domain. We approach detection as a compressed sensing problem. We represent a preamble as a column of length MN in the DD domain and apply discrete shifts in delay and Doppler to produce a block with O(MN) columns in the compressed sensing matrix. The superposition of multiple preambles determines a block sparse sum of columns in the sensing matrix. The correlation properties of ZC sequences result in a highly structured compressed sensing matrix, making it possible to identify constituent preambles using OST, which has complexity O(M^3N^3). In this paper, we describe an algorithm with complexity that is O(M^2N^2) in the size of an individual column.
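
For readers unfamiliar with the construction, the NumPy sketch below (illustrative only) generates a root-u Zadoff-Chu sequence of length MN and maps it to an M x N delay-Doppler array using one common definition of the discrete Zak transform; the normalization and index conventions used by the authors may differ.

    import numpy as np

    def zadoff_chu(u, L):
        """Root-u Zadoff-Chu sequence of length L (u coprime to L)."""
        n = np.arange(L)
        cf = L % 2
        return np.exp(-1j * np.pi * u * n * (n + cf) / L)

    def discrete_zak(x, M, N):
        """One common discrete Zak transform: length-MN sequence -> M x N DD array.

        Z[m, n] = (1/sqrt(N)) * sum_k x[m + k*M] * exp(-j*2*pi*n*k/N)
        """
        X = x.reshape(N, M)                      # X[k, m] = x[k*M + m]
        Z = np.fft.fft(X, axis=0) / np.sqrt(N)   # DFT over the quasi-period index k
        return Z.T                               # shape (M, N): delay x Doppler

    M, N, u = 16, 15, 7                          # illustrative sizes, MN = 240
    zc = zadoff_chu(u, M * N)
    dd_array = discrete_zak(zc, M, N)
    print(dd_array.shape)                        # (16, 15)

Delay and Doppler shifts of this DD array would then populate the columns of the compressed sensing matrix described above.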


[6] 2501.06356

Ultrasound Image Synthesis Using Generative AI for Lung Ultrasound Detection

Developing reliable healthcare AI models requires training with representative and diverse data. In imbalanced datasets, model performance tends to plateau on the more prevalent classes while remaining low on less common cases. To overcome this limitation, we propose DiffUltra, the first generative AI technique capable of synthesizing realistic Lung Ultrasound (LUS) images with extensive lesion variability. Specifically, we condition the generative AI by the introduced Lesion-anatomy Bank, which captures the lesion's structural and positional properties from real patient data to guide the image synthesis. We demonstrate that DiffUltra improves consolidation detection by 5.6% in AP compared to the models trained solely on real patient data. More importantly, DiffUltra increases data diversity and prevalence of rare cases, leading to a 25% AP improvement in detecting rare instances such as large lung consolidations, which make up only 10% of the dataset.


[7] 2501.06414

IPP-Net: A Generalizable Deep Neural Network Model for Indoor Pathloss Radio Map Prediction

In this paper, we propose a generalizable deep neural network model for indoor pathloss radio map prediction (termed IPP-Net). IPP-Net is based on a UNet architecture and learned from both large-scale ray tracing simulation data and a modified 3GPP indoor hotspot model. The performance of IPP-Net is evaluated in the First Indoor Pathloss Radio Map Prediction Challenge in ICASSP 2025. The evaluation results show that IPP-Net achieves a weighted root mean square error of 9.501 dB on three competition tasks and obtains the second overall ranking.


[8] 2501.06449

Target Detection in ISAC Systems with Active RISs: A Multi-Perspective Observation Approach

Integrated sensing and communication (ISAC) has emerged as a transformative technology for 6G networks, enabling the seamless integration of communication and sensing functionalities. Reconfigurable intelligent surfaces (RIS), with their capability to adaptively reconfigure the radio environment, have shown significant potential in enhancing communication quality and enabling advanced cooperative sensing. This paper investigates a multi-RIS-assisted ISAC system and introduces a novel multi-perspective observation framework that leverages the diversity of multiple observation paths, each exhibiting distinct spatial, delay, and Doppler characteristics for both target and clutter. The proposed framework integrates symbol-level precoding (SLP) and space-time adaptive processing (STAP) to fully exploit the benefits of multi-perspective observations, enabling superior target-clutter separation and significantly improving detection accuracy. The objective is to jointly design the transmit waveform, reflection coefficients of multiple active RISs, and spatial-temporal receive filters to maximize the radar output signal-to-clutter-plus-noise ratio (SCNR) for target detection, while ensuring the quality-of-service (QoS) requirements of communication users. To address the resulting non-convex optimization problem, an effective iterative algorithm is developed, combining fractional programming (FP), majorization-minimization (MM), and the alternating direction method of multipliers (ADMM). Extensive simulation results validate the effectiveness of the proposed multi-perspective observation strategy, demonstrating its advantages in improving target detection performance in challenging environments.


[9] 2501.06454

Reinforcement Learning for Enhancing Sensing Estimation in Bistatic ISAC Systems with UAV Swarms

This paper introduces a novel Multi-Agent Reinforcement Learning (MARL) framework to enhance integrated sensing and communication (ISAC) networks using unmanned aerial vehicle (UAV) swarms as sensing radars. By framing the positioning and trajectory optimization of UAVs as a Partially Observable Markov Decision Process, we develop a MARL approach that leverages centralized training with decentralized execution to maximize the overall sensing performance. Specifically, we implement a decentralized cooperative MARL strategy to enable UAVs to develop effective communication protocols, therefore enhancing their environmental awareness and operational efficiency. Additionally, we augment the MARL solution with a transmission power adaptation technique to mitigate interference between the communicating drones and optimize the learned communication protocol efficiency. Despite the increased complexity, our solution demonstrates robust performance and adaptability across various scenarios, providing a scalable and cost-effective enhancement for future ISAC networks.


[10] 2501.06470

Ptychography using Blind Multi-Mode PMACE

Ptychography is an imaging technique that enables nanometer-scale reconstruction of complex transmittance images by scanning objects with overlapping illumination patterns. However, the illumination function is typically unknown, which presents challenges for reconstruction, especially when using partially coherent light sources. In this paper, we introduce Blind Multi-Mode Projected Multi-Agent Consensus Equilibrium (BM-PMACE) for blind ptychographic reconstruction. We extend the PMACE framework for distributed inverse problems to jointly estimate the complex transmittance image and multiple, unknown, partially coherent probe functions. Importantly, our method maintains local probe estimates to exploit complementary information at multiple probe locations. Our method also incorporates a dynamic strategy for integrating additional probe modes. Through experimental simulations and validations using both synthetic and measured data, we demonstrate that BM-PMACE outperforms existing approaches in reconstruction quality and convergence rate.


[11] 2501.06474

The 1st SpeechWellness Challenge: Detecting Suicidal Risk Among Adolescents

The 1st SpeechWellness Challenge (SW1) aims to advance methods for detecting suicidal risk in adolescents using speech analysis techniques. Suicide among adolescents is a critical public health issue globally. Early detection of suicidal tendencies can lead to timely intervention and potentially save lives. Traditional methods of assessment often rely on self-reporting or clinical interviews, which may not always be accessible. The SW1 challenge addresses this gap by exploring speech as a non-invasive and readily available indicator of mental health. We release the SW1 dataset which contains speech recordings from 600 adolescents aged 10-18 years. By focusing on speech generated from natural tasks, the challenge seeks to uncover patterns and markers that correlate with suicidal risk.


[12] 2501.06478

Speech Recognition for Automatically Assessing Afrikaans and isiXhosa Preschool Oral Narratives

We develop automatic speech recognition (ASR) systems for stories told by Afrikaans and isiXhosa preschool children. Oral narratives provide a way to assess children's language development before they learn to read. We consider a range of prior child-speech ASR strategies to determine which is best suited to this unique setting. Using Whisper and only 5 minutes of transcribed in-domain child speech, we find that additional in-domain adult data (adult speech matching the story domain) provides the biggest improvement, especially when coupled with voice conversion. Semi-supervised learning also helps for both languages, while parameter-efficient fine-tuning helps on Afrikaans but not on isiXhosa (which is under-represented in the Whisper model). Few child-speech studies look at non-English data, and even fewer at the preschool ages of 4 and 5. Our work therefore represents a unique validation of a wide range of previous child-speech ASR strategies in an under-explored setting.


[13] 2501.06482

Deep Reinforcement Learning Optimized Intelligent Resource Allocation in Active RIS-Integrated TN-NTN Networks

This work explores the deployment of active reconfigurable intelligent surfaces (A-RIS) in integrated terrestrial and non-terrestrial networks (TN-NTN) while utilizing coordinated multipoint non-orthogonal multiple access (CoMP-NOMA). Our system model incorporates a UAV-assisted RIS in coordination with a terrestrial RIS which aims for signal enhancement. We aim to maximize the sum rate for all users in the network using a custom hybrid proximal policy optimization (H-PPO) algorithm by optimizing the UAV trajectory, base station (BS) power allocation factors, active RIS amplification factor, and phase shift matrix. We integrate edge users into NOMA pairs to achieve diversity gain, further enhancing the overall experience for edge users. Exhaustive comparisons are made with passive RIS-assisted networks to demonstrate the superior efficacy of active RIS in terms of energy efficiency, outage probability, and network sum rate.


[14] 2501.06494

TopoFormer: Integrating Transformers and ConvLSTMs for Coastal Topography Prediction

This paper presents TopoFormer, a novel hybrid deep learning architecture that integrates transformer-based encoders with convolutional long short-term memory (ConvLSTM) layers for the precise prediction of topographic beach profiles referenced to elevation datums, with a particular focus on Mean Low Water Springs (MLWS) and Mean Low Water Neaps (MLWN). Accurate topographic estimation down to MLWS is critical for coastal management, navigation safety, and environmental monitoring. Leveraging a comprehensive dataset from the Wales Coastal Monitoring Centre (WCMC), consisting of over 2000 surveys across 36 coastal survey units, TopoFormer addresses key challenges in topographic prediction, including temporal variability and data gaps in survey measurements. The architecture uniquely combines multi-head attention mechanisms and ConvLSTM layers to capture both long-range dependencies and localized temporal patterns inherent in beach profile data. TopoFormer's predictive performance was rigorously evaluated against state-of-the-art models, including DenseNet, 1D/2D CNNs, and LSTMs. While all models demonstrated strong performance, TopoFormer achieved the lowest mean absolute error (MAE), as low as 2 cm, and provided superior accuracy in both in-distribution (ID) and out-of-distribution (OOD) evaluations.


[15] 2501.06510

Cooperative Optimal Output Tracking for Discrete-Time Multiagent Systems: Stabilizing Policy Iteration Frameworks and Analysis

In this paper, two model-free optimal output tracking frameworks based on policy iteration for discrete-time multi-agent systems are proposed. First, we establish a framework of stabilizing policy iteration that can start from any initial feedback control policy, relaxing the dependence of traditional policy iteration on an initial stabilizing control policy. Then, another efficient and equivalent $Q$-learning policy iteration framework is developed, which is shown to require less system data to obtain the same results as the stabilizing policy iteration. Both frameworks obtain a stabilizing control policy by iterating the stabilizing virtual closed-loop system step-by-step to the actual closed-loop system. Multiple explicit schemes for the iteration step-size/coefficient are designed and their stability during the above iterations is analyzed. By using the generated closed-loop stabilizing control policy and the two frameworks, the optimal feedback control gain is obtained. The approximate solution of the regulator equations is found by model-free iteration, which leads to the optimal feedforward gain. Finally, the cooperative optimal output tracking is realized by a distributed feedforward-feedback controller. The proposed algorithms are validated by simulation.


[16] 2501.06530

Multi-modal Speech Enhancement with Limited Electromyography Channels

Speech enhancement (SE) aims to improve the clarity, intelligibility, and quality of speech signals for various speech enabled applications. However, air-conducted (AC) speech is highly susceptible to ambient noise, particularly in low signal-to-noise ratio (SNR) and non-stationary noise environments. Incorporating multi-modal information has shown promise in enhancing speech in such challenging scenarios. Electromyography (EMG) signals, which capture muscle activity during speech production, offer noise-resistant properties beneficial for SE in adverse conditions. Most previous EMG-based SE methods required 35 EMG channels, limiting their practicality. To address this, we propose a novel method that combines only 8-channel EMG signals with acoustic signals, using a modified SEMamba network with added cross-modality modules. Our experiments demonstrate substantial improvements in speech quality and intelligibility over traditional approaches, especially in extremely low SNR settings. Notably, compared to the SE (AC) approach, our method achieves a significant PESQ gain of 0.235 under matched low SNR conditions and 0.527 under mismatched conditions, highlighting its robustness.


[17] 2501.06552

When xURLLC Meets NOMA: A Stochastic Network Calculus Perspective

The advent of next-generation ultra-reliable and low-latency communications (xURLLC) presents stringent and unprecedented requirements for key performance indicators (KPIs). As a disruptive technology, non-orthogonal multiple access (NOMA) harbors the potential to fulfill these stringent KPIs essential for xURLLC. However, the immaturity of research on the tail distributions of these KPIs significantly impedes the application of NOMA to xURLLC. Stochastic network calculus (SNC), as a potent methodology, is leveraged to provide dependable theoretical insights into tail distribution analysis and statistical QoS provisioning (SQP). In this article, we develop a NOMA-assisted uplink xURLLC network architecture that incorporates an SNC-based SQP theoretical framework (SNC-SQP) to support tail distribution analysis in terms of delay, age-of-information (AoI), and reliability. Based on SNC-SQP, an SQP-driven power optimization problem is proposed to minimize transmit power while guaranteeing xURLLC's KPIs on delay, AoI, reliability, and power consumption. Extensive simulations validate our proposed theoretical framework and demonstrate that the proposed power allocation scheme significantly reduces uplink transmit power and outperforms conventional schemes in terms of SQP performance.


[18] 2501.06562

Discrete Speech Unit Extraction via Independent Component Analysis

Self-supervised speech models (S3Ms) have become a common tool for the speech processing community, leveraging representations for downstream tasks. Clustering S3M representations yields discrete speech units (DSUs), which serve as compact representations for speech signals. DSUs are typically obtained by k-means clustering. Using DSUs often leads to strong performance in various tasks, including automatic speech recognition (ASR). However, despite the high dimensionality and redundancy of S3M representations, preprocessing them for better clustering remains unexplored, even though it can affect the quality of DSUs. In this paper, we investigate the potential of linear preprocessing methods for extracting DSUs. We evaluate standardization, principal component analysis, whitening, and independent component analysis (ICA) on DSU-based ASR benchmarks and demonstrate their effectiveness as preprocessing for k-means. We also conduct extensive analyses of their behavior, such as orthogonality or interpretability of individual components of ICA.
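
As a rough illustration of the kind of linear preprocessing pipeline studied here, the scikit-learn sketch below standardizes frame-level S3M representations, applies ICA (PCA or whitening could be swapped in the same place), and then clusters the transformed features with k-means to obtain discrete speech units; the feature matrix, number of components, and number of units are placeholders rather than the paper's settings.

    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import FastICA
    from sklearn.cluster import KMeans

    # feats: (num_frames, dim) matrix of S3M representations (random placeholder here)
    feats = np.random.randn(10000, 768)

    prep = StandardScaler().fit_transform(feats)              # standardization
    ica = FastICA(n_components=256, random_state=0, max_iter=500)
    prep = ica.fit_transform(prep)                            # ICA preprocessing

    kmeans = KMeans(n_clusters=500, n_init=10, random_state=0)
    units = kmeans.fit_predict(prep)                          # one DSU label per frame

The resulting per-frame unit labels can then be fed to a DSU-based ASR benchmark exactly as the unprocessed k-means units would be, which makes the effect of the preprocessing step directly comparable.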


[19] 2501.06573

Modeling the residual queue and queue-dependent capacity in a static traffic assignment problem

The residual queue during a given study period (e.g., peak hour) is an important feature that should be considered when solving a traffic assignment problem under equilibrium for strategic traffic planning. Although studies have focused extensively on static or quasi-dynamic traffic assignment models considering the residual queue, they have failed to capture the situation wherein the equilibrium link flow passing through the link is less than the link physical capacity under congested conditions. To address this critical issue, we introduce a novel static traffic assignment model that explicitly incorporates the residual queue and queue-dependent link capacity. The proposed model ensures that equilibrium link flows remain within the physical capacity bounds, yielding estimations more aligned with data observed by traffic detectors, especially in oversaturated scenarios. A generalized link cost function is proposed that considers queue-dependent capacity and includes an additional queuing delay term. The queuing delay term represents the added travel cost under congestion, offering a framework wherein conventional static models, both with and without physical capacity constraints, become special cases of our model. Our study rigorously analyzes the mathematical properties of the new model, establishing the theoretical uniqueness of solutions for link flow and residual queue under certain conditions. We also introduce a gradient projection-based alternating minimization algorithm tailored for the proposed model. Numerical examples are conducted to demonstrate the superiority and merit of the proposed model and solution algorithm.
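
The abstract does not give the exact functional form, but a hedged sketch of what a generalized link cost with a queue-dependent capacity and an explicit queuing-delay term can look like (BPR-style, notation ours, purely illustrative) is

    t_a(v_a, q_a) \;=\; t_a^0\left[1 + \alpha\left(\frac{v_a}{c_a(q_a)}\right)^{\beta}\right] \;+\; \frac{q_a}{c_a(q_a)},

where v_a is the link flow, q_a the residual queue, c_a(q_a) the queue-dependent capacity, and the second term the queuing delay; setting q_a = 0 recovers a conventional static cost function, which is consistent with the special-case framing described above.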


[20] 2501.06595

Fast multi-contrast MRI using joint multiscale energy model

The acquisition of 3D multicontrast MRI data with good isotropic spatial resolution is challenged by lengthy scan times. In this work, we introduce a CNN-based multiscale energy model to learn the joint probability distribution of the multi-contrast images. The joint recovery of the contrasts from undersampled data is posed as a maximum a posteriori estimation scheme, where the learned energy serves as the prior. We use a majorize-minimize algorithm to solve the optimization scheme. The proposed model leverages the redundancies across different contrasts to improve image fidelity. The proposed scheme is observed to preserve fine details and contrast, offering sharper reconstructions compared to reconstruction methods that independently recover the contrasts. While we focus on 3D MPNRAGE acquisitions in this work, the proposed approach is generalizable to arbitrary multi-contrast settings.
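
In outline, the joint recovery described above can be written as a maximum a posteriori problem with the learned multiscale energy as the prior; a hedged sketch of the formulation (notation ours, not necessarily the authors') is

    \hat{\mathbf{x}} \;=\; \arg\min_{\mathbf{x}} \;\sum_{c=1}^{C} \|\mathbf{A}_c \mathbf{x}_c - \mathbf{b}_c\|_2^2 \;+\; \lambda\, E_{\theta}(\mathbf{x}_1,\ldots,\mathbf{x}_C),

where x_c are the individual contrasts, A_c the undersampled forward operators, b_c the measured k-space data, and E_theta the learned joint energy. The majorize-minimize algorithm then alternates between building a quadratic surrogate of E_theta at the current iterate and solving the resulting regularized least-squares problem, which is how the cross-contrast redundancies enter the reconstruction.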


[21] 2501.06635

A Reduced Order Iterative Linear Quadratic Regulator (ILQR) Technique for the Optimal Control of Nonlinear Partial Differential Equations

In this paper, we introduce a reduced order model-based reinforcement learning (MBRL) approach, utilizing the Iterative Linear Quadratic Regulator (ILQR) algorithm for the optimal control of nonlinear partial differential equations (PDEs). The approach proposes a novel modification of the ILQR technique: it uses the Method of Snapshots to identify a reduced order Linear Time Varying (LTV) approximation of the nonlinear PDE dynamics around a current estimate of the optimal trajectory, utilizes the identified LTV model to solve a time-varying reduced order LQR problem to obtain an improved estimate of the optimal trajectory along with a new reduced basis, and iterates till convergence. The convergence behavior of the reduced order approach is analyzed and the algorithm is shown to converge to a limit set that is dependent on the truncation error in the reduction. The proposed approach is tested on the viscous Burgers' equation and two phase-field models for microstructure evolution in materials, and the results show that there is a significant reduction in the computational burden over the standard ILQR approach, without significantly sacrificing performance.


[22] 2501.06657

Sidelobe Level Reduction in the ACF of NLFM Signals Using the Smoothing Spline Method

The high level of sidelobes in the autocorrelation function of the nonlinear frequency modulation signal is a challenge. One of the conventional methods to reduce the sidelobe levels is to use the principle of stationary phase. In this method, the frequency function is calculated using a selection window. The signal frequency function cannot be obtained in closed form and numerical methods must be used to find it. This is usually done using the polynomial curve fitting. In this paper, the frequency function of the signal has been obtained using the smoothing spline method. The simulation results show an improvement of 10 dB to 20 dB in the peak sidelobe level of the autocorrelation function of the nonlinear frequency modulation signal compared to the previous methods.
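
As a minimal illustration of replacing polynomial curve fitting with a smoothing spline when recovering the NLFM frequency function numerically, the SciPy sketch below fits a cubic smoothing spline to sampled (time, frequency) pairs of the kind produced by the stationary-phase computation and integrates it to form the waveform phase; the sample data and smoothing factor are placeholders, not the paper's values.

    import numpy as np
    from scipy.interpolate import UnivariateSpline

    # Placeholder samples of the numerically obtained time-frequency relation t -> f(t)
    t = np.linspace(0.0, 1.0, 201)                           # normalized time
    f_samples = 0.5 * np.tanh(6.0 * (t - 0.5))               # stand-in for the numerical solution
    f_samples = f_samples + 0.005 * np.random.randn(t.size)  # numerical noise

    # Smoothing spline in place of polynomial curve fitting; s controls the smoothness
    freq_fn = UnivariateSpline(t, f_samples, k=3, s=1e-3)

    dt = t[1] - t[0]
    phase = 2.0 * np.pi * np.cumsum(freq_fn(t)) * dt         # integrate f(t) to obtain the phase
    signal = np.exp(1j * phase)                              # NLFM waveform sketch

The autocorrelation of the resulting waveform can then be inspected to compare peak sidelobe levels against a polynomial-fit baseline.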


[23] 2501.06670

A Geometric Analysis-Based Safety Assessment Framework for MASS Route Decision-Making in Restricted Waters

To enhance the safety of Maritime Autonomous Surface Ships (MASS) navigating in restricted waters, this paper aims to develop a geometric analysis-based route safety assessment (GARSA) framework, specifically designed for their route decision-making in irregularly shaped waterways. Utilizing line and point geometric elements to define waterway boundaries, the framework enables to construct a dynamic width characterization function to quantify spatial safety along intricate waterways. An iterative method is developed to calculate this function, enabling an abstracted spatial property representation of the waterways. Based on this, we introduce a navigational safety index that balances global navigational safety and local risk to determine the safest route. To accommodate ship kinematic constraints, path modifications are applied using a dynamic window approach. A case study in a simulated Port of Hamburg environment shows that GARSA effectively identifies safe routes and avoids the risk of entering narrow waterways in an autonomous manner, thereby prioritizing safety in route decision-making for MASS in confined waters.


[24] 2501.06679

Coordinated Deliverable Energy Flexibility from EV Aggregators in Distribution Networks

This paper presents a coordinated framework to optimize electric vehicle (EV) charging considering grid constraints and system uncertainties. The proposed framework consists of two optimization models. In particular, the distribution system operator (DSO) solves the first model to optimize the amount of deliverable energy flexibility that can be obtained from EV aggregators. To address the uncertainties of loads and solar energy generation, a hybrid robust/stochastic approach is employed, enabling the transformation of uncertainty-related constraints into a set of equivalent deterministic constraints. Once the DSO has computed the optimal energy flexibility, each aggregator utilizes the second optimization model to optimize the charging schedule for its respective fleet of EVs. Numerical simulations are performed on a modified IEEE 33-bus distribution network to illustrate the efficiency of the proposed framework.


[25] 2501.06724

Wavelet Integrated Convolutional Neural Network for ECG Signal Denoising

Wearable electrocardiogram (ECG) measurement using dry electrodes has a problem with high-intensity noise distortion. Hence, a robust noise reduction method is required. However, overlapping frequency bands of ECG and noise make noise reduction difficult. Therefore, it is necessary to provide a mechanism that changes the characteristics of the noise based on its intensity and type. This study proposes a convolutional neural network (CNN) model with an additional wavelet transform layer that extracts the specific frequency features in a clean ECG. Testing confirms that the proposed method effectively predicts accurate ECG behavior with reduced noise by accounting for all frequency domains. In an experiment, noisy signals in the signal-to-noise ratio (SNR) range of -10 to 10 are evaluated, demonstrating that the efficiency of the proposed method is higher when the SNR is small.
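
A rough sketch (PyTorch + PyWavelets, illustrative only) of feeding wavelet sub-band features alongside the raw signal into a small 1D CNN denoiser follows; the architecture and wavelet choice are placeholders rather than the authors' exact design, and here the wavelet features are precomputed instead of being implemented as a differentiable layer inside the network.

    import numpy as np
    import pywt
    import torch
    import torch.nn as nn

    def wavelet_channels(x, wavelet="db4", level=3):
        """Stack per-level wavelet reconstructions as extra input channels (hypothetical)."""
        coeffs = pywt.wavedec(x, wavelet, level=level)
        chans = []
        for i in range(1, len(coeffs)):
            keep = [np.zeros_like(c) for c in coeffs]
            keep[i] = coeffs[i]
            chans.append(pywt.waverec(keep, wavelet)[: len(x)])
        return np.stack([x] + chans)                 # shape: (level + 1, len(x))

    denoiser = nn.Sequential(                        # minimal 1D CNN denoiser sketch
        nn.Conv1d(4, 32, 9, padding=4), nn.ReLU(),
        nn.Conv1d(32, 32, 9, padding=4), nn.ReLU(),
        nn.Conv1d(32, 1, 9, padding=4),
    )

    noisy_ecg = np.random.randn(1024).astype(np.float32)    # placeholder noisy segment
    inp = torch.from_numpy(wavelet_channels(noisy_ecg)).unsqueeze(0).float()
    clean_estimate = denoiser(inp)                   # (1, 1, 1024)

The intent of the extra channels is the same as described above: giving the CNN explicit access to frequency sub-bands so it can suppress noise whose spectrum overlaps the ECG.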


[26] 2501.06727

Integrating Pause Information with Word Embeddings in Language Models for Alzheimer's Disease Detection from Spontaneous Speech

Alzheimer's disease (AD) is a progressive neurodegenerative disorder characterized by cognitive decline and memory loss. Early detection of AD is crucial for effective intervention and treatment. In this paper, we propose a novel approach to AD detection from spontaneous speech, which incorporates pause information into language models. Our method involves encoding pause information into embeddings and integrating them into the typical transformer-based language model, enabling it to capture both semantic and temporal features of speech data. We conduct experiments on the Alzheimer's Dementia Recognition through Spontaneous Speech (ADReSS) dataset and its extension, the ADReSSo dataset, comparing our method with existing approaches. Our method achieves an accuracy of 83.1% on the ADReSSo test set. The results demonstrate the effectiveness of our approach in discriminating between AD patients and healthy individuals, highlighting the potential of pauses as a valuable indicator for AD detection. By leveraging speech analysis as a non-invasive and cost-effective tool for AD detection, our research contributes to early diagnosis and improved management of this debilitating disease.
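
As a hedged sketch of the general idea of combining pause information with word embeddings (not the authors' exact architecture), the PyTorch snippet below adds a learned embedding of a discretized pause-duration bin to each token embedding before a transformer encoder; the bin scheme, dimensions, and class head are placeholders.

    import torch
    import torch.nn as nn

    class PauseAwareEncoder(nn.Module):
        """Hypothetical encoder: token embeddings + pause-bin embeddings -> transformer."""
        def __init__(self, vocab_size=30522, d_model=256, n_pause_bins=4, n_classes=2):
            super().__init__()
            self.tok_emb = nn.Embedding(vocab_size, d_model)
            self.pause_emb = nn.Embedding(n_pause_bins, d_model)   # e.g. none/short/medium/long
            layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)
            self.cls = nn.Linear(d_model, n_classes)               # AD vs. healthy control

        def forward(self, token_ids, pause_bins):
            x = self.tok_emb(token_ids) + self.pause_emb(pause_bins)
            h = self.encoder(x)
            return self.cls(h.mean(dim=1))                          # utterance-level logits

    model = PauseAwareEncoder()
    tokens = torch.randint(0, 30522, (2, 50))       # placeholder token ids
    pauses = torch.randint(0, 4, (2, 50))           # pause bin preceding each token
    logits = model(tokens, pauses)                   # (2, 2)

The pause bins would in practice come from forced alignment or ASR timestamps of the spontaneous speech, so that temporal hesitation patterns are injected alongside the lexical content.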


[27] 2501.06750

Multi-Carrier Faster-Than-Nyquist Signaling for OTFS Systems

Orthogonal time frequency space (OTFS) modulation technique is promising for high-mobility applications to achieve reliable communications. However, the capacity of OTFS systems is generally limited by the Nyquist criterion, requiring orthogonal pulses in both time and frequency domains. In this paper, we propose a novel multi-carrier faster-than-Nyquist (MC-FTN) signaling scheme for OTFS systems. By adopting non-orthogonal pulses in both time and frequency domains, our scheme significantly improves the capacity of OTFS systems. Specifically, we firstly develop the signal models for both single-input single-output (SISO) and multiple-input multiple-output (MIMO) OTFS systems. Then, we optimize the delay-Doppler (DD) domain precoding matrix at the transmitter to suppress both the inter-symbol interference (ISI) and inter-carrier interference (ICI) introduced by the MC-FTN signaling. For SISO systems, we develop an eigenvalue decomposition (EVD) precoding scheme with optimal power allocation (PA) for achieving the maximum capacity. For MIMO systems, we develop a successive interference cancellation (SIC)-based precoding scheme via decomposing the capacity maximization problem into multiple sub-capacity maximization problems with largely reduced dimensions of optimization variables. Numerical results demonstrate that our proposed MC-FTN-OTFS signaling scheme achieves significantly higher capacity than traditional Nyquist-criterion-based OTFS systems. Moreover, the SIC-based precoding scheme can effectively reduce the complexity of MIMO capacity maximization, while attaining performance close to the optimal EVD-based precoding scheme.


[28] 2501.06755

Robust Phantom-Assisted Framework for Multi-Person Localization and Vital Signs Monitoring Using MIMO FMCW Radar

With the rising prevalence of cardiovascular and respiratory disorders and an aging global population, healthcare systems face increasing pressure to adopt efficient, non-contact vital sign monitoring (NCVSM) solutions. This study introduces a robust framework for multi-person localization and vital signs monitoring, using multiple-input-multiple-output frequency-modulated continuous wave radar, addressing challenges in real-world, cluttered environments. Two key contributions are presented. First, a custom hardware phantom was developed to simulate multi-person NCVSM scenarios, utilizing recorded thoracic impedance signals to replicate realistic cardiopulmonary dynamics. The phantom's design facilitates repeatable and rapid validation of radar systems and algorithms under diverse conditions to accelerate deployment in human monitoring. Second, aided by the phantom, we designed a robust algorithm for multi-person localization utilizing joint sparsity and cardiopulmonary properties, alongside harmonics-resilient dictionary-based vital signs estimation, to mitigate interfering respiration harmonics. Additionally, an adaptive signal refinement procedure is introduced to enhance the accuracy of continuous NCVSM by leveraging the continuity of the estimates. Performance was validated and compared to existing techniques through 12 phantom trials and 12 human trials, including both single- and multi-person scenarios, demonstrating superior localization and NCVSM performance. For example, in multi-person human trials, our method achieved average respiration rate estimation accuracies of 94.14%, 98.12%, and 98.69% within error thresholds of 2, 3, and 4 breaths per minute, respectively, and heart rate accuracies of 87.10%, 94.12%, and 95.54% within the same thresholds. These results highlight the potential of this framework for reliable multi-person NCVSM in healthcare and IoT applications.


[29] 2501.06756

Generative AI Enabled Robust Sensor Placement in Cyber-Physical Power Systems: A Graph Diffusion Approach

With advancements in physical power systems and network technologies, integrated Cyber-Physical Power Systems (CPPS) have significantly enhanced system monitoring and control efficiency and reliability. This integration, however, introduces complex challenges in designing coherent CPPS, particularly as few studies concurrently address the deployment of physical layers and communication connections in the cyber layer. This paper addresses these challenges by proposing a framework for robust sensor placement to optimize anomaly detection in the physical layer and enhance communication resilience in the cyber layer. We model the CPPS as an interdependent network via a graph, allowing for simultaneous consideration of both layers. Then, we adopt the Log-normal Shadowing Path Loss (LNSPL) model to ensure reliable data transmission. Additionally, we leverage the Fiedler value to measure graph resilience against line failures and three anomaly detectors to fortify system safety. However, the optimization problem is NP-hard. Therefore, we introduce the Experience Feedback Graph Diffusion (EFGD) algorithm, which utilizes a diffusion process to generate optimal sensor placement strategies. This algorithm incorporates cross-entropy gradient and experience feedback mechanisms to expedite convergence and generate higher reward strategies. Extensive simulations demonstrate that the EFGD algorithm enhances model convergence by 18.9% over existing graph diffusion methods and improves average reward by 22.90% compared to Denoising Diffusion Policy Optimization (DDPO) and 19.57% compared to Graph Diffusion Policy Optimization (GDPO), thereby significantly bolstering the robustness and reliability of CPPS operations.


[30] 2501.06793

Differentially Private Gradient-Tracking-Based Distributed Stochastic Optimization over Directed Graphs

This paper proposes a new differentially private gradient-tracking-based distributed stochastic optimization algorithm over directed graphs. Specifically, privacy noises are added to each agent's state and tracking variable to prevent information leakage, and then perturbed states and tracking variables are transmitted to neighbors. We design two novel schemes of the iteration step-sizes and the sampling number for the algorithm. By using the sampling parameter-controlled subsampling method, both schemes enhance the differential privacy level, and achieve the finite cumulative privacy budget even over infinite iterations. The convergence rate of the algorithm is shown for both nonconvex with the Polyak-Lojasiewicz condition and strongly convex objectives: Scheme (S1) achieves the polynomial convergence rate, and Scheme (S2) achieves the exponential convergence rate. The trade-off between the privacy and the convergence rate is presented. The algorithm's effectiveness and superior performance over the existing works are demonstrated through numerical examples of distributed training on benchmark datasets "MNIST" and "CIFAR-10".


[31] 2501.06810

Improving Cross-Lingual Phonetic Representation of Low-Resource Languages Through Language Similarity Analysis

This paper examines how linguistic similarity affects cross-lingual phonetic representation in speech processing for low-resource languages, emphasizing effective source language selection. Previous cross-lingual research has used various source languages to enhance performance for the target low-resource language without thorough consideration of selection. Our study stands out by providing an in-depth analysis of language selection, supported by a practical approach to assess phonetic proximity among multiple language families. We investigate how within-family similarity impacts performance in multilingual training, which aids in understanding language dynamics. We also evaluate the effect of using phonologically similar languages, regardless of family. For the phoneme recognition task, utilizing phonologically similar languages consistently achieves a relative improvement of 55.6% over monolingual training, even surpassing the performance of a large-scale self-supervised learning model. Multilingual training within the same language family demonstrates that higher phonological similarity enhances performance, while lower similarity results in degraded performance compared to monolingual training.


[32] 2501.06814

Improved joint modelling of breast cancer radiomics features and hazard by image registration aided longitudinal CT data

Patients with metastatic breast cancer (mBC) undergo continuous medical imaging during treatment, making accurate lesion detection and monitoring over time critical for clinical decisions. Predicting drug response from post-treatment data is essential for personalized care and pharmacological research. In collaboration with the U.S. Food and Drug Administration and Novartis Pharmaceuticals, we analyzed serial chest CT scans from two large-scale Phase III trials, MONALEESA 3 and MONALEESA 7. This paper has two objectives: (a) data structuring, developing a Registration Aided Automated Correspondence (RAMAC) algorithm for precise lesion tracking in longitudinal CT data; and (b) survival analysis, creating imaging features and models from RAMAC-structured data to predict patient outcomes. The RAMAC algorithm uses a two-phase pipeline: three-dimensional rigid registration aligns CT images, and a distance metric-based Hungarian algorithm tracks lesion correspondence. Using the structured data, we developed interpretable models to assess progression-free survival (PFS) in mBC patients by combining baseline radiomics, post-treatment changes (Weeks 8, 16, 24), and demographic features. Radiomics effects were studied across time points separately and through a non-correlated additive framework. Radiomics features were reduced using (a) a regularized (L1-penalized) additive Cox proportional hazards model, and (b) variable selection via best subset selection. Performance, measured using the concordance index (C-index), improved with additional time points. Joint modeling, considering correlations among radiomics effects over time, provided insights into relationships between longitudinal radiomics and survival outcomes.


[33] 2501.06838

Generalized and Efficient 2D Gaussian Splatting for Arbitrary-scale Super-Resolution

Equipped with the continuous representation capability of Multi-Layer Perceptron (MLP), Implicit Neural Representation (INR) has been successfully employed for Arbitrary-scale Super-Resolution (ASR). However, the limited receptive field of the linear layers in MLP restricts the representation capability of INR, while it is computationally expensive to query the MLP numerous times to render each pixel. Recently, Gaussian Splatting (GS) has shown its advantages over INR in both visual quality and rendering speed in 3D tasks, which motivates us to explore whether GS can be employed for the ASR task. However, directly applying GS to ASR is exceptionally challenging because the original GS is an optimization-based method through overfitting each single scene, while in ASR we aim to learn a single model that can generalize to different images and scaling factors. We overcome these challenges by developing two novel techniques. Firstly, to generalize GS for ASR, we elaborately design an architecture to predict the corresponding image-conditioned Gaussians of the input low-resolution image in a feed-forward manner. Secondly, we implement an efficient differentiable 2D GPU/CUDA-based scale-aware rasterization to render super-resolved images by sampling discrete RGB values from the predicted continuous Gaussians. Via end-to-end training, our optimized network, namely GSASR, can perform ASR for any image and unseen scaling factors. Extensive experiments validate the effectiveness of our proposed method. The project page can be found at https://mt-cly.github.io/GSASR.github.io/.


[34] 2501.06881

Gaussian Integral based Bayesian Smoother

This work introduces Gaussian integration to address the smoothing problem of a nonlinear stochastic state space model. The probability densities of the states at each time instant are assumed to be Gaussian, and their means and covariances are evaluated by utilizing the odd-even properties of the Gaussian integral, which are further utilized to realize Rauch-Tung-Striebel (RTS) smoothing expressions. Given that Gaussian integration provides an exact solution for the integral of a polynomial function over a Gaussian probability density function, it is anticipated to provide more accurate results than other existing Gaussian approximation-based smoothers such as extended Kalman, cubature Kalman, and unscented Kalman smoothers, especially when polynomial types of nonlinearity are present in the state space models. The developed smoothing algorithm is applied to the Van der Pol oscillator, where the nonlinearity associated with its dynamics is represented using polynomial functions. Simulation results are provided to demonstrate the superiority of the proposed algorithm.
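
For context, the standard RTS backward recursion into which such moment computations plug has the familiar form (with filtered moments m_k, P_k, predicted moments m_{k+1}^-, P_{k+1}^-, and filter-to-prediction cross-covariance D_{k+1}):

    G_k = D_{k+1}\,(P_{k+1}^{-})^{-1}, \qquad
    m_k^{s} = m_k + G_k\,(m_{k+1}^{s} - m_{k+1}^{-}), \qquad
    P_k^{s} = P_k + G_k\,(P_{k+1}^{s} - P_{k+1}^{-})\,G_k^{\top}.

In the proposed smoother, the predicted moments and the cross-covariance are evaluated by Gaussian integration of the polynomial dynamics rather than by linearization or sigma-point rules, which is where the claimed accuracy gain for polynomial nonlinearities arises.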


[35] 2501.06917

Optimizing Phase Allocation in Unbalanced Power Distribution Networks using a Linearized DistFlow Formulation

Power distribution networks, especially in North America, are often unbalanced but are designed to keep unbalance levels within the limits specified by IEEE, IEC, and NEMA standards. However, rapid integration of unbalanced devices, such as electric vehicle (EV) chargers and single-phase solar plants, can exacerbate these imbalances. This increase can trigger protection devices, increase losses, and potentially damage devices. To address this issue, phase swapping (or phase allocation) has been proposed. Existing approaches predominantly rely on heuristic methods. In this work, we develop a mixed integer linear programming (MILP) approach for phase allocation. Our approach uses linearized DistFlow equations to represent the distribution network and incorporates a phase consistency constraint, enforced with binary variables, to ensure that downstream phase configurations align with upstream configurations. We validate the proposed approach on multiple benchmark test cases and demonstrate that it effectively improves network balance, as quantified by various metrics.
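
A hedged sketch of the core ingredients (notation ours, purely illustrative): per-phase linearized DistFlow voltage relations combined with binary phase-assignment variables, for example

    v_{j,\phi} = v_{i,\phi} - 2\,(r_{ij}\,P_{ij,\phi} + x_{ij}\,Q_{ij,\phi}), \qquad
    \sum_{\phi \in \{a,b,c\}} z_{n,\phi} = 1,\quad z_{n,\phi} \in \{0,1\},

where v denotes squared voltage magnitude, P_{ij,phi} and Q_{ij,phi} are per-phase branch flows on line ij, and z_{n,phi} selects the phase to which single-phase device or lateral n is allocated. The phase consistency constraint mentioned above additionally ties downstream z variables to upstream ones so that a reassigned lateral carries a coherent phase labeling through the feeder.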


[36] 2501.06939

Super-Resolution of 3D Micro-CT Images Using Generative Adversarial Networks: Enhancing Resolution and Segmentation Accuracy

We develop a procedure for substantially improving the quality of segmented 3D micro-Computed Tomography (micro-CT) images of rocks with a Machine Learning (ML) Generative Model. The proposed model enhances the resolution eightfold (8x) and addresses segmentation inaccuracies due to the overlapping X-ray attenuation in micro-CT measurement for different rock minerals and phases. The proposed generative model is a 3D Deep Convolutional Wasserstein Generative Adversarial Network with Gradient Penalty (3D DC WGAN-GP). The algorithm is trained on segmented 3D low-resolution micro-CT images and segmented unpaired complementary 2D high-resolution Laser Scanning Microscope (LSM) images. The algorithm was demonstrated on multiple samples of Berea sandstones. We achieved high-quality super-resolved 3D images with a resolution of 0.4375 micro-m/voxel and accurate segmentation of the constituent minerals and pore space. The described procedure can significantly expand the modern capabilities of digital rock physics.
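
For reference, the gradient-penalty term that the "-GP" in the model name refers to takes the standard form

    \mathcal{L}_{\mathrm{GP}} = \lambda\, \mathbb{E}_{\hat{x}}\big[(\|\nabla_{\hat{x}} D(\hat{x})\|_2 - 1)^2\big],

where x-hat is sampled along straight lines between real and generated samples and D is the critic; this term is added to the Wasserstein critic loss to softly enforce the 1-Lipschitz constraint and stabilize adversarial training of the 3D generator.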


[37] 2501.06940

Collaborative Human Activity Recognition with Passive Inter-Body Electrostatic Field

The passive body-area electrostatic field has recently been explored with high hopes for wearable motion sensing, harnessing its two appealing characteristics: full-body motion sensitivity and environmental sensitivity, which potentially enable human activity recognition both independently and jointly from a single sensing front-end and theoretically make it a strong competitor to the traditional inertial sensor, which cannot sense environmental variations. While most works focus on exploring the electrostatic field of a single body as the target, this work, for the first time, quantitatively evaluates the mutual effect of inter-body electrostatic fields and its contribution to collaborative activity recognition. A wearable electrostatic field sensing front-end and wrist-worn prototypes are built, and a sixteen-hour, manually annotated dataset is collected, involving an experiment of manipulating objects both independently and collaboratively. A regression model is finally used to recognize the collaborative activities among users. Despite the theoretical advantages of the body electrostatic field, the recognition of both single and collaborative activities shows unexpectedly less competitive performance compared with the accelerometer. However, it is worth mentioning that this novel sensing modality improves the recognition F-score of user collaboration by 16% in the fusion result of the two wearable motion sensing modalities, demonstrating the potential of the body electrostatic field as a complementary power-efficient signal for collaborative activity tracking using wearables.


[38] 2501.06945

OpenGERT: Open Source Automated Geometry Extraction with Geometric and Electromagnetic Sensitivity Analyses for Ray-Tracing Propagation Models

Accurate RF propagation modeling in urban environments is critical for developing digital spectrum twins and optimizing wireless communication systems. We introduce OpenGERT, an open-source automated Geometry Extraction tool for Ray Tracing, which collects and processes terrain and building data from OpenStreetMap, Microsoft Global ML Building Footprints, and USGS elevation data. Using the Blender Python API, it creates detailed urban models for high-fidelity simulations with NVIDIA Sionna RT. We perform sensitivity analyses to examine how variations in building height, position, and electromagnetic material properties affect ray-tracing accuracy. Specifically, we present pairwise dispersion plots of channel statistics (path gain, mean excess delay, delay spread, link outage, and Rician K-factor) and investigate how their sensitivities change with distance from transmitters. We also visualize the variance of these statistics for selected transmitter locations to gain deeper insights. Our study covers Munich and Etoile scenes, each with 10 transmitter locations. For each location, we apply five types of perturbations: material, position, height, height-position, and all combined, with 50 perturbations each. Results show that small changes in permittivity and conductivity minimally affect channel statistics, whereas variations in building height and position significantly alter all statistics, even with noise standard deviations of 1 meter in height and 0.4 meters in position. These findings highlight the importance of precise environmental modeling for accurate propagation predictions, essential for digital spectrum twins and advanced communication networks. The code for geometry extraction and sensitivity analyses is available at github.com/serhatadik/OpenGERT/.


[39] 2501.07005

Global Search for Optimal Low Thrust Spacecraft Trajectories using Diffusion Models and the Indirect Method

Long time-duration low-thrust nonlinear optimal spacecraft trajectory global search is a computationally expensive and time-consuming problem characterized by clustering patterns in locally optimal solutions. During preliminary mission design, mission parameters are subject to frequent changes, necessitating that trajectory designers efficiently generate high-quality control solutions for these new scenarios. Generative machine learning models can be trained to learn how the solution structure varies with respect to a conditional parameter, thereby accelerating the global search for missions with updated parameters. In this work, state-of-the-art diffusion models are integrated with the indirect approach for trajectory optimization within a global search framework. This framework is tested on two low-thrust transfers of different complexity in the circular restricted three-body problem. By generating and analyzing a training data set, we develop mathematical relations and techniques to understand the complex structures in the costate domain of locally optimal solutions for these problems. A diffusion model is trained on this data and successfully accelerates the global search for both problems. The model predicts how the costate solution structure changes, based on the maximum spacecraft thrust magnitude. Warm-starting a numerical solver with diffusion model samples for the costates at the initial time increases the number of solutions generated per minute for problems with unseen thrust magnitudes by one to two orders of magnitude in comparison to samples from a uniform distribution and from an adjoint control transformation.


[40] 2501.07008

Advancing Single-Snapshot DOA Estimation with Siamese Neural Networks for Sparse Linear Arrays

Single-snapshot signal processing in sparse linear arrays has become increasingly vital, particularly in dynamic environments like automotive radar systems, where only limited snapshots are available. These arrays are often utilized either to cut manufacturing costs or result from unintended antenna failures, leading to challenges such as high sidelobe levels and compromised accuracy in direction-of-arrival (DOA) estimation. Despite deep learning's success in tasks such as DOA estimation, the need for extensive training data to increase target numbers or improve angular resolution poses significant challenges. In response, this paper presents a novel Siamese neural network (SNN) featuring a sparse augmentation layer, which enhances signal feature embedding and DOA estimation accuracy in sparse arrays. We demonstrate the enhanced DOA estimation performance of our approach through detailed feature analysis and performance evaluation. The code for this study is available at https://github.com/ruxinzh/SNNS_SLA.


[41] 2501.07016

A Multi-Modal Deep Learning Framework for Pan-Cancer Prognosis

The prognostic task is of great importance as it is closely related to the survival analysis of patients, the optimization of treatment plans, and the allocation of resources. The existing prognostic models have shown promising results on specific datasets, but there are limitations in two aspects. On the one hand, they merely explore certain types of modal data, such as patient histopathology whole-slide images (WSI) and gene expression analysis. On the other hand, they adopt the per-cancer-per-model paradigm, which means the trained models can only predict the prognostic effect of a single type of cancer, resulting in weak generalization ability. In this paper, a deep-learning based model, named UMPSNet, is proposed. Specifically, to comprehensively understand the condition of patients, in addition to constructing encoders for histopathology images and genomic expression profiles respectively, UMPSNet further integrates four types of important meta data (demographic information, cancer type information, treatment protocols, and diagnosis results) into text templates, and then introduces a text encoder to extract textual features. In addition, an optimal transport (OT)-based attention mechanism is utilized to align and fuse features of different modalities. Furthermore, a guided soft mixture of experts (GMoE) mechanism is introduced to effectively address the issue of distribution differences among multiple cancer datasets. By incorporating the multi-modality of patient data and joint training, UMPSNet outperforms all SOTA approaches, and moreover, it demonstrates the effectiveness and generalization ability of the proposed learning paradigm of a single model for multiple cancer types. The code of UMPSNet is available at https://github.com/binging512/UMPSNet.


[42] 2501.07026

IEEE_TIE25: Analysis and Synthesis of DOb-based Robust Motion Controllers

By employing a unified state-space design framework, this paper proposes a novel systematic analysis and synthesis method that facilitates the implementation of both conventional zero-order (ZO) and high-order (HO) disturbance observers (DObs). Furthermore, this design method supports the development of advanced DObs (e.g., the proposed High-Performance (HP) DOb in this paper), enabling more accurate disturbance estimation and, consequently, enhancing the robust stability and performance of motion control systems. The Lyapunov direct method is employed in the discrete-time domain to analyse the stability of the proposed digital robust motion controllers. The analysis demonstrates that the proposed DObs are stable in the sense that the estimation error is uniformly ultimately bounded when subjected to bounded disturbances. Additionally, they are proven to be asymptotically stable under specific disturbance conditions, such as constant disturbances for the ZO and HP DObs. Stability constraints on the design parameters of the DObs are analytically derived, providing effective synthesis tools for the implementation of the digital robust motion controllers. The discrete-time analysis facilitates the derivation of more practical design constraints. The proposed analysis and synthesis methods have been rigorously validated through experimental evaluations, confirming their effectiveness.


[43] 2501.07030

Erasing Noise in Signal Detection with Diffusion Model: From Theory to Application

In this paper, a signal detection method based on the denoising diffusion model (DM) is proposed, which outperforms the maximum likelihood (ML) estimation method that has long been regarded as the optimal signal detection technique. Theoretically, a novel mathematical theory for intelligent signal detection based on stochastic differential equations (SDEs) is established in this paper, demonstrating the effectiveness of the DM in reducing the additive white Gaussian noise in received signals. Moreover, a mathematical relationship between the signal-to-noise ratio (SNR) and the timestep in the DM is established, revealing that for any given SNR, a corresponding optimal timestep can be identified. Furthermore, to address potential issues with out-of-distribution inputs in the DM, we employ a mathematical scaling technique that allows the trained DM to handle signal detection across a wide range of SNRs without any fine-tuning. Building on the above theoretical foundation, we propose a DM-based signal detection method, with the diffusion transformer (DiT) serving as the backbone neural network, whose computational complexity is $\mathcal{O}(n^2)$. Simulation results demonstrate that, for BPSK and QAM modulation schemes, the DM-based method achieves a significantly lower symbol error rate (SER) compared to ML estimation, while maintaining a much lower computational complexity.
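To make the SNR-timestep correspondence concrete, the sketch below shows how a channel SNR could be mapped to a diffusion timestep under a standard variance-preserving DDPM schedule, where the per-step SNR is $\bar{\alpha}_t/(1-\bar{\alpha}_t)$. The linear beta schedule and the nearest-SNR matching rule are generic assumptions, not necessarily the exact mapping derived in the paper.

    # Sketch: mapping a channel SNR to a diffusion timestep under a standard
    # variance-preserving DDPM schedule, x_t = sqrt(abar_t) x_0 + sqrt(1 - abar_t) n,
    # so the per-step SNR is abar_t / (1 - abar_t).  The linear beta schedule and
    # the nearest-SNR rule below are generic assumptions, not the paper's exact mapping.
    import numpy as np

    T = 1000
    betas = np.linspace(1e-4, 0.02, T)              # illustrative linear schedule
    alphas_bar = np.cumprod(1.0 - betas)            # \bar{alpha}_t
    schedule_snr_db = 10 * np.log10(alphas_bar / (1.0 - alphas_bar))

    def snr_to_timestep(snr_db: float) -> int:
        # Pick the timestep whose schedule SNR is closest to the channel SNR.
        return int(np.argmin(np.abs(schedule_snr_db - snr_db)))

    print(snr_to_timestep(20.0), snr_to_timestep(0.0))  # higher SNR maps to an earlier timestep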


[44] 2501.07062

Effective DoF-Oriented Optimal Antenna Spacing in Near-Field XL-MIMO Systems

This letter investigates the optimal antenna spacing for a near-field XL-MIMO communication system from the perspective of the array gain. Specifically, using the Green's function-based channel model, the letter analyzes the channel capacity, which is related to the effective degrees-of-freedom (EDoF). Then, the letter further investigates the applicability of two EDoF estimation methods. To increase EDoF, this letter focuses on analyzing the impact of antenna spacing. Furthermore, from the perspective of the array gain, the letter derives an approximate closed-form expression of the optimal antenna spacing, at which EDoF is maximized and the array gain at the antenna nearest to the focused antenna of the transmit array becomes zero. Finally, numerical results verify the main results of this letter.


[45] 2501.07094

Reducing Latency by Eliminating CSIT Feedback: FDD Downlink MIMO Precoding Without CSIT Feedback for Internet-of-Things Communications

This paper presents a novel framework for low-latency frequency division duplex (FDD) multi-input multi-output (MIMO) transmission for Internet of Things (IoT) communications. Our key idea is to eliminate the feedback associated with acquiring downlink channel state information at the transmitter (CSIT). Instead, we propose to reconstruct the downlink CSIT from uplink reference signals by exploiting the frequency invariance property of channel parameters. Nonetheless, the frequency disparity between the uplink and downlink makes it impossible to obtain perfect downlink CSIT, resulting in substantial interference. To address this, we formulate a max-min fairness problem and propose a rate-splitting multiple access (RSMA)-aided efficient precoding method. In particular, to fully harness the potential benefits of RSMA, we propose a method that approximates the error covariance matrix and incorporates it into the precoder optimization process. This approach effectively accounts for the impact of imperfect CSIT, enabling the design of a robust precoder that efficiently handles CSIT inaccuracies. Simulation results demonstrate that our framework outperforms other baseline methods in terms of the minimum spectral efficiency when no direct CSI feedback is used. Moreover, we show that our framework significantly reduces communication latency compared to conventional CSI feedback-based methods, underscoring its effectiveness in enhancing latency performance for IoT communications.


[46] 2501.07120

MSV-Mamba: A Multiscale Vision Mamba Network for Echocardiography Segmentation

Ultrasound imaging frequently encounters challenges, such as elevated noise levels, diminished spatiotemporal resolution, and the complexity of anatomical structures. These factors significantly hinder a model's ability to accurately capture and analyze structural relationships and dynamic patterns across various regions of the heart. Mamba, an emerging state-space model, is among the most cutting-edge approaches and has been widely applied to diverse vision and language tasks. Building on this, this paper introduces a U-shaped deep learning model incorporating a large-window Mamba scale (LMS) module and a hierarchical feature fusion approach for echocardiographic segmentation. First, a cascaded residual block serves as an encoder and is employed to incrementally extract multiscale detailed features. Second, a large-window multiscale Mamba module is integrated into the decoder to capture global dependencies across regions and enhance the segmentation capability for complex anatomical structures. Furthermore, our model introduces auxiliary losses at each decoder layer and employs a dual attention mechanism to fuse multilayer features both spatially and across channels. This approach enhances segmentation performance and accuracy in delineating complex anatomical structures. Finally, experimental results on the EchoNet-Dynamic and CAMUS datasets demonstrate that the model outperforms other methods in terms of both accuracy and robustness. For the segmentation of the left ventricular endocardium (${LV}_{endo}$), the model achieved optimal values of 95.01 and 93.36 on the two datasets, respectively, while for the left ventricular epicardium (${LV}_{epi}$), values of 87.35 and 87.80, respectively, were achieved. This represents an improvement ranging between 0.54 and 1.11 compared with the best-performing baseline.


[47] 2501.07126

A Federated Deep Learning Framework for Cell-Free RSMA Networks

Next-generation wireless networks are poised to benefit significantly from the integration of three key technologies (KTs): Rate-Splitting Multiple Access (RSMA), cell-free architectures, and federated learning. Each of these technologies offers distinct advantages in terms of security, robustness, and distributed structure. In this paper, we propose a novel cell-free network architecture that incorporates RSMA and employs machine learning techniques within a federated framework. This combination leverages the strengths of each KT, creating a synergistic effect that maximizes the benefits of security, robustness, and distributed structure. We formally formulate the access point (AP) selection and precoder design for max-min rate optimization in a cell-free MIMO RSMA network. Our proposed solution scheme involves a three-block procedure. The first block trains deep reinforcement learning (DRL) neural networks to obtain RSMA precoders, assuming full connectivity between APs and user equipments (UEs). The second block uses these precoders and principal component analysis (PCA) to assign APs to UEs by removing a subset of AP-UE connections. The final block fine-tunes the RSMA precoders by incorporating the associated APs into a second DRL network. To leverage the distributed nature of the cell-free network, this process is implemented in a Federated Deep Reinforcement Learning (FDRL) structure operating through the cooperation of APs and a central processing unit (CPU). Simulation results demonstrate that the proposed FDRL approach performs comparably to a benchmark centralized DRL scheme. Moreover, our FDRL approach provides a balanced trade-off, maintaining high performance with enhanced security and reduced processing demands.


[48] 2501.07127

QoE-oriented Communication Service Provision for Annotation Rendering in Mobile Augmented Reality

As mobile augmented reality (MAR) continues to evolve, future 6G networks will play a pivotal role in supporting immersive and personalized user experiences. In this paper, we address the communication service provision problem for annotation rendering in edge-assisted MAR, with the objective of optimizing spectrum resource utilization while ensuring the required quality of experience (QoE) for MAR users. To overcome the challenges of user-specific uplink data traffic patterns and the complex operational mechanisms of annotation rendering, we propose a digital twin (DT)-based approach. We first design a DT specifically tailored for MAR applications to learn key annotation rendering mechanisms, enabling the network controller to access MAR application-specific information. Then, we develop a DT-based QoE modeling approach to capture the unique relationship between individual user QoE and spectrum resource demands. Finally, we propose a QoE-oriented resource allocation algorithm that decreases resource utilization compared to conventional network slicing-based approaches. Simulation results demonstrate that our DT-based approach outperforms benchmark approaches in the accuracy and granularity of QoE modeling.


[49] 2501.07187

Real-time Mode-Aware Dataflow: A Dataflow Model to Specify and Analyze Mode-dependent CPSs under Relaxed Timing Constraints

Modern Cyber-Physical Systems (CPS) often exhibit both relaxed real-time constraints and a mode-dependent execution. Relaxed real-time constraints mean that only a subset of the processes of a CPS have real-time constraints, and a mode-dependent CPS has conditional execution branches. Static analysis tools, such as the PolyGraph model (a formalism extending the Cyclo-Static Dataflow model with real-time constraints), can specify and analyze systems with relaxed real-time constraints. However, PolyGraph is limited in its ability to specify and analyze mode-dependent CPSs. This paper extends PolyGraph with routing actors, yielding the Routed PolyGraph model. This model is further extended to the Real-time Mode-Aware Dataflow (RMDF), which both leverages routing actors and incorporates a new dataflow actor to specify mode-dependent CPSs under relaxed real-time constraints. This paper also extends the static analyses of PolyGraph to RMDF. We showcase the application of RMDF with a specification and an analysis (derivation of timing constraints at the job-level and a feasibility test) of the vision processing system of the Ingenuity Mars helicopter.


[50] 2501.07191

Pre-Trained Large Language Model Based Remaining Useful Life Transfer Prediction of Bearing

Accurately predicting the remaining useful life (RUL) of rotating machinery, such as bearings, is essential for ensuring equipment reliability and minimizing unexpected industrial failures. Traditional data-driven deep learning methods face challenges in practical settings due to inconsistent training and testing data distributions and limited generalization for long-term predictions.


[51] 2501.07197

Lung Cancer detection using Deep Learning

In this paper, we discuss lung cancer detection using a hybrid model of convolutional neural networks (CNNs) and support vector machines (SVMs) aimed at the early detection of benign or malignant tumors. The hybrid model is trained on a dataset of computed tomography (CT) scans. Using deep learning to detect lung cancer early is a cutting-edge approach.


[52] 2501.07215

Microphone Array Signal Processing and Deep Learning for Speech Enhancement

Multi-channel acoustic signal processing is a well-established and powerful tool to exploit the spatial diversity between a target signal and non-target or noise sources for signal enhancement. However, the textbook solutions for optimal data-dependent spatial filtering rest on the knowledge of second-order statistical moments of the signals, which have traditionally been difficult to acquire. In this contribution, we compare model-based, purely data-driven, and hybrid approaches to parameter estimation and filtering, where the latter tries to combine the benefits of model-based signal processing and data-driven deep learning to overcome their individual deficiencies. We illustrate the underlying design principles with examples from noise reduction, source separation, and dereverberation.
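As one textbook example of the data-dependent spatial filters mentioned above, which rest on second-order statistics, the sketch below computes MVDR beamformer weights from a noise covariance matrix and a steering vector. It is purely illustrative and does not reproduce the model-based, data-driven, or hybrid estimators compared in the paper.

    # One textbook example of the data-dependent spatial filters mentioned above:
    # the MVDR beamformer, which needs second-order statistics (a noise covariance R)
    # and a steering/relative transfer vector d.  Purely illustrative; the paper's
    # model-based, data-driven, and hybrid estimators are not reproduced here.
    import numpy as np

    def mvdr_weights(R: np.ndarray, d: np.ndarray) -> np.ndarray:
        # w = R^{-1} d / (d^H R^{-1} d): distortionless toward d, minimum output power.
        Rinv_d = np.linalg.solve(R, d)
        return Rinv_d / (d.conj() @ Rinv_d)

    # Toy usage: 4 microphones, a Hermitian positive definite noise covariance,
    # and an example steering vector d.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    R = A @ A.conj().T + 4 * np.eye(4)
    d = np.exp(-1j * np.pi * np.arange(4) * 0.3)
    w = mvdr_weights(R, d)
    print(abs(w.conj() @ d))   # ~1, i.e. the distortionless constraint holds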


[53] 2501.07247

Interpretable machine-learning for predicting molecular weight of PLA based on artificial bee colony optimization algorithm and adaptive neurofuzzy inference system

This article discusses the integration of the Artificial Bee Colony (ABC) algorithm with two supervised learning methods, namely Artificial Neural Networks (ANNs) and the Adaptive Network-based Fuzzy Inference System (ANFIS), for feature selection from Near-Infrared (NIR) spectra for predicting the molecular weight of medical-grade Polylactic Acid (PLA). During extrusion processing of PLA, in-line NIR spectra were captured along with extrusion process and machine setting data. With a dataset comprising 63 observations and 512 input features, appropriate machine learning tools are essential for interpreting the data and selecting features to improve prediction accuracy. Initially, the ABC optimization algorithm is coupled with ANN/ANFIS to forecast PLA molecular weight. The objective functions of the ABC algorithm are to minimize the root mean square error (RMSE) between experimental and predicted PLA molecular weights while also minimizing the number of input features. Results indicate that employing ABC-ANFIS yields the lowest RMSE of 282 Da and identifies four significant parameters (NIR wavenumbers 6158 cm^-1, 6310 cm^-1, 6349 cm^-1, and melt temperature) for prediction. These findings demonstrate the effectiveness of using the ABC algorithm with ANFIS for selecting a minimal set of features to predict PLA molecular weight with high accuracy during processing.
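A rough sketch of the two-term selection objective described above (prediction RMSE plus a penalty on the number of selected features) follows. A random-subset search stands in for the ABC optimizer and ridge regression stands in for ANFIS; these substitutions, the penalty weight, and the synthetic data are assumptions made purely for illustration.

    # Sketch of the two-term selection objective: cross-validated RMSE plus a penalty
    # on the number of selected features.  Random-subset search replaces the ABC
    # optimizer and ridge regression replaces ANFIS (both placeholder assumptions).
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.standard_normal((63, 512))                 # stand-in for 63 spectra x 512 features
    y = X[:, [10, 200, 300]] @ np.array([3.0, -2.0, 1.5]) + 0.1 * rng.standard_normal(63)

    def fitness(mask, lam=0.05):
        # Lower is better: prediction RMSE plus lam * (number of selected features).
        idx = np.flatnonzero(mask)
        if idx.size == 0:
            return np.inf
        rmse = -cross_val_score(Ridge(), X[:, idx], y, cv=3,
                                scoring="neg_root_mean_squared_error").mean()
        return rmse + lam * idx.size

    # Random-subset search as a placeholder for the ABC optimizer.
    candidates = [rng.random(512) < 0.01 for _ in range(200)]
    best = min(candidates, key=fitness)
    print(np.flatnonzero(best), round(fitness(best), 3))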


[54] 2501.07248

Implicit Neural Representations for Registration of Left Ventricle Myocardium During a Cardiac Cycle

Understanding the movement of the left ventricle myocardium (LVmyo) during the cardiac cycle is essential for assessing cardiac function. One way to model this movement is through a series of deformable image registrations (DIRs) of the LVmyo. Traditional deep learning methods for DIRs, such as those based on convolutional neural networks, often require substantial memory and computational resources. In contrast, implicit neural representations (INRs) offer an efficient approach by operating on any number of continuous points. This study extends the use of INRs for DIR to cardiac computed tomography (CT), focusing on LVmyo registration. To enhance the precision of the registration around the LVmyo, we incorporate the signed distance field of the LVmyo with the Hounsfield Unit values from the CT frames. This guides the registration of the LVmyo, while keeping the tissue information from the CT frames. Our framework demonstrates high registration accuracy and provides a robust method for temporal registration that facilitates further analysis of LVmyo motion.


[55] 2501.07270

Dual-Function Beamforming Design For Multi-Target Localization and Reliable Communications

This paper investigates the transmit beamforming design for multiple-input multiple-output systems to support both multi-target localization and multi-user communications. To enhance the target localization performance, we derive the asymptotic Cramér-Rao bound (CRB) for target angle estimation by assuming that the receive array is linear and uniform. Then we formulate a beamforming design problem based on minimizing an upper bound on the asymptotic CRB (which is shown to be equivalent to maximizing the harmonic mean of the weighted beampattern responses at the target directions). Moreover, we impose a constraint on the SINR of each received communication signal to guarantee reliable communication performance. Two iterative algorithms are derived to tackle the non-convex design problem: one is based on the alternating direction method of multipliers, and the other uses the majorization-minimization technique to solve an equivalent minimax problem. Numerical results show that, through elaborate dual-function beamforming matrix design, the proposed algorithms can simultaneously achieve superior angle estimation performance as well as high-quality multi-user communications.
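To illustrate the quantity that the design objective above is built on, the sketch below evaluates the beampattern responses of a transmit covariance at several target directions and their harmonic mean. The half-wavelength uniform linear array and the example precoder are assumptions for illustration; the CRB derivation and the two optimization algorithms are not reproduced.

    # Beampattern responses of a transmit covariance at the target directions and
    # their harmonic mean (the surrogate objective referenced above).  Array geometry
    # and the example precoder are illustrative assumptions.
    import numpy as np

    def steering(n: int, theta_deg: float) -> np.ndarray:
        # Half-wavelength ULA steering vector a(theta).
        return np.exp(1j * np.pi * np.arange(n) * np.sin(np.deg2rad(theta_deg)))

    def beampattern_responses(W: np.ndarray, thetas_deg) -> np.ndarray:
        # p_k = a(theta_k)^H W a(theta_k) for each target direction.
        n = W.shape[0]
        return np.array([np.real(steering(n, t).conj() @ W @ steering(n, t)) for t in thetas_deg])

    n_tx, targets = 8, [-20.0, 10.0, 35.0]
    rng = np.random.default_rng(0)
    F = np.linalg.qr(rng.standard_normal((n_tx, 3)) + 1j * rng.standard_normal((n_tx, 3)))[0]
    W = F @ F.conj().T                                   # example transmit covariance
    p = beampattern_responses(W, targets)
    harmonic_mean = len(p) / np.sum(1.0 / p)
    print(p, harmonic_mean)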


[56] 2501.07273

An Extended Survey and a Comparison Framework for Dataflow Models of Computation and Communication

Dataflow Models of Computation and Communication (DF MoCCs) are formalisms used to specify the behavior of Cyber-Physical Systems (CPSs). DF MoCCs are widely used in the design of CPSs, as they provide a high level of abstraction for specifying a system's behavior. DF MoCC rules give semantics to a dataflow specification of a CPS, and static analysis algorithms rely on these semantics to guarantee safety properties of the dataflow specification, such as bounded memory usage and deadlock freeness. A wide range of DF MoCCs exists, each with its own characteristics and static analyses. This paper presents a survey of these DF MoCCs and a classification into eight categories. In addition, DF MoCCs are characterized by a comprehensive list of features and static analyses, which reflect their expressiveness and analyzability. Based on this characterization, a framework is proposed to compare the expressiveness and the analyzability of DF MoCCs quantitatively.


[57] 2501.07333

Synesthesia of Machines Based Multi-Modal Intelligent V2V Channel Model

This paper proposes a novel sixth-generation (6G) multi-modal intelligent vehicle-to-vehicle (V2V) channel model from light detection and ranging (LiDAR) point clouds based on Synesthesia of Machines (SoM). To explore the mapping relationship between physical environment and electromagnetic space, a new V2V high-fidelity mixed sensing-communication integration simulation dataset with different vehicular traffic densities (VTDs) is constructed. Based on the constructed dataset, a novel scatterer recognition (ScaR) algorithm utilizing neural network SegNet is developed to recognize scatterer spatial attributes from LiDAR point clouds via SoM. In the developed ScaR algorithm, the mapping relationship between LiDAR point clouds and scatterers is explored, where the distribution of scatterers is obtained in the form of grid maps. Furthermore, scatterers are distinguished into dynamic and static scatterers based on LiDAR point cloud features, where parameters, e.g., distance, angle, and number, related to scatterers are determined. Through ScaR, dynamic and static scatterers change with the variation of LiDAR point clouds over time, which precisely models channel non-stationarity and consistency under different VTDs. Some important channel statistical properties, such as time-frequency correlation function (TF-CF) and Doppler power spectral density (DPSD), are obtained. Simulation results match well with ray-tracing (RT)-based results, thus demonstrating the necessity of exploring the mapping relationship and the utility of the proposed model.


[58] 2501.07376

Bigger Isn't Always Better: Towards a General Prior for Medical Image Reconstruction

Diffusion models have been successfully applied to many inverse problems, including MRI and CT reconstruction. Researchers typically re-purpose models originally designed for unconditional sampling without modifications. Using two different posterior sampling algorithms, we show empirically that such large networks are not necessary. Our smallest model, effectively a ResNet, performs almost as well as an attention U-Net on in-distribution reconstruction, while being significantly more robust towards distribution shifts. Furthermore, we introduce models trained on natural images and demonstrate that they can be used in both MRI and CT reconstruction, outperforming models trained on medical images in out-of-distribution cases. As a result of our findings, we strongly caution against simply re-using very large networks and encourage researchers to adapt the model complexity to the respective task. Moreover, we argue that a key step towards a general diffusion-based prior is training on natural images.


[59] 2501.07459

SynthSoM: A synthetic intelligent multi-modal sensing-communication dataset for Synesthesia of Machines (SoM)

Given the importance of datasets for sensing-communication integration research, a novel simulation platform for constructing communication and multi-modal sensory datasets is developed. The developed platform integrates three high-precision software tools, i.e., AirSim, WaveFarer, and Wireless InSite, and further achieves in-depth integration and precise alignment among them. Based on the developed platform, a new synthetic intelligent multi-modal sensing-communication dataset for Synesthesia of Machines (SoM), named SynthSoM, is proposed. The SynthSoM dataset contains various air-ground multi-link cooperative scenarios with comprehensive conditions, including multiple weather conditions, times of the day, intelligent agent densities, frequency bands, and antenna types. The SynthSoM dataset encompasses multiple data modalities, including radio-frequency (RF) channel large-scale and small-scale fading data, RF millimeter wave (mmWave) radar sensory data, and non-RF sensory data, e.g., RGB images, depth maps, and light detection and ranging (LiDAR) point clouds. The quality of the SynthSoM dataset is validated via statistics-based qualitative inspection and via machine learning (ML)-based evaluation metrics against real-world measurements. The SynthSoM dataset is open-sourced and provides consistent data for cross-comparing SoM-related algorithms.


[60] 2501.07498

Computing Safety Margins of Parameterized Nonlinear Systems for Vulnerability Assessment via Trajectory Sensitivities

Physical systems experience nonlinear disturbances which have the potential to disrupt desired behavior. For a particular disturbance, whether or not the system recovers from the disturbance to a desired stable equilibrium point depends on system parameter values, which are typically uncertain and time-varying. Therefore, to quantify proximity to vulnerability we define the safety margin to be the smallest change in parameter values from a nominal value such that the system will no longer be able to recover from the disturbance. Safety margins are valuable but challenging to compute as related methods, such as those for robust region of attraction estimation, are often either overly conservative or computationally intractable for high dimensional systems. Recently, we developed algorithms to compute safety margins efficiently and non-conservatively by exploiting the large sensitivity of the system trajectory near the region of attraction boundary to small perturbations. Although these algorithms have enjoyed empirical success, they lack theoretical guarantees that would ensure their generalizability. This work develops a novel characterization of safety margins in terms of trajectory sensitivities, and uses this to derive well-posedness and convergence guarantees for these algorithms, enabling their generalizability and successful application to a large class of nonlinear systems.


[61] 2501.07516

Determining Disturbance Recovery Conditions by Inverse Sensitivity Minimization

Power systems naturally experience disturbances, some of which can damage equipment and disrupt consumers. It is important to quickly assess the likely consequences of credible disturbances and take preventive action, if necessary. However, assessing the impact of potential disturbances is challenging because many of the influential factors, such as loading patterns, controller settings and load dynamics, are not precisely known. To address this issue, the paper introduces the concept of parameter-space recovery regions. For each disturbance, the corresponding recovery region is the region of parameter space for which the system will recover to the desired operating point. The boundary of the recovery region establishes the separation between parameter values that result in trouble-free recovery and those that incur undesirable non-recovery. The safety margin for a given set of parameter values is defined as the smallest distance (in parameter space) between the given values and the recovery boundary. Novel numerical algorithms with theoretical guarantees are presented for efficiently computing recovery boundaries and safety margins. Unlike prior methods, which tend to be overly conservative and restricted to low dimensional parameter space, these methods compute safety margins to arbitrary user-specified accuracy and do so efficiently in high dimensional parameter space. The efficacy of the methods is demonstrated using the IEEE 39-bus benchmark power system, where safety margins are computed for cases that consider up to 86 parameters, and reveal unexpected safety implications that would not have been observed otherwise.


[62] 2501.07524

Completing Sets of Prototype Transfer Functions for Subspace-based Direction of Arrival Estimation of Multiple Speakers

To estimate the direction of arrival (DOA) of multiple speakers, subspace-based prototype transfer function matching methods such as multiple signal classification (MUSIC) or relative transfer function (RTF) vector matching are commonly employed. In general, these methods require calibrated microphone arrays, which are characterized by a known array geometry or a set of known prototype transfer functions for several directions. In this paper, we consider a partially calibrated microphone array, composed of a calibrated binaural hearing aid and a (non-calibrated) external microphone at an unknown location with no available set of prototype transfer functions. We propose a procedure for completing sets of prototype transfer functions by exploiting the orthogonality of subspaces, allowing matching-based DOA estimation methods to be applied with partially calibrated microphone arrays. For the MUSIC and RTF vector matching methods, experimental results for two speakers in noisy and reverberant environments clearly demonstrate that, for all locations of the external microphone, DOAs can be estimated more accurately with completed sets of prototype transfer functions than with incomplete sets.
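For readers less familiar with the subspace idea behind MUSIC, the sketch below computes the classic pseudospectrum using a known half-wavelength ULA steering model in place of measured prototype transfer functions. The partially calibrated setup and the set-completion procedure from the paper are not reproduced here; this is only a generic illustration.

    # Minimal MUSIC pseudospectrum sketch with a known ULA steering model (generic
    # illustration; not the partially calibrated setup or set completion from the paper).
    import numpy as np

    def music_spectrum(X: np.ndarray, n_src: int, grid_deg: np.ndarray) -> np.ndarray:
        # X: (n_mics, n_snapshots) data matrix; returns the MUSIC pseudospectrum.
        n_mics = X.shape[0]
        R = X @ X.conj().T / X.shape[1]                  # sample covariance
        _, vecs = np.linalg.eigh(R)                      # eigenvalues in ascending order
        En = vecs[:, : n_mics - n_src]                   # noise subspace
        spec = []
        for theta in grid_deg:
            a = np.exp(1j * np.pi * np.arange(n_mics) * np.sin(np.deg2rad(theta)))
            spec.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
        return np.array(spec)

    # Toy usage: two sources at -10 and 25 degrees, 8-element array, 200 snapshots.
    rng = np.random.default_rng(1)
    angles, n_mics, n_snap = [-10.0, 25.0], 8, 200
    A = np.stack([np.exp(1j * np.pi * np.arange(n_mics) * np.sin(np.deg2rad(a))) for a in angles], axis=1)
    S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
    N = 0.1 * (rng.standard_normal((n_mics, n_snap)) + 1j * rng.standard_normal((n_mics, n_snap)))
    grid = np.linspace(-90.0, 90.0, 361)
    spec = music_spectrum(A @ S + N, n_src=2, grid_deg=grid)
    print(grid[int(np.argmax(spec))])                    # pseudospectrum peaks near -10 and 25 degrees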


[63] 2501.06215

Fitting Different Interactive Information: Joint Classification of Emotion and Intention

This paper presents the first-place solution for ICASSP MEIJU@2025 Track I, which focuses on low-resource multimodal emotion and intention recognition. Two points are key to this competition: effectively utilizing a large amount of unlabeled data, and ensuring that tasks of different difficulty levels promote each other during the interaction stage. In this paper, pseudo-labeling is carried out with a model trained on the labeled data, and samples with high confidence and their labels are selected to alleviate the low-resource problem. At the same time, the experimental observation that intention recognition is comparatively easy to represent is exploited so that intention and emotion recognition mutually promote each other under different attention heads, and higher intention recognition performance is achieved through fusion. Finally, with the refined data processing, we achieve a score of 0.5532 on the test set and win the championship of the track.


[64] 2501.06229

Open-Source Manually Annotated Vocal Tract Database for Automatic Segmentation from 3D MRI Using Deep Learning: Benchmarking 2D and 3D Convolutional and Transformer Networks

Accurate segmentation of the vocal tract from magnetic resonance imaging (MRI) data is essential for various voice and speech applications. Manual segmentation is time intensive and susceptible to errors. This study aimed to evaluate the efficacy of deep learning algorithms for automatic vocal tract segmentation from 3D MRI.


[65] 2501.06230

BEN: Using Confidence-Guided Matting for Dichotomous Image Segmentation

Current approaches to dichotomous image segmentation (DIS) treat image matting and object segmentation as fundamentally different tasks. As improvements in image segmentation become increasingly challenging to achieve, combining image matting and grayscale segmentation techniques offers promising new directions for architectural innovation. Inspired by the possibility of aligning these two model tasks, we propose a new architectural approach for DIS called Confidence-Guided Matting (CGM). We created the first CGM model, called Background Erase Network (BEN). BEN comprises two components: BEN Base for initial segmentation and BEN Refiner for confidence refinement. Our approach achieves substantial improvements over current state-of-the-art methods on the DIS5K validation dataset, demonstrating that matting-based refinement can significantly enhance segmentation quality. This work opens new possibilities for cross-pollination between matting and segmentation techniques in computer vision.


[66] 2501.06262

Towards smart and adaptive agents for active sensing on edge devices

TinyML has made deploying deep learning models on low-power edge devices feasible, creating new opportunities for real-time perception in constrained environments. However, the adaptability of such deep learning methods remains limited to data drift adaptation, lacking broader capabilities that account for the environment's underlying dynamics and inherent uncertainty. Deep learning's scaling laws, which counterbalance this limitation by massively up-scaling data and model size, cannot be applied when deploying on the Edge, where deep learning limitations are further amplified as models are scaled down for deployment on resource-constrained devices. This paper presents a smart agentic system capable of performing on-device perception and planning, enabling active sensing on the edge. By incorporating active inference into our solution, our approach extends beyond deep learning capabilities, allowing the system to plan in dynamic environments while operating in real time with a modest total model size of 2.3 MB. We showcase our proposed system by creating and deploying a saccade agent connected to an IoT camera with pan and tilt capabilities on an NVIDIA Jetson embedded device. The saccade agent controls the camera's field of view following optimal policies derived from the active inference principles, simulating human-like saccadic motion for surveillance and robotics applications.


[67] 2501.06276

PROEMO: Prompt-Driven Text-to-Speech Synthesis Based on Emotion and Intensity Control

Speech synthesis has significantly advanced from statistical methods to deep neural network architectures, leading to various text-to-speech (TTS) models that closely mimic human speech patterns. However, capturing nuances such as emotion and style in speech synthesis is challenging. To address this challenge, we introduce an approach centered on prompt-based emotion control. The proposed architecture incorporates emotion and intensity control across multiple speakers. Furthermore, we leverage large language models (LLMs) to manipulate speech prosody while preserving linguistic content. By embedding emotional cues, regulating intensity levels, and guiding prosodic variations with prompts, our approach infuses synthesized speech with human-like expressiveness and variability. Lastly, we demonstrate the effectiveness of our approach through a systematic exploration of the control mechanisms mentioned above.


[68] 2501.06282

MinMo: A Multimodal Large Language Model for Seamless Voice Interaction

Recent advancements in large language models (LLMs) and multimodal speech-text models have laid the groundwork for seamless voice interactions, enabling real-time, natural, and human-like conversations. Previous models for voice interactions are categorized as native and aligned. Native models integrate speech and text processing in one framework but struggle with issues like differing sequence lengths and insufficient pre-training. Aligned models maintain text LLM capabilities but are often limited by small datasets and a narrow focus on speech tasks. In this work, we introduce MinMo, a Multimodal Large Language Model with approximately 8B parameters for seamless voice interaction. We address the main limitations of prior aligned multimodal models. We train MinMo through multiple stages of speech-to-text alignment, text-to-speech alignment, speech-to-speech alignment, and duplex interaction alignment, on 1.4 million hours of diverse speech data and a broad range of speech tasks. After the multi-stage training, MinMo achieves state-of-the-art performance across various benchmarks for voice comprehension and generation while maintaining the capabilities of text LLMs, and also facilitates full-duplex conversation, that is, simultaneous two-way communication between the user and the system. Moreover, we propose a novel and simple voice decoder that outperforms prior models in voice generation. The enhanced instruction-following capabilities of MinMo support controlling speech generation based on user instructions, with various nuances including emotions, dialects, and speaking rates, as well as mimicking specific voices. For MinMo, the speech-to-text latency is approximately 100ms, and the full-duplex latency is approximately 600ms in theory and 800ms in practice. The MinMo project web page is https://funaudiollm.github.io/minmo, and the code and models will be released soon.


[69] 2501.06306

On How Traffic Signals Impact the Fundamental Diagrams of Urban Roads

Being widely adopted by transportation and planning practitioners, the fundamental diagram (FD) is the primary tool used to relate the key macroscopic traffic variables of speed, flow, and density. We empirically analyze the relation between vehicular space-mean speeds and flows under different signal settings and postulate a parsimonious parametric functional form of the traditional FD whose parameters are explicitly modeled as functions of the signal plan factors. We validate the proposed formulation using data from signalized urban road segments in Salt Lake City, Utah, USA. The proposed formulation builds our understanding of how changes to signal settings impact the FDs, and more generally the congestion patterns, of signalized urban segments.
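For readers unfamiliar with fundamental diagrams, the sketch below uses a generic triangular flow-density FD whose capacity is scaled by the signal green ratio. This is only an illustrative stand-in under assumed parameter values, not the parsimonious parametric form postulated in the paper.

    # Illustrative only: a triangular flow-density fundamental diagram whose capacity
    # is scaled by the signal green ratio g/C.  Not the paper's parametric form.
    import numpy as np

    def triangular_fd_flow(density, free_speed=50.0, jam_density=150.0,
                           capacity=1800.0, green_ratio=1.0):
        # Flow (veh/h/lane) as a function of density (veh/km/lane).
        cap = capacity * green_ratio                     # effective capacity under the signal
        k_crit = cap / free_speed                        # critical density
        w = cap / (jam_density - k_crit)                 # congested-branch wave speed
        density = np.asarray(density, dtype=float)
        return np.where(density <= k_crit,
                        free_speed * density,                          # uncongested branch
                        np.maximum(w * (jam_density - density), 0.0))  # congested branch

    k = np.linspace(0.0, 150.0, 151)
    print(triangular_fd_flow(k, green_ratio=1.0).max(),   # 1800: no signal restriction
          triangular_fd_flow(k, green_ratio=0.5).max())   # 900: half the cycle is green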


[70] 2501.06326

On Creating A Brain-To-Text Decoder

Brain decoding has emerged as a rapidly advancing and extensively utilized technique within neuroscience. This paper centers on the application of raw electroencephalogram (EEG) signals for decoding human brain activity, offering a more expedited and efficient methodology for enhancing our understanding of the human brain. The investigation specifically scrutinizes the efficacy of brain-computer interfaces (BCI) in deciphering neural signals associated with speech production, with particular emphasis on the impact of vocabulary size, electrode density, and training data on the framework's performance. The study reveals the competitive word error rates (WERs) achievable on the Librispeech benchmark through pre-training on unlabelled data for speech processing. Furthermore, the study evaluates the efficacy of voice recognition under configurations with limited labeled data, surpassing previous state-of-the-art techniques while utilizing significantly fewer labels. Additionally, the research provides a comprehensive analysis of error patterns in voice recognition and the influence of model size and unlabelled training data. It underscores the significance of factors such as vocabulary size and electrode density in enhancing BCI performance, advocating for an increase in microelectrodes and refinement of language models.


[71] 2501.06334

Over-the-Air FEEL with Integrated Sensing: Joint Scheduling and Beamforming Design

Employing wireless systems with dual sensing and communications functionalities is becoming critical in next generation of wireless networks. In this paper, we propose a robust design for over-the-air federated edge learning (OTA-FEEL) that leverages sensing capabilities at the parameter server (PS) to mitigate the impact of target echoes on the analog model aggregation. We first derive novel expressions for the Cramer-Rao bound of the target response and mean squared error (MSE) of the estimated global model to measure radar sensing and model aggregation quality, respectively. Then, we develop a joint scheduling and beamforming framework that optimizes the OTA-FEEL performance while keeping the sensing and communication quality, determined respectively in terms of Cramer-Rao bound and achievable downlink rate, in a desired range. The resulting scheduling problem reduces to a combinatorial mixed-integer nonlinear programming problem (MINLP). We develop a low-complexity hierarchical method based on the matching pursuit algorithm used widely for sparse recovery in the literature of compressed sensing. The proposed algorithm uses a step-wise strategy to omit the least effective devices in each iteration based on a metric that captures both the aggregation and sensing quality of the system. It further invokes an alternating optimization scheme to iteratively update the downlink beamforming and uplink post-processing by marginally optimizing them in each iteration. Convergence and complexity analysis of the proposed algorithm is presented. Numerical evaluations on MNIST and CIFAR-10 datasets demonstrate the effectiveness of our proposed algorithm. The results show that by leveraging accurate sensing, the target echoes on the uplink signal can be effectively suppressed, ensuring the quality of model aggregation to remain intact despite the interference.


[72] 2501.06336

MEt3R: Measuring Multi-View Consistency in Generated Images

We introduce MEt3R, a metric for multi-view consistency in generated images. Large-scale generative models for multi-view image generation are rapidly advancing the field of 3D inference from sparse observations. However, due to the nature of generative modeling, traditional reconstruction metrics are not suitable to measure the quality of generated outputs and metrics that are independent of the sampling procedure are desperately needed. In this work, we specifically address the aspect of consistency between generated multi-view images, which can be evaluated independently of the specific scene. Our approach uses DUSt3R to obtain dense 3D reconstructions from image pairs in a feed-forward manner, which are used to warp image contents from one view into the other. Then, feature maps of these images are compared to obtain a similarity score that is invariant to view-dependent effects. Using MEt3R, we evaluate the consistency of a large set of previous methods for novel view and video generation, including our open, multi-view latent diffusion model.


[73] 2501.06353

Event Constrained Programming

In this paper, we present event constraints as a new modeling paradigm that generalizes joint chance constraints from stochastic optimization to (1) enforce a constraint on the probability of satisfying a set of constraints aggregated via application-specific logic (constituting an event) and (2) to be applied to general infinite-dimensional optimization (InfiniteOpt) problems (i.e., time, space, and/or uncertainty domains). This new constraint class offers significant modeling flexibility in posing InfiniteOpt constraints that are enforced over a certain portion of their domain (e.g., to a certain probability level), but can be challenging to reformulate/solve due to difficulties in representing arbitrary logical conditions and specifying a probabilistic measure on a collection of constraints. To address these challenges, we derive a generalized disjunctive programming (GDP) representation of event constrained optimization problems, which readily enables us to pose logical event conditions in a standard form and allows us to draw from a suite of GDP solution strategies that leverage the special structure of this problem class. We also extend several approximation techniques from the chance constraint literature to provide a means to reformulate certain event constraints without the use of binary variables. We illustrate these findings with case studies in stochastic optimal power flow, dynamic disease control, and optimal 2D diffusion.


[74] 2501.06394

Unispeaker: A Unified Approach for Multimodality-driven Speaker Generation

Recent advancements in personalized speech generation have brought synthetic speech increasingly close to the realism of target speakers' recordings, yet multimodality-driven speaker generation remains an emerging area. This paper introduces UniSpeaker, a unified approach for multimodality-driven speaker generation. Specifically, we propose a unified voice aggregator based on KV-Former, applying a soft contrastive loss to map diverse voice description modalities into a shared voice space, ensuring that the generated voice aligns more closely with the input descriptions. To evaluate multimodality-driven voice control, we build the first multimodality-based voice control (MVC) benchmark, focusing on voice suitability, voice diversity, and speech quality. UniSpeaker is evaluated across five tasks using the MVC benchmark, and the experimental results demonstrate that UniSpeaker outperforms previous modality-specific models. Speech samples are available at https://UniSpeaker.github.io.


[75] 2501.06440

UCloudNet: A Residual U-Net with Deep Supervision for Cloud Image Segmentation

Recent advancements in meteorology involve the use of ground-based sky cameras for cloud observation. Analyzing images from these cameras helps in calculating cloud coverage and understanding atmospheric phenomena. Traditionally, cloud image segmentation relied on conventional computer vision techniques. However, with the advent of deep learning, convolutional neural networks (CNNs) are increasingly applied for this purpose. Despite their effectiveness, CNNs often require many epochs to converge, posing challenges for real-time processing in sky camera systems. In this paper, we introduce a residual U-Net with deep supervision for cloud segmentation, which provides better accuracy than previous approaches with lower training cost. By utilizing residual connections in the encoders of UCloudNet, the feature extraction ability is further improved.


[76] 2501.06488

NVS-SQA: Exploring Self-Supervised Quality Representation Learning for Neurally Synthesized Scenes without References

Neural View Synthesis (NVS), such as NeRF and 3D Gaussian Splatting, effectively creates photorealistic scenes from sparse viewpoints, typically evaluated by quality assessment methods like PSNR, SSIM, and LPIPS. However, these full-reference methods, which compare synthesized views to reference views, may not fully capture the perceptual quality of neurally synthesized scenes (NSS), particularly due to the limited availability of dense reference views. Furthermore, the challenges in acquiring human perceptual labels hinder the creation of extensive labeled datasets, risking model overfitting and reduced generalizability. To address these issues, we propose NVS-SQA, an NSS quality assessment method to learn no-reference quality representations through self-supervision without reliance on human labels. Traditional self-supervised learning predominantly relies on the "same instance, similar representation" assumption and extensive datasets. However, given that these conditions do not apply in NSS quality assessment, we employ heuristic cues and quality scores as learning objectives, along with a specialized contrastive pair preparation process to improve the effectiveness and efficiency of learning. The results show that NVS-SQA outperforms 17 no-reference methods by a large margin (i.e., on average 109.5% in SRCC, 98.6% in PLCC, and 91.5% in KRCC over the second best) and even exceeds 16 full-reference methods across all evaluation metrics (i.e., 22.9% in SRCC, 19.1% in PLCC, and 18.6% in KRCC over the second best).


[77] 2501.06491

Improving Requirements Classification with SMOTE-Tomek Preprocessing

This study focuses on the domain of requirements engineering, applying the SMOTE-Tomek preprocessing technique, combined with stratified K-fold cross-validation, to address class imbalance in the PROMISE dataset. This dataset comprises 969 categorized requirements, classified into functional and non-functional types. The proposed approach enhances the representation of minority classes while maintaining the integrity of the validation folds, leading to a notable improvement in classification accuracy. Logistic regression achieved 76.16%, significantly surpassing the baseline of 58.31%. These results highlight the applicability and efficiency of machine learning models as scalable and interpretable solutions.
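The preprocessing pattern described above can be illustrated with a short sketch: SMOTE-Tomek resampling applied only inside each training fold of a stratified K-fold split, with logistic regression as the classifier. The synthetic stand-in dataset and all hyperparameters below are placeholder assumptions; the paper's feature extraction for the PROMISE requirement texts is not reproduced.

    # SMOTE-Tomek inside stratified K-fold cross-validation with logistic regression
    # (placeholder data and hyperparameters; illustrative of the pattern only).
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from imblearn.combine import SMOTETomek
    from imblearn.pipeline import Pipeline

    X, y = make_classification(n_samples=969, n_features=50, weights=[0.75, 0.25],
                               random_state=0)

    # Wrapping the sampler in an imblearn Pipeline guarantees it only ever sees the
    # training portion of each fold, keeping the validation folds untouched.
    clf = Pipeline([("resample", SMOTETomek(random_state=0)),
                    ("model", LogisticRegression(max_iter=1000))])

    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    print(cross_val_score(clf, X, y, cv=cv, scoring="accuracy").mean())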


[78] 2501.06514

Neural Codec Source Tracing: Toward Comprehensive Attribution in Open-Set Condition

Current research in audio deepfake detection is gradually transitioning from binary classification to multi-class tasks, referred to as the audio deepfake source tracing task. However, existing studies on source tracing address only closed-set scenarios and have not considered the challenges posed by open-set conditions. In this paper, we define the Neural Codec Source Tracing (NCST) task, which is capable of performing open-set neural codec classification and interpretable ALM detection. Specifically, we constructed the ST-Codecfake dataset for the NCST task, which includes bilingual audio samples generated by 11 state-of-the-art neural codec methods and ALM-based out-of-distribution (OOD) test samples. Furthermore, we establish a comprehensive source tracing benchmark to assess NCST models in open-set conditions. The experimental results reveal that although the NCST models perform well in in-distribution (ID) classification and OOD detection, they lack robustness in classifying unseen real audio. The ST-Codecfake dataset and code are available.


[79] 2501.06526

Advancements in UAV-based Integrated Sensing and Communication: A Comprehensive Survey

Unmanned aerial vehicle (UAV)-based integrated sensing and communication (ISAC) systems are poised to revolutionize next-generation wireless networks by enabling simultaneous sensing and communication (S&C). This survey comprehensively reviews UAV-ISAC systems, highlighting foundational concepts, key advancements, and future research directions. We explore recent advancements in UAV-based ISAC systems from various perspectives and objectives, including advanced channel estimation (CE), beam tracking, and system throughput optimization under joint S&C constraints. Additionally, we examine weighted sum rate (WSR) and sensing trade-offs, delay and age of information (AoI) minimization, energy efficiency (EE), and security enhancement. These applications highlight the potential of UAV-based ISAC systems to improve spectrum utilization, enhance communication reliability, reduce latency, and optimize energy consumption across diverse domains, including smart cities, disaster relief, and defense operations. The survey also features summary tables for comparative analysis of existing methodologies, emphasizing performance, limitations, and effectiveness in addressing various challenges. By synthesizing recent advancements and identifying open research challenges, this survey aims to be a valuable resource for developing efficient, adaptive, and secure UAV-based ISAC systems.


[80] 2501.06528

Safe Circumnavigation of a Hostile Target Using Range-Based Measurements

Robotic systems are frequently deployed in missions that are dull, dirty, and dangerous, where ensuring their safety is of paramount importance when designing stabilizing controllers to achieve their desired goals. This paper addresses the problem of safe circumnavigation around a hostile target by a nonholonomic robot, with the objective of maintaining a desired safe distance from the target. Our solution approach involves incorporating an auxiliary circle into the problem formulation, which assists in navigating the robot around the target using available range-based measurements. By leveraging the concept of a barrier Lyapunov function, we propose a novel control law that ensures stable circumnavigation around the target while preventing the robot from entering the safety circle. This controller is designed based on a parameter that depends on the radii of three circles, namely the stabilizing circle, the auxiliary circle, and the safety circle. By identifying an appropriate range for this design parameter, we rigorously prove the stability of the desired equilibrium of the closed-loop system. Additionally, we provide an analysis of the robot's motion within the auxiliary circle, which is influenced by a gain parameter in the proposed controller. Simulation and experimental results are presented to illustrate the key theoretical developments.


[81] 2501.06545

Energy-Aware Resource Allocation for Energy Harvesting Powered Wireless Sensor Nodes

Low harvested energy poses a significant challenge to sustaining continuous communication in energy harvesting (EH)-powered wireless sensor networks. This is mainly due to intermittent and limited power availability from radio frequency signals. In this paper, we introduce a novel energy-aware resource allocation problem aimed at enabling the asynchronous accumulate-then-transmit protocol, offering an alternative to the extensively studied harvest-then-transmit approach. Specifically, we jointly optimize power allocation and time fraction dedicated to EH to maximize the average long-term system throughput, accounting for both data and energy queue lengths. By leveraging inner approximation and network utility maximization techniques, we develop a simple yet efficient iterative algorithm that guarantees at least a local optimum and achieves long-term utility improvement. Numerical results highlight the proposed approach's effectiveness in terms of both queue length and sustained system throughput.


[82] 2501.06566

Cooperative Aerial Robot Inspection Challenge: A Benchmark for Heterogeneous Multi-UAV Planning and Lessons Learned

We propose the Cooperative Aerial Robot Inspection Challenge (CARIC), a simulation-based benchmark for motion planning algorithms in heterogeneous multi-UAV systems. CARIC features UAV teams with complementary sensors, realistic constraints, and evaluation metrics prioritizing inspection quality and efficiency. It offers a ready-to-use perception-control software stack and diverse scenarios to support the development and evaluation of task allocation and motion planning algorithms. Competitions using CARIC were held at IEEE CDC 2023 and the IROS 2024 Workshop on Multi-Robot Perception and Navigation, attracting innovative solutions from research teams worldwide. This paper examines the top three teams from CDC 2023, analyzing their exploration, inspection, and task allocation strategies while drawing insights into their performance across scenarios. The results highlight the task's complexity and suggest promising directions for future research in cooperative multi-UAV systems.


[83] 2501.06583

Optimizing wheel loader performance: an end-to-end approach

Wheel loaders in mines and construction sites repeatedly load soil from a pile to load receivers. This task presents a challenging optimization problem since each loading's performance depends on the pile state, which depends on previous loadings. We investigate an end-to-end optimization approach considering future loading outcomes and V-cycle transportation costs. To predict the evolution of the pile state and the loading performance, we use world models that leverage deep neural networks trained on numerous simulated loading cycles. A look-ahead tree search optimizes the sequence of loading actions by evaluating the performance of thousands of action candidates, which expand into subsequent action candidates under the predicted pile states recursively. Test results demonstrate that, over a horizon of 15 sequential loadings, the look-ahead tree search is 6% more efficient than a greedy strategy, which always selects the action that maximizes the current single loading performance, and 14% more efficient than using a fixed loading controller optimized for the nominal case.


[84] 2501.06620

Differentially Private Distribution Estimation Using Functional Approximation

The cumulative distribution function (CDF) is fundamental due to its ability to reveal information about random variables, making it essential in studies that require privacy-preserving methods to protect sensitive data. This paper introduces a novel privacy-preserving CDF method inspired by functional analysis and the functional mechanism. Our approach projects the empirical CDF into a predefined space, approximating it using specific functions, and protects the coefficients to achieve a differentially private empirical CDF. Compared to existing methods like histogram queries and adaptive quantiles, our method is preferable in decentralized settings and scenarios where CDFs must be updated with newly collected data.


[85] 2501.06653

Theoretical Characterization of Effect of Masks in Snapshot Compressive Imaging

Snapshot compressive imaging (SCI) refers to the recovery of three-dimensional data cubes, such as videos or hyperspectral images, from their two-dimensional projections, which are generated by a special encoding of the data with a mask. SCI systems commonly use binary-valued masks that follow certain physical constraints. Optimizing these masks subject to these constraints is expected to improve system performance. However, prior theoretical work on SCI systems focuses solely on independently and identically distributed (i.i.d.) Gaussian masks, which do not permit such optimization. On the other hand, existing practical mask optimizations rely on computationally intensive joint optimizations that provide limited insight into the role of masks and are expected to be sub-optimal due to the non-convexity and complexity of the optimization. In this paper, we analytically characterize the performance of SCI systems employing binary masks and leverage our analysis to optimize hardware parameters. Our findings provide a comprehensive and fundamental understanding of the role of binary masks, with both independent and dependent elements, and their optimization. We also present simulation results that confirm our theoretical findings and further illuminate different aspects of mask design.
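The standard SCI forward model referenced above is easy to state in a few lines: each frame of the data cube is modulated by a binary mask and the results are summed into one 2D snapshot. The i.i.d. Bernoulli(0.5) masks below are only an example; the structured and dependent binary masks analyzed in the paper are not reproduced here.

    # SCI forward model sketch: y = sum_t M_t * x_t with example binary masks.
    import numpy as np

    rng = np.random.default_rng(0)
    H, W, T = 64, 64, 8
    cube = rng.random((H, W, T))                          # stand-in video cube x_1..x_T
    masks = (rng.random((H, W, T)) < 0.5).astype(float)   # binary masks M_1..M_T

    snapshot = np.sum(masks * cube, axis=-1)              # single coded 2D measurement y
    print(snapshot.shape)                                 # (64, 64)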


[86] 2501.06700

Average Reward Reinforcement Learning for Wireless Radio Resource Management

In this paper, we address a crucial but often overlooked issue in applying reinforcement learning (RL) to radio resource management (RRM) in wireless communications: the mismatch between the discounted reward RL formulation and the undiscounted goal of wireless network optimization. To the best of our knowledge, we are the first to systematically investigate this discrepancy, starting with a discussion of the problem formulation followed by simulations that quantify the extent of the gap. To bridge this gap, we introduce the use of average reward RL, a method that aligns more closely with the long-term objectives of RRM. We propose a new method, the Average Reward Off-policy Soft Actor-Critic (ARO SAC), which is an adaptation of the well-known Soft Actor-Critic algorithm to the average reward framework. This new method achieves a significant performance improvement: our simulation results demonstrate a 15% gain in system performance over the traditional discounted reward RL approach, underscoring the potential of average reward RL in enhancing the efficiency and effectiveness of wireless network optimization.
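To show how the average-reward formulation differs from the discounted one, the sketch below contrasts a generic tabular differential Q-learning update with the standard discounted update; it is not the ARO SAC algorithm itself, and the step sizes and toy problem are assumptions for illustration.

    # Generic tabular sketch: average-reward (differential) vs discounted Q-learning.
    # In the average-reward update, a running reward-rate estimate replaces the
    # discount factor.  Not the ARO SAC algorithm proposed in the paper.
    import numpy as np

    def differential_q_update(Q, r_bar, s, a, r, s_next, alpha=0.1, eta=0.1):
        # One step of average-reward (differential) Q-learning.
        delta = r - r_bar + np.max(Q[s_next]) - Q[s, a]
        Q[s, a] += alpha * delta
        r_bar += eta * alpha * delta        # track the long-run average reward
        return r_bar

    def discounted_q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
        # Standard discounted Q-learning update, shown for comparison.
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])

    # Toy usage on a two-state, two-action problem.
    Q, r_bar = np.zeros((2, 2)), 0.0
    r_bar = differential_q_update(Q, r_bar, s=0, a=1, r=1.0, s_next=1)
    discounted_q_update(Q, s=0, a=1, r=1.0, s_next=1)
    print(Q, r_bar)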


[87] 2501.06719

Hierarchical Sampling-based Planner with LTL Constraints and Text Prompting

This project introduces a hierarchical planner integrating Linear Temporal Logic (LTL) constraints with natural language prompting for robot motion planning. The framework decomposes maps into regions, generates directed graphs, and converts them into transition systems for high-level planning. Text instructions are translated into LTL formulas and converted to Deterministic Finite Automata (DFA) for sequential goal-reaching tasks while adhering to safety constraints. High-level plans, derived via Breadth-First Search (BFS), guide low-level planners such as Rapidly-exploring Random Trees (RRT) and Probabilistic Roadmaps (PRM) for obstacle-avoiding navigation that satisfies the LTL tasks. The approach demonstrates adaptability to various task complexities, though challenges such as graph construction overhead and suboptimal path generation remain. Future directions include extending the framework to account for terrain conditions and incorporating higher-order dynamics.


[88] 2501.06726

Integrated Sensing and Edge AI: Realizing Intelligent Perception in 6G

Sensing and edge artificial intelligence (AI) are envisioned as two essential and interconnected functions in sixth-generation (6G) mobile networks. On the one hand, sensing-empowered applications rely on powerful AI models to extract features and understand semantics from ubiquitous wireless sensors. On the other hand, the massive amount of sensory data serves as the fuel to continuously refine edge AI models. This deep integration of sensing and edge AI has given rise to a new task-oriented paradigm known as integrated sensing and edge AI (ISEA), which features a holistic design approach to communication, AI computation, and sensing for optimal sensing-task performance. In this article, we present a comprehensive survey of ISEA. We first provide technical preliminaries for sensing, edge AI, and new communication paradigms in ISEA. Then, we study several use cases of ISEA to demonstrate its practical relevance and introduce current standardization and industrial progress. Next, the design principles, metrics, tradeoffs, and architectures of ISEA are established, followed by a thorough overview of ISEA techniques, including digital air interface, over-the-air computation, and advanced signal processing. Its interplay with various 6G advancements, e.g., new physical-layer and networking techniques, is presented. Finally, we present future research opportunities in ISEA, including the integration of foundation models, convergence of ISEA and integrated sensing and communications (ISAC), and ultra-low-latency ISEA.


[89] 2501.06744

Enabling Cardiac Monitoring using In-ear Ballistocardiogram on COTS Wireless Earbuds

The human ear offers a unique opportunity for cardiac monitoring due to its physiological and practical advantages. However, existing earable solutions require additional hardware and complex processing, posing challenges for commercial True Wireless Stereo (TWS) earbuds which are limited by their form factor and resources. In this paper, we propose TWSCardio, a novel system that repurposes the IMU sensors in TWS earbuds for cardiac monitoring. Our key finding is that these sensors can capture in-ear ballistocardiogram (BCG) signals. TWSCardio reuses the unstable Bluetooth channel to stream the IMU data to a smartphone for BCG processing. It incorporates a signal enhancement framework to address issues related to missing data and low sampling rate, while mitigating motion artifacts by fusing multi-axis information. Furthermore, it employs a region-focused signal reconstruction method to translate the multi-axis in-ear BCG signals into fine-grained seismocardiogram (SCG) signals. We have implemented TWSCardio as an efficient real-time app. Our experiments on 100 subjects verify that TWSCardio can accurately reconstruct cardiac signals while showing resilience to motion artifacts, missing data, and low sampling rates. Our case studies further demonstrate that TWSCardio can support diverse cardiac monitoring applications.


[90] 2501.06783

Cost-Effective Robotic Handwriting System with AI Integration

This paper introduces a cost-effective robotic handwriting system designed to replicate human-like handwriting with high precision. Combining a Raspberry Pi Pico microcontroller, 3D-printed components, and a machine learning-based handwriting generation model implemented via TensorFlow.js, the system converts user-supplied text into realistic stroke trajectories. By leveraging lightweight 3D-printed materials and efficient mechanical designs, the system achieves a total hardware cost of approximately \$56, significantly undercutting commercial alternatives. Experimental evaluations demonstrate handwriting precision within $\pm$0.3 millimeters and a writing speed of approximately 200 mm/min, positioning the system as a viable solution for educational, research, and assistive applications. This study seeks to lower the barriers to personalized handwriting technologies, making them accessible to a broader audience.


[91] 2501.06798

OFDM-based JCAS under Attack: The Dual Threat of Spoofing and Jamming in WLAN Sensing

This study reveals the vulnerabilities of Wireless Local Area Networks (WLAN) sensing, under the scope of joint communication and sensing (JCAS), focusing on target spoofing and deceptive jamming techniques. We use orthogonal frequency-division multiplexing (OFDM) to explore how adversaries can exploit WLAN's sensing capabilities to inject false targets and disrupt normal operations. Unlike traditional methods that require sophisticated digital radio-frequency memory hardware, we demonstrate that much simpler software-defined radios can effectively serve as deceptive jammers in WLAN settings. Through comprehensive modeling and practical experiments, we show how deceptive jammers can manipulate the range-Doppler map (RDM) by altering signal integrity, thereby posing significant security threats to OFDM-based JCAS systems. Our findings comprehensively evaluate jammer impact on RDMs and propose several jamming strategies that vary in complexity and detectability.
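
For intuition on why altering channel estimates manipulates the RDM: in OFDM sensing, the range-Doppler map is typically formed by an IFFT across subcarriers (range) and an FFT across OFDM symbols (Doppler), so phase ramps injected into the estimates translate directly into shifted or spurious peaks. A minimal sketch under those standard assumptions, not the paper's experimental pipeline:

    import numpy as np

    def range_doppler_map(channel_estimates):
        """channel_estimates: (num_subcarriers, num_ofdm_symbols) complex
        matrix of per-subcarrier channel estimates across consecutive OFDM
        symbols. IFFT across subcarriers resolves range; FFT across symbols
        resolves Doppler.
        """
        rdm = np.fft.ifft(channel_estimates, axis=0)   # range dimension
        rdm = np.fft.fft(rdm, axis=1)                  # Doppler dimension
        return np.abs(np.fft.fftshift(rdm, axes=1))    # center zero Doppler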


[92] 2501.06938

Evaluating unsupervised contrastive learning framework for MRI sequences classification

The automatic identification of Magnetic Resonance Imaging (MRI) sequences can streamline clinical workflows by reducing the time radiologists spend manually sorting and identifying sequences, thereby enabling faster diagnosis and treatment planning for patients. However, the lack of standardization in the parameters of MRI scans poses challenges for automated systems and complicates the generation and utilization of datasets for machine learning research. To address this issue, we propose a system for MRI sequence identification using an unsupervised contrastive deep learning framework. By training a convolutional neural network based on the ResNet-18 architecture, our system treats the identification of nine common MRI sequence types as a 9-class classification problem. The network was trained using an in-house dataset and validated on several public datasets, including BraTS, ADNI, the Fused Radiology-Pathology Prostate Dataset, and the Breast Cancer Dataset (ACRIN), among others, encompassing diverse acquisition protocols and requiring only 2D slices for training. Our system achieves a classification accuracy of over 0.95 across the nine most common MRI sequence types.
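
Unsupervised contrastive frameworks of this kind typically pull together the embeddings of two augmented views of the same slice and push apart all other pairs. The sketch below shows a standard SimCLR-style NT-Xent loss as an illustration; it is an assumption about the training objective, not the authors' exact loss.

    import torch
    import torch.nn.functional as F

    def nt_xent_loss(z1, z2, temperature=0.5):
        """SimCLR-style contrastive loss for two augmented views of N samples.

        z1, z2: (N, D) embeddings of the two views (e.g. two augmentations of
        the same 2D MRI slice). The positive of each embedding is its other
        view; every other embedding in the batch acts as a negative.
        """
        n = z1.shape[0]
        z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)    # (2N, D)
        sim = z @ z.t() / temperature                          # cosine similarities
        mask = torch.eye(2 * n, dtype=torch.bool, device=sim.device)
        sim = sim.masked_fill(mask, float("-inf"))             # exclude self-pairs
        targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(sim.device)
        return F.cross_entropy(sim, targets)

    # Example: loss = nt_xent_loss(encoder(view1), encoder(view2))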


[93] 2501.06959

Sanidha: A Studio Quality Multi-Modal Dataset for Carnatic Music

Music source separation demixes a piece of music into its individual sound sources (vocals, percussion, melodic instruments, etc.), a task with no simple mathematical solution. It requires deep learning methods involving training on large datasets of isolated music stems. The most commonly available datasets are made from commercial Western music, limiting the models' applications to non-Western genres like Carnatic music. Carnatic music is a live tradition, with the available multi-track recordings containing overlapping sounds and bleeds between the sources. This poses a challenge to commercially available source separation models like Spleeter and Hybrid Demucs. In this work, we introduce 'Sanidha', the first open-source dataset for Carnatic music offering studio-quality, multi-track recordings with minimal to no overlap or bleed. Along with the audio files, we provide high-definition videos of the artists' performances. Additionally, we fine-tuned Spleeter, one of the most commonly used source separation models, on our dataset and observed improved SDR performance compared to fine-tuning on a pre-existing Carnatic multi-track dataset. The outputs of the model fine-tuned on 'Sanidha' are evaluated through a listening study.


[94] 2501.06965

Kolmogorov-Arnold Recurrent Network for Short Term Load Forecasting Across Diverse Consumers

Load forecasting plays a crucial role in energy management, directly impacting grid stability, operational efficiency, cost reduction, and environmental sustainability. Traditional Vanilla Recurrent Neural Networks (RNNs) face issues such as vanishing and exploding gradients, whereas sophisticated RNNs such as LSTMs have shown considerable success in this domain. However, these models often struggle to accurately capture complex and sudden variations in energy consumption, and their applicability is typically limited to specific consumer types, such as offices or schools. To address these challenges, this paper proposes the Kolmogorov-Arnold Recurrent Network (KARN), a novel load forecasting approach that combines the flexibility of Kolmogorov-Arnold Networks with RNN's temporal modeling capabilities. KARN utilizes learnable temporal spline functions and edge-based activations to better model non-linear relationships in load data, making it adaptable across a diverse range of consumer types. The proposed KARN model was rigorously evaluated on a variety of real-world datasets, including student residences, detached homes, a home with electric vehicle charging, a townhouse, and industrial buildings. Across all these consumer categories, KARN consistently outperformed traditional Vanilla RNNs, while it surpassed LSTM and Gated Recurrent Units (GRUs) in six buildings. The results demonstrate KARN's superior accuracy and applicability, making it a promising tool for enhancing load forecasting in diverse energy management scenarios.


[95] 2501.06974

Downlink OFDM-FAMA in 5G-NR Systems

Fluid antenna multiple access (FAMA), enabled by the fluid antenna system (FAS), offers a new and straightforward solution to massive connectivity. Previous results on FAMA were primarily based on narrowband channels. This paper studies the adoption of FAMA within the fifth-generation (5G) orthogonal frequency division multiplexing (OFDM) framework, referred to as OFDM-FAMA, and evaluates its performance in broadband multipath channels. We first design the OFDM-FAMA system, taking into account 5G channel coding and OFDM modulation. Then the system's achievable rate is analyzed, and an algorithm to approximate the FAS configuration at each user is proposed based on the rate. Extensive link-level simulation results reveal that OFDM-FAMA can significantly improve the multiplexing gain over the OFDM system with fixed-position antenna (FPA) users, especially when robust channel coding is applied and the number of radio-frequency (RF) chains at each user is small.
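
The essential FAMA mechanism is that each user selects the fluid-antenna port that best separates its signal from interference. The sketch below is a deliberately simplified per-user port-selection rule based on average SINR across subcarriers; the paper instead derives the FAS configuration from the achievable rate, so the rule, array shapes, and names here are illustrative assumptions.

    import numpy as np

    def select_port(h_desired, h_interf, noise_power=1e-2):
        """Pick the fluid-antenna port with the best average SINR over the band.

        h_desired : (num_ports, num_subcarriers) desired-user channel gains.
        h_interf  : (num_ports, num_subcarriers, num_interferers) interference gains.
        Returns the index of the selected port (one port for the whole band).
        """
        signal = np.abs(h_desired) ** 2
        interference = np.sum(np.abs(h_interf) ** 2, axis=2)
        sinr = signal / (interference + noise_power)   # per port, per subcarrier
        return int(np.argmax(sinr.mean(axis=1)))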


[96] 2501.06976

TensorConvolutionPlus: A python package for distribution system flexibility area estimation

Power system operators need new, efficient operational tools to use the flexibility of distributed resources and deal with the challenges of highly uncertain and variable power systems. Transmission system operators can consider the available flexibility in distribution systems (DSs) without breaching the DS constraints through flexibility areas. However, there is an absence of open-source packages for flexibility area estimation. This paper introduces TensorConvolutionPlus, a user-friendly Python-based package for flexibility area estimation. The main features of TensorConvolutionPlus include estimating flexibility areas using the TensorConvolution+ algorithm, a power flow (PF)-based algorithm, an exhaustive PF-based algorithm, and an optimal power flow-based algorithm. Additional features include adapting flexibility area estimations from different operating conditions and including flexibility service providers that offer discrete setpoints of flexibility. The TensorConvolutionPlus package facilitates a broader adoption of flexibility estimation algorithms by system operators and power system researchers.


[97] 2501.07041

Beam Structured Turbo Receiver for HF Skywave Massive MIMO

In this paper, we investigate receiver design for high frequency (HF) skywave massive multiple-input multiple-output (MIMO) communications. We first establish a modified beam based channel model (BBCM) by performing uniform sampling for directional cosine with deterministic sampling interval, where the beam matrix is constructed using a phase-shifted discrete Fourier transform (DFT) matrix. Based on the modified BBCM, we propose a beam structured turbo receiver (BSTR) involving low-dimensional beam domain signal detection for grouped user terminals (UTs), which is proved to be asymptotically optimal in terms of minimizing mean-squared error (MSE). Moreover, we extend it to a windowed BSTR by introducing a windowing approach for interference suppression and complexity reduction, and propose a well-designed energy-focusing window. We also present an efficient implementation of the windowed BSTR by exploiting the structural properties of the beam matrix and the beam domain channel sparsity. Simulation results validate the superior performance of the proposed receivers at remarkably low complexity.
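
As background on the beam matrix construction, a phase-shifted DFT matrix steers each column towards a uniformly sampled directional cosine offset by a fixed fractional shift, and its columns remain orthonormal for any shift. A minimal sketch under half-wavelength ULA assumptions (array size and shift are hypothetical):

    import numpy as np

    def phase_shifted_dft(n, phase_shift=0.5):
        """n x n phase-shifted DFT beam matrix: column k is the steering
        vector of a half-wavelength ULA towards the uniformly sampled
        directional cosine indexed by k + phase_shift."""
        m = np.arange(n)[:, None]     # antenna index
        k = np.arange(n)[None, :]     # beam index
        return np.exp(-2j * np.pi * m * (k + phase_shift) / n) / np.sqrt(n)

    V = phase_shifted_dft(64)
    assert np.allclose(V.conj().T @ V, np.eye(64))   # unitary beam matrix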


[98] 2501.07057

Optimization with Multi-sourced Reference Information and Unknown Trust: A Distributionally Robust Approach

In problems that involve input parameter information gathered from multiple data sources with varying reliability, incorporating users' trust about different sources in decision-optimization models can potentially improve solution performance and reliability. In this work, we propose a novel multi-reference distributionally robust optimization (MR-DRO) framework, where the model inputs are uncertain and their probability distributions can be statistically inferred from multiple data sources. Via nonparametric data fusion, we construct a Wasserstein ambiguity set to minimize the worst-case expected value of a stochastic objective function, accounting for both uncertainty and unknown reliability of information sources. We reformulate the MR-DRO model as a linear program given linear objective and constraints in the original problem. We also incorporate a dynamic trust update mechanism that adjusts the trust for each source based on its performance over time. In addition, we introduce the concept of probability dominance to identify sources with dominant trust. Via solving instances of resource allocation and portfolio optimization, we demonstrate the effectiveness of the trust-informed MR-DRO approach compared to traditional optimization frameworks relying on a single data source. Our results highlight the significance of integrating (dynamic) user trust in decision making under uncertainty, particularly when given diverse and potentially conflicting input data.
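
As one concrete illustration of a dynamic trust update over data sources, the sketch below uses a standard multiplicative-weights style rule: trust shrinks exponentially with each source's observed loss and is then renormalized. This is an assumed stand-in for exposition, not the update mechanism proposed in the paper.

    import numpy as np

    def update_trust(trust, losses, eta=0.5):
        """Multiplicative-weights style trust update over K data sources.

        trust  : (K,) current trust weights (nonnegative, summing to one).
        losses : (K,) losses attributable to each source in the latest period.
        Sources with larger observed losses lose trust; weights are renormalized.
        """
        new = np.asarray(trust) * np.exp(-eta * np.asarray(losses))
        return new / new.sum()

    # Example: trust drifts towards the historically more reliable source.
    print(update_trust([0.5, 0.5], [0.2, 0.8]))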


[99] 2501.07088

MathReader : Text-to-Speech for Mathematical Documents

TTS (Text-to-Speech) document readers from Microsoft, Adobe, Apple, and OpenAI are in service worldwide. They provide relatively good TTS results for general plain text, but sometimes skip content or provide unsatisfactory results for mathematical expressions. This is because most modern academic papers are written in LaTeX, and when LaTeX formulas are compiled, they are rendered as distinctive text forms within the document. However, traditional TTS document readers output only the text as it is recognized, without considering the mathematical meaning of the formulas. To address this issue, we propose MathReader, which effectively integrates OCR, a fine-tuned T5 model, and TTS. MathReader demonstrated a lower Word Error Rate (WER) than existing TTS document readers, such as Microsoft Edge and Adobe Acrobat, when processing documents containing mathematical formulas. MathReader reduced the WER from 0.510 to 0.281 compared to Microsoft Edge, and from 0.617 to 0.281 compared to Adobe Acrobat. This will significantly contribute to alleviating the inconvenience faced by users who want to listen to documents, especially those who are visually impaired. The code is available at https://github.com/hyeonsieun/MathReader.


[100] 2501.07102

AdaCS: Adaptive Normalization for Enhanced Code-Switching ASR

Intra-sentential code-switching (CS) refers to the alternation between languages that happens within a single utterance and is a significant challenge for Automatic Speech Recognition (ASR) systems, for example, when a Vietnamese speaker uses foreign proper names or specialized terms within their speech. ASR systems often struggle to accurately transcribe intra-sentential CS due to their training on monolingual data and the unpredictable nature of CS. This issue is even more pronounced for low-resource languages, where limited data availability hinders the development of robust models. In this study, we propose AdaCS, a normalization model that integrates an adaptive bias attention module (BAM) into an encoder-decoder network. This approach provides a robust solution to CS ASR in unseen domains. By utilizing BAM to both identify and normalize CS phrases, AdaCS enhances its adaptive capabilities with a biased list of words provided during inference. Our method demonstrates impressive performance and the ability to handle unseen CS phrases across various domains. Experiments show that AdaCS outperforms the previous state-of-the-art method on Vietnamese CS ASR normalization, achieving considerable WER reductions of 56.2% and 36.8% on the two proposed test sets.


[101] 2501.07148

Implementing LoRa MIMO System for Internet of Things

Bandwidth constraints limit LoRa implementations. Contemporary IoT applications require higher throughput than LoRa provides. This work introduces a LoRa Multiple Input Multiple Output (MIMO) system and a spatial multiplexing algorithm to address LoRa's bandwidth limitation. The transceivers in the proposed approach modulate the signals on distinct frequencies of the same LoRa band. A Frequency Division Multiplexing (FDM) method is used at the transmitters to provide a wider MIMO channel. Unlike conventional Orthogonal Frequency Division Multiplexing (OFDM) techniques, this work exploits the orthogonality of LoRa signals, facilitated by its proprietary Chirp Spread Spectrum (CSS) modulation, to perform OFDM in the proposed LoRa MIMO system. By varying the Spreading Factor (SF) and bandwidth of LoRa signals, orthogonal signals can be transmitted on the same frequency irrespective of the FDM. Even though the channel correlation is minimal for different spreading factors and bandwidths, different Carrier Frequencies (CF) ensure the signals do not overlap and provide additional degrees of freedom. This work assesses the proposed model's performance and conducts an extensive analysis to provide an overview of resources consumed by the proposed system. Finally, this work provides the detailed results of a thorough evaluation of the model on test hardware.
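
To make the spreading-factor/bandwidth orthogonality concrete, the sketch below generates two baseband LoRa-style up-chirps with different (SF, bandwidth) pairs of equal symbol duration and checks their normalized cross-correlation, which stays low. The waveform model and parameters are standard textbook assumptions, not the paper's hardware setup.

    import numpy as np

    def lora_upchirp(sf, bw, fs):
        """Baseband LoRa-style up-chirp: frequency sweeps from -bw/2 to +bw/2
        over one symbol of duration 2**sf / bw, sampled at rate fs."""
        t_sym = 2 ** sf / bw
        t = np.arange(int(round(t_sym * fs))) / fs
        phase = 2 * np.pi * (-0.5 * bw * t + 0.5 * (bw / t_sym) * t ** 2)
        return np.exp(1j * phase)

    fs = 500e3                                    # common sampling rate for both
    c1 = lora_upchirp(sf=7, bw=125e3, fs=fs)      # 1.024 ms symbol
    c2 = lora_upchirp(sf=8, bw=250e3, fs=fs)      # same duration, different SF/BW
    n = min(len(c1), len(c2))
    rho = np.abs(np.vdot(c1[:n], c2[:n])) / n     # normalized cross-correlation
    print(f"|cross-correlation| between the two chirps: {rho:.3f}")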


[102] 2501.07173

Knowledge Distillation and Enhanced Subdomain Adaptation Using Graph Convolutional Network for Resource-Constrained Bearing Fault Diagnosis

Bearing fault diagnosis under varying working conditions faces challenges, including a lack of labeled data, distribution discrepancies, and resource constraints. To address these issues, we propose a progressive knowledge distillation framework that transfers knowledge from a complex teacher model, utilizing a Graph Convolutional Network (GCN) with Autoregressive moving average (ARMA) filters, to a compact and efficient student model. To mitigate distribution discrepancies and labeling uncertainty, we introduce Enhanced Local Maximum Mean Squared Discrepancy (ELMMSD), which leverages mean and variance statistics in the Reproducing Kernel Hilbert Space (RKHS) and incorporates a priori probability distributions between labels. This approach increases the distance between clustering centers, bridges subdomain gaps, and enhances subdomain alignment reliability. Experimental results on benchmark datasets (CWRU and JNU) demonstrate that the proposed method achieves superior diagnostic accuracy while significantly reducing computational costs. Comprehensive ablation studies validate the effectiveness of each component, highlighting the robustness and adaptability of the approach across diverse working conditions.
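
The proposed ELMMSD builds on maximum mean discrepancy, which compares feature distributions through kernel mean embeddings in an RKHS. As background, the sketch below computes the standard (biased) squared MMD with a Gaussian kernel; the paper's enhanced variant, which also uses variance statistics and label priors, is not reproduced here.

    import torch

    def gaussian_kernel(x, y, sigma=1.0):
        # Pairwise RBF kernel values between the rows of x and y.
        return torch.exp(-torch.cdist(x, y) ** 2 / (2 * sigma ** 2))

    def mmd2(source, target, sigma=1.0):
        """Biased estimate of the squared maximum mean discrepancy between
        source-domain and target-domain feature batches in a Gaussian-kernel
        RKHS."""
        k_ss = gaussian_kernel(source, source, sigma).mean()
        k_tt = gaussian_kernel(target, target, sigma).mean()
        k_st = gaussian_kernel(source, target, sigma).mean()
        return k_ss + k_tt - 2 * k_st

    # Example: domain gap between two feature batches.
    # gap = mmd2(feats_source_domain, feats_target_domain)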


[103] 2501.07180

Evaluating Robotic Approach Techniques for the Insertion of a Straight Instrument into a Vitreoretinal Surgery Trocar

Advances in vitreoretinal robotic surgery enable precise techniques for gene therapies. This study evaluates three robotic approaches using the 7-DoF robotic arm for docking a micro-precise tool to a trocar: fully co-manipulated, hybrid co-manipulated/teleoperated, and hybrid with camera assistance. The fully co-manipulated approach was the fastest but had a 42% success rate. Hybrid methods showed higher success rates (91.6% and 100%) and completed tasks within 2 minutes. NASA Task Load Index (TLX) assessments indicated lower physical demand and effort for hybrid approaches.


[104] 2501.07246

Audio-CoT: Exploring Chain-of-Thought Reasoning in Large Audio Language Model

Large Audio-Language Models (LALMs) have demonstrated remarkable performance in tasks involving audio perception and understanding, such as speech recognition and audio captioning. However, their reasoning capabilities - critical for solving complex real-world problems - remain underexplored. In this work, we conduct the first exploration into integrating Chain-of-Thought (CoT) reasoning into LALMs to enhance their reasoning ability across auditory modalities. We evaluate representative CoT methods, analyzing their performance in both information extraction and reasoning tasks across sound, music, and speech domains. Our findings reveal that CoT methods significantly improve performance on easy and medium tasks but encounter challenges with hard tasks, where reasoning chains can confuse the model rather than improve accuracy. Additionally, we identify a positive correlation between reasoning path length and accuracy, demonstrating the potential of scaling inference for advanced instruction-following and reasoning. This study not only highlights the promise of CoT in enhancing LALM reasoning capabilities but also identifies key limitations and provides actionable directions for future research.


[105] 2501.07318

Movable Antenna Enhanced Integrated Sensing and Communication Via Antenna Position Optimization

In this paper, we propose an integrated sensing and communication (ISAC) system aided by the movable-antenna (MA) array, which can improve the communication and sensing performance via flexible antenna movement over conventional fixed-position antenna (FPA) array. First, we consider the downlink multiuser communication, where each user is randomly distributed within a given three-dimensional zone with local movement. To reduce the overhead of frequent antenna movement, the antenna position vector (APV) is designed based on users' statistical channel state information (CSI), so that the antennas only need to be moved in a large timescale. Then, for target sensing, the Cramer-Rao bounds (CRBs) of the estimation mean square error for different spatial angles of arrival (AoAs) are derived as functions of MAs' positions. Based on the above, we formulate an optimization problem to maximize the expected minimum achievable rate among all communication users, with given constraints on the maximum acceptable CRB thresholds for target sensing. An alternating optimization algorithm is proposed to iteratively optimize one of the horizontal and vertical APVs of the MA array with the other being fixed. Numerical results demonstrate that our proposed MA arrays can significantly enlarge the trade-off region between communication and sensing performance compared to conventional FPA arrays with different inter-antenna spacing. It is also revealed that the steering vectors of the designed MA arrays exhibit low correlation in the angular domain, thus effectively reducing channel correlation among communication users to enhance their achievable rates, while alleviating ambiguity in target angle estimation to achieve improved sensing accuracy.


[106] 2501.07329

Joint Automatic Speech Recognition And Structure Learning For Better Speech Understanding

Spoken language understanding (SLU) is a structure prediction task in the field of speech. Recently, many works on SLU that treat it as a sequence-to-sequence task have achieved great success. However, this method is not suitable for simultaneous speech recognition and understanding. In this paper, we propose a joint speech recognition and structure learning framework (JSRSL), an end-to-end span-based SLU model that can accurately transcribe speech and extract structured content simultaneously. We conduct experiments on named entity recognition and intent classification using the Chinese dataset AISHELL-NER and the English dataset SLURP. The results show that our proposed method not only outperforms the traditional sequence-to-sequence method in both transcription and extraction capabilities but also achieves state-of-the-art performance on the two datasets.


[107] 2501.07337

Digital Operating Mode Classification of Real-World Amateur Radio Transmissions

This study presents an ML approach for classifying digital radio operating modes, evaluated on real-world transmissions. We generated 98 different parameterized radio signals from 17 digital operating modes, transmitted each of them on the 70 cm (UHF) amateur radio band, and recorded our transmissions with two different architectures of SDR receivers. Three lightweight ML models were trained exclusively on spectrograms of limited non-transmitted signals with random characters as payloads. This training involved an online data augmentation pipeline to simulate various radio channel impairments. Our best model, EfficientNetB0, achieved an accuracy of 93.80% across the 17 operating modes and 85.47% across all 98 parameterized radio signals, evaluated on our real-world transmissions with Wikipedia articles as payloads. Furthermore, we analyzed the impact of varying signal durations and the number of FFT bins on classification, assessed the effectiveness of our simulated channel impairments, and tested our models across multiple simulated SNRs.


[108] 2501.07461

A Linear Parameter-Varying Framework for the Analysis of Time-Varying Optimization Algorithms

In this paper we propose a framework to analyze iterative first-order optimization algorithms for time-varying convex optimization. We assume that the temporal variability is caused by a time-varying parameter entering the objective, which can be measured at the time of decision but whose future values are unknown. We consider the case of strongly convex objective functions with Lipschitz continuous gradients and address the class of running algorithms where only one iteration per time change is performed. We model these algorithms as discrete-time linear parameter-varying (LPV) systems in feedback with a time-varying gradient. We leverage the approach of analyzing algorithms as uncertain control interconnections with integral quadratic constraints (IQCs) and generalize that framework to the time-varying case. We propose novel IQCs that are capable of capturing the behavior of time-varying nonlinearities and leverage techniques from the LPV literature to establish novel bounds on the tracking error. Quantitative bounds can be computed by solving a semi-definite program and can be interpreted as an input-to-state stability result with respect to a disturbance signal which increases with the temporal variability of the problem. As a departure from results in this research area, our bounds introduce terms that can be interpreted as a temporal rate of change in the cost function and the optimal value. We exemplify our main results with numerical experiments that showcase how our analysis framework is able to capture convergence rates of different first-order algorithms for time-varying optimization through the choice of IQC and rate bounds.
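
For readers unfamiliar with the "running algorithm" setting, the sketch below runs plain gradient descent with exactly one iteration per time change on a toy time-varying strongly convex quadratic and records the tracking error to the moving minimizer. It only illustrates the problem class; the paper's LPV/IQC analysis and SDP-based bounds are not reproduced.

    import numpy as np

    def running_gradient_descent(grad, theta_path, x0, step=0.1):
        """One gradient iteration per time step on a time-varying objective
        f(x, theta_t), where grad(x, theta) returns the gradient in x.

        Returns the iterates and the tracking error ||x_t - x_t^*||; for the
        quadratic example below the minimizer is theta_t itself.
        """
        x = np.asarray(x0, dtype=float)
        iterates, errors = [], []
        for theta in theta_path:
            x = x - step * grad(x, theta)      # single iteration per time change
            iterates.append(x.copy())
            errors.append(np.linalg.norm(x - theta))
        return np.array(iterates), np.array(errors)

    # Example: f(x, theta) = 0.5 * ||x - theta||^2 with a drifting parameter.
    thetas = [np.array([np.sin(0.1 * t), np.cos(0.1 * t)]) for t in range(200)]
    xs, errs = running_gradient_descent(lambda x, th: x - th, thetas, x0=[0.0, 0.0])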


[109] 2501.07474

Estimating Musical Surprisal in Audio

In modeling musical surprisal and expectancy with computational methods, it has been proposed to use the information content (IC) of one-step predictions from an autoregressive model as a proxy for surprisal in symbolic music. With an appropriately chosen model, the IC of musical events has been shown to correlate with human perception of surprise and complexity aspects, including tonal and rhythmic complexity. This work investigates whether an analogous methodology can be applied to music audio. We train an autoregressive Transformer model to predict compressed latent audio representations of a pretrained autoencoder network. We verify learning effects by estimating the decrease in IC with repetitions. We investigate the mean IC of musical segment types (e.g., A or B) and find that segment types appearing later in a piece have a higher IC than earlier ones on average. We investigate the IC's relation to audio and musical features and find it correlated with timbral variations and loudness and, to a lesser extent, with dissonance, rhythmic complexity, and onset density. Finally, we investigate if the IC can predict EEG responses to songs and thus model humans' surprisal in music. We provide code for our method on github.com/sonycslparis/audioic.
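
The central quantity here is the information content of a one-step prediction, IC_t = -log p(x_t | x_<t). The sketch below computes it for a generic token-level autoregressive model from its logits; the paper's model operates on compressed latent audio tokens, so the shapes and names are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def information_content(logits, targets):
        """Per-event information content (surprisal) of one-step predictions:
        IC_t = -log p(x_t | x_<t).

        logits  : (T, vocab_size) autoregressive model outputs per time step.
        targets : (T,) indices of the tokens that actually occurred.
        Returns a (T,) tensor of IC values in nats.
        """
        log_probs = F.log_softmax(logits, dim=-1)
        return -log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)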


[110] 2501.07476

Encrypted Computation of Collision Probability for Secure Satellite Conjunction Analysis

The computation of collision probability ($\mathcal{P}_c$) is crucial for space environmentalism and sustainability by providing decision-making knowledge that can prevent collisions between anthropogenic space objects. However, the accuracy and precision of $\mathcal{P}_c$ computations is often compromised by limitations in computational resources and data availability. While significant improvements have been made in the computational aspects, the rising concerns regarding the privacy of collaborative data sharing can be a major limiting factor in the future conjunction analysis and risk assessment, especially as the space environment grows increasingly privatized, competitive, and fraught with conflicting strategic interests. This paper argues that the importance of privacy measures in space situational awareness (SSA) is underappreciated, and regulatory and compliance measures currently in place are not sufficient by themselves, presenting a significant gap. To address this gap, we introduce a novel encrypted architecture that leverages advanced cryptographic techniques, including homomorphic encryption (HE) and multi-party computation (MPC), to safeguard the privacy of entities computing space sustainability metrics, inter alia, $\mathcal{P}_c$. Our proposed protocol, Encrypted $\mathcal{P}_c$, integrates the Monte Carlo estimation algorithm with cryptographic solutions, enabling secure collision probability computation without exposing sensitive or proprietary information. This research advances secure conjunction analysis by developing a secure MPC protocol for $\mathcal{P}_c$ computation and highlights the need for innovative protocols to ensure a more secure and cooperative SSA landscape.
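
The protocol wraps a Monte Carlo estimator of the collision probability in HE/MPC primitives. For orientation, the sketch below shows only the plaintext estimator under a common simplified model (Gaussian relative position at closest approach, combined hard-body sphere); the encrypted machinery and the exact estimator used in the paper are not reproduced.

    import numpy as np

    def monte_carlo_pc(mean_rel, cov_rel, hard_body_radius,
                       n_samples=1_000_000, seed=0):
        """Monte Carlo estimate of collision probability: the fraction of
        sampled relative positions falling inside the combined hard-body sphere.

        mean_rel : (3,) mean relative position between the two objects.
        cov_rel  : (3, 3) combined positional covariance.
        """
        rng = np.random.default_rng(seed)
        samples = rng.multivariate_normal(mean_rel, cov_rel, size=n_samples)
        return np.mean(np.linalg.norm(samples, axis=1) < hard_body_radius)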


[111] 2501.07534

Investigating Map-Based Path Loss Models: A Study of Feature Representations in Convolutional Neural Networks

Path loss prediction is a beneficial tool for efficient use of the radio frequency spectrum. Building on prior research on high-resolution map-based path loss models, this paper studies convolutional neural network input representations in more detail. We investigate different methods of representing scalar features in convolutional neural networks. Specifically, we compare using frequency and distance as input channels to convolutional layers or as scalar inputs to regression layers. We assess model performance using three different feature configurations and find that representing scalar features as image channels results in the strongest generalization.
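
The comparison hinges on where scalar features such as frequency and distance enter the network: broadcast as constant image channels alongside the map input, or concatenated with pooled features at the regression head. The toy PyTorch model below shows both wirings side by side; the architecture, shapes, and names are illustrative assumptions, not the paper's model.

    import torch
    import torch.nn as nn

    class MapPathLossCNN(nn.Module):
        """Toy path-loss regressor contrasting two scalar-feature wirings:
        scalars broadcast as extra image channels vs. concatenated at the head."""

        def __init__(self, scalars_as_channels=True, map_channels=1, num_scalars=2):
            super().__init__()
            self.scalars_as_channels = scalars_as_channels
            in_ch = map_channels + (num_scalars if scalars_as_channels else 0)
            self.backbone = nn.Sequential(
                nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            head_in = 32 + (0 if scalars_as_channels else num_scalars)
            self.head = nn.Linear(head_in, 1)

        def forward(self, maps, scalars):
            # maps: (N, map_channels, H, W); scalars: (N, num_scalars),
            # e.g. carrier frequency and link distance.
            if self.scalars_as_channels:
                planes = scalars[:, :, None, None].expand(-1, -1, *maps.shape[-2:])
                feats = self.backbone(torch.cat([maps, planes], dim=1))
                return self.head(feats)
            feats = self.backbone(maps)
            return self.head(torch.cat([feats, scalars], dim=1))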


[112] 2501.07557

Decoding Musical Evolution Through Network Science

Music has always been central to human culture, reflecting and shaping traditions, emotions, and societal changes. Technological advancements have transformed how music is created and consumed, influencing tastes and the music itself. In this study, we use Network Science to analyze musical complexity. Drawing on $\approx20,000$ MIDI files across six macro-genres spanning nearly four centuries, we represent each composition as a weighted directed network to study its structural properties. Our results show that Classical and Jazz compositions have higher complexity and melodic diversity than recently developed genres. However, a temporal analysis reveals a trend toward simplification, with even Classical and Jazz nearing the complexity levels of modern genres. This study highlights how digital tools and streaming platforms shape musical evolution, fostering new genres while driving homogenization and simplicity.
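
One common way to represent a composition as a weighted directed network, and a plausible reading of the construction described here, is a note-transition graph: nodes are pitches and edge weights count how often one pitch follows another. The sketch below builds such a graph with networkx from a toy pitch sequence; the paper's exact construction and complexity measures may differ.

    import networkx as nx

    def transition_network(pitches):
        """Weighted directed network of a melody: nodes are pitches and the
        edge u -> v is weighted by how often pitch v directly follows pitch u."""
        g = nx.DiGraph()
        for u, v in zip(pitches[:-1], pitches[1:]):
            if g.has_edge(u, v):
                g[u][v]["weight"] += 1
            else:
                g.add_edge(u, v, weight=1)
        return g

    # Toy melody as MIDI pitch numbers; in practice these come from parsed MIDI files.
    g = transition_network([60, 62, 64, 62, 60, 67, 60, 62])
    print(g.number_of_nodes(), g.number_of_edges())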


[113] 2501.07570

Digital Twin for Smart Societies: A Catalyst for Inclusive and Accessible Healthcare

With rapid digitization and digitalization, drawing a fine line between the digital and the physical world has become nearly impossible. It has become more essential than ever to integrate all spheres of life into a single Digital Thread to address a pressing challenge of modern society: accessible and inclusive healthcare in terms of equality and equity. Techno-social advancements and mutual acceptance have enabled the infusion of digital models to simulate social settings with minimum resource utilization to make effective decisions. However, a significant gap exists in feeding back the models with appropriate real-time changes. In other words, active behavioral modeling of modern society is lacking, influencing community healthcare as a whole. By creating virtual replicas of (physical) behavioral systems, digital twins can enable real-time monitoring, simulation, and optimization of urban dynamics. This paper explores the potential of digital twins to promote inclusive healthcare for evolving smart cities. We argue that digital twins can be used to: (i) identify and address disparities in access to healthcare services, (ii) facilitate community participation, (iii) simulate the impact of urban policies and interventions on different groups of people, and (iv) aid policy-making bodies in improving access to healthcare. This paper proposes several ways to use digital twins to stitch together the actual and virtual societies. Several concepts discussed within this framework envision an active, integrated, and synchronized community aware of data privacy and security. The proposal also provides high-level, step-wise transitions that will enable this transformation.