New articles on Electrical Engineering and Systems Science


[1] 2601.09710

Multi-Level Embedding Conformer Framework for Bengali Automatic Speech Recognition

Bengali, spoken by over 300 million people, is a morphologically rich and low-resource language, posing challenges for automatic speech recognition (ASR). This research presents an end-to-end framework for Bengali ASR, building on a Conformer-CTC backbone with a multi-level embedding fusion mechanism that incorporates phoneme, syllable, and wordpiece representations. By enriching acoustic features with these linguistic embeddings, the model captures fine-grained phonetic cues and higher-level contextual patterns. The architecture employs early and late Conformer stages, with preprocessing steps including silence trimming, resampling, Log-Mel spectrogram extraction, and SpecAugment augmentation. Experiments show the model's strong potential, achieving a word error rate (WER) of 10.01% and a character error rate (CER) of 5.03%. These results demonstrate the effectiveness of combining multi-granular linguistic information with acoustic modeling, providing a scalable approach for low-resource ASR development.
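The abstract's preprocessing chain ends in SpecAugment-style masking. A minimal sketch of the time/frequency masking step (our illustrative version, not the authors' implementation; parameter names, mask widths, and the zero fill value are assumptions):

```python
import random

def spec_augment(log_mel, num_freq_masks=2, freq_width=8,
                 num_time_masks=2, time_width=20, rng=None):
    """Apply SpecAugment-style masking to a log-Mel spectrogram given as
    a list of n_mels rows, each with n_frames values. Masked bins are set
    to 0.0 (implementations often use the per-utterance mean instead)."""
    rng = rng or random.Random(0)
    n_mels, n_frames = len(log_mel), len(log_mel[0])
    out = [row[:] for row in log_mel]  # work on a copy
    for _ in range(num_freq_masks):    # mask a random band of mel bins
        w = rng.randint(0, min(freq_width, n_mels))
        f0 = rng.randint(0, n_mels - w)
        for f in range(f0, f0 + w):
            out[f] = [0.0] * n_frames
    for _ in range(num_time_masks):    # mask a random span of frames
        w = rng.randint(0, min(time_width, n_frames))
        t0 = rng.randint(0, n_frames - w)
        for row in out:
            for t in range(t0, t0 + w):
                row[t] = 0.0
    return out
```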


[2] 2601.09837

Distributed Hypothesis Testing Under A Covertness Constraint

We study distributed hypothesis testing under a covertness constraint in the non-alert situation, which requires that under the null-hypothesis an external warden be unable to detect whether communication between the sensor and the decision center is taking place. We characterize the achievable Stein exponent of this setup when the channel from the sensor to the decision center is a partially-connected discrete memoryless channel (DMC), i.e., when certain output symbols can only be induced by some of the inputs. The Stein exponent in this case does not depend on the specific transition law of the DMC and equals Shalaby and Papamarcou's exponent without a warden but where the sensor can send $k$ noise-free bits to the decision center, for $k$ a function that is sublinear in the observation length $n$. For fully-connected DMCs, we propose an achievable Stein exponent and show that it can improve over the local exponent at the decision center. All our coding schemes do not require that the sensor and decision center share a common secret key, as commonly assumed in covert communication. Moreover, in our schemes the divergence covertness constraint vanishes (almost) exponentially fast in the observation length $n$, again, an atypical behaviour for covert communication.


[3] 2601.09917

Collision Avoidance for Non-Cooperative Multi-Swarm Coverage Control with Bounded Disturbance Measurements

This paper proposes a new algorithm for collision-free coverage control of multiple non-cooperating swarms in the presence of bounded disturbances. A new methodology is introduced that accounts for uncertainties in disturbance measurements; it is used to develop an algorithm that ensures collision-free motion in multi-swarm coverage control when disturbances are present and their measurements are subject to bounded uncertainty. The theoretical results are validated through simulations of multiple swarms that independently aim to cover a given region in an environment with disturbances.


[4] 2601.09992

Towards Native Intelligence: 6G-LLM Trained with Reinforcement Learning from NDT Feedback

Owing to its comprehensive understanding of upper-layer application requirements and the capabilities of practical communication systems, the 6G-LLM (6G domain large language model) offers a promising pathway toward realizing network native intelligence. Serving as the system orchestrator, the 6G-LLM drives a paradigm shift that fundamentally departs from existing rule-based approaches, which primarily rely on modular, experience-driven optimization. By contrast, the 6G-LLM substantially enhances network flexibility and adaptability. Nevertheless, current efforts to construct 6G-LLMs are constrained by their reliance on large-scale, meticulously curated, human-authored corpora, which are impractical to obtain in real-world scenarios. Moreover, purely offline-trained models lack the capacity for continual self-improvement, limiting their ability to adapt to the highly dynamic requirements of wireless communication environments. To overcome these limitations, we propose a novel training paradigm termed RLDTF (Reinforcement Learning from Digital Twin Feedback) for 6G-LLMs. This framework leverages network digital twins to generate reward signals based on orchestration outcomes, while employing reinforcement learning to dynamically guide the model toward optimal decision-making. Furthermore, we introduce a weighted token mechanism to improve output accuracy. Comprehensive experimental results demonstrate that our proposed framework significantly outperforms state-of-the-art baselines in orchestration accuracy and solution optimality.


[5] 2601.09998

Extremum Seeking Nonovershooting Control of Strict-Feedback Systems Under Unknown Control Direction

This paper addresses the nonovershooting control problem for strict-feedback nonlinear systems with unknown control direction. We propose a method that integrates extremum seeking with Lie bracket-based design to achieve approximately nonovershooting tracking. The approach ensures that arbitrary reference trajectories can be tracked from below for any initial condition, with the overshoot reducible to arbitrarily small levels through parameter tuning. The method further provides a mechanism for enforcing high-relative-degree nonovershooting constraints in safety-critical scenarios involving unknown control directions.


[6] 2601.10013

Clustering-Based User Selection in Federated Learning: Metadata Exploitation for 3GPP Networks

Federated learning (FL) enables collaborative model training without sharing raw user data, but conventional simulations often rely on unrealistic data partitioning and current user selection methods ignore data correlation among users. To address these challenges, this paper proposes a metadata-driven FL framework. We first introduce a novel data partition model based on a homogeneous Poisson point process (HPPP), capturing both heterogeneity in data quantity and natural overlap among user datasets. Building on this model, we develop a clustering-based user selection strategy that leverages metadata, such as user location, to reduce data correlation and enhance label diversity across training rounds. Extensive experiments on FMNIST and CIFAR-10 demonstrate that the proposed framework improves model performance, stability, and convergence in non-IID scenarios, while maintaining comparable performance under IID settings. Furthermore, the method shows pronounced advantages when the number of selected users per round is small. These findings highlight the framework's potential for enhancing FL performance in realistic deployments and guiding future standardization.
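An HPPP-based partition with natural dataset overlap can be illustrated with a toy sketch (function names, rates, and the coverage-radius assignment rule are our assumptions, not the paper's model):

```python
import math
import random

def poisson(lam, rng):
    """Knuth's Poisson sampler (adequate for moderate rates)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def hppp_partition(lam_users=10, lam_data=100, radius=0.15, seed=0):
    """Toy HPPP data partition: users and data points are drawn from
    homogeneous Poisson point processes on the unit square; each data
    point is held by every user within `radius`, so user datasets
    naturally overlap and vary in size."""
    rng = random.Random(seed)
    n_users = max(1, poisson(lam_users, rng))
    n_data = poisson(lam_data, rng)
    users = [(rng.random(), rng.random()) for _ in range(n_users)]
    data = [(rng.random(), rng.random()) for _ in range(n_data)]
    parts = {u: [] for u in range(n_users)}
    for i, (x, y) in enumerate(data):
        for u, (ux, uy) in enumerate(users):
            if (x - ux) ** 2 + (y - uy) ** 2 <= radius ** 2:
                parts[u].append(i)
    return parts
```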


[7] 2601.10044

Event-Driven Deep RL Dispatcher for Post-Storm Distribution System Restoration

Natural hazards such as hurricanes and floods damage power grid equipment, forcing operators to replan restoration repeatedly as new information becomes available. This paper develops a deep reinforcement learning (DRL) dispatcher that serves as a real-time decision engine for crew-to-repair assignments. We model restoration as a sequential, information-revealing process and learn an actor-critic policy over compact features such as component status, travel/repair times, crew availability, and marginal restoration value. A feasibility mask blocks unsafe or inoperable actions, i.e., those that would violate power flow limits, switching rules, or crew-time constraints, before they are applied. To provide realistic runtime inputs without relying on heavy solvers, we use lightweight surrogates for wind and flood intensities, fragility-based failure, spatial clustering of damage, access impairments, and progressive ticket arrivals. In simulated hurricane and flood events, the learned policy updates crew decisions in real time as new field reports arrive. Because the runtime logic is lightweight, it improves online performance (energy-not-supplied, critical-load restoration time, and travel distance) compared with mixed-integer programs and standard heuristics. The proposed approach is tested on the IEEE 13- and 123-bus feeders with mixed hurricane/flood scenarios.
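The feasibility-mask idea can be sketched with a minimal crew-time check (the field names and the single shift-time rule are illustrative; the paper's mask also encodes power flow and switching constraints):

```python
def feasibility_mask(crews, tickets):
    """Build a 0/1 mask over crew-to-ticket assignments.
    crews: list of dicts {'available': bool, 'hours_left': float}
    tickets: list of dicts {'travel_h': float, 'repair_h': float, 'open': bool}
    Action (c, t) is feasible only if the crew is free, the ticket is
    open, and travel + repair fits in the crew's remaining shift."""
    mask = []
    for c in crews:
        row = []
        for t in tickets:
            ok = (c['available'] and t['open']
                  and c['hours_left'] >= t['travel_h'] + t['repair_h'])
            row.append(1 if ok else 0)
        mask.append(row)
    return mask
```

At decision time, the policy's action probabilities are multiplied by this mask (and renormalized) so infeasible assignments are never sampled.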


[8] 2601.10060

Microwave Linear Analog Computer (MiLAC)-aided Multiuser MISO: Fundamental Limits and Beamforming Design

As wireless communication systems evolve toward the 6G era, ultra-massive/gigantic MIMO is envisioned as a key enabling technology. Recently, microwave linear analog computer (MiLAC) has emerged as a promising approach to realize beamforming entirely in the analog domain, thereby alleviating the scalability challenges associated with gigantic MIMO. In this paper, we investigate the fundamental beamforming flexibility and design of lossless and reciprocal MiLAC-aided beamforming for MU-MISO systems. We first provide a rigorous characterization of the set of beamforming matrices achievable by MiLAC. Based on this characterization, we prove that MiLAC-aided beamforming does not generally achieve the full flexibility of digital beamforming, while offering greater flexibility than conventional phase-shifter-based analog beamforming. Furthermore, we propose a hybrid digital-MiLAC architecture and show that it achieves digital beamforming flexibility when the number of radio frequency (RF) chains equals the number of data streams, halving that required by conventional hybrid beamforming. We then formulate the MiLAC-aided sum-rate maximization problem for MU-MISO systems. To solve the problem efficiently, we reformulate the MiLAC-related constraints as a convex linear matrix inequality and establish a low-dimensional subspace property that significantly reduces the problem dimension. Leveraging these results, we propose WMMSE-based algorithms for solving the resulting problem. Simulation results demonstrate that MiLAC-aided beamforming achieves performance close to that of digital beamforming in gigantic MIMO systems. Compared with hybrid beamforming, it achieves comparable or superior performance with lower hardware and computational complexity by avoiding symbol-level digital processing and enabling low-resolution digital-to-analog converters (DACs).


[9] 2601.10074

P-norm based Fractional-Order Robust Subband Adaptive Filtering Algorithm for Impulsive Noise and Noisy Input

Building upon the mean p-power error (MPE) criterion, the normalized subband p-norm (NSPN) algorithm demonstrates superior robustness in $\alpha$-stable noise environments ($1 < \alpha \leq 2$) through effective utilization of the low-order moments hidden in robust loss functions. Nevertheless, its performance degrades significantly when processing noisy input or additive noise characterized by $\alpha$-stable processes ($0 < \alpha \leq 1$). To overcome these limitations, we propose a novel fractional-order NSPN (FoNSPN) algorithm that incorporates the fractional-order stochastic gradient descent (FoSGD) method into the MPE framework. Additionally, this paper also analyzes the convergence range of its step-size and the theoretical domain of values for the fractional-order $\beta$, and establishes the theoretical steady-state mean square deviation (MSD) model. Simulations conducted in diverse impulsive noise environments confirm the superiority of the proposed FoNSPN algorithm against existing state-of-the-art algorithms.


[10] 2601.10078

Nearest Kronecker Product Decomposition Based Subband Adaptive Filter: Algorithms and Applications

Recently, the nearest Kronecker product (NKP) decomposition-based normalized least mean square (NLMS-NKP) algorithm has demonstrated superior convergence performance compared to the conventional NLMS algorithm. However, its convergence rate exhibits significant degradation when processing highly correlated input signals. To address this problem, we propose a type-I NKP-based normalized subband adaptive filter (NSAF) algorithm, namely NSAF-NKP-I. Nevertheless, this algorithm incurs substantially higher computational overhead than the NLMS-NKP algorithm. Remarkably, our enhanced type-II NKP-based NSAF (NSAF-NKP-II) algorithm achieves equivalent convergence performance while substantially reducing computational complexity. Furthermore, to enhance robustness against impulsive noise interference, we develop two robust variants: the maximum correntropy criterion-based robust NSAF-NKP (RNSAF-NKP-MCC) and logarithmic criterion-based robust NSAF-NKP (RNSAF-NKP-LC) algorithms. Additionally, detailed analyses of computational complexity, step-size range, and theoretical steady-state performance are provided for the proposed algorithms. To enhance the practicability of the NSAF-NKP-II algorithm in complex nonlinear environments, we further devise two nonlinear implementations: the trigonometric functional link network-based NKP-NSAF (TFLN-NSAF-NKP) and Volterra series expansion-based NKP-NSAF (Volterra-NKP-NSAF) algorithms. In active noise control (ANC) systems, we further propose the filtered-x NSAF-NKP-II (NKP-FxNSAF) algorithm. Simulation experiments in echo cancellation, sparse system identification, nonlinear processing, and ANC scenarios are conducted to validate the superiority of the proposed algorithms over existing state-of-the-art counterparts.
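The nearest Kronecker product underlying these algorithms is classically computed via Van Loan's rearrangement and a rank-1 SVD; a generic sketch (textbook construction, not the paper's adaptive update):

```python
import numpy as np

def nkp(W, m1, n1, m2, n2):
    """Nearest Kronecker product: find A (m1 x n1) and B (m2 x n2)
    minimizing ||W - kron(A, B)||_F via Van Loan's rearrangement."""
    # Row (i, j) of R holds block W[i*m2:(i+1)*m2, j*n2:(j+1)*n2],
    # so that R(kron(A, B)) = vec(A) vec(B)^T (row-major vec).
    R = np.empty((m1 * n1, m2 * n2))
    for i in range(m1):
        for j in range(n1):
            R[i * n1 + j] = W[i*m2:(i+1)*m2, j*n2:(j+1)*n2].ravel()
    # Best rank-1 approximation of R gives the optimal factors.
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    A = np.sqrt(s[0]) * U[:, 0].reshape(m1, n1)
    B = np.sqrt(s[0]) * Vt[0].reshape(m2, n2)
    return A, B
```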


[11] 2601.10095

On the Computation and Approximation of Backward Reachable Sets for Max-Plus Linear Systems using Polyhedra

This paper investigates reachability analysis for max-plus linear systems (MPLS), an important class of dynamical systems that model synchronization and delay phenomena in timed discrete-event systems. We specifically focus on backward reachability analysis, i.e., determining the set of states that can reach a given target set within a certain number of steps. Computing backward reachable sets presents significant challenges due to the non-convexity of max-plus dynamics and the complexity of set complement operations. To address these challenges, we propose a novel approximation framework that efficiently computes backward reachable sets by exploiting the structure of tropical polyhedra. Our approach reformulates the problem as a sequence of symbolic operations and approximates non-convex target sets through closure operations on unions of tropical polyhedra. We develop a systematic algorithm that constructs both outer (M-form) and inner (V-form) representations of the resulting sets, incorporating extremal filtering to reduce computational complexity. The proposed method offers a scalable alternative to traditional DBM-based approaches, enabling reliable approximate backward reachability analysis for general target regions in MPLS.
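The max-plus dynamics themselves are simple to state; a toy forward-simulation membership check conveys the underlying recursion (this is not the paper's tropical-polyhedra algorithm, which manipulates set representations symbolically, and the box target set is our simplification):

```python
NEG_INF = float('-inf')

def maxplus_mv(A, x):
    """Max-plus matrix-vector product: y_i = max_j (A[i][j] + x[j])."""
    return [max(a + xj for a, xj in zip(row, x)) for row in A]

def reaches_target(A, x, target, steps):
    """Check whether the trajectory of x(k+1) = A (x) x(k) enters the
    box `target` (list of (lo, hi) per coordinate) within `steps`
    iterations, i.e., whether x lies in the k-step backward reachable
    set of the box for some k <= steps."""
    for _ in range(steps + 1):
        if all(lo <= xi <= hi for xi, (lo, hi) in zip(x, target)):
            return True
        x = maxplus_mv(A, x)
    return False
```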


[12] 2601.10153

Leveraging Digital Twin Technologies: All-Photonics Networks-as-a-Service for Data Center Xchange in the Era of AI [Invited Tutorial]

This paper presents a data center exchange (Data Center Xchange, DCX) architecture for all-photonics networks-as-a-service in distributed data center infrastructures, enabling the creation of a virtual large-scale data center by directly interconnecting distributed data centers in metropolitan areas. Key requirements for such an architecture are identified: support for low-latency operations, scalability, reliability, and flexibility within a single network architecture; the ability to add new operator-driven automation functionalities based on an open networking approach; and the ability to control and manage remotely deployed transponders connected via access links with unknown physical parameters. We propose a set of technologies that enable digital twin operations for optical networks, including a cloud-native architecture for coherent transceivers, remote transponder control, fast end-to-end optical path provisioning, transceiver-based physical-parameter estimation incorporating digital longitudinal monitoring, and optical line system calibration, demonstrating their feasibility through field validations.


[13] 2601.10178

HyMGP: A Customized MILP-Based Tool for Techno-Economic Planning of Islanded Microgrids

This paper presents a customized microgrid planning algorithm and tool, HyMGP, for remote sites in arid regions, which is formulated as a Mixed Integer Linear Programming (MILP) problem. HyMGP is compared with HOMER Pro to evaluate its performance in optimizing the sizing of microgrid components, including photovoltaic panels (PVs), vertical axis wind turbines (VAWTs), and battery energy storage systems (BESS), for remote and off-grid applications. The study focuses on a standalone microgrid in Saudi Arabia, considering high solar irradiance, limited wind availability, and a constant load profile composed of continuous cathodic protection and daytime cooling. In the simulation environment, comparisons with HOMER solutions demonstrate the advantages of HyMGP, which provides optimal and more flexible solutions by allowing user-defined component specifications and strictly enforcing all constraints. Further analysis shows that incorporating wind turbines reduces the Net Present Cost (NPC) by decreasing the required PV and battery capacities. Increasing battery autonomy leads to a higher NPC in both PV-only and hybrid systems due to the need for larger storage. Finally, lithium iron phosphate (Li-ion LFP) batteries are found to be more cost-effective than lead acid, offering lower NPCs due to their longer lifespan, deeper discharge capability, and fewer replacement cycles.
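The NPC comparisons rest on standard discounted-cost accounting; a minimal sketch of the metric (a textbook formula with illustrative argument names, not HyMGP's internal cost model):

```python
def net_present_cost(capex, annual_opex, annual_replacement, rate, years):
    """Net Present Cost: upfront capital plus the discounted sum of
    yearly operating and replacement costs over the project horizon."""
    npc = capex
    for y in range(1, years + 1):
        npc += (annual_opex + annual_replacement) / (1.0 + rate) ** y
    return npc
```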


[14] 2601.10179

Service Provisioning and Path Planning with Obstacle Avoidance for Low-Altitude Wireless Networks

This paper investigates the three-dimensional (3D) deployment of uncrewed aerial vehicles (UAVs) as aerial base stations in heterogeneous communication networks under constraints imposed by diverse ground obstacles. Given the diverse data demands of user equipments (UEs), a user satisfaction model is developed to provide personalized services. In particular, when a UE is located within a ground obstacle, the UAV must approach the obstacle boundary to ensure reliable service quality. Considering constraints such as UAV failures due to battery depletion, heterogeneous UEs, and obstacles, we aim to maximize overall user satisfaction by jointly optimizing the 3D trajectories of UAVs, transmit beamforming vectors, and binary association indicators between UAVs and UEs. To address the complexity and dynamics of the problem, a block coordinate descent method is adopted to decompose it into two subproblems. The beamforming subproblem is efficiently addressed via a bisection-based water-filling algorithm. For the trajectory and association subproblem, we design a deep reinforcement learning algorithm based on proximal policy optimization to learn an adaptive control policy. Simulation results demonstrate that the proposed scheme outperforms baseline schemes in terms of convergence speed and overall system performance. Moreover, it achieves efficient association and accurate obstacle avoidance.
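The bisection-based water-filling step can be sketched in its generic textbook form (the paper's exact per-UE formulation may differ; gains and the power budget here are illustrative):

```python
def waterfill(gains, p_total, tol=1e-9):
    """Water-filling power allocation: maximize sum_i log(1 + g_i p_i)
    subject to sum_i p_i = P. The optimum is p_i = max(0, mu - 1/g_i);
    the water level mu is found by bisection, since the total power
    used is monotonically increasing in mu."""
    lo, hi = 0.0, p_total + max(1.0 / g for g in gains)
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        if used > p_total:
            hi = mu
        else:
            lo = mu
    mu = 0.5 * (lo + hi)
    return [max(0.0, mu - 1.0 / g) for g in gains]
```

Note how the weak channel (gain 0.1 below) is switched off entirely when the budget is tight, which is the qualitative behavior water-filling is known for.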


[15] 2601.10189

Model Predictive Control of Thermo-Hydraulic Systems Using Primal Decomposition

Decarbonizing the global energy supply requires more efficient heating and cooling systems. Model predictive control enhances the operation of cooling and heating systems but depends on accurate system models, often based on control volumes. We present an automated framework including time discretization to generate model predictive controllers for such models. To ensure scalability, a primal decomposition exploiting the model structure is applied. The approach is validated on an underground heating system with varying numbers of states, demonstrating the primal decomposition's advantage regarding scalability.


[16] 2601.10207

BeamCKMDiff: Beam-Aware Channel Knowledge Map Construction via Diffusion Transformer

Channel knowledge map (CKM) is emerging as a critical enabler for environment-aware 6G networks, offering a site-specific database to significantly reduce pilot overhead. However, existing CKM construction methods typically rely on sparse sampling measurements and are restricted to either omnidirectional maps or discrete codebooks, hindering the exploitation of beamforming gain. To address these limitations, we propose BeamCKMDiff, a generative framework for constructing high-fidelity CKMs conditioned on arbitrary continuous beamforming vectors without site-specific sampling. Specifically, we incorporate a novel adaptive layer normalization (adaLN) mechanism into the noise prediction network of the Diffusion Transformer (DiT). This mechanism injects continuous beam embeddings as global control parameters, effectively steering the generative process to capture the complex coupling between beam patterns and environmental geometries. Simulation results demonstrate that BeamCKMDiff significantly outperforms state-of-the-art baselines, achieving superior reconstruction accuracy in capturing main lobes and side lobes.
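The adaLN modulation idea can be sketched in a few lines (a generic DiT-style conditioning pattern with illustrative weight shapes, not BeamCKMDiff's actual network):

```python
import numpy as np

def ada_layer_norm(x, cond_emb, W_scale, W_shift, eps=1e-5):
    """adaLN sketch: normalize token features, then modulate them with
    a scale and shift regressed from the conditioning embedding, so a
    continuous beam vector steers the network globally.
    x: (n_tokens, d) features; cond_emb: (c,) beam embedding;
    W_scale, W_shift: (c, d) learned projection matrices."""
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    x_hat = (x - mu) / np.sqrt(var + eps)   # per-token layer norm
    gamma = cond_emb @ W_scale              # (d,) scale from condition
    beta = cond_emb @ W_shift               # (d,) shift from condition
    return (1.0 + gamma) * x_hat + beta
```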


[17] 2601.10250

Cell Behavior Video Classification Challenge, a benchmark for computer vision methods in time-lapse microscopy

The classification of microscopy videos capturing complex cellular behaviors is crucial for understanding and quantifying the dynamics of biological processes over time. However, it remains a frontier in computer vision, requiring approaches that effectively model the shape and motion of objects without rigid boundaries, extract hierarchical spatiotemporal features from entire image sequences rather than static frames, and account for multiple objects within the field of view. To this end, we organized the Cell Behavior Video Classification Challenge (CBVCC), benchmarking 35 methods based on three approaches: classification of tracking-derived features, end-to-end deep learning architectures to directly learn spatiotemporal features from the entire video sequence without explicit cell tracking, or ensembling tracking-derived with image-derived features. We discuss the results achieved by the participants and compare the potential and limitations of each approach, serving as a basis to foster the development of computer vision methods for studying cellular dynamics.


[18] 2601.10264

Sim2Real Deep Transfer for Per-Device CFO Calibration

Carrier Frequency Offset (CFO) estimation in Orthogonal Frequency Division Multiplexing (OFDM) systems faces significant performance degradation across heterogeneous software-defined radio (SDR) platforms due to uncalibrated hardware impairments. Existing deep neural network (DNN)-based approaches lack device-level adaptation, limiting their practical deployment. This paper proposes a Sim2Real transfer learning framework for per-device CFO calibration, combining simulation-driven pretraining with lightweight receiver adaptation. A backbone DNN is pre-trained on synthetic OFDM signals incorporating parametric hardware distortions (e.g., phase noise, IQ imbalance), enabling generalized feature learning without costly cross-device data collection. Subsequently, only the regression layers are fine-tuned using $1,000$ real frames per target device, preserving hardware-agnostic knowledge while adapting to device-specific impairments. Experiments across three SDR families (USRP B210, USRP N210, HackRF One) achieve $30\times$ BER reduction compared to conventional CP-based methods under indoor multipath conditions. The framework bridges the simulation-to-reality gap for robust CFO estimation, enabling cost-effective deployment in heterogeneous wireless systems.
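The "freeze the backbone, adapt only the regression layers" recipe can be sketched with a closed-form stand-in (ridge least squares replaces the paper's gradient fine-tuning; all names and the scalar-frame setup below are illustrative):

```python
import numpy as np

def finetune_head(backbone, frames, cfo_labels, reg=1e-3):
    """Per-device adaptation sketch: keep the pretrained backbone
    frozen and refit only a linear regression head on a small
    calibration set of real frames from the target device."""
    feats = np.stack([backbone(f) for f in frames])    # (N, d) frozen features
    X = np.hstack([feats, np.ones((len(frames), 1))])  # append bias column
    y = np.asarray(cfo_labels)
    # Ridge least squares: w = (X^T X + reg I)^{-1} X^T y
    w = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ y)
    def head(frame):
        f = np.append(backbone(frame), 1.0)
        return float(f @ w)
    return head
```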


[19] 2601.10292

Single-Feed Circularly Polarized Super Realized Gain Antenna

This paper presents a super realized gain, circularly polarized strip-crossed dipole antenna operating at 3.5 GHz. Superdirective behavior is achieved by leveraging strong inter-element mutual coupling through careful adjustment of the strip dimensions. The antenna features a single driven element, with the other element passively loaded with a reactive impedance. The structure is optimized to maximize left-hand circularly polarized (LHCP) realized gain, ensuring high polarization purity and good impedance matching. The optimized design exhibits a 50 $\Omega$ impedance bandwidth of 3.29 - 4.17 GHz (23.75%) and an axial-ratio bandwidth of 3.43 - 3.57 GHz (4%). At 3.5 GHz, the antenna achieves a peak realized gain of 6.1 dB ($ka \approx 1.65$), with an axial ratio of 1.4 dB. These results demonstrate that circular polarization and superdirectivity can be simultaneously realized in a geometrically simple, low-profile ($0.15\lambda$) antenna, rendering it suitable for integration into compact sub-6~GHz wireless and sensing platforms.


[20] 2601.10331

Low-Complexity Blind Estimator of SNR and MSE for mmWave Multi-Antenna Communications

To enhance the robustness and resilience of wireless communication and meet performance requirements, various environment-reflecting metrics, such as the signal-to-noise ratio (SNR), are utilized as system parameters. To obtain these metrics, training signals such as pilot sequences are generally employed. However, the rapid fluctuations of the millimeter-wave (mmWave) propagation channel often degrade the accuracy of such estimations. To address this challenge, various blind estimators that operate without pilots have been considered as potential solutions. However, these algorithms often involve a training phase for machine learning or a large number of iterations, which implies prohibitive computational complexity, making them difficult to employ for real-time services and leaving the system less resilient to dynamic environment variations. In this paper, we propose blind estimators for average noise power, signal power, SNR, and mean-square error (MSE) that do not require knowledge of the ground-truth signal or involve high computational complexity. The proposed algorithm leverages the inherent sparsity of the mmWave channel in the beamspace domain, which makes the signal and noise power components more distinguishable.
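The separation of signal and noise power in the beamspace can be sketched as follows (our illustrative construction, not the paper's estimator: it assumes the signal occupies the strongest beams and uses the remainder as a noise reference):

```python
import numpy as np

def blind_snr(Y, k_sig=1):
    """Blind per-antenna SNR sketch exploiting beamspace sparsity.
    A unitary FFT across the antenna axis concentrates the signal in
    a few beams; the weakest beams then estimate the noise floor.
    Y: (n_ant, n_snapshots) complex received samples."""
    n_ant = Y.shape[0]
    B = np.fft.fft(Y, axis=0) / np.sqrt(n_ant)     # unitary beamspace transform
    p = np.sort(np.mean(np.abs(B) ** 2, axis=1))   # per-beam average power
    noise = p[:-k_sig].mean()                      # weakest beams -> noise power
    signal = max(p[-k_sig:].sum() - k_sig * noise, 0.0)
    return signal / (n_ant * noise)                # average per-antenna SNR
```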


[21] 2601.10412

An effective interactive brain cytoarchitectonic parcellation framework using pretrained foundation model

Cytoarchitectonic mapping provides anatomically grounded parcellations of brain structure and forms a foundation for integrative, multi-modal neuroscience analyses. These parcellations are defined based on the shape, density, and spatial arrangement of neuronal cell bodies observed in histological imaging. Recent works have demonstrated the potential of using deep learning models toward fully automatic segmentation of cytoarchitectonic areas in large-scale datasets, but performance is mainly constrained by the scarcity of training labels and the variability of staining and imaging conditions. To address these challenges, we propose an interactive cytoarchitectonic parcellation framework that leverages the strong transferability of the DINOv3 vision transformer. Our framework combines (i) multi-layer DINOv3 feature fusion, (ii) a lightweight segmentation decoder, and (iii) real-time user-guided training from sparse scribbles. This design enables rapid human-in-the-loop refinement while maintaining high segmentation accuracy. Compared with training an nnU-Net from scratch, transfer learning with DINOv3 yields markedly improved performance. We also show that features extracted by DINOv3 exhibit clear anatomical correspondence and demonstrate the method's practical utility for brain region segmentation using sparse labels. These results highlight the potential of foundation-model-driven interactive segmentation for scalable and efficient cytoarchitectonic mapping.


[22] 2601.10576

Achievable Degrees of Freedom Analysis and Optimization in Massive MIMO via Characteristic Mode Analysis

Massive multiple-input multiple-output (MIMO) is esteemed as a critical technology in 6G communications, providing large degrees of freedom (DoF) to improve multiplexing gain. This paper introduces characteristic mode analysis (CMA) to derive the achievable DoF. Unlike existing works primarily focusing on the DoF of the wireless channel, our DoF analysis also involves the excitation and radiation properties of antennas, which influence the number of independent data streams a MIMO system can communicate. Specifically, we model the excitation and radiation properties of transceiver antennas using CMA. The CMA-based DoF analysis framework is established and the achievable DoF is derived. A characteristic mode optimization problem of antennas is then formulated to maximize the achievable DoF. A case study where reconfigurable holographic surface (RHS) antennas are deployed at the transceiver is investigated, and a CMA-based genetic algorithm is proposed to solve the above problem. By changing the characteristic-mode electric field and surface current distribution of the RHS, the achievable DoF is enhanced. Full-wave simulation verifies the theoretical analysis of the achievable DoF and shows that, via the reconfiguration of the RHS based on the proposed algorithm, the achievable DoF is improved.


[23] 2601.10607

Multi-Objective Pareto-Front Optimization for Efficient Adaptive VVC Streaming

Adaptive video streaming has facilitated improved video streaming over the past years. A balance among coding performance objectives such as bitrate, video quality, and decoding complexity is required to achieve efficient, content- and codec-dependent, adaptive video streaming. This paper proposes a multi-objective Pareto-front (PF) optimization framework to construct quality-monotonic, content-adaptive bitrate ladders for Versatile Video Coding (VVC) streaming that jointly optimize video quality, bitrate, and decoding time, which is used as a practical proxy for decoding energy. Two strategies are introduced: the Joint Rate-Quality-Time Pareto Front (JRQT-PF) and the Joint Quality-Time Pareto Front (JQT-PF), each exploring different tradeoff formulations and objective prioritizations. The ladders are constructed under quality monotonicity constraints during adaptive streaming to ensure a consistent Quality of Experience (QoE). Experiments are conducted on a large-scale UHD dataset (Inter-4K), with quality assessed using PSNR, VMAF, and XPSNR, and complexity measured via decoding time and energy consumption. The JQT-PF method achieves 11.76% average bitrate savings while reducing average decoding time by 0.29% to maintain the same XPSNR, compared to a widely-used fixed ladder. More aggressive configurations yield up to 27.88% bitrate savings at the cost of increased complexity. The JRQT-PF strategy, on the other hand, offers more controlled tradeoffs, achieving 6.38% bitrate savings and 6.17% decoding time reduction. This framework outperforms existing methods, including fixed ladders, VMAF- and XPSNR-based dynamic resolution selection, and complexity-aware benchmarks. The results confirm that PF optimization with decoding time constraints enables sustainable, high-quality streaming tailored to network and device capabilities.
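The core Pareto-front extraction over (bitrate, quality, decoding time) encodings can be sketched generically (a standard dominance filter; the paper's ladder construction adds quality-monotonicity constraints on top of this):

```python
def pareto_front(points):
    """Keep the non-dominated (bitrate, quality, decode_time) encodings.
    Point q dominates p if q has lower-or-equal bitrate, higher-or-equal
    quality, and lower-or-equal decoding time, with at least one strict
    improvement. Bitrate in kbps, quality e.g. XPSNR, time in seconds."""
    def dominates(q, p):
        return (q[0] <= p[0] and q[1] >= p[1] and q[2] <= p[2]
                and (q[0] < p[0] or q[1] > p[1] or q[2] < p[2]))
    return [p for p in points if not any(dominates(q, p) for q in points)]
```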


[24] 2601.10629

VoiceSculptor: Your Voice, Designed By You

Despite rapid progress in text-to-speech (TTS), open-source systems still lack truly instruction-following, fine-grained control over core speech attributes (e.g., pitch, speaking rate, age, emotion, and style). We present VoiceSculptor, an open-source unified system that bridges this gap by integrating instruction-based voice design and high-fidelity voice cloning in a single framework. It generates controllable speaker timbre directly from natural-language descriptions, supports iterative refinement via Retrieval-Augmented Generation (RAG), and provides attribute-level edits across multiple dimensions. The designed voice is then rendered into a prompt waveform and fed into a cloning model to enable high-fidelity timbre transfer for downstream speech synthesis. VoiceSculptor achieves open-source state-of-the-art (SOTA) on InstructTTSEval-Zh, and is fully open-sourced, including code and pretrained models, to advance reproducible instruction-controlled TTS research.


[25] 2601.10671

Safe Trajectory Gradient Flow Control of a Grid-Interfacing Inverter

Grid-interfacing inverters serve as the interface between renewable energy resources and the electric power grid, offering fast, programmable control capabilities. However, their operation is constrained by hardware limitations, such as bounds on the current magnitude. Existing control methods for these systems often neglect these constraints during controller design and instead rely on ad hoc limiters, which can introduce instability or degrade performance. In this work, we present a control framework that directly incorporates constraints into the control of a voltage-source inverter. We propose a safe trajectory gradient flow controller, which applies the safe gradient flow method to a rolling horizon trajectory optimization problem to ensure that the states remain within a safe set defined by the constraints while directing the trajectory towards an optimal equilibrium point of a nonlinear program. Simulation results demonstrate that our approach can drive the outputs of a simulated inverter system to optimal values and maintain state constraints, even when using a limited number of optimization steps per control cycle.


[26] 2601.09894

One-Cold Poisson Channel: A Simple Continuous-Time Channel with Zero Dispersion

We introduce the one-cold Poisson channel (OCPC), where the transmitter chooses one of several frequency bands to attenuate at a time. In particular, the perfect OCPC, where the number of bands is unlimited, is an extremely simple continuous-time memoryless channel. It has a capacity of 1, zero channel dispersion, and an information spectrum that is the degenerate distribution at 1. It is the only known nontrivial (discrete or continuous-time) memoryless channel with a closed-form formula for its optimal non-asymptotic error probability, making it the simplest channel in this sense. A potential application is optical communication with a tunable band rejection filter. Due to its simplicity, we may use it as a basic currency of information that is infinitely divisible, as an alternative to bits which are not infinitely divisible. OCPC with perfect feedback gives a generalization of prefix codes. We also study non-asymptotic coding and channel simulation results for the general OCPC.


[27] 2601.09916

Learning-Augmented Perfectly Secure Collaborative Matrix Multiplication

This paper presents a perfectly secure matrix multiplication (PSMM) protocol for multiparty computation (MPC) of $\mathrm{A}^{\top}\mathrm{B}$ over finite fields. The proposed scheme guarantees correctness and information-theoretic privacy against threshold-bounded, semi-honest colluding agents, under explicit local storage constraints. Our scheme encodes submatrices as evaluations of sparse masking polynomials and combines coefficient alignment with Beaver-style randomness to ensure perfect secrecy. We demonstrate that any colluding set of parties below the security threshold observes uniformly random shares, and that the recovery threshold is optimal, matching existing information-theoretic limits. Building on this framework, we introduce a learning-augmented extension that integrates tensor-decomposition-based local block multiplication, capturing both classical and learned low-rank methods. We demonstrate that the proposed learning-based PSMM preserves privacy and recovery guarantees for MPC, while providing scalable computational efficiency gains (up to $80\%$) as the matrix dimensions grow.


[28] 2601.09969

Interfacing Superconductor and Semiconductor Digital Electronics

Interface circuits are the key components that enable the hybrid integration of superconductor and semiconductor digital electronics. The design requirements of superconductor-semiconductor interface circuits vary depending on the application, such as high-performance classical computing, superconducting quantum computing, and digital signal processing. In this survey, various interface circuits are categorized based on the working principle and structure. The superconducting output drivers are explored, which are capable of converting and amplifying, e.g., single flux quantum (SFQ) voltage pulses, to voltage levels that semiconductor circuits can process. Several trade-offs between circuit- and system-level design parameters are examined. Accordingly, parameters such as the data rate, output voltage, power dissipation, layout area, thermal/heat load of cryogenic cables, and bit-error rate are considered.


[29] 2601.10041

Emergency Department Patient Flow Optimization with an Alternative Care Threshold Policy

Emergency department (ED) overcrowding and patient boarding represent critical systemic challenges that compromise care quality. We propose a threshold-based admission policy that redirects non-urgent patients to alternative care pathways, such as telemedicine, during peak congestion. The ED is modeled as a two-class $M/M/c$ preemptive-priority queuing system, where high-acuity patients are prioritized and low-acuity patients are subject to state-dependent redirection. Analyzed via a level-dependent Quasi-Birth-Death (QBD) process, the model determines the optimal threshold by maximizing a long-run time-averaged objective function comprising redirection-affected revenue and costs associated with patient balking and system occupancy. Numerical analysis using national healthcare data reveals that optimal policies are highly context-dependent. While rural EDs generally optimize at lower redirection thresholds, urban EDs exhibit performance peaks at moderate thresholds. Results indicate that our optimal policy yields significant performance gains of up to $4.84\%$ in rural settings and $5.90\%$ in urban environments. This research provides a mathematically rigorous framework for balancing clinical priority with operational efficiency across diverse ED settings.
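The threshold search can be sketched with a simplified single-class birth-death proxy (the paper's model is a two-class preemptive-priority $M/M/c$ analyzed as a level-dependent QBD; the rates, revenue, and holding-cost values below are illustrative):

```python
def mmc_threshold(lam, mu, c, T, revenue=1.0, hold_cost=0.1):
    """Toy stationary analysis of an M/M/c queue in which arrivals are
    redirected to an alternative care pathway once the system holds T patients.
    Returns a revenue-minus-cost objective for threshold T."""
    # Birth-death chain on states 0..T: arrivals at rate lam while n < T,
    # service completions at rate min(n, c) * mu.
    probs = [1.0]
    for n in range(1, T + 1):
        probs.append(probs[-1] * lam / (min(n, c) * mu))
    Z = sum(probs)
    probs = [p / Z for p in probs]
    throughput = lam * (1 - probs[T])            # admitted (non-redirected) rate
    occupancy = sum(n * p for n, p in enumerate(probs))
    return revenue * throughput - hold_cost * occupancy

# Scan candidate thresholds and keep the maximizer of the objective.
def best_threshold(lam, mu, c, T_max=30, **kw):
    return max(range(1, T_max), key=lambda T: mmc_threshold(lam, mu, c, T, **kw))
```

Consistent with the abstract's finding that optimal policies are context-dependent, raising the holding cost (a proxy for congestion pressure) pushes the optimal threshold down, i.e., toward earlier redirection.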


[30] 2601.10070

Comparative Evaluation of Deep Learning-Based and WHO-Informed Approaches for Sperm Morphology Assessment

Assessment of sperm morphological quality remains a critical yet subjective component of male fertility evaluation, often limited by inter-observer variability and resource constraints. This study presents a comparative biomedical artificial intelligence framework evaluating an image-based deep learning model (HuSHeM) alongside a clinically grounded baseline derived from World Health Organization criteria augmented with the Systemic Inflammation Response Index (WHO(+SIRI)). The HuSHeM model was trained on high-resolution sperm morphology images and evaluated using an independent clinical cohort. Model performance was assessed using discrimination, calibration, and clinical utility analyses. The HuSHeM model demonstrated higher discriminative performance, as reflected by an increased area under the receiver operating characteristic curve with relatively narrow confidence intervals compared to WHO(+SIRI). Precision-recall analysis further indicated improved performance under class imbalance, with higher precision-recall area values across evaluated thresholds. Calibration analysis indicated closer agreement between predicted probabilities and observed outcomes for HuSHeM, while decision curve analysis suggested greater net clinical benefit across clinically relevant threshold probabilities. These findings suggest that image-based deep learning may offer improved predictive reliability and clinical utility compared with traditional rule-based and inflammation-augmented criteria. The proposed framework supports objective and reproducible assessment of sperm morphology and may serve as a decision-support tool within fertility screening and referral workflows. The proposed models are intended as decision-support or referral tools and are not designed to replace clinical judgment or laboratory assessment.


[31] 2601.10228

Optimizing Multimodal LLMs for Egocentric Video Understanding: A Solution for the HD-EPIC VQA Challenge

Multimodal Large Language Models (MLLMs) struggle with complex video QA benchmarks like HD-EPIC VQA due to ambiguous queries/options, poor long-range temporal reasoning, and non-standardized outputs. We propose a framework integrating query/choice pre-processing, domain-specific Qwen2.5-VL fine-tuning, a novel Temporal Chain-of-Thought (T-CoT) prompting for multi-step reasoning, and robust post-processing. This system achieves 41.6% accuracy on HD-EPIC VQA, highlighting the need for holistic pipeline optimization in demanding video understanding. Our code, fine-tuned models are available at this https URL.


[32] 2601.10324

SRAW-Attack: Space-Reweighted Adversarial Warping Attack for SAR Target Recognition

Synthetic aperture radar (SAR) imagery exhibits intrinsic information sparsity due to its unique electromagnetic scattering mechanism. Despite the widespread adoption of deep neural network (DNN)-based SAR automatic target recognition (SAR-ATR) systems, they remain vulnerable to adversarial examples and tend to over-rely on background regions, leading to degraded adversarial robustness. Existing adversarial attacks for SAR-ATR often require visually perceptible distortions to achieve effective performance, thereby necessitating an attack method that balances effectiveness and stealthiness. In this paper, a novel attack method termed Space-Reweighted Adversarial Warping (SRAW) is proposed, which generates adversarial examples through optimized spatial deformation with reweighted budgets across foreground and background regions. Extensive experiments demonstrate that SRAW significantly degrades the performance of state-of-the-art SAR-ATR models and consistently outperforms existing methods in terms of imperceptibility and adversarial transferability. Code is made available at this https URL.


[33] 2601.10379

Online identification of nonlinear time-varying systems with uncertain information

Digital twins (DTs), serving as the core enablers for real-time monitoring and predictive maintenance of complex cyber-physical systems, impose critical requirements on their virtual models: high predictive accuracy, strong interpretability, and online adaptive capability. However, existing techniques struggle to meet these demands simultaneously: Bayesian methods excel in uncertainty quantification but lack model interpretability, while interpretable symbolic identification methods (e.g., SINDy) are constrained by their offline, batch-processing nature, which makes real-time updates challenging. To bridge this semantic and computational gap, this paper proposes a novel Bayesian Regression-based Symbolic Learning (BRSL) framework. The framework formulates online symbolic discovery as a unified probabilistic state-space model. By incorporating sparse horseshoe priors, model selection is transformed into a Bayesian inference task, enabling simultaneous system identification and uncertainty quantification. Furthermore, we derive an online recursive algorithm with a forgetting factor and establish precise recursive conditions that guarantee the well-posedness of the posterior distribution. These conditions also function as real-time monitors for data utility, enhancing algorithmic robustness. Additionally, a rigorous convergence analysis is provided, demonstrating the convergence of parameter estimates under persistent excitation conditions. Case studies validate the effectiveness of the proposed framework in achieving interpretable, probabilistic prediction and online learning.


[34] 2601.10391

Codebook Design for Limited Feedback in Near-Field XL-MIMO Systems

In this paper, we study efficient codebook design for limited feedback in extremely large-scale multiple-input-multiple-output (XL-MIMO) frequency division duplexing (FDD) systems. It is worth noting that existing codebook designs for XL-MIMO, such as the polar-domain codebook, have not adequately accounted for the user (location) distribution in practice, thereby incurring excessive feedback overhead. To address this issue, we propose in this paper a novel and efficient feedback codebook tailored to the user distribution. To this end, we first consider a typical scenario where users are uniformly distributed within a specific polar region, based on which a sum-rate maximization problem is formulated to jointly optimize angle-range samples and bit allocation among angle/range feedback. This problem is challenging to solve due to the lack of a closed-form expression for the received power in terms of angle and range samples. By leveraging a Voronoi partitioning approach, we show that uniform angle sampling is optimal for received power maximization. For the more challenging range sampling design, we obtain a tight lower-bound on the received power and show that geometric sampling, where the ratio between adjacent samples is constant, can maximize the lower bound and thus serves as a high-quality suboptimal solution. We then extend the proposed framework to accommodate more general non-uniform user distributions via an alternating sampling method. Furthermore, theoretical analysis reveals that as the array size increases, the optimal allocation of feedback bits increasingly favors range samples at the expense of angle samples. Finally, numerical results validate the superior rate performance and robustness of the proposed codebook design under various system setups, achieving significant gains over benchmark schemes, including the widely used polar-domain codebook.
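The two sampling rules highlighted in the abstract are simple to state in code (a minimal sketch; the sector bounds and sample counts are illustrative, and the bit-allocation optimization is not shown):

```python
import numpy as np

def uniform_angle_samples(M):
    """Uniform angle sampling over [-pi/2, pi/2), the rule shown to be
    optimal for received-power maximization via Voronoi partitioning."""
    return -np.pi / 2 + np.pi * (np.arange(M) + 0.5) / M

def geometric_range_samples(r_min, r_max, K):
    """Geometric range sampling: a constant ratio between adjacent samples,
    the high-quality suboptimal rule obtained from the lower-bound analysis."""
    ratio = (r_max / r_min) ** (1.0 / (K - 1))
    return r_min * ratio ** np.arange(K)
```

Geometric sampling places range samples densely near the array (where near-field effects are strongest) and sparsely far away, while the angle grid stays uniform.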


[35] 2601.10453

Stable Differentiable Modal Synthesis for Learning Nonlinear Dynamics

Modal methods are a long-standing approach to physical modelling synthesis. Extensions to nonlinear problems are possible, including the case of a high-amplitude vibration of a string. A modal decomposition leads to a densely coupled nonlinear system of ordinary differential equations. Recent work in scalar auxiliary variable techniques has enabled construction of explicit and stable numerical solvers for such classes of nonlinear systems. On the other hand, machine learning approaches (in particular neural ordinary differential equations) have been successful in modelling nonlinear systems automatically from data. In this work, we examine how scalar auxiliary variable techniques can be combined with neural ordinary differential equations to yield a stable differentiable model capable of learning nonlinear dynamics. The proposed approach leverages the analytical solution for linear vibration of system's modes so that physical parameters of a system remain easily accessible after the training without the need for a parameter encoder in the model architecture. As a proof of concept, we generate synthetic data for the nonlinear transverse vibration of a string and show that the model can be trained to reproduce the nonlinear dynamics of the system. Sound examples are presented.


[36] 2601.10605

A user subscription model in mobile radio access networks with network slicing

Network slicing is an architectural enabling technology that logically decouples the current cellular networks into infrastructure providers (InPs) and Network Slice Tenants (NSTs). The network resources (e.g., radio access resources at each cell) are owned by the InP, and are shared by the NSTs to provide a service to their mobile users. In this context, we proposed a business model that includes resource allocation and user subscription to NSTs in a competitive setting, and provides, among other things, closed-form expressions for the subscription indicators in equilibrium of each NST at each cell. This model relies on the widely adopted logit model to characterize user subscriptions. However, as a consequence of user mobility and radio propagation, some of the underlying assumptions in the logit model do not hold. Therefore, further research is needed to assess the accuracy of the results provided by the logit model in a mobile radio scenario. We carry out a thorough evaluation of the validity of the model by comparing its results against those obtained through computer simulation. Our simulation model includes complete and realistic characterizations of user mobility and radio propagation. From the results, we conclude that, in most cases, the logit model provides valid results in a mobile radio scenario.
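The logit model underlying the subscription indicators is the standard multinomial-logit choice rule, sketched below (the utilities and the sensitivity parameter `beta` are illustrative inputs, not the paper's calibration):

```python
import math

def logit_shares(utilities, beta=1.0):
    """Multinomial-logit subscription probabilities across NSTs at a cell:
    P_i = exp(beta * u_i) / sum_j exp(beta * u_j)."""
    expu = [math.exp(beta * u) for u in utilities]
    Z = sum(expu)
    return [e / Z for e in expu]
```

The shares always sum to one, and a tenant offering higher utility (e.g., better rate or price) attracts a larger fraction of subscribers; it is exactly these closed-form shares whose validity under mobility and realistic propagation the paper evaluates by simulation.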


[37] 2601.10676

Breaking the Storage-Bandwidth Tradeoff in Distributed Storage with Quantum Entanglement

This work investigates the use of quantum resources in distributed storage systems. Consider an $(n,k,d)$ distributed storage system in which a file is stored across $n$ nodes such that any $k$ nodes suffice to reconstruct the file. When a node fails, any $d$ helper nodes transmit information to a newcomer to rebuild the system. In contrast to the classical repair, where helper nodes transmit classical bits, we allow them to send classical information over quantum channels to the newcomer. The newcomer then generates its storage by performing appropriate measurements on the received quantum states. In this setting, we fully characterize the fundamental tradeoff between storage and repair bandwidth (total communication cost). Compared to classical systems, the optimal storage--bandwidth tradeoff can be significantly improved with the aid of quantum entanglement shared only among the surviving nodes, particularly at the minimum-storage regenerating point. Remarkably, we show that when $d \geq 2k-2$, there exists an operating point at which \textit{both storage and repair bandwidth are simultaneously minimized}. This phenomenon breaks the tradeoff in the classical setting and reveals a fundamentally new regime enabled by quantum communication.


[38] 2310.09126

Learning Physics-Informed Noise Models from Dark Frames for Low-Light Raw Image Denoising

Recently, the mainstream practice for training low-light raw image denoising methods has shifted towards employing synthetic data. Noise modeling, which focuses on characterizing the noise distribution of real-world sensors, profoundly influences the effectiveness and practicality of synthetic data. Currently, physics-based noise modeling struggles to characterize the entire real noise distribution, while learning-based noise modeling impractically depends on paired real data. In this paper, we propose a novel strategy: learning the noise model from dark frames instead of paired real data, to break down the data dependency. Based on this strategy, we introduce an efficient physics-informed noise neural proxy (PNNP) to approximate the real-world sensor noise model. Specifically, we integrate physical priors into neural proxies and introduce three efficient techniques: physics-guided noise decoupling (PND), physics-aware proxy model (PPM), and differentiable distribution loss (DDL). PND decouples the dark frame into different components and handles different levels of noise flexibly, which reduces the complexity of noise modeling. PPM incorporates physical priors to constrain the synthetic noise, which promotes the accuracy of noise modeling. DDL provides explicit and reliable supervision for noise distribution, which promotes the precision of noise modeling. PNNP exhibits powerful potential in characterizing the real noise distribution. Extensive experiments on public datasets demonstrate superior performance in practical low-light raw image denoising. The source code will be publicly available at the project homepage.


[39] 2406.09335

Instance-level quantitative saliency in multiple sclerosis lesion segmentation

Explainable artificial intelligence (XAI) methods have been proposed to interpret model decisions in classification and, more recently, in semantic segmentation. However, instance-level XAI for semantic segmentation, namely explanations focused on a single object among multiple instances of the same class, remains largely unexplored. Such explanations are particularly important in multi-lesional diseases to understand what drives the detection and contouring of a specific lesion. We propose instance-level explanation maps for semantic segmentation by extending SmoothGrad and Grad-CAM++ to obtain quantitative instance saliency. These methods were applied to the segmentation of white matter lesions (WMLs), a magnetic resonance imaging biomarker in multiple sclerosis. We used 4023 FLAIR and MPRAGE MRI scans from 687 patients collected at the University Hospital of Basel, Switzerland, with WML masks annotated by four expert clinicians. Three deep learning architectures, a 3D U-Net, nnU-Net, and Swin UNETR, were trained and evaluated, achieving normalized Dice scores of 0.71, 0.78, and 0.80, respectively. Instance saliency maps showed that the models relied primarily on FLAIR rather than MPRAGE for WML segmentation, with positive saliency inside lesions and negative saliency in their immediate neighborhood, consistent with clinical practice. Peak saliency values differed significantly across correct and incorrect predictions, suggesting that quantitative instance saliency may help identify segmentation errors. In conclusion, we introduce two architecture-agnostic XAI methods that provide quantitative instance-level explanations for semantic segmentation and support clinically meaningful interpretation of model decisions.


[40] 2410.05061

Bias-Variance Trade-off in Kalman Filter-Based Disturbance Observers

The performance of disturbance observers is strongly influenced by the level of prior knowledge about the disturbance model. The simultaneous input and state estimation (SISE) algorithm is widely recognized for providing unbiased minimum-variance estimates under arbitrary disturbance models. In contrast, the Kalman filter-based disturbance observer (KF-DOB) achieves minimum mean-square error estimation when the disturbance model is fully specified. However, practical scenarios often fall between these extremes, where only partial knowledge of the disturbance model is available. This paper investigates the inherent bias-variance trade-off in KF-DOB when the disturbance model is incomplete. We further show that SISE can be interpreted as a special case of KF-DOB, where the disturbance noise covariance tends to infinity. To address this trade-off, we propose two novel estimators: the multi-kernel correntropy Kalman filter-based disturbance observer (MKCKF-DOB) and the interacting multiple models Kalman filter-based disturbance observer (IMMKF-DOB). Simulations verify the effectiveness of the proposed methods.
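A KF-DOB can be sketched as a Kalman filter on the state augmented with the disturbance, modelled as a random walk with covariance `Qd` (a minimal sketch; the system matrices and noise levels below are illustrative, and letting `Qd` grow without bound recovers the SISE-like behaviour noted in the abstract):

```python
import numpy as np

def kf_dob_step(x, P, y, A, B, G, C, u, Q, R, Qd):
    """One predict/update step of a Kalman-filter disturbance observer on the
    augmented state [x; d], where d follows a random walk with covariance Qd."""
    nx, nd = A.shape[0], G.shape[1]
    # Augmented dynamics: x+ = A x + B u + G d,  d+ = d + w_d.
    Aa = np.block([[A, G], [np.zeros((nd, nx)), np.eye(nd)]])
    Ba = np.vstack([B, np.zeros((nd, B.shape[1]))])
    Ca = np.hstack([C, np.zeros((C.shape[0], nd))])
    Qa = np.block([[Q, np.zeros((nx, nd))], [np.zeros((nd, nx)), Qd]])
    # Predict.
    x_pred = Aa @ x + Ba @ u
    P_pred = Aa @ P @ Aa.T + Qa
    # Update.
    S = Ca @ P_pred @ Ca.T + R
    K = P_pred @ Ca.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - Ca @ x_pred)
    P_new = (np.eye(nx + nd) - K @ Ca) @ P_pred
    return x_new, P_new
```

Running this on a scalar system $x_{k+1} = 0.9\,x_k + d$ with a constant disturbance $d = 0.5$ drives the disturbance estimate (the last augmented-state component) toward the true value; tuning `Qd` trades estimation bias (small `Qd`, strong disturbance-model prior) against variance (large `Qd`, SISE-like behaviour).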


[41] 2411.12130

Adversarial Multi-Agent Reinforcement Learning for Proactive False Data Injection Detection

Smart inverters are instrumental in the integration of distributed energy resources into the electric grid. Such inverters rely on communication layers for continuous control and monitoring, potentially exposing them to cyber-physical attacks such as false data injection attacks (FDIAs). We propose to construct a defense strategy against a priori unknown FDIAs with a multi-agent reinforcement learning (MARL) framework. The first agent is an adversary that simulates and discovers various FDIA strategies, while the second agent is a defender in charge of detecting and locating FDIAs. This approach enables the defender to be trained against new FDIAs continuously generated by the adversary. In addition, we show that the detection skills of an MARL defender can be combined with those of a supervised offline defender through a transfer learning approach. Numerical experiments conducted on a distribution and transmission system demonstrate that: a) the proposed MARL defender outperforms the offline defender against adversarial attacks; b) the transfer learning approach makes the MARL defender effective against both synthetic and unseen FDIAs.


[42] 2412.05554

Rydberg Atomic Quantum Receivers for Classical Wireless Communications and Sensing: Their Models and Performance

The significant progress of quantum sensing technologies offers numerous radical solutions for measuring a multitude of physical quantities at an unprecedented precision. Among them, Rydberg atomic quantum receivers (RAQRs) emerge as an eminent solution for detecting the electric field of radio frequency (RF) signals, exhibiting great potential in assisting classical wireless communications and sensing. So far, most experimental studies have aimed for the proof of physical concepts to reveal its promise, while the practical signal model of RAQR-aided wireless communications and sensing has remained under-explored. Furthermore, the performance of RAQR-based wireless receivers and their advantages over classical RF receivers have not been fully characterized. To fill these gaps, we introduce the RAQR to the wireless community by presenting an end-to-end reception scheme. We then develop a corresponding equivalent baseband signal model relying on a realistic reception flow. Our scheme and model provide explicit design guidance to RAQR-aided wireless systems. We next study the performance of RAQR-aided wireless systems based on our model, and compare them to classical RF receivers. The results show that Doppler broadening-free RAQRs are capable of achieving a substantial received signal-to-noise ratio (SNR) gain of over $27$ dB and $40$ dB in the photon shot limit and standard quantum limit regimes, respectively.


[43] 2412.13046

Adaptive Economic Model Predictive Control: Performance Guarantees for Nonlinear Systems

We consider the problem of optimizing the economic performance of nonlinear constrained systems subject to uncertain time-varying parameters and bounded disturbances. In particular, we propose an adaptive economic model predictive control (MPC) framework that: (i) directly minimizes transient economic costs, (ii) addresses parametric uncertainty through online model adaptation, (iii) determines optimal setpoints online, and (iv) ensures robustness by using a tube-based approach. The proposed design ensures recursive feasibility, robust constraint satisfaction, and a transient performance bound. In case the disturbances have a finite energy and the parameter variations have a finite path length, the asymptotic average performance is (approximately) not worse than the performance obtained when operating at the best reachable steady-state. We highlight performance benefits in a numerical example involving a chemical reactor with unknown time-invariant and time-varying parameters.


[44] 2503.08546

End-to-End PET Image Reconstruction via a Posterior-Mean Diffusion Model

Positron Emission Tomography (PET) is a functional imaging modality that enables the visualization of biochemical and physiological processes across various tissues. Recently, deep learning (DL)-based methods have demonstrated significant progress in directly mapping sinograms to PET images. However, regression-based DL models often yield overly smoothed reconstructions lacking details (i.e., low distortion, low perceptual quality), whereas GAN-based and likelihood-based posterior sampling models tend to introduce undesirable artifacts in predictions (i.e., high distortion, high perceptual quality), limiting their clinical applicability. To achieve a robust perception-distortion tradeoff, we propose the Posterior-Mean Denoising Diffusion Model (PMDM-PET), a novel approach that builds upon a recently established mathematical theory to explore the closed-form expression of the perception-distortion function in diffusion model space for PET image reconstruction from sinograms. Specifically, PMDM-PET first obtains posterior-mean PET predictions under a minimum mean square error (MSE) criterion, then optimally transports their distribution to the ground-truth PET image distribution. Experimental results demonstrate that PMDM-PET not only generates realistic PET images with possible minimum distortion and optimal perceptual quality but also outperforms five recent state-of-the-art (SOTA) DL baselines in both qualitative visual inspection and quantitative pixel-wise metrics PSNR (dB)/SSIM/NRMSE.


[45] 2504.01807

Barrier Certificates for Unknown Systems with Latent States and Polynomial Dynamics using Bayesian Inference

Certifying safety in dynamical systems is crucial, but barrier certificates - widely used to verify that system trajectories remain within a safe region - typically require explicit system models. When dynamics are unknown, data-driven methods can be used instead, yet obtaining a valid certificate requires rigorous uncertainty quantification. For this purpose, existing methods usually rely on full-state measurements, limiting their applicability. This paper proposes a novel approach for synthesizing barrier certificates for unknown systems with latent states and polynomial dynamics. A Bayesian framework is employed, where a prior in state-space representation is updated using output data via a targeted marginal Metropolis-Hastings sampler. The resulting samples are used to construct a barrier certificate through a sum-of-squares program. Probabilistic guarantees for its validity with respect to the true, unknown system are obtained by testing on an additional set of posterior samples. The approach and its probabilistic guarantees are illustrated through a numerical simulation.


[46] 2504.17969

Mixed Bernstein-Fourier Approximants for Optimal Trajectory Generation with Periodic Behavior

Efficient trajectory generation is crucial for autonomous systems; however, current numerical methods often struggle to handle periodic behaviors effectively, particularly when the onboard sensors require equidistant temporal sampling. This paper introduces a novel mixed Bernstein-Fourier approximation framework tailored explicitly for optimal motion planning. Our proposed methodology leverages the uniform convergence properties of Bernstein polynomials for nonperiodic behaviors while effectively capturing periodic dynamics through the Fourier series. Theoretical results are established, including uniform convergence proofs for approximations of functions, derivatives, and integrals, as well as detailed error bound analyses. We further introduce a regulated least squares approach for determining approximation coefficients, enhancing numerical stability and practical applicability. Within an optimal control context, we establish the feasibility and consistency of approximated solutions to their continuous counterparts. We also extend the covector mapping theorem, providing theoretical guarantees for approximating dual variables crucial in verifying the necessary optimality conditions from Pontryagin's Maximum Principle. Numerical examples illustrate the method's superior performance, demonstrating substantial improvements in computational efficiency and precision in scenarios with complex periodic constraints and dynamics. Our mixed Bernstein-Fourier methodology thus presents a robust, theoretically grounded, and computationally efficient approach for advanced optimal trajectory planning in autonomous systems.
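A minimal sketch of the mixed basis and the regulated (ridge-regularized) least-squares coefficient fit described above; the basis sizes, the rescaling to $[0,1]$, and the regularization weight are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np
from math import comb

def mixed_basis(t, n_bern, n_four, period):
    """Design matrix combining Bernstein polynomials on a rescaled [0, 1]
    interval (nonperiodic part) with sin/cos Fourier terms (periodic part)."""
    s = (t - t.min()) / (t.max() - t.min())  # rescale time to [0, 1]
    cols = [comb(n_bern, k) * s**k * (1 - s)**(n_bern - k)
            for k in range(n_bern + 1)]
    for m in range(1, n_four + 1):
        cols.append(np.sin(2 * np.pi * m * t / period))
        cols.append(np.cos(2 * np.pi * m * t / period))
    return np.column_stack(cols)

def regulated_lsq(Phi, y, reg=1e-6):
    """Regulated least squares: ridge term improves numerical stability."""
    return np.linalg.solve(Phi.T @ Phi + reg * np.eye(Phi.shape[1]), Phi.T @ y)
```

A trajectory with a linear trend plus a periodic component, such as $y(t) = 0.7t + \sin(2\pi t)$, lies in the span of this mixed basis and is recovered almost exactly, whereas a pure Bernstein or pure Fourier basis of the same size would struggle with one of the two parts.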


[47] 2505.24024

Exploiting Euclidean Distance Field Properties for Fast and Safe 3D planning with a modified Lazy Theta*

This paper presents the FS-Planner, a fast graph-search planner based on a modified Lazy Theta* algorithm that exploits the analytical properties of Euclidean Distance Fields (EDFs). We introduce a new cost function that integrates an EDF-based term proven to satisfy the triangle inequality, enabling efficient parent selection and reducing computation time while generating safe paths with smaller heading variations. We also derive an analytic approximation of the EDF integral along a segment and analyze the influence of the line-of-sight limit on the approximation error, motivating the use of a bounded visibility range. Furthermore, we propose a gradient-based neighbour-selection mechanism that decreases the number of explored nodes and improves computational performance without degrading safety or path quality. The FS-Planner produces safe paths with small heading changes without requiring the use of post-processing methods. Extensive experiments and comparisons in challenging 3D indoor simulation environments, complemented by tests in real-world outdoor environments, are used to evaluate and validate the FS-Planner. The results show consistent improvements in computation time, exploration efficiency, safety, and smoothness in a geometric sense compared with baseline heuristic planners, while maintaining sub-optimality within acceptable bounds. Finally, the proposed EDF-based cost formulation is orthogonal to the underlying search method and can be incorporated into other planning paradigms.
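The two EDF ingredients can be illustrated with a toy 2D grid (a sketch only: the EDF is computed by brute force rather than a fast distance transform, the segment integral is approximated by sampling rather than the paper's analytic approximation, and the inverse-distance cost term is an assumed stand-in for the paper's EDF-based cost):

```python
import numpy as np

def brute_force_edf(occ):
    """Euclidean distance field of a 2D occupancy grid (1 = obstacle);
    brute force over all obstacle cells, for clarity rather than speed."""
    obs = np.argwhere(occ == 1)
    edf = np.empty(occ.shape)
    for p in np.indices(occ.shape).reshape(2, -1).T:
        edf[tuple(p)] = np.sqrt(((obs - p) ** 2).sum(axis=1)).min()
    return edf

def segment_cost(edf, a, b, n=20, eps=1e-3):
    """Approximate the integral of an inverse-distance cost 1/(EDF + eps)
    along the segment a -> b by sampling n points on the segment."""
    pts = np.linspace(a, b, n)
    idx = np.clip(np.round(pts).astype(int), 0, np.array(edf.shape) - 1)
    vals = edf[idx[:, 0], idx[:, 1]]
    length = np.linalg.norm(b - a)
    return length * np.mean(1.0 / (vals + eps))
```

Under this cost, a candidate parent whose line of sight passes close to obstacles accumulates a larger segment cost than one that stays in open space, which is the mechanism the planner exploits during parent selection.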


[48] 2506.10407

Semi-Tensor-Product Based Convolutional Neural Networks

The semi-tensor product (STP) of vectors generalizes the conventional inner product, enabling algebraic operations between vectors of different dimensions. Building upon this foundation, we introduce a domain-based convolutional product and integrate it with the STP to formulate a padding-free convolutional operation. This new operation inherently avoids zero or other artificial padding, thereby eliminating redundant information and boundary artifacts commonly present in conventional convolutional neural networks. Based on this operation, we further develop an STP-based CNN framework that extends convolutional computation to irregular and cross-dimensional data domains. Applications to image processing and third-order signal identification demonstrate the proposed method's effectiveness in handling irregular, incomplete, and high-dimensional data without the distortions caused by padding.
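As a toy illustration of how the STP extends multiplication across mismatched dimensions (a sketch of the standard Kronecker-padding definition, not the paper's convolutional operator):

```python
import numpy as np
from math import lcm

def stp(A, B):
    # Semi-tensor product: pad each factor with a Kronecker identity so the
    # inner dimensions match, then take the ordinary matrix product
    A = np.atleast_2d(A)
    B = np.atleast_2d(B)
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

# When inner dimensions already match, the STP is the ordinary product
x = np.array([[1.0, 2.0, 3.0]])       # 1x3 row vector
y = np.array([[4.0], [5.0], [6.0]])   # 3x1 column vector
print(stp(x, y))                       # [[32.]]

# Mismatched dimensions: a 1x2 row and a 4x1 column still multiply
u = np.array([[1.0, 2.0]])
v = np.array([[1.0], [0.0], [0.0], [1.0]])
print(stp(u, v))                       # 2x1 result: [[1.], [2.]]
```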


[49] 2506.12308

From Ground to Sky: Architectures, Applications, and Challenges Shaping Low-Altitude Wireless Networks

In this article, we introduce a novel low-altitude wireless network (LAWN), which is a reconfigurable, three-dimensional (3D) layered architecture. In particular, the LAWN integrates connectivity, sensing, control, and computing across aerial and terrestrial nodes that enable seamless operation in complex, dynamic, and mission-critical environments. Different from the conventional aerial communication systems, LAWN's distinctive feature is its tight integration of functional planes in which multiple functionalities continually reshape themselves to operate safely and efficiently in the low-altitude sky. With the LAWN, we discuss several enabling technologies, such as integrated sensing and communication (ISAC), semantic communication, and fully-actuated control systems. Finally, we identify potential applications and key cross-layer challenges. This article offers a comprehensive roadmap for future research and development in the low-altitude airspace.


[50] 2506.14432

A large-scale heterogeneous 3D magnetic resonance brain imaging dataset for self-supervised learning

We present FOMO300K, a large-scale, heterogeneous dataset of 318,877 brain Magnetic Resonance Imaging (MRI) scans from 82,678 MRI sessions and 59,969 subjects, aggregated from 920 publicly available sources. The dataset includes both clinical- and research-grade images, multiple MRI sequences, and a wide range of anatomical and pathological variability, including scans with large brain anomalies. Minimal preprocessing was applied to preserve the original image characteristics while reducing entry barriers for new users. Companion code for self-supervised pretraining and finetuning is provided, along with pretrained models. FOMO300K is intended to support the development and benchmarking of self-supervised learning methods in medical imaging at scale.


[51] 2507.06363

Mamba Goes HoME: Hierarchical Soft Mixture-of-Experts for 3D Medical Image Segmentation

In recent years, artificial intelligence has significantly advanced medical image segmentation. Nonetheless, challenges remain, including efficient 3D medical image processing across diverse modalities and handling data variability. In this work, we introduce Hierarchical Soft Mixture-of-Experts (HoME), a two-level token-routing layer for efficient long-context modeling, specifically designed for 3D medical image segmentation. Built on the Mamba Selective State Space Model (SSM) backbone, HoME enhances sequential modeling through adaptive expert routing. In the first level, a Soft Mixture-of-Experts (SMoE) layer partitions input sequences into local groups, routing tokens to specialized per-group experts for localized feature extraction. The second level aggregates these outputs through a global SMoE layer, enabling cross-group information fusion and global context refinement. This hierarchical design, combining local expert routing with global expert refinement, enhances generalizability and segmentation performance, surpassing state-of-the-art results across datasets from the three most widely used 3D medical imaging modalities and varying data qualities. The code is publicly available at this https URL.
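The two-level routing builds on the Soft Mixture-of-Experts idea. A single-level numpy sketch of soft dispatch/combine routing (shapes and softmax placement are illustrative assumptions, not the HoME implementation):

```python
import numpy as np

def softmax(z, axis):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def soft_moe_layer(X, phi, experts):
    # Soft-MoE routing: each slot is a convex mix of ALL tokens (dispatch),
    # and each token output is a convex mix of expert outputs (combine).
    # No hard top-k selection, so the layer is fully differentiable.
    logits = X @ phi                          # (n_tokens, n_slots)
    dispatch = softmax(logits, axis=0)        # normalize over tokens
    combine = softmax(logits, axis=1)         # normalize over slots
    slots = dispatch.T @ X                    # (n_slots, dim) slot inputs
    expert_out = np.stack([f(s) for f, s in zip(experts, slots)])
    return combine @ expert_out               # (n_tokens, dim)

rng = np.random.default_rng(0)
X = rng.standard_normal((16, 8))              # 16 tokens, feature dim 8
phi = rng.standard_normal((8, 4))             # 4 slots, one expert per slot
experts = [lambda s, Wi=rng.standard_normal((8, 8)): s @ Wi for _ in range(4)]
Y = soft_moe_layer(X, phi, experts)
print(Y.shape)  # (16, 8)
```

HoME's hierarchy stacks this pattern twice: per-group local routing, then a global layer over the group outputs.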


[52] 2507.09608

prNet: Data-Driven Phase Retrieval via Stochastic Refinement

Phase retrieval is an ill-posed inverse problem in which classical and deep learning-based methods struggle to jointly achieve measurement fidelity and perceptual realism. We propose a novel framework for phase retrieval that leverages Langevin dynamics to enable efficient posterior sampling, yielding reconstructions that explicitly balance distortion and perceptual quality. Unlike conventional approaches that prioritize pixel-wise accuracy, our methods navigate the perception-distortion tradeoff through a principled combination of stochastic sampling, learned denoising, and model-based updates. The framework comprises three variants of increasing complexity, integrating theoretically grounded Langevin inference, adaptive noise schedule learning, parallel reconstruction sampling, and warm-start initialization from classical solvers. Extensive experiments demonstrate that our methods achieve state-of-the-art performance across multiple benchmarks, both in terms of fidelity and perceptual quality. The source code and trained models are available at this https URL
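The Langevin-dynamics sampling at the core of the framework can be sketched on a toy target (an unadjusted Langevin algorithm on a standard normal; the step size and iteration counts are illustrative assumptions, not the paper's learned-denoiser setup):

```python
import numpy as np

def langevin_sample(grad_log_p, x0=0.0, step=0.01, n_steps=20000, seed=0):
    # Unadjusted Langevin dynamics: a gradient step on log p(x)
    # plus injected Gaussian noise yields approximate posterior samples
    rng = np.random.default_rng(seed)
    x = x0
    out = np.empty(n_steps)
    for k in range(n_steps):
        x = x + 0.5 * step * grad_log_p(x) + np.sqrt(step) * rng.standard_normal()
        out[k] = x
    return out

# Toy target: standard normal, so grad log p(x) = -x
samples = langevin_sample(lambda x: -x)
burn = samples[2000:]           # discard burn-in before computing statistics
print(burn.mean(), burn.var())  # approximately 0 and 1
```

In the paper's setting the score of the toy Gaussian is replaced by a learned denoiser plus a measurement-fidelity term, which is what lets the sampler trade off distortion against perceptual quality.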


[53] 2510.15195

Pulse Shaping Filter Design for Integrated Sensing & Communication with Zak-OTFS

Zak-OTFS provides a framework for integrated sensing & communication (ISAC) in high delay and Doppler spread environments. Pulse shaping filter design enables joint optimization of sensing and communication performance. For sensing, a localized pulse shaping filter enables input-output (I/O) relation estimates close to the physical scattering channel. For communication, orthogonality of the pulse shape on the information lattice prevents inter-symbol interference, and no time and bandwidth expansion enables full spectral efficiency. A filter simultaneously meeting all three objectives is ideal for ISAC. Existing filter designs achieve two, but not all three objectives. In this work, we design pulse shaping filters meeting all three objectives via the Isotropic Orthogonal Transform Algorithm. The proposed filters have improved spectral efficiency, data detection and sensing performance over existing filter choices.


[54] 2510.16296

Delay Minimization in Pinching-Antenna-enabled NOMA-MEC Networks

This letter proposes a novel pinching antenna systems (PASS) enabled non-orthogonal multiple access (NOMA) multi-access edge computing (MEC) framework. An optimization problem is formulated to minimize the maximum task delay by optimizing offloading ratios, transmit powers, and pinching antenna (PA) positions, subject to constraints on maximum transmit power, user energy budgets, and minimum PA separation to mitigate coupling effects. To address the non-convex problem, a bisection search-based alternating optimization (AO) algorithm is developed, where each subproblem is iteratively solved for a given task delay. Numerical simulations demonstrate that the proposed framework significantly reduces the task delay compared to benchmark schemes.
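The outer bisection on the maximum task delay can be sketched generically; `feasible` stands in for the inner alternating-optimization subproblem, replaced here by a toy oracle (an illustrative assumption, not the letter's actual subproblem):

```python
def bisect_min_delay(feasible, lo, hi, tol=1e-6):
    # Bisection on the max task delay T: shrink [lo, hi] around the
    # smallest T for which the inner subproblem reports feasibility
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            hi = mid    # mid is achievable; try a smaller delay
        else:
            lo = mid
    return hi

# Toy feasibility oracle standing in for the AO subproblem: T >= 0.3 works
print(round(bisect_min_delay(lambda T: T >= 0.3, 0.0, 1.0), 4))  # 0.3
```

Bisection needs only monotonicity of feasibility in T, which is why the non-convex joint problem can be decomposed this way.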


[55] 2511.06163

Cross-Modal Fine-Tuning of 3D Convolutional Foundation Models for ADHD Classification with Low-Rank Adaptation

Early diagnosis of attention-deficit/hyperactivity disorder (ADHD) in children plays a crucial role in improving outcomes in education and mental health. Diagnosing ADHD using neuroimaging data, however, remains challenging due to heterogeneous presentations and overlapping symptoms with other conditions. To address this, we propose a novel parameter-efficient transfer learning approach that adapts a large-scale 3D convolutional foundation model, pre-trained on CT images, to an MRI-based ADHD classification task. Our method introduces Low-Rank Adaptation (LoRA) in 3D by factorizing 3D convolutional kernels into 2D low-rank updates, dramatically reducing trainable parameters while achieving superior performance. In a five-fold cross-validated evaluation on a public diffusion MRI database, our 3D LoRA fine-tuning strategy achieved state-of-the-art results, with one model variant reaching 71.9% accuracy and another attaining an AUC of 0.716. Both variants use only 1.64 million trainable parameters (over 113x fewer than a fully fine-tuned foundation model). Our results represent one of the first successful cross-modal (CT-to-MRI) adaptations of a foundation model in neuroimaging, establishing a new benchmark for ADHD classification while greatly improving efficiency.
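The parameter-count arithmetic behind LoRA is easy to verify on a plain weight matrix (a generic LoRA sketch with illustrative dimensions, not the paper's 2D factorization of 3D convolutional kernels):

```python
import numpy as np

def lora_update(W, A, B, alpha=1.0):
    # Adapted weight: frozen W plus a scaled low-rank update B @ A
    r = A.shape[0]
    return W + (alpha / r) * (B @ A)

d_out, d_in, r = 256, 256, 4
rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))   # frozen pretrained weight
A = rng.standard_normal((r, d_in))       # trainable down-projection
B = np.zeros((d_out, r))                 # trainable up-projection, zero init

# Zero-initialized B: the adapted model starts exactly at the pretrained one
assert np.allclose(lora_update(W, A, B), W)

full, lora = d_out * d_in, r * (d_in + d_out)
print(f"trainable: {lora} vs full: {full} ({full // lora}x fewer)")
```

Only A and B are trained, so the trainable count scales as r*(d_in + d_out) rather than d_in*d_out, which is where savings like the paper's reported >113x reduction come from.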


[56] 2512.15729

TinyMyo: a Tiny Foundation Model for Flexible EMG Signal Processing at the Edge

Objective: Surface electromyography (EMG) is a non-invasive sensing modality widely used in biomechanics, rehabilitation, prosthetic control, and human-machine interfaces. Despite decades of use, achieving robust generalization across subjects, recording systems, and acquisition protocols remains challenging. While foundation models (FMs) are gaining traction for EMG, existing approaches remain limited to single downstream tasks and lack deployability on embedded platforms. This work addresses these limitations. Methods: We present TinyMyo, a lightweight FM based on a Transformer encoder architecture. The model is pre-trained in a self-supervised manner using masked reconstruction on publicly available datasets. With only 3.6M parameters, TinyMyo is designed to support multiple downstream tasks through minimal task-specific head adaptations. Results: We demonstrate generalization across hand gesture classification, hand kinematic regression, speech production and speech recognition, with performance comparable to or surpassing the state of the art (SoA), and model size below 5M parameters. We achieve SoA results compared to previous FM-based works on the NinaPro DB5 (89.4%), UCI-EMG (97.56%), and EPN-612 (96.74%) datasets. We demonstrate the first-time deployment of an EMG FM on an ultra-low power microcontroller (GAP9), with an inference time of 0.785 s, energy of 44.91 mJ and power envelope of 57.18 mW. Conclusion: TinyMyo demonstrates that compact, self-supervised EMG FM can guarantee strong generalization across multiple downstream tasks while remaining compatible with low-power edge devices. Significance: TinyMyo is the first EMG FM for ultra-low power edge devices, enabling scalable and energy-efficient sensing for motor intent decoding, neuromuscular assessment, and biosignal driven human-machine interaction.


[57] 2512.23914

Hardware Acceleration for Neural Networks: A Comprehensive Survey

Neural networks have become dominant computational workloads across cloud and edge platforms, but their rapid growth in model size and deployment diversity has exposed hardware bottlenecks increasingly dominated by memory movement, communication, and irregular operators rather than peak arithmetic throughput. This survey reviews the current technology landscape for hardware acceleration of deep learning, spanning GPUs and tensor-core architectures, domain-specific accelerators (TPUs, NPUs), FPGA-based designs, ASIC inference engines, and emerging LLM-serving accelerators such as LPUs, alongside in-/near-memory computing and neuromorphic/analog approaches. We organize the survey using a unified taxonomy across (i) workloads (CNNs, RNNs, GNNs, Transformers/LLMs), (ii) execution settings (training vs. inference; datacenter vs. edge), and (iii) optimization levers (reduced precision, sparsity and pruning, operator fusion, compilation and scheduling, memory-system/interconnect design). We synthesize key architectural ideas such as systolic arrays, vector and SIMD engines, specialized attention and softmax kernels, quantization-aware datapaths, and high-bandwidth memory, and discuss how software stacks and compilers bridge model semantics to hardware. Finally, we highlight open challenges -- including efficient long-context LLM inference (KV-cache management), robust support for dynamic and sparse workloads, energy- and security-aware deployment, and fair benchmarking -- pointing to promising directions for the next generation of neural acceleration.


[58] 2601.05032

On the Impact of Channel Aging and Doppler-Affected Clutter on OFDM ISAC Systems

The temporal evolution of the propagation environment plays a central role in integrated sensing and communication (ISAC) systems. A slow-time evolution manifests as channel aging in communication links, while a fast-time one is associated with structured clutter with non-zero Doppler. Nevertheless, the joint impact of these two phenomena on ISAC performance has been largely overlooked. This paper addresses this research gap in a network utilizing orthogonal frequency division multiplexing waveforms. Here, a base station simultaneously serves multiple user equipment (UE) devices and performs monostatic sensing. Channel aging is captured through an autoregressive model with exponential correlation decay. In contrast, clutter is modeled as a collection of uncorrelated, coherent patches with non-zero Doppler, resulting in a Kronecker-separable covariance structure. We propose an aging-aware channel estimator that uses prior pilot observations to estimate the time-varying UE channels, characterized by a non-isotropic multipath fading structure. The clutter's structure enables a novel low-complexity sensing pipeline: clutter statistics are estimated from raw data and subsequently used to suppress the clutter's action, after which target parameters are extracted through range-angle and range-velocity maps. We evaluate the influence of frame length and pilot history on channel estimation accuracy and demonstrate substantial performance gains over block fading in low-to-moderate mobility regimes. The sensing pipeline is implemented in a clutter-dominated environment, demonstrating that effective clutter suppression can be achieved under practical configurations. Furthermore, our results show that dedicated sensing streams are required, as communication beams provide insufficient range resolution.
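A minimal sketch of channel aging under a first-order autoregressive model with exponential correlation decay (the correlation coefficient and dimensions below are illustrative assumptions):

```python
import numpy as np

def age_channels(n_chan, n_slots, rho, seed=0):
    # AR(1) channel aging: h[t] = rho*h[t-1] + sqrt(1-rho^2)*w[t], w ~ CN(0,1),
    # so the temporal correlation decays exponentially as rho**t
    rng = np.random.default_rng(seed)
    cn = lambda m: (rng.standard_normal(m) + 1j * rng.standard_normal(m)) / np.sqrt(2)
    h = np.empty((n_slots + 1, n_chan), dtype=complex)
    h[0] = cn(n_chan)
    scale = np.sqrt(1 - rho**2)
    for t in range(1, n_slots + 1):
        h[t] = rho * h[t - 1] + scale * cn(n_chan)
    return h

rho = 0.9
h = age_channels(200_000, 5, rho)
for t in range(6):
    corr = np.mean(h[t] * np.conj(h[0])).real   # empirical E[h_t h_0*]
    print(t, round(corr, 2))                     # decays roughly as rho**t
```

An aging-aware estimator exploits exactly this structure: older pilots are discounted by their known correlation with the current slot.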


[59] 2412.13033

Singularity-Free Guiding Vector Field over Bézier's Curves Applied to Rovers Path Planning and Path Following

This paper presents a guidance algorithm for solving the problem of following parametric paths, as well as a curvature-varying speed setpoint for land-based car-type wheeled mobile robots (WMRs). The guidance algorithm relies on Singularity-Free Guiding Vector Fields (SF-GVF). This novel GVF approach expands the desired robot path and the guiding vector field to a higher-dimensional space, in which an angular control function can be found to ensure global asymptotic convergence to the desired parametric path while avoiding field singularities. In SF-GVF, paths should follow a parametric definition. This feature makes Bézier curves attractive for defining the robot's desired path. The curvature-varying speed setpoint, combined with the guidance algorithm, eases convergence to the path when physical restrictions exist, such as a minimal turning radius or maximal lateral acceleration. We provide theoretical results, simulations, and outdoor experiments using a WMR platform assembled with off-the-shelf components.


[60] 2501.10806

Non-Expansive Mappings in Two-Time-Scale Stochastic Approximation: Finite-Time Analysis

Two-time-scale stochastic approximation algorithms are iterative methods used in applications such as optimization, reinforcement learning, and control. Finite-time analysis of these algorithms has primarily focused on fixed point iterations where both time-scales have contractive mappings. In this work, we broaden the scope of such analyses by considering settings where the slower time-scale has a non-expansive mapping. For such algorithms, the slower time-scale can be viewed as a stochastic inexact Krasnoselskii-Mann iteration. We also study a variant where the faster time-scale has a projection step which leads to non-expansiveness in the slower time-scale. We show that the last-iterate mean square residual error for such algorithms decays at a rate $O(1/k^{1/4-\epsilon})$, where $\epsilon>0$ is arbitrarily small. We further establish almost sure convergence of iterates to the set of fixed points. We demonstrate the applicability of our framework by applying our results to minimax optimization, linear stochastic approximation, and Lagrangian optimization.
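The Krasnoselskii-Mann iteration underlying the slower time-scale can be illustrated on a deterministic toy case: a planar rotation is non-expansive (an isometry) but not contractive, so plain fixed-point iteration never converges, and it is the averaging step that forces convergence (the mapping and step size are illustrative choices):

```python
import numpy as np

def km_iterate(T, x0, alpha=0.5, n=500):
    # Krasnoselskii-Mann iteration for a non-expansive mapping T:
    # x[k+1] = (1 - alpha) * x[k] + alpha * T(x[k])
    x = np.array(x0, dtype=float)
    for _ in range(n):
        x = (1 - alpha) * x + alpha * T(x)
    return x

# A rotation preserves norms, so x <- T(x) just circles the origin forever;
# the averaged KM step contracts toward the unique fixed point at 0.
theta = np.pi / 3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
x_star = km_iterate(lambda x: R @ x, [1.0, 0.0])
print(np.linalg.norm(x_star) < 1e-6)  # True: converged to the fixed point
```

The paper's setting adds noise and a second, faster time-scale, yielding the stochastic inexact KM iteration whose residual rate is analyzed.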


[61] 2505.15602

Deep Learning for Continuous-Time Stochastic Control with Jumps

In this paper, we introduce a model-based deep-learning approach to solve finite-horizon continuous-time stochastic control problems with jumps. We iteratively train two neural networks: one to represent the optimal policy and the other to approximate the value function. Leveraging a continuous-time version of the dynamic programming principle, we derive two different training objectives based on the Hamilton-Jacobi-Bellman equation, ensuring that the networks capture the underlying stochastic dynamics. Empirical evaluations on different problems illustrate the accuracy and scalability of our approach, demonstrating its effectiveness in solving complex high-dimensional stochastic control tasks.


[62] 2505.16821

LLM-Based Emulation of the Radio Resource Control Layer: Towards AI-Native RAN Protocols

Integrating Large AI Models (LAMs) into 6G mobile networks is a key enabler of the AI-Native Air Interface (AI-AI), where protocol intelligence must scale beyond handcrafted logic. This paper presents, to our knowledge, the first standards-compliant emulation of the Radio Resource Control (RRC) layer using a decoder-only LAM (LLAMA-class) fine-tuned with Low-Rank Adaptation (LoRA) on a multi-vendor corpus of real-world traces spanning both 5G and 4G systems. We treat RRC as a domain-specific language and construct a segmentation-safe, question-answer (QA) dataset that preserves Abstract Syntax Notation (ASN.1) structure through linearization prior to Byte Pair Encoding (BPE) tokenization. The proposed approach combines parameter-efficient adaptation with schema-bounded prompting to ensure syntactic and procedural fidelity. Evaluation introduces a standards-aware triad -- ASN.1 conformance, field-level coverage analysis, and uplink-to-downlink state-machine checks -- alongside semantic similarity and latency profiling across 120 configurations. On 30k 5G request-response pairs plus an additional 4.8k QA turns from 4G sessions, our 8B model achieves a median cosine similarity of 0.97, a 61% relative gain over a zero-shot baseline, while sustaining high conformance rates. These results demonstrate that LAMs, when augmented with protocol-aware reasoning, can directly orchestrate control-plane procedures, laying the foundation for the future Artificial Intelligence (AI)-native Radio Access Network (RAN).


[63] 2506.08457

Audio Generation Through Score-Based Generative Modeling: Design Principles and Implementation

Diffusion models have emerged as powerful deep generative techniques, producing high-quality and diverse samples in applications across various domains, including audio. While existing reviews provide broad overviews, in-depth discussion of specific design choices remains limited. The audio diffusion model literature also lacks principled guidance for implementing these design choices and comparing them across applications. This survey provides a comprehensive review of diffusion model design with an emphasis on design principles for quality improvement and conditioning for audio applications. We adopt the score modeling perspective as a unifying framework that accommodates various interpretations, including recent approaches like flow matching. We systematically examine the training and sampling procedures of diffusion models, and audio applications through different conditioning mechanisms. To provide an integrated, unified codebase and to promote reproducible research and rapid prototyping, we introduce an open-source codebase (this https URL) that implements our reviewed framework for various audio applications. We demonstrate its capabilities through three case studies: audio generation, speech enhancement, and text-to-speech synthesis, with benchmark evaluations on standard datasets.


[64] 2507.12596

Keep the beat going: Automatic drum transcription with momentum

How can we process a piece of recorded music to detect and visualize the onset of each instrument? A simple, interpretable approach is based on partially fixed nonnegative matrix factorization (NMF). Yet despite the method's simplicity, partially fixed NMF is challenging to apply because the associated optimization problem is high-dimensional and non-convex. This paper explores two optimization approaches that preserve the nonnegative structure, including a multiplicative update rule and projected gradient descent with momentum. These techniques are derived from the previous literature, but they have not been fully developed for partially fixed NMF before now. Results indicate that projected gradient descent with momentum achieves the higher accuracy of the two methods and satisfies stronger local convergence guarantees.
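A minimal sketch of the projected-gradient-with-momentum update for the partially fixed setting, where the spectral template matrix W stays fixed and only the activations H are optimized (the synthetic data, step size 1/L, and momentum coefficient are illustrative choices, not the paper's tuned configuration):

```python
import numpy as np

def fixed_template_nmf(V, W, n_iter=500, beta=0.5, seed=0):
    # Projected gradient descent with momentum on f(H) = 0.5*||V - W@H||_F^2
    # subject to H >= 0, with the template matrix W held fixed
    L = np.linalg.norm(W.T @ W, 2)            # Lipschitz constant of grad f
    rng = np.random.default_rng(seed)
    H = rng.random((W.shape[1], V.shape[1]))
    H_prev = H.copy()
    for _ in range(n_iter):
        grad = W.T @ (W @ H - V)
        # gradient step + heavy-ball momentum, then project back onto H >= 0
        H_next = np.maximum(0.0, H - grad / L + beta * (H - H_prev))
        H_prev, H = H, H_next
    return H

# Synthetic "spectrogram": two fixed instrument templates, sparse onsets
rng = np.random.default_rng(1)
W = rng.random((40, 2))                        # fixed spectral templates
H_true = rng.random((2, 100)) * (rng.random((2, 100)) > 0.8)
V = W @ H_true
H = fixed_template_nmf(V, W)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))   # small relative error
```

The projection `np.maximum(0, ...)` is what preserves the nonnegative structure that a plain momentum step would violate.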


[65] 2508.05663

Random Walk Learning and the Pac-Man Attack

Random walk (RW)-based algorithms have long been popular in distributed systems due to low overheads and scalability, with recent growing applications in decentralized learning. However, their reliance on local interactions makes them inherently vulnerable to malicious behavior. In this work, we investigate an adversarial threat that we term the "Pac-Man" attack, in which a malicious node probabilistically terminates any RW that visits it. This stealthy behavior gradually eliminates active RWs from the network, effectively halting the learning process without triggering failure alarms. To counter this threat, we propose the Average Crossing (AC) algorithm--a fully decentralized mechanism for duplicating RWs to prevent RW extinction in the presence of Pac-Man. Our theoretical analysis establishes that (i) the RW population remains almost surely bounded under AC and (ii) RW-based stochastic gradient descent remains convergent under AC, even in the presence of Pac-Man, with a quantifiable deviation from the true optimum. Our extensive empirical results on both synthetic and real-world datasets corroborate our theoretical findings. Furthermore, they uncover a phase transition in the extinction probability as a function of the duplication threshold. We offer theoretical insights by analyzing a simplified variant of the AC, which sheds light on the observed phase transition.
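The attack itself is easy to simulate; a toy ring-graph sketch (the topology, kill probability, and horizon are illustrative assumptions) shows how quickly unprotected walks are silently drained:

```python
import random

def walk_until_caught(n, start, pacman, p_kill, max_steps, rng):
    # One random walk on a ring of n nodes; the Pac-Man node terminates
    # ("eats") a visiting walk with probability p_kill per visit
    node = start
    for step in range(max_steps):
        node = (node + rng.choice((-1, 1))) % n
        if node == pacman and rng.random() < p_kill:
            return step + 1        # walk eliminated
    return None                     # walk survived the horizon

rng = random.Random(0)
deaths = [walk_until_caught(20, 10, 0, 0.5, 2000, rng) for _ in range(400)]
frac_eliminated = sum(d is not None for d in deaths) / len(deaths)
print(frac_eliminated)   # close to 1.0: nearly every walk is drained
```

Because a random walk on a finite connected graph is recurrent, it revisits the malicious node again and again, so even a modest per-visit kill probability eliminates it with near certainty; this is the extinction pressure the AC duplication mechanism is designed to counter.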


[66] 2508.12681

Adaptive Model-Predictive Control of a Soft Continuum Robot Using a Physics-Informed Neural Network Based on Cosserat Rod Theory

Dynamic control of soft continuum robots (SCRs) holds great potential for expanding their applications, but remains a challenging problem due to the high computational demands of accurate dynamic models. While data-driven approaches like Koopman-operator-based methods have been proposed, they typically lack adaptability and cannot reconstruct the full robot shape, limiting their applicability. This work introduces a real-time-capable nonlinear model-predictive control (MPC) framework for SCRs based on a domain-decoupled physics-informed neural network (DD-PINN) with adaptable bending stiffness. The DD-PINN serves as a surrogate for the dynamic Cosserat rod model with a speed-up factor of 44000. It is also used within an unscented Kalman filter for estimating the model states and bending compliance from end-effector position measurements. We implement a nonlinear evolutionary MPC running at 70 Hz on the GPU. In simulation, it demonstrates accurate tracking of dynamic trajectories and setpoint control with end-effector position errors below 3 mm (2.3% of the actuator's length). In real-world experiments, the controller achieves similar accuracy and accelerations up to 3.55 m/s^2.


[67] 2512.22972

Wavelet-based Multi-View Fusion of 4D Radar Tensor and Camera for Robust 3D Object Detection

4D millimeter-wave (mmWave) radar has been widely adopted in autonomous driving and robot perception due to its low cost and all-weather robustness. However, point-cloud-based radar representations suffer from information loss due to multi-stage signal processing, while directly utilizing raw 4D radar tensors incurs prohibitive computational costs. To address these challenges, we propose WRCFormer, a novel 3D object detection framework that efficiently fuses raw 4D radar cubes with camera images via decoupled multi-view radar representations. Our approach introduces two key components: (1) A Wavelet Attention Module embedded in a wavelet-based Feature Pyramid Network (FPN), which enhances the representation of sparse radar signals and image data by capturing joint spatial-frequency features, thereby mitigating information loss while maintaining computational efficiency. (2) A Geometry-guided Progressive Fusion mechanism, a two-stage query-based fusion strategy that progressively aligns multi-view radar and visual features through geometric priors, enabling modality-agnostic and efficient integration without overwhelming computational overhead. Extensive experiments on the K-Radar benchmark show that WRCFormer achieves state-of-the-art performance, surpassing the best existing model by approximately 2.4% in all scenarios and 1.6% in sleet conditions, demonstrating strong robustness in adverse weather.