Machine learning approaches for fringe projection profilometry (FPP) are hindered by the lack of large, diverse datasets and comprehensive benchmarking protocols. This paper introduces the first open-source, photorealistic synthetic dataset for FPP, generated using NVIDIA Isaac Sim with 15,600 fringe images and 300 depth reconstructions across 50 diverse objects. We benchmark four neural network architectures (UNet, Hformer, ResUNet, Pix2Pix) on single-shot depth reconstruction, revealing that all models achieve similar performance (58-77 mm RMSE) despite substantial architectural differences. Our results demonstrate fundamental limitations of direct fringe-to-depth mapping without explicit phase information, with reconstruction errors approaching 75-95\% of the typical object depth range. This resource provides standardized evaluation protocols enabling systematic comparison and development of learning-based FPP approaches.
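As a minimal illustration of the kind of standardized evaluation such a benchmark requires, the sketch below computes masked depth RMSE between a predicted and a ground-truth depth map; the shapes, units, and error model are invented placeholders, not the dataset's actual format.

```python
import numpy as np

def depth_rmse(pred, gt, valid=None):
    """RMSE in the depth units of gt, restricted to valid pixels."""
    if valid is None:
        valid = np.isfinite(gt)
    err = pred[valid] - gt[valid]
    return float(np.sqrt(np.mean(err ** 2)))

rng = np.random.default_rng(7)
gt = rng.uniform(500.0, 580.0, size=(480, 640))      # ground-truth depth [mm]
pred = gt + rng.normal(0.0, 65.0, size=gt.shape)     # toy predictor with ~65 mm error
print("RMSE [mm]:", round(depth_rmse(pred, gt), 1))
```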
Medical image fusion integrates complementary information from multiple imaging modalities to improve clinical interpretation. However, existing deep learning-based methods, including recent spatial-frequency frameworks such as AdaFuse and ASFE-Fusion, often suffer from a fundamental trade-off between global statistical similarity, measured by correlation coefficient (CC) and mutual information (MI), and local structural fidelity. This paper proposes W-DUALMINE, a reliability-weighted dual-expert fusion framework designed to explicitly resolve this trade-off through architectural constraints and a theoretically grounded loss design. The proposed method introduces dense reliability maps for adaptive modality weighting, a dual-expert fusion strategy combining a global-context spatial expert and a wavelet-domain frequency expert, and a soft gradient-based arbitration mechanism. Furthermore, we employ a residual-to-average fusion paradigm that guarantees the preservation of global correlation while enhancing local details. Extensive experiments on CT-MRI, PET-MRI, and SPECT-MRI datasets demonstrate that W-DUALMINE consistently outperforms AdaFuse and ASFE-Fusion in CC and MI metrics while preserving local structural fidelity.
Full-duplex communication substantially enhances spectral efficiency by enabling simultaneous transmission and reception on the same time-frequency resources. However, its practical deployment remains hindered by strong residual self-interference and inter-user interference, which severely degrade system performance. This work investigates a full-duplex MISO network that leverages movable-antenna base stations (MA-BS) and movable-element reconfigurable intelligent surfaces (ME-RIS) to overcome these limitations in next-generation 6G systems. Unlike conventional fixed-geometry architectures, the proposed framework jointly optimizes antenna and RIS element positions, together with RIS phase shifts, to strengthen desired links while suppressing interference. Our design objective is to maximize the system sum rate through the joint optimization of transmit and receive beamforming vectors, uplink transmit powers, RIS phase shifts, and the spatial locations of both the BS antennas and RIS elements. To solve this challenging nonconvex problem, an alternating optimization algorithm is developed, employing semidefinite relaxation for beamforming design and successive convex approximation for position optimization. Simulation results demonstrate that the proposed ME-RIS-assisted architecture with movable BS antennas offers substantial gains over conventional fixed-position full-duplex networks. These findings highlight the potential of integrating movable antennas with movable RIS elements as a key enabler for high-performance full-duplex operation in future 6G wireless systems.
The cell-free networking paradigm constitutes a revolutionary architecture for future generations of wireless networks, and has recently been considered in synergy with Reconfigurable Intelligent Surfaces (RISs), a promising physical-layer technology for signal propagation programmability. In this paper, we focus on wideband cell-free multi-RIS-empowered Multiple-Input Single-Output (MISO) systems and present a decentralized cooperative active and passive beamforming scheme, aiming to provide an efficient alternative to the cooperation overhead of available centralized schemes that depend on a central processing unit. Considering imperfect channel information availability and the realistic frequency-selective behavior of each RIS element's response, we devise a distributed optimization approach based on consensus updates for the RISs' phase configurations. Our simulation results showcase that the proposed distributed design is superior to centralized schemes that are based on various Lorentzian-type wideband modeling approaches for the RISs.
Ultra-High Field MRI (UHF-MRI) is increasingly used in large-scale neuroimaging studies, yet automatic brain segmentation and cortical parcellation remain challenging due to signal inhomogeneities, heterogeneous contrasts and resolutions, and the limited availability of tools optimized for UHF data. Standard software packages such as FastSurferVINN and SynthSeg+ often yield suboptimal results when applied directly to UHF images, thereby restricting region-based quantitative analyses. To address this need, we introduce GOUHFI 2.0, an updated implementation of GOUHFI that incorporates increased training data variability and additional functionalities, including cortical parcellation and volumetry. GOUHFI 2.0 preserves the contrast- and resolution-agnostic design of the original toolbox while introducing two independently trained 3D U-Net segmentation tasks. The first performs whole-brain segmentation into 35 labels across contrasts, resolutions, field strengths and populations, using a domain-randomization strategy and a training dataset of 238 subjects. Using the same training data, the second network performs cortical parcellation into 62 labels following the Desikan-Killiany-Tourville (DKT) protocol. Across multiple datasets, GOUHFI 2.0 demonstrated improved segmentation accuracy relative to the original toolbox, particularly in heterogeneous cohorts, and produced reliable cortical parcellations. In addition, the integrated volumetry pipeline yielded results consistent with standard volumetric workflows. Overall, GOUHFI 2.0 provides a comprehensive solution for brain segmentation, parcellation and volumetry across field strengths, and constitutes the first deep-learning toolbox enabling robust cortical parcellation at UHF-MRI.
We present the Universal Latent Homeomorphic Manifold (ULHM), a framework that unifies semantic representations (e.g., human descriptions, diagnostic labels) and observation-driven machine representations (e.g., pixel intensities, sensor readings) into a single latent structure. Despite originating from fundamentally different pathways, both modalities capture the same underlying reality. We establish \emph{homeomorphism}, a continuous bijection preserving topological structure, as the mathematical criterion for determining when latent manifolds induced by different semantic-observation pairs can be rigorously unified. This criterion provides theoretical guarantees for three critical applications: (1) semantic-guided sparse recovery from incomplete observations, (2) cross-domain transfer learning with verified structural compatibility, and (3) zero-shot compositional learning via valid transfer from semantic to observation space. Our framework learns continuous manifold-to-manifold transformations through conditional variational inference, avoiding brittle point-to-point mappings. We develop practical verification algorithms, including trust, continuity, and Wasserstein distance metrics, that empirically validate homeomorphic structure from finite samples. Experiments demonstrate: (1) sparse image recovery from 5\% of CelebA pixels and MNIST digit reconstruction at multiple sparsity levels, (2) cross-domain classifier transfer achieving 86.73\% accuracy from MNIST to Fashion-MNIST without retraining, and (3) zero-shot classification on unseen classes achieving 89.47\% on MNIST, 84.70\% on Fashion-MNIST, and 78.76\% on CIFAR-10. Critically, the homeomorphism criterion correctly rejects incompatible datasets, preventing invalid unification and providing a feasible path toward principled decomposition of general foundation models into verified domain-specific components.
Medical imaging datasets often suffer from class imbalance and limited availability of pathology-rich cases, which constrains the performance of machine learning models for segmentation, classification, and vision-language tasks. To address this challenge, we propose POWDR, a pathology-preserving outpainting framework for 3D MRI based on a conditioned wavelet diffusion model. Unlike conventional augmentation or unconditional synthesis, POWDR retains real pathological regions while generating anatomically plausible surrounding tissue, enabling diversity without fabricating lesions. Our approach leverages wavelet-domain conditioning to enhance high-frequency detail and mitigate blurring common in latent diffusion models. We introduce a random connected mask training strategy to overcome conditioning-induced collapse and improve diversity outside the lesion. POWDR is evaluated on brain MRI using BraTS datasets and extended to knee MRI to demonstrate tissue-agnostic applicability. Quantitative metrics (FID, SSIM, LPIPS) confirm image realism, while diversity analysis shows significant improvement with random-mask training (cosine similarity reduced from 0.9947 to 0.9580; KL divergence increased from 0.00026 to 0.01494). Clinically relevant assessments reveal gains in tumor segmentation performance using nnU-Net, with Dice scores improving from 0.6992 to 0.7137 when adding 50 synthetic cases. Tissue volume analysis indicates no significant differences for CSF and GM compared to real images. These findings highlight POWDR as a practical solution for addressing data scarcity and class imbalance in medical imaging. The method is extensible to multiple anatomies and offers a controllable framework for generating diverse, pathology-preserving synthetic data to support robust model development.
The Access Traffic Steering, Switching, and Splitting (ATSSS) feature defined in the latest 3GPP Release enables traffic to flow over multiple access paths, achieving lower-latency end-to-end (E2E) delivery for 6G time-sensitive services. However, existing E2E multi-path operation often falls short of the more stringent QoS requirements of such services. This paper proposes a Reconfigurable Intelligent Surface (RIS)-aided E2E multi-path uplink (UL) transmission architecture that explicitly accounts for both radio link latency and N3 backhaul latency, via the coupled design of the UL traffic-splitting ratio, transmit power, receive combining, and RIS phase shifts under practical constraints to minimize the average E2E latency. We develop an alternating optimization framework that iteratively updates these target parameters. Simulations show that the proposed E2E optimization framework lowers the average E2E latency by up to 43% for a single user and 15% for the whole system compared with baselines.
Vision Transformers (ViTs) have gained rapid adoption in computational pathology for their ability to model long-range dependencies through self-attention, addressing the limitations of convolutional neural networks that excel at local pattern capture but struggle with global contextual reasoning. Recent pathology-specific foundation models have further advanced performance by leveraging large-scale pretraining. However, standard ViTs remain inherently non-equivariant to transformations such as rotations and reflections, which are ubiquitous variations in histopathology imaging. To address this limitation, we propose Equi-ViT, which integrates an equivariant convolution kernel into the patch embedding stage of a ViT architecture, imparting built-in rotational equivariance to learned representations. Equi-ViT achieves superior rotation-consistent patch embeddings and stable classification performance across image orientations. Our results on a public colorectal cancer dataset demonstrate that incorporating equivariant patch embedding enhances data efficiency and robustness, suggesting that equivariant transformers could potentially serve as more generalizable backbones for the application of ViT in histopathology, such as digital pathology foundation models.
This letter proposes a block sparse Bayesian learning (BSBL) algorithm for direction-of-arrival (DOA) estimation of non-circular (NC) signals, which is suitable for arbitrary unknown NC phases. The block sparse NC signal representation model is constructed through a permutation strategy, capturing the available intra-block structure information to enhance recovery performance. We then construct the sparse probability model and derive the cost function under the BSBL framework. Finally, the fast marginal likelihood maximization (FMLM) algorithm is introduced, enabling rapid signal recovery through the addition and removal of basis functions. Simulation results demonstrate the effectiveness and superior performance of the proposed method.
We propose a probabilistic semantic filtering framework in which parameters of a dynamical system are inferred and associated with a closed set of semantic classes in a map. We extend existing methods to a multi-parameter setting using a posterior that tightly couples semantics with the parameter likelihoods, and propose a filter to compute this posterior sequentially, subject to dynamics in the map's state. Using Bayesian moment matching, we show that the computational complexity of measurement updates scales linearly in the dimension of the parameter space. Finally, we demonstrate limitations of applying existing methods to a problem from the driving domain, and show that the proposed framework better captures time-varying parameter-to-semantic associations.
Grant-free (GF) access is essential for massive connectivity but faces collision risks due to uncoordinated transmissions. While user-side sensing can mitigate these collisions by enabling autonomous transmission decisions, conventional methods become ineffective in overloaded scenarios where active streams exceed receive antennas. To address this problem, we propose a differential stream sensing framework that reframes the problem from estimating the total stream count to isolating newly activated streams via covariance differencing. We analyze the covariance deviation induced by channel variations to establish a theoretical bound based on channel correlation for determining the sensing window size. To mitigate residual interference from finite sampling, a deep learning (DL) classifier is integrated. Simulations across both independent and identically distributed flat Rayleigh fading and standardized channel environments demonstrate that the proposed method consistently outperforms non-DL baselines and remains robust in overloaded scenarios.
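To make the covariance-differencing idea concrete, here is a toy numpy sketch under an overloaded i.i.d. flat Rayleigh setup; antenna counts, window lengths, and noise levels are illustrative assumptions, not the paper's configuration, and the DL classifier stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
M, K_old, N = 4, 5, 2000        # rx antennas, active streams (overloaded), snapshots

def rayleigh(m, k):
    return (rng.standard_normal((m, k)) + 1j * rng.standard_normal((m, k))) / np.sqrt(2)

H_old, h_new = rayleigh(M, K_old), rayleigh(M, 1)

def sample_cov(H, noise_var=0.1):
    """Sample covariance of Y = H S + W over N snapshots."""
    S = rayleigh(H.shape[1], N)
    W = np.sqrt(noise_var) * rayleigh(M, N)
    Y = H @ S + W
    return (Y @ Y.conj().T) / N

R_before = sample_cov(H_old)                        # window before activation
R_after = sample_cov(np.hstack([H_old, h_new]))     # window after activation

# Differencing isolates the newly activated stream: ideally a rank-one update,
# detectable even though the total stream count exceeds the antenna count.
D = R_after - R_before
eigvals = np.sort(np.linalg.eigvalsh(D))[::-1]
print("leading eigenvalues of covariance difference:", np.round(eigvals[:3], 3))
```

A single dominant positive eigenvalue flags the new stream; residual eigenvalues shrink as the window length grows, which is what the theoretical window-size bound and the DL classifier in the paper address.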
The growing adoption of sensor-rich intelligent systems has boosted the use of multi-modal sensing to improve wireless communications. However, traditional methods require extensive manual design of data preprocessing, network architecture, and task-specific fine-tuning, which limits both development scalability and real-world deployment. To address this, we propose WiFo-M$^2$, a foundation model that can be easily plugged into existing deep learning-based transceivers for universal performance gains. To extract generalizable out-of-band (OOB) channel features from multi-modal sensing, we introduce ContraSoM, a contrastive pre-training strategy. Once pre-trained, WiFo-M$^2$ infers future OOB channel features from historical sensor data and strengthens feature robustness via modality-specific data augmentation. Experiments show that WiFo-M$^2$ improves performance across multiple transceiver designs and demonstrates strong generalization to unseen scenarios.
Accurate precoding in massive multiple-input multiple-output (MIMO) frequency-division duplexing (FDD) systems relies on efficient channel state information (CSI) acquisition. End-to-end learning frameworks improve performance by jointly optimizing this process, but they lack scalability and fail to generalize across different system configurations, such as varying numbers of antennas and users. To overcome this limitation, we introduce WiFo-E, a wireless foundation model designed for scalable end-to-end precoding. WiFo-E employs multi-task pretraining on a diverse set of configurations to learn transferable representations of underlying wireless principles. Central to the model is a sparse Mixture-of-Experts (MoE) Transformer architecture, which mitigates task interference and enhances training efficiency by activating specialized parameter subsets adaptively. Extensive simulations demonstrate that WiFo-E outperforms conventional per-configuration training and shows strong generalization to unseen system configurations, providing a flexible and efficient foundation for adaptive massive MIMO precoding.
This paper proposes a novel paradigm centered on Artificial Intelligence (AI)-empowered propagation channel prediction to address the limitations of traditional channel modeling. We present a comprehensive framework that deeply integrates heterogeneous environmental data and physical propagation knowledge into AI models for site-specific channel prediction, which we refer to as channel inference. By leveraging AI to infer site-specific wireless channel states, the proposed paradigm enables accurate prediction of channel characteristics at both link and area levels, capturing the spatio-temporal evolution of radio propagation. Several novel strategies to realize the paradigm are introduced and discussed, including AI-native and AI-hybrid inference approaches. This paper also investigates how to enhance model generalization through transfer learning and improve interpretability via explainable AI techniques. Our approach demonstrates significant practical efficacy, achieving an average path loss prediction root mean square error (RMSE) of $\sim$ 4 dB and reducing training time by 60\%-75\%. This new modeling paradigm provides a foundational pathway toward high-fidelity, generalizable, and physically consistent propagation channel prediction for future communication networks.
This paper presents an adaptive observer design for semilinear hyperbolic rolling contact ODE-PDE systems with uncertain friction characteristics parameterized by a matrix of unknown coefficients appearing in the nonlinear (and possibly non-smooth) PDE source terms. Under appropriate assumptions of forward completeness and boundary sensing, an adaptive observer is synthesized to simultaneously estimate the lumped and distributed states, as well as the uncertain friction parameters, using only boundary measurements. The observer combines a finite-dimensional parameter estimator with an infinite-dimensional description of the state error dynamics, and achieves exponential convergence under persistent excitation. The effectiveness of the proposed design is demonstrated in simulation by considering a relevant example borrowed from road vehicle dynamics.
Characterizing the Age of Information (AoI) in status updating systems with general arrival and service processes is of great significance, since the interarrival and service times of updates can be arbitrary in real-world systems. While expressions for the average continuous AoI under G/G/1/1 queues have been derived in the paper by Soysal and Ulukus, the discrete case remained unsolved. To address this, this paper gives a full characterization of the probability generating function (PGF) of discrete AoI under G/G/1/1 settings when preemption is allowed. In the non-preemptive case, this paper gives expressions for the PGF of discrete AoI under G/Geo/1/1 settings, which also extends the former results. The average discrete AoI is derived and discussed based on these new theoretical findings.
A central aspect of every pulsed radar signal processor is the estimation of targets' Range-Doppler parameters within a Coherent Processing Interval. Conventional methods typically rely on simplifying assumptions, such as linear target motion, narrowband operation, or constant velocity, to enable fast computation. However, these assumptions break down in scenarios involving quadratic range-time behavior, high radial velocities or accelerations, or wideband signals, leading to undesired effects such as intra-pulse Doppler shift/stretch and target migration across Range-Doppler cells. This paper presents a generalized waveform-independent Range-Doppler compression approach that compensates for these effects while maintaining minimal Signal-to-Noise-Ratio loss and practical computational efficiency. The performance limits of the proposed method are analyzed and expressed through a unified metric that depends on both scene and system parameters. Comparison with other approaches is presented, showing their estimation bias and performance degradation.
Reliable river flow forecasting is an essential component of flood risk management and early warning systems. It enables improved emergency response coordination and is critical for protecting infrastructure, communities, and ecosystems from extreme hydrological events. Process-based hydrological models and purely data-driven approaches often underperform during extreme events, particularly in forecasting peak flows. To address this limitation, this study introduces a hybrid forecasting framework that couples Extreme Gradient Boosting (XGBoost) and Random Forest (RF). XGBoost is employed for continuous streamflow forecasting, while RF is specifically trained for peak-flow prediction, and the two outputs are combined into an enhanced forecast. The approach is implemented across 857 catchments of the LamaH-CE dataset, using rainfall and discharge observations at 6-hour resolution. Results demonstrate consistently high skill, with 71% of catchments achieving a Kling-Gupta Efficiency (KGE) greater than 0.90. Peak-flow detection reaches 87%, with a false-alarm rate of 13%. Compared to the European Flood Awareness System (EFAS), the framework achieves lower peak-magnitude errors, fewer false alarms, and improved streamflow and peak-flow forecasting accuracy. The proposed framework is computationally lightweight, scalable, and easily transferable across watersheds, with training times of only seconds on standard CPUs. These findings highlight the potential of integrating hydrological understanding with efficient machine learning to improve the accuracy and reliability of operational flood forecasting, and outline future directions for hybrid hydrological-machine learning model development.
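A minimal sketch of the hybrid pattern described above, using synthetic features in place of LamaH-CE data (requires the xgboost and scikit-learn packages); the blending rule and the quantile used to define "peaks" are invented for illustration, not the paper's choices.

```python
import numpy as np
from xgboost import XGBRegressor
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 5000
X = rng.standard_normal((n, 8))                      # e.g., lagged rainfall/discharge features
y = np.exp(X[:, 0]) + 0.1 * rng.standard_normal(n)   # skewed "streamflow" target

X_tr, X_te, y_tr, y_te = X[:4000], X[4000:], y[:4000], y[4000:]

# Continuous-flow model on all samples.
xgb = XGBRegressor(n_estimators=200, max_depth=4).fit(X_tr, y_tr)

# Peak-flow expert: trained only on high-flow samples (here, the top 10%).
peak_mask = y_tr >= np.quantile(y_tr, 0.9)
rf = RandomForestRegressor(n_estimators=200).fit(X_tr[peak_mask], y_tr[peak_mask])

base = xgb.predict(X_te)
peak = rf.predict(X_te)
# Blend: trust the peak expert where the base forecast is already high.
thresh = np.quantile(y_tr, 0.9)
y_hat = np.where(base >= thresh, 0.5 * base + 0.5 * peak, base)
print("RMSE:", round(float(np.sqrt(np.mean((y_hat - y_te) ** 2))), 3))
```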
Practical aspects of orthogonal time frequency space (OTFS), such as channel estimation and its performance in fractional delay-Doppler (DD) channels, are a lively topic in the OTFS community. Oversampling and pulse shaping are also discussed in the existing literature, but not in the context of channel estimation. To the best of our knowledge, this paper is the first to address the problem of data-to-pilot and vice versa energy leakage caused by oversampling and pulse shaping in OTFS. Theoretical analysis is performed on an oversampled, pulse-shaped OTFS implementing the embedded pilot channel estimation technique, revealing a trade-off between the amount of energy leakage and excess bandwidth introduced by the pulse shape. Next, a novel variant of OTFS is introduced, called UW-OTFS, which is designed to overcome the leakage problem by placing the pilot in the oversampled time domain instead of the DD domain. The unique structure of UW-OTFS offers 36 percent higher spectral efficiency than the OTFS with embedded pilot. UW-OTFS also outperforms traditional OTFS in terms of bit error ratio and out-of-band emissions.
Large-scale LED lighting systems degrade through gradual package degradation and abrupt driver outages, while acceptability is determined by spatio-temporal illuminance compliance rather than component reliability alone. This paper proposes a performance-driven, simulation-in-the-loop framework for opportunistic maintenance optimization of LED lighting systems. LED package degradation is modeled by a semi-physical non-homogeneous Gamma process whose mean follows an exponential lumen-maintenance trend, and driver outages are described by a Weibull lifetime model. Parameters are calibrated from LM-80 accelerated degradation data via Bayesian inference, enabling uncertainty propagation to operating conditions. System performance is evaluated using ray-tracing-based illuminance mapping, and static indices (average illuminance and uniformity) are converted into a long-term dynamic deficiency-ratio metric via performance-deficiency durations over event intervals. To enable scalable Monte Carlo policy evaluation and search, a surrogate-based performance mapping replaces repeated ray-tracing with negligible loss of fidelity. An opportunistic policy is optimized in a multi-objective setting to balance performance deficiency, site visits, and replacements. A case study demonstrates the practicality of the framework and the resulting Pareto trade-offs for maintenance decision support.
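The Monte Carlo logic of the two failure modes can be sketched as follows, with all parameter values (lumen-maintenance rate, Gamma-process rate, Weibull shape/scale, L70 criterion) invented for illustration rather than calibrated from LM-80 data as in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(0.0, 50001.0, 1000.0)        # operating hours
n_paths = 1000

# Mean fractional lumen loss follows an exponential lumen-maintenance trend.
a = 1e-5
m = 1.0 - np.exp(-a * t)

# Non-homogeneous Gamma process: independent Gamma increments whose shape
# tracks the increments of m(t); 'rate' controls unit-to-unit scatter.
rate = 200.0
dm = np.maximum(np.diff(m, prepend=0.0), 1e-12)
loss_paths = np.cumsum(
    rng.gamma(shape=dm * rate, scale=1.0 / rate, size=(n_paths, t.size)), axis=1)
lumen = 1.0 - loss_paths                   # relative luminous flux per LED

# Abrupt driver outages: Weibull lifetimes.
k_shape, lam = 1.8, 60000.0
driver_life = lam * rng.weibull(k_shape, size=n_paths)

# Component failure = lumen maintenance below L70 or driver outage, whichever first.
l70 = np.array([t[np.argmax(p < 0.7)] if (p < 0.7).any() else np.inf for p in lumen])
fail_time = np.minimum(l70, driver_life)
print("median time to component failure [h]:", round(float(np.median(fail_time))))
```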
Cell-Free Multiple-Input Multiple-Output (MIMO) and Open Radio Access Network (O-RAN) have been active research topics in the wireless communication community in recent years. As an open-source software implementation of the 3rd Generation Partnership Project (3GPP) 5th Generation (5G) protocol stack, OpenAirInterface (OAI) has become a valuable tool for deploying and testing new ideas in wireless communication systems. In this paper, we present our OAI-based real-time uplink Multi-User MIMO (MU-MIMO) testbed developed at Fraunhofer HHI. As part of our Cell-Free MIMO testbed development, we built a 2x2 MU-MIMO system using general-purpose computers and commercially available software defined radios (SDRs). Using a modified OAI next-Generation Node-B (gNB) and two unmodified OAI user equipments (UEs), we show that it is feasible to use Sounding Reference Signal (SRS) channel estimates to compute uplink combiners. Our results verify that this method can be used to separate and decode signals from two users transmitting in non-orthogonal time-frequency resources. This work serves as an important verification step toward a complete Cell-Free MU-MIMO system that leverages time division duplexing (TDD) reciprocity to perform downlink beamforming over multiple cells.
The unprecedented surge of massive Internet of things (mIoT) traffic in beyond fifth generation (B5G) communication systems calls for transformative approaches for multiple access and data transmission. While classical model-based tools have been proven to be powerful and precise, an imminent trend for resource management in B5G networks is promoting solutions towards data-driven design. Considering an IoT network with devices spread in clusters covered by a base station, we present in this paper a novel model-free multiple access and data transmission framework empowered by reinforcement learning, designed for power-domain non-orthogonal multiple access networks to facilitate uplink traffic of small data packets. The framework supports two access modes referred to as contention-based and semi-contention-free, with its core component being a policy gradient algorithm executed at the base station. The base station performs access control and optimal radio resource allocation by periodically broadcasting two control parameters to each cluster of devices that considerably reduce data detection failures with a minimum computation requirement on devices. Numerical results, in terms of system and cluster throughput, throughput fairness, access delay, and energy consumption, demonstrate the efficiency and scalability of the framework as network size and traffic load vary.
This research sets forth a universal framework to characterize the beamforming gain achievable with arbitrarily nonideal phase shifters. Precisely, the maximum possible shortfall relative to the gain attainable with ideal phase shifters is established. Such shortfall is shown to be fundamentally determined by the perimeter of the convex hull of the set of feasible beamforming coefficients on the complex plane. This result holds regardless of whether the beamforming is at the transmitter, at the receiver, or at a reconfigurable intelligent surface. In i.i.d. fading channels, the shortfall hardens to the maximum possible shortfall as the number of antennas grows.
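A quick numeric sketch of the shortfall and its hardening, using b-bit quantized phase shifters as one example of a nonideal feasible set; the setup is a generic illustration, not the paper's convex-hull analysis.

```python
import numpy as np

rng = np.random.default_rng(3)
b = 2                                        # phase-shifter resolution in bits
step = 2 * np.pi / 2**b
for N in (16, 64, 256, 1024):
    h = (rng.standard_normal((500, N)) + 1j * rng.standard_normal((500, N))) / np.sqrt(2)
    ideal = np.abs(h).sum(axis=1) ** 2                   # perfect co-phasing gain
    phase_q = np.round(np.angle(h) / step) * step        # nearest feasible phase
    quant = np.abs((h * np.exp(-1j * phase_q)).sum(axis=1)) ** 2
    print(f"N={N:5d}  mean gain ratio (quantized/ideal): {np.mean(quant / ideal):.4f}")
# As N grows, the ratio hardens toward (sin(pi/2^b) / (pi/2^b))^2, about 0.81
# for b = 2, i.e., the shortfall concentrates at a channel-independent limit.
```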
This paper proposes a two-scale spatial deployment strategy to ensure reliable coverage for multiple target areas, integrating macroscopic intelligent reflecting surfaces (IRSs) and fine-grained movable antennas (MAs). Specifically, IRSs are selectively deployed from candidate sites to shape the propagation geometry, while MAs are locally repositioned among discretized locations to exploit small-scale channel variations. The objective is to minimize the total deployment cost of MAs and IRSs by jointly optimizing the IRS site selection, MA positions, transmit precoding, and IRS phase shifts, subject to the signal-to-noise ratio (SNR) requirements for all target areas. This leads to a challenging mixed-integer non-convex optimization problem that is intractable to solve directly. To address this, we first formulate an auxiliary problem to verify the feasibility. A penalty-based double-loop algorithm integrating alternating optimization and successive convex approximation (SCA) is developed to solve this feasibility issue, which is subsequently adapted to obtain a suboptimal solution for the original cost minimization problem. Finally, based on the obtained solution, we formulate an element refinement problem to further reduce the deployment cost, which is solved by a penalty-based SCA algorithm. Simulation results demonstrate that the proposed designs consistently outperform benchmarks relying on independent area planning or full IRS deployment in terms of cost-efficiency. Moreover, for cost minimization, MA architectures are preferable in large placement apertures, whereas fully populated FPA architectures excel in compact ones; for worst-case SNR maximization, MA architectures exhibit a lower cost threshold for feasibility, while FPA architectures can attain peak SNR at a lower total cost.
This paper presents an echo-side modulation framework for integrated sensing and communication (ISAC) systems. A space-time reconfigurable intelligent surface (ST-RIS) impresses a continuous-phase modulation onto the radar echo, enabling uplink data transmission with a phase modulation of the transmitted radar-like waveform. The received signal is a multiplicative composition of the sensing waveform and the phase for communication. Both functionalities share the same physical signal and perceive each other as impairments. The achievable communication rate is expressed as a function of a coupling parameter that links sensing accuracy to phase error accumulation. Under a fixed bandwidth constraint, the sensing and communication figures of merit define a convex Pareto frontier. The optimal bandwidth allocation satisfying a minimum sensing requirement is derived in closed form. The modified Cramer-Rao bound (MCRB) for range estimation is derived in closed form; this parameter must be estimated to compensate for the frequency offset before data demodulation. Frame synchronization is formulated as a generalized likelihood ratio test (GLRT), and the detection probability is obtained through characteristic function inversion, accounting for residual frequency errors from imperfect range estimation. Numerical results validate the theoretical bounds and characterize the trade-off across the operating range.
The energy transition challenges operational tasks that rely on simulation and optimisation: these computations must be fast and flexible, as the grid is ever-expanding and renewables' uncertainty demands a flexible operational environment. Learned approximations, proxies or surrogates -- we refer to them as Neural Solvers -- excel in terms of evaluation speed, but are inflexible with respect to adjusting to changing tasks. Hence, neural solvers are usually applicable only to highly specific tasks, which limits their usefulness in practice; a widely reusable, foundational neural solver is required. Therefore, this work proposes the Residual Power Flow (RPF) formulation. RPF formulates residual functions based on Kirchhoff's laws to quantify the infeasibility of an operating condition. The minimisation of the residuals determines the voltage solution; an additional slack variable is needed to achieve AC-feasibility. RPF forms a natural, foundational subtask of tasks subject to power flow constraints. We propose to learn RPF with neural solvers to exploit their speed. Furthermore, RPF improves learning performance compared to common power flow formulations. To solve operational tasks, we integrate the neural solver in a Predict-then-Optimise (PO) approach to combine speed and flexibility. The case study investigates the IEEE 9-bus system and three tasks (AC Optimal Power Flow (OPF), power flow and quasi-steady-state power flow) solved by PO. The results demonstrate the accuracy and flexibility of learning with RPF.
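To ground the residual idea, the toy sketch below evaluates Kirchhoff-based complex power mismatches for a made-up 3-bus network (not the IEEE 9-bus case); a neural solver trained on RPF would learn to map operating conditions to voltages that drive this residual (plus a slack term) toward zero.

```python
import numpy as np

# Toy bus admittance matrix (symmetric, diagonally dominant) and specified
# injections; bus 0 acts as the slack bus.
Y = np.array([[ 10-30j, -5+15j, -5+15j],
              [ -5+15j, 10-30j, -5+15j],
              [ -5+15j, -5+15j, 10-30j]])
S_spec = np.array([0.0+0.0j, -0.9-0.3j, -0.5-0.2j])

def residual(V):
    """Per-bus complex power mismatch; zero iff V satisfies the power flow."""
    S_calc = V * np.conj(Y @ V)
    return S_calc - S_spec

V = np.ones(3, dtype=complex)                 # flat start
r = residual(V)
print("infeasibility (residual norm, non-slack buses):", np.linalg.norm(r[1:]))
```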
In this article, we propose a novel discretization method for continuous systems based on numerical integration, termed the $\alpha\beta$-approximation or Scalable Bilinear Transformation (SBT). In contrast to existing methods, the proposed method is characterized by two factors, i.e., a shape factor ($\alpha$) and a time factor ($\beta$). Depending on the discretization technique applied, we identify two primary distortion modes in discrete resonant controllers: frequency warping and resonance damping. We further provide a theoretical explanation for these distortion modes, and demonstrate that the proposed method outperforms typical existing methods. The proposed method is implemented to discretize a quasi-resonant (QR) controller on a control board, achieving a 25\% reduction in the root-mean-square error (RMSE) compared to the SOTA method. Finally, the approach is extended to discretizing a resonant controller of a grid-tied inverter. The efficacy of the proposed method is conclusively validated through favorable comparisons among theory, simulation, and experiments.
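For context on the first distortion mode, the sketch below discretizes an ideal resonant controller $R(s) = s/(s^2 + \omega_0^2)$ with the plain bilinear (Tustin) transform and with frequency prewarping, exposing the resonance shift that warping causes; this illustrates the baseline behavior the SBT generalizes, not the $\alpha\beta$ method itself.

```python
import numpy as np
from scipy import signal

f0, fs = 90.0, 1000.0                 # resonant frequency [Hz], sampling rate [Hz]
w0 = 2 * np.pi * f0

def resonant(w):
    """Ideal resonant controller R(s) = s / (s^2 + w^2)."""
    return [1.0, 0.0], [1.0, 0.0, w * w]

# Plain Tustin: the resonance lands below f0 because of frequency warping.
bz, az = signal.bilinear(*resonant(w0), fs=fs)

# Prewarped Tustin: shift the analog resonance so it maps exactly onto f0.
w0_pre = 2 * fs * np.tan(w0 / (2 * fs))
bzp, azp = signal.bilinear(*resonant(w0_pre), fs=fs)

for name, (bd, ad) in [("plain", (bz, az)), ("prewarped", (bzp, azp))]:
    w, h = signal.freqz(bd, ad, worN=1 << 16, fs=fs)
    print(f"{name:9s} Tustin: peak at {w[np.argmax(np.abs(h))]:6.2f} Hz (target {f0} Hz)")
```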
Advanced metering infrastructure (AMI) provides high-resolution electricity consumption data that can enhance monitoring, diagnosis, and decision making in modern power distribution systems. Detecting anomalies in these time-series measurements is challenging due to nonlinear, nonstationary, and multi-scale temporal behavior across diverse building types and operating conditions. This work presents a systematic, power-system-oriented evaluation of a GAN-LSTM framework for smart meter anomaly detection using the Large-scale Energy Anomaly Detection (LEAD) dataset, which contains one year of hourly measurements from 406 buildings. The proposed pipeline applies consistent preprocessing, temporal windowing, and threshold selection across all methods, and compares the GAN-LSTM approach against six widely used baselines, including statistical, kernel-based, reconstruction-based, and GAN-based models. Experimental results demonstrate that the GAN-LSTM significantly improves detection performance, achieving an F1-score of 0.89. These findings highlight the potential of adversarial temporal modeling as a practical tool for supporting asset monitoring, non-technical loss detection, and situational awareness in real-world power distribution networks. The code for this work is publicly available.
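The shared windowing-and-threshold-selection step can be sketched as follows, with a synthetic hourly series and a simple stand-in anomaly score in place of the GAN-LSTM; the window length, quantile grid, and injected anomaly are all illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(8)
hours = 24 * 60                                     # 60 days of hourly readings
series = np.sin(np.arange(hours) * 2 * np.pi / 24) + 0.1 * rng.standard_normal(hours)
labels = np.zeros(hours, dtype=int)
series[700:710] += 3.0                              # injected consumption anomaly
labels[700:710] = 1

W = 24                                              # one-day windows
windows = np.lib.stride_tricks.sliding_window_view(series, W)
score = np.abs(windows - windows.mean(axis=1, keepdims=True)).max(axis=1)
point_score = np.zeros(hours)
point_score[W - 1:] = score                         # align scores to window endpoints

# Threshold selection: sweep upper quantiles, keep the F1-maximizing cut.
thresholds = np.quantile(point_score, np.linspace(0.80, 0.999, 50))
f1s = [f1_score(labels, point_score >= th) for th in thresholds]
print("best F1 over the sweep:", round(max(f1s), 3))
```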
Balancing dialogue, music, and sound effects with accompanying video is crucial for immersive storytelling, yet current audio mixing workflows remain largely manual and labor-intensive. While recent advancements have introduced the visually guided acoustic highlighting task, which implicitly rebalances audio sources using multimodal guidance, it remains unclear which visual aspects are most effective as conditioning signals. We address this gap through a systematic study of whether deep video understanding improves audio remixing. Using textual descriptions as a proxy for visual analysis, we prompt large vision-language models to extract six types of visual-semantic aspects, including object and character appearance, emotion, camera focus, tone, scene background, and inferred sound-related cues. Through extensive experiments, we find that camera focus, tone, and scene background consistently yield the largest improvements in perceptual mix quality over state-of-the-art baselines. Our findings (i) identify which visual-semantic cues most strongly support coherent and visually aligned audio remixing, and (ii) outline a practical path toward automating cinema-grade sound design using lightweight guidance derived from large vision-language models.
This study investigates the use of computational audio analysis to examine ideological narratives in Nazi propaganda films. Employing a three-step pipeline, speaker diarization, audio transcription and psycholinguistic analysis, it reveals ideological patterns in characters. Despite current issues with speaker diarization, the methodology provides insights into character traits and propaganda narratives, suggesting scalable applications.
This work introduces ABE-VVS, a framework that performs attribute based selective coordinate encryption for point cloud based volumetric video streaming, enabling lightweight yet effective digital rights management (DRM). Rather than encrypting entire point cloud frames, our approach encrypts only selected subsets of coordinates ($X, Y, Z$, or combinations), lowering computational overhead and latency while still producing strong visual distortion that prevents meaningful unauthorized viewing. Our experiments show that encrypting only the $X$ coordinates achieves effective obfuscation while reducing encryption and decryption times by up to 50% and 80%, respectively, compared to full-frame encryption. To our knowledge, this is the first work to provide a novel end-to-end evaluation of a DRM-enabled secure point cloud streaming system. We deployed a point cloud video streaming setup on the CloudLab testbed and evaluated three HTTP-based Attribute-Based Encryption (ABE) granularities (ABE-XYZ, encrypting all $X,Y,Z$ coordinates; ABE-XY; and ABE-X) against conventional HTTPS/TLS secure streaming as well as an HTTP-only baseline without any security. Our streaming evaluation demonstrates that ABE-based schemes reduce server-side CPU load by up to 80% and cache CPU load by up to 63%, comparable to HTTP-only, while maintaining similar cache hit rates. Moreover, ABE-XYZ and ABE-XY exhibit lower client-side rebuffering than HTTPS, and ABE-X achieves zero rebuffering comparable to HTTP-only. Although ABE-VVS increases client-side CPU usage, the overhead is not large enough to affect streaming quality and is offset by its broader benefits, including simplified key revocation, elimination of per-client encryption, and reduced server and cache load.
We aim to investigate the impact of image and signal properties on visual attention mechanisms during a signal detection task in digital images. The insights yielded by this work span many areas of digital imaging where signal or pattern recognition is involved in complex heterogeneous backgrounds. We used simulated tomographic breast images as the platform to investigate this question. While radiologists are highly effective at analyzing medical images to detect and diagnose diseases, misdiagnosis still occurs. We selected digital breast tomosynthesis (DBT) images as sample medical images with different breast densities and structures, using digital breast phantoms (Bakic and XCAT). Two types of lesions (with distinct spatial frequency properties) were randomly inserted in the phantoms during projection to generate abnormal cases. Six human observers participated in an observer study designed for locating and detecting a 3-mm sphere lesion and a 6-mm spicule lesion in reconstructed in-plane DBT slices. We collected eye-gaze data to estimate gaze metrics and to examine differences in visual attention mechanisms. We found that detection performance in complex visual environments is strongly constrained by later perceptual stages, with decision failures accounting for the largest proportion of errors. Signal detectability is jointly influenced by both target morphology and background complexity, revealing a critical interaction between local signal features and global anatomical noise. Increased fixation duration on spiculated lesions suggests that visual attention is differentially engaged depending on background and signal spatial frequency dependencies.
Split Federated Learning (SFL) enables collaborative training between resource-constrained edge devices and a compute-rich server. Communication overhead is a central issue in SFL and can be mitigated with auxiliary networks. Yet, the fundamental client-side computation challenge remains, as back-propagation requires substantial memory and computation costs, severely limiting the scale of models that edge devices can support. To enable more resource-efficient client computation and reduce the client-server communication, we propose HERON-SFL, a novel hybrid optimization framework that integrates zeroth-order (ZO) optimization for local client training while retaining first-order (FO) optimization on the server. With the assistance of auxiliary networks, ZO updates enable clients to approximate local gradients using perturbed forward-only evaluations per step, eliminating memory-intensive activation caching and avoiding explicit gradient computation in the traditional training process. Leveraging the low effective rank assumption, we theoretically prove that HERON-SFL's convergence rate is independent of model dimensionality, addressing a key scalability concern common to ZO algorithms. Empirically, on ResNet training and language model (LM) fine-tuning tasks, HERON-SFL matches benchmark accuracy while reducing client peak memory by up to 64% and client-side compute cost by up to 33% per step, substantially expanding the range of models that can be trained or adapted on resource-limited devices.
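The client-side update can be illustrated with a two-point zeroth-order gradient estimator on a toy least-squares problem; the dimensions, smoothing radius, and learning rate are arbitrary choices, and the auxiliary-network and split-model machinery is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 20
w_star = rng.standard_normal(d)
X = rng.standard_normal((256, d))
y = X @ w_star                                   # synthetic regression targets

def loss(w):
    """Forward pass only: no activation caching, no back-propagation."""
    return np.mean((X @ w - y) ** 2)

w = np.zeros(d)
mu, lr = 1e-3, 5e-3                              # smoothing radius, step size
for step in range(501):
    u = rng.standard_normal(d)                   # random perturbation direction
    g = (loss(w + mu * u) - loss(w - mu * u)) / (2 * mu) * u   # two-point ZO estimate
    w -= lr * g
    if step % 100 == 0:
        print(f"step {step:3d}  loss {loss(w):.4f}")
```

Each update costs two forward evaluations and O(d) memory, which is the source of the client-side memory and compute savings the paper reports.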
The move to next-generation wireless communications with extremely large-scale antenna arrays (ELAAs) brings the communications into the radiative near-field (RNF) region, where distance-aware focusing is feasible. However, high-frequency RNF links are highly vulnerable to blockage in indoor environments dominated by half-space obstacles (walls, corners) that create knife-edge shadows. Conventional near-field focused beams offer high gain in line-of-sight (LoS) scenarios but suffer from severe energy truncation and effective-rank collapse in shadowed regions, making hardware remedies such as reconfigurable intelligent surfaces (RIS) impractical. We propose a beamforming strategy that exploits the auto-bending property of Airy beams to mitigate half-space blockage without additional hardware. The Airy beam is designed to ``ride'' the diffraction edge, accelerating its main lobe into the shadow to restore connectivity. Our contributions are threefold: (i) a Green's function-based RNF multi-user channel model that analytically reveals singular-value collapse behind knife-edge obstacles; (ii) an Airy analog beamforming scheme that optimizes the bending trajectory to recover the effective channel rank; and (iii) an Airy null-steering method that aligns oscillatory nulls with bright-region users to suppress interference in mixed shadow/bright scenarios. Simulations show that the proposed edge-riding Airy strategy achieves an SNR improvement of over 20 dB and restores full-rank connectivity in shadowed links compared to conventional RNF focusing, virtually eliminating outage in geometric shadows and increasing multi-user spectral efficiency by approximately 35\% under typical indoor ELAA configurations. These results demonstrate robust RNF multi-user access in half-space blockage scenarios without relying on RIS.
Purpose: The model allocates orders for system components to suppliers to minimize the parts price and the system construction delay penalties while maximizing system availability during use. It considers quantity-based discounts and the variation of delivery lead time when ordering similar components. The model also reflects the prerequisite relationships between construction activities and calculates the delay penalty resulting from parts delivery lead times. Design/methodology/approach: This research presents a model for selecting suppliers of components of an industrial series-parallel multi-state system. A nonlinear binary mathematical program uses Markov process results to select system components. It minimizes the total system construction phase costs, including the components' price and the system construction delay penalty, and the system exploitation phase costs, including system shutdown and operation at half capacity. Findings: The model allocates the optimal orders for the components of a typical industrial system composed of four elements. The proposed approach combines the nonlinear binary program with Markov process results to optimize system life cycle parameters, including the system construction cost and operational availability. Originality/value: By using Markov chain results in binary nonlinear mathematical programming, this study attempts to strike the right balance between the objectives of the construction phase and the operation phase of an industrial unit.
Satellite videos provide continuous observations of surface dynamics but pose significant challenges for multi-object tracking (MOT), especially under unstabilized conditions where platform jitter and the weak appearance of tiny objects jointly degrade tracking performance. To address this problem, we propose DeTracker, a joint detection-and-tracking framework tailored for unstabilized satellite videos. DeTracker introduces a Global--Local Motion Decoupling (GLMD) module that explicitly separates satellite platform motion from true object motion through global alignment and local refinement, leading to improved trajectory stability and motion estimation accuracy. In addition, a Temporal Dependency Feature Pyramid (TDFP) module is developed to perform cross-frame temporal feature fusion, enhancing the continuity and discriminability of tiny-object representations. We further construct a new benchmark dataset, SDM-Car-SU, which simulates multi-directional and multi-speed platform motions to enable systematic evaluation of tracking robustness under varying motion perturbations. Extensive experiments on both simulated and real unstabilized satellite videos demonstrate that DeTracker significantly outperforms existing methods, achieving 61.1% MOTA on SDM-Car-SU and 47.3% MOTA on real satellite video data.
The IEC 61850 Generic Object-Oriented Substation Event (GOOSE) protocol plays a critical role in real-time protection and automation of digital substations, yet its lack of native security mechanisms can expose power systems to sophisticated cyberattacks. Traditional rule-based and supervised intrusion detection techniques struggle to detect protocol-compliant and zero-day attacks under significant class imbalance and limited availability of labeled data. This paper proposes an explainable, unsupervised multi-view anomaly detection framework for IEC 61850 GOOSE networks that explicitly separates semantic integrity and temporal availability. The approach employs asymmetric autoencoders trained only on real operational GOOSE traffic to learn distinct latent representations of sequence-based protocol semantics and timing-related transmission dynamics in normal traffic. Anomaly detection is implemented using reconstruction errors combined with statistically grounded thresholds, enabling robust detection without prior specification of attack types. Feature-level reconstruction analysis provides intrinsic explainability by directly linking detection outcomes to IEC 61850 protocol characteristics. The proposed framework is evaluated using real substation traffic for training and a public dataset containing normal traffic and message suppression, data manipulation, and denial-of-service attacks for testing. Experimental results show attack detection rates above 99% with false positives remaining below 5% of total traffic, demonstrating strong generalization across environments, effective operation under extreme class imbalance, and interpretable anomaly attribution.
This technical report presents the construction and analysis of polynomial navigation functions for motion planning in 3-D workspaces populated by spherical and cylindrical obstacles. The workspace is modeled as a bounded spherical region, and obstacles are encoded using smooth polynomial implicit functions. We establish conditions under which the proposed navigation functions admit a unique non-degenerate minimum at the target while avoiding local minima, including in the presence of pairwise intersecting obstacles. Gradient and Hessian analyses are provided, and the theoretical results are validated through numerical simulations in obstacle-rich 3-D environments.
We introduce a voice-agentic framework that learns one critical omni-understanding skill: knowing when to trust itself versus when to consult external audio perception. Our work is motivated by a crucial yet counterintuitive finding: naively fine-tuning an omni-model on both speech recognition and external sound understanding tasks often degrades performance, as the model can be easily misled by noisy hypotheses. To address this, our framework, Speech-Hands, recasts the problem as an explicit self-reflection decision. This learnable reflection primitive proves effective in preventing the model from being derailed by flawed external candidates. We show that this agentic action mechanism generalizes naturally from speech recognition to complex, multiple-choice audio reasoning. Across the OpenASR leaderboard, Speech-Hands consistently outperforms strong baselines by 12.1% WER on seven benchmarks. The model also achieves 77.37% accuracy and high F1 on audio QA decisions, showing robust generalization and reliability across diverse audio question answering datasets. By unifying perception and decision-making, our work offers a practical path toward more reliable and resilient audio intelligence.
Efficiently optimizing battery charging protocols is challenging because each evaluation is slow, costly, and non-differentiable. Many existing approaches address this difficulty by heavily constraining the protocol search space, which limits the diversity of protocols that can be explored, preventing the discovery of higher-performing solutions. We introduce two gradient-free, LLM-driven closed-loop methods: Prompt-to-Optimizer (P2O), which uses an LLM to propose the code for small neural-network-based protocols, which are then trained by an inner loop, and Prompt-to-Protocol (P2P), which simply writes an explicit function for the current and its scalar parameters. Across our case studies, LLM-guided P2O outperforms neural networks designed by Bayesian optimization, evolutionary algorithms, and random search. In a realistic fast charging scenario, both P2O and P2P yield around a 4.2 percent improvement in state of health (capacity retention based health metric under fast charging cycling) over a state-of-the-art multi-step constant current (CC) baseline, with P2P achieving this under matched evaluation budgets (same number of protocol evaluations). These results demonstrate that LLMs can expand the space of protocol functional forms, incorporate language-based constraints, and enable efficient optimization in high cost experimental settings.
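As a minimal sketch of what a P2P-style protocol might look like, the function below maps state of charge to charging current through a few scalar parameters that an outer loop would tune; the functional form and all parameter values are invented, not taken from the paper.

```python
import numpy as np

def charge_current(soc, i_max=6.0, i_min=1.5, knee=0.6, sharp=10.0):
    """Current [A] vs. state of charge: high early, tapering past a knee."""
    return i_min + (i_max - i_min) / (1.0 + np.exp(sharp * (soc - knee)))

soc = np.linspace(0.0, 1.0, 5)
print(np.round(charge_current(soc), 2))   # tapered, multi-step-CC-like profile
```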
Dynamic spectrum sharing (DSS) among multi-operator low Earth orbit (LEO) mega-constellations is essential for coexistence, yet prevailing policies focus almost exclusively on interference mitigation, leaving geographic equity largely unaddressed. This work investigates whether conventional DSS approaches inadvertently exacerbate the rural digital divide. Through large-scale, 3GPP-compliant non-terrestrial network (NTN) simulations with geographically distributed users, we systematically evaluate standard allocation policies. The results uncover a stark and persistent structural bias: SNR-priority scheduling induces a 1.65$\times$ urban-rural access disparity, privileging users with favorable satellite geometry. Counter-intuitively, increasing system bandwidth amplifies rather than alleviates this gap, with disparity rising from 1.0$\times$ to 1.65$\times$ as resources expand. To remedy this, we propose FairShare, a lightweight, quota-based framework that enforces geographic fairness. FairShare not only reverses the bias, achieving an affirmative disparity ratio of $\Delta_{\mathrm{geo}} = 0.72\times$, but also reduces scheduler runtime by 3.3%. This demonstrates that algorithmic fairness can be achieved without trading off efficiency or complexity. Our work provides regulators with both a diagnostic metric for auditing fairness and a practical, enforceable mechanism for equitable spectrum governance in next-generation satellite networks.
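A toy sketch of the quota idea, contrasting SNR-priority allocation with a population-proportional regional reservation; user counts, SNR distributions, and the quota rule are invented and only meant to reproduce the qualitative disparity reversal.

```python
import numpy as np

rng = np.random.default_rng(5)
n_urban, n_rural, n_rb = 80, 40, 60      # users and resource blocks

snr = np.concatenate([rng.normal(15, 3, n_urban),    # favorable geometry
                      rng.normal(9, 3, n_rural)])    # rural: weaker links
region = np.array(["urban"] * n_urban + ["rural"] * n_rural)

def disparity(served):
    return np.mean(served[region == "urban"]) / np.mean(served[region == "rural"])

# SNR-priority: best channels first.
order = np.argsort(-snr)
served = np.zeros(snr.size, dtype=bool)
served[order[:n_rb]] = True
print("SNR-priority urban/rural disparity:", round(disparity(served), 2))

# Quota-based: reserve a rural share proportional to population first,
# then fill the remainder by SNR.
quota = int(n_rb * n_rural / (n_urban + n_rural))
served = np.zeros(snr.size, dtype=bool)
rural_idx = np.where(region == "rural")[0]
served[rural_idx[np.argsort(-snr[rural_idx])][:quota]] = True
rest = order[~served[order]][: n_rb - quota]
served[rest] = True
print("quota-based urban/rural disparity:", round(disparity(served), 2))
```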
An open problem in autonomous driving research is modeling human driving behavior, which is needed for the planning component of the autonomy stack, safety validation through traffic simulation, and causal inference for generating explanations for autonomous driving. Modeling human driving behavior is challenging because it is stochastic, high-dimensional, and involves interaction between multiple agents. This problem has been studied in various communities with a vast body of literature. Existing reviews have generally focused on one aspect: motion prediction. In this article, we present a unification of the literature that covers intent estimation, trait estimation, and motion prediction. This unification is enabled by modeling multi-agent driving as a partially observable stochastic game, which allows us to cast driver modeling tasks as inference problems. We classify driver models into a taxonomy based on the specific tasks they address and the key attributes of their approach. Finally, we identify open research opportunities in the field of driver modeling.
Regularization of control policies using entropy can be instrumental in adjusting predictability of real-world systems. Applications benefiting from such approaches range from, e.g., cybersecurity, which aims at maximal unpredictability, to human-robot interaction, where predictable behavior is highly desirable. In this paper, we consider entropy regularization for interval Markov decision processes (IMDPs). IMDPs are uncertain MDPs, where transition probabilities are only known to belong to intervals. Lately, IMDPs have gained significant popularity in the context of abstracting stochastic systems for control design. In this work, we address robust minimization of the linear combination of entropy and a standard cumulative cost in IMDPs, thereby establishing a trade-off between optimality and predictability. We show that optimal deterministic policies exist, and devise a value-iteration algorithm to compute them. The algorithm solves a number of convex programs at each step. Finally, through an illustrative example we show the benefits of penalizing entropy in IMDPs.
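The robust cumulative-cost backup can be sketched as follows for a toy IMDP, with the inner adversary solved greedily over the probability intervals; the entropy term from the paper is omitted here, and all intervals and costs are invented for illustration.

```python
import numpy as np

nS, nA, gamma = 3, 2, 0.9
lo = np.full((nS, nA, nS), 0.1)          # transition-interval lower bounds
hi = np.full((nS, nA, nS), 0.6)          # transition-interval upper bounds
cost = np.array([[1.0, 0.5], [0.2, 0.8], [0.0, 0.3]])

def worst_case(p_lo, p_hi, v):
    """Adversarial distribution maximizing p @ v within the intervals."""
    p = p_lo.copy()
    budget = 1.0 - p.sum()
    for s in np.argsort(-v):             # pour mass onto the costliest states
        add = min(p_hi[s] - p[s], budget)
        p[s] += add
        budget -= add
    return p

V = np.zeros(nS)
for _ in range(200):                     # robust value iteration
    Q = np.empty((nS, nA))
    for s in range(nS):
        for a in range(nA):
            p = worst_case(lo[s, a], hi[s, a], V)
            Q[s, a] = cost[s, a] + gamma * p @ V
    V = Q.min(axis=1)                    # deterministic greedy policy
print("robust values:", np.round(V, 3), " policy:", Q.argmin(axis=1))
```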
Physical-Layer Authentication (PLA) offers endogenous security, lightweight implementation, and high reliability, making it a promising complement to upper-layer security methods in Edge Intelligence (EI)-empowered Industrial Internet of Things (IIoT). However, state-of-the-art Channel State Information (CSI)-based PLA schemes face challenges in recognizing mobile multi-users due to the constantly shifting CSI distributions with user movements. To address this issue, we propose a Temporal Dynamic Graph Convolutional Network (TDGCN)-based PLA scheme, which employs Graph Neural Networks (GNNs) to capture the spatio-temporal dynamics induced by user movements. Firstly, we partition CSI fingerprints into multivariate time series and utilize dynamic GNNs to capture their associations. Secondly, Temporal Convolutional Networks (TCNs) handle temporal dependencies within each CSI fingerprint dimension. Additionally, Dynamic Graph Isomorphism Networks (GINs) and cascade node clustering pooling further enable efficient information aggregation and reduced computational complexity. Simulations demonstrate the proposed scheme's superior authentication accuracy compared to seven baseline schemes.
In many speech recording applications, noise and acoustic echo corrupt the desired speech. Consequently, combined noise reduction (NR) and acoustic echo cancellation (AEC) is required. Generally, a cascade approach is followed, i.e., the AEC and NR are designed in isolation by selecting a separate signal model, separate cost function, and separate solution strategy. The AEC and NR are then cascaded one after the other, not accounting for their interaction. In this paper, an integrated approach is proposed to consider this interaction in a general multi-microphone/multi-loudspeaker setup. To this end, a single signal model of either the microphone signal vector or the extended signal vector, obtained by stacking microphone and loudspeaker signals, is selected, a single mean squared error cost function is formulated, and a common solution strategy is used. Using this microphone signal model, a multi-channel Wiener filter (MWF) is derived. Using the extended signal model, it is shown that an extended MWF (MWFext) can be derived, and several equivalent expressions can be found, which are nevertheless shown to be interpretable as cascade algorithms. Specifically, the MWFext is shown to be equivalent to algorithms where the AEC precedes the NR (AEC-NR), the NR precedes the AEC (NR-AEC), and the extended NR (NRext) precedes the AEC and post-filter (PF) (NRext-AEC-PF). Under rank-deficiency conditions the MWFext is non-unique. Equivalence then amounts to the expressions being specific, though not necessarily minimum-norm, solutions for this MWFext. The practical performances differ due to non-stationarities and imperfect correlation matrix estimation, with the AEC-NR and NRext-AEC-PF attaining the best overall performance.
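As a pointer to what the common solution strategy computes, here is a minimal single-frequency-bin sketch of an MWF estimated from sample statistics on synthetic data; the extended MWFext would simply stack the loudspeaker signals into the observation vector.

```python
# Sketch of a single-frequency multi-channel Wiener filter from sample
# statistics: w = Ryy^{-1} r_yd, where y stacks the microphone (or, for the
# extended model, microphone + loudspeaker) signals and d is the desired
# speech reference. Synthetic real-valued data stand in for STFT bins,
# where complex data and conjugates would be used.
import numpy as np

rng = np.random.default_rng(1)
M, N = 4, 5000                      # channels, snapshots (one frequency bin)
d = rng.standard_normal(N)          # desired speech component (illustrative)
steer = rng.standard_normal(M)      # mixing vector for the desired source
noise = 0.5 * rng.standard_normal((M, N))
y = np.outer(steer, d) + noise      # observed multichannel signal

Ryy = (y @ y.conj().T) / N          # spatial correlation matrix
r_yd = (y @ d.conj()) / N           # cross-correlation with the reference
w = np.linalg.solve(Ryy, r_yd)      # MWF weights

d_hat = w @ y                       # filtered estimate of the desired speech
print(f"residual MSE: {np.mean((d_hat - d) ** 2):.4f}")
```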
The application of machine learning in wireless communications has been extensively explored, with deep unfolding emerging as a powerful model-based technique. Deep unfolding enhances interpretability by transforming complex iterative algorithms into structured layers of deep neural networks (DNNs). This approach seamlessly integrates domain knowledge with deep learning (DL), leveraging the strengths of both methods to simplify complex signal processing tasks in communication systems. To provide a solid foundation, we first present a brief overview of DL and deep unfolding. We then explore the applications of deep unfolding in key areas, including signal detection, channel estimation, beamforming design, decoding for error-correcting codes, sensing and communication, power allocation, and security. Each section focuses on a specific task, highlighting its significance in emerging 6G technologies and reviewing recent advancements in deep unfolding-based solutions. Finally, we discuss the challenges associated with developing deep unfolding techniques and propose potential improvements to enhance their applicability across diverse wireless communication scenarios.
Distributed edge learning (DL) is considered a cornerstone of intelligence enablers, since it allows for collaborative training without requiring local clients to share raw data with other parties, thereby preserving privacy and security. Integrating DL into 6G networks requires a coexistence design with existing services such as high-bandwidth (HB) traffic like eMBB. Current designs in the literature mainly adopt a communication-round-wise perspective that assumes a rigid resource allocation throughout each communication round (CR). However, rigid resource allocation within a CR is a highly inefficient and inaccurate representation of the system's realistic behavior, especially when the CR duration far exceeds the channel coherence time due to large model size or limited resources. This is due to the heterogeneous nature of the system, as clients inherently may need to access the network at different time instants. This work zooms into a single arbitrary CR and demonstrates the importance of a time-dependent design for sharing the resource pool with HB traffic. We first formulate a timeslot-wise optimization problem to minimize the time consumed by DL within the CR, subject to a DL energy budget. Due to its intractability, a session-based optimization problem is formulated assuming a CR lasts less than a large-scale coherence time. Several scheduling properties of this multi-server joint communication-scheduling and resource-allocation framework are established, and an iterative algorithm is designed to solve the resulting non-convex problem with non-block-separable constraints. Simulation results confirm the importance of the efficient and accurate integration design proposed in this work.
Ensuring safety, in the sense of constraint satisfaction, for learning-based control is a critical challenge, especially in the model-free case. While safety filters address this challenge in the model-based setting by modifying unsafe control inputs, they typically rely on predictive models derived from physics or data. This reliance limits their applicability for advanced model-free learning control methods. To address this gap, we propose a new optimization-based control framework that determines safe control inputs directly from data. A key benefit of the framework is that it can be updated through arbitrary model-free learning algorithms to pursue optimal performance. As a key component, the concept of direct data-driven safety filters (3DSF) is first proposed. The framework employs a novel safety certificate, called the state-action control barrier function (SACBF). We present three different schemes to learn the SACBF. Furthermore, based on input-to-state safety analysis, we present the error-to-state safety analysis framework, which provides formal guarantees on safety and recursive feasibility even in the presence of learning inaccuracies. The proposed control framework bridges the gap between model-free learning-based control and constrained control, by decoupling performance optimization from safety enforcement. Simulations on vehicle control illustrate superior performance in constraint satisfaction and task achievement compared to model-based methods and reward shaping.
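To illustrate the filtering step (not the paper's learned SACBF), the sketch below assumes the learned safety condition is affine in the input, in which case the minimal modification of the policy's action has a closed form.

```python
# Sketch of the safety-filtering step: project a model-free policy's action
# onto the safe set defined by a hypothetical, already-learned certificate
# whose safety condition is affine in the input:  a(x)^T u >= b(x).
import numpy as np

def safety_filter(u_rl, a, b):
    """Minimally modify u_rl so that a @ u >= b (closed-form QP projection)."""
    slack = a @ u_rl - b
    if slack >= 0.0:                 # learned action is already safe
        return u_rl
    # Project onto the constraint boundary: smallest-norm correction.
    return u_rl - (slack / (a @ a)) * a

# Hypothetical example: 2-D input, safety condition u_1 + 0.5*u_2 >= 1.
u_rl = np.array([0.2, 0.1])          # action proposed by the RL policy
a, b = np.array([1.0, 0.5]), 1.0
u_safe = safety_filter(u_rl, a, b)
print(u_safe, bool(a @ u_safe >= b - 1e-12))   # [0.8 0.4] True
```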
Rydberg atomic quantum receivers (RAQRs) have emerged as a promising solution for evolving wireless receivers from the classical to the quantum domain. To further unleash their great potential in wireless communications, we propose a flexible architecture for Rydberg atomic quantum multiple-input multiple-output (RAQ-MIMO) receivers in the multi-user uplink. Then the corresponding signal model of the RAQ-MIMO system is constructed by paving the way from quantum physics to classical wireless communications. Explicitly, we outline the associated operating principles and transmission flow. We also validate the linearity of our model and its feasible region. Based on our model, we derive closed-form asymptotic formulas for the ergodic achievable rate (EAR) of both the maximum-ratio combining (MRC) and zero-forcing (ZF) receivers operating in uncorrelated fading channels (UFC) and correlated fading channels (CFC), as well as in the standard quantum limit (SQL) and photon shot limit (PSL) regimes, respectively. Furthermore, we unveil that the EAR scales logarithmically without bound with the product of the effective atom number $N_{\text{atom}}$ and the coherence time $T_2$ of the atomic ensemble in the SQL regime, but exhibits a non-monotonic trade-off between the collective atomic enhancement and optical-depth-dependent attenuation in the PSL regime. More particularly, the transmit power of users can be scaled down quadratically with $N_{\text{atom}} \tau$, $\tau \in \{ T_2, \frac{ {\cal C} (\Omega_{\ell}) }{A_p} \}$, while the EAR per user remains fixed, by increasing $N_{\text{atom}}$ and keeping the sensor number $M \propto N_{\text{atom}} \tau$ in the SQL regime or $M \propto \exp \big( \frac{N_{\text{atom}} {\bar \chi}}{A_p} \big)$ in the PSL regime....
Being born small carries significant health risks, including increased neonatal mortality and a higher likelihood of future cardiac diseases. Accurate estimation of gestational age is critical for monitoring fetal growth, but traditional methods, such as estimation based on the last menstrual period, can be difficult to obtain in some situations. While ultrasound-based approaches offer greater reliability, they rely on manual measurements that introduce variability. This study presents an interpretable deep learning-based method for automated gestational age estimation, leveraging a novel segmentation architecture and distance maps to overcome dataset limitations and the scarcity of segmentation masks. Our approach achieves performance comparable to state-of-the-art models while reducing complexity, making it particularly suitable for resource-constrained settings with limited annotated data. Furthermore, our results demonstrate that the use of distance maps is particularly well suited to estimating femur endpoints.
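The distance-map idea can be sketched as follows: a thin binary mask is converted into a dense Euclidean distance map suitable as a regression target, with a crude endpoint estimate from the mask geometry. The toy mask stands in for real ultrasound annotations.

```python
# Sketch of the distance-map idea: turn a sparse binary segmentation mask
# (e.g., an annotated femur) into a dense Euclidean distance map that a
# network can regress; endpoint estimates then come from the mask geometry.
import numpy as np
from scipy.ndimage import distance_transform_edt

mask = np.zeros((64, 64), dtype=bool)
mask[30:34, 10:54] = True                 # toy "femur" segment

# Distance from every pixel to the nearest foreground pixel: a smooth,
# dense target that is easier to learn than the thin mask itself.
dist_map = distance_transform_edt(~mask)

# Crude endpoint estimate: the extreme foreground pixels along the
# horizontal axis (a real pipeline would use the principal axis).
ys, xs = np.nonzero(mask)
left = (ys[np.argmin(xs)], xs.min())
right = (ys[np.argmax(xs)], xs.max())
print("endpoints:", left, right, "| femur length (px):", xs.max() - xs.min())
```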
We propose a novel symbolic control framework for enforcing temporal logic specifications in Euler-Lagrange systems that addresses the key limitations of traditional abstraction-based approaches. Unlike existing methods that require exact system models and provide guarantees only at discrete sampling instants, our approach relies only on bounds on system parameters and input constraints, and ensures correctness for the full continuous-time trajectory. The framework combines scalable abstraction of a simplified virtual system with a closed-form, model-free controller that guarantees trajectories satisfy the original specification while respecting input bounds and remaining robust to unknown but bounded disturbances. We provide feasibility conditions for the construction of confinement regions and analyze the trade-off between efficiency and conservatism. Case studies on pendulum dynamics, a two-link manipulator, and multi-agent systems, including hardware experiments, demonstrate that the proposed approach ensures both correctness and safety while significantly reducing computational time and memory requirements. These results highlight its scalability and practicality for real-world robotic systems where precise models are unavailable and continuous-time guarantees are essential.
The field of computer vision is undergoing a paradigm shift toward large-scale foundation model pre-training via self-supervised learning (SSL). Leveraging large volumes of unlabeled brain MRI data, such models can learn anatomical priors that improve few-shot performance in diverse neuroimaging tasks. However, most SSL frameworks are tailored to natural images, and their adaptation to capture multi-modal MRI information remains underexplored. This work proposes a modality-invariant representation learning setup and evaluates its effectiveness in stroke and epilepsy lesion segmentation, following large-scale pre-training. Experimental results suggest that despite successful cross-modality alignment, lesion segmentation primarily benefits from preserving fine-grained modality-specific features. Model checkpoints and code are made publicly available.
Pediatric liver tumors are among the most common solid tumors in children, and differentiating benign from malignant disease, together with pathological classification, is critical for clinical treatment. While pathological examination is the gold standard, invasive biopsy has notable limitations: the highly vascular pediatric liver and fragile tumor tissue raise complication risks such as bleeding, and young children with poor compliance require anesthesia for biopsy, increasing medical costs and psychological trauma. Although many efforts have been made to utilize AI in clinical settings, most researchers have overlooked its importance in pediatric liver tumors. To establish a non-invasive examination procedure, we developed a multi-stage deep learning (DL) framework for automated pediatric liver tumor diagnosis using multi-phase contrast-enhanced CT. Two cohorts, retrospective and prospective, were enrolled. We established a novel PKCP-MixUp data augmentation method to address data scarcity and class imbalance. We also trained a tumor detection model to extract ROIs, and then built a two-stage diagnosis pipeline with three backbones operating on ROI-masked images. Our tumor detection model achieved high performance (mAP=0.871), and the first-stage classification model between benign and malignant tumors reached excellent performance (AUC=0.989). The final diagnosis models also exhibited robustness, including benign subtype classification (AUC=0.915) and malignant subtype classification (AUC=0.979). We also conducted multi-level comparative analyses, such as ablation studies on data and training pipelines, as well as Shapley-value and CAM interpretability analyses. This framework fills the pediatric-specific DL diagnostic gap, provides actionable insights for CT phase selection and model design, and paves the way for precise, accessible pediatric liver tumor diagnosis.
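The abstract does not detail PKCP-MixUp, so the sketch below shows only the vanilla MixUp operation that such augmentation schemes typically extend; shapes and label counts are placeholders.

```python
# Vanilla MixUp sketch (PKCP-MixUp's specifics are not given in the
# abstract): convex combinations of image pairs and their labels, which
# also softens class imbalance by synthesizing in-between samples.
import torch

def mixup(images, labels_onehot, alpha=0.4):
    """Return mixed images/labels; `alpha` controls interpolation strength."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(images.size(0))
    mixed_x = lam * images + (1.0 - lam) * images[perm]
    mixed_y = lam * labels_onehot + (1.0 - lam) * labels_onehot[perm]
    return mixed_x, mixed_y

x = torch.randn(8, 3, 224, 224)               # a batch of CT ROI crops (dummy)
y = torch.eye(3)[torch.randint(0, 3, (8,))]   # one-hot subtype labels (dummy)
mx, my = mixup(x, y)
print(mx.shape, my.sum(dim=1))                # mixed labels stay on the simplex
```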
In smart cities, bandwidth-constrained Unmanned Aerial Vehicles (UAVs) often fail to relay mission-critical data in time, compromising real-time decision-making. This highlights the need for faster and more efficient transmission of only the most relevant information. To address this, we propose the DSC-UAV model, leveraging a context-adaptive Digital Semantic Communication (DSC) framework. This model redefines aerial data transmission through three core components: prompt-aware encoding, dynamic UAV-enabled relaying, and user-mobility-optimized reinforcement learning. Ground users transmit context-driven visual content. Images are encoded via a Vision Transformer combined with a prompt-text encoder to generate semantic features based on the desired context (generic or object-specific). These features are then quantized and transmitted over a UAV network that dynamically relays the data. Joint trajectory and resource allocation are optimized using a Truncated Quantile Critics (TQC)-aided reinforcement learning technique, which offers greater stability and precision than standard SAC and TD3 due to its resistance to overestimation bias. Simulations demonstrate significant performance improvements, with up to a 22% gain in semantic-structural similarity and a 14% reduction in Age of Information (AoI) compared to digital and prior UAV-semantic communication baselines. By integrating mobility control with context-driven visual abstraction, DSC-UAV advances resilient, information-centric surveillance for next-generation UAV networks in bandwidth-constrained environments.
The emergence of large-scale automatic speech recognition (ASR) models such as Whisper has greatly expanded their adoption across diverse real-world applications. Ensuring robustness against even minor input perturbations is therefore critical for maintaining reliable performance in real-time environments. While prior work has mainly examined accuracy degradation under adversarial attacks, robustness with respect to efficiency remains largely unexplored. This narrow focus provides only a partial understanding of ASR model vulnerabilities. To address this gap, we conduct a comprehensive study of ASR robustness under multiple attack scenarios. We introduce MORE, a multi-objective repetitive doubling encouragement attack, which jointly degrades recognition accuracy and inference efficiency through a hierarchical staged repulsion-anchoring mechanism. Specifically, we reformulate multi-objective adversarial optimization into a hierarchical framework that sequentially achieves the dual objectives. To further amplify effectiveness, we propose a novel repetitive encouragement doubling objective (REDO) that induces duplicative text generation by maintaining accuracy degradation and periodically doubling the predicted sequence length. Overall, MORE compels ASR models to produce incorrect transcriptions at a substantially higher computational cost, triggered by a single adversarial input. Experiments show that MORE consistently yields significantly longer transcriptions while maintaining high word error rates compared to existing baselines, underscoring its effectiveness in multi-objective adversarial attacks.
Orthogonal frequency division multiplexing (OFDM) based low Earth orbit (LEO) satellite communication systems suffer from severe Doppler shifts, while the Doppler-resilient affine frequency-division multiplexing (AFDM) transmission suffers from considerably higher processing complexity in data detection. In this paper, we explore the channel estimation gain of an affine frequency (AF) domain pilot to enhance OFDM transmission under high mobility. Specifically, we propose a novel AF domain pilot embedding scheme for satellite-ground downlink OFDM systems to capture the channel characteristics. By exploiting the autoregressive (AR) property of adjacent channels, a long short-term memory (LSTM) based predictor is designed to replace the conventional interpolation operation in OFDM channel estimation. Simulation results show that the proposed transmission scheme significantly outperforms the conventional OFDM scheme in terms of bit error rate (BER) under high-Doppler scenarios, thus paving a new way for the design of next-generation non-terrestrial network (NTN) communication systems.
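A minimal sketch of the predictor component follows, assuming channel taps are represented by stacked real and imaginary parts; sizes are illustrative rather than the paper's.

```python
# Sketch of an LSTM-based channel predictor: given channel estimates at
# past pilot positions, predict the next slot's channel instead of
# interpolating. Shapes and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class ChannelPredictor(nn.Module):
    def __init__(self, n_taps=16, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2 * n_taps,   # real + imag parts
                            hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2 * n_taps)

    def forward(self, past):                 # past: (batch, time, 2*n_taps)
        out, _ = self.lstm(past)
        return self.head(out[:, -1])         # predicted next-slot channel

model = ChannelPredictor()
past_estimates = torch.randn(32, 8, 32)      # 8 past pilot-aided estimates
next_channel = model(past_estimates)         # (32, 32): 16 complex taps
print(next_channel.shape)
```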
In conventional distributed optimization, each agent performs a single local update between two communication rounds with its neighbors to synchronize solutions. Inspired by the success of using multiple local updates in federated learning, incorporating local updates into distributed optimization has recently attracted increasing attention. However, unlike federated learning, where multiple local updates can accelerate learning by improving gradient estimation under mini-batch settings, it remains unclear whether similar benefits hold in distributed optimization when gradients are exact. Moreover, existing theoretical results typically require reducing the step size when multiple local updates are employed, which can entirely offset any potential benefit of these additional local updates and obscure their true impact on convergence. In this paper, we focus on the classic DIGing algorithm and leverage the tight performance bounds provided by Performance Estimation Problems (PEP) to show that incorporating local updates can indeed accelerate distributed optimization. To the best of our knowledge, this is the first rigorous demonstration of such acceleration for a broad class of objective functions. Our analysis further reveals that, under an appropriate step size, performing only two local updates is sufficient to achieve the maximal possible improvement, and that additional local updates provide no further gains. Because more updates increase computational cost, these findings offer practical guidance for efficient implementation. Extensive experiments on both synthetic and real-world datasets corroborate the theoretical findings.
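For concreteness, the following sketch runs DIGing with gradient tracking on a consensus least-squares problem and inserts extra local gradient steps between communications. This is one plausible instantiation of local updates, not necessarily the variant analyzed via PEP in the paper.

```python
# Sketch of DIGing (gradient tracking) on consensus least squares, with
# tau - 1 extra plain local gradient steps between communication rounds.
# Note: plain local steps slightly bias the fixed point; the paper's PEP
# analysis quantifies when and how much local updates actually help.
import numpy as np

rng = np.random.default_rng(2)
n, d = 4, 3                                     # agents, dimension
A = rng.standard_normal((n, 5, d))
b = rng.standard_normal((n, 5))
grad = lambda i, x: A[i].T @ (A[i] @ x - b[i])  # agent i's exact gradient

W = np.full((n, n), 1.0 / n)                    # doubly stochastic mixing
alpha, tau, iters = 0.05, 2, 300                # tau = updates per round

x = np.zeros((n, d))
g = np.array([grad(i, x[i]) for i in range(n)])
y = g.copy()                                    # gradient trackers
for _ in range(iters):
    x = W @ x - alpha * y                       # communicate + descend
    for _ in range(tau - 1):                    # extra local-only updates
        x = x - alpha * np.array([grad(i, x[i]) for i in range(n)])
    g_new = np.array([grad(i, x[i]) for i in range(n)])
    y = W @ y + g_new - g                       # track the average gradient
    g = g_new

x_star = np.linalg.lstsq(A.reshape(-1, d), b.reshape(-1), rcond=None)[0]
print("distance to optimum per agent:",
      np.linalg.norm(x - x_star, axis=1).round(3))
```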
Selecting a few available actuators to ensure the controllability of a linear system is a fundamental problem in control theory. Previous works either focus on optimal performance, simplifying the controllability issue, or make the system controllable under structural assumptions, such as in graphs or when the input matrix is a design parameter. We generalize these approaches to offer a precise characterization of the general minimal actuator selection problem, where a set of actuators is given, described by a fixed input matrix, and the goal is to choose the fewest actuators that make the system controllable. We show that this problem can be equivalently cast as an integer linear program and, if the actuation channels are sufficiently independent, as a set multicover problem under multiplicity constraints. The latter equivalence always holds if the state matrix has all distinct eigenvalues, in which case it simplifies to the set cover problem. These characterizations hold even when a robust selection that tolerates a given number of faulty actuators is desired. Our established connection allows a designer to use algorithms from the rich literature on the set multicover problem to select the smallest subset of actuators, including exact solutions that do not require brute-force search.
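In the distinct-eigenvalue case, the set-cover view follows from the PBH test: actuator $j$ covers eigenvalue $i$ when the corresponding left eigenvector is not orthogonal to column $b_j$. Below is a greedy sketch under that assumption; an ILP would give the exact optimum, as in the paper.

```python
# Sketch of the distinct-eigenvalue case: by the PBH test, actuator j
# "covers" eigenvalue lambda_i when left eigenvector w_i satisfies
# w_i^H b_j != 0, so minimal actuator selection reduces to set cover.
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(3)
n, m = 5, 6
A = np.diag(rng.standard_normal(n))            # distinct eigenvalues (a.s.)
B = rng.standard_normal((n, m))
B[:, 0] = 0.0                                  # a useless actuator

eigvals, W_left = eig(A, left=True, right=False)
covers = np.abs(W_left.conj().T @ B) > 1e-9    # covers[i, j]: b_j sees mode i

selected, uncovered = [], set(range(n))
while uncovered:                               # greedy set cover
    j = max(range(m), key=lambda j: sum(covers[i, j] for i in uncovered))
    selected.append(j)
    uncovered -= {i for i in uncovered if covers[i, j]}
print("selected actuators:", selected)
```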
Particle-based communication using diffusion and advection has recently emerged as an alternative signaling paradigm. While most existing studies assume constant flow conditions, real macro-scale environments such as atmospheric winds exhibit time-varying behavior. In this work, airborne particle communication under time-varying advection is modeled as a linear time-varying (LTV) channel, and a closed-form, time-dependent channel impulse response is derived using the method of moving frames. Based on this formulation, the channel is characterized through its power delay profile, leading to the definition of channel dispersion time as a physically meaningful measure of channel memory and a guideline for symbol duration selection. System-level simulations under directed, time-varying wind conditions show that waveform design is critical for performance, enabling multi-symbol modulation using a single particle type when dispersion is sufficiently controlled. The results demonstrate that time-varying diffusion-advection channels can be systematically modeled and engineered using communication-theoretic tools, providing a realistic foundation for particle-based communication in complex flow environments.
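The moving-frame substitution is easy to illustrate in 1-D, where the Gaussian impulse response is recentered at the integrated displacement $\int_0^t v(s)\,ds$ rather than $vt$. The paper's airborne closed form is analogous but not reproduced here; all parameters below are illustrative.

```python
# Sketch of the moving-frame idea in 1-D diffusion-advection: with a
# time-varying drift v(t), the Gaussian impulse response is recentered at
# the integrated displacement int_0^t v(s) ds. Delay-spread statistics
# then give a "channel dispersion time" proxy.
import numpy as np

D, dist = 0.5, 10.0                      # diffusion coeff., TX-RX distance (m)
v = lambda t: 2.0 + 1.0 * np.sin(0.5 * t)    # time-varying wind speed (m/s)

t = np.linspace(1e-3, 20.0, 4000)
dt = t[1] - t[0]
drift = np.cumsum(v(t)) * dt             # int_0^t v(s) ds (moving frame)
h = np.exp(-(dist - drift) ** 2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)

p = h / np.trapz(h, t)                   # normalized power-delay profile
mean_delay = np.trapz(t * p, t)
rms_spread = np.sqrt(np.trapz((t - mean_delay) ** 2 * p, t))
print(f"mean delay {mean_delay:.2f} s, dispersion ~ {rms_spread:.2f} s")
```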
Chain-of-Thought (CoT) reasoning has proven effective in enhancing large language models by encouraging step-by-step intermediate reasoning, and recent advances have extended this paradigm to Multimodal Large Language Models (MLLMs). In the medical domain, where diagnostic decisions depend on nuanced visual cues and sequential reasoning, CoT aligns naturally with clinical thinking processes. However, current benchmarks for medical image understanding generally focus on the final answer while ignoring the reasoning path. Such an opaque process provides no reliable basis for judgment, making it difficult to assist doctors in diagnosis. To address this gap, we introduce M3CoTBench, a benchmark specifically designed to evaluate the correctness, efficiency, impact, and consistency of CoT reasoning in medical image understanding. M3CoTBench features 1) a diverse, multi-level-difficulty dataset covering 24 examination types, 2) 13 tasks of varying difficulty, 3) a suite of CoT-specific evaluation metrics (correctness, efficiency, impact, and consistency) tailored to clinical reasoning, and 4) a performance analysis of multiple MLLMs. M3CoTBench systematically evaluates CoT reasoning across diverse medical imaging tasks, revealing current limitations of MLLMs in generating reliable and clinically interpretable reasoning, and aims to foster the development of transparent, trustworthy, and diagnostically accurate AI systems for healthcare. Project page at this https URL.
This paper introduces the Safe Protective and Assistive Robot Kit (SPARK), a comprehensive benchmark designed to ensure safety in humanoid autonomy and teleoperation. Humanoid robots pose significant safety risks due to their physical capability to interact with complex environments. The physical structure of humanoid robots further adds complexity to the design of general safety solutions. To facilitate the safe deployment of complex robot systems, SPARK can be used as a toolbox that comes with state-of-the-art safe control algorithms in a modular and composable robot control framework. Users can easily configure safety criteria and sensitivity levels to optimize the balance between safety and performance. To accelerate humanoid safety research and development, SPARK provides simulation benchmarks that compare safety approaches in a variety of environments, tasks, and robot models. Furthermore, SPARK allows quick deployment of synthesized safe controllers on real robots. For hardware deployment, SPARK supports the Apple Vision Pro (AVP) or a motion-capture system as external sensors, while also offering interfaces for seamless integration with alternative hardware setups. This paper demonstrates SPARK's capability with both simulation experiments and case studies on a Unitree G1 humanoid robot. Leveraging these advantages of SPARK, users and researchers can significantly improve the safety of their humanoid systems while accelerating relevant research. The open-source code is available at: this https URL.
Large audio-language models (LALMs), built upon powerful Large Language Models (LLMs), have exhibited remarkable audio comprehension and reasoning capabilities. However, the training of LALMs demands a large corpus of audio-language pairs, which requires substantial costs in both data collection and training resources. In this paper, we propose \textbf{MATS}, an audio-language multimodal LLM designed to handle \textbf{M}ultiple \textbf{A}udio tasks using solely \textbf{T}ext-only \textbf{S}upervision. By leveraging pre-trained audio-language alignment models such as CLAP, we develop a text-only training strategy that projects the shared audio-language latent space into the LLM latent space, endowing the LLM with audio comprehension capabilities without relying on audio data during training. To further bridge the modality gap between audio and language embeddings within CLAP, we propose the \textbf{S}trongly-rel\textbf{a}ted \textbf{n}oisy \textbf{t}ext with \textbf{a}udio (\textbf{Santa}) mechanism. Santa maps audio embeddings into the CLAP language embedding space while preserving essential information from the audio input. Extensive experiments demonstrate that MATS, despite being trained exclusively on text data, achieves competitive performance compared to recent LALMs trained on large-scale audio-language pairs. The code is publicly available at \href{this https URL}{this https URL}.
Automating bone micro-milling using a robotic system presents challenges due to the uncertainties in both the external and internal features of bone tissue. For example, during mouse cranial window creation, a circular path with a radius of 2 to 4 mm needs to be milled on the mouse skull using a microdrill. The uneven surface and non-uniform thickness of the mouse skull make it difficult to fully automate this process, requiring the system to possess advanced perceptual and adaptive capabilities. In this study, we address this challenge by integrating a Microscopic Stereo Camera System (MSCS) into the robotic bone micro-milling system and proposing a novel online pre-measurement pipeline for the target surface. Starting from uncalibrated cameras, the pipeline enables automatic calibration and 3D surface fitting through convolutional neural network (CNN)-based keypoint detection. Combined with the existing feedback-based system, we develop the world's first autonomous robotic bone micro-milling system capable of perceiving and adapting, rapidly and in real time, to surface unevenness and non-uniform thickness, thereby enabling an end-to-end autonomous cranial window creation workflow without human assistance. Validation experiments on euthanized mice demonstrate that the improved system achieves a success rate of 85.7% and an average milling time of 2.1 minutes, showing not only significant performance improvements over the previous system but also exceptional accuracy, speed, and stability compared to human operators.
Backward stochastic differential equation (BSDE)-based deep learning methods provide an alternative to Physics-Informed Neural Networks (PINNs) for solving high-dimensional partial differential equations (PDEs), offering potential algorithmic advantages in settings such as stochastic optimal control, where the PDEs of interest are tied to an underlying dynamical system. However, standard BSDE-based solvers have empirically been shown to underperform relative to PINNs in the literature. In this paper, we identify the root cause of this performance gap as a discretization bias introduced by the standard Euler-Maruyama (EM) integration scheme applied to one-step self-consistency BSDE losses, which shifts the optimization landscape off target. We find that this bias cannot be satisfactorily addressed through finer step-sizes or multi-step self-consistency losses. To properly handle this issue, we propose a Stratonovich-based BSDE formulation, which we implement with stochastic Heun integration. We show that our proposed approach completely eliminates the bias issues faced by EM integration. Furthermore, our empirical results show that our Heun-based BSDE method consistently outperforms EM-based variants and achieves competitive results with PINNs across multiple high-dimensional benchmarks. Our findings highlight the critical role of integration schemes in BSDE-based PDE solvers, an algorithmic detail that has received little attention thus far in the literature.
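The integration-scheme difference is easy to see on a plain SDE: stochastic Heun averages drift and diffusion in a predictor-corrector step, converging to the Stratonovich solution (the sense in which the paper reformulates the BSDE), whereas Euler-Maruyama targets the Ito one. A minimal sketch:

```python
# Sketch contrasting Euler-Maruyama with stochastic Heun on a scalar SDE
# with state-dependent noise, driven by the same Brownian increments.
import numpy as np

rng = np.random.default_rng(4)
T, n = 1.0, 1000
dt = T / n
mu = lambda x: -x                     # drift
sig = lambda x: 0.4 * x               # state-dependent diffusion

x_em, x_heun = 1.0, 1.0
for _ in range(n):
    dW = rng.normal(0.0, np.sqrt(dt))
    # Euler-Maruyama step (Ito interpretation)
    x_em = x_em + mu(x_em) * dt + sig(x_em) * dW
    # Stochastic Heun step (Stratonovich interpretation, same noise)
    x_pred = x_heun + mu(x_heun) * dt + sig(x_heun) * dW
    x_heun = x_heun + 0.5 * (mu(x_heun) + mu(x_pred)) * dt \
                    + 0.5 * (sig(x_heun) + sig(x_pred)) * dW
print(f"EM: {x_em:.4f}  Heun (Stratonovich): {x_heun:.4f}")
```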
Monaural multi-speaker automatic speech recognition (ASR) remains challenging due to data scarcity and the intrinsic difficulty of recognizing and attributing words to individual speakers, particularly in overlapping speech. Recent advances have driven the shift from cascade systems to end-to-end (E2E) architectures, which reduce error propagation and better exploit the synergy between speech content and speaker identity. Despite rapid progress in E2E multi-speaker ASR, the field lacks a comprehensive review of recent developments. This survey provides a systematic taxonomy of E2E neural approaches for multi-speaker ASR, highlighting recent advances and comparative analysis. Specifically, we analyze: (1) architectural paradigms (SIMO vs.~SISO) for pre-segmented audio, including their distinct characteristics and trade-offs; (2) recent architectural and algorithmic improvements based on these two paradigms; (3) extensions to long-form speech, including segmentation strategies and speaker-consistent hypothesis stitching. Further, we (4) evaluate and compare methods across standard benchmarks. We conclude with a discussion of open challenges and future research directions towards building robust and scalable multi-speaker ASR.
Neural network-based radio receivers are expected to play a key role in future wireless systems, making reliable Out-Of-Distribution (OOD) detection essential. We propose a post-hoc, layerwise OOD framework based on channelwise feature aggregation that avoids classwise statistics, which is critical for multi-label soft-bit outputs with astronomically many classes. Receiver activations exhibit no discrete clusters but rather a smooth Signal-to-Noise-Ratio (SNR)-aligned manifold, consistent with classical receiver behavior and motivating manifold-aware OOD detection. We evaluate multiple OOD feature types, distance metrics, and methods across layers. A Gaussian Mahalanobis detector on mean activations is the strongest single detector, earlier layers outperform later ones, and SNR/classifier fusions offer only small, inconsistent AUROC gains. High-delay OOD is detected reliably, while high-speed OOD remains challenging.
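A minimal sketch of the strongest single detector described, with dummy activations standing in for real receiver features:

```python
# Sketch: fit a single Gaussian to channelwise-averaged in-distribution
# activations of one layer, then score test samples by Mahalanobis
# distance (no classwise statistics needed).
import numpy as np

rng = np.random.default_rng(5)
# (samples, channels, features): dummy activations from one receiver layer
acts_id = rng.standard_normal((2000, 32, 64))
acts_ood = 1.6 * rng.standard_normal((200, 32, 64))   # shifted distribution

def channel_means(a):                 # channelwise aggregation -> (n, 32)
    return a.mean(axis=-1)

z = channel_means(acts_id)
mu = z.mean(axis=0)
cov = np.cov(z, rowvar=False) + 1e-6 * np.eye(z.shape[1])  # ridge for stability
prec = np.linalg.inv(cov)

def maha(a):
    d = channel_means(a) - mu
    return np.einsum("ni,ij,nj->n", d, prec, d)   # squared Mahalanobis

print("ID score  (median):", np.median(maha(acts_id)).round(2))
print("OOD score (median):", np.median(maha(acts_ood)).round(2))
```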
Integrating Large AI Models (LAMs) into 6G mobile networks is a key enabler of the AI-Native Air Interface (AI-AI), where protocol intelligence must scale beyond handcrafted logic. This paper presents, to our knowledge, the first standards-compliant emulation of the Radio Resource Control (RRC) layer using a decoder-only LAM (LLAMA-class) fine-tuned with Low-Rank Adaptation (LoRA) on a multi-vendor corpus of real-world traces spanning both 5G and 4G systems. We treat RRC as a domain-specific language and construct a segmentation-safe, question-and-answer (QA) dataset that preserves Abstract Syntax Notation (ASN.1) structure through linearization prior to Byte Pair Encoding (BPE) tokenization. The proposed approach combines parameter-efficient adaptation with schema-bounded prompting to ensure syntactic and procedural fidelity. Evaluation introduces a standards-aware triad -- ASN.1 conformance, field-level coverage analysis, and uplink-to-downlink state-machine checks -- alongside semantic similarity and latency profiling across 120 configurations. On 30k 5G request-response pairs plus an additional 4.8k QA turns from 4G sessions, our 8B model achieves a median cosine similarity of 0.97, a 61% relative gain over a zero-shot baseline, while sustaining high conformance rates. These results demonstrate that LAMs, when augmented with protocol-aware reasoning, can directly orchestrate control-plane procedures, laying the foundation for the future Artificial Intelligence (AI)-native Radio Access Network (RAN).
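A plausible skeleton of the parameter-efficient setup, using Hugging Face peft; the checkpoint name, rank, and target modules are placeholder assumptions, not the paper's exact configuration.

```python
# Sketch: LoRA adapters on a decoder-only LLaMA-class model via `peft`.
# Checkpoint, rank, and target modules are hypothetical choices.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-3.1-8B"            # hypothetical checkpoint
tok = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()           # tiny fraction of 8B weights

# Training would then run on linearized ASN.1 QA pairs, e.g.:
# {"prompt": "<RRC request ...>", "response": "<RRC response ...>"}
```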
Nowadays, speech emotion recognition (SER) plays a vital role in the field of human-computer interaction (HCI) and the evolution of artificial intelligence (AI). Our proposed DCRF-BiLSTM model is used to recognize seven emotions: neutral, happy, sad, angry, fear, disgust, and surprise. It is trained on five datasets: RAVDESS (R), TESS (T), SAVEE (S), EmoDB (E), and CREMA-D (C). The model achieves high accuracy on individual datasets, including 97.83% on RAVDESS, 97.02% on SAVEE, 95.10% on CREMA-D, and a perfect 100% on both TESS and EmoDB. For the combined (R+T+S) datasets, it achieves 98.82% accuracy, outperforming previously reported results. To our knowledge, no existing study has evaluated a single SER model across all five benchmark datasets (i.e., R+T+S+C+E) simultaneously. In our work, we introduce this comprehensive combination and achieve a remarkable overall accuracy of 93.76%. These results confirm the robustness and generalizability of our DCRF-BiLSTM framework across diverse datasets.
Robotic cutting is a challenging contact-rich manipulation task where the robot must simultaneously negotiate unknown object mechanics, large contact forces, and precise motion requirements. We introduce a new active virtual-model control scheme that enables knife-rocking motion for robot manipulators, without pre-planned trajectories or precise information about the environment. Motion is generated and controlled through switching virtual coupling with virtual mechanisms, given by virtual springs, dampers, and masses arranged in a suitable way. Through analysis and experiments, we demonstrate that the controlled robot behavior settles into a periodic motion. Experiments with a Franka manipulator demonstrate robust cuts on five different vegetables, with sub-millimeter accuracy for slice thicknesses from 1 mm to 6 mm at nearly one cut per second. The same controller withstands changes in knife shape and cutting-board height, and transfers to a different humanoid manipulator, demonstrating robustness and platform independence.
Successful autonomous planetary exploration hinges on real-time, high-fidelity environmental perception. However, standard deep learning models usually demand far more memory and computation than space-qualified, radiation-hardened onboard hardware can provide. This creates a fundamental design challenge: deploying sophisticated detection architectures without saturating the rigid power and memory envelopes of the computation hardware of planetary exploration platforms. We propose the Adaptive Quantized Planetary Crater Detection System to resolve this bottleneck. Our framework integrates a Quantized Neural Network, refined through Quantization-Aware Training, with an Adaptive Multi-Sensor Fusion (AMF) module. By forcing weights into low-precision integer arithmetic, we effectively strip away the floating-point overhead that typically bottlenecks onboard processors and system memory. This yields a leaner model footprint and significantly faster processing while detection fidelity remains high. Such efficiency enables the AMF module to merge high-bandwidth optical imagery streams with digital elevation models using an adaptive weighting mechanism to re-balance sensor priority under variable conditions such as deep shadows or high albedo. Integrated multi-scale detection heads then resolve craters across a wide range of diameters, providing a computationally efficient and precise solution for real-time crater detection and localization and for hazard avoidance. This paper establishes the architectural design and theoretical justification of the system. While our methodology is grounded in principles of hybrid computer vision and planetary science, we present this as a blueprint for future empirical validation and hardware benchmarking on integer-arithmetic units. This system provides a capability vital for the next generation of autonomous landing, navigation, and deep-space exploration.
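The QAT ingredient can be sketched with a fake-quantization function using a straight-through estimator; bit width and tensor shapes are illustrative.

```python
# Sketch of quantization-aware training's core trick: fake-quantize weights
# in the forward pass (int8 grid) while a straight-through estimator keeps
# gradients flowing, so the model learns to tolerate integer arithmetic.
import torch

def fake_quant(w, n_bits=8):
    qmax = 2 ** (n_bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8) / qmax
    q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax) * scale
    return w + (q - w).detach()       # STE: forward = q, backward = identity

w = torch.randn(64, 64, requires_grad=True)
fake_quant(w).sum().backward()
print("max quant error:", (fake_quant(w) - w).abs().max().item())
print("grad is identity-like:", torch.allclose(w.grad, torch.ones_like(w)))
```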
The ethical and legal imperative to share research data without causing harm requires careful attention to privacy risks. While mounting evidence demonstrates that data sharing benefits science, legitimate concerns persist regarding the potential leakage of personal information that could lead to reidentification and subsequent harm. We reviewed metadata accompanying neuroimaging datasets from heterogeneous studies openly available on OpenNeuro, involving participants across the lifespan, from children to older adults, with and without clinical diagnoses, and including associated clinical score data. Using metaprivBIDS (this https URL), a software application for BIDS-compliant TSV/JSON files that computes and reports different privacy metrics (k-anonymity, k-global, l-diversity, SUDA, PIF), we found that privacy is generally well maintained, with serious vulnerabilities being rare. Nonetheless, issues were identified in nearly all datasets and warrant mitigation. Notably, clinical score data (e.g., neuropsychological results) posed minimal reidentification risk, whereas demographic variables (age, sex assigned at birth, sexual orientation, race, income, and geolocation) represented the principal privacy vulnerabilities. We outline practical measures to address these risks, enabling safer data-sharing practices.
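The simplest of the listed metrics, k-anonymity, can be sketched in a few lines over demographic quasi-identifiers; the toy table stands in for a BIDS participants.tsv, and metaprivBIDS computes considerably more.

```python
# Sketch of k-anonymity over demographic quasi-identifiers: k is the size
# of the smallest group of participants sharing identical quasi-identifier
# values; small k flags reidentification risk.
import pandas as pd

df = pd.DataFrame({                      # toy stand-in for participants.tsv
    "age":    [24, 24, 67, 67, 67, 31],
    "sex":    ["F", "F", "M", "M", "M", "F"],
    "income": ["low", "low", "high", "high", "mid", "mid"],
})
quasi_ids = ["age", "sex", "income"]

group_sizes = df.groupby(quasi_ids).size()
k = int(group_sizes.min())
print(f"k-anonymity: {k}")               # k = 1 -> someone is unique
print(group_sizes[group_sizes == k])     # the at-risk combinations
```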
From metronomes to celestial bodies, mechanics underpins how the world evolves in time and space. With this in mind, a number of recent neural network models leverage inductive biases from classical mechanics to encourage model interpretability and ensure forecasted states are physical. However, in general, these models are designed to capture the dynamics of a single system with fixed physical parameters, from state-space measurements of a known configuration space. In this paper we introduce the Symplectic Phase Space GAN (SPS-GAN), which can capture the dynamics of multiple systems and generalize to unseen physical parameters. Moreover, SPS-GAN does not require prior knowledge of the system configuration space. In fact, SPS-GAN can discover the configuration space structure of the system from arbitrary measurement types (e.g., state-space measurements, video frames). To achieve physically plausible generation, we introduce a novel architecture that embeds a Hamiltonian neural network recurrent module in a conditional GAN backbone. To discover the structure of the configuration space, we optimize the conditional time-series GAN objective with an additional physically motivated term that encourages a sparse representation of the configuration space. We demonstrate the utility of SPS-GAN for trajectory prediction, video generation, and symmetry discovery. Our approach captures multiple systems and achieves performance on par with supervised models designed for single systems.
Industrial control systems (ICS) form the operational backbone of critical infrastructure networks (CIN) such as power grids, water supply systems, and gas pipelines. As cyber threats to these systems escalate, regulatory agencies are imposing stricter compliance requirements to ensure system-wide security and reliability. A central challenge, however, is enabling regulators to verify the effectiveness of detection mechanisms without requiring utilities to disclose sensitive operational data. In this paper, we introduce zkSTAR, a cyberattack detection framework that leverages zk-SNARKs to reconcile these requirements and enable provable detection guarantees while preserving data confidentiality. Our approach builds on established residual-based statistical hypothesis testing methods applied to state-space detection models. Specifically, we design a two-pronged zk-SNARK architecture that enforces (i) temporal consistency of the state-space dynamics and (ii) statistical consistency of the detection tests, enabling regulators to verify correctness and prevent suppression of alarms without visibility into utility-level data. We formally analyze the soundness and zero-knowledge properties of our framework and validate its practical feasibility through computational experiments on real-world ICS datasets. As a result, our work demonstrates a scalable, privacy-preserving alternative for regulatory compliance for ICS driven critical infrastructure networks.
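The residual-based test that the zk-SNARK circuits attest to can be sketched as a chi-square detector; matrices and noise levels below are illustrative, and the zero-knowledge layer is omitted.

```python
# Sketch of residual-based statistical detection: form the measurement
# residual of a state-space model and compare its normalized energy
# against a chi-square threshold. (The zk-SNARK circuits that prove this
# computation without revealing the data are not shown.)
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(6)
C = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # output matrix
Sigma = 0.1 * np.eye(3)                              # residual covariance
threshold = chi2.ppf(0.99, df=3)                     # 1% false-alarm rate

def detect(y, x_hat):
    r = y - C @ x_hat                                # residual
    stat = r @ np.linalg.solve(Sigma, r)             # r^T Sigma^-1 r
    return stat, bool(stat > threshold)

x_hat = np.array([1.0, -0.5])                        # state estimate
y_ok = C @ x_hat + rng.multivariate_normal(np.zeros(3), Sigma)
y_atk = y_ok + np.array([0.0, 2.0, 0.0])             # injected sensor bias
print("nominal:", detect(y_ok, x_hat))
print("attack :", detect(y_atk, x_hat))
```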
The advancement of automatic speech recognition (ASR) has been largely enhanced by extensive datasets in high-resource languages, while languages such as Hungarian remain underrepresented due to limited spontaneous and conversational corpora. To address this gap, we introduce two new datasets -- BEA-Large and BEA-Dialogue -- constructed from the previously unprocessed portions of the Hungarian speech corpus named BEA. BEA-Large extends BEA-Base with 255 hours of spontaneous speech from 433 speakers, enriched with detailed segment-level metadata. BEA-Dialogue, comprising 85 hours of spontaneous conversations, is a Hungarian speech corpus featuring natural dialogues partitioned into speaker-independent subsets, supporting research in conversational ASR and speaker diarization. We establish reproducible baselines on these datasets using publicly available ASR models, with the fine-tuned Fast Conformer model achieving word error rates as low as 14.18% on spontaneous and 4.8% on repeated speech. Diarization experiments yield diarization error rates between 12.46% and 17.40%, providing reference points for future improvements. The results highlight the persistent difficulty of conversational ASR, particularly due to disfluencies, overlaps, and informal speech patterns. By releasing these datasets and baselines, we aim to advance Hungarian speech technology and offer a methodological framework for developing spontaneous and conversational benchmarks in other languages.
Multimodal brain decoding aims to reconstruct semantic information consistent with visual stimuli from brain activity signals such as fMRI, and then generate readable natural language descriptions. However, multimodal brain decoding still faces key challenges in cross-subject generalization and interpretability. We propose BrainROI, a model that achieves leading results in brain-captioning evaluation on the NSD dataset. Under the cross-subject setting, compared with recent state-of-the-art methods and representative baselines, metrics such as BLEU-4 and CIDEr show clear improvements. Firstly, to address the heterogeneity of functional brain topology across subjects, we design a new fMRI encoder. We use multi-atlas soft functional parcellations (soft-ROI) as a shared space. We extend the discrete ROI concatenation strategy in MINDLLM to a voxel-wise gated fusion mechanism (Voxel-gate). We also ensure consistent ROI mapping through global label alignment, which enhances cross-subject transferability. Secondly, to overcome the limitations of manual and black-box prompting methods in stability and transparency, we introduce an interpretable prompt optimization process. In a small-sample closed loop, we use a locally deployed Qwen model to iteratively generate and select human-readable prompts. This process improves the stability of prompt design and preserves an auditable optimization trajectory. Finally, we impose parameterized decoding constraints during inference to further improve the stability and quality of the generated descriptions.
As wireless communication applications evolve from traditional multipath environments to high-mobility scenarios like unmanned aerial vehicles, multiplexing techniques have advanced accordingly. Traditional single-carrier frequency-domain equalization (SC-FDE) and orthogonal frequency-division multiplexing (OFDM) have given way to emerging orthogonal time-frequency space (OTFS) and affine frequency-division multiplexing (AFDM). These approaches exploit specific channel structures to diagonalize or sparsify the effective channel, thereby enabling low-complexity detection. However, their reliance on these structures significantly limits their robustness in dynamic, real-world environments. To address these challenges, this paper studies a random multiplexing technique that is decoupled from the physical channels, enabling its application to arbitrary norm-bounded and spectrally convergent channel matrices. Random multiplexing achieves statistical fading-channel ergodicity for transmitted signals by constructing an equivalent input-isotropic channel matrix in the random transform domain. It guarantees the asymptotic replica MAP bit-error rate (BER) optimality of AMP-type detectors for linear systems with arbitrary norm-bounded, spectrally convergent channel matrices and signaling configurations, under the unique fixed point assumption. A low-complexity cross-domain memory AMP (CD-MAMP) detector is considered, leveraging the sparsity of the time-domain channel and the randomness of the equivalent channel. Optimal power allocations are derived to minimize the replica MAP BER and maximize the replica constrained capacity of random multiplexing systems. The optimal coding principle and replica constrained-capacity optimality of CD-MAMP detector are investigated for random multiplexing systems. Additionally, the versatility of random multiplexing in diverse wireless applications is explored.