This preprint presents a neural-network tuner for the finite-state model predictive control of an induction motor. The tuner adjusts the parameters of the controllers in the speed loop and in the stator-current loop. The results are assessed on a five-phase machine in an experimental setup. Training data for the neural network is obtained from step tests on this setup.
We propose Universal Speech Content Factorization (USCF), a simple and invertible linear method for extracting a low-rank speech representation in which speaker timbre is suppressed while phonetic content is preserved. USCF extends Speech Content Factorization, a closed-set voice conversion (VC) method, to an open-set setting by learning a universal speech-to-content mapping via least-squares optimization and deriving speaker-specific transformations from only a few seconds of target speech. We show through embedding analysis that USCF effectively removes speaker-dependent variation. As a zero-shot VC system, USCF achieves competitive intelligibility, naturalness, and speaker similarity compared to methods that require substantially more target-speaker data or additional neural training. Finally, we demonstrate that USCF features, as a training-efficient, timbre-disentangled speech representation, can serve as the acoustic representation for training timbre-prompted text-to-speech models. Speech samples and code are publicly available.
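The core of such a universal least-squares mapping can be sketched in a few lines; the feature and content dimensions, the random data, and the exact linear relation below are toy assumptions for illustration, not USCF's actual representation spaces:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: rows of X are frame-level speech features pooled across
# many speakers; rows of C are the corresponding content targets.
# Dimensions and the exact linear relation are hypothetical.
d_feat, d_content, n_frames = 16, 4, 500
X = rng.normal(size=(n_frames, d_feat))
W_true = rng.normal(size=(d_feat, d_content))
C = X @ W_true

# Universal speech-to-content mapping via least squares:
#   W = argmin_W ||X W - C||_F^2
W, *_ = np.linalg.lstsq(X, C, rcond=None)

content = X @ W  # low-rank content features for (possibly unseen) speech frames
```

In the actual method, such a mapping is learned once on multi-speaker data and speaker-specific transformations are then derived from a few seconds of target speech; this toy shows only the least-squares fitting step.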
We present a policy-aware, cross-layer methodology for edge-side auditing of service tiering and quota-based throttling in Starlink. Using a multi-week plan-hopping campaign (232.8 h) on a UK residential terminal, we align 1 Hz terminal telemetry with host-side probes to obtain portal-labeled traces spanning priority (pre-quota), post-quota throttling, stay-active operation, and residential service. Using portal status only as ground truth (independent of throughput), we show these policy regimes manifest as distinct signatures in goodput, PoP RTT, and an internal-to-user ratio $R=C_{\mathrm{int}}/T_{\mathrm{user}}$. A lightweight rule on windowed medians separates high-speed from low-rate operation without operator visibility.
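A minimal sketch of the kind of windowed-median rule described above, assuming a single goodput feature with a hypothetical window length and threshold (the paper's actual rule also draws on PoP RTT and the internal-to-user ratio $R$):

```python
import numpy as np

def classify_windows(goodput_mbps, window=60, threshold_mbps=5.0):
    """Label each non-overlapping window of per-second goodput samples as
    'high-speed' or 'low-rate' by thresholding the windowed median.
    Window length and threshold are illustrative placeholders."""
    x = np.asarray(goodput_mbps, dtype=float)
    labels = []
    for i in range(len(x) // window):
        med = np.median(x[i * window:(i + 1) * window])
        labels.append("high-speed" if med >= threshold_mbps else "low-rate")
    return labels
```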
Adversarial perturbations exploit vulnerabilities in automatic speech recognition (ASR) systems while preserving human-perceived linguistic content. Neural audio codecs impose a discrete bottleneck that can suppress fine-grained signal variations associated with adversarial noise. We examine how the granularity of this bottleneck, controlled by residual vector quantization (RVQ) depth, shapes adversarial robustness. We observe a non-monotonic trade-off under gradient-based attacks: shallow quantization suppresses adversarial perturbations but degrades speech content, while deeper quantization preserves both content and perturbations. Intermediate depths balance these effects and minimize transcription error. We further show that adversarially induced changes in discrete codebook tokens strongly correlate with transcription error. These gains persist under adaptive attacks, where neural codec configurations outperform traditional compression defenses.
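The RVQ bottleneck whose depth is varied can be illustrated with a toy encoder-decoder over random, untrained codebooks; this shows only the depth mechanism, not the paper's neural codec:

```python
import numpy as np

def rvq_encode_decode(x, codebooks):
    """Toy residual vector quantization: each stage quantizes the residual
    left by the previous stage, so depth = len(codebooks) controls the
    granularity of the discrete bottleneck. Codebooks here are untrained
    stand-ins, not a real neural codec."""
    residual = np.asarray(x, dtype=float)
    recon = np.zeros_like(residual)
    tokens = []
    for cb in codebooks:  # cb: (codebook_size, dim)
        idx = int(np.argmin(np.linalg.norm(residual[None, :] - cb, axis=1)))
        tokens.append(idx)
        recon = recon + cb[idx]
        residual = residual - cb[idx]
    return tokens, recon
```

Adversarial robustness analyses of the kind above then compare how attack perturbations survive this encode-decode round trip at different depths.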
Satellite-derived fire observations are the primary input for learning-based wildfire spread prediction, yet they are inherently incomplete due to cloud cover, smoke obscuration, and sensor artifacts. This partial observability introduces a domain gap between the clean data used to train forecasting models and the degraded inputs encountered during deployment, often leading to unreliable predictions. To address this challenge, we formulate wildfire forecasting under partial observability using a two-stage probabilistic framework that decouples observation recovery from spatiotemporal prediction. Stage-I reconstructs plausible fire maps from corrupted observations via conditional inpainting, while Stage-II models wildfire dynamics on the recovered sequences using a spatiotemporal forecasting network. We consider four network architectures for the reconstruction module: a Residual U-Net (MaskUNet), a Conditional VAE (MaskCVAE), a cross-attention Vision Transformer (MaskViT), and a discrete diffusion model (MaskD3PM), spanning CNN-based, latent-variable, attention-based, and diffusion-based approaches. We evaluate the performance of the two-stage approach on the WildfireSpreadTS (WSTS) dataset under various settings, including pixel-wise and block-wise masking, eight corruption levels (10%-80%), four fire scenarios, and leave-one-year-out cross-validation. Results show that all learning-based recovery models substantially outperform non-learning baselines, with MaskCVAE and MaskUNet achieving the strongest overall performance. Importantly, inserting the reconstruction stage before forecasting significantly mitigates the domain gap, restoring next-day prediction accuracy to near-clean-input levels even under severe information loss.
Positron emission tomography (PET) scans expose patients to radiation, which can be mitigated by reducing the dose, albeit at the cost of diminished quality. This makes low-dose (LD) PET recovery an active research area. Previous studies have focused on standard-dose (SD) PET recovery from LD PET scans and/or multi-modal scans, e.g., PET/CT or PET/MRI, using deep learning. While these studies incorporate multi-modal information through conditioning in a single-task model, such approaches may limit the capacity to extract modality-specific features, potentially leading to early feature dilution. Although recent studies have begun incorporating pathology-rich data, challenges remain in effectively leveraging multi-modality inputs for reconstructing diverse features, particularly in heterogeneous patient populations. To address these limitations, we introduce a multi-modality multi-task diffusion model (M2Diff) that processes MRI and LD PET scans separately to learn modality-specific features and fuse them via hierarchical feature fusion to reconstruct SD PET. This design enables effective integration of complementary structural and functional information, leading to improved reconstruction fidelity. We validated the effectiveness of our model on both healthy and Alzheimer's disease brain datasets, where M2Diff achieves superior qualitative and quantitative performance.
Change detection (CD) has extensive applications and is a crucial method for identifying and localizing target changes. In recent years, various CD methods represented by convolutional neural networks (CNNs) and transformers have achieved significant success in effectively detecting difference areas in bi-temporal remote sensing images. However, CNNs still exhibit limitations in local feature extraction when confronted with pseudo changes caused by different object types across global scales. Although transformers can effectively detect true change regions due to their long-range dependencies, the shadows cast by buildings under varying lighting conditions can introduce localized noise in these areas. To address these challenges, we propose the dynamically focused progressive fusion network (DFPF-Net) to simultaneously tackle global and local noise influences. On one hand, we utilize a pyramid vision transformer (PVT) as a weight-shared siamese network to implement change detection, efficiently fusing multi-level features extracted from the pyramid structure through a residual-based progressive enhanced fusion module (PEFM). On the other hand, we propose the dynamic change focus module (DCFM), which employs attention mechanisms and edge detection algorithms to mitigate noise interference across varying ranges. Extensive experiments on four datasets demonstrate that DFPF-Net outperforms mainstream CD methods.
The accelerating growth of computational demand in modern data centers has further heightened the need for power infrastructures that are highly reliable, environmentally sustainable, and capable of supporting grid stability. Small Modular Reactors (SMRs), as a clean source of energy, are particularly attractive for next-generation hyperscale data centers with significant electrical and cooling demands. This paper presents a comprehensive dynamic modeling and stability analysis of a grid-connected Integrated Energy System (IES) designed for data center applications. The proposed IES integrates an SMR and a battery energy storage system to jointly supply electricity for computational and cooling loads while providing stability support to the main grid. A coupled computational-thermal load model is developed to capture the real-time power demand of the data center, incorporating CPU utilization, cooling efficiency, and ambient temperature effects. The integrated SMR-powered data center model is implemented in PSSE and tested on the IEEE 118-bus system under various fault scenarios. Simulation results demonstrate that the IES substantially enhances voltage and frequency stability compared to a conventionally grid-connected data center, minimizing disturbance-induced deviations and improving post-fault recovery.
We present MetaSpectra+, a compact multifunctional camera that supports two operating modes: (1) snapshot HDR + hyperspectral or (2) snapshot polarization + hyperspectral imaging. It utilizes a novel metasurface-refractive assembly that splits the incident beam into multiple channels and independently controls each channel's dispersion, exposure, and polarization. Unlike prior multifunctional metasurface imagers restricted to narrow (10-100 nm) bands, MetaSpectra+ operates over nearly the entire visible spectrum (250 nm). Relative to snapshot hyperspectral imagers, it achieves the shortest total track length and the highest reconstruction accuracy on benchmark datasets. The demonstrated prototype reconstructs high-quality hyperspectral datacubes and either an HDR image or two orthogonal polarization channels from a single snapshot.
Recent advances in zero-shot voice conversion have exhibited potential in emotion control, yet performance is suboptimal or inconsistent due to limited expressive capacity. We propose Emotion-Aware Prefix for explicit emotion control in a two-stage voice conversion backbone. We significantly improve emotion conversion performance, doubling the baseline Emotion Conversion Accuracy (ECA) from 42.40% to 85.50% while maintaining linguistic integrity and speech quality, without compromising speaker identity. Our ablation study suggests that joint control of both sequence modulation and acoustic realization is essential to synthesize distinct emotions. Furthermore, comparative analysis verifies the generalizability of the proposed method and provides insights into the role of acoustic decoupling in maintaining speaker identity.
Macroscopic traffic flow is stochastic, but the physics-informed deep learning methods currently used in transportation literature embed deterministic PDEs and produce point-valued outputs; the stochasticity of the governing dynamics plays no role in the learned representation. This work develops a framework in which the physics constraint itself is distributional and directly derived from stochastic traffic-flow dynamics. Starting from an Itô-type Lighthill-Whitham-Richards model with Brownian forcing, we derive a one-point forward equation for the marginal traffic density at each spatial location. The spatial coupling induced by the conservation law appears as an explicit conditional drift term, which makes the closure requirement transparent. Based on this formulation, we derive an equivalent deterministic Probability Flow ODE that is pointwise evaluable and differentiable once a closure is specified. Incorporating this as a physics constraint, we then propose a score network with an advection-closure module, trainable by denoising score matching together with a Fokker-Planck residual loss. The resulting model targets a data-conditioned density distribution, from which point estimates, credible intervals, and congestion-risk measures can be computed. The framework provides a basis for distributional traffic-state estimation and for stochastic fundamental-diagram analysis in a physics-informed generative setting.
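For a generic Itô SDE, the probability flow ODE sharing the same marginals $p_t$ takes the standard form below; this is the generic construction, not the paper's specific advection-closure drift:

```latex
\mathrm{d}X_t = f(X_t,t)\,\mathrm{d}t + g(t)\,\mathrm{d}W_t
\;\;\Longrightarrow\;\;
\frac{\mathrm{d}x}{\mathrm{d}t} = f(x,t) - \frac{1}{2}\,g(t)^2\,\nabla_x \log p_t(x),
```

where the score $\nabla_x \log p_t$ is the quantity a score network approximates; substituting a learned score makes the ODE pointwise evaluable and differentiable, as the abstract states.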
A model predictive control (MPC) framework is developed for station-keeping in spacecraft formation flight along libration point orbits. At each control period, the MPC policy solves a multi-vehicle optimal control problem (MVOCP) that tracks a reference trajectory, while enforcing path constraints on the relative motion of the formation. The control policy makes use of a limited set of control nodes consistent with operational constraints that allow only a small number of maneuver opportunities per revolution. To promote recursive feasibility, path constraints are progressively tightened across the prediction horizon. An isoperimetric reformulation of the constraints is used to prevent inter-sample violations. The resulting MVOCP is a nonconvex program, which is solved via sequential convex programming. The proposed approach is evaluated in a high-fidelity ephemeris model under realistic uncertainties for a formation along a near-rectilinear halo orbit (NRHO), and subject to path constraints on interspacecraft separation and relative Sun phase angle. The results demonstrate maintenance of a spacecraft formation that satisfies the path constraints with cumulative propellant consumption comparable to that of existing methods.
Emotions play a central role in human communication, shaping trust, engagement, and social interaction. As artificial intelligence systems powered by large language models become increasingly integrated into everyday life, enabling them to reliably understand and generate human emotions remains an important challenge. While emotional expression is inherently multimodal, this thesis focuses on emotions conveyed through spoken language and investigates how acoustic and semantic information can be jointly modeled to advance both emotion understanding and emotion synthesis from speech. The first part of the thesis studies emotion-aware representation learning through pre-training. We propose strategies that incorporate acoustic and semantic supervision to learn representations that better capture affective cues in speech. A speech-driven supervised pre-training framework is also introduced to enable large-scale emotion-aware text modeling without requiring manually annotated text corpora. The second part addresses emotion recognition in conversational settings. Hierarchical architectures combining cross-modal attention and mixture-of-experts fusion are developed to integrate acoustic and semantic information across conversational turns. Finally, the thesis introduces a textless and non-parallel speech-to-speech framework for emotion style transfer that enables controllable emotional transformations while preserving speaker identity and linguistic content. The results demonstrate improved emotion transfer and show that style-transferred speech can be used for data augmentation to improve emotion recognition.
Achieving high perceptual quality without hallucination remains a challenge in generative speech enhancement (SE). A representative approach, PASE, is robust to hallucination but has limited perceptual quality under adverse conditions. We propose StuPASE, built upon PASE to achieve studio-level quality while retaining its low-hallucination property. First, we show that finetuning PASE with dry targets rather than targets containing simulated early reflections substantially improves dereverberation. Second, to address performance limitations under strong additive noise, we replace the GAN-based generative module in PASE with a flow-matching module, enabling studio-quality generation even under highly challenging conditions. Experiments demonstrate that StuPASE consistently produces perceptually high-quality speech while maintaining low hallucination, outperforming state-of-the-art SE methods. Audio demos are available at: this https URL.
This study investigates the effects of nail penetration speed on the safety outcomes of large-format automotive lithium-ion pouch cells. Through six controlled tests varying the speed of nail insertion, we observed that lower penetration speeds did not induce thermal runaway; instead, the cells exhibited self-discharge while the nail remained embedded. These findings suggest that penetration speed is a critical factor in the onset of thermal runaway, providing valuable insights for the development of safer battery systems and more effective safety testing protocols.
To alleviate the pilot and CSI-feedback burden in 6G, channel knowledge map (CKM) has emerged as a promising approach that predicts CSI solely from user locations. Nevertheless, accurate location information is rarely available in current systems. Moreover, the uncertainty inherent to highly dynamic scenes further degrades the performance of existing schemes that typically assume quasi-static scenarios. In this paper, we propose a novel framework named location-agnostic dynamic CKM (LAD-CKM). Specifically, LAD-CKM is constructed through dynamic radio frequency (RF) radiance field rendering, which takes instantaneous uplink CSI and partial downlink CSI as inputs. To enable effective rendering, a dedicated radiator representation network (RARE-Net) is designed to capture the spatial-spectral correlations within the inputs. Furthermore, an adaptive deformation module is devised to deform the uplink CSI-based queries of RARE-Net according to instantaneous channel dynamics, thereby enhancing CSI prediction accuracy under mobility. In addition, a novel synthetic channel dataset is created in outdoor dynamic scenes via ray-tracing. Simulation results demonstrate that LAD-CKM yields significant performance gains compared with existing baselines in terms of effective data rate.
A two-stage hybrid transceiver is designed by considering a partially connected architecture at the base station (BS) for a low-resolution multi-user (MU) THz massive multiple-input multiple-output (MIMO) system. Owing to its high bandwidth coupled with a large number of antennas, the THz band suffers from the deleterious spatial-wideband and frequency-wideband effects, jointly termed the dual-wideband effect. To address this undesired phenomenon, we rigorously model the THz MIMO channel at each subarray corresponding to each user by incorporating the absorption, reflection, and free-space losses. Subsequently, a novel beamforming technique is proposed that employs only a few true-time-delay (TTD) lines to eliminate the beam-split effect, which is the manifestation of the spatial-wideband effect in the frequency domain. Our simulation results demonstrate a performance improvement of around 13% in terms of spectral efficiency over existing state-of-the-art techniques.
Scaled Relative Graphs (SRGs) provide an intuitive graphical frequency-domain method for the analysis of Nonlinear (NL) systems, generalizing the Nyquist diagram. In this paper, we develop a method for computing $L_2$-gain bounds for Lur'e systems over bounded frequency and amplitude ranges. We do this by restricting the input space of the SRG both in frequency and energy content, and combining this with methods from Sobolev theory. The resulting gain bounds over restricted sets of inputs are less conservative than bounds computed over all of $L_2$, and yield a three-dimensional NL generalization of the Bode diagram, plotting the $L_2$-gain as a function of both input frequency and energy content. In the zero-energy limit, the Linear Time-Invariant (LTI) Bode diagram is recovered, and in the infinite-energy zero-frequency limit, we recover the $L_2$-gain. The effectiveness of our method is demonstrated on an example that resembles Phase-Locked Loop dynamics.
This paper investigates the problem of functional state estimation for linear time-delay systems in which the delay affecting the state evolution differs from the delay affecting the output measurements. While existing observer designs typically assume instantaneous output availability, practical systems often exhibit measurement delays that are distinct from and not aligned with the intrinsic state delay. We explicitly distinguish between the state delay $\tau$ and the measurement delay $h$ and address the problem of estimating a desired functional $z(t)=Fx(t)$ under such mismatched delay conditions. Three functional observer structures are proposed to accommodate different delay configurations, each capable of realizing functional observers of different orders. This flexibility is important since a functional observer whose order equals the number of estimated functionals may not always exist. For each structure, algebraic existence conditions are established together with constructive synthesis procedures. A functional augmentation framework is developed to derive verifiable rank-based conditions for observers of various orders. In addition, the notion of generalized functionals, defined over an augmented delayed state vector, is introduced to provide greater flexibility in satisfying observer existence conditions and facilitating systematic design. Numerical examples illustrate the proposed theory.
Model Predictive Control (MPC) is widely recognized for its ability to explicitly handle system constraints. In practice, system states are often affected by disturbances with unknown distributions. While robust MPC guarantees constraint satisfaction under worst-case scenarios, it tends to be overly conservative. Stochastic MPC balances conservatism and performance but relies on precise knowledge of the disturbance distribution, which is often unavailable. To address this challenge, this paper introduces Distributionally Robust Optimization (DRO) into the MPC framework and proposes a novel Two-Stage Distributionally Robust MPC (TSDR-MPC) scheme. The key innovation lies in formulating constraint violation penalties as a second-stage optimization problem, which, combined with the first-stage quadratic cost, constitutes a two-stage distributionally robust program. This structure enables adaptive constraint tightening against disturbances with unknown time-varying means and covariances. Utilizing a Wasserstein ambiguity set, we derive a tractable reformulation via strong duality and develop a cutting-plane algorithm that converges in a finite number of iterations, suitable for real-time implementation. To ensure closed-loop stability even under non-zero mean disturbances, we introduce a terminal constraint applied solely to the nominal system; this constraint is proportional to the current state and independent of distributional uncertainty, thus preserving overall feasibility. We provide rigorous theoretical guarantees, including recursive feasibility, finite-time algorithm termination, and an asymptotic performance bound on the average closed-loop cost. Numerical simulations validate the adaptability and robustness of the proposed framework under various disturbance scenarios.
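The Wasserstein ambiguity set used in such formulations is typically a ball around the empirical disturbance distribution; the order $p$ and radius $\varepsilon$ below are generic placeholders, not the paper's exact choices:

```latex
\mathcal{A}_{\varepsilon} \;=\; \bigl\{\, \mathbb{P} \in \mathcal{P}(\Xi) \;:\; W_p\bigl(\mathbb{P},\, \widehat{\mathbb{P}}_N\bigr) \le \varepsilon \,\bigr\},
```

where $\widehat{\mathbb{P}}_N$ is the empirical distribution built from $N$ disturbance samples. The second-stage constraint-violation penalty is evaluated under the worst-case $\mathbb{P} \in \mathcal{A}_{\varepsilon}$, and strong duality over this set is what yields the tractable reformulation mentioned in the abstract.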
In power networks based on Inverter-Based Resources (IBRs), fast controllers cause frequency and voltage dynamics to overlap. Thus, it becomes critical to assess the overall dynamic performance of such networks through a combined system-wide metric. This letter presents a unified metric designed to evaluate dynamic performance in such cases. The proposed metric consists of a weighted sum of local voltage phasor variations at each bus, where the weights are the complex powers injected at the buses. The proposed metric is further decomposed into device-driven and network-driven components, enabling a more comprehensive assessment of grid dynamics. A case study based on a modified version of the IEEE 39-bus system is presented, in which synchronous machines are replaced by inverter-based resources. A sensitivity analysis of the R/X ratio is utilized to evaluate the metric in conventional grids, as well as in those characterized by strong voltage-frequency coupling with complex power flows.
This note proposes a general control approach, called vector-field guided constraint-following control, to solve the dynamics control problem of geometric path-following for a class of uncertain mechanical systems. More specifically, it operates at the dynamics level and can handle both fully-actuated and underactuated mechanical systems, heterogeneous (possibly fast) time-varying uncertainties with unknown bounds, and geometric desired paths that may be self-intersecting. Simulations are conducted to demonstrate the effectiveness of the approach.
Keyword spotting (KWS) is crucial for many speech-driven applications, but robust KWS in noisy environments remains challenging. Conventional systems often rely on single-channel inputs and a cascaded pipeline separating front-end enhancement from KWS. This precludes joint optimization, inherently limiting performance. We present an end-to-end multi-channel KWS framework that exploits spatial cues to improve noise robustness. A spatial encoder learns inter-channel features, while a spatial embedding injects directional priors; the fused representation is processed by a streaming backbone. Experiments in simulated noisy conditions across multiple signal-to-noise ratios (SNRs) show that spatial modeling and directional priors each yield clear gains over baselines, with their combination achieving the best results. These findings validate end-to-end multi-channel spatial modeling, indicating strong potential for target-speaker-aware detection in complex acoustic scenarios.
Diffusion Probabilistic Models (DPMs) are a well-established class of diffusion models for unconditional image generation, while SGMSE+ is a well-established conditional diffusion model for speech enhancement. One of the downsides of diffusion models is that solving the reverse process requires many evaluations of a large Neural Network. Although advanced fast sampling solvers have been developed for DPMs, they are not directly applicable to models such as SGMSE+ due to differences in their diffusion processes. Specifically, DPMs transform between the data distribution and a standard Gaussian distribution, whereas SGMSE+ interpolates between the target distribution and a noisy observation. This work first develops a formalism of interpolating Stochastic Differential Equations (iSDEs) that includes SGMSE+, and second proposes a solver for iSDEs. The proposed solver enables fast sampling with as few as 10 Neural Network evaluations across multiple speech restoration tasks.
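One concrete instance of an interpolating SDE of the kind formalized here is the Ornstein-Uhlenbeck-style drift used in SGMSE+-like models; the stiffness $\gamma$ and diffusion coefficient $g(t)$ are model choices:

```latex
\mathrm{d}x_t = \gamma\,(y - x_t)\,\mathrm{d}t + g(t)\,\mathrm{d}w_t,
\qquad
\mathbb{E}[x_t \mid x_0] = y + e^{-\gamma t}\,(x_0 - y),
```

so the process mean interpolates from the clean signal $x_0$ toward the noisy observation $y$. This is exactly the structural difference from DPMs, whose terminal distribution is a standard Gaussian, that makes DPM fast-sampling solvers inapplicable without the proposed iSDE reformulation.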
Dynamic shortest-path routing, using real-time traffic data, enables path selection responsive to evolving conditions. Nevertheless, transportation planning tasks such as adaptive congestion pricing, fleet routing, and long-term operational decisions rely on offline traffic estimators. To bridge this gap, we develop a spatiotemporal predictor based on a low-rank decomposition of the traffic matrix and its temporal subspace coefficients. Using a recent large-scale measurement campaign over the Seoul road network, we show that our proposed predictor incurs an average excess travel time of less than 1.5 minutes. Moreover, the tail of our predictor's excess-travel-time distribution matches that of a near-real-time predictor. Results based on one year of traffic data are also demonstrated in simulations.
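A toy version of a low-rank spatiotemporal predictor: fit a rank-$r$ spatial basis to a historical segment-by-time travel-time matrix via SVD, then estimate a full network snapshot from a few observed segments. The rank, the plain-SVD fitting, and the partial-observation query are illustrative assumptions, not the paper's estimator:

```python
import numpy as np

def lowrank_predict(T_hist, obs_idx, obs_vals, rank=2):
    """Toy low-rank traffic predictor. T_hist: (segments, times) matrix of
    historical travel times. Learn a rank-r spatial basis U_r, then solve
    for the temporal coefficients that best match travel times observed on
    segments obs_idx, and reconstruct the full snapshot."""
    U, s, Vt = np.linalg.svd(T_hist, full_matrices=False)
    U_r = U[:, :rank]                                   # spatial basis
    coeff, *_ = np.linalg.lstsq(U_r[obs_idx], obs_vals, rcond=None)
    return U_r @ coeff                                  # full snapshot estimate
```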
Existing results on finite-time model predictive control (MPC) often rely on a terminal equality constraint, switching inside a one-step region, or a terminal cost with a short control horizon, leading to limited initial feasibility. This paper proposes an infinite-horizon MPC framework for the constrained finite-time stabilization of discrete-time systems, overcoming these limitations. The proposed framework is built upon a terminal cost strategy, but expands it by replacing the short-horizon terminal cost with the sum of stage costs over an infinite control horizon. This design choice significantly enlarges the initial feasibility region and avoids the need for terminal equality constraints or switching strategies during implementation. It is proved that the proposed finite-time MPC guarantees finite-time stabilization once the state trajectory enters the predefined terminal set. The infinite-horizon finite-time MPC is shown to be equivalently implementable as a finite-horizon MPC with a terminal cost, thereby ensuring computational tractability. The proposed finite-time MPC is systematically extended and shown to be applicable to both constrained multi-input linear systems and a class of constrained nonlinear systems that are feedback linearizable.
While large-scale omni-models have demonstrated impressive capabilities across various modalities, their strong performance heavily relies on massive multimodal data and incurs substantial computational costs. This work introduces Speech-Omni-Lite, a cost-efficient framework for extending pre-trained Visual-Language (VL) backbones with speech understanding and generation capabilities, while fully preserving the backbones' vision-language performance. Specifically, the VL backbone is equipped with two lightweight, trainable plug-and-play modules, a speech projector and a speech token generator, while keeping the VL backbone fully frozen. To mitigate the scarcity of spoken QA corpora, a low-cost data construction strategy is proposed to generate Question-Text Answer-Text-Speech (QTATS) data from existing ASR speech-text pairs, facilitating effective speech generation training. Experimental results show that, even with only thousands of hours of speech training data, Speech-Omni-Lite achieves excellent spoken QA performance, which is comparable to omni-models trained on millions of hours of speech data. Furthermore, the learned speech modules exhibit strong transferability across VL backbones.
Finetuning wireless receivers to a specific deployment scenario can yield significant error-rate performance improvements without increasing processing complexity. However, site-specific finetuning has so far only been demonstrated on synthetic channel data and lacks real-world benchmarks. In this work, we empirically study site-specific finetuning of neural receivers using real-world 5G NR physical uplink shared channel (PUSCH) data collected with an over-the-air testbed at ETH Zurich across three scenarios: (i) a small laboratory, (ii) a large office floor, and (iii) a high-mobility outdoor environment. Our results confirm substantial error-rate performance improvements from site-specific finetuning, consistent with earlier findings based on synthetic channel data. Moreover, we demonstrate that these improvements generalize across different user-equipment hardware and deployment scenarios.
Current developments in high-speed magnetic levitation technology based on the principle of electromagnetic suspension (EMS) focus on reaching vehicle speeds of more than 600 km/h. With increasing vehicle speeds, however, updated control algorithms need to be investigated to reliably stabilize the system and meet the demands on ride comfort. This article examines the modern and popular approach of model predictive control and its application to the magnetic levitation control system. Key aspects investigated are the parameterization of the model predictive controller and its implementation on embedded, resource-constrained hardware. The results reveal that model predictive control is capable of robustly stabilizing the highly nonlinear and constrained system even at very high speeds. Furthermore, processor-in-the-loop studies are carried out to validate the designed control algorithms on a microcontroller.
Room Impulse Responses (RIRs) enable realistic acoustic simulation, with applications ranging from multimedia production to speech data augmentation. However, acquiring high-quality real-world RIRs is labor-intensive, and data scarcity remains a challenge for data-driven RIR generation approaches. In this paper, we propose a novel approach to RIR generation by fine-tuning a pre-trained text-to-audio model, demonstrating for the first time that large-scale generative audio priors can be effectively leveraged for the task. To address the lack of text-RIR paired data, we establish a labeling pipeline utilizing vision-language models to extract acoustic descriptions from existing image-RIR datasets. We introduce an in-context learning strategy to accommodate free-form user prompts during inference. Evaluations involving MUSHRA listening tests and downstream ASR performance demonstrate that our model generates plausible RIRs and serves as an effective tool for speech data augmentation.
We present DRES: a 1.5-hour Dutch realistic elicited (semi-spontaneous) speech dataset from 80 speakers recorded in noisy, public indoor environments. DRES was designed as a test set for the evaluation of state-of-the-art (SOTA) automatic speech recognition (ASR) and speech enhancement (SE) models in a real-world scenario: a person speaking in a public indoor space with background talkers and noise. The speech was recorded with a four-channel linear microphone array. In this work we evaluate the speech quality of five well-known single-channel SE algorithms and the recognition performance of eight SOTA off-the-shelf ASR models before and after applying SE on the speech of DRES. We found that five out of the eight ASR models have WERs lower than 22% on DRES, despite the challenging conditions. In contrast to recent work, we did not find a positive effect of modern single-channel SE on ASR performance, emphasizing the importance of evaluating in realistic conditions.
In a wireless acoustic sensor network (WASN), devices (i.e., nodes) can collaborate through distributed algorithms to collectively perform audio signal processing tasks. This paper focuses on the distributed estimation of node-specific desired speech signals using network-wide Wiener filtering. The objective is to match the performance of a centralized system that would have access to all microphone signals, while reducing the communication bandwidth usage of the algorithm. Existing solutions, such as the distributed adaptive node-specific signal estimation (DANSE) algorithm, converge towards the multichannel Wiener filter (MWF) which solves a centralized linear minimum mean square error (LMMSE) signal estimation problem. However, they do so iteratively, which can be slow and impractical. Many solutions also assume that all nodes observe the same set of sources of interest, which is often not the case in practice. To overcome these limitations, we propose the distributed multichannel Wiener filter (dMWF) for fully connected WASNs. The dMWF is non-iterative and optimal even when nodes observe different sets of sources. In this algorithm, nodes exchange neighbor-pair-specific, low-dimensional (fused) signals estimating the contribution of sources observed by both nodes in the pair. We formally prove the optimality of dMWF and demonstrate its performance in simulated speech enhancement experiments. The proposed algorithm is shown to outperform DANSE in terms of objective metrics after short operation times, highlighting the benefit of its iterationless design.
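The centralized LMMSE benchmark that the dMWF is designed to match is the multichannel Wiener filter. A minimal numpy sketch of that benchmark under a toy mixing model (the array size, mixing, and variable names are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: M = 6 microphones observing one desired source plus noise.
M, N = 6, 20000
a = rng.standard_normal(M)            # illustrative mixing (steering) vector
s = rng.standard_normal(N)            # desired source signal
noise = 0.5 * rng.standard_normal((M, N))
y = np.outer(a, s) + noise            # microphone signals
d = a[0] * s                          # desired signal at reference mic 0

# Centralized MWF: w = Ryy^{-1} ryd, the solution of the LMMSE problem.
Ryy = y @ y.T / N                     # sample covariance of the mic signals
ryd = y @ d / N                       # cross-correlation with the target
w = np.linalg.solve(Ryy, ryd)

d_hat = w @ y
mse_filtered = float(np.mean((d_hat - d) ** 2))
mse_raw = float(np.mean((y[0] - d) ** 2))
```

The filtered estimate attains a lower mean squared error than the raw reference microphone, which is the performance target the distributed algorithm aims to reach without every node sharing all of its raw channels.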
Nonlinear optimisation techniques are commonly employed to minimise complex cost functions, with their effectiveness determined largely by the structure of the underlying error landscape. These methods require initial parameter values, and in the presence of multiple local minima, they are prone to becoming trapped in suboptimal regions. The likelihood of locating the global minimum increases substantially when the initialisation lies within its corresponding basin of attraction. Consequently, high-quality initial parameters are critical for successful optimisation. This technical report outlines a new strategy for selecting suitable initial parameters for a trigonometric model and unevenly sampled data, ensuring that the optimisation procedure starts sufficiently close to the global minimum. The proposed parameter estimation approach is strictly NI-based, interpretable, and explainable. It targets complicated cases, including samples with strong random noise, samples covering only a few periods, and samples covering only a fraction of one period. Special attention is paid to frequency estimation. It can be shown that initial parameters can be estimated with sufficient accuracy down to a signal-to-noise ratio of 1.4 dB, at much lower computational cost than the Lomb-Scargle periodogram method requires.
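To make the initialization problem concrete, a brute-force least-squares frequency scan over a coarse grid (a generic baseline, not the report's NI-based estimator, whose details are not reproduced here) can supply a starting frequency for unevenly sampled data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Unevenly sampled noisy sinusoid (illustrative data, not from the report).
t = np.sort(rng.uniform(0.0, 10.0, 200))
f_true = 0.7
x = np.sin(2 * np.pi * f_true * t + 0.3) + 0.3 * rng.standard_normal(t.size)

def ls_power(f):
    """Power of the best least-squares sinusoid fit at trial frequency f."""
    A = np.column_stack([np.sin(2 * np.pi * f * t),
                         np.cos(2 * np.pi * f * t)])
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    return float(np.sum((A @ coef) ** 2))

# Coarse grid scan: the maximizer seeds a subsequent nonlinear optimisation.
grid = np.linspace(0.05, 2.0, 400)
f0 = float(grid[np.argmax([ls_power(f) for f in grid])])
```

The scan places the initial frequency inside the basin of attraction of the global minimum, which is exactly the property the report's (cheaper) estimator is designed to provide.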
Accurate path loss prediction is crucial for wireless network planning and optimization in suburban environments with complex terrain variation and diverse land cover. This paper proposes a model-assisted hybrid path loss prediction method that introduces an environment-adaptive compensation on top of the classic close-in free-space reference distance (CI) path loss model. By jointly predicting the path loss exponent and a compensation term, the proposed approach dynamically adjusts the empirical trend. To improve the effectiveness of the environmental representation, three environmental image organization schemes are constructed and evaluated. Experiments on measurement data collected on Pingtan Island show that the proposed method outperforms the CI model and a conventional model-assisted baseline, achieving a test root mean square error of 4.04 dB.
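For reference, the CI model that the hybrid method compensates on top of has a simple closed form. A minimal sketch (the carrier frequency and exponent values are illustrative assumptions):

```python
import numpy as np

def ci_path_loss(d_m, f_hz, n, d0=1.0):
    """Close-in free-space reference distance (CI) model, in dB:
        PL(d) = FSPL(f, d0) + 10 * n * log10(d / d0)
    where n is the path loss exponent and d0 the reference distance."""
    c = 299_792_458.0
    fspl_d0 = 20 * np.log10(4 * np.pi * d0 * f_hz / c)  # free-space loss at d0
    return fspl_d0 + 10 * n * np.log10(d_m / d0)

# The hybrid scheme would add a learned, environment-adaptive term on top:
#   PL_hat(d) = ci_path_loss(d, f, n_pred) + delta_pred
# with n_pred and delta_pred produced by the data-driven model (assumed names).
pl_near = float(ci_path_loss(10.0, 3.5e9, 3.0))
pl_far = float(ci_path_loss(100.0, 3.5e9, 3.0))
```

With exponent n = 3, each decade of distance adds 30 dB to the empirical trend; the learned compensation then corrects the residual left by this single-slope model.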
Frequency stability is fundamental to the secure operation of power systems. With growing uncertainty and volatility introduced by renewable generation, secondary frequency regulation must now deliver enhanced performance not only in the steady state but also during transients. This paper presents a systematic framework to embed learning in the design of a primal-dual controller that provides provable (potentially exponential) stability and steady-state optimality, while simultaneously improving key transient metrics, including frequency nadir and control effort, in a data-driven manner. In particular, we employ the primal-dual dynamics of an optimization problem that encodes steady-state objectives to realize secondary frequency control with asymptotic stability guarantees. To augment the transient performance of the controller via learning, a change of variables on the control inputs, implemented by neural networks, is proposed such that, under mild conditions, stability and steady-state optimality are preserved. This further allows us to define a learning goal that accounts for the exponential convergence rate, frequency nadir, and accumulated control effort, and to use sample trajectories to enhance these metrics. Simulation results validate the theory and demonstrate the superior transient performance of the learning-augmented primal-dual controller.
Super-resolution ultrasound via microbubble (MB) localisation and tracking, also known as ultrasound localisation microscopy (ULM), can resolve microvasculature beyond the acoustic diffraction limit. However, significant challenges remain in localisation performance and in data acquisition and processing time. Deep learning methods for ULM have shown promise in addressing these challenges; however, they remain limited by in vivo label scarcity and the simulation-to-reality domain gap. We present CycleULM, the first unified label-free deep learning framework for ULM. CycleULM learns a physics-emulating translation between the real contrast-enhanced ultrasound (CEUS) data domain and a simplified MB-only domain, leveraging the power of CycleGAN without requiring paired ground truth data. With this translation, CycleULM removes dependence on high-fidelity simulators or labelled data, and makes MB localisation and tracking substantially easier. Deployed as modular plug-and-play components within existing pipelines or as an end-to-end processing framework, CycleULM delivers substantial performance gains across both in silico and in vivo datasets. Specifically, CycleULM improves image contrast (contrast-to-noise ratio) by up to 15.3 dB and sharpens CEUS resolution with a 2.5{\times} reduction in the full width at half maximum of the point spread function. CycleULM also improves MB localisation performance, with up to +40% recall, +46% precision, and a 14.0 {\mu}m reduction in mean localisation error, yielding more faithful vascular reconstructions. Importantly, CycleULM achieves real-time processing throughput at 18.3 frames per second with order-of-magnitude speed-ups (up to ~14.5{\times}). By combining label-free learning, performance enhancement, and computational efficiency, CycleULM provides a practical pathway toward robust, real-time ULM and accelerates its translation to clinical applications.
A 3-bit analog-to-digital converter (ADC) is designed using perpendicular spin-orbit torque magnetic tunnel junctions (SOT MTJs). A sampled analog input signal is transmitted as a spin-orbit torque current (Iin) to a perpendicular SOT MTJ, and deterministic switching is supported by the voltage-controlled magnetic anisotropy (VCMA) and spin-transfer torque (STT) switching methods. Analog-to-digital conversion is performed by comparing the input signal with the varied critical currents of the SOT MTJs. The critical current of each SOT MTJ is governed by the varying width of its heavy metal (HM) layer. The 3-bit ADC contains two sets of 7 SOT MTJs for quantizing the input value: a conversion set and a dummy set for comparing the change in resistance state. As the input signal passes through the conversion set, each SOT MTJ switches from the parallel (P) to the antiparallel (AP) state if the input signal exceeds its critical current. The state changes in the conversion set are converted to thermometer codes by a StrongARM latch comparator, which compares their resistances with those of the dummy-set SOT MTJs, all of which remain in the P (low-resistance) state. A novel architecture is proposed to increase throughput by using the dummy set as the conversion set and the conversion set as the dummy set, thus eliminating the reset step from the analog-to-digital conversion. By improving the SOT-MTJ and timing blocks, a field-free spin flash ADC with a power consumption of 476 uW and a conversion rate of 304.1 MHz is produced.
This letter proposes a linear bandit-based beam training framework for near-field communication under multipath channels. By leveraging Thompson Sampling (TS), the framework adaptively balances exploration and exploitation to maximize cumulative beamforming gain under limited pilot overhead. To ensure data-efficient learning, we incorporate a correlated Gaussian prior in the DFT domain, using a Gaussian kernel to capture spatial correlations and near-field energy leakage. We develop three TS strategies: codebook-constrained search for rapid convergence via structural regularization, continuous-space search to achieve near-optimal performance, and a two-stage hybrid refinement scheme that balances convergence speed and estimation accuracy. Simulation results show that the proposed framework reduces pilot overhead by up to 90\% while achieving more than a 2 dB SNR gain over baselines in multipath environments. Furthermore, the continuous-space search is shown to be asymptotically optimal, approaching the full-CSI bound when the pilot overhead is unconstrained.
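The codebook-constrained variant can be sketched in a few lines: a Gaussian TS bandit whose prior covariance uses a Gaussian (RBF) kernel over beam indices to mimic spatial correlation between nearby beams. The codebook size, noise level, and beam-gain profile below are illustrative assumptions, not the letter's simulation parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

K = 32                                   # codebook size (illustrative)
idx = np.arange(K)
# Correlated Gaussian prior: RBF kernel over beam indices models the
# spatial correlation / energy leakage between neighbouring beams.
ell = 3.0
Sigma = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / ell) ** 2)
mu = np.zeros(K)
sigma_n2 = 0.05                          # pilot measurement noise variance

true_gain = np.exp(-0.5 * ((idx - 11) / 2.0) ** 2)  # unknown gain profile

for _ in range(60):                      # pilot budget (illustrative)
    sample = rng.multivariate_normal(mu, Sigma)
    k = int(np.argmax(sample))           # Thompson draw -> probe beam k
    y = true_gain[k] + np.sqrt(sigma_n2) * rng.standard_normal()
    # Exact Gaussian posterior update after observing coordinate k.
    s = Sigma[:, k].copy()
    g = s / (Sigma[k, k] + sigma_n2)
    mu = mu + g * (y - mu[k])
    Sigma = Sigma - np.outer(g, s)

best = int(np.argmax(mu))                # estimated best beam
```

Because the kernel couples neighbouring beams, each pilot measurement also refines the posterior over adjacent codewords, which is the mechanism behind the reduced pilot overhead claimed above.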
The next generation of cellular networks is designed to provide ubiquitous connectivity to a wide range of devices. As Telecommunication Service Providers (TSPs) increasingly collaborate with public cloud providers to deploy 5G and beyond networks, a fundamental shift is underway from hardware-bound Physical Network Functions (PNFs) to cloud-native, containerized deployments managed through platforms like Kubernetes. While this transition promises greater scalability, flexibility, and cost efficiency, it also introduces a complex set of technical and operational challenges that must be thoroughly understood before large-scale cellular deployments can take place in cloud environments. In this survey, we present a structured taxonomy that categorizes the design space of cloud-based cellular deployments across four dimensions: deployment architecture, resource management and orchestration, multi-tenancy and isolation, and economic and ownership models. Using this taxonomy as a foundation, we critically analyze six key investigation areas: security and privacy, scalability and elasticity, performance and latency, cost optimization, resilience and fault management, and compliance and sovereignty, examining each through a cloud-native lens. To benchmark the state of industry adoption, we examine the deployment strategies of leading Infrastructure-as-a-Service (IaaS) providers, namely Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). Finally, we identify emerging trends such as AI-driven orchestration, quantum-safe protocols for virtualized network functions, and serverless networking for 6G, while articulating the open challenges that remain in realizing robust, scalable cloud-based cellular networks.
A privacy-preserving dynamic average consensus (DAC) algorithm is proposed that achieves consensus while preventing external eavesdroppers from inferring the reference signals and their derivatives. During the initialization phase, each agent generates a set of sinusoidal signals with randomly selected frequencies and exchanges them with its neighboring agents to construct a masking signal. Each agent masks its reference signals using this composite masking signal before executing the DAC update rule. It is shown that the developed scheme preserves the convergence properties of the conventional DAC framework while preventing information leakage to external eavesdroppers. Furthermore, the developed algorithm is applied to state-of-charge (SoC) balancing in a networked battery energy storage system to demonstrate its practical applicability. Simulation results validate the theoretical findings.
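The masking idea can be illustrated for a single two-agent edge: a shared sinusoid with a randomly drawn frequency and phase is added by one agent and subtracted by the other, so an eavesdropper sees only masked references while the network-wide sum, which drives the average consensus, is unchanged. This is a toy sketch of the cancellation property only; the full protocol and DAC update rule are in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

dt, T = 0.01, 10.0
t = np.arange(0.0, T, dt)

def sinusoid(f, phi, t):
    return np.sin(2 * np.pi * f * t + phi)

# Randomly selected frequency/phase for the edge (1, 2), exchanged once
# during initialization (illustrative; one edge of a larger graph).
f12, p12 = rng.uniform(0.5, 5.0), rng.uniform(0.0, 2 * np.pi)
mask_1 = +sinusoid(f12, p12, t)     # agent 1's contribution to its mask
mask_2 = -sinusoid(f12, p12, t)     # agent 2 uses the negated copy

r1 = np.sin(2 * np.pi * 0.2 * t)    # private reference signals (illustrative)
r2 = np.cos(2 * np.pi * 0.3 * t)
masked_sum = (r1 + mask_1) + (r2 + mask_2)   # masks cancel in the sum
```

An eavesdropper observing `r1 + mask_1` cannot recover `r1` without knowing the random frequency and phase, yet the sum that the DAC algorithm tracks is exactly `r1 + r2`.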
Accurately forecasting spectrum demand is a key component of efficient spectrum resource allocation and management. With the rapid growth in demand for wireless services, mobile network operators and regulators face increasing challenges in ensuring adequate spectrum availability. This paper presents a data-driven approach leveraging artificial intelligence (AI) and machine learning (ML) to estimate and manage spectrum demand. The approach uses multiple proxies of spectrum demand, drawn from site license data and from crowdsourced data. These proxies are validated against real-world mobile network traffic data to ensure reliability, achieving an R$^2$ value of 0.89 for an enhanced proxy. The proposed ML models are tested and validated across five major Canadian cities, demonstrating their generalizability and robustness. These contributions assist spectrum regulators in dynamic spectrum planning, enabling better resource allocation and policy adjustments to meet future network demands.
The progressive electrification of aircraft systems under the more electric aircraft (MEA) paradigm is reshaping the design and qualification constraints of safety-critical avionics. Emergency locator transmitters (ELTs), which are essential for post-accident localization and search and rescue (SAR) operations, have evolved from legacy 121.5/243 MHz beacons to digitally encoded 406 MHz systems, typically retaining 121.5 MHz as a homing signal in combined units. In parallel, the modernization of the Cospas-Sarsat infrastructure, especially MEOSAR, together with multi-constellation global navigation satellite system (GNSS) integration and second-generation beacon capabilities, is reducing detection latency and enabling richer distress messaging. However, MEA platforms impose stricter constraints on available power, thermal management, wiring density, and electromagnetic compatibility (EMC). As a result, ELT performance increasingly depends not only on the device itself, but also on its installation conditions and on the aircraft's overall electrical environment. This review summarizes the ELT architectures and activation/operational cycles, outlines key technological milestones, and consolidates the main integration challenges for MEA, with emphasis on energy autonomy, battery qualification frameworks, EMC and installation practices, and survivability-driven failure modes (e.g., antenna/feedline damage, mounting, and post-impact shielding). Finally, emerging trends, including ELT for distress tracking (DT), energy-based designs, advanced health monitoring, and certification-ready pathways for next-generation SAR services, are discussed, highlighting research directions that can deliver demonstrable, certifiable gains in reliability, energy efficiency, and robust integration for future electrified aircraft.
In the diverse landscape of 6G networks, where wireless connectivity demands surge and spectrum resources remain limited, flexible spectrum access becomes paramount. The success of crafting such schemes hinges on our ability to accurately characterize spectrum demand patterns across space and time. This paper presents a data-driven methodology for estimating spectrum demand variations over space and identifying key drivers of these variations in the mobile broadband landscape. By leveraging geospatial analytics and machine learning, the methodology is applied to a case study in Canada to estimate spectrum demand dynamics in urban regions. Our proposed model captures 70\% of the variability in spectrum demand when trained on one urban area and tested on another. These insights empower regulators to navigate the complexities of 6G networks and devise effective policies to meet future network demands.
Public EV charging infrastructure suffers from significant failure rates -- with field studies reporting up to 27.5% of DC fast chargers non-functional -- and multi-day mean time to resolution, imposing billions in annual economic burden. Cloud-centric architectures cannot achieve the latency, reliability, and bandwidth characteristics required for autonomous operation. We present Auralink SDC (Software-Defined Charging), an architecture deploying domain-specialized AI agents at the network edge for autonomous charging infrastructure management. Key contributions include: (1) Confidence-Calibrated Autonomous Resolution (CCAR), enabling autonomous remediation with formal false-positive bounds; (2) Adaptive Retrieval-Augmented Reasoning (ARA), combining dense and sparse retrieval with dynamic context allocation; (3) Auralink Edge Runtime, achieving sub-50ms TTFT on commodity hardware under PREEMPT_RT constraints; and (4) Hierarchical Multi-Agent Orchestration (HMAO). Implementation uses AuralinkLM models fine-tuned via QLoRA on a domain corpus spanning OCPP 1.6/2.0.1, ISO 15118, and operational incident histories. Evaluation on 18,000 labeled incidents in a controlled environment establishes 78% autonomous incident resolution, 87.6% diagnostic accuracy, and 28-48ms TTFT latency (P50). This work presents architecture and implementation patterns for edge-deployed industrial AI systems with safety-critical constraints.
Wide-area IoT sensor networks require efficient data collection mechanisms when sensors are dispersed over large regions with limited communication infrastructure. Unmanned aerial vehicle (UAV)-mounted Mobile Base Stations (MBSs) provide a flexible solution; however, their limited onboard energy and the strict energy budgets of sensors necessitate carefully optimized tour planning. In this paper, we introduce the Mobile Base Station Optimal Tour (MOT) problem, which seeks a minimum-cost, non-revisiting tour over a subset of candidate stops such that the union of their coverage regions ensures complete sensor data collection under a global sensor energy constraint. The tour also avoids restricted areas. We formally model the MOT problem as a combinatorial optimization problem, which is NP-complete. Owing to its computational intractability, we develop a polynomial-time greedy heuristic that jointly considers travel cost and incremental coverage gain while avoiding restricted areas. Using simulations, we obtain tours with low cost, complete sensor coverage, and faster execution. Our proposed greedy algorithm outperforms state-of-the-art approaches in terms of a performance indicator defined as the product of tour length and algorithm execution time, achieving an improvement of 39.15%. The proposed framework provides both theoretical insight into the structural complexity of MBS-assisted data collection and a practical algorithmic solution for large-scale IoT deployments.
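A greedy heuristic of the kind described, ranking candidate stops by incremental coverage gain per unit of added travel cost until all sensors are covered, can be sketched as follows. The coordinates, coverage radius, and cost model are illustrative assumptions, not the paper's benchmark instances (restricted-area avoidance is omitted for brevity).

```python
import math

# Toy MOT-style instance: sensor positions, candidate stops, coverage radius.
sensors = {0: (1, 1), 1: (4, 5), 2: (9, 2), 3: (7, 8)}
stops = {"A": (2, 2), "B": (5, 5), "C": (8, 3), "D": (7, 7)}
R = 2.5                          # MBS coverage radius at a stop (assumed)
depot = (0, 0)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def covered_by(stop):
    """Sensors whose data the MBS can collect while hovering at `stop`."""
    return {s for s, pos in sensors.items() if dist(pos, stops[stop]) <= R}

tour, covered = [], set()
while covered != set(sensors):
    last = stops[tour[-1]] if tour else depot
    best, best_score = None, -1.0
    for c in set(stops) - set(tour):
        gain = len(covered_by(c) - covered)   # incremental coverage
        if gain == 0:
            continue
        score = gain / (dist(last, stops[c]) + 1e-9)  # gain per travel cost
        if score > best_score:
            best, best_score = c, score
    if best is None:
        break                     # remaining sensors unreachable
    tour.append(best)
    covered |= covered_by(best)
```

Each iteration is a scan over the remaining candidates, so the heuristic runs in polynomial time, in contrast to the NP-complete exact formulation.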
This paper formally develops a novel hierarchical planning and control framework for robust payload transportation by quadrupedal robots, integrating a model predictive control (MPC) algorithm with a gradient-descent-based adaptive updating law. At the framework's high level, an indirect adaptive law estimates the unknown parameters of the reduced-order (template) locomotion model under varying payloads. These estimated parameters feed into an MPC algorithm for real-time trajectory planning, incorporating a convex stability criterion within the MPC constraints to ensure the stability of the template model's estimation error. The optimal reduced-order trajectories generated by the high-level adaptive MPC (AMPC) are then passed to a low-level nonlinear whole-body controller (WBC) for tracking. Extensive numerical investigations validate the framework's capabilities, showcasing the robot's proficiency in transporting unmodeled, unknown static payloads of up to 109% of its mass in experiments on flat terrain and up to 91% on rough terrain. The robot also successfully manages dynamic payloads with 73% of its mass on rough terrain. Performance comparisons with a normal MPC and an L1 MPC indicate a significant improvement. Furthermore, comprehensive hardware experiments conducted in indoor and outdoor environments confirm the method's efficacy on rough terrain despite uncertainties such as payload variations, push disturbances, and obstacles.
Model Predictive Control (MPC) is widely adopted for agile multirotor vehicles, yet achieving both stability and obstacle-free flight is particularly challenging when a payload is suspended beneath the airframe. This paper introduces a Safety Enhanced Passivity-Based Nonlinear MPC (SEP-NMPC) that provides formal guarantees of stability and safety for a quadrotor transporting a slung payload through cluttered environments. Stability is enforced by embedding a strict passivity inequality, which is derived from a shaped energy storage function with adaptive damping, directly into the NMPC. This formulation dissipates excess energy and ensures asymptotic convergence despite payload swings. Safety is guaranteed through high-order control barrier functions (HOCBFs) that render user-defined clearance sets forward-invariant, obliging both the quadrotor and the swinging payload to maintain separation while interacting with static and dynamic obstacles. The optimization remains quadratic-program compatible and is solved online at each sampling time without gain scheduling or heuristic switching. Extensive simulations and real-world experiments confirm stable payload transport, collision-free trajectories, and real-time feasibility across all tested scenarios. The SEP-NMPC framework therefore unifies passivity-based closed-loop stability with HOCBF-based safety guarantees for UAV slung-payload transportation.
Phase-amplitude coupling (PAC), a form of cross-frequency interaction, has been implicated in various cognitive functions and, by extension, in neural communication and information integration. Accurately detecting and characterising PAC is essential for understanding its role in processes such as memory and attention. However, this remains a significant challenge. Most existing methods rely on variations in the temporal profile to detect PAC, but they often suffer from key limitations, most notably their sensitivity to filter bandwidth selection and their susceptibility to detecting spurious couplings. Previous studies have suggested that approaches grounded in the actual generative dynamics of PAC may offer improved accuracy. In this study, we adopt a dynamical systems perspective and propose a novel method for PAC detection and characterisation based on nonlinear system identification. This approach involves identifying a nonlinear dynamical model that captures the temporal dynamics underlying PAC. The resulting generative model enables noise-free simulation of estimated PAC signals, facilitating detailed analysis of modulation strength and of the low-frequency phase at which the high-frequency bursts occur. The proposed method accounts for harmonic-induced spurious couplings through empirically derived criteria and remains robust to high noise levels and variations in slow-frequency power, offering an accurate and interpretable framework for PAC analysis. The performance of the proposed approach is illustrated using several simulated examples and a real case using local field potential (LFP) data. The results are compared with those of several popular methods.
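As context for the temporal-profile methods the paper contrasts with, the classical mean-vector-length modulation index can be sketched on a synthetic PAC signal. This is a conventional baseline with crude FFT-based filtering, not the proposed system-identification method; the signal parameters are illustrative.

```python
import numpy as np

fs, n = 1000.0, 10000
t = np.arange(n) / fs
rng = np.random.default_rng(4)

# Synthetic PAC: the 6 Hz phase modulates the 60 Hz amplitude.
slow = np.sin(2 * np.pi * 6 * t)
x = slow + (1 + 0.8 * slow) * np.sin(2 * np.pi * 60 * t) \
    + 0.1 * rng.standard_normal(n)

def analytic(sig):
    """Analytic signal via FFT (discrete Hilbert transform, even length)."""
    X = np.fft.fft(sig)
    h = np.zeros(sig.size)
    h[0] = h[sig.size // 2] = 1.0       # keep DC and Nyquist bins
    h[1:sig.size // 2] = 2.0            # double positive frequencies
    return np.fft.ifft(X * h)

def bandpass(sig, lo, hi):
    """Crude brick-wall band-pass by zeroing FFT bins outside [lo, hi] Hz."""
    X = np.fft.rfft(sig)
    f = np.fft.rfftfreq(sig.size, 1 / fs)
    X[(f < lo) | (f > hi)] = 0.0
    return np.fft.irfft(X, sig.size)

phase = np.angle(analytic(bandpass(x, 4, 8)))        # slow-band phase
amp = np.abs(analytic(bandpass(x, 45, 75)))          # fast-band envelope
mi = float(np.abs(np.mean(amp * np.exp(1j * phase))))  # mean vector length
```

For this signal the mean vector length is clearly nonzero (roughly the modulation depth divided by two), but the result depends directly on the chosen filter bands, which is exactly the bandwidth-selection sensitivity the generative-model approach seeks to avoid.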
We study a class of finite-state channels, known as POST channels, in which the previous channel output serves as the current state. A POST channel is deemed approximately memoryless when the state-dependent transition matrices are sufficiently close to one another. For this family of channels, under a surjectivity condition on the associated memoryless reference channel, we show that the feedback capacity coincides with the non-feedback capacity. Consequently, for almost all approximately memoryless POST channels whose input alphabet size is no smaller than the output alphabet size, feedback provides no capacity gain. This result extends Shannon's classical theorem on discrete memoryless channels and demonstrates that the phenomenon holds well beyond the strictly memoryless case.
In this paper, we investigate a novel digital network twin (DNT)-assisted deep learning (DL) model training framework. In particular, we consider a physical network in which a base station (BS) uses several antennas to serve multiple mobile users, and a DNT that is a virtual representation of the physical network. The BS must adjust its antenna tilt angles to optimize the data rates of all users. Due to user mobility, the BS may not be able to accurately track network dynamics such as wireless channels and user movements. Hence, a reinforcement learning (RL) approach is used to dynamically adjust the antenna tilt angles. To train the RL model, we can use data collected from the physical network and from the DNT. The data collected from the physical network is more accurate but incurs more communication overhead than the data collected from the DNT. Therefore, it is necessary to determine the ratio of data collected from the physical network and the DNT to improve the training of the RL model. We formulate this problem as an optimization problem whose goal is to jointly optimize the tilt angle adjustment policy and the data collection strategy, aiming to maximize the data rates of all users while constraining the time delay introduced by collecting data from the physical network. To solve this problem, we propose a hierarchical RL framework that integrates a robust adversarial loss and proximal policy optimization (PPO). Simulation results show that our proposed method reduces the physical network data collection delay by up to 28.01% and 1x compared to a hierarchical RL framework that uses vanilla PPO as the first-level RL, and a baseline that uses robust RL at the first level and selects the data collection ratio randomly.
Speech Large Language Models (LLMs) show great promise for speech emotion recognition (SER) via generative interfaces. However, shifting from closed-set classification to open text generation introduces zero-shot stochasticity, making evaluation highly sensitive to prompts. Additionally, conventional speech LLM benchmarks overlook the inherent ambiguity of human emotion. Hence, we present VoxEmo, a comprehensive SER benchmark for speech LLMs encompassing 35 emotion corpora across 15 languages. VoxEmo provides a standardized toolkit featuring varying prompt complexities, from direct classification to paralinguistic reasoning. To reflect real-world perception and application, we introduce a distribution-aware soft-label protocol and a prompt-ensemble strategy that emulates annotator disagreement. Experiments reveal that while zero-shot speech LLMs trail supervised baselines in hard-label accuracy, they uniquely align with human subjective distributions.
This paper considers the perception safety problem in distributed vision-based leader-follower formations, where each robot uses onboard perception to estimate relative states, track desired setpoints, and keep the leader within its camera field of view (FOV). Safety is challenging due to heteroscedastic perception errors and the coupling between formation maneuvers and visibility constraints. We propose a distributed, formation-aware adaptive conformal prediction method based on Risk-Aware Mondrian CP to produce formation-conditioned uncertainty quantiles. The resulting bounds tighten in high-risk configurations (near FOV limits) and relax in safer regions. We integrate these bounds into a Formation-Aware Conformal CBF-QP with a smooth margin to enforce visibility while maintaining feasibility and tracking performance. Gazebo simulations show improved formation success rates and tracking accuracy over non-adaptive (global) CP baselines that ignore formation-dependent visibility risk, while preserving finite-sample probabilistic safety guarantees. The experimental videos are available on the \href{this https URL}{project website}\footnote{Project Website: this https URL}.
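The group-conditional quantile idea behind Mondrian-style conformal prediction can be sketched in a few lines. The formation "bins", error scales, and miscoverage level below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(5)

# Calibration residuals (nonconformity scores) collected per formation bin,
# e.g. near the FOV limit vs. well inside it (error scales are illustrative).
alpha = 0.1
scores = {"near_fov_limit": np.abs(rng.normal(0.0, 2.0, 500)),
          "interior":       np.abs(rng.normal(0.0, 0.5, 500))}

def conformal_quantile(s, alpha):
    """Split-conformal quantile with the (n + 1) finite-sample correction."""
    n = s.size
    k = int(np.ceil((n + 1) * (1 - alpha)))
    return float(np.sort(s)[k - 1])

# One uncertainty bound per formation bin, used as the margin in the CBF-QP.
q = {g: conformal_quantile(s, alpha) for g, s in scores.items()}
```

Conditioning the quantile on the formation bin is what lets the safety margin widen where perception errors are large (near FOV limits) and shrink in benign configurations, instead of applying one global, overly conservative bound.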
Forward reachability analysis is a dominant approach for verifying reach-avoid specifications in neural feedback systems, i.e., dynamical systems controlled by neural networks, and a number of directions have been proposed and studied. In contrast, far less attention has been given to backward reachability analysis for these systems, in part because of the limited scalability of known techniques. In this work, we begin to address this gap by introducing new algorithms for computing both over- and underapproximations of backward reachable sets for nonlinear neural feedback systems. We also describe and implement an integration of these backward reachability techniques with existing ones for forward analysis. We call the resulting algorithm Forward and Backward Reachability Integration for Certification (FaBRIC). We evaluate our algorithms on a representative set of benchmarks and show that they significantly outperform the prior state of the art.
Audio-Visual Segmentation (AVS) aims to produce pixel-level masks of sound-producing objects in videos by jointly learning from audio and visual signals. However, real-world environments are inherently dynamic, causing audio and visual distributions to evolve over time, which challenges existing AVS systems that assume static training settings. To address this gap, we introduce the first exemplar-free continual learning benchmark for Audio-Visual Segmentation, comprising four learning protocols across single-source and multi-source AVS datasets. We further propose a strong baseline, ATLAS, which uses audio-guided pre-fusion conditioning to modulate visual feature channels via projected audio context before cross-modal attention. Finally, we mitigate catastrophic forgetting by introducing Low-Rank Anchoring (LRA), which stabilizes adapted weights based on loss sensitivity. Extensive experiments demonstrate competitive performance across diverse continual scenarios, establishing a foundation for lifelong audio-visual perception. Code is available at${}^{*}$\footnote{Paper under review} - \hyperlink{this https URL}{this https URL} \keywords{Continual Learning \and Audio-Visual Segmentation \and Multi-Modal Learning}
Wi-Fi Channel State Information (CSI) has emerged as a promising non-line-of-sight sensing modality for human and robotic activity recognition. However, prior work has predominantly relied on CSI amplitude while underutilizing phase information, particularly in robotic arm activity recognition. In this paper, we present GateFusion-Bidirectional Long Short-Term Memory network (GF-BiLSTM) for WiFi sensing in robotic activity recognition. GF-BiLSTM is a two-stream gated fusion network that encodes amplitude and phase separately and adaptively integrates per-time features through a learned gating mechanism. We systematically evaluate state-of-the-art deep learning models under a Leave-One-Velocity-Out (LOVO) protocol across four input configurations: amplitude only, phase only, amplitude + unwrapped phase, and amplitude + sanitized phase. Experimental results demonstrate that incorporating phase alongside amplitude consistently improves recognition accuracy and cross-speed robustness, with GF-BiLSTM achieving the best performance. To the best of our knowledge, this work provides the first systematic exploration of CSI phase for robotic activity recognition, establishing its critical role in Wi-Fi-based sensing.
Parameter estimation-based observer (PEBO) is a recently developed constructive tool to design state observers for nonlinear systems. It reformulates the state estimation problem as one of online parameter identification, effectively addressing many open estimation challenges in practical applications. The feasibility of a PEBO design relies on two fundamental properties: transformability and identifiability. The former pertains to the existence of an injective solution to a suitable partial differential equation, whereas the latter characterizes the uniqueness of the parameterization induced by the resulting nonlinear regression model. In this paper, we analyze the existence of PEBOs for general nonlinear systems by studying these two properties in detail and by providing sufficient conditions under which they hold.
Emerging generative world models and vision-language-action (VLA) systems are rapidly reshaping automated driving by enabling scalable simulation, long-horizon forecasting, and capability-rich decision making. Across these directions, latent representations serve as the central computational substrate: they compress high-dimensional multi-sensor observations, enable temporally coherent rollouts, and provide interfaces for planning, reasoning, and controllable generation. This paper proposes a unifying latent-space framework that synthesizes recent progress in world models for automated driving. The framework organizes the design space by the target and form of latent representations (latent worlds, latent actions, latent generators; continuous states, discrete tokens, and hybrids) and by structural priors for geometry, topology, and semantics. Building on this taxonomy, the paper articulates five cross-cutting internal mechanics (i.e., structural isomorphism, long-horizon temporal stability, semantic and reasoning alignment, value-aligned objectives and post-training, and adaptive computation and deliberation) and connects these design choices to robustness, generalization, and deployability. The work also proposes concrete evaluation prescriptions, including a closed-loop metric suite and a resource-aware deliberation cost, designed to reduce the open-loop / closed-loop mismatch. Finally, the paper identifies actionable research directions toward advancing latent world models for decision-ready, verifiable, and resource-efficient automated driving.
Batteries with silicon-graphite-based anodes, which offer higher energy density and improved charging performance, introduce pronounced voltage hysteresis, making state-of-charge (SoC) estimation particularly challenging. Existing approaches to modeling hysteresis rely on exhaustive high-fidelity tests or focus on conventional graphite-based lithium-ion batteries, without considering uncertainty quantification or computational constraints. This work introduces a data-driven approach for probabilistic hysteresis factor prediction, with a particular emphasis on applications involving silicon-graphite anode-based batteries. A data harmonization framework is proposed to standardize heterogeneous driving cycles across varying operating conditions. Statistical learning and deep learning models are applied to assess performance in predicting the hysteresis factor with uncertainties while considering computational efficiency. Extensive experiments are conducted to evaluate the generalizability of the optimal model configuration in unseen vehicle models through retraining, zero-shot prediction, fine-tuning, and joint training. By addressing key challenges in SoC estimation, this research facilitates the adoption of advanced battery technologies. A summary page is available at: this https URL
Radio interferometry enables high-resolution imaging of astronomical radio sources by synthesizing a large effective aperture from an array of antennas and solving a deconvolution problem to reconstruct the image. Deep learning has emerged as a promising solution to the imaging problem, reducing computational costs and enabling super-resolution. However, existing DL-based methods often fall short of the requirements for real-world deployment due to limitations in handling high dynamic range, large field of view, and mismatches between training and test conditions. In this work, we build upon and extend the POLISH framework, a recent DL model for radio interferometric imaging. We introduce key improvements to enable robust reconstruction and super-resolution under real-world conditions: (1) a patch-wise training and stitching strategy for scaling to wide-field imaging and (2) a nonlinear arcsinh-based intensity transformation to manage high dynamic range. We conduct comprehensive evaluations using the T-RECS simulation suite with realistic sky models and point spread functions (PSFs), and demonstrate that our approach significantly improves reconstruction quality and robustness. We test the model on realistic simulated strong gravitational lenses and show that lens systems with Einstein radii near the PSF scale can be recovered after deconvolution with our POLISH model, potentially yielding 10$\times$ more galaxy-galaxy lensing systems from the Deep Synoptic Array (DSA) survey than with image-plane CLEAN. Our results highlight the potential of DL models as practical, scalable tools for next-generation radio astronomy.
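The arcsinh intensity transformation mentioned above can be sketched as follows (a minimal illustration: the scale parameter `alpha`, its value, and the function names are assumptions, not details taken from the paper):

```python
import numpy as np

def arcsinh_compress(x, alpha=1.0):
    """Dynamic-range compression: ~linear near zero, ~logarithmic for large |x|."""
    return np.arcsinh(alpha * x) / alpha

def arcsinh_expand(y, alpha=1.0):
    """Exact inverse, recovering the original intensity scale."""
    return np.sinh(alpha * y) / alpha

sky = np.array([0.0, 1e-3, 1.0, 1e4])        # intensities spanning ~7 decades
compressed = arcsinh_compress(sky, alpha=10.0)
restored = arcsinh_expand(compressed, alpha=10.0)
```

Because arcsinh is smooth and invertible everywhere (unlike a log transform, which breaks at zero), the network can be trained on the compressed images and predictions mapped back exactly.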
Interleaved spoken language models (SLMs) alternately generate text and speech tokens, but decoding at full transformer depth for every step becomes costly, especially due to long speech sequences. We propose SPAR-K, a modality-aware early exit framework designed to accelerate interleaved SLM inference while preserving perceptual quality. SPAR-K introduces a speech alternating-depth schedule: most speech positions exit at a fixed intermediate layer, while periodic full-depth "refresh" steps mitigate distribution shift due to early exit. We evaluate our framework using Step-Audio-2-mini and GLM-4-Voice across four datasets spanning reasoning, factual QA, and dialogue tasks, measuring performance in terms of ASR transcription accuracy and perceptual quality. Experimental results demonstrate that SPAR-K largely preserves question-answering accuracy with a maximum accuracy drop of 0.82\% while reducing average speech decoding depth by up to 11\% on Step-Audio-2-mini and 5\% on GLM-4-Voice, both with negligible changes in MOS and WER and no auxiliary computation overhead. We further demonstrate that confidence-based early exit strategies, widely used in text LLMs, are suboptimal for SLMs, highlighting that the unique statistical nature of speech tokens necessitates a specialized early exit design.
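The speech alternating-depth schedule described above might look like the following sketch (the layer counts, refresh period, and function name are illustrative assumptions, not values from the paper):

```python
def exit_depth(pos, is_speech, full_depth=28, early_depth=14, refresh_every=8):
    """Alternating-depth schedule: text tokens always run the full stack;
    speech tokens exit at an intermediate layer, except for periodic
    full-depth "refresh" steps that counteract early-exit distribution shift."""
    if not is_speech or pos % refresh_every == 0:
        return full_depth
    return early_depth

# depths used for a run of speech positions 0..9
depths = [exit_depth(p, is_speech=True) for p in range(10)]
```

Averaged over a long speech span, the effective decoding depth approaches `early_depth` plus a small refresh overhead, which is where the reported depth reduction comes from.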
While Contrastive Decoding (CD) has proven effective at enhancing Large Audio Language Models (LALMs), the underlying mechanisms driving its success and the comparative efficacy of different strategies remain unclear. This study systematically evaluates four distinct CD strategies across diverse LALM architectures. We identify Audio-Aware Decoding and Audio Contrastive Decoding as the most effective methods. However, their impact varies significantly by model. To explain this variability, we introduce a Transition Matrix framework to map error pattern shifts during inference. Our analysis demonstrates that CD reliably rectifies errors in which models falsely claim an absence of audio or resort to uncertainty-driven guessing. Conversely, it fails to correct flawed reasoning or confident misassertions. Ultimately, these findings provide a clear guideline for determining which LALM architectures are most suitable for CD enhancement based on their baseline error profiles.
Engine sounds originate from sequential exhaust pressure pulses rather than sustained harmonic oscillations. While neural synthesis methods typically aim to approximate the resulting spectral characteristics, we propose directly modeling the underlying pulse shapes and temporal structure. We present the Pulse-Train-Resonator (PTR) model, a differentiable synthesis architecture that generates engine audio as parameterized pulse trains aligned to engine firing patterns and propagates them through recursive Karplus-Strong resonators simulating exhaust acoustics. The architecture integrates physics-informed inductive biases including harmonic decay, thermodynamic pitch modulation, valve-dynamics envelopes, exhaust system resonances and derived engine operating modes such as throttle operation and deceleration fuel cutoff (DCFO). Validated on three diverse engine types totaling 7.5 hours of audio, PTR achieves a 21% improvement in harmonic reconstruction and a 5.7% reduction in total loss over a harmonic-plus-noise baseline model, while providing interpretable parameters corresponding to physical phenomena. Complete code, model weights, and audio examples are openly available.
Power company operators make power generation plans one day in advance, in what is known as the Unit Commitment (UC) problem. UC is exposed to uncertainties, such as unknown electricity load and disturbances caused by renewable energy sources, especially PVs. In previous research, we proposed the Renewable Energy Robust Optimization Problem (RE-RP), which handles these uncertainties by incorporating output suppression. In this paper, we propose a new model called RE-RP with fairness (RE-RPfair), which aims to achieve fair allocation of suppression among PVs. This model is an expansion of the original RE-RP, and we demonstrate its effectiveness through simulation. To measure the degree of fairness, we use the Gini Index, which is well known in social science.
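The Gini Index used as the fairness measure can be computed as follows (a standard closed-form formulation; the allocation vectors shown are illustrative, not the paper's data):

```python
import numpy as np

def gini_index(x):
    """Gini coefficient of a non-negative allocation vector:
    0 = perfectly equal, values near 1 = highly unequal."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    if x.sum() == 0.0:
        return 0.0
    i = np.arange(1, n + 1)
    # standard closed form on sorted data
    return 2.0 * np.sum(i * x) / (n * x.sum()) - (n + 1.0) / n

gini_index([1.0, 1.0, 1.0, 1.0])   # 0.0: suppression shared equally
gini_index([0.0, 0.0, 0.0, 4.0])   # 0.75: one PV bears all suppression
```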
Background: Pleuroparenchymal fibroelastosis (PPFE) is an upper lobe predominant fibrotic lung abnormality associated with increased mortality in established interstitial lung disease. However, the clinical significance of radiologic PPFE progression in lung cancer screening populations remains unclear. We investigated whether longitudinal change in PPFE quantified on low dose CT independently associates with mortality and respiratory morbidity. Methods: We analysed longitudinal low-dose CT scans and clinical data from two lung cancer screening studies: the National Lung Screening Trial (NLST; n=7980) and the SUMMIT study (n=8561). An automated algorithm quantified PPFE volume on baseline and follow up scans. Annualised change in PPFE (dPPFE) was derived and dichotomised using a distribution based threshold to define progressive PPFE. Associations between dPPFE and mortality were evaluated using Cox proportional hazards models adjusted for demographic and clinical variables. In the SUMMIT cohort, dPPFE was also examined in relation to clinical outcomes. Findings: dPPFE independently associated with mortality in both cohorts (NLST: HR 1.25, 95% CI 1.01-1.56, p=0.042; SUMMIT: HR 3.14, 95% CI 1.66-5.97, p<0.001). Kaplan-Meier curves showed reduced survival among participants with progressive PPFE in both cohorts. In SUMMIT, dPPFE was associated with higher respiratory admissions (IRR 2.79, p<0.001), increased antibiotic and steroid use (IRR 1.55, p=0.010), and a trend towards higher mMRC scores (OR 1.40, p=0.055). Interpretation: Radiologic PPFE progression independently associates with mortality across two large lung cancer screening cohorts and with adverse clinical outcomes. Quantitative assessment of PPFE progression may provide a clinically relevant imaging biomarker for identifying individuals at increased respiratory risk within screening programmes.
We establish the randomized distributed function computation (RDFC) framework, in which a sender transmits just enough information for a receiver to generate a randomized function of the input data. Describing RDFC as a form of semantic communication, which can be essentially seen as a generalized remote-source-coding problem, we show that security and privacy constraints naturally fit this model, as they generally require a randomization step. Using strong coordination metrics, we ensure (local differential) privacy for every input sequence and prove that such guarantees can be met even when no common randomness is shared between the transmitter and receiver. This work provides lower bounds on Wyner's common information (WCI), which is the communication cost when common randomness is absent, and proposes numerical techniques to evaluate the other corner point of the RDFC rate region for continuous-alphabet random variables with unlimited shared randomness. Experiments illustrate that a sufficient amount of common randomness can reduce the semantic communication rate by up to two orders of magnitude compared to the WCI point, while RDFC without any shared randomness still outperforms lossless transmission by a large margin. A finite blocklength analysis further confirms that the privacy parameter gap between the asymptotic and non-asymptotic RDFC methods closes exponentially fast with input length. Our results position RDFC as an energy-efficient semantic communication strategy for privacy-aware distributed computation systems.
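As a concrete illustration of a randomized function carrying a local-differential-privacy guarantee, classical randomized response can be sketched as below (an illustrative textbook mechanism only, not the RDFC coding scheme of the paper):

```python
import numpy as np

def randomized_response(bit, eps, rng):
    """Classical eps-LDP mechanism: report the true bit w.p. e^eps/(e^eps+1),
    the flipped bit otherwise."""
    p = np.exp(eps) / (np.exp(eps) + 1.0)
    return bit if rng.random() < p else 1 - bit

def debias(mean_report, eps):
    """Unbiased estimate of the true mean from the noisy reports."""
    p = np.exp(eps) / (np.exp(eps) + 1.0)
    return (mean_report - (1.0 - p)) / (2.0 * p - 1.0)

rng = np.random.default_rng(0)
bits = (rng.random(20000) < 0.3).astype(int)          # true mean ~ 0.3
reports = [randomized_response(b, 1.0, rng) for b in bits]
est = debias(np.mean(reports), 1.0)                   # recovers ~ 0.3
```

The receiver needs fresh randomness to apply the flip; the RDFC question is how much of that randomness must be communicated versus shared in advance.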
Benchmarking presence-only passive reconnaissance in smart-grid communications is challenging because the adversary is receive-only, yet nearby observers can still alter propagation through additional shadowing and multipath that reshapes channel coherence. Public smart-grid cybersecurity datasets largely target active protocol- or measurement-layer attacks and rarely provide propagation-driven observables with tiered topology context, which limits reproducible evaluation under strictly passive threat models. This paper introduces an IEEE-inspired, literature-anchored benchmark dataset generator for passive reconnaissance over a tiered Home Area Network (HAN), Neighborhood Area Network (NAN), and Wide Area Network (WAN) communication graph with heterogeneous wireless and wireline links. Node-level time series are produced through a physically consistent channel-to-metrics mapping where channel state information (CSI) is represented via measurement-realistic amplitude and phase proxies that drive inferred signal-to-noise ratio (SNR), packet error behavior, and delay dynamics. Passive attacks are modeled only as windowed excess attenuation and coherence degradation with increased channel innovation, so reliability and latency deviations emerge through the same causal mapping without labels or feature shortcuts. The release provides split-independent realizations with burn-in removal, strictly causal temporal descriptors, adjacency-weighted neighbor aggregates and deviation features, and federated-ready per-node train, validation, and test partitions with train-only normalization metadata. Baseline federated experiments highlight technology-dependent detectability and enable standardized benchmarking of graph-temporal and federated detectors for passive reconnaissance.
Brains remain unrivaled in their ability to recognize and generate complex spatiotemporal patterns. While AI is able to reproduce some of these capabilities, deep learning algorithms remain largely at odds with our current understanding of brain circuitry and dynamics. This is prominently the case for backpropagation through time (BPTT), the go-to algorithm for learning complex temporal dependencies. In this work we propose a general formalism to approximate BPTT in a controlled, biologically plausible manner. Our approach builds on, unifies and extends several previous approaches to local, time-continuous, phase-free spatiotemporal credit assignment based on principles of energy conservation and extremal action. Our starting point is a prospective energy function of neuronal states, from which we calculate real-time error dynamics for time-continuous neuronal networks. In the general case, this provides a simple and straightforward derivation of the adjoint method result for neuronal networks, the time-continuous equivalent to BPTT. With a few modifications, we can turn this into a fully local (in space and time) set of equations for neuron and synapse dynamics. Our theory provides a rigorous framework for spatiotemporal deep learning in the brain, while simultaneously suggesting a blueprint for physical circuits capable of carrying out these computations. These results reframe and extend the recently proposed Generalized Latent Equilibrium (GLE) model.
Maintaining background consistency while enhancing foreground quality remains a core challenge in video editing. Injecting full-image information often leads to background artifacts, whereas rigid background locking severely constrains the model's capacity for foreground generation. To address this issue, we propose KV-Lock, a training-free framework tailored for DiT-based video diffusion models. Our core insight is that the hallucination metric (variance of denoising prediction) directly quantifies generation diversity, which is inherently linked to the classifier-free guidance (CFG) scale. Building upon this, KV-Lock leverages diffusion hallucination detection to dynamically schedule two key components: the fusion ratio between cached background key-values (KVs) and newly generated KVs, and the CFG scale. When hallucination risk is detected, KV-Lock strengthens background KV locking and simultaneously amplifies conditional guidance for foreground generation, thereby mitigating artifacts and improving generation fidelity. As a training-free, plug-and-play module, KV-Lock can be easily integrated into any pre-trained DiT-based models. Extensive experiments validate that our method outperforms existing approaches in improved foreground quality with high background fidelity across various video editing tasks.
While multi-audio understanding is critical for large audio-language models (LALMs), it remains underexplored. We introduce MUGEN, a comprehensive benchmark evaluating this capability across speech, general audio, and music. Our experiments reveal consistent weaknesses in multi-audio settings, and performance degrades sharply as the number of concurrent audio inputs increases, identifying input scaling as a fundamental bottleneck. We further investigate training-free strategies and observe that Audio-Permutational Self-Consistency, which diversifies the order of audio candidates, helps models form more robust aggregated predictions, yielding accuracy gains of up to 6.28%. Combining this permutation strategy with Chain-of-Thought raises the gain to 6.74%. These results expose blind spots in current LALMs and provide a foundation for evaluating complex auditory comprehension.
Animal brains exhibit remarkable efficiency in perception and action, while being robust to both external and internal perturbations. The means by which brains accomplish this remains, for now, poorly understood, hindering our understanding of animal and human cognition, as well as our own implementation of efficient algorithms for control of dynamical systems. A potential candidate for a robust mechanism of state estimation and action computation is the free energy principle, but existing implementations of this principle have largely relied on conventional, biologically implausible approaches without spikes. We propose a novel, efficient, and robust spiking control framework with realistic biological characteristics. The resulting networks function as free energy constrainers, in which neurons only fire if they reduce the free energy of their internal representation. The networks offer efficient operation through highly sparse activity while matching performance with other similar spiking frameworks, and have high resilience against both external (e.g. sensory noise or collisions) and internal perturbations (e.g. synaptic noise and delays or neuron silencing) that such a network would be faced with when deployed by either an organism or an engineer. Overall, our work provides a novel mathematical account for spiking control through constraining free energy, providing both better insight into how brain networks might leverage their spiking substrate and a new route for implementing efficient control algorithms in neuromorphic hardware.
Semantic occupancy prediction enables dense 3D geometric and semantic understanding for autonomous driving. However, existing camera-based approaches implicitly assume complete surround-view observations, an assumption that rarely holds in real-world deployment due to occlusion, hardware malfunction, or communication failures. We study semantic occupancy prediction under incomplete multi-camera inputs and introduce $M^2$-Occ, a framework designed to preserve geometric structure and semantic coherence when views are missing. $M^2$-Occ addresses two complementary challenges. First, a Multi-view Masked Reconstruction (MMR) module leverages the spatial overlap among neighboring cameras to recover missing-view representations directly in the feature space. Second, a Feature Memory Module (FMM) introduces a learnable memory bank that stores class-level semantic prototypes. By retrieving and integrating these global priors, the FMM refines ambiguous voxel features, ensuring semantic consistency even when observational evidence is incomplete. We introduce a systematic missing-view evaluation protocol on the nuScenes-based SurroundOcc benchmark, encompassing both deterministic single-view failures and stochastic multi-view dropout scenarios. Under the safety-critical missing back-view setting, $M^2$-Occ improves the IoU by 4.93%. As the number of missing cameras increases, the robustness gap further widens; for instance, under the setting with five missing views, our method boosts the IoU by 5.01%. These gains are achieved without compromising full-view performance. The source code will be publicly released at this https URL.
Global perception is essential for embodied agents in 360° spaces, yet current affordance grounding remains largely object-centric and restricted to perspective views. To bridge this gap, we introduce a novel task: Holistic Affordance Grounding in 360° Indoor Environments. This task faces unique challenges, including severe geometric distortions from Equirectangular Projection (ERP), semantic dispersion, and cross-scale alignment difficulties. We propose PanoAffordanceNet, an end-to-end framework featuring a Distortion-Aware Spectral Modulator (DASM) for latitude-dependent calibration and an Omni-Spherical Densification Head (OSDH) to restore topological continuity from sparse activations. By integrating multi-level constraints comprising pixel-wise, distributional, and region-text contrastive objectives, our framework effectively suppresses semantic drift under low supervision. Furthermore, we construct 360-AGD, the first high-quality panoramic affordance grounding dataset. Extensive experiments demonstrate that PanoAffordanceNet significantly outperforms existing methods, establishing a solid baseline for scene-level perception in embodied intelligence. The source code and benchmark dataset will be made publicly available at this https URL.
Accurate relative positioning is crucial for swarm aerial robotics, enabling coordinated flight and collision avoidance. Although vision-based tracking has been extensively studied, 3D LiDAR-based methods remain underutilized despite their robustness under varying lighting conditions. Existing systems often rely on bulky, power-intensive sensors, making them impractical for small UAVs with strict payload and energy constraints. This paper presents a lightweight LiDAR-based UAV tracking system incorporating an Adaptive Extended Kalman Filter (AEKF) framework. Our approach effectively addresses the challenges posed by sparse, noisy, and nonuniform point cloud data generated by non-repetitive scanning 3D LiDARs, ensuring reliable tracking while remaining suitable for small drones with strict payload constraints. Unlike conventional filtering techniques, the proposed method dynamically adjusts the noise covariance matrices using innovation and residual statistics, thereby enhancing tracking accuracy under real-world conditions. Additionally, a recovery mechanism ensures continuity of tracking during temporary detection failures caused by scattered LiDAR returns or occlusions. Experimental validation was performed using a Livox Mid-360 LiDAR mounted on a DJI F550 UAV in real-world flight scenarios. The proposed method demonstrated robust UAV tracking performance under sparse LiDAR returns and intermittent detections, consistently outperforming both standard Kalman filtering and particle filtering approaches during aggressive maneuvers. These results confirm that the framework enables reliable relative positioning in GPS-denied environments without the need for multi-sensor arrays or external infrastructure.
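The innovation-based covariance adaptation described above is commonly implemented in the following form (a sketch of one standard windowed AEKF variant in the style of Mohamed and Schwarz; the paper's exact update rule, window length, and matrix shapes are assumptions here):

```python
import numpy as np
from collections import deque

class AdaptiveNoise:
    """Windowed innovation-based adaptation of EKF noise covariances."""
    def __init__(self, window=30):
        self.innovations = deque(maxlen=window)   # sliding window of d d^T

    def update(self, d, H, P, K):
        # d: innovation z - h(x_pred);  H: measurement Jacobian
        # P: predicted state covariance; K: Kalman gain
        self.innovations.append(np.outer(d, d))
        C = np.mean(np.stack(list(self.innovations)), axis=0)  # sample innovation cov.
        R = C - H @ P @ H.T          # adapted measurement-noise covariance
        Q = K @ C @ K.T              # adapted process-noise covariance
        return Q, R
```

After each measurement update, the returned `Q` and `R` replace the static covariances on the next filter cycle, so the filter tightens or loosens its trust in the sparse LiDAR detections as their actual scatter changes.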
This study presents a comprehensive experimental assessment of a low-cost frequency-modulated continuous-wave (FMCW) multiple-input multiple-output (MIMO) radar for non-contact vital sign monitoring, focusing on respiratory rate (RR) and heart rate (HR) estimation. The influence of sensing distance and number of transmitted chirps on measurement accuracy is systematically quantified. Results exhibit a U-shaped error profile with optimal performance near $70~cm$, achieving mean absolute errors of $0.8~bpm$ for RR and $3.2~bpm$ for HR. Accuracy deteriorates at short ($<60~cm$) and long ($>100~cm$) distances due to multipath, near-field, and signal-to-noise effects. Increasing chirp count enhances performance: RR errors converge asymptotically for $\geq96$ chirps, while HR requires at least 96 chirps for stable detection. Variability metrics, including heart and respiratory rate variability, remain less accurate ($>15$--$30\%$ error), indicating limited capability in capturing instantaneous fluctuations. These findings define a fundamental trade-off: the radar ensures robust estimation of average RR and HR but exhibits restricted precision in high-resolution beat-to-beat and breath-to-breath monitoring.
Terahertz (THz) radiation provides a non-ionizing, highly sensitive probe of the dielectric properties of biological tissues. In this study, we present a comprehensive experimental characterization of dielectric properties using pork skin tissue, a widely used surrogate for human tissue, as a biological sample. Measurements are conducted employing THz time-domain spectroscopy in the 0.1-11 THz frequency range with photoconductive antennas for both signal generation and detection. Frequency-dependent refractive indices, absorption, and complex permittivity are extracted from transmitted time-domain signals. Our results confirm strong absorption and low transmittance at low THz frequencies due to water content, while highlighting frequency-dependent dispersion and narrowband transmission features at higher frequencies. This work provides one of the first extended-frequency datasets of biological tissue dielectric properties, supporting realistic channel modeling for the design and development of intra-body nanosensor networks in the THz band.
The surge in wireless connectivity demand, coupled with the finite nature of spectrum resources, compels the development of efficient spectrum management approaches. Spectrum sharing presents a promising avenue, although it demands precise characterization of spectrum demand for informed policy-making. This paper introduces HR-GAT, a hierarchical resolution graph attention network model, designed to predict spectrum demand using geospatial data. HR-GAT adeptly handles complex spatial demand patterns and resolves issues of spatial autocorrelation that usually challenge standard machine learning models, often resulting in poor generalization. Tested across five major Canadian cities, HR-GAT improves predictive accuracy of spectrum demand by 21% over eight baseline models, underscoring its superior performance and reliability.
Existing aerial-robotics benchmarks target vehicles from hundreds of grams to several kilograms and typically expose only high-level state data. They omit the actuator-level signals required to study nano-scale quadrotors, where low-Reynolds number aerodynamics, coreless DC motor nonlinearities, and severe computational constraints invalidate models and controllers developed for larger vehicles. We introduce NanoBench, an open-source multi-task benchmark collected on the commercially available Crazyflie 2.1 nano-quadrotor (takeoff weight 27 g) in a Vicon motion capture arena. The dataset contains over 170 flight trajectories spanning hover, multi-frequency excitation, standard tracking, and aggressive maneuvers across multiple speed regimes. Each trajectory provides synchronized Vicon ground truth, raw IMU data, onboard extended Kalman filter estimates, PID controller internals, and motor PWM commands at 100 Hz, alongside battery telemetry at 10 Hz, aligned with sub-0.5 ms consistency. NanoBench defines standardized evaluation protocols, train/test splits, and open-source baselines for three tasks: nonlinear system identification, closed-loop controller benchmarking, and onboard state estimation assessment. To our knowledge, it is the first public dataset to jointly provide actuator commands, controller internals, and estimator outputs with millimeter-accurate ground truth on a commercially available nano-scale aerial platform.
A central question in modern deep learning is how to design optimizers whose behavior remains stable as the network width $w$ increases. We address this question by interpreting several widely used neural-network optimizers, including \textrm{AdamW} and \textrm{Muon}, as instances of steepest descent under matrix operator norms. This perspective links optimizer geometry with the Lipschitz structure of the network forward map, and enables width-independent control of both Lipschitz and smoothness constants. However, steepest-descent rules induced by standard $p \to q$ operator norms lack layerwise composability and therefore cannot provide width-independent bounds in deep architectures. We overcome this limitation by introducing a family of mean-normalized operator norms, denoted $\pmean \to \qmean$, that admit layerwise composability, yield width-independent smoothness bounds, and give rise to practical optimizers such as \emph{rescaled} \textrm{AdamW}, row normalization, and column normalization. The resulting width-aware learning-rate scaling rules recover $\mu$P scaling~\cite{yang2021tensor} as a special case and provide a principled mechanism for cross-width learning-rate transfer across a broad class of optimizers. We further show that \textrm{Muon} can suffer an $\mathcal{O}(\sqrt{w})$ worst-case growth in the smoothness constant, whereas a new family of row-normalized optimizers we propose achieves width-independent smoothness guarantees. Based on these observations, we propose MOGA (Matrix Operator Geometry Aware), a width-aware optimizer based only on row/column-wise normalization that enables stable learning-rate transfer across model widths. Large-scale pre-training on GPT-2 and LLaMA shows that MOGA, especially with row normalization, is competitive with Muon while being notably faster in large-token and low-loss regimes.
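A member of the row-normalization family described above can be sketched as follows (illustrative only; MOGA's actual rule presumably includes momentum, weight decay, and other details not shown):

```python
import numpy as np

def row_normalized_step(W, G, lr=1e-2, eps=1e-8):
    """One step of a row-normalized update: each gradient row is rescaled to
    unit Euclidean norm before being applied, so the per-row step size is lr
    regardless of the raw gradient scale (and hence of fan-in / width)."""
    row_norms = np.linalg.norm(G, axis=1, keepdims=True)
    return W - lr * G / (row_norms + eps)
```

Because each output row moves by exactly `lr` in Euclidean norm, the same learning rate produces comparable per-neuron updates at any width, which is the intuition behind cross-width learning-rate transfer.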
``Einstein from noise" (EfN) is a prominent example of the model bias phenomenon: systematic errors in the statistical model that lead to spurious but consistent estimates. In the EfN experiment, one falsely believes that a set of observations contains noisy, shifted copies of a template signal (e.g., an Einstein image), whereas in reality, it contains only pure noise observations. To estimate the signal, the observations are first aligned with the template using cross-correlation, and then averaged. Although the observations contain nothing but noise, it was recognized early on that this process produces a signal that resembles the template signal! This pitfall was at the heart of a central scientific controversy about validation techniques in structural biology. This paper provides a comprehensive statistical analysis of the EfN phenomenon above. We show that the Fourier phases of the EfN estimator (namely, the average of the aligned noise observations) converge to the Fourier phases of the template signal, explaining the observed structural similarity. Additionally, we prove that the convergence rate is inversely proportional to the number of noise observations and, in the high-dimensional regime, to the Fourier magnitudes of the template signal. Moreover, in the high-dimensional regime, the Fourier magnitudes converge to a scaled version of the template signal's Fourier magnitudes. This work not only deepens the theoretical understanding of the EfN phenomenon but also highlights potential pitfalls in template matching techniques and emphasizes the need for careful interpretation of noisy observations across disciplines in engineering, statistics, physics, and biology.
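The EfN experiment described above is easy to reproduce numerically: align pure-noise observations to a template via circular cross-correlation, then average (the template, sizes, and seed below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 64, 2000
k = np.arange(n)
t = np.sin(2 * np.pi * k / n) + 0.5 * np.cos(6 * np.pi * k / n)  # "template"

T = np.fft.fft(t)
aligned_sum = np.zeros(n)
for _ in range(N):
    x = rng.standard_normal(n)              # pure noise: contains no signal
    # circular cross-correlation with the template, computed via FFT
    xc = np.fft.ifft(T * np.conj(np.fft.fft(x))).real
    aligned_sum += np.roll(x, np.argmax(xc))  # shift to the best match
estimate = aligned_sum / N

# despite averaging only noise, the estimate correlates with the template
corr = np.corrcoef(estimate, t)[0, 1]
```

The alignment step selects, for each noise draw, the shift most correlated with the template, so the template's phase structure leaks into the average exactly as the analysis predicts.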
From autonomous driving to package delivery, ensuring safe yet efficient multi-agent interaction is challenging as the interaction dynamics are influenced by hard-to-model factors such as social norms and contextual cues. Understanding these influences can aid in the design and evaluation of socially-aware autonomous agents whose behaviors are aligned with human values. In this work, we seek to codify factors governing safe multi-agent interactions via the lens of responsibility, i.e., an agent's willingness to deviate from their desired control to accommodate safe interaction with others. Specifically, we propose a data-driven modeling approach based on control barrier functions and differentiable optimization that efficiently learns agents' responsibility allocation from data. We demonstrate on synthetic and real-world datasets that we can obtain an interpretable and quantitative understanding of how much agents adjust their behavior to ensure the safety of others given their current environment.
The non-stationary nature of electroencephalography (EEG) introduces distribution shifts across domains (e.g., days and subjects), posing a significant challenge to EEG-based neurotechnology generalization. Without labeled calibration data for target domains, the problem is a source-free unsupervised domain adaptation (SFUDA) problem. For scenarios with constant label distribution, Riemannian geometry-aware statistical alignment frameworks on the symmetric positive definite (SPD) manifold are considered state-of-the-art. However, many practical scenarios, including EEG-based sleep staging, exhibit label shifts. Here, we propose a geometric deep learning framework for SFUDA problems under specific distribution shifts, including label shifts. We introduce a novel, realistic generative model and show that prior Riemannian statistical alignment methods on the SPD manifold can compensate for specific marginal and conditional distribution shifts but hurt generalization under label shifts. As a remedy, we propose a parameter-efficient manifold optimization strategy termed SPDIM. SPDIM uses the information maximization principle to learn a single SPD-manifold-constrained parameter per target domain. In simulations, we demonstrate that SPDIM can compensate for the shifts under our generative model. Moreover, using public EEG-based brain-computer interface and sleep staging datasets, we show that SPDIM outperforms prior approaches.
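A standard building block behind such Riemannian statistical alignment is recentering each domain's covariance matrices so that their Fréchet mean becomes the identity. A minimal log-Euclidean sketch (illustrative only; SPDIM itself learns a manifold-constrained parameter rather than applying this fixed recentering):

```python
import numpy as np

def spd_logm(S):
    # matrix logarithm of an SPD matrix via eigendecomposition
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def spd_expm(S):
    # matrix exponential of a symmetric matrix
    w, V = np.linalg.eigh(S)
    return (V * np.exp(w)) @ V.T

def recenter(covs):
    # Subtract the domain's log-mean in tangent space, so the recentered
    # set has log-Euclidean Frechet mean exactly equal to the identity.
    logs = [spd_logm(C) for C in covs]
    Lbar = np.mean(logs, axis=0)
    return [spd_expm(L - Lbar) for L in logs]

rng = np.random.default_rng(0)
covs = []
for _ in range(10):
    A = rng.standard_normal((4, 4))
    covs.append(A @ A.T + 0.5 * np.eye(4))  # random SPD "EEG covariances"
recentered = recenter(covs)
mean_after = spd_expm(np.mean([spd_logm(C) for C in recentered], axis=0))
```

Under label shifts, the domain mean confounds class proportions with domain effects, which is exactly where such fixed recentering can hurt and a learned per-domain correction becomes preferable.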
Real-time visual feedback is essential for tetherless control of remotely operated vehicles, particularly during inspection and manipulation tasks. Though acoustic communication is the preferred choice for medium-range communication underwater, its limited bandwidth renders it impractical to transmit images or videos in real-time. To address this, we propose a model-based image compression technique that leverages prior mission information. Our approach employs trained machine-learning-based novel-view-synthesis models and uses gradient-descent optimization to refine latent representations, generating compressible differences between camera images and rendered images. We evaluate the proposed compression technique using a dataset from an artificial ocean basin, demonstrating superior compression ratios and image quality over existing techniques. Moreover, our method exhibits robustness to the introduction of new objects within the scene, highlighting its potential for advancing tetherless remotely operated vehicle operations.
Many traditional robust control approaches assume linearity of the system and independence between the system state-input and the parameters of its approximant (possibly lower-order) model. This assumption implies that the application of robust control design to the underlying system introduces no distributional shifts in the parameters of its approximant model. This is generally not true when the underlying system is nonlinear, which may require different approximant models with different parameter distributions when operated at different regions of the state-input space. Therefore, a robust controller has to be robust under the approximant model with parameter distribution that will be experienced in the future data, after applying this control, not the parameter distribution seen in the learning data or assumed in the design. In this paper, we seek a solution to this problem by restricting the newly designed closed-loop system to be consistent with the learning data and slowing down any distributional shifts in the state-input space of the underlying system, and therefore, in the parameter space of its approximant model. In computational terms, the objective of dampening the shifts in the parameter distribution is formulated as a convex semi-definite program that can be solved efficiently by standard software packages. We evaluate the proposed approach on a simple yet telling gain-scheduling problem, which can be equivalently posed as a robust control problem.
Despite the transmission efficiency gains of semantic communication (SemCom) over traditional methods, most existing SemCom schemes still operate at a fixed transmission rate regardless of channel conditions and transmitted content, resulting in wasted resources in favorable channels and degraded performance in harsh channels. To address this issue, we propose a novel SemCom framework that incorporates an entropy-and-channel-aware adaptive rate control mechanism over MIMO Rayleigh fading channels. Specifically, we embed a joint representation of the channel state information (CSI) and the signal-to-noise ratio (SNR) into both the semantic encoder and decoder, thereby realizing channel-aware semantic coding and decoding. Moreover, the proposed method jointly exploits the CSI, the SNR, the feature maps, and their 2D entropy via two policy networks to selectively transmit only a subset of feature maps and, within each selected feature map, only a subset of symbols. Thereby, it achieves finer-grained adaptive rate control than existing methods. At the receiver, leveraging the strong visual understanding capability of multimodal large language models (MLLMs), we deploy the lightweight visual encoder (InternViT-300M) of the pre-trained InternVL3.5 model to compensate for discarded feature maps and symbols, and we fine-tune InternViT using low-rank adaptation (LoRA) for parameter-efficient training. Experimental results show that, with a carefully designed channel-aware loss function, our system automatically allocates more communication resources under poor channels to enhance task performance while reducing resource usage under favorable channels and maintaining high task performance.
As dynamical systems equipped with neural network controllers (neural feedback systems) become increasingly prevalent, it is critical to develop methods to ensure their safe operation. Verifying safety requires extending control theoretic analysis methods to these systems. Although existing techniques can efficiently handle linear neural feedback systems, relatively few scalable methods address the nonlinear case. We propose a novel algorithm for forward reachability analysis of nonlinear neural feedback systems. The approach leverages the structure of the nonlinear transition functions of the systems to compute tight polyhedral enclosures (i.e., abstractions). These enclosures, combined with the neural controller, are then encoded as a mixed-integer linear program (MILP). Optimizing this MILP yields a sound over-approximation of the forward-reachable set. We evaluate our algorithm on representative benchmarks and demonstrate an order of magnitude improvement over the current state of the art.
In safety-critical control systems, ensuring both system safety and smooth control input is essential for practical deployment. Existing Control Barrier Function (CBF) frameworks, especially High-Order CBFs (HOCBFs), effectively enforce safety constraints, but also raise concerns about the smoothness of the resulting control inputs. While smoothness typically refers to continuity and differentiability, it does not by itself ensure bounded input variation. In contrast, Lipschitz continuity is a stronger form of continuity that not only is necessary for the theoretical guarantee of safety, but also bounds the rate of variation and eliminates abrupt changes in the control input. Such abrupt changes can degrade system performance or even violate actuator limitations, yet current CBF-based methods do not provide Lipschitz continuity guarantees. This paper introduces Filtered Control Barrier Functions (FCBFs), which extend HOCBFs by incorporating an auxiliary dynamic system, referred to as an input regularization filter, to produce Lipschitz continuous control inputs. The proposed framework ensures safety, control bounds, and Lipschitz continuity of the control inputs simultaneously by integrating FCBFs and HOCBFs within a unified quadratic program (QP). Theoretical guarantees are provided and simulations on a unicycle model demonstrate the effectiveness of the proposed method compared to standard and smoothness-penalized HOCBF approaches.
We propose a distributed model predictive control (MPC) framework for coordinating heterogeneous, nonlinear multi-agent systems under individual and coupling constraints. The cooperative task is encoded as a shared objective function minimized collectively by the agents. Each agent optimizes an artificial reference as an intermediate step towards the cooperative objective, along with a control input to track it. We establish recursive feasibility, asymptotic stability, and transient performance bounds under suitable assumptions. The solution to the cooperative task is not predetermined but emerges from the optimized interactions of the agents. We demonstrate the framework on numerical examples inspired by satellite constellation control, collision-free narrow-passage traversal, and coordinated quadrotor flight.
Given a pair of source and reference speech recordings, speech-to-speech (S2S) emotion style transfer involves the generation of an output speech that mimics the emotion characteristics of the reference while preserving the content and speaker attributes of the source. In this paper, we propose a speech-to-speech zero-shot emotion style transfer framework, termed S2S Zero-shot Emotion Style Transfer (S2S-ZEST), that enables the transfer of emotional attributes from the reference to the source while retaining the speaker identity and speech content. The S2S-ZEST framework consists of an analysis-synthesis pipeline in which the analysis module extracts semantic tokens, speaker representations, and emotion embeddings from speech. Using these representations, a pitch contour estimator and a duration predictor are learned. Further, a synthesis module is designed to generate speech based on the input representations and the derived factors. The analysis-synthesis pipeline is trained using an auto-encoding objective to enable efficient resynthesis during inference. For S2S emotion style transfer, the emotion embedding extracted from the reference speech along with the remaining representations from the source speech are used in the synthesis module to generate the style-transferred speech. In our experiments, we evaluate the converted speech on content and speaker preservation (with respect to the source) as well as on the effectiveness of the emotion style transfer (with respect to the reference). The proposed framework demonstrates improved emotion style transfer performance over prior methods in a textless and non-parallel setting. We also illustrate the application of the proposed work for data augmentation in emotion recognition tasks.
Accurate relative orbit determination is a significant challenge in modern space operations, particularly when relying only on angular measurements. The inherent observability limitations of this approach make initial state estimation difficult, directly impacting mission safety and performance. This work proposes a hybrid estimation and control strategy for autonomous rendezvous. An active learning (AL) based algorithm designs the initial input control sequence by maximizing the exploration of the output space, thereby enhancing the observability of the initial relative state for the angle-only initial relative orbit determination (IROD) problem. The IROD solution provides a batch estimate of the initial relative state and its analytical covariance, which quantifies the estimation quality and determines the transition point to recursive filtering. Once the uncertainty is sufficiently low, an Extended Kalman Filter (EKF) is initialized with the IROD solution and takes over for sequential estimation, providing state estimates to a Model Predictive Controller (MPC) to complete the rendezvous. The proposed framework is validated through numerical simulations, demonstrating its ability to reliably resolve the scale ambiguity, outperform baseline excitation strategies, and successfully execute an end-to-end rendezvous from initial estimation to final approach.
This paper focuses on distributed signal estimation in topology-unconstrained wireless acoustic sensor networks (WASNs) where sensor nodes only transmit fused versions of their local sensor signals. For this task, the topology-independent (TI) distributed adaptive node-specific signal estimation (DANSE) algorithm (TI-DANSE) has previously been proposed. It converges towards the centralized signal estimation solution in non-fully connected and time-varying network topologies. However, the applicability of TI-DANSE in real-world scenarios is limited due to its slow convergence. The latter results from the fact that, in TI-DANSE, nodes only have access to the in-network sum of all fused signals in the WASN. We address this low convergence speed by introducing an improved TI-DANSE algorithm, referred to as TI-DANSE+, in which updating nodes separately use the partial in-network sums of fused signals coming from each of their neighbors. Nodes can maximize the number of available degrees of freedom in their local optimization problem, leading to faster convergence. This is further exploited by combining TI-DANSE+ with a tree-pruning strategy that maximizes the number of neighbors at the updating node. In fully connected WASNs, TI-DANSE+ converges as fast as the original DANSE algorithm (the latter only defined for fully connected WASNs) while using peer-to-peer data transmission instead of broadcasting and thus saving communication bandwidth. If link failures occur, the convergence of TI-DANSE+ towards the centralized solution is preserved without any change in its formulation. Altogether, the proposed TI-DANSE+ algorithm can be viewed as an all-round alternative to DANSE and TI-DANSE which (i) merges the advantages of both, (ii) reconciles their differences into a single formulation, and (iii) shows advantages of its own in terms of communication bandwidth usage.
Dynamic games provide a fundamental framework for multi-agent decision-making over time, yet computing feedback Nash equilibria (FNEs) in infinite-horizon discrete-time linear-quadratic (LQ) settings remains computationally challenging. Motivated by the need for tractable and implementable strategies, this paper studies a finite-horizon strategy for approximating a certain infinite-horizon equilibrium. Specifically, at each stage, each player solves a T-stage game and implements only the first-stage control, thereby avoiding the direct solution of coupled infinite-horizon Riccati equations. We first analyze the finite-horizon game and characterize the structure of the associated coupled generalized discrete Riccati difference equations. Based on this analysis, we establish a sufficient condition for uniqueness of the FNE and propose an efficient algorithm that computes it via a sequence of linear equations. We then consider the infinite-horizon game in which players adopt the finite-horizon strategies with heterogeneous prediction horizons and show that, under suitable conditions, the total cost under the finite-horizon strategies converges to the cost under the limiting infinite-horizon FNE. Moreover, we derive an explicit upper bound on this cost gap in terms of the distance between the corresponding strategy matrices. These results provide theoretical justification and quantitative performance guarantees for finite-horizon strategies in infinite-horizon LQ dynamic games. A nonscalar numerical example illustrates the effectiveness of the proposed framework.
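The coupled multi-player Riccati equations are beyond a short sketch, but the single-player analogue of the T-stage receding-horizon idea is compact: solve a finite-horizon LQ problem by backward Riccati recursion and apply only the first-stage gain. A hedged illustration (our own toy, not the paper's multi-player algorithm):

```python
import numpy as np

def receding_horizon_gain(A, B, Q, R, T):
    # Backward Riccati difference recursion for a T-stage LQ problem;
    # the receding-horizon controller applies only the first-stage gain.
    P = Q.copy()
    K = None
    for _ in range(T):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Scalar example: A = B = Q = R = 1. The stationary (infinite-horizon)
# gain is (sqrt(5) - 1) / 2, which the finite-horizon gain approaches as
# the prediction horizon T grows.
A = np.array([[1.0]]); B = np.array([[1.0]])
Q = np.array([[1.0]]); R = np.array([[1.0]])
K = receding_horizon_gain(A, B, Q, R, T=60)
```

The rapid convergence of the finite-horizon gain to its infinite-horizon limit is what makes the cost gap between the two strategies small for moderate horizons.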
Contrastive language-audio pretraining (CLAP) is widely used for audio generation and recognition tasks. For example, CLAPScore, which utilizes the similarity of CLAP embeddings, has been a major metric for the evaluation of the relevance between audio and text in text-to-audio. However, the relationship between CLAPScore and human subjective evaluation scores remains unclear. We show that CLAPScore has a low correlation with human subjective evaluation scores. Additionally, we propose a human-perception-based CLAP, called Human-CLAP, obtained by training a contrastive language-audio model using subjective evaluation scores. In our experiments, Human-CLAP improves the Spearman's rank correlation coefficient (SRCC) between the predicted score and the subjective evaluation scores by more than 0.25 compared with the conventional CLAP.
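For concreteness, CLAPScore reduces to a cosine similarity between the audio and text embeddings, and SRCC is the Pearson correlation of the ranks. A minimal sketch (helper names are ours; not the paper's code):

```python
import numpy as np

def clap_score(audio_emb, text_emb):
    # CLAPScore: cosine similarity between the two CLAP embeddings
    a = audio_emb / np.linalg.norm(audio_emb)
    t = text_emb / np.linalg.norm(text_emb)
    return float(a @ t)

def srcc(x, y):
    # Spearman's rank correlation = Pearson correlation of the ranks
    # (ties broken arbitrarily; adequate for continuous scores)
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean(); ry -= ry.mean()
    return float(rx @ ry / (np.linalg.norm(rx) * np.linalg.norm(ry)))
```

SRCC compares only orderings, so it measures whether a higher model score reliably predicts a higher human rating, which is the property the paper's fine-tuning targets.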
We present a hardware-based validation of angular droop control for grid-forming DC/AC converters, a control strategy that establishes active power-to-angle droop. Angular droop control enables exact frequency regulation at steady state, thereby combining primary and secondary control into a single layer. We provide traceable analysis and suggest solutions to the main implementation challenges of angular droop control, specifically discretization and clock drift in hardware experiments. This is illustrated in two different scenarios. Experimental results from the single-converter-to-load scenario demonstrate black-start capability and power-to-angle droop behavior for two different implementation schemes. A multi-converter setup validates frequency synchronization and power-sharing properties, demonstrating the ancillary services that angular droop control provides in a real-world experimental setup.
We consider real-time remote tracking of a Markov source observed by multiple heterogeneous sensors with state-dependent sensing accuracy, motivated by distributed camera networks with overlapping coverage and spatial blind spots. Upon commands from a remote sink, sensors transmit their observations over error-prone channels. We aim to minimize the long-term average of a weighted sum of goal-aware distortion and transmission costs. The problem is formulated as a partially observable Markov decision process (POMDP) and cast into an equivalent belief-MDP. To address the intractability of the infinite and continuous belief space, we develop a truncation-based approximation that yields a finite-state MDP solved via the relative value iteration algorithm (RVIA). We further reformulate the original belief-MDP into a discounted version and solve it using the incremental pruning algorithm (IPA). Numerical results demonstrate that the performance of the RVIA-based policy improves with the truncation depth at the expense of computational effort, and both proposed methods outperform low-complexity baselines across a wide range of system parameters. The results also reveal a switching-type structure of the RVIA-based policy over the belief simplex and quantify the impact of key system parameters, highlighting the importance of accounting for state-dependent sensing.
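The RVIA step on the truncated finite-state MDP can be sketched in a few lines. The toy MDP below is our own stand-in for a truncated belief-MDP (hypothetical costs and transitions, not the paper's model):

```python
import numpy as np

def rvia(P, c, ref=0, iters=200):
    # Relative value iteration for an average-cost finite-state MDP.
    # P[a, s, s2]: transition probabilities; c[s, a]: stage costs.
    nA, nS, _ = P.shape
    h = np.zeros(nS)
    g = 0.0
    for _ in range(iters):
        Th = np.min(c + np.einsum("ast,t->sa", P, h), axis=1)
        g = Th[ref]   # gain estimate: Bellman offset at the reference state
        h = Th - g    # keep h(ref) = 0 so the iteration stays bounded
    return g, h

# Two states; action 0 stays put (cost = state index), action 1 switches
# states (cost 0.5). Staying in state 0 is free, so the optimal gain is 0.
P = np.array([[[1.0, 0.0], [0.0, 1.0]],
              [[0.0, 1.0], [1.0, 0.0]]])
c = np.array([[0.0, 0.5],
              [1.0, 0.5]])
g, h = rvia(P, c)
```

Subtracting the value at a reference state each iteration is what distinguishes RVIA from plain value iteration: the differential values `h` stay bounded while the subtracted offset converges to the optimal average cost.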
Auditory attention and selective phase-locking are central to human speech understanding in complex acoustic scenes and cocktail party settings, yet these capabilities in multilingual subjects remain poorly understood. While machine understanding of natural speech has advanced in recent years, questions persist about comprehension of overlapped and mixed-channel speech. We propose a systematic paradigm for studying humans and machines in speech question-answering tasks in multilingual settings with clean and mixed-channel speech. For human listeners, selective attention to a target speaker was significantly better in their native language (L1) than in their second language (L2). For machine listening, speech-based large language models (LLMs) match or exceed human performance in clean, single-speaker conditions but often struggle to selectively attend in two-speaker settings. These results reveal a key divergence: humans rely on attentional cues that are more streamlined in their native language, whereas LLMs default to parallel information extraction, which can exceed human capabilities.
We present a comprehensive evaluation of pretrained speech embedding systems for the detection of dysarthric speech using existing accessible data. Dysarthric speech datasets are often small and can suffer from recording biases as well as data imbalance. To address these issues, we select a range of datasets covering related conditions and adopt several cross-validation runs to estimate the chance level. To certify that results are above chance, we compare the distribution of scores across these runs against the distribution of scores of a carefully crafted null hypothesis. In this manner, we evaluate 17 publicly available speech embedding systems across 6 different datasets, reporting the cross-validation performance on each. We also report cross-dataset results derived when training with one particular dataset and testing with another. We observe that within-dataset results vary considerably depending on the dataset, regardless of the embedding used, raising questions about which datasets should be used for benchmarking. We find that cross-dataset accuracy is, as expected, lower than within-dataset accuracy, highlighting challenges in the generalization of the systems. These findings have important implications for the clinical validity of systems trained and tested on the same dataset.
Video-conditioned audio generation tasks, including Video-to-Sound (V2S) and Visual Text-to-Speech (VisualTTS), have traditionally been treated as distinct tasks, leaving the potential for a unified generative framework largely underexplored. In this paper, we bridge this gap with VSSFlow, a unified flow-matching framework that seamlessly solves both tasks. To effectively handle multiple input signals within a Diffusion Transformer (DiT) architecture, we propose a disentangled condition aggregation mechanism leveraging distinct intrinsic properties of attention layers: cross-attention for semantic conditions, and self-attention for temporally-intensive conditions. Moreover, contrary to the prevailing belief that joint training for the two tasks leads to performance degradation, we demonstrate that VSSFlow maintains superior performance during the end-to-end joint learning process. Furthermore, we use a straightforward feature-level data synthesis method, demonstrating that our framework provides a robust foundation that easily adapts to joint sound and speech generation using synthetic data. Extensive experiments on V2S, VisualTTS and joint generation benchmarks show that VSSFlow effectively unifies these tasks and surpasses state-of-the-art domain-specific baselines, underscoring the critical potential of unified generative models. Project page: this https URL
Deep learning-based neural receivers offer promising physical-layer solutions for next-generation wireless systems. We propose an axial self-attention transformer neural receiver that achieves state-of-the-art Block Error Rate (BLER) performance with significantly improved computational efficiency during inference and large-scale training. By factorizing attention operations along temporal and spectral axes, the proposed architecture reduces computational complexity from $O((TF)^2)$ to $O(T^2F+TF^2)$, yielding substantially fewer floating-point operations and attention matrix multiplications per transformer block. Experimental validation under 3GPP Clustered Delay Line (CDL) channels demonstrates consistent performance gains across varying mobility scenarios. Under non-line-of-sight conditions, our proposed axial neural receiver outperforms global self-attention and convolutional neural receiver baselines at 10% BLER and 1% BLER respectively, with reduced computational complexity.
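The complexity saving comes from replacing one attention over the flattened $T\times F$ grid with two attentions, one per axis. A minimal single-head sketch (no learned projections; shapes and names are illustrative, not the proposed receiver):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attend(x):
    # scaled dot-product self-attention along the second-to-last axis
    d = x.shape[-1]
    w = softmax(x @ np.swapaxes(x, -1, -2) / np.sqrt(d))
    return w @ x

def axial_attention(x):
    # x: (T, F, d) time-frequency grid of feature vectors. Full attention
    # over the flattened grid computes (T*F)^2 scores; attending along
    # each axis separately computes only T^2*F + T*F^2.
    x = np.swapaxes(attend(np.swapaxes(x, 0, 1)), 0, 1)  # along time, per subcarrier
    return attend(x)                                     # along frequency, per symbol

rng = np.random.default_rng(0)
y = axial_attention(rng.standard_normal((6, 8, 4)))  # shape preserved
```

Information still propagates across the full grid because the two factorized passes compose: any (time, frequency) position can influence any other through one time-axis hop followed by one frequency-axis hop.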
We study the problem of jointly selecting sensing agents and synthesizing decentralized active perception policies for the chosen subset of agents within a Decentralized Partially Observable Markov Decision Process (Dec-POMDP) framework. Our approach employs a two-layer optimization structure. In the inner layer, we introduce information-theoretic metrics, defined by the mutual information between the unknown trajectories or some hidden property in the environment and the collective partial observations in the multi-agent system, as a unified objective for active perception problems. We employ various optimization methods to obtain optimal sensor policies that maximize mutual information for distinct active perception tasks. In the outer layer, we prove that under certain conditions, the information-theoretic objectives are monotone and submodular with respect to the subset of observations collected from multiple agents. We then exploit this property to design an IMAS$^2$ (Information-theoretic Multi-Agent Selection and Sensing) algorithm for joint sensing-agent selection and sensing-policy synthesis. Although the policy search space is infinite, we adapt the classical Nemhauser-Wolsey argument to prove that the proposed IMAS$^2$ algorithm provides a tight $(1 - 1/e)$-guarantee on the performance. Finally, we demonstrate the effectiveness of our approach on a multi-agent cooperative perception task in a grid-world environment.
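The outer-layer selection follows the classical greedy pattern for monotone submodular maximization: repeatedly add the agent with the largest marginal gain. A toy coverage sketch (an illustrative stand-in for the mutual-information objective; agent names and cell sets are made up):

```python
from itertools import combinations

# Each candidate agent covers a set of grid cells; cells covered is a
# monotone submodular set function, like the paper's MI objectives.
coverage = {
    "a": {1, 2, 3},
    "b": {3, 4},
    "c": {4, 5, 6},
    "d": {1, 6},
}

def f(agents):
    covered = set()
    for x in agents:
        covered |= coverage[x]
    return len(covered)

def greedy(k):
    # Nemhauser-Wolsey greedy: add the agent with the largest marginal gain.
    chosen = []
    for _ in range(k):
        best = max((x for x in coverage if x not in chosen),
                   key=lambda x: f(chosen + [x]))
        chosen.append(best)
    return chosen

sel = greedy(2)
opt = max(f(list(s)) for s in combinations(coverage, 2))  # brute force
```

On this tiny instance greedy happens to find the optimum; in general, monotonicity and submodularity guarantee at least a $(1 - 1/e)$ fraction of it.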
Stacked intelligent metasurfaces (SIMs) represent a key enabler for next-generation wireless networks, offering beamforming gains while significantly reducing radio-frequency chain requirements. In conventional space-only SIM architectures, the rate of reconfigurability of the SIM is equal to the inverse of the channel coherence time. This paper investigates a novel beamforming strategy for massive downlink connectivity using a randomized space-time (ST) coded SIM. In addition to conventional space-only metasurface layers, the proposed design integrates a ST metasurface layer at the input stage of the SIM that introduces random time variations over each channel coherence time interval. These artificial time variations enable opportunistic user scheduling and exploitation of multiuser diversity under slow channel dynamics. To mitigate the prohibitive overhead associated with full channel state information at the transmitter (CSIT), we propose a partial-CSIT-based beamforming scheme that leverages randomized steering vectors and limited user-side feedback based on signal quality measurements. Numerical results demonstrate that the proposed ST-SIM architecture achieves satisfactory sum-rate performance while significantly reducing CSIT acquisition and feedback overhead, thereby enabling scalable downlink connectivity in dense networks.
Whispered speech lacks vocal-fold excitation, making intelligible conversion challenging. We propose WhisperVC, a three-stage framework for low-resource whisper-to-normal (W2N) conversion that decouples cross-domain alignment from speech generation. Stage 1 uses limited paired whisper-normal data with a content encoder and a Conformer-based variational autoencoder (VAE) with soft-DTW alignment to learn domain-invariant semantic representations. Stage 2, trained only on normal speech, employs a Length-Channel Aligner and a two-stage speaker-conditioned mel generator for timbre and prosody modeling. Stage 3 fine-tunes a HiFi-GAN vocoder for waveform synthesis. Experimental results on AISHELL6-Whisper show competitive quality (DNSMOS 3.07, UTMOS 2.83, CER 16.93%) and WavLM speaker similarity (0.95). The framework also supports privacy-preserving and non-vocal communication and can serve as a rehabilitation aid for post-surgical vocal-fold patients. Samples are available online.
Audio watermarking is essential for verifying speech authenticity, yet single-watermark schemes often struggle against sophisticated distortions such as neural reconstruction and adversarial attacks. To address this limitation, we introduce a multiplexing paradigm that combines multiple watermarking techniques to leverage their inherent complementarities. We explore both parallel and sequential multiplexing strategies and propose perceptual-adaptive time-frequency multiplexing (PA-TFM), a robust training-free approach. To further enhance performance, we introduce MaskNet, a novel model-based framework designed to learn effective time-domain multiplexing. Experimental results on the LibriSpeech and Common Voice datasets under 14 diverse attack types, including high-strength white-box and neural reconstruction attacks, demonstrate that both PA-TFM and MaskNet considerably outperform existing single-watermark baselines, establishing a resilient paradigm for real-world audio protection.
We present a tiled architecture for computationally efficient digital beamforming for wideband massive MIMO radar, using beamspace dimension reduction for each tile, and coordinated training of reduced-dimension MVDR beamformers across tiles. We illustrate the efficacy of our approach for a setting in which a 1024-element airborne radar platform beamforms towards airborne targets while suppressing strong interference from ground transmitters. The array is organized into eight 128-element tiles, each a 2D array with 4 (vertical) x 32 (horizontal) elements. Each tile applies a 2D spatial DFT to achieve energy concentration in beamspace, and a 1D temporal FFT to channelize the wideband signal into subbands for which narrowband array models apply. A small tile-level beamspace window is selected for each target (depending on its angle of arrival) in each subband, and coordinated training across tiles is used to compute reduced-dimension MVDR beamformers per-target, per-subband. While full-dimensional MVDR processing is infeasible for the system under consideration, we show that our proposed approach significantly outperforms beamspace MVDR beamforming for a single 128-element tile, where we set the dimensions of the spatial filter (and hence the complexity of MVDR training) to be equal in both systems.
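The per-target, per-subband MVDR step reduces, in each reduced beamspace, to the textbook closed form $w = R^{-1}a / (a^{\mathsf H} R^{-1} a)$. A minimal narrowband sketch (element-space for simplicity, with hypothetical array sizes and angles, not the tiled beamspace system):

```python
import numpy as np

def steering(n, theta):
    # half-wavelength ULA steering vector (unit norm)
    return np.exp(1j * np.pi * np.arange(n) * np.sin(theta)) / np.sqrt(n)

def mvdr(R, a):
    # w = R^{-1} a / (a^H R^{-1} a): unit gain towards a, minimum output power
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

n = 16
a_tgt = steering(n, 0.0)   # target at broadside
a_int = steering(n, 0.4)   # strong interferer off boresight
# interference-plus-noise covariance: INR of 40 dB over the noise floor
R = 100.0 * np.outer(a_int, a_int.conj()) + 0.01 * np.eye(n)
w = mvdr(R, a_tgt)
```

The distortionless constraint holds exactly ($w^{\mathsf H} a_{\mathrm{tgt}} = 1$) while the response towards the interferer is driven close to zero; the tiled architecture applies the same formula with $a$ and $R$ expressed in each tile's reduced beamspace, shrinking the matrix to invert.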
Restoring hand function requires simultaneous and proportional control (SPC) of multiple degrees of freedom (DoFs). This study evaluated the multichannel linear descriptors-based block field method (MLD-BFM) against conventional feature extraction approaches for continuous decoding of five finger-joint DoFs using high-density surface electromyography (HD sEMG). Twenty-one healthy participants performed dynamic sinusoidal finger movements while HD sEMG signals were recorded from the proximal forearm. MLD-BFM extracted spatial descriptors including effective field strength ($\Sigma$), field-strength variation rate ($\Phi$), and spatial complexity ($\Omega$). Performance was optimized (block size: $2\times2$; window: 0.15\,s) and compared with conventional time-domain features, root mean square (RMS) and mean absolute value plus waveform length (MAV-WL), as well as dimensionality reduction methods (PCA and NMF), using multi-output regression models. MLD-BFM achieved the highest mean variance-weighted coefficient of determination ($\mathrm{R}^2_\mathrm{vw}$) across all models, with the multilayer perceptron yielding the best result ($86.68 \pm 0.33 \%$). However, the improvement was not statistically significant relative to time-domain features, suggesting that dense multichannel recordings already encode spatial information through amplitude-based descriptors. MLD-BFM significantly outperformed dimensionality reduction approaches, indicating that preserving the spatial resolution of HD sEMG is critical for accurate multi-DoF finger movement regression.
In conventional distributed optimization, each agent performs a single local update between two communication rounds with its neighbors to synchronize solutions. Inspired by the success of using multiple local updates in federated learning, incorporating local updates into distributed optimization has recently attracted increasing attention. However, unlike federated learning, where multiple local updates can accelerate learning by improving gradient estimation under mini-batch settings, it remains unclear whether similar benefits hold in distributed optimization when gradients are exact. Moreover, existing theoretical results typically require reducing the step size when multiple local updates are employed, which can entirely offset any potential benefit of these additional local updates and obscure their true impact on convergence. In this paper, we focus on the classic DIGing algorithm and leverage the tight performance bounds provided by Performance Estimation Problems (PEP) to show that incorporating local updates can indeed accelerate distributed optimization. To the best of our knowledge, this is the first rigorous demonstration of such acceleration for a broad class of objective functions. Our analysis further reveals that, under an appropriate step size, performing only two local updates is sufficient to achieve the maximal possible improvement, and that additional local updates provide no further gains. Because more updates increase computational cost, these findings offer practical guidance for efficient implementation. Extensive experiments on both synthetic and real-world datasets corroborate the theoretical findings.
Discrete Speech Representation Tokens (DSRTs) have become a foundational component in speech generation. While prior work has extensively studied phonetic and speaker information in DSRTs, how accent information is encoded in DSRTs remains largely unexplored. In this paper, we present the first systematic investigation of accent information in DSRTs. We propose a unified evaluation framework that measures both the accessibility of accent information, via a novel Accent ABX task, and its recoverability, via cross-accent Voice Conversion (VC) resynthesis. Using this framework, we analyse DSRTs derived from several widely used speech representations. Our results reveal that: (1) the choice of layers has the most significant impact on retaining accent information; (2) accent information is substantially reduced by ASR supervision; and (3) naive codebook size reduction cannot effectively disentangle accent from phonetic and speaker information.
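The ABX paradigm named above is standard: given a probe X and references A (same category as X) and B (different category), the representation scores a hit when X is closer to A. A minimal sketch on pooled utterance embeddings, with cosine similarity as an assumed distance (the paper's exact distance and pooling may differ):

```python
import numpy as np

def abx_accuracy(A, B, X):
    """Fraction of triplets where X is closer (cosine) to A than to B.

    A, B, X: arrays of shape (n_triplets, dim). In an accent ABX test,
    A shares X's accent while B differs; higher accuracy means accent
    information is more accessible in the representation.
    """
    def cos(u, v):
        return np.sum(u * v, axis=1) / (
            np.linalg.norm(u, axis=1) * np.linalg.norm(v, axis=1))
    return float(np.mean(cos(X, A) > cos(X, B)))

# Toy check: embeddings clustered by accent should score near 1.0.
rng = np.random.default_rng(0)
acc1 = rng.normal(loc=+1.0, size=(200, 16))   # synthetic accent-1 cluster
acc2 = rng.normal(loc=-1.0, size=(200, 16))   # synthetic accent-2 cluster
probe = acc1 + 0.1 * rng.normal(size=(200, 16))
print(abx_accuracy(A=acc1, B=acc2, X=probe))
```

An accuracy near 0.5 would indicate accent information is inaccessible, which is the failure mode the benchmark is designed to detect.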
In this study, we present a novel approach to predicting the Short-Time Objective Intelligibility (STOI) metric using a bottleneck transformer architecture. Traditional methods for calculating STOI typically require clean reference speech, which limits their applicability in the real world. To address this, numerous deep-learning-based non-intrusive speech assessment models have garnered significant interest. Many studies have achieved commendable performance, but there is room for further improvement. We propose a bottleneck transformer that incorporates convolution blocks for learning frame-level features and a multi-head self-attention (MHSA) layer to aggregate the information. These components enable the transformer to focus on the key aspects of the input data. Our model shows higher correlation and lower mean squared error in both seen and unseen scenarios compared to the state-of-the-art model using self-supervised learning (SSL) and spectral features as inputs.
Missing data problems, such as missing modalities in multi-modal brain MRI and missing slices in cardiac MRI, pose significant challenges in clinical practice. Existing methods rely on external guidance to supply detailed missing state for instructing generative models to synthesize missing MRIs. However, manual indicators are not always available or reliable in real-world scenarios due to the unpredictable nature of clinical environments. Moreover, these explicit masks are not informative enough to provide guidance for improving semantic consistency. In this work, we argue that generative models should infer and recognize missing states in a self-perceptive manner, enabling them to better capture subtle anatomical and pathological variations. Towards this goal, we propose CoPeDiT, a general-purpose latent diffusion model equipped with completeness perception for unified synthesis of 3D MRIs. Specifically, we incorporate dedicated pretext tasks into our tokenizer, CoPeVAE, empowering it to learn completeness-aware discriminative prompts, and design MDiT3D, a specialized diffusion transformer architecture for 3D MRI synthesis that effectively uses the learned prompts as guidance to enhance semantic consistency in 3D space. Comprehensive evaluations on three large-scale MRI datasets demonstrate that CoPeDiT significantly outperforms state-of-the-art methods, achieving superior robustness and yielding high-fidelity, structurally consistent synthesis across diverse missing patterns.
This report presents the TCG CREST system description for Track 1 (Speaker Diarization) of the DISPLACE-M challenge, focusing on naturalistic medical conversations in noisy rural-healthcare scenarios. Our study evaluates the impact of various voice activity detection (VAD) methods and advanced clustering algorithms on overall speaker diarization (SD) performance. We compare and analyze two SD frameworks: a modular pipeline utilizing SpeechBrain with ECAPA-TDNN embeddings, and a state-of-the-art (SOTA) hybrid end-to-end neural diarization system, Diarizen, built on top of a pre-trained WavLM. With these frameworks, we explore diverse clustering techniques, including agglomerative hierarchical clustering (AHC) and multiple novel variants of spectral clustering, such as SC-adapt, SC-PNA, and SC-MK. Experimental results demonstrate that the Diarizen system provides an approximate $39\%$ relative improvement in the diarization error rate (DER) over the SpeechBrain baseline in the post-evaluation analysis of Phase~I. Our best-performing submitted system, the Diarizen baseline with AHC and median filtering over a larger context window of $29$, achieved DERs of 10.37\% and 9.21\% on the development and evaluation sets, respectively. Our team ranked fifth out of the 11 participating teams after the Phase~I evaluation.
Large Audio Language Models (LALMs) are increasingly capable of reasoning over audio. However, existing benchmarks provide limited coverage of reasoning in polyphonic audio, where multiple sound events co-occur and induce compositional structure. In this work, we introduce PolyBench, a benchmark designed to evaluate compositional reasoning in polyphonic audio. PolyBench comprises five evaluation subsets covering counting, classification, detection, concurrency, and duration estimation, requiring reasoning over multiple concurrent events and their relations. Evaluation of state-of-the-art LALMs reveals consistent performance degradation in polyphonic audio, indicating a fundamental bottleneck in current LALMs.
Deep-space habitats (DSHs) are safety-critical systems that must operate autonomously for long periods, often beyond the reach of ground-based maintenance or expert intervention. Monitoring health and anticipating failures are essential for safe operations. Prognostics based on remaining useful life (RUL) prediction support this goal by estimating how long a subsystem can operate before failure. Critical DSH subsystems, including environmental control and life support, power generation, and thermal control, are monitored by many sensors and can degrade through multiple failure modes. In practice, these failure modes are often unknown, and the sensors providing useful information may vary across modes, making accurate RUL prediction challenging when failure data are unlabeled. We propose an unsupervised prognostics framework for RUL prediction that jointly identifies latent failure modes and selects informative sensors using unlabeled run-to-failure data. The framework has two phases: offline sensor selection and failure mode identification, and online diagnosis and RUL prediction. In the offline phase, failure times are modeled using a mixture of Gaussian regressions, and an Expectation-Maximization algorithm simultaneously clusters degradation trajectories and selects mode-specific sensors. In the online phase, low-dimensional features from selected sensors diagnose the active failure mode and predict RUL through a weighted functional regression model. The framework is evaluated on a simulated dataset capturing key telemetry challenges in DSH systems and on the NASA C-MAPSS benchmark. Results show improved prediction accuracy and clearer identification of informative sensors and failure modes compared with existing methods.
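The offline clustering step rests on EM for a mixture of Gaussian regressions. A stripped-down 1-D sketch of that mechanism follows; it omits the paper's mode-specific sensor selection and uses a deliberately symmetry-breaking initialization, so it is illustrative rather than a reimplementation:

```python
import numpy as np

def em_mixture_regression(x, y, K=2, iters=50):
    """EM for a K-component mixture of linear regressions,
    y | k ~ N(a_k * x + b_k, s2_k). Returns parameters and responsibilities."""
    n = len(x)
    X = np.column_stack([x, np.ones(n)])
    theta = np.column_stack([np.linspace(-1, 1, K), np.zeros(K)])  # init slopes
    s2 = np.ones(K)
    pi = np.full(K, 1.0 / K)
    for _ in range(iters):
        # E-step: responsibilities under Gaussian residual likelihoods
        resid = y[None, :] - theta @ X.T                       # K x n
        logp = (np.log(pi)[:, None]
                - 0.5 * np.log(2 * np.pi * s2)[:, None]
                - 0.5 * resid**2 / s2[:, None])
        logp -= logp.max(axis=0)
        r = np.exp(logp)
        r /= r.sum(axis=0)
        # M-step: weighted least squares per component
        for k in range(K):
            W = r[k]
            theta[k] = np.linalg.solve((X.T * W) @ X, X.T @ (W * y))
            s2[k] = (W * (y - X @ theta[k])**2).sum() / W.sum()
            pi[k] = W.mean()
    return theta, r

# Two synthetic "failure modes" with slopes +3 and -3.
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 400)
modes = rng.integers(0, 2, 400)
y = np.where(modes == 0, 3 * x, -3 * x) + 0.05 * rng.normal(size=400)
theta, r = em_mixture_regression(x, y)
print(sorted(theta[:, 0]))   # recovered slopes, near -3 and +3
```

The responsibilities `r` play the role of soft mode assignments for the degradation trajectories; the paper's framework additionally uses them to weight sensor selection per mode.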
Integrating coded caching (CC) into multiple-input multiple-output (MIMO) communications can significantly enhance the achievable degrees of freedom (DoF) in wireless networks. This paper investigates a practical cache-aided asymmetric MIMO configuration with cache ratio $\gamma$, where a server equipped with $L$ transmit antennas communicates with $K$ users, each having $G_k$ receive antennas. We propose three content-aware MIMO-CC strategies: the \emph{min-$G$} scheme, which treats the system as symmetric by assuming all users have the same number of antennas, equal to the smallest among them; the \emph{Grouping} scheme, which maximizes spatial multiplexing gain separately within each user subset at the cost of some global caching gain; and the \emph{Phantom} scheme, which dynamically redistributes spatial resources using virtual or ``phantom'' antennas at the users, bridging the performance gap between the min-$G$ and Grouping schemes. These strategies jointly optimize the number of users, $\Omega$, and the parallel streams decoded by each user, $\beta_k$, ensuring linear decodability for all target users. Analytical and numerical results confirm that the proposed schemes achieve significant DoF improvements across various system configurations.
Cutting-edge classical computing today relies on a combination of CPU-based computing with a strong reliance on accelerators. In particular, high-performance computing (HPC) and machine learning (ML) rely heavily on acceleration via GPUs for numerical kernels. In the future, acceleration via quantum devices may complement GPUs for kernels where algorithms provide quantum advantage, i.e., significant speedups over classical algorithms. Computing with quantum kernels mapped onto quantum processing units (QPUs) requires seamless integration into HPC and ML. However, quantum offloading onto HPC/cloud lacks open-source software infrastructure. For classical algorithms, parallelization standards such as OpenMP, MPI, and CUDA exist. In contrast, a lack of quantum abstractions currently limits the adoption of quantum acceleration in practical applications, creating a gap between quantum algorithm development and practical HPC integration. Such integration needs to extend to efficient quantum offloading of kernels, which further requires scheduling of quantum resources, control of QPU kernel execution, tracking of QPU results, providing results to classical calling contexts, and coordination with HPC scheduling. This work proposes CONQURE, a co-execution environment for quantum and classical resources. CONQURE is a fully open-source cloud queue framework that presents a novel modular scheduling framework allowing users to offload OpenMP quantum kernels to QPUs as quantum circuits, to relay results back to calling contexts in classical computing, and to schedule quantum resources via our CONQURE API. We show our API has a low overhead, averaging 12.7 ms in our tests, and we demonstrate functionality on an ion-trap device. Our OpenMP extension enables the parallelization of VQE runs with a 3.1x reduction in runtime.
A wideband Gaussian Noise Model of the nonlinear noise power spectral density is developed for a single semiconductor optical amplifier as described by the Agrawal model. A simple, interpretable closed-form expression is obtained for the nonlinear noise-to-signal ratio of broadband wavelength-division multiplexed signals as a function of the Agrawal model parameters, the amplifier output power and the transmission bandwidth. The accuracy of the closed-form expression and its region of validity are assessed in numerical simulations. The error is smaller than 0.1 dB when the product of bandwidth and gain recovery time $B\times\tau_c$ exceeds 100. A complete treatment of gain compression is shown to enhance nonlinear noise by a factor $1+P_\text{out}/P_\text{sat}$ compared to the first-order perturbation theory result.
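The two closed-form quantities quoted above, the validity condition $B\times\tau_c > 100$ and the gain-compression enhancement factor $1+P_\text{out}/P_\text{sat}$, are simple enough to evaluate directly. A small sketch with illustrative parameter values (not taken from the paper's simulations):

```python
import math

def nl_enhancement_db(p_out, p_sat):
    """Gain-compression enhancement of the nonlinear noise,
    1 + P_out/P_sat, relative to first-order perturbation theory (dB)."""
    return 10 * math.log10(1 + p_out / p_sat)

def closed_form_valid(bandwidth_hz, tau_c_s, threshold=100):
    """The closed form is accurate to <0.1 dB when B * tau_c exceeds ~100."""
    return bandwidth_hz * tau_c_s > threshold

# Example: 5 THz WDM bandwidth, 200 ps gain-recovery time, P_out = P_sat.
print(closed_form_valid(5e12, 200e-12))        # B * tau_c = 1000
print(round(nl_enhancement_db(1.0, 1.0), 2))   # 3.01 dB at P_out = P_sat
```

At $P_\text{out}=P_\text{sat}$ the complete treatment of gain compression thus predicts roughly 3 dB more nonlinear noise than first-order perturbation theory.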
Bridge models have been investigated in speech enhancement but are mostly single-task, with constrained general speech restoration (GSR) capability. In this work, we propose VoiceBridge, a one-step latent bridge model (LBM) for GSR, capable of efficiently reconstructing 48 kHz fullband speech from diverse distortions. To inherit the advantages of data-domain bridge models, we design an energy-preserving variational autoencoder, enhancing the waveform-latent space alignment over varying energy levels. By compressing waveform into continuous latent representations, VoiceBridge models~\textit{various} GSR tasks with a~\textit{single} latent-to-latent generative process backed by a scalable transformer. To alleviate the challenge of reconstructing the high-quality target from distinctively different low-quality priors, we propose a joint neural prior for GSR, uniformly reducing the burden of the LBM in diverse tasks. Building upon these designs, we further investigate the bridge training objective by jointly tuning the LBM, decoder, and discriminator, transforming the model from a denoiser into a generator and enabling \textit{one-step GSR without distillation}. Extensive validation across in-domain (\textit{e.g.}, denoising and super-resolution) and out-of-domain tasks (\textit{e.g.}, refining synthesized speech) and datasets demonstrates the superior performance of VoiceBridge. Demos: this https URL.
In this paper, we study the use of robust model independent bounded extremum seeking (ES) feedback control to improve the robustness of deep reinforcement learning (DRL) controllers for a class of nonlinear time-varying systems. DRL has the potential to learn from large datasets to quickly control or optimize the outputs of many-parameter systems, but its performance degrades catastrophically when the system model changes rapidly over time. Bounded ES can handle time-varying systems with unknown control directions, but its convergence speed slows down as the number of tuned parameters increases and, like all local adaptive methods, it can get stuck in local minima. We demonstrate that together, DRL and bounded ES result in a hybrid controller whose performance exceeds the sum of its parts with DRL taking advantage of historical data to learn how to quickly control a many-parameter system to a desired setpoint while bounded ES ensures its robustness to time variations. We present a numerical study of a general time-varying system and a combined ES-DRL controller for automatic tuning of the Low Energy Beam Transport section at the Los Alamos Neutron Science Center linear particle accelerator.
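The bounded ES scheme referenced above admits a very compact sketch. The following uses the Scheinker-style dithering law $\dot{x}=\sqrt{\alpha\omega}\cos(\omega t + kC(x,t))$, whose averaged dynamics descend the cost; all parameter values are illustrative and unrelated to the LANSCE beamline study, and a static cost stands in for the time-varying case:

```python
import numpy as np

def bounded_es(cost, x0, T=30.0, dt=1e-3, w=100.0, alpha=0.5, k=2.0):
    """Bounded extremum seeking on a scalar parameter.

    Update u = sqrt(alpha*w) * cos(w*t + k*C(x, t)): the update magnitude
    is bounded by sqrt(alpha*w) regardless of the (unknown) cost landscape,
    and the averaged dynamics follow -(k*alpha/2) * grad C.
    """
    x, t = float(x0), 0.0
    while t < T:
        x += dt * np.sqrt(alpha * w) * np.cos(w * t + k * cost(x, t))
        t += dt
    return x

cost = lambda x, t: x**2       # static quadratic; the method also tracks drifting minima
x_final = bounded_es(cost, x0=2.0)
print(abs(x_final))            # settles near the minimum at 0, up to dither amplitude
```

In the hybrid controller of the paper, a DRL policy would supply a good initial point (here `x0`), and bounded ES provides the model-independent robustness to subsequent drift.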
Auto-regressive speech-text models pre-trained on interleaved text tokens and discretized speech tokens demonstrate strong speech understanding and generation, yet remain substantially less compute-efficient than text LLMs, partly due to the much longer sequences of speech tokens relative to text. This modality imbalance disproportionately allocates pre-training and inference compute to speech, potentially hindering effective cross-modal alignment and slowing performance scaling by orders of magnitude. We introduce the Latent Speech-Text Transformer (LST), which aggregates speech tokens into latent speech patches that serve as higher-level autoregressive units. This design aligns the sequence-modeling granularity between speech and text while improving computational efficiency. The resulting patches can align with textual units to facilitate cross-modal knowledge transfer and compactly capture recurring acoustic patterns such as silence. Across story-completion benchmarks under both compute-controlled and data-controlled settings, LST consistently improves speech accuracy while also improving text performance, achieving up to +6.5% absolute gain on speech HellaSwag in compute-controlled training (+5.3% in data-controlled training). Under compute-controlled scaling from 420M to 1.8B parameters in a near compute-optimal regime, gains grow with scale, and improvements persist up to 7B parameters under fixed-token budgets. These benefits extend to downstream tasks: LST stabilizes ASR adaptation and reduces the effective autoregressive sequence length during ASR and TTS inference, lowering computational cost without degrading reconstruction quality. The code is available at this https URL.
Multi-agent learning faces a fundamental tension: leveraging distributed collaboration without sacrificing the personalization needed for diverse agents. This tension intensifies when aiming for full personalization while adapting to unknown heterogeneity levels -- gaining collaborative speedup when agents are similar, without performance degradation when they are different. Embracing the challenge, we propose personalized collaborative learning (PCL), a novel framework for heterogeneous agents to collaboratively learn personalized solutions with seamless adaptivity. Through carefully designed bias correction and importance correction mechanisms, our method AffPCL robustly handles both environment and objective heterogeneity. We prove that AffPCL reduces sample complexity over independent learning by a factor of $\max\{n^{-1}, \delta\}$, where $n$ is the number of agents and $\delta\in[0,1]$ measures their heterogeneity. This affinity-based acceleration automatically interpolates between the linear speedup of federated learning in homogeneous settings and the baseline of independent learning, without requiring prior knowledge of the system. Our analysis further reveals that an agent may obtain linear speedup even by collaborating with arbitrarily dissimilar agents, unveiling new insights into personalization and collaboration in the high heterogeneity regime.
Robust speaker verification under noisy conditions remains an open challenge. Conventional deep learning methods learn a robust, unified speaker representation space against diverse background noise and achieve significant improvements. In contrast, this paper presents a noise-conditioned mixture-of-experts framework that decomposes the feature space into specialized noise-aware subspaces for speaker verification. Specifically, we propose a noise-conditioned expert routing mechanism, a universal-model-based expert specialization strategy, and an SNR-decaying curriculum learning protocol, collectively improving model robustness and generalization under diverse noise conditions. The proposed method automatically routes inputs to expert networks based on noise information derived from the inputs, where each expert targets distinct noise characteristics while preserving speaker identity information. Comprehensive experiments demonstrate consistent superiority over baselines.
Neural audio codecs (NACs) provide compact latent speech representations in the form of sequences of continuous vectors or discrete tokens. In this work, we investigate how these two types of speech representations compare when used as training targets for supervised speech enhancement. We consider both autoregressive and non-autoregressive speech enhancement models based on the Conformer architecture, as well as a simple baseline where the NAC encoder is fine-tuned for speech enhancement. Our experiments reveal three key findings: predicting continuous latent representations consistently outperforms discrete token prediction; autoregressive models achieve higher quality but at the expense of intelligibility and efficiency, making non-autoregressive models more attractive in practice; and adding encoder fine-tuning yields the strongest enhancement metrics overall, though at the cost of degraded codec reconstruction. The code and audio samples are available online.
Diffusion policies (DPs) achieve state-of-the-art performance on complex manipulation tasks by learning from large-scale demonstration datasets, often spanning multiple embodiments and environments. However, they cannot guarantee safe behavior, requiring external safety mechanisms. These, however, alter actions in ways unseen during training, causing unpredictable behavior and performance degradation. To address these problems, we propose path-consistent safety filtering (PACS) for DPs. Our approach performs path-consistent braking on a trajectory computed from the sequence of generated actions. In this way, we keep the execution consistent with the training distribution of the policy, maintaining the learned, task-completing behavior. To enable real-time deployment and handle uncertainties, we verify safety using set-based reachability analysis. Our experimental evaluation in simulation and on three challenging real-world human-robot interaction tasks shows that PACS (a) provides formal safety guarantees in dynamic environments, (b) preserves task success rates, and (c) outperforms reactive safety approaches, such as control barrier functions, by up to 68% in terms of task success. Videos are available at our project website: this https URL.
Visual prosody may be critical for communication success in face-to-face conversations in noisy settings. Here, we explore the involvement of hand, head, and whole-body movements, as well as gesturing quality, in dyadic conversations in noisy settings. We hypothesize that increasing background noise would alter the frequency of conversation-related movements to support the roles of the speaker and the listener. Specifically, talkers may increase gesticulation and thus use hand, head, trunk, or leg movements more often, while listeners may increase backchanneling or head and trunk movements to improve the signal-to-noise ratio. Additionally, we test whether the synchrony between speech and hand gestures is affected by background noise. Here, pairs of normal-hearing participants (n=8) stood in an audiovisual virtual environment while talking freely. The conversational movements were described using a newly developed labeling system with categories that respect their communicative function. The results showed a higher gesturing rate during speaking than during listening. Increased levels of background noise led to increased hand-gesture complexity, modulation of head movements, and a change in trunk movements. People spoke 0.7-1.4 dB louder during hand gesturing than during periods of static drop posture, but this was unrelated to the presence of background noise. The analysis of hand-speech synchrony showed a modest decrease in synchrony at the moderate noise level. People adapt their communicative behavior to increased background noise levels through increases in speech production levels and gesturing, which may drive an additional increase in speech production due to biomechanical coupling; listeners may increase backchanneling to support the exchange and their own signal-to-noise ratio. The synchrony analysis may reflect motivational factors of communication in noisy environments.
Applying general-purpose object detectors to ship detection in satellite imagery presents significant challenges due to the extreme scale disparity and high aspect ratios of maritime targets. In conventional YOLO architectures, the deepest feature pyramid level (P5, stride of 32) compresses narrow vessels into sub-pixel representations, causing severe spatial feature dilution that prevents the network from resolving fine-grained ship boundaries. In this work, we propose LiM-YOLO (Less is More YOLO), a streamlined detector designed to address these domain-specific structural conflicts. Through a statistical analysis of ship scale distributions across four major benchmarks, we introduce a Pyramid Level Shift Strategy that reconfigures the detection head from the conventional P3-P5 to P2-P4. This shift ensures compliance with the Nyquist sampling condition for small targets while eliminating the computational redundancy inherent in the deep P5 layers. To further stabilize training on high-resolution satellite inputs, we incorporate a Group Normalized Convolutional Block for Linear Projection (GN-CBLinear), which replaces batch-dependent normalization with Group Normalization to overcome gradient instability in memory-constrained micro-batch regimes. Validated on SODA-A, DOTA-v1.5, FAIR1M-v2.0, and ShipRSImageNet-V1, LiM-YOLO achieves state-of-the-art detection accuracy with significantly fewer parameters than existing methods, validating that a well-targeted pyramid level shift can achieve a "Less is More" balance between accuracy and efficiency. The code is available at this https URL.
The rapid growth in wireless infrastructure has increased the need to accurately estimate and forecast electromagnetic field (EMF) levels to ensure ongoing compliance, assess potential health impacts, and support efficient network planning. While existing studies rely on univariate forecasting of wideband aggregate EMF data, frequency-selective multivariate forecasting is needed to capture the inter-operator and inter-frequency variations essential for proactive network planning. To this end, this paper introduces EMFusion, a conditional multivariate diffusion-based probabilistic forecasting framework that integrates diverse contextual factors (e.g., time of day, season, and holidays) while providing explicit uncertainty estimates. The proposed architecture features a residual U-Net backbone enhanced by a cross-attention mechanism that dynamically integrates external conditions to guide the generation process. Furthermore, EMFusion integrates an imputation-based sampling strategy that treats forecasting as a structural inpainting task, ensuring temporal coherence even with irregular measurements. Unlike standard point forecasters, EMFusion generates calibrated probabilistic prediction intervals directly from the learned conditional distribution, providing explicit uncertainty quantification essential for trustworthy decision-making. Numerical experiments conducted on frequency-selective EMF datasets demonstrate that EMFusion, conditioned on working-hours context, outperforms the baseline models with or without conditioning. EMFusion outperforms the best baseline by 23.85% in continuous ranked probability score (CRPS) and 13.93% in normalized root mean square error, and reduces the prediction CRPS error by 22.47%.
Gate-based quantum image processing is constrained by qubit scarcity and the high overhead of quantum state preparation, limiting its applicability to realistic geometric data. We introduce a quantum-native framework for image matching on neutral-atom analog quantum computers that advances our earlier Sparse-Dots Representation (SDR) approach. A classical pre-processing pipeline -- Sobel edge extraction followed by the Ramer--Douglas--Peucker (RDP) algorithm -- converts an input image into a geometrically faithful Sparse-Dots point cloud of substantially fewer atoms. This atom layout is virtually embedded into the programmable tweezer array of QuEra's Aquila device via its Bloqade SDK, where the image geometry is encoded physically in the distance-dependent van der Waals interaction term of the Rydberg Hamiltonian. After time-evolution, we extract the many-body fingerprint of each image using two observables -- the Pearson-normalized two-site correlation matrix which encodes the blockade-induced correlation structure of the quantum state, and the two-dimensional static structure factor evaluated on a fixed wavevector grid, yielding a fingerprint vector of constant length regardless of atom count. In Stage~1, image matching is performed by cosine similarity on the fingerprint vectors, a scale-invariant metric appropriate for Fourier-domain descriptors. In Stage~2, this approach is extended to quantum reservoir computing~(QRC) to enable machine learning via dramatically reduced training data and training cycles, as a preliminary proof-of-concept. Simulations using the Bloqade software stack confirm successful matching of industrial objects, often with fewer than 24 atoms. To our knowledge, this constitutes the first application of the static structure factor -- a condensed-matter quantum observable -- as an image retrieval descriptor in an analog quantum computing context.
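The structure-factor fingerprint and cosine-similarity matching described above can be mimicked classically. The sketch below evaluates $S(\mathbf{q})=|\sum_j e^{i\mathbf{q}\cdot\mathbf{r}_j}|^2/N$ for a point cloud on a fixed wavevector grid, yielding a constant-length fingerprint regardless of atom count; in the paper, $S(\mathbf{q})$ is instead extracted from Rydberg-state correlations after Hamiltonian time-evolution, so this is only an illustrative classical analogue:

```python
import numpy as np

def structure_factor_fingerprint(points, qmax=3.0, nq=8):
    """Constant-length fingerprint from the 2D static structure factor
    S(q) = |sum_j exp(i q . r_j)|^2 / N on a fixed nq x nq wavevector grid."""
    pts = np.asarray(points, dtype=float)
    qs = np.linspace(-qmax, qmax, nq)
    qx, qy = np.meshgrid(qs, qs)
    phases = np.exp(1j * (qx[..., None] * pts[:, 0] + qy[..., None] * pts[:, 1]))
    s = np.abs(phases.sum(axis=-1))**2 / len(pts)
    return s.ravel()                     # length nq*nq, independent of atom count

def cosine(u, v):
    """Scale-invariant similarity, appropriate for Fourier-domain descriptors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

square = [(0, 0), (1, 0), (0, 1), (1, 1)]    # toy Sparse-Dots point clouds
line = [(0, 0), (1, 0), (2, 0), (3, 0)]
fp_sq, fp_ln = structure_factor_fingerprint(square), structure_factor_fingerprint(line)
print(cosine(fp_sq, fp_sq))                  # self-match: 1.0
print(cosine(fp_sq, fp_ln))                  # cross-match: strictly lower
```

Stage 1 matching reduces to ranking candidate images by this cosine score against the query fingerprint.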
This paper presents the design and implementation of a relative localization system for SnailBot, a modular self-reconfigurable robot. The system integrates ArUco marker recognition, optical flow analysis, and IMU data processing into a unified fusion framework, enabling robust and accurate relative positioning for collaborative robotic tasks. Experimental validation demonstrates the effectiveness of the system in real-time operation, with a rule-based fusion strategy ensuring reliability across dynamic scenarios. The results highlight the potential for scalable deployment in modular robotic systems.
Lorentzian and completely log-concave polynomials have recently emerged as a unifying framework for negative dependence, log-concavity, and convexity in combinatorics and probability. We extend this theory to variational analysis and cone-constrained dynamics by studying $K$-Lorentzian and $K$-completely log-concave polynomials over a proper convex cone $K\subset\mathbb{R}^n$. For a $K$-Lorentzian form $f$ and $v\in\operatorname{int}K$, we define an open cone $K^\circ(f,v)$ and a closed cone $K(f,v)$ via directional derivatives along $v$, recovering the usual hyperbolicity cone when $f$ is hyperbolic. We prove that $K^\circ(f,v)$ is a proper cone and equals $\operatorname{int}K(f,v)$. If $f$ is $K(f,v)$-Lorentzian, then $K(f,v)$ is convex and maximal among convex cones on which $f$ is Lorentzian. Using the Rayleigh matrix $M_f(x)=\nabla f(x)\nabla f(x)^T - f(x)\nabla^2 f(x)$, we obtain cone-restricted Rayleigh inequalities and show that two-direction Rayleigh inequalities on $K$ are equivalent to an acuteness condition for the bilinear form $v^T M_f(x) w$. This yields a cone-restricted negative-dependence interpretation linking the curvature of $\log f$ to covariance properties of associated Gibbs measures. For determinantal generating polynomials, we identify the intersection of the hyperbolicity cone with the nonnegative orthant as the classical semipositive cone, and we extend this construction to general proper cones via $K$-semipositive cones. Finally, for linear evolution variational inequality (LEVI) systems, we show that if $q(x)=x^T A x$ is (strictly) $K$-Lorentzian, then $A$ is (strictly) $K$-copositive and yields Lyapunov (semi-)stability on $K$, giving new Lyapunov criteria for cone-constrained dynamics.
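The Rayleigh matrix $M_f(x)=\nabla f(x)\nabla f(x)^T - f(x)\nabla^2 f(x)$ is easy to probe numerically. For the Lorentzian quadratic $f(x)=x_1^2-x_2^2-x_3^2$, $\log f$ is concave on the hyperbolicity cone $\{x_1>\sqrt{x_2^2+x_3^2}\}$, which forces $M_f(x)\succeq 0$ there. A small numerical check of this cone-restricted Rayleigh inequality (an illustration of the setting, not of the paper's proofs):

```python
import numpy as np

# f(x) = x^T A x with Lorentzian signature (+, -, -).
A = np.diag([1.0, -1.0, -1.0])

def M_f(x):
    """Rayleigh matrix M_f(x) = grad f grad f^T - f(x) * Hess f."""
    f = x @ A @ x
    grad = 2 * A @ x
    hess = 2 * A
    return np.outer(grad, grad) - f * hess

# Sample points strictly inside the hyperbolicity cone x1 > |(x2, x3)|.
rng = np.random.default_rng(0)
for _ in range(100):
    y = rng.normal(size=2)
    x = np.array([np.linalg.norm(y) + rng.uniform(0.1, 2.0), y[0], y[1]])
    eigmin = np.linalg.eigvalsh(M_f(x)).min()
    assert eigmin >= -1e-9, eigmin
print("M_f(x) is positive semidefinite at 100 sampled cone points")
```

Since $\nabla^2\log f = -M_f/f^2$ on the set where $f>0$, positive semidefiniteness of $M_f$ on the cone is exactly the log-concavity that underlies the negative-dependence interpretation in the abstract.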
We present a hybrid learning and model-based approach for reactive internal-force adaptation to halt in-hand slip in a multifingered robotic gripper. A multimodal tactile stack combines piezoelectric (PzE) sensing for fast slip cues with piezoresistive (PzR) arrays for contact localization, enabling online construction of the grasp matrix. Upon slip detection, internal forces are updated in the null space of the grasp through a quadratic program that reinforces normal forces while preserving the object wrench. We demonstrate reactive stabilization of multifingered grasps under external perturbations. Augmenting analytic force control with learned tactile cues enables fast and reliable closed-loop stabilization in the evaluated grasp scenarios. The pipeline yields a theoretical sensing-to-command latency of 35-40 ms, including 5 ms for PzR-based grasp geometry updates and approximately 4 ms for solving the quadratic program. In controlled trials, slip onset is detected after ~20 ms. The analysis supports the feasibility of sub-50 ms integrated closed-loop stabilization.
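The null-space property that the internal-force update exploits can be shown in a few lines. The sketch below replaces the paper's quadratic program with its minimal-change special case, an unconstrained projection onto the null space of the grasp matrix, and uses a random full-rank matrix in place of the grasp geometry built online from the PzR contact locations; both simplifications are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-finger grasp: G maps stacked contact forces (9-dim) to the
# 6-dim object wrench. A random full-row-rank G stands in for the
# grasp matrix constructed from sensed contact positions.
G = rng.normal(size=(6, 9))

f = rng.normal(size=9)      # current contact forces
d = rng.normal(size=9)      # desired squeeze direction (hypothetical, e.g. contact normals)

# Null-space projector of G: directions in null(G) are internal forces,
# i.e., they change grip tightness without altering the object wrench.
P = np.eye(9) - np.linalg.pinv(G) @ G

# Slip response: reinforce grip along d while preserving the wrench.
f_new = f + P @ d

print(np.allclose(G @ f_new, G @ f))   # object wrench is unchanged
```

The paper's quadratic program additionally enforces friction-cone and force-limit constraints on `f_new`; the projection above is only the wrench-preservation core of that optimization.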