This study investigates the application of diffusion models to medical image classification (DiffMIC), focusing on skin and oral lesions. Using the PAD-UFES-20 dataset for skin cancer and the P-NDB-UFES dataset for oral cancer, the diffusion model demonstrated competitive performance compared to state-of-the-art deep learning models such as Convolutional Neural Networks (CNNs) and Transformers. Specifically, on PAD-UFES-20 the model achieved a balanced accuracy of 0.6457 for six-class classification and 0.8357 for binary classification (cancer vs. non-cancer), and on P-NDB-UFES it attained a balanced accuracy of 0.9050. These results suggest that diffusion models are a viable approach for classifying medical images of skin and oral lesions. In addition, we investigate the robustness of the model trained on PAD-UFES-20 for skin cancer by testing it on clinical images from the HIBA dataset.
This paper presents a new algorithm for set-based state estimation of nonlinear discrete-time systems with bounded uncertainties. The novel method builds on essential properties and computational advantages of constrained zonotopes (CZs) and polyhedral relaxations of factorable representations of nonlinear functions to propagate CZs through nonlinear functions, a step that is usually performed using conservative linearization in the literature. The new method also refines the propagated enclosure using nonlinear measurements. To achieve this, a lifted polyhedral relaxation is computed for the composite nonlinear function of the system dynamics and measurement equations, in addition to incorporating the measured output through equality constraints. Polyhedral relaxations of trigonometric functions are enabled for the first time, making it possible to address a broader class of nonlinear systems than our previous works. Additionally, an approach to obtain an equivalent enclosure with fewer generators and constraints is developed. Thanks to the advantages of polyhedral enclosures based on factorable representations, the new state estimation method provides better approximations than those resulting from linearization procedures, leading to significant improvements in the computation of convex sets enclosing the system states consistent with measured outputs. Numerical examples highlight the advantages of the novel algorithm in comparison to existing CZ methods based on the Mean Value Theorem and DC programming principles.
Digital twins for power electronics require accurate power-loss information, yet direct loss measurements are often impractical or impossible in real-world applications. This paper presents a novel hybrid framework that combines physics-based thermal modeling with data-driven techniques to identify and correct power losses accurately using only temperature measurements. Our approach leverages a cascaded architecture in which a neural network learns to correct the outputs of a nominal power loss model by backpropagating through a reduced-order thermal model. We explore two neural architectures, a bootstrapped feedforward network and a recurrent neural network, and demonstrate that the bootstrapped feedforward approach achieves superior performance while maintaining computational efficiency for real-time applications. At the interface between the two models, we include normalization strategies and physics-guided training loss functions to preserve stability and ensure physical consistency. Experimental results show that our hybrid model reduces both temperature estimation errors (from 7.2±6.8 °C to 0.3±0.3 °C) and power loss prediction errors (from 5.4±6.6 W to 0.2±0.3 W) compared to traditional physics-based approaches, even in the presence of thermal model uncertainties. This methodology allows us to estimate power losses accurately without direct measurements, making it particularly helpful for real-time industrial applications where sensor placement is hindered by cost and physical limitations.
In 6th-generation (6G) wireless communication technology, it is important to utilize space resources efficiently. Recently, holographic multiple-input multiple-output (HMIMO) and metasurface technology have attracted attention as technologies that maximize space utilization for 6G mobile communications. However, studies on HMIMO communications are still at an initial stage, and their fundamental limits are yet to be unveiled. It is well known that a Fourier transform relationship can be obtained using a lens in optics, but research applying this to the mobile communication field is at an early stage. In this paper, we show that the Fourier transform relationship between signals can be obtained when two metasurfaces are aligned or unaligned, and we analyze the transmit and receive power and the maximum number of spatial multimodes that can be transmitted. In addition, to reduce transmission complexity, we propose a spatial multimode transmission system using three metasurfaces and analyze the signal characteristics on the metasurfaces. In numerical results, we provide the performance of spatial multimode transmission for rectangular and Gaussian signals.
The economic feasibility of nuclear microreactors will depend on minimizing operating costs through advancements in autonomous control, especially when these microreactors operate alongside other types of energy systems (e.g., renewable energy). This study explores the application of deep reinforcement learning (RL) for real-time drum control in microreactors, with a focus on load-following scenarios. By leveraging a point kinetics model with thermal and xenon feedback, we first establish a baseline using a single-output RL agent and compare it against a traditional proportional-integral-derivative (PID) controller. This study demonstrates that RL controllers, including both single- and multi-agent RL (MARL) frameworks, can match or exceed the load-following performance of traditional PID control across a range of scenarios. In short transients, the RL agent reduced the tracking error rate relative to PID. Over extended 300-minute load-following scenarios in which xenon feedback becomes a dominant factor, PID maintained better accuracy, but RL still remained within a 1% error margin despite being trained only on short-duration scenarios. This highlights RL's strong ability to generalize and extrapolate to longer, more complex transients, affording substantial reductions in training costs and reduced overfitting. Furthermore, when control was extended to multiple drums, MARL enabled independent drum control while maintaining reactor symmetry constraints without sacrificing performance -- an objective that standard single-agent RL could not learn. We also found that, as increasing levels of Gaussian noise were added to the power measurements, the RL controllers maintained lower error rates than PID, and did so with less control effort.
Chemical Shift Imaging (CSI) or Chemical Shift Encoded Magnetic Resonance Imaging (CSE-MRI) enables the quantification of different chemical species in the human body, and it is one of the most widely used imaging modalities for quantifying fat in the human body. Although there have been substantial improvements in the design of signal acquisition protocols and the development of a variety of methods for the recovery of parameters of interest from the measured signal, it is still challenging to obtain a consistent and reliable quantification over the entire field of view. In fact, there are still discrepancies in the quantities recovered by different methods, and each exhibits a different degree of sensitivity to acquisition parameters such as the choice of echo times. Some of these challenges have their origin in the signal model itself. In particular, it is non-linear, and there may be different sets of parameters of interest compatible with the measured signal. For this reason, a thorough analysis of this model may help mitigate some of the remaining challenges, and yield insight into novel acquisition protocols. In this work, we perform an analysis of the signal model underlying CSI, focusing on finding suitable conditions under which recovery of the parameters of interest is possible. We determine the sources of non-identifiability of the parameters, and we propose a reconstruction method based on smooth non-convex optimization under convex constraints that achieves exact local recovery under suitable conditions. A surprising result is that the concentrations of the chemical species in the sample may be identifiable even when other parameters are not. We present numerical results illustrating how our theoretical results may help develop novel acquisition techniques, and showing how our proposed recovery method yields results comparable to the state-of-the-art.
Accurate and rapid differentiation of normal brain tissue from glioma, meningioma, and pituitary tumors is crucial for optimal treatment planning and improved medical outcomes. Magnetic Resonance Imaging (MRI) is widely used as a non-invasive diagnostic tool for detecting brain abnormalities, including tumors. However, manual interpretation of MRI scans is often time-consuming, prone to human error, and dependent on highly specialized expertise. This paper proposes an advanced AI-driven technique for detecting glioma, meningioma, and pituitary brain tumors using the YoloV11 and YoloV8 deep learning models. Methods: Using a transfer learning-based fine-tuning approach, we integrate cutting-edge deep learning techniques with medical imaging to classify brain tumors into four categories: No-Tumor, Glioma, Meningioma, and Pituitary Tumors. Results: The study uses the publicly accessible CE-MRI Figshare dataset and fine-tunes the pre-trained YoloV8 and YoloV11 models, which achieve accuracies of 99.49% and 99.56%, respectively, compared with 96.98% for a customized CNN. The results validate the potential of CNNs in achieving high precision in brain tumor detection and classification, highlighting their transformative role in medical imaging and diagnostics.
In this work, we propose an output-feedback tube-based model predictive control (MPC) scheme for linear systems under dynamic uncertainties that are described via integral quadratic constraints (IQCs). By leveraging IQCs, a large class of nonlinear and dynamic uncertainties can be addressed. We use recent IQC synthesis tools to design a dynamic controller and an observer that are robust to these uncertainties and minimize the size of the resulting constraint tightening in the MPC. Thereby, we show that the robust estimation problem using IQCs with peak-to-peak performance can be convexified. We guarantee recursive feasibility, robust constraint satisfaction, and input-to-state stability of the resulting MPC scheme.
We propose a distributed model predictive control (MPC) framework for coordinating heterogeneous, nonlinear multi-agent systems under individual and coupling constraints. The cooperative task is encoded as a shared objective function minimized collectively by the agents. Each agent optimizes an artificial reference as an intermediate step towards the cooperative objective, along with a control input to track it. We establish recursive feasibility, asymptotic stability, and transient performance bounds under suitable assumptions. The solution to the cooperative task is not predetermined but emerges from the optimized interactions of the agents. We demonstrate the framework on numerical examples inspired by satellite constellation control, collision-free narrow passage traversal, and coordinated quadrotor flight.
This study investigates disengagements of Remote Driving Systems (RDS) based on interventions by an in-vehicle Safety Driver (SD) in real-world Operational Design Domains (ODDs), with a focus on Remote Driver (RD) performance during driving training. Based on an analysis of over 14,000 km of remote driving data, the relationship between the driving experience of 25 RDs and the frequency of disengagements is systematically investigated. The results show that the number of SD interventions decreases significantly within the first 400 km of driving experience, illustrating a clear learning curve for the RDs. In addition, the most common causes of the 183 analyzed disengagements are identified and categorized, yielding four main scenarios for SD interventions. The results emphasize the need for experience-based and targeted training programs aimed at developing basic driving skills early on, thereby increasing the safety, controllability, and efficiency of RDS, especially in complex urban-environment ODDs.
Many self-supervised denoising approaches have been proposed in recent years. However, these methods tend to overly smooth images, resulting in the loss of fine structures that are essential for medical applications. In this paper, we propose DiffDenoise, a powerful self-supervised denoising approach tailored for medical images, designed to preserve high-frequency details. Our approach comprises three stages. First, we train a diffusion model on noisy images, using the outputs of a pretrained Blind-Spot Network as conditioning inputs. Next, we introduce a novel stabilized reverse sampling technique, which generates clean images by averaging diffusion sampling outputs initialized with a pair of symmetric noises. Finally, we train a supervised denoising network using noisy images paired with the denoised outputs generated by the diffusion model. Our results demonstrate that DiffDenoise outperforms existing state-of-the-art methods in both synthetic and real-world medical image denoising tasks. We provide both a theoretical foundation and practical insights, demonstrating the method's effectiveness across various medical imaging modalities and anatomical structures.
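The stabilized reverse sampling step can be pictured with the toy sketch below: a generic DDPM-style reverse loop is run twice from antithetic initial noises +z and -z, and the two outputs are averaged. The noise schedule, reverse-step formula, and placeholder eps_model here are our own assumptions, not the paper's implementation (which conditions on a pretrained Blind-Spot Network).

```python
import numpy as np

def reverse_sample(eps_model, x_T, alphas, rng):
    """Generic DDPM-style ancestral sampling from a fixed initial noise x_T."""
    alpha_bar = np.cumprod(alphas)
    x = x_T.copy()
    for t in range(len(alphas) - 1, -1, -1):
        eps = eps_model(x, t)  # stand-in for the trained conditional network
        mean = (x - (1 - alphas[t]) / np.sqrt(1 - alpha_bar[t]) * eps) / np.sqrt(alphas[t])
        x = mean if t == 0 else mean + np.sqrt(1 - alphas[t]) * rng.standard_normal(x.shape)
    return x

def stabilized_sample(eps_model, shape, alphas, rng):
    """Average two reverse trajectories initialized with symmetric noises +z, -z."""
    z = rng.standard_normal(shape)
    return 0.5 * (reverse_sample(eps_model, z, alphas, rng)
                  + reverse_sample(eps_model, -z, alphas, rng))

rng = np.random.default_rng(0)
alphas = 1.0 - np.linspace(1e-4, 0.02, 50)   # assumed linear beta schedule
eps_model = lambda x, t: 0.1 * x             # placeholder denoiser
img = stabilized_sample(eps_model, (64, 64), alphas, rng)
```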
High-fidelity models are essential for accurately capturing nonlinear system dynamics. However, simulating these models is often computationally too expensive and, due to their complexity, they are not directly suitable for analysis, control design, or real-time applications. Surrogate modelling techniques seek to construct simplified representations of these systems with minimal complexity while retaining adequate information about the dynamics for the simulation, analysis, or synthesis objective at hand. Despite the widespread availability of system linearizations and the growing computational potential of autograd methods, there is no established approach that systematically exploits them to capture the underlying global nonlinear dynamics. This work proposes a novel surrogate modelling approach that efficiently builds a global representation of the dynamics on-the-fly from local system linearizations, without ever explicitly computing a model. Using radial basis function interpolation and the second fundamental theorem of calculus, the surrogate model is only computed at evaluation time, enabling rapid computation for simulation and analysis and seamless incorporation of new linearization data. The efficiency and modelling capabilities of the method are demonstrated on simulation examples.
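A minimal sketch of this evaluation-time construction, under our own assumptions (a damped pendulum stands in for the high-fidelity model, and scipy's RBFInterpolator for the interpolation scheme): stored Jacobians are interpolated with radial basis functions, and the dynamics are recovered through a numerically integrated line integral.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Stand-in nonlinear system (damped pendulum) and its analytic Jacobian.
f = lambda x: np.array([x[1], -np.sin(x[0]) - 0.1 * x[1]])
J = lambda x: np.array([[0.0, 1.0], [-np.cos(x[0]), -0.1]])

# "Collected" local linearizations at scattered operating points.
rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(60, 2))
J_flat = np.array([J(x).ravel() for x in X])
interp = RBFInterpolator(X, J_flat)          # RBF model of the Jacobian field

def surrogate_f(x0, f0, x, n_quad=21):
    """f(x) ~= f(x0) + int_0^1 J(x0 + s*(x - x0)) (x - x0) ds (trapezoid rule)."""
    delta = x - x0
    s = np.linspace(0.0, 1.0, n_quad)
    Js = interp(x0[None, :] + s[:, None] * delta[None, :]).reshape(n_quad, 2, 2)
    g = Js @ delta                           # integrand samples, shape (n_quad, 2)
    integral = (s[1] - s[0]) * (0.5 * g[0] + g[1:-1].sum(axis=0) + 0.5 * g[-1])
    return f0 + integral

x0, x = np.zeros(2), np.array([1.2, -0.7])
print(surrogate_f(x0, f(x0), x), "vs true", f(x))
```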
We introduce a generalized excitable system in which spikes can occur in a continuum of directions, thereby drastically enriching the expressivity and control capability of the spiking dynamics. In this generalized excitable system, spiking trajectories evolve in a Hilbert space with an excitable resting state at the origin and spike responses that can be triggered in any direction as a function of the system's state and inputs. State-dependence of the spiking direction provides the system with a vanishing spiking memory trace, which enables robust tracking and integration of inputs in the spiking direction history. The model exhibits generalized forms of both Hodgkin's Type I and Type II excitability, capturing their usual bifurcation behaviors in an abstract setting. When used as the controller of a two-dimensional navigation task, this model facilitates both the sparseness of the actuation and its sensitivity to environmental inputs. These results highlight the potential of the proposed generalized excitable model for excitable control in high- and infinite-dimensional spaces.
While convolutional neural networks (CNNs) and vision transformers (ViTs) have advanced medical image segmentation, they face inherent limitations such as local receptive fields in CNNs and high computational complexity in ViTs. This paper introduces Deconver, a novel network that integrates traditional deconvolution techniques from image restoration as a core learnable component within a U-shaped architecture. Deconver replaces computationally expensive attention mechanisms with efficient nonnegative deconvolution (NDC) operations, enabling the restoration of high-frequency details while suppressing artifacts. Key innovations include a backpropagation-friendly NDC layer based on a provably monotonic update rule and a parameter-efficient design. Evaluated across four datasets (ISLES'22, BraTS'23, GlaS, FIVES) covering both 2D and 3D segmentation tasks, Deconver achieves state-of-the-art performance in Dice scores and Hausdorff distance while reducing computational costs (FLOPs) by up to 90% compared to leading baselines. By bridging traditional image restoration with deep learning, this work offers a practical solution for high-precision segmentation in resource-constrained clinical workflows. The project is available at https://github.com/pashtari/deconver.
Koopman operator theory opens the door to applying rich linear systems theory for computationally efficient modeling and optimal control of nonlinear systems by providing a globally linear representation of complex nonlinear systems. However, methodologies for Koopman operator discovery struggle with their dependency on the set of selected observable functions and with meaningful uncertainty quantification. The primary objective of this work is to leverage Gaussian process regression (GPR) to develop a probabilistic Koopman linear model while removing the need for heuristic observable specification. We present inverted Gaussian process optimization based Koopman operator learning (iGPK), an automatic-differentiation-based approach that simultaneously learns the observable-operator combination. We show that the proposed iGPK method is robust to observation noise in the training data while providing good uncertainty quantification, such that the predicted distribution consistently encapsulates the ground truth, even for noisy training data.
The increasing deployment of Electric Vehicle Charging Infrastructure (EVCI) introduces cybersecurity challenges, as inherent vulnerabilities make it susceptible to cyberattacks. The most vulnerable points in an EVCI are its charging ports, which link EVs to the infrastructure and carry data along with power. Data spoofing attacks targeting these ports can compromise security, reliability, and overall system performance by introducing anomalies in operational data. This research presents an efficient method for identifying variations in charging-port current magnitudes. An EVCI system is simulated in the MATLAB/Simulink environment for various data-generation scenarios. A Temporal Convolutional Network Autoencoder (TCN-AE) is trained on the multivariate time-series data of the EVCI and used to reconstruct it. Anomalies are injected by replacing charging-port current magnitudes with corresponding recordings of different durations, emulating replay attack scenarios. To detect anomalies, the error between the original and reconstructed data is computed, and an anomaly score is formed as the Mahalanobis distance of the errors with respect to their mean vector and covariance matrix. A threshold is obtained from a short subsequence of the errors and optimized over the whole time series; the anomaly score is compared against this optimal threshold to detect anomalies. The model demonstrates robust performance in data reconstruction, identifying anomalies with an accuracy of 99.64% and thereby enhancing the reliability and security of EVCI operations.
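The scoring step described above can be summarized in a few lines; the sketch below assumes E holds the per-sample reconstruction errors of the TCN-AE and uses an illustrative quantile rule in place of the paper's threshold optimization.

```python
import numpy as np

def anomaly_scores(E, eps=1e-6):
    """E: (T, m) reconstruction errors (original minus reconstructed data).
    Returns the Mahalanobis distance of each error vector from the
    empirical error distribution."""
    mu = E.mean(axis=0)
    P = np.linalg.inv(np.cov(E, rowvar=False) + eps * np.eye(E.shape[1]))
    D = E - mu
    return np.sqrt(np.einsum('ij,jk,ik->i', D, P, D))

# Threshold chosen on a short anomaly-free subsequence, then applied globally.
E = np.random.default_rng(0).normal(size=(1000, 4))   # placeholder errors
scores = anomaly_scores(E)
threshold = np.quantile(scores[:200], 0.995)
flags = scores > threshold
```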
Feedback optimization algorithms compute inputs to a system in real time, which helps mitigate the effects of unknown disturbances. However, existing work models both the system dynamics and the computations in either discrete or continuous time, which does not faithfully model some applications. In this work, we model linear system dynamics in continuous time and the computation of inputs in discrete time, and we present a novel hybrid systems framework for modeling feedback optimization of linear time-invariant systems subject to unknown, constant disturbances. For this setup, we first establish the well-posedness of the hybrid model and the completeness of solutions while ruling out Zeno behavior. Then, our main result derives a convergence rate and an error bound for the full hybrid computation-in-the-loop system and shows that it converges exponentially towards a ball of known radius about a desired fixed point. Simulation results show that this approach successfully mitigates the effects of disturbances, with the magnitude of the steady-state error being 81% less than the magnitude of the disturbances in the system.
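A stylized simulation of such a computation-in-the-loop setup (plant matrices, gains, and the quadratic objective are all our own assumptions): the plant flows in continuous time while the gradient-based input update fires every T seconds.

```python
import numpy as np

# Plant: xdot = A x + B u + w, y = C x, with an unknown constant disturbance w.
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B, C = np.eye(2), np.eye(2)
w = np.array([1.0, -0.5])
y_ref = np.array([0.3, -0.2])
grad = lambda y: y - y_ref          # gradient of Phi(y) = 0.5 ||y - y_ref||^2

dt, T, gamma = 1e-3, 0.05, 0.5      # flow step, update period, step size
x, u, t_next = np.zeros(2), np.zeros(2), 0.0
for k in range(int(20.0 / dt)):
    if k * dt >= t_next:            # discrete computation event (jump)
        u = u - gamma * grad(C @ x)
        t_next += T
    x = x + dt * (A @ x + B @ u + w)   # continuous flow (forward Euler)
print("steady-state output error:", np.linalg.norm(C @ x - y_ref))
```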
This paper focuses on the design of a robust decision scheme capable of operating in target-rich scenarios with unknown signal signatures (including their range positions, angles of arrival, and number) in a background of Gaussian disturbance. To solve the problem at hand, a novel estimation procedure is conceived that combines the expectation-maximization algorithm with a hierarchical latent variable model to arrive at a maximum \textit{a posteriori} rule for reliable signal classification and angle-of-arrival estimation. The estimates returned by the procedure are then used to build an adaptive detection architecture in range and azimuth based on the likelihood ratio test, with enhanced detection performance. Remarkably, it is shown that the new decision scheme maintains a constant false alarm rate as the interference parameters vary over the considered range of values. The performance assessment, conducted by means of Monte Carlo simulation, highlights that the proposed detector exhibits superior detection performance in comparison with existing GLRT-based competitors.
Control barrier functions (CBFs) are a powerful tool for synthesizing safe control actions; however, constructing CBFs remains difficult for general nonlinear systems. In this work, we provide a constructive framework for synthesizing CBFs for systems with dual relative degree -- where different inputs influence the outputs at two different orders of differentiation; this is common in systems with orientation-based actuation, such as unicycles and quadrotors. In particular, we propose dual relative degree CBFs (DRD-CBFs) and show that these DRD-CBFs can be constructively synthesized and used to guarantee system safety. Our method constructs DRD-CBFs by leveraging the dual relative degree property -- combining a CBF for an integrator chain with a Lyapunov function certifying the tracking of safe inputs generated for this linear system. We apply these results to dual relative degree systems, both in simulation and experimentally on hardware using quadruped and quadrotor robotic platforms.
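For context, the sketch below shows a standard single-constraint CBF-QP safety filter for a single integrator, which admits a closed-form solution; it illustrates the general CBF mechanism only, not the paper's dual-relative-degree construction.

```python
import numpy as np

def cbf_filter(u_des, x, x_obs, r, alpha=1.0):
    """One-constraint CBF-QP for a single integrator xdot = u.
    Safety set: h(x) = ||x - x_obs||^2 - r^2 >= 0,
    constraint: grad_h(x) . u >= -alpha * h(x)."""
    h = np.dot(x - x_obs, x - x_obs) - r ** 2
    a = 2.0 * (x - x_obs)                  # gradient of h
    slack = a @ u_des + alpha * h
    if slack >= 0.0:                       # nominal input is already safe
        return u_des
    return u_des - (slack / (a @ a)) * a   # closed-form QP projection

x = np.array([0.0, 0.0])
u_des = np.array([1.0, 0.0])               # nominal command heading at the obstacle
print(cbf_filter(u_des, x, x_obs=np.array([1.5, 0.0]), r=1.0))
```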
Many robotics tasks, such as path planning or trajectory optimization, are formulated as optimal control problems (OCPs). The key to obtaining high performance lies in the design of the OCP's objective function. In practice, the objective function consists of a set of individual components that must be carefully modeled and traded off such that the OCP has the desired solution. It is often challenging to balance multiple components to achieve the desired solution and to understand, when the solution is undesired, the impact of individual cost components. In this paper, we present a framework addressing these challenges based on the concept of directional corrections. Specifically, given the solution to an OCP that is deemed undesirable, and access to an expert providing the direction of change that would increase the desirability of the solution, our method analyzes the individual cost components for their "consistency" with the provided directional correction. This information can be used to improve the OCP formulation, e.g., by increasing the weight of consistent cost components, or reducing the weight of - or even redesigning - inconsistent cost components. We also show that our framework can automatically tune parameters of the OCP to achieve consistency with a set of corrections.
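A minimal sketch of the consistency idea, under our own sign convention (a component counts as consistent if its gradient has a negative inner product with the expert's correction direction); the cost components below are invented for illustration.

```python
import numpy as np

def consistency_report(costs, u_star, correction, eps=1e-5):
    """costs: dict of callables J_i(u); correction: expert direction du.
    Reports grad J_i(u*) . du for each component; a negative value marks
    the component as consistent with the correction (assumed convention)."""
    report = {}
    for name, Ji in costs.items():
        g = np.zeros_like(u_star)
        for k in range(u_star.size):
            e = np.zeros_like(u_star)
            e[k] = eps
            g[k] = (Ji(u_star + e) - Ji(u_star - e)) / (2 * eps)  # central diff
        report[name] = float(g @ correction)
    return report

costs = {"effort": lambda u: 0.5 * u @ u,
         "goal":   lambda u: np.sum((u - 1.0) ** 2)}
u_star = np.array([0.4, 0.4])
print(consistency_report(costs, u_star, correction=np.array([1.0, 1.0])))
# "goal" comes out consistent (negative), "effort" inconsistent (positive).
```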
Integrated into existing Mobile Edge Computing (MEC) systems, Unmanned Aerial Vehicles (UAVs) serve as a cornerstone in meeting the stringent requirements of future Internet of Things (IoT) networks. This work studies an MEC system in which a computationally empowered UAV, wirelessly linked to a cloud server, handles task offloading in the uplink transmission of IoT devices. The performance of this system is studied by formulating a resource allocation problem that aims to maximize the long-term computed task efficiency while ensuring the stability of task buffers at the IoT devices, UAV, and cloud. The problem jointly optimizes the uplink transmit power of the IoT devices and their offloading decisions, the trajectory of the UAV, and the computing power at all transceivers. Given the non-convex and stochastic nature of the problem, we devise a multi-step solution approach. First, by invoking fractional programming and Lyapunov theory, we transform the long-term optimization problem into an equivalent per-time-slot form. Subsequently, we recast the reformulated problem as a Markov Decision Process (MDP), which reflects the network dynamics. The MDP model then serves to train a Meta Twin Delayed Deep Deterministic Policy Gradient (MTD3) agent in charge of adaptive resource allocation with respect to MEC system variations arising from the mobility of the UAV and IoT devices. Simulations reveal the dominance of our proposed resource allocation approach over its Deep Reinforcement Learning (DRL)-powered counterparts, increasing computed task efficiency and reducing task buffer lengths.
Leveraging populations of thermostatically controlled loads could provide vast storage capacity to the grid. To realize this potential, their flexibility must be accurately aggregated and represented to the system operator as a single, controllable virtual device. Mathematically, this amounts to computing the Minkowski sum of the individual flexibility sets of the devices. Previous work showed how to exactly characterize the flexibility of lossless storage devices as generalized polymatroids, a family of polytopes that enables efficient computation of Minkowski sums. In this paper we build on these results to encompass devices with dissipative storage dynamics. In doing so, we provide tractable methods for accurately characterizing the flexibility of populations consisting of a variety of heterogeneous devices. Numerical results demonstrate that the proposed characterizations are tight.
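As a toy illustration of the lossless baseline this paper builds on: when each device's flexibility is a g-polymatroid described by per-step power bounds and cumulative-energy bounds, the Minkowski sum is obtained by adding the defining bounds (the dissipative extension developed in the paper requires more than this). Device numbers are invented, and unit time steps are assumed.

```python
import numpy as np

T = 4   # time steps
# Each (lossless) device: power bounds per step, cumulative-energy bounds.
devices = [dict(p=(-1.0, 1.0), e=(0.0, 2.0)),
           dict(p=(-0.5, 2.0), e=(0.0, 3.0))]

# Minkowski sum of the individual sets = elementwise sum of the bounds.
agg = dict(p=tuple(sum(d['p'][i] for d in devices) for i in (0, 1)),
           e=tuple(sum(d['e'][i] for d in devices) for i in (0, 1)))

def feasible(u, spec):
    """Check an aggregate power profile u (length T, unit steps) against bounds."""
    c = np.cumsum(u)
    return (np.all(u >= spec['p'][0]) and np.all(u <= spec['p'][1])
            and np.all(c >= spec['e'][0]) and np.all(c <= spec['e'][1]))

print(agg, feasible(np.array([1.0, 1.5, -0.5, 0.0]), agg))
```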
Synchronization is essential for the stability and coordinated operation of complex networked systems. Pinning control, which selectively controls a subset of nodes, provides a scalable solution to enhance network synchronizability. However, existing strategies face key limitations: heuristic centrality-based methods lack a direct connection to synchronization dynamics, while spectral approaches, though effective, are computationally intensive. To address these challenges, we propose a perturbation-based optimized strategy (PBO) that dynamically evaluates each node's spectral impact on the Laplacian matrix, achieving improved synchronizability with significantly reduced computational costs (with complexity O(kM)). Extensive experiments demonstrate that the proposed method outperforms traditional strategies in synchronizability, convergence rate, and pinning robustness to node failures. Notably, in all the empirical networks tested and some generated networks, PBO significantly outperforms the brute-force greedy strategy, demonstrating its ability to avoid local optima and adapt to complex connectivity patterns. Our study establishes the theoretical relationship between network synchronizability and convergence rate, offering new insights into efficient synchronization strategies for large-scale complex networks.
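The flavor of such spectral sensitivity arguments can be seen in the generic first-order heuristic below, which is not the paper's PBO algorithm: adding a pinning gain c at node i perturbs the Laplacian by c e_i e_i^T, so an eigenvalue with unit eigenvector v shifts by approximately c v[i]^2.

```python
import numpy as np

def pinning_candidates(A, k):
    """Rank nodes by the first-order sensitivity of the Laplacian's second
    smallest eigenvalue: lambda_2(L + c e_i e_i^T) ~= lambda_2 + c * v2[i]^2,
    where v2 is the Fiedler vector (illustrative heuristic only)."""
    L = np.diag(A.sum(axis=1)) - A
    _, vecs = np.linalg.eigh(L)
    v2 = vecs[:, 1]                      # Fiedler vector
    return np.argsort(-v2 ** 2)[:k]

# Example: a 6-node path graph; the endpoints dominate the Fiedler vector.
A = np.zeros((6, 6))
for i in range(5):
    A[i, i + 1] = A[i + 1, i] = 1.0
print(pinning_candidates(A, 2))
```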
The rapid expansion of data centers (DCs) has intensified energy consumption and carbon footprints, incurring massive environmental costs. While carbon-aware workload migration strategies have been examined, existing approaches often overlook reliability metrics, such as server lifetime degradation and quality of service (QoS), that substantially affect both the carbon and operational efficiency of DCs. Hence, this paper proposes a comprehensive optimization framework for spatio-temporal workload migration across distributed DCs that jointly minimizes operational and embodied carbon emissions while complying with service-level agreements (SLAs). A key contribution is the development of an embodied carbon emission model based on servers' expected-lifetime analysis, which explicitly considers server heterogeneity resulting from aging and utilization conditions. These issues are accommodated using new server dispatch strategies and a backup resource allocation model, accounting for hardware-, software-, and workload-induced failures. The overall model is formulated as a mixed-integer optimization problem with multiple linearization techniques to ensure computational tractability. Numerical case studies demonstrate that the proposed method reduces total carbon emissions by up to 21%, offering a pragmatic approach to sustainable DC operations.
In this paper we introduce an observer design framework for ordinary differential equation (ODE) systems based on various types of existing or even novel one-parameter symmetries (exact, asymptotic, and variational), yielding a number of semi-global and global observers, for bounded or unbounded system solutions, with infinite- or finite-time convergence. We compare some of these symmetry-based observers with existing observers, recovering for instance the performance of semi-global high-gain observers and the finite-time convergence capabilities of sliding-mode observers, while obtaining novel global observers in settings where existing techniques cannot provide any.
In this paper, we explore the feasibility of using communication signals for extended target (ET) tracking in an integrated sensing and communication (ISAC) system. The ET is characterized by its center range, azimuth, orientation, and contour shape, for which conventional scatterer-based tracking algorithms are hardly feasible due to the limited scatterer resolution in ISAC. To address this challenge, we propose ISACTrackNet, a deep learning-based tracking model that directly estimates ET kinematic and contour parameters from noisy received echoes. The model consists of three modules: a denoising module for clutter and self-interference suppression, an encoder module for instantaneous state estimation, and a KalmanNet module for prediction refinement within a constant-velocity state-space model. Simulation results show that ISACTrackNet achieves near-optimal accuracy in position and angle estimation compared to radar-based tracking methods, even under limited measurement resolution and partial occlusions, although orientation and contour-shape estimation remains slightly suboptimal. These results clearly demonstrate the feasibility of using communication-only signals for reliable ET tracking.
The increasing demands for high-throughput and energy-efficient wireless communications are driving the adoption of extremely large antenna arrays operating at high-frequency bands. In these regimes, multiple users will reside in the radiative near-field, and accurate localization becomes essential. Unlike conventional far-field systems that rely solely on direction-of-arrival (DOA) estimation, near-field localization exploits spherical wavefront propagation to recover both DOA and range information. While subspace-based methods, such as MUSIC and its extensions, offer high resolution and interpretability for near-field localization, their performance is significantly impacted by model assumptions, including non-coherent sources, well-calibrated arrays, and a sufficient number of snapshots. To address these limitations, this work proposes AI-aided subspace methods for near-field localization that enhance robustness to real-world challenges. Specifically, we introduce NF-SubspaceNet, a deep learning-augmented 2D MUSIC algorithm that learns a surrogate covariance matrix to improve localization under challenging conditions, and DCD-MUSIC, a cascaded AI-aided approach that decouples angle and range estimation to reduce computational complexity. We further develop a novel model-order-aware training method to accurately estimate the number of sources, combined with casting near-field subspace methods as AI models for learning. Extensive simulations demonstrate that the proposed methods outperform classical and existing deep-learning-based localization techniques, providing robust near-field localization even under coherent sources, miscalibrations, and few snapshots.
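As a reference point for the classical baseline, the sketch below implements plain (non-AI) 2D MUSIC over an angle-range grid with the textbook Fresnel-approximation steering vector of a uniform linear array; the geometry, single-source scene, and noise model are our own assumptions.

```python
import numpy as np

def nf_steering(theta, r, N, d, lam):
    """Near-field (Fresnel) steering vector of a centered N-element ULA."""
    n = np.arange(N) - (N - 1) / 2
    tau = d * n * np.sin(theta) - (d * n) ** 2 * np.cos(theta) ** 2 / (2 * r)
    return np.exp(1j * 2 * np.pi * tau / lam) / np.sqrt(N)

def music_2d(R, K, N, d, lam, thetas, rs):
    """2D MUSIC pseudo-spectrum over an angle-range grid (K sources)."""
    _, vecs = np.linalg.eigh(R)
    En = vecs[:, :N - K]                          # noise subspace
    P = np.zeros((len(thetas), len(rs)))
    for i, th in enumerate(thetas):
        for j, r in enumerate(rs):
            a = nf_steering(th, r, N, d, lam)
            P[i, j] = 1.0 / (np.linalg.norm(En.conj().T @ a) ** 2 + 1e-12)
    return P

# Single source at (20 deg, 3 m); 16 snapshots with small additive noise.
N, d, lam = 16, 0.05, 0.1
rng = np.random.default_rng(0)
a0 = nf_steering(np.deg2rad(20), 3.0, N, d, lam)
X = np.outer(a0, rng.standard_normal(16)) + 0.01 * rng.standard_normal((N, 16))
R = X @ X.conj().T / 16
thetas, rs = np.deg2rad(np.linspace(-60, 60, 121)), np.linspace(1.0, 6.0, 51)
i, j = np.unravel_index(music_2d(R, 1, N, d, lam, thetas, rs).argmax(), (121, 51))
print("estimated DOA/range:", np.rad2deg(thetas[i]), rs[j])
```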
Autonomous Sensory Meridian Response (ASMR) has become remarkably popular over the past decade. While its effects have been validated through behavioral studies and neuro-physiological measurements such as electroencephalography (EEG) and related bio-signal analyses, its development and triggers remain a subject of debate. Previous studies suggest that its triggers are highly linked with cyclic patterns: predictable patterns introduce relaxation while variations maintain intrigue. To validate this and further understand the impact of acoustic features on ASMR effects, we designed three distinct cyclic patterns with monophonic and stereophonic variations, controlling their predictability and randomness, and collected ASMR triggering scores through online surveys. We then extracted cyclic features and carried out regression analysis, seeking an explainable mapping between cyclic features and ASMR triggers. We found that relaxing effects accumulate progressively and are independent of spatial orientation. Cyclic patterns significantly influence psychological and physical effects, which remain invariant over time. Regression analysis revealed that smoothly spread and energy-dense cyclic patterns most effectively trigger ASMR responses.
This paper investigates the decidability of opacity in timed automata (TA), a property that has been proven to be undecidable in general. First, we address a theoretical gap in recent work by J. An et al. (FM 2024) by providing necessary and sufficient conditions for the decidability of location-based opacity in TA. Based on these conditions, we identify a new decidable subclass of TA, called timed automata with integer resets (IRTA), where clock resets are restricted to occurring at integer time points. We also present a verification algorithm for opacity in IRTA. On the other hand, we consider achieving decidable timed opacity by weakening the capabilities of intruders. Specifically, we show that opacity in general TA becomes decidable under the assumption that intruders can only observe time in discrete units. These results establish theoretical foundations for modeling timed systems and intruders in security analysis, enabling an effective balance between expressiveness and decidability.
This paper proposes tackling safety-critical stochastic Reinforcement Learning (RL) tasks with a sample-based, model-based approach. At the core of the method lies a Model Predictive Control (MPC) scheme that acts as a function approximator, providing a model-based predictive control policy. To ensure safety, a probabilistic Control Barrier Function (CBF) is integrated into the MPC controller. A sample-based approach with guarantees is employed to approximate the effects of stochasticities in the optimal control formulation and to guarantee the probabilistic CBF condition. A learnable terminal cost formulation is included in the MPC objective to counterbalance the additional computational burden due to sampling. An RL algorithm is deployed to learn both the terminal cost and the CBF constraint. Results from our numerical experiment on a constrained LTI problem corroborate the effectiveness of the proposed methodology in reducing computation time while preserving control performance and safety.
Incentive-based coordination mechanisms for distributed energy consumption have shown promise in aligning individual user objectives with social welfare, especially under privacy constraints. Our prior work proposed a two-timescale adaptive pricing framework, where users respond to prices by minimizing their local cost and the system operator iteratively updates the prices based on aggregate user responses. A key assumption was that the system cost depends smoothly on the aggregate user demand. In this paper, we relax this assumption by considering the more realistic setting where costs are determined by solving a DC optimal power flow (DCOPF) problem with constraints. We present a generalization of the pricing update rule that leverages the generalized gradients of the system cost function, which may be nonsmooth due to the structure of DCOPF. We prove that the resulting dynamic system converges to a unique equilibrium, which solves the social welfare optimization problem. Our theoretical results provide guarantees on convergence and stability using tools from nonsmooth analysis and Lyapunov theory. Numerical simulations on networked energy systems illustrate the effectiveness and robustness of the proposed scheme.
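A stylized numerical sketch of the two-timescale update (every number and functional form here is invented; the paper's cost comes from an actual DCOPF): users respond to a price with an affine demand, the system cost is a max of affine pieces, and the price tracks a generalized gradient of that nonsmooth cost.

```python
import numpy as np

a = np.array([4.0, 5.0, 6.0])        # user demand intercepts (assumed)
b = np.array([1.0, 2.0, 1.5])        # user demand slopes (assumed)
pieces = [(0.5, 0.0), (2.0, -9.0)]   # (slope, offset) of the piecewise-linear cost

def demand(p):                        # users minimize local cost -> affine response
    return np.clip((a - p) / b, 0.0, None)

def gen_grad(D):                      # a generalized gradient of C at aggregate D
    k = int(np.argmax([s * D + o for s, o in pieces]))
    return pieces[k][0]

p, step = 1.0, 0.2
for _ in range(200):                  # slow timescale: price update
    D = demand(p).sum()               # fast timescale: users re-solve
    p = p + step * (gen_grad(D) - p)  # relaxed fixed-point price update
print("equilibrium price and demand:", p, D)
```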
Accurate speed estimation in sensorless brushless DC motors is essential for high-performance control and monitoring, yet conventional model-based approaches struggle with system nonlinearities and parameter uncertainties. In this work, we propose an in-context learning framework that leverages transformer-based models to perform zero-shot speed estimation using only electrical measurements. By training the estimator offline on simulated motor trajectories, we enable real-time inference on unseen real motors without retraining, eliminating the need for explicit system identification while retaining adaptability to varying operating conditions. Experimental results demonstrate that our method outperforms traditional Kalman filter-based estimators, especially in the low-speed regimes that are crucial during motor startup.
Heatstroke and other life-threatening incidents resulting from children and animals being left in vehicles pose a critical global safety issue. Current presence-detection solutions often require specialized hardware or suffer from detection delays that do not meet safety standards. To tackle this issue, by re-modeling channel state information (CSI) with a theoretical analysis of path propagation, this study introduces RapidPD, an innovative system that uses CSI in the subcarrier dimension to detect the presence of humans and pets in vehicles. The system models the impact of motion on CSI and introduces motion statistics in the subcarrier dimension using a multi-layer autocorrelation method to quantify environmental changes. RapidPD is implemented using commercial Wi-Fi chipsets and tested in real vehicle environments with data collected from 10 living organisms. Experimental results demonstrate that RapidPD achieves a detection accuracy of 99.05% and a true positive rate of 99.32% within a 1-second time window at a low sampling rate of 20 Hz. These findings represent a significant advancement in vehicle safety and provide a foundation for the widespread adoption of presence detection systems.
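The first layer of such an autocorrelation-based motion statistic might look like the sketch below (our own simplification of the multi-layer scheme): the lag-1 sample autocorrelation of each subcarrier's CSI, averaged over subcarriers, is near 0 for an empty cabin and approaches 1 under motion.

```python
import numpy as np

def motion_statistic(csi, lag=1):
    """csi: (T, S) CSI amplitudes, T time samples x S subcarriers.
    Returns the lag-`lag` autocorrelation averaged over subcarriers."""
    x = csi - csi.mean(axis=0)
    num = (x[:-lag] * x[lag:]).mean(axis=0)
    den = x.var(axis=0) + 1e-12
    return float((num / den).mean())

rng = np.random.default_rng(0)
static = rng.normal(size=(200, 30))                      # empty-cabin noise
moving = np.cumsum(rng.normal(size=(200, 30)), axis=0)   # correlated motion proxy
print(motion_statistic(static), motion_statistic(moving))
```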
This paper presents an online energy management system for an energy hub in which electric vehicles are charged by combining on-site photovoltaic generation and battery energy storage with the power grid, with the objective of deciding on battery (dis)charging so as to minimize operating costs. To this end, we devise a scenario-based stochastic model predictive control (MPC) scheme that leverages probabilistic 24-hour-ahead forecasts of charging load, solar generation, and day-ahead electricity prices to achieve cost-optimal operation of the energy hub. The probabilistic forecasts use conformal prediction, which provides calibrated, distribution-free confidence intervals starting from a machine learning model that generates no uncertainty quantification. We showcase our controller in a 280-day closed-loop simulation, comparing the observed cost of two scenario-based MPCs with two deterministic alternatives: a version with point forecasts and a version with perfect forecasts. Our results indicate that our proposed scenario-based MPCs are 11% more expensive than the perfect-forecast implementation and 1% better than their deterministic point-forecast counterpart.
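The conformal step can be illustrated with a standard split-conformal sketch (variable names and the absolute-residual score are our assumptions, not the paper's exact recipe): calibration residuals of any point forecaster yield a quantile that widens the point forecast into a calibrated interval.

```python
import numpy as np

def conformal_interval(y_cal, yhat_cal, yhat_test, alpha=0.1):
    """Split conformal prediction: distribution-free intervals around the
    outputs of any point forecaster, calibrated to ~(1 - alpha) coverage."""
    n = len(y_cal)
    scores = np.abs(y_cal - yhat_cal)               # nonconformity scores
    q_level = np.ceil((n + 1) * (1 - alpha)) / n    # finite-sample correction
    q = np.quantile(scores, min(q_level, 1.0))
    return yhat_test - q, yhat_test + q

rng = np.random.default_rng(0)
y_cal = rng.normal(size=500)
yhat_cal = y_cal + rng.normal(scale=0.5, size=500)  # imperfect point forecasts
lo, hi = conformal_interval(y_cal, yhat_cal, yhat_test=np.array([0.0, 1.0]))
print(lo, hi)
```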
This paper studies models for Autonomous Micromobility-on-Demand (AMoD), a paradigm in which a fleet of autonomous vehicles delivers mobility services on demand in conjunction with micromobility systems. Specifically, we introduce a network flow model to encapsulate the interaction between AMoD and micromobility under an intermodal connection scenario. The primary objective is to analyze the system's behavior, optimizing passenger travel time. Following this theoretical development, we apply these models to the transportation network of Sioux Falls, enabling a quantifiable evaluation of the reciprocal influences between the two transportation modes. We find that increasing the number of vehicles in either of these two modes of transportation also incentivizes users to use the other. Moreover, increasing the rebalancing capacity of the micromobility system reduces the rebalancing needed by the AMoD system.
Dynamic metabolic control can enhance bioprocess flexibility and expand the available optimization degrees of freedom via real-time modulation of metabolic enzyme expression. This allows target metabolic fluxes to be dynamically tuned throughout the process. However, identifying optimal dynamic control policies is challenging due to potential metabolic burden, cytotoxic effects, and the generally high-dimensional solution space, making exhaustive experimentation impractical. Here, we propose an approach based on reinforcement learning to derive optimal dynamic metabolic control policies by allowing an agent or controller to interact with a surrogate dynamic model $\textit{in silico}$. To incorporate and test robustness, we apply domain randomization, enabling the controller to generalize across system uncertainties. Our approach provides an alternative to conventional model-based control such as model predictive control, which requires differentiating the models with respect to decision variables; this is often impractical when dealing with complex stochastic, nonlinear, stiff, or piecewise-defined dynamics. In contrast, our approach only requires forward integration, making the task computationally much simpler with off-the-shelf solvers. We demonstrate our approach with a case study on the dynamic control of acetyl-CoA carboxylase in $\textit{Escherichia coli}$ for fatty acid biosynthesis. The derived dynamic metabolic control policies outperform static control, achieving up to 40% higher titers while remaining robust under uncertainty.
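A toy sketch of the domain-randomization idea (the one-state dynamics and random-search optimizer below are invented stand-ins, not the paper's E. coli surrogate or RL algorithm): uncertain model parameters are resampled for every episode, so the policy is scored on an ensemble rather than a single model.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout(gains, params, T=50, dt=0.1):
    """Toy surrogate: one biomass state x with a burden-inducing control
    u in [0, 1]; 'titer' accumulates production (purely illustrative)."""
    mu, tox = params
    x, titer = 1.0, 0.0
    for _ in range(T):
        u = float(np.clip(gains[0] + gains[1] * x, 0.0, 1.0))  # state feedback
        x += dt * (mu * x * (1.0 - x / 10.0) - tox * u * x)    # growth - burden
        titer += dt * u * x                                    # production
    return titer

# Domain randomization: resample the uncertain parameters (mu, tox) every
# episode and score each policy on the randomized ensemble; random search
# stands in for the RL agent.
best_gains, best_score = None, -np.inf
for _ in range(300):
    cand = rng.normal(size=2)
    score = np.mean([rollout(cand, rng.uniform([0.3, 0.01], [0.7, 0.10]))
                     for _ in range(8)])
    if score > best_score:
        best_gains, best_score = cand, score
print(best_gains, best_score)
```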
The Open Dataset of Audio Quality (ODAQ) was recently introduced to address the scarcity of openly available audio datasets with corresponding subjective quality scores. The dataset, released under permissive licenses, comprises audio material processed using six different signal processing methods operating at five quality levels, along with corresponding subjective test results. To expand the dataset, we trained university students as listeners to conduct further subjective tests and obtained results consistent with those of previous expert listeners. We also show how different training approaches affect the use of absolute scales and anchors. The expanded dataset now comprises results from three international laboratories, providing a total of 42 listeners and 10,080 subjective scores. This paper provides the details of the expansion and an in-depth analysis. As part of this analysis, we initiate the use of ODAQ as a benchmark for evaluating the ability of objective audio quality metrics to predict subjective scores.
Sensing and imaging with distributed radio infrastructures (e.g., distributed MIMO, wireless sensor networks, multistatic radar) rely on knowledge of the positions, orientations, and clock parameters of distributed apertures. We extend a particle-based loopy belief propagation (BP) algorithm to cooperatively synchronize distributed agents to anchors in space and time. Substituting marginalization over nuisance parameters with approximate but closed-form concentration, we derive an efficient estimator that bypasses the need for preliminary channel estimation and operates directly on noisy channel observations. Our algorithm demonstrates scalable, accurate spatiotemporal synchronization on simulated data.
Data-enabled predictive control (DeePC) has emerged as a powerful technique to control complex systems without the need for extensive modeling efforts. However, relying solely on offline collected data trajectories to represent the system dynamics introduces certain drawbacks. Therefore, we present a novel semi-data-driven model predictive control (SD-MPC) framework that combines (limited) model information with DeePC to address a range of these drawbacks, including sensitivity to noisy data, lack of robustness, and a high computational burden. In this work we focus on the performance of DeePC in operating regimes not captured by the offline collected data trajectories and demonstrate how incorporating an underlying parametric model can counteract this issue. SD-MPC exhibits equivalent closed-loop performance as DeePC for deterministic linear time-invariant systems. Simulations demonstrate the general control performance of the proposed SD-MPC for both a linear time-invariant system and a nonlinear system modeled as a linear parameter-varying system. These results provide numerical evidence of the enhanced robustness of SD-MPC over classical DeePC.
Reference tracking is a key objective in many control systems, including those characterized by complex nonlinear dynamics. In these settings, traditional control approaches can effectively ensure steady-state accuracy but often struggle to explicitly optimize transient performance. Neural network controllers have gained popularity due to their adaptability to nonlinearities and disturbances; however, they often lack formal closed-loop stability and performance guarantees. To address these challenges, a recently proposed neural-network control framework known as Performance Boosting (PB) has demonstrated the ability to maintain $\mathcal{L}_p$ stability properties of nonlinear systems while optimizing generic transient costs. This paper extends the PB approach to reference tracking problems. First, we characterize the complete set of nonlinear controllers that preserve desired tracking properties for nonlinear systems equipped with base reference-tracking controllers. Then, we show how to optimize transient costs while searching within subsets of tracking controllers that incorporate expressive neural network models. Furthermore, we analyze the robustness of our method to uncertainties in the underlying system dynamics. Numerical simulations on a robotic system demonstrate the advantages of our approach over the standard PB framework.
Remote tracking systems play a critical role in applications such as IoT, monitoring, surveillance, and healthcare. In such systems, maintaining both real-time state awareness (for online decision making) and accurate reconstruction of historical trajectories (for offline post-processing) is essential. While the Age of Information (AoI) metric has been extensively studied as a measure of freshness, it does not capture the accuracy with which past trajectories can be reconstructed. In this work, we investigate reconstruction error as a complementary metric to AoI, addressing the trade-off between timely updates and historical accuracy. Specifically, we consider three policies, each prioritizing different aspects of information management: Keep-Old, Keep-Fresh, and our proposed Inter-arrival-Aware dropping policy. We compare these policies in terms of their impact on both AoI and reconstruction error in a remote tracking system with a finite buffer. Through theoretical analysis and numerical simulations of queueing behavior, we demonstrate that while the Keep-Fresh policy minimizes AoI, it does not necessarily minimize reconstruction error. In contrast, our proposed Inter-arrival-Aware dropping policy dynamically adjusts packet retention decisions based on generation times, achieving a balance between AoI and reconstruction error. Our results provide key insights into the design of efficient update policies for resource-constrained IoT networks.
Building performance simulation (BPS) is critical for understanding building dynamics and behavior, analyzing performance of the built environment, optimizing energy efficiency, improving demand flexibility, and enhancing building resilience. However, conducting BPS is not trivial. Traditional BPS relies on an accurate building energy model, mostly physics-based, which depends heavily on detailed building information, expert knowledge, and case-by-case model calibration, significantly limiting its scalability. With the development of sensing technology and increased data availability, there is growing interest in data-driven BPS. However, purely data-driven models often suffer from limited generalization ability and a lack of physical consistency, resulting in poor performance in real-world applications. To address these limitations, recent studies have started to incorporate physics priors into data-driven models, a methodology called physics-informed machine learning (PIML). PIML is an emerging field whose definitions, methodologies, evaluation criteria, application scenarios, and future directions remain open. To bridge these gaps, this study systematically reviews the state of the art in PIML for BPS, offering a comprehensive definition of PIML and comparing it to traditional BPS approaches with regard to data requirements, modeling effort, performance, and computational cost. We also summarize the commonly used methodologies, validation approaches, application domains, available data sources, open-source packages, and testbeds. In addition, this study provides a general guideline for selecting appropriate PIML models based on BPS applications. Finally, this study identifies key challenges and outlines future research directions, providing a solid foundation and valuable insights to advance R&D of PIML in BPS.
In parallel-connected cells, cell-to-cell (CtC) heterogeneities can lead to current and thermal gradients that may adversely impact battery performance and aging. Sources of CtC heterogeneity include manufacturing process tolerances, poor module configurations, and inadequate thermal management. Understanding which CtC heterogeneity sources most significantly impact battery performance is crucial, as it can provide valuable insights. In this study, we use an experimentally validated electrochemical battery model to simulate hundreds of battery configurations, each consisting of four cells in parallel. We conduct a statistical analysis to evaluate the relative importance of key cell-level parameters, interconnection resistance, cell spacing, and cell location for performance and aging. The analysis reveals that heterogeneities in electrode active material volume fractions primarily impact module capacity, energy, and cell current, leading to substantial thermal gradients. However, to fully capture the output behavior, interconnection resistance, state-of-charge gradients, and the effect of temperature on parameter values must also be considered. Additionally, module design configurations, particularly cell location, exacerbate thermal gradients, accelerating long-term module degradation. This study also offers insights into optimizing cell arrangement during module design to reduce thermal gradients and enhance overall battery performance and longevity. Simulation results with four cells indicate a reduction of 51.8% in thermal gradients, leading to a 5.2% decrease in long-term energy loss.
This thesis delves into the forefront of wireless communication by exploring the synergistic integration of three transformative technologies: STAR-RIS, CoMP, and NOMA. Driven by the ever-increasing demand for higher data rates, improved spectral efficiency, and expanded coverage in the evolving landscape of 6G development, this research investigates the potential of these technologies to revolutionize future wireless networks. The thesis analyzes the performance gains achievable through strategic deployment of STAR-RIS, focusing on mitigating inter-cell interference, enhancing signal strength, and extending coverage to cell-edge users. Resource sharing strategies for STAR-RIS elements are explored, optimizing both transmission and reflection functionalities. Analytical frameworks are developed to quantify the benefits of STAR-RIS assisted CoMP-NOMA networks under realistic channel conditions, deriving key performance metrics such as ergodic rates and outage probabilities. Additionally, the research delves into energy-efficient design approaches for CoMP-NOMA networks incorporating RIS, proposing novel RIS configurations and optimization algorithms to achieve a balance between performance and energy consumption. Furthermore, the application of Deep Reinforcement Learning (DRL) techniques for intelligent and adaptive optimization in aerial RIS-assisted CoMP-NOMA networks is explored, aiming to maximize network sum rate while meeting user quality of service requirements. Through a comprehensive investigation of these technologies and their synergistic potential, this thesis contributes valuable insights into the future of wireless communication, paving the way for the development of more efficient, reliable, and sustainable networks capable of meeting the demands of our increasingly connected world.
Objective: To obtain explainable guarantees in the online synthesis of optimal controllers for high-integrity cyber-physical systems, we re-investigate the use of exhaustive search as an alternative to reinforcement learning. Approach: We model an application scenario as a hybrid game automaton, enabling the synthesis of robustly correct and near-optimal controllers online without prior training. For modal synthesis, we employ discretised games solved via scope-adaptive and step-pre-shielded discrete dynamic programming. Evaluation: In a simulation-based experiment, we apply our approach to an autonomous aerial vehicle scenario. Contribution: We propose a parametric system model and a parametric online synthesis.
Ensuring safety in cyber-physical systems (CPSs) is a critical challenge, especially when system models are difficult to obtain or cannot be fully trusted due to uncertainty, modeling errors, or environmental disturbances. Traditional model-based approaches rely on precise system dynamics, which may not be available in real-world scenarios. To address this, we propose a data-driven safety verification framework that leverages matrix zonotopes and barrier certificates to verify system safety directly from noisy data. Instead of trusting a single unreliable model, we construct a set of models that capture all possible system dynamics that align with the observed data, ensuring that the true system model is always contained within this set. This model set is compactly represented using matrix zonotopes, enabling efficient computation and propagation of uncertainty. By integrating this representation into a barrier certificate framework, we establish rigorous safety guarantees without requiring an explicit system model. Numerical experiments demonstrate the effectiveness of our approach in verifying safety for dynamical systems with unknown models, showcasing its potential for real-world CPS applications.
The assessment of segmentation quality plays a fundamental role in the development, optimization, and comparison of segmentation methods which are used in a wide range of applications. With few exceptions, quality assessment is performed using traditional metrics, which are based on counting the number of erroneous pixels but do not capture the spatial distribution of errors. Established distance-based metrics such as the average Hausdorff distance are difficult to interpret and compare for different methods and datasets. In this paper, we introduce the Surface Consistency Coefficient (SCC), a novel distance-based quality metric that quantifies the spatial distribution of errors based on their proximity to the surface of the structure. Through a rigorous analysis using synthetic data and real segmentation results, we demonstrate the robustness and effectiveness of SCC in distinguishing errors near the surface from those further away. At the same time, SCC is easy to interpret and comparable across different structural contexts.
Reliable collision avoidance under extreme situations remains a critical challenge for autonomous vehicles. While large language models (LLMs) offer promising reasoning capabilities, their application in safety-critical evasive maneuvers is limited by latency and robustness issues. Even so, LLMs stand out for their ability to weigh emotional, legal, and ethical factors, enabling socially responsible and context-aware collision avoidance. This paper proposes a scenario-aware collision avoidance (SACA) framework for extreme situations that integrates predictive scenario evaluation, data-driven reasoning, and scenario-preview-based deployment to improve collision avoidance decision-making. SACA consists of three key components. First, a predictive scenario analysis module utilizes obstacle reachability analysis and motion intention prediction to construct a comprehensive situational prompt. Second, an online reasoning module refines decision-making by leveraging prior collision avoidance knowledge and fine-tuning with scenario data. Third, an offline evaluation module assesses performance and stores scenarios in a memory bank. Additionally, a precomputed policy method improves deployability by previewing scenarios and retrieving or reasoning policies based on similarity and confidence levels. Real-vehicle tests show that, compared with baseline methods, SACA effectively reduces collision losses in extreme high-risk scenarios and lowers false triggering under complex conditions. Project page: https://sean-shiyuez.github.io/SACA/.
This paper introduces an effective framework for designing memoryless dissipative full-state feedbacks for general linear delay systems via the Krasovskii functional (KF) approach, where an unlimited number of pointwise and general distributed delays (DDs) exist in the state, input, and output. To handle the infinite dimensionality of DDs, we employ the Kronecker-Seuret Decomposition (KSD), which we recently proposed for analyzing matrix-valued functions in the context of delay systems. The KSD enables factorization or least-squares approximation of any number of $L^2$ DD kernels from any number of DDs without introducing conservatism. This also facilitates the construction of a complete-type KF with flexible integral kernels, following from an application of novel integral inequalities derived from the least-squares principle. Our solution includes two theorems and an iterative algorithm to compute controller gains without relying on nonlinear solvers. A challenging numerical example, intractable for existing methods, underscores the efficacy of this approach.
Spectral estimation is an important tool in time series analysis, with applications including economics, astronomy, and climatology. The asymptotic theory for non-parametric estimation is well-known, but the development of non-asymptotic theory is still ongoing. Our recent work obtained the first non-asymptotic error bounds on the Bartlett and Welch methods for $L$-mixing stochastic processes. The class of $L$-mixing processes contains common models in time series analysis, including autoregressive processes and measurements of geometrically ergodic Markov chains. Our prior analysis assumes that the process has zero mean. While zero-mean assumptions are common, real-world time-series data often has an unknown, non-zero mean. In this work, we derive non-asymptotic error bounds for both Bartlett and Welch estimators for $L$-mixing time-series data with unknown means. The obtained error bounds are of order $O(\frac{1}{\sqrt{k}})$, where $k$ is the number of data segments used in the algorithm, and are tighter than our previous results under the zero-mean assumption.
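A minimal Welch-style estimator in the setting of these bounds, with the unknown mean handled by subtracting the sample mean before segmenting; window choice and parameters are illustrative:

    import numpy as np

    def welch_psd(x, seg_len, overlap=0.5):
        """Welch estimate with sample-mean removal for unknown-mean data."""
        x = np.asarray(x, float) - np.mean(x)     # handle unknown, non-zero mean
        hop = int(seg_len * (1 - overlap))
        window = np.hanning(seg_len)
        scale = np.sum(window ** 2)
        segs = [x[i:i + seg_len] for i in range(0, len(x) - seg_len + 1, hop)]
        periodograms = [np.abs(np.fft.rfft(window * s)) ** 2 / scale for s in segs]
        return np.mean(periodograms, axis=0)      # average over the k segments

    # Example: AR(1) process (an L-mixing process) shifted to a non-zero mean.
    rng = np.random.default_rng(0)
    x = np.zeros(4096)
    for t in range(1, len(x)):
        x[t] = 0.9 * x[t - 1] + rng.standard_normal()
    psd = welch_psd(x + 5.0, seg_len=256)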
This paper is concerned with partially observed linear system identification, where the goal is to obtain a reasonably accurate estimate of the balanced truncation of the true system up to order $k$ from output measurements. We consider the challenging case of system identification under adversarial attacks, where the probability of an attack at each time is $\Theta(1/k)$ while the value of the attack is arbitrary. We first show that the $l_1$-norm estimator exactly identifies the true Markov parameter matrix for nilpotent systems under any type of attack. We then extend this result to general systems and show that the estimation error decays exponentially as $k$ grows. The estimated balanced truncation model accordingly exhibits an exponentially decaying error in the identification of the true system up to a similarity transformation. This work is the first to provide an input-output analysis of systems with partial observations under arbitrary attacks.
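The $l_1$-norm estimator at the heart of this analysis is a least-absolute-deviations fit, which can be posed as a linear program. Below is a generic sketch, with the regression matrix standing in for stacked input data and sparse, arbitrarily large corruptions playing the role of attacks:

    import numpy as np
    from scipy.optimize import linprog

    def l1_estimator(A, y):
        # minimize ||A g - y||_1 via an LP with slacks t: |A g - y| <= t
        n, p = A.shape
        c = np.concatenate([np.zeros(p), np.ones(n)])
        A_ub = np.block([[A, -np.eye(n)], [-A, -np.eye(n)]])
        b_ub = np.concatenate([y, -y])
        bounds = [(None, None)] * p + [(0, None)] * n
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        return res.x[:p]

    # Sparse corruptions of arbitrary magnitude mimic the attack model.
    rng = np.random.default_rng(2)
    A = rng.standard_normal((200, 5))
    g_true = rng.standard_normal(5)
    y = A @ g_true
    y[rng.choice(200, size=8, replace=False)] += rng.uniform(-50, 50, 8)
    g_hat = l1_estimator(A, y)               # close to g_true despite the attacks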
Articulated multi-axle vehicles are interesting from a control-theoretic perspective due to their peculiar kinematic offtracking characteristics, instability modes, and singularities. The holonomic and nonholonomic constraints affecting the kinematic behavior are investigated in order to develop control-oriented kinematic models representative of these peculiarities. The structure of these constraints is then exploited to develop an iterative algorithm that symbolically derives yaw-plane kinematic models of generalized $n$-trailer articulated vehicles with an arbitrary number of multi-axle vehicle units. A formal proof is provided for the maximum number of kinematic controls admissible to a large-scale generalized articulated vehicle system, which leads to a generalized Ackermann steering law for $n$-trailer systems. Moreover, kinematic data collected from a test vehicle are used to validate the kinematic models and to understand the rearward yaw rate amplification behavior of the vehicle pulling multiple simulated trailers.
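For reference, the classical on-axle single-track $n$-trailer recursion, a simplified special case of the multi-axle models investigated here; the lengths and angles below are illustrative:

    import numpy as np

    def n_trailer_rates(v0, delta, thetas, L):
        """Classical on-axle single-track n-trailer kinematics (a simplified
        special case of the multi-axle models derived in the paper).
        thetas[0] is the tractor heading; L[i] is the i-th unit's length."""
        n = len(thetas)
        rates = np.empty(n)
        rates[0] = v0 * np.tan(delta) / L[0]     # tractor yaw rate (Ackermann)
        v = v0
        for i in range(1, n):
            rel = thetas[i - 1] - thetas[i]      # articulation angle at hitch i
            rates[i] = v * np.sin(rel) / L[i]    # trailer yaw rate
            v = v * np.cos(rel)                  # longitudinal speed passed on
        return rates

    # One Euler step for a tractor pulling two trailers:
    thetas = np.array([0.0, 0.1, 0.2])
    thetas = thetas + 0.01 * n_trailer_rates(5.0, 0.05, thetas, [3.0, 6.0, 6.0])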
The Open Radio Access Network (O-RAN) architecture is reshaping telecommunications by promoting openness, flexibility, and intelligent closed-loop optimization. By decoupling hardware and software and enabling multi-vendor deployments, O-RAN reduces costs, enhances performance, and allows rapid adaptation to new technologies. A key innovation is intelligent network slicing, which partitions networks into isolated slices tailored for specific use cases or quality of service requirements. The RAN Intelligent Controller further optimizes resource allocation, ensuring efficient utilization and improved service quality for user equipment (UEs). However, the modular and dynamic nature of O-RAN expands the threat surface, necessitating advanced security measures to maintain network integrity, confidentiality, and availability. Intrusion detection systems have become essential for identifying and mitigating attacks. This research explores using large language models (LLMs) to generate security recommendations based on the temporal traffic patterns of connected UEs. The paper introduces an LLM-driven intrusion detection framework and demonstrates its efficacy through experimental deployments, comparing non-fine-tuned and fine-tuned models for task-specific accuracy.
Unlike conventional passive reconfigurable intelligent surfaces (RISs), active RISs can amplify incident signals and thermal noise. By exploiting the amplifying capability of active RISs, noticeable performance improvement can be expected when precise channel state information (CSI) is available. Since obtaining perfect CSI related to an RIS is difficult in practice, a robust transmission design is proposed in this paper to tackle the channel uncertainty issue, which is more severe for active RIS-aided systems. To account for the worst-case scenario, the minimum achievable rate of each user is derived under a statistical CSI error model. Subsequently, an optimization problem is formulated to maximize the sum of the users' minimum achievable rates. Since the objective function is non-concave, the formulated problem is transformed into a tractable lower-bound maximization problem, which is solved using an alternating optimization method. Numerical results show that the proposed robust design outperforms a baseline scheme that only exploits estimated CSI.
Most existing generation scheduling models for power systems under demand uncertainty rely on energy-based formulations with a finite number of time periods, which may fail to ensure that power supply and demand are balanced continuously over time. To address this issue, we propose a robust generation scheduling model in a continuous-time framework, employing a decision rule approach. First, for a given set of demand trajectories, we formulate a general robust generation scheduling problem to determine a decision rule that maps these demand trajectories and time points to the power outputs of generators. We then derive a tractable surrogate of this problem as our model by carefully designing a class of decision rules that are affine in the current demand, with coefficients invariant over time and constant terms that are continuous piecewise affine functions of time. As a result, our model can be recast as a finite-dimensional linear program that determines the coefficients and the values of the constant terms at each breakpoint, solvable via the cutting-plane method. Unlike most existing continuous-time models, which rely on Bernstein polynomials, our model is non-anticipative, making it more practical. We also provide illustrative numerical examples.
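A toy instance of such a decision rule, with two generators and hypothetical coefficients and breakpoints: outputs are affine in the current demand, the constant terms are piecewise affine in time, and supply matches demand at every instant because the coefficients sum to one and the constant terms cancel.

    import numpy as np

    # Decision rule p_i(t) = a_i * d(t) + c_i(t): affine in current demand,
    # with time-invariant coefficients a_i and piecewise affine c_i(t).
    breakpts = np.array([0.0, 6.0, 12.0, 18.0, 24.0])     # hours (illustrative)
    a = np.array([0.6, 0.4])                              # sums to 1 -> balance
    c_vals = np.array([[ 5., 10.,  8.,  6.,  5.],         # c_i at breakpoints;
                       [-5.,-10., -8., -6., -5.]])        # columns sum to 0

    def dispatch(t, d):
        c = np.array([np.interp(t, breakpts, c_vals[i]) for i in range(2)])
        return a * d + c                                  # generator outputs

    # Supply tracks demand continuously: sum_i p_i(t) = d(t) for every t.
    for t in np.linspace(0, 24, 97):
        d = 100 + 20 * np.sin(2 * np.pi * t / 24)
        assert abs(dispatch(t, d).sum() - d) < 1e-9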
Conformal prediction (CP) has emerged as a powerful tool in robotics and control, thanks to its ability to calibrate complex, data-driven models with formal guarantees. However, in robot navigation tasks, existing CP-based methods often decouple prediction from control, evaluating models without considering whether prediction errors actually compromise safety. Consequently, ego-vehicles may become overly conservative or even immobilized when all potential trajectories appear infeasible. To address this issue, we propose a novel CP-based navigation framework that responds exclusively to safety-critical prediction errors. Our approach introduces egocentric score functions that quantify how much closer obstacles are to a candidate vehicle position than anticipated. These score functions are then integrated into a model predictive control scheme, wherein each candidate state is individually evaluated for safety. Combined with an adaptive CP mechanism, our framework dynamically adjusts to changes in obstacle motion without resorting to unnecessary conservatism. Theoretical analyses indicate that our method outperforms existing CP-based approaches in terms of cost-efficiency while maintaining the desired safety levels, as further validated through experiments on real-world datasets featuring densely populated pedestrian environments.
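A minimal sketch of the two ingredients, with illustrative names: an egocentric score that is positive only when an obstacle ends up closer to a candidate position than predicted, and an adaptive quantile update that keeps the long-run violation rate near a target level. Scores are simulated here rather than produced by a planner:

    import numpy as np

    def egocentric_score(p_candidate, obs_true, obs_pred):
        """Positive when the obstacle is closer to the candidate position
        than anticipated, i.e., when the prediction error is safety-critical."""
        return (np.linalg.norm(p_candidate - obs_pred)
                - np.linalg.norm(p_candidate - obs_true))

    rng = np.random.default_rng(3)
    alpha, gamma, q = 0.1, 0.05, 0.0          # target rate, step size, margin
    violations = 0
    for step in range(5000):
        s = rng.normal(0.0, 1.0)              # realized score (simulated here)
        # an MPC would require dist(candidate, predicted obstacle) > d_safe + q
        err = float(s > q)                    # 1 if the calibrated margin failed
        violations += err
        q += gamma * (err - alpha)            # adaptive CP quantile tracking
    print(violations / 5000)                  # approx. alpha over the long run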
Motion Cueing Algorithms (MCAs) encode the movement of simulated vehicles into movement that can be reproduced with a motion simulator to provide a realistic driving experience within the capabilities of the machine. This paper introduces a novel learning-based MCA for serial robot-based motion simulators. Building on the differentiable predictive control framework, the proposed method merges the advantages of Nonlinear Model Predictive Control (NMPC) - notably nonlinear constraint handling and accurate kinematic modeling - with the computational efficiency of machine learning. By shifting the computational burden to offline training, the new algorithm enables real-time operation at high control rates, thus overcoming the key challenge associated with NMPC-based motion cueing. The proposed MCA incorporates a nonlinear joint-space plant model and a policy network trained to mimic NMPC behavior while accounting for joint acceleration, velocity, and position limits. Simulation experiments across multiple motion cueing scenarios showed that the proposed algorithm performed on par with a state-of-the-art NMPC-based alternative in terms of motion cueing quality as quantified by the RMSE and correlation coefficient with respect to reference signals. However, the proposed algorithm was on average 400 times faster than the NMPC baseline. In addition, the algorithm successfully generalized to unseen operating conditions, including motion cueing scenarios on a different vehicle and real-time physics-based simulations.
In this paper, we propose a deep hierarchical attention context model for lossless attribute compression of point clouds, leveraging a multi-resolution spatial structure and residual learning. A simple and effective Level of Detail (LoD) structure is introduced to yield a coarse-to-fine representation. To enhance efficiency, points within the same refinement level are encoded in parallel, sharing a common context point group. By hierarchically aggregating information from neighboring points, our attention model learns contextual dependencies across varying scales and densities, enabling comprehensive feature extraction. We also adopt normalization for position coordinates and attributes to achieve scale-invariant compression. Additionally, we segment the point cloud into multiple slices to facilitate parallel processing, further optimizing time complexity. Experimental results demonstrate that the proposed method offers better coding performance than the latest G-PCC for color and reflectance attributes while maintaining more efficient encoding and decoding runtimes.
Autonomous navigation is usually trained offline in diverse scenarios and fine-tuned online using real-world experience. The real world, however, is dynamic and changeable, and many environmental encounters and effects cannot be accounted for in real time because they are difficult to capture in offline training data, or are hard to describe even in online scenarios. A human operator, by contrast, can describe these dynamic environmental encounters through natural language, adding semantic context. This research deploys Large Language Models (LLMs) to perform real-time contextual code adjustment for autonomous navigation. A challenge not yet evaluated in the literature is which LLMs are appropriate and where these computationally heavy algorithms should sit in computation-communication edge-cloud computing architectures. In this paper, we evaluate how different LLMs can dynamically adjust navigation map parameters (e.g., contour map shaping) and derive navigation task instruction sets. We then evaluate which LLMs are most suitable and where they should sit in the edge-cloud of future 6G telecommunication architectures.
In industrial engineering and manufacturing, quality control is an essential part of the production process. To ensure the proper functionality of a manufactured good, rigorous testing has to be performed to identify defective products before shipment to the customer. However, testing products individually in a sequential manner is often tedious, cumbersome, and not widely applicable given that time, resources, and personnel are limited. Thus, statistical methods have been employed to investigate random samples of products from batches. For instance, group testing has emerged as an alternative way to reliably test manufactured goods by evaluating joint test results. Despite the clear advantages, existing group testing methods often struggle with efficiency and practicality in real-world industry settings, where minimizing the average number of tests and the overall testing duration is critical. In this paper, novel multistage (r,s)-regular design algorithms within the framework of group testing are investigated for the identification of defective products. Motivated by the application to quality control in manufacturing, unifying expressions for the expected number of tests and the expected duration are derived. The results show that the novel group testing algorithms outperform established algorithms for low probabilities of defectiveness and come close to the optimal counting bound while maintaining a low level of complexity. The mathematical results are supported by rigorous simulation studies and a performance evaluation.
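As a point of reference, the classical one-stage Dorfman scheme admits a closed-form expected number of tests per item, 1/s + 1 - (1-p)^s for group size s and defect probability p; the multistage designs studied here aim to push this closer to the information-theoretic counting bound. A quick check:

    import numpy as np

    def dorfman_tests_per_item(p, s):
        """Expected tests per item for one-stage Dorfman group testing
        (a baseline; the multistage (r,s)-regular designs improve on this)."""
        return 1.0 / s + 1.0 - (1.0 - p) ** s

    p = 0.01
    best_s = min(range(2, 101), key=lambda s: dorfman_tests_per_item(p, s))
    print(best_s, dorfman_tests_per_item(p, best_s))      # s=11, about 0.196
    entropy_bound = -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    print(entropy_bound)                                  # about 0.081 tests/item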
The accuracy and robustness of machine learning models against adversarial attacks are significantly influenced by factors such as training data quality, model architecture, the training process, and the deployment environment. In recent years, duplicated data in training sets, especially in language models, has attracted considerable attention. It has been shown that deduplication enhances both training performance and model accuracy in language models. While the importance of data quality in training image classifier Deep Neural Networks (DNNs) is widely recognized, the impact of duplicated images in the training set on model generalization and performance has received little attention. In this paper, we address this gap and provide a comprehensive study of the effect of duplicates in image classification. Our analysis indicates that the presence of duplicated images in the training set not only negatively affects the efficiency of model training but may also lower the accuracy of the image classifier. This negative impact on accuracy is particularly evident when duplicated data is non-uniform across classes or when duplication, whether uniform or non-uniform, occurs in the training set of an adversarially trained model. Even when duplicated samples are selected uniformly, increasing the amount of duplication does not lead to a significant improvement in accuracy.
Collision avoidance capability is an essential component in an autonomous vessel navigation system. To this end, an accurate prediction of dynamic obstacle trajectories is vital. Traditional approaches to trajectory prediction face limitations in generalizability and often fail to account for the intentions of other vessels. While recent research has considered incorporating the intentions of dynamic obstacles, these efforts are typically based on the own-ship's interpretation of the situation. The current state-of-the-art in this area is a Dynamic Bayesian Network (DBN) model, which infers target vessel intentions by considering multiple underlying causes and allowing for different interpretations of the situation by different vessels. However, since its inception, there have not been any significant structural improvements to this model. In this paper, we propose enhancing the DBN model by incorporating considerations for grounding hazards and vessel waypoint information. The proposed model is validated using real vessel encounters extracted from historical Automatic Identification System (AIS) data.
Audio-Visual Target Speaker Extraction (AV-TSE) aims to mimic the human ability to enhance auditory perception using visual cues. Although numerous models have been proposed recently, most of them estimate target signals by relying primarily on local dependencies within acoustic features, underutilizing the human-like capacity to infer unclear parts of speech through contextual information. This limitation results not only in suboptimal performance but also in inconsistent extraction quality across an utterance, with some segments exhibiting poor quality or inadequate suppression of interfering speakers. To close this gap, we propose a model-agnostic strategy called Mask-And-Recover (MAR), which integrates both inter- and intra-modality contextual correlations to enable global inference within extraction modules. Additionally, to better target challenging parts within each sample, we introduce a Fine-grained Confidence Score (FCS) model to assess extraction quality and guide extraction modules to emphasize improvement on low-quality segments. To validate the effectiveness of the proposed model-agnostic training paradigm, six popular AV-TSE backbones were adopted for evaluation on the VoxCeleb2 dataset, demonstrating consistent performance improvements across various metrics.
Solving the \textit{multivariate linear model} (MLM) $\mathbf{A}\mathbf{x}=\mathbf{b}$ with the $\ell_1$-norm approximation method, i.e., minimizing $||\mathbf{A}\mathbf{x}-\mathbf{b}||_1$, the $\ell_1$-norm of the \textit{residual error vector} (REV), is a challenging problem. In this work, our contributions lie in two aspects: firstly, the equivalence theorem for the structure of the $\ell_1$-norm optimal solution to the MLM is proposed and proved; secondly, a unified algorithmic framework for solving the MLM with $\ell_1$-norm optimization is proposed, and six novel algorithms (L1-GPRS, L1-TNIPM, L1-HP, L1-IST, L1-ADM, L1-POB) are designed. The algorithms discussed share three significant characteristics: they are implemented with simple matrix operations that do not depend on specific optimization solvers; they are described with algorithmic pseudo-code and implemented in Python and Octave/MATLAB, which makes them easy to use; and they achieve high accuracy and efficiency in scenarios with different levels of data redundancy. We hope that this unified theoretical and algorithmic framework, with source code released on GitHub, will motivate applications of $\ell_1$-norm optimization for parameter estimation of MLMs arising in science, technology, engineering, mathematics, economics, and beyond.
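In the spirit of the L1-ADM variant, the sketch below minimizes $||\mathbf{A}\mathbf{x}-\mathbf{b}||_1$ with an ADMM-style split using only simple matrix operations and soft-thresholding; the parameters and toy data are illustrative, not the paper's released code.

    import numpy as np

    def soft(v, k):                              # soft-thresholding operator
        return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

    def l1_adm(A, b, rho=1.0, iters=500):
        """ADMM-style minimization of ||Ax - b||_1 via the split z = Ax - b
        (a generic sketch in the style of L1-ADM, not the exact released code)."""
        m, n = A.shape
        x, z, u = np.zeros(n), np.zeros(m), np.zeros(m)
        lsq_op = np.linalg.solve(A.T @ A, A.T)   # cached least-squares operator
        for _ in range(iters):
            x = lsq_op @ (b + z - u)             # x-update: least squares
            r = A @ x - b
            z = soft(r + u, 1.0 / rho)           # z-update: shrinkage
            u += r - z                           # dual update
        return x

    rng = np.random.default_rng(4)
    A = rng.standard_normal((100, 4))
    x_true = np.array([1.0, -2.0, 3.0, 0.5])
    b = A @ x_true
    b[:5] += 10.0                                # a few gross outliers
    print(np.round(l1_adm(A, b), 3))             # close to x_true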
In this paper, we investigate reconfigurable pixel antenna (RPA)-based electronic movable antennas (REMAs) for multiuser communications. First, we model each REMA as an antenna characterized by a set of predefined, discretely selectable radiation positions within the radiating region. Considering the trade-off between performance and cost, we propose two types of REMA-based arrays: the partially-connected RPA-based electronic movable-antenna array (PC-REMAA) and the fully-connected REMAA (FC-REMAA). Then, we formulate a multiuser sum-rate maximization problem subject to the power constraint and the hardware constraints of the PC-REMAA or FC-REMAA. To solve this problem, we propose a two-step multiuser beamforming and antenna selection scheme. In the first step, we develop a two-loop joint beamforming and antenna selection (TL-JBAS) algorithm. In the second step, we apply the coordinate descent method to further enhance the solution of the TL-JBAS algorithm. In addition, we revisit mechanical movable antennas (MMAs) to establish a benchmark for evaluating the performance of REMA-enabled multiuser communications, where MMAs can continuously adjust their positions within the transmission region. We also formulate a sum-rate maximization problem for MMA-enabled multiuser communications and propose an alternating beamforming and antenna position optimization scheme to solve it. Finally, we analyze the performance gap between REMAs and MMAs. Based on Fourier analysis, we derive the maximum power loss of REMAs compared to MMAs for any given position interval. Specifically, we show that the REMA incurs a maximum power loss of only 3.25\% compared to the MMA when the position interval is set to one-tenth of the wavelength. Simulation results demonstrate the effectiveness of the proposed methods.
Recently, there has been a surge of research on a class of methods called feedback optimization. These are methods to steer the state of a control system to an equilibrium that arises as the solution of an optimization problem. Despite the growing literature on the topic, the important problem of enforcing state constraints at all times remains unaddressed. In this work, we present the first feedback-optimization method that enforces state constraints. The method combines a class of dynamics called safe gradient flows with high-order control barrier functions. We provide a number of results on our proposed controller, including well-posedness guarantees, anytime constraint-satisfaction guarantees, equivalence between the closed loop's equilibria and the optimization problem's critical points, and local asymptotic stability of optima.
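For a single constraint, the quadratic program that blends a gradient flow with a control barrier function has a closed-form projection, which the following sketch uses. The paper treats general state constraints via high-order CBFs; this scalar-constraint version is only meant to convey the mechanism:

    import numpy as np

    def safe_gradient_step(x, grad_f, h, grad_h, alpha=1.0, dt=0.01):
        """One step of a safe gradient flow: follow -grad f, minimally
        modified so that h(x) >= 0 stays invariant (closed form of the QP
        min ||u + grad_f||^2 s.t. grad_h . u >= -alpha*h; simplified sketch)."""
        u = -grad_f(x)
        g = grad_h(x)
        slack = g @ u + alpha * h(x)
        if slack < 0:                            # CBF constraint active: project
            u = u - (slack / (g @ g)) * g
        return x + dt * u

    # Minimize f(x) = ||x - [2, 0]||^2 while keeping x[0] <= 1 at all times.
    f_grad = lambda x: 2 * (x - np.array([2.0, 0.0]))
    h      = lambda x: 1.0 - x[0]                # h >= 0 encodes the constraint
    h_grad = lambda x: np.array([-1.0, 0.0])
    x = np.array([0.0, 0.5])
    for _ in range(2000):
        x = safe_gradient_step(x, f_grad, h, h_grad)
    print(np.round(x, 3))                        # converges to the constrained optimum [1, 0]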
This article addresses time-optimal path planning for a vehicle capable of moving both forward and backward on a unit sphere with a unit maximum speed, and constrained by a maximum absolute turning rate $U_{max}$. The proposed formulation can be utilized for optimal attitude control of underactuated satellites, optimal motion planning for spherical rolling robots, and optimal path planning for mobile robots on spherical surfaces or uneven terrains. By utilizing Pontryagin's Maximum Principle and analyzing phase portraits, it is shown that for $U_{max}\geq1$, the optimal path connecting a given initial configuration to a desired terminal configuration falls within a sufficient list of 23 path types, each comprising at most 6 segments. These segments belong to the set $\{C,G,T\}$, where $C$ represents a tight turn with radius $r=\frac{1}{\sqrt{1+U_{max}^2}}$, $G$ represents a great circular arc, and $T$ represents a turn-in-place motion. Closed-form expressions for the angles of each path in the sufficient list are derived. The source code for solving the time-optimal path problem and visualization is publicly available at https://github.com/sixuli97/Optimal-Spherical-Convexified-Reeds-Shepp-Paths.
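A small helper for the tight-turn radius formula, plus a reminder of the path alphabet; the segment words shown are illustrative, and the paper enumerates the full sufficient list of 23 types:

    import numpy as np

    def tight_turn_radius(U_max):
        """Radius r = 1/sqrt(1 + U_max^2) of the C segments on the unit sphere."""
        return 1.0 / np.sqrt(1.0 + U_max ** 2)

    for U in (1.0, 2.0, 10.0):
        print(U, tight_turn_radius(U))   # the turn tightens as U_max grows
    # Candidate optimal paths are words of at most 6 letters over {C, G, T},
    # e.g. concatenations in the style of C-G-C (illustrative, not exhaustive).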
Manual labeling for large-scale image and video datasets is often time-intensive, error-prone, and costly, posing a significant barrier to efficient machine learning workflows in fault detection from railroad videos. This study introduces a semi-automated labeling method that utilizes a pre-trained You Only Look Once (YOLO) model to streamline the labeling process and enhance fault detection accuracy in railroad videos. By initiating the process with a small set of manually labeled data, our approach iteratively trains the YOLO model, using each cycle's output to improve model accuracy and progressively reduce the need for human intervention. To facilitate easy correction of model predictions, we developed a system to export YOLO's detection data as an editable text file, enabling rapid adjustments when detections require refinement. This approach decreases labeling time from an average of 2 to 4 minutes per image to 30 seconds to 2 minutes, effectively minimizing labor costs and labeling errors. Unlike costly AI-based labeling solutions on paid platforms, our method provides a cost-effective alternative for researchers and practitioners handling large datasets in fault detection and other detection-based machine learning applications.
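A minimal sketch of the export step, assuming the ultralytics package; the weight file and image names are hypothetical. Detections are written in YOLO's plain-text label format so a human can correct boxes before the next training cycle:

    from pathlib import Path
    from ultralytics import YOLO   # assumes the ultralytics package is installed

    model = YOLO("railroad_faults.pt")           # hypothetical fine-tuned weights

    def export_editable_labels(image_path, out_dir="labels"):
        """Write detections as YOLO-format text (class cx cy w h, normalized)
        so they can be hand-corrected and fed back into the next cycle."""
        result = model(image_path)[0]
        lines = []
        for box in result.boxes:
            cls = int(box.cls.item())
            cx, cy, w, h = box.xywhn[0].tolist()  # normalized box coordinates
            lines.append(f"{cls} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}")
        out = Path(out_dir) / (Path(image_path).stem + ".txt")
        out.parent.mkdir(exist_ok=True)
        out.write_text("\n".join(lines))
        return out

    export_editable_labels("frame_00042.jpg")    # edit the .txt, then retrain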