New articles on Electrical Engineering and Systems Science

[1] 2312.00799

hvEEGNet: exploiting hierarchical VAEs on EEG data for neuroscience applications

With the recent success of artificial intelligence in neuroscience, a number of deep learning (DL) models have been proposed for classification, anomaly detection, and pattern recognition tasks in electroencephalography (EEG). EEG is a multi-channel time series that provides information about individual brain activity for diagnostics, neuro-rehabilitation, and other applications (including emotion recognition). Two main issues challenge existing DL-based modeling methods for EEG: the high variability between subjects and the low signal-to-noise ratio, which make it difficult to ensure good quality in EEG data. In this paper, we propose two variational autoencoder models, namely vEEGNet-ver3 and hvEEGNet, to target the problem of high-fidelity EEG reconstruction. We designed their architectures using the blocks of the well-known EEGNet as the encoder, and proposed a loss function based on dynamic time warping. We tested the models on the public Dataset 2a - BCI Competition IV, where EEG was collected from 9 subjects over 22 channels. hvEEGNet was found to reconstruct the EEG data with very high fidelity, outperforming most previous solutions (including our vEEGNet-ver3), and it did so consistently across all subjects. Interestingly, hvEEGNet made it possible to discover that this popular dataset includes a number of corrupted EEG recordings that might have influenced previous literature results. We also investigated the training behaviour of our models and related it to the quality and the size of the input EEG dataset, aiming to open a new research debate on this relationship. In the future, hvEEGNet could be used as an anomaly (e.g., artefact) detector in large EEG datasets to support domain experts, and the latent representations it provides could be used in other classification problems and in EEG data generation.
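
The loss above builds on dynamic time warping (DTW). As a rough illustration of the underlying idea (not the differentiable variant a training loss would need), the classic DTW distance between two 1-D sequences can be computed with a simple dynamic program:

```python
import math

def dtw_distance(a, b):
    """Classic dynamic-programming DTW between two 1-D sequences."""
    n, m = len(a), len(b)
    # cost[i][j] = DTW distance between a[:i] and b[:j]
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]
```

Unlike a point-wise error, DTW tolerates small temporal misalignments: `dtw_distance([0, 0, 1], [0, 1])` is 0, which is one plausible reason a DTW-based loss suits noisy, jittery EEG reconstruction.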

[2] 2312.00802

Continuous Authentication Using Mouse Clickstream Data Analysis

Biometrics is used to authenticate an individual based on physiological or behavioral traits. Mouse dynamics is an example of a behavioral biometric that can be used to perform continuous authentication as protection against security breaches. Recent research on mouse dynamics has shown promising results in identifying users; however, it has not yet reached an acceptable level of accuracy. In this paper, an empirical evaluation of different classification techniques is conducted on a mouse dynamics dataset, the Balabit Mouse Challenge dataset. User identification is carried out using three mouse actions: mouse move, point and click, and drag and drop. Verification and authentication methods are conducted using three machine-learning classifiers: the Decision Tree classifier, the K-Nearest Neighbors classifier, and the Random Forest classifier. The results show that the three classifiers can distinguish between a genuine user and an impostor with a relatively high degree of accuracy. In verification mode, all the classifiers achieve a perfect accuracy of 100%. In authentication mode, all three classifiers achieve their highest accuracy (ACC) and Area Under Curve (AUC) in scenario B using the point-and-click action data: (Decision Tree ACC: 87.6%, AUC: 90.3%), (K-Nearest Neighbors ACC: 99.3%, AUC: 99.9%), and (Random Forest ACC: 89.9%, AUC: 92.5%).
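
As a toy sketch of the K-Nearest Neighbors idea used above (the feature vectors below are invented for illustration, not taken from the Balabit dataset):

```python
import math

def knn_predict(train, query, k=3):
    """Majority vote among the k nearest training points (Euclidean)."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

# Hypothetical features: (mean pointer speed, click duration).
genuine = [((0.30, 0.12), "genuine"), ((0.28, 0.10), "genuine"),
           ((0.33, 0.11), "genuine")]
impostor = [((0.70, 0.40), "impostor"), ((0.65, 0.35), "impostor"),
            ((0.75, 0.45), "impostor")]
train = genuine + impostor

print(knn_predict(train, (0.31, 0.11)))  # -> genuine
print(knn_predict(train, (0.68, 0.38)))  # -> impostor
```

A real continuous-authentication pipeline would extract such features from sliding windows of mouse events and re-score the session as new actions arrive.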

[3] 2312.00836

Heteroscedastic Uncertainty Estimation for Probabilistic Unsupervised Registration of Noisy Medical Images

This paper proposes a heteroscedastic uncertainty estimation framework for unsupervised medical image registration. Existing methods rely on objectives (e.g. mean-squared error) that assume a uniform noise level across the image, disregarding the heteroscedastic, input-dependent characteristics of the noise distribution in real-world medical images. This introduces noisy gradients due to undesired penalization of outliers, causing unnatural deformation and performance degradation. To mitigate this, we model the heteroscedastic noise with a separate variance estimator, and propose an adaptive weighting scheme for the displacement estimator based on a relative $\gamma$-exponentiated signal-to-noise ratio (SNR); this prevents the model from being driven away by spurious gradients from error residuals, leading to more accurate displacement estimation. To illustrate the versatility and effectiveness of the proposed method, we tested our framework on two representative registration architectures across three medical image datasets. Our proposed framework consistently outperforms other baselines both quantitatively and qualitatively while also providing accurate and sensible uncertainty measures. Paired t-tests show that our improvements in registration accuracy are statistically significant. The code will be publicly available at \url{}.
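
One plausible reading of the adaptive weighting idea, sketched in Python (the exact form in the paper is not given here, so the formula below is a hypothetical stand-in): scale each squared residual by a relative SNR estimate raised to the power $\gamma$, with the SNR proxy taken as the inverse of the predicted per-pixel variance.

```python
def snr_weighted_loss(residuals, variances, gamma=1.0):
    """Down-weight residuals from high-variance (low-SNR) pixels.

    Hypothetical form: weight each squared residual by a relative
    SNR estimate raised to the power gamma, then average.
    """
    snrs = [1.0 / v for v in variances]        # SNR proxy: 1 / variance
    mean_snr = sum(snrs) / len(snrs)
    weights = [(s / mean_snr) ** gamma for s in snrs]
    return sum(w * r * r for w, r in zip(weights, residuals)) / len(residuals)
```

With `gamma=0` (or uniform variances) this reduces to plain MSE; with `gamma > 0`, a residual sitting on a noisy pixel contributes far less, which is the mechanism that suppresses spurious gradients from outliers.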

[4] 2312.00837

An Adaptive Correspondence Scoring Framework for Unsupervised Image Registration of Medical Images

We propose an adaptive training scheme for unsupervised medical image registration. Existing methods rely on image reconstruction as the primary supervision signal. However, nuisance variables (e.g. noise and covisibility) often cause the loss of correspondence between medical images, violating the Lambertian assumption for physical waves (e.g. ultrasound) and the assumption of consistent imaging acquisition. As the unsupervised learning scheme relies on intensity constancy to establish correspondence between images for reconstruction, this introduces spurious error residuals that are not modeled by the typical training objective. To mitigate this, we propose an adaptive framework that re-weights the error residuals with a correspondence scoring map during training, preventing the parametric displacement estimator from drifting away due to noisy gradients, which would otherwise lead to performance degradation. To illustrate the versatility and effectiveness of our method, we tested our framework on three representative registration architectures across three medical image datasets, along with other baselines. Our proposed adaptive framework consistently outperforms other methods both quantitatively and qualitatively. Paired t-tests show that our improvements are statistically significant. The code will be publicly available at \url{}.

[5] 2312.00890

Improved Stability and Controller Design Criteria for Two-dimensional Differential-Algebraic-Equation Systems via LMI Approach

This paper addresses issues concerning asymptotic stability testing and controller design for the two-dimensional Roesser model in Differential-Algebraic-Equation (DAE) systems. We present sufficient stability criteria based on the Lyapunov approach, utilizing a set of Linear Matrix Inequalities (LMIs) tailored for two-dimensional DAEs. Furthermore, we establish a set of sufficient conditions for determining the feasibility of both state- and output-feedback controllers. Our methods eliminate the need to decompose the two-dimensional DAEs into separate algebraic and differential components.

[6] 2312.00919

Rethinking Skip Connections in Spiking Neural Networks with Time-To-First-Spike Coding

Time-To-First-Spike (TTFS) coding in Spiking Neural Networks (SNNs) offers significant advantages in terms of energy efficiency, closely mimicking the behavior of biological neurons. In this work, we delve into the role of skip connections, a widely used concept in Artificial Neural Networks (ANNs), within the domain of SNNs with TTFS coding. Our focus is on two distinct types of skip connection architectures: (1) addition-based skip connections, and (2) concatenation-based skip connections. We find that addition-based skip connections introduce an additional delay in terms of spike timing. On the other hand, concatenation-based skip connections circumvent this delay but produce time gaps between the post-convolution and skip-connection paths, thereby restricting the effective mixing of information from these two paths. To mitigate these issues, we propose a novel approach involving a learnable delay for skip connections in the concatenation-based skip connection architecture. This approach successfully bridges the time gap between the convolutional and skip branches, facilitating improved information mixing. We conduct experiments on public datasets including MNIST and Fashion-MNIST, illustrating the advantage of skip connections in TTFS coding architectures. Additionally, we demonstrate the applicability of TTFS coding beyond image recognition tasks by extending it to scientific machine-learning tasks, broadening the potential uses of SNNs.
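
A highly simplified first-spike-time toy of the delay issue (the function and its timing convention are hypothetical, not the paper's formulation): a merged neuron can only fire once input from the slower branch has arrived, so a delay on the skip path can align the two arrival times.

```python
def merge_skip_addition(t_conv, t_skip, delay=0.0):
    """Toy first-spike-time model of merging a convolutional branch
    and a skip branch. The merged neuron fires only after the slower
    branch's spike arrives; an (optionally learnable) delay on the
    skip path can shift it to align with the convolutional path."""
    return max(t_conv, t_skip + delay)
```

With `delay = t_conv - t_skip`, the two branches arrive simultaneously, illustrating in miniature why a learnable delay can bridge the time gap between branches.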

[7] 2312.00921

Bitstream Organization for Parallel Entropy Coding on Neural Network-based Video Codecs

Video compression systems must support increasing bandwidth and data throughput at low cost and power, and can be limited by entropy coding bottlenecks. Efficiency can be greatly improved by parallelizing coding, which can be done at much larger scales with new neural-based codecs, but at some compression loss related to data organization. We analyze the bit rate overhead needed to support multiple bitstreams for concurrent decoding and, to minimize it, propose a method for compressing parallel-decoding entry points, using bidirectional bitstream packing and a new form of jointly optimized arithmetic coding termination. It is shown that these techniques significantly lower the overhead, making it easier to reduce it to a small fraction of the average bitstream size: for example, below 1% and 0.1% when the average bitstream size exceeds 95 and 1,200 bytes, respectively.
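
The bidirectional packing idea can be sketched as two substreams sharing one buffer: one is written forward from the start and the other backward from the end, so no explicit entry-point offset for the second substream needs to be signaled (a simplified toy with raw bytes; real codecs pack arithmetic-coded bits):

```python
def pack(a: bytes, b: bytes) -> bytes:
    """Share one buffer: substream A grows forward from the start,
    substream B grows backward from the end (stored reversed)."""
    return a + b[::-1]

def read_forward(buf: bytes, n: int) -> bytes:
    """Decoder for substream A: consume n bytes left-to-right."""
    return buf[:n]

def read_backward(buf: bytes, n: int) -> bytes:
    """Decoder for substream B: consume n bytes right-to-left."""
    return buf[len(buf) - n:][::-1]

# Each decoder starts at its own end of the shared buffer and stops
# when its symbols are exhausted, so only the total length matters.
buf = pack(b"AAAA", b"BBB")
assert read_forward(buf, 4) == b"AAAA"
assert read_backward(buf, 3) == b"BBB"
```

The overhead saving comes from replacing one signaled entry point per substream pair with none, at the cost of pairing the substreams in a single buffer.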

[8] 2312.00926

New Filters for Image Interpolation and Resizing

We propose a new class of kernels to simplify the design of filters for image interpolation and resizing. Their properties are defined according to two parameters, specifying the width of the transition band and the height of a unique sidelobe. By varying these parameters it is possible to efficiently explore the space with only the filters that are suitable for image interpolation and resizing, and identify the filter that is best for a given application. These two parameters are also sufficient to obtain very good approximations of many commonly-used interpolation kernels. We also show that, because the Fourier transforms of these kernels have very fast decay, these filters produce better results when time-stretched for image downsizing.

[9] 2312.00936

Surface Coil Intensity Correction for MRI

Modern MRI scanners utilize one or more arrays of small receive-only coils to collect k-space data. The sensitivity maps of the coils, when estimated using traditional methods, differ from the true sensitivity maps, which are generally unknown. Consequently, the reconstructed MR images exhibit undesired spatial variation in intensity. These intensity variations can be at least partially corrected using pre-scan data. In this work, we propose an intensity correction method that utilizes pre-scan data. For demonstration, we apply our method to a digital phantom, as well as to cardiac MRI data collected from a commercial scanner by Siemens Healthineers. The code is available at

[10] 2312.00953

Deep Image prior with StruCtUred Sparsity (DISCUS) for dynamic MRI reconstruction

High-quality training data are not always available in dynamic MRI. To address this, we propose a self-supervised deep learning method called deep image prior with structured sparsity (DISCUS) for reconstructing dynamic images. DISCUS is inspired by deep image prior (DIP) and recovers a series of images through joint optimization of network parameters and input code vectors. However, DISCUS additionally encourages group sparsity on frame-specific code vectors to discover the low-dimensional manifold that describes temporal variations across frames. Compared to prior work on manifold learning, DISCUS does not require specifying the manifold dimensionality. We validate DISCUS using three numerical studies. In the first study, we simulate a dynamic Shepp-Logan phantom with frames undergoing random rotations, translations, or both, and demonstrate that DISCUS can discover the dimensionality of the underlying manifold. In the second study, we use data from a realistic late gadolinium enhancement (LGE) phantom to compare DISCUS with compressed sensing (CS) and DIP and to demonstrate the positive impact of group sparsity. In the third study, we use retrospectively undersampled single-shot LGE data from five patients to compare DISCUS with CS reconstructions. The results from these studies demonstrate that DISCUS outperforms CS and DIP and that enforcing group sparsity on the code vectors helps discover true manifold dimensionality and provides additional performance gain.
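The group-sparsity idea can be illustrated with the standard group-lasso penalty (a generic sketch, not necessarily DISCUS's exact objective): summing the L2 norms of the frame-specific code vectors drives entire vectors to zero rather than individual entries, which is what reveals a low-dimensional manifold without specifying its dimensionality in advance.

```python
import math

def group_sparsity_penalty(code_vectors):
    """Sum of per-frame L2 norms (group lasso). Concentrating energy
    in few frames costs less than spreading it across many, so whole
    frame-specific code vectors are pushed to exactly zero."""
    return sum(math.sqrt(sum(c * c for c in v)) for v in code_vectors)
```

For the same total squared energy, the penalty is smaller when the energy sits in one group: `[[sqrt(2), 0], [0, 0]]` is penalized less than `[[1, 0], [1, 0]]`.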

[11] 2312.00956

A Cyclic Small Phase Theorem

This paper introduces a new phase definition, called the segmental phase, for multi-input multi-output linear time-invariant systems. The underpinning of the definition lies in the matrix segmental phase which, as its name implies, is graphically based on the smallest circular segment covering the matrix normalized numerical range in the unit disk. The matrix segmental phase has the crucial product eigen-phase bound, which distinguishes it from several existing phase notions in the literature. The proposed bound paves the way for stability analysis of a cyclic feedback system consisting of multiple subsystems. A cyclic small phase theorem is then established as our main result, which requires the loop system phase to lie between $-\pi$ and $\pi$. The proposed theorem complements a cyclic version of the celebrated small gain theorem. In addition, a generalization of the proposed theorem is made via the use of angular scaling techniques for reducing conservatism.

[12] 2312.00981

Securing the Sensing Functionality in ISAC Networks: An Artificial Noise Design

Integrated sensing and communications (ISAC) systems employ dual-functional signals to simultaneously accomplish radar sensing and wireless communication tasks. However, ISAC systems open up new sensing security vulnerabilities to malicious illegitimate eavesdroppers (Eves) that can also exploit the transmitted waveform to extract sensing information from the environment. In this paper, we investigate the beamforming design to enhance the sensing security of an ISAC system, where the communication user (CU) serves as a sensing Eve. Our objective is to maximize the mutual information (MI) for the legitimate radar sensing receiver, subject to a constraint on the MI obtainable by the Eve and to the quality of service of the CU. We then consider artificial noise (AN)-aided beamforming to further enhance the sensing security. Simulation results demonstrate that, compared with the baseline scheme, our proposed methods improve the MI of the legitimate receiver while limiting the sensing MI of the Eve, and that the use of AN further contributes to sensing security.

[13] 2312.01004

Learning-based Ecological Adaptive Cruise Control of Autonomous Electric Vehicles: A Comparison of ADP, DQN and DDPG Approaches

This paper presents model-based and model-free learning methods for economic and ecological adaptive cruise control (Eco-ACC) of connected and autonomous electric vehicles. For model-based optimal control of Eco-ACC, we considered longitudinal vehicle dynamics and a quasi-steady-state powertrain model including the physical limits of a commercial electric vehicle. We used adaptive dynamic programming (ADP), in which the value function was trained using data obtained from IPG CarMaker simulations. For real-time implementation, forward multi-step look-ahead prediction and optimization were executed in a receding horizon scheme to maximize the energy efficiency of the electric machine while avoiding rear-end collisions and satisfying the powertrain, speed, and distance-gap constraints. For model-free optimal control of Eco-ACC, we applied two reinforcement learning methods, Deep Q-Network (DQN) and Deep Deterministic Policy Gradient (DDPG), in which deep neural networks were trained in IPG CarMaker simulations. For performance demonstrations, the HWFET, US06, and WLTP Class 3b driving cycles were used to simulate the front vehicle, and the energy consumption of the host vehicle and the front vehicle was compared. In high-fidelity IPG CarMaker simulations, the proposed learning-based Eco-ACC methods demonstrated approximately 3-5% and 10-14% efficiency improvements in highway and city-highway driving scenarios, respectively, compared with the front vehicle. A video of the CarMaker simulation is available at
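
As a minimal stand-in for the DQN component (tabular Q-learning with the same bootstrapped target that DQN approximates with a neural network; the states, actions, and values below are invented for illustration):

```python
def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step.
    q: dict mapping state -> {action: value}. DQN replaces this table
    with a neural network trained toward the same bootstrapped target:
    reward + gamma * max_a' Q(next_state, a')."""
    target = reward + gamma * max(q[next_state].values())
    q[state][action] += alpha * (target - q[state][action])
    return q[state][action]

# Toy transition with a hypothetical Eco-ACC state space.
q = {"cruise": {"hold": 0.0, "brake": 0.0},
     "close_gap": {"hold": 0.0, "brake": 2.0}}
q_update(q, "cruise", "hold", reward=1.0, next_state="close_gap")
```

In the Eco-ACC setting, the reward would encode energy efficiency and constraint violations, and DDPG would extend this to continuous acceleration commands.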

[14] 2312.01009

Perceptive, Resilient, and Efficient Networks assisted by Reconfigurable Intelligent Surfaces

Wireless communications are nowadays shifting to higher operation frequencies with the aim to meet the ever-increasing demand for bandwidth. While reconfigurable intelligent surfaces (RISs) are usually envisioned to restore the line-of-sight of blocked links and to efficiently counteract the increased pathloss, their functionalities can extend far beyond these basic operations. Owing to their large surface and the multitude of scatterers, RISs can be exploited to perform advanced wavefront engineering, essentially transforming the incident beam into a non-trivial reflected beam that is able to address the challenges of high frequencies more efficiently than conventional beam-forming. In this paper, it is demonstrated how advanced wavefront engineering with RISs enables beam profiles that are able to focus, bend, and self-heal, thus offering functionalities beyond the current state-of-the-art. Their potential as enablers of perceptive, resilient, and efficient networks is discussed, and a localization technique based on a hybrid beam-forming/beam-focusing scheme is demonstrated.

[15] 2312.01043

Quantifying Hippocampal Shape Asymmetry in Alzheimer's Disease Using Optimal Shape Correspondences

Hippocampal atrophy in Alzheimer's disease (AD) is asymmetric and spatially inhomogeneous. While extensive work has been done on volume and shape analysis of hippocampal atrophy in AD, less attention has been given to hippocampal asymmetry specifically. Previous studies of hippocampal asymmetry are limited to global volume or shape measures, which do not localize shape asymmetry at the point level. In this paper, we propose to quantify localized shape asymmetry by optimizing point correspondences between the left and right hippocampi within a subject, while simultaneously favoring a compact statistical shape model of the entire sample. To account for related variables that affect differences between AD and healthy subjects, we build linear models that include other confounding factors. Our results on the OASIS3 dataset demonstrate that, compared to using volumetric information, shape asymmetry reveals fine-grained, localized differences that indicate the hippocampal regions of most significant shape asymmetry in AD patients.

[16] 2312.01061

Spectral-wise Implicit Neural Representation for Hyperspectral Image Reconstruction

Coded Aperture Snapshot Spectral Imaging (CASSI) reconstruction aims to recover the 3D spatial-spectral signal from 2D measurement. Existing methods for reconstructing Hyperspectral Image (HSI) typically involve learning mappings from a 2D compressed image to a predetermined set of discrete spectral bands. However, this approach overlooks the inherent continuity of the spectral information. In this study, we propose an innovative method called Spectral-wise Implicit Neural Representation (SINR) as a pioneering step toward addressing this limitation. SINR introduces a continuous spectral amplification process for HSI reconstruction, enabling spectral super-resolution with customizable magnification factors. To achieve this, we leverage the concept of implicit neural representation. Specifically, our approach introduces a spectral-wise attention mechanism that treats individual channels as distinct tokens, thereby capturing global spectral dependencies. Additionally, our approach incorporates two components, namely a Fourier coordinate encoder and a spectral scale factor module. The Fourier coordinate encoder enhances the SINR's ability to emphasize high-frequency components, while the spectral scale factor module guides the SINR to adapt to the variable number of spectral channels. Notably, the SINR framework enhances the flexibility of CASSI reconstruction by accommodating an unlimited number of spectral bands in the desired output. Extensive experiments demonstrate that our SINR outperforms baseline methods. By enabling continuous reconstruction within the CASSI framework, we take the initial stride toward integrating implicit neural representation into the field.
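
A common form of Fourier coordinate encoding, shown here as a generic sketch (the paper's exact encoder may differ): map a scalar spectral coordinate to sine/cosine features at octave-spaced frequencies, which lets a coordinate-based network represent high-frequency spectral variation.

```python
import math

def fourier_encode(t, num_freqs=4):
    """Map a scalar coordinate t (e.g. a normalized spectral position
    in [0, 1]) to sin/cos features at octave-spaced frequencies."""
    feats = []
    for k in range(num_freqs):
        w = (2 ** k) * math.pi
        feats.append(math.sin(w * t))
        feats.append(math.cos(w * t))
    return feats
```

Because the encoding is defined for any real-valued coordinate, the same network can be queried at arbitrary spectral positions, which is what enables continuous spectral super-resolution with a customizable magnification factor.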

[17] 2312.01077

OpEnCam: Lensless Optical Encryption Camera

Lensless cameras multiplex the incoming light before it is recorded by the sensor. This ability to multiplex the incoming light has led to the development of ultra-thin, high-speed, and single-shot 3D imagers. Recently, there have been various attempts at demonstrating another useful aspect of lensless cameras - their ability to preserve the privacy of a scene by capturing encrypted measurements. However, existing lensless camera designs suffer numerous inherent privacy vulnerabilities. To demonstrate this, we develop the first comprehensive attack model for encryption cameras, and propose OpEnCam -- a novel lensless OPtical ENcryption CAmera design that overcomes these vulnerabilities. OpEnCam encrypts the incoming light before capturing it using the modulating ability of optical masks. Recovery of the original scene from an OpEnCam measurement is possible only if one has access to the camera's encryption key, defined by the unique optical elements of each camera. Our OpEnCam design introduces two major improvements over existing lensless camera designs - (a) the use of two co-axially located optical masks, one stuck to the sensor and the other a few millimeters above the sensor and (b) the design of mask patterns, which are derived heuristically from signal processing ideas. We show, through experiments, that OpEnCam is robust against a range of attack types while still maintaining the imaging capabilities of existing lensless cameras. We validate the efficacy of OpEnCam using simulated and real data. Finally, we built and tested a prototype in the lab for proof-of-concept.

[18] 2312.01123

Joint Multiple FMCW Chirp Sequence Processing for Velocity Estimation and Ambiguity Resolving

In FMCW automotive radar applications, it is often a challenge to design a chirp sequence that satisfies the requirements set by practical driving scenarios and simultaneously enables high range resolution, large maximum range, and unambiguous velocity estimation. To support long-range scenarios, the chirps should have a sufficiently long duration compared to their bandwidth. At the same time, long chirps result in ambiguous velocity estimation for targets with high velocity. The problem of velocity ambiguity is often solved by using multiple chirp sequences with co-prime delay shifts between them. However, coherent processing of multiple chirp sequences is not possible using classical spectral estimation techniques based on the Fast Fourier Transform (FFT). This results in statistically inefficient velocity estimation and a loss of processing gain. In this work, we propose an algorithm that can jointly process multiple chirp sequences and resolve possible ambiguities present in the velocity estimates. The resulting algorithm is statistically efficient and gridless. Furthermore, it increases the resolution of velocity estimation beyond the natural resolution, owing to its super-resolution properties. These results are confirmed by both numerical simulations and experiments with an automotive radar IC.
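
The co-prime ambiguity-resolving idea can be illustrated with an integer toy based on the Chinese Remainder Theorem (a deliberate simplification: real velocity estimates are noisy and real-valued, and the paper's joint gridless estimator is far more involved). Each chirp sequence observes the true velocity only modulo its own ambiguity interval; co-prime intervals make the pair of remainders unique.

```python
def resolve_ambiguity(r1, m1, r2, m2):
    """Recover the smallest non-negative integer v with
    v % m1 == r1 and v % m2 == r2 (m1, m2 co-prime) by search.
    A toy stand-in for velocity unfolding with co-prime ambiguity
    intervals from two chirp sequences."""
    for v in range(m1 * m2):
        if v % m1 == r1 and v % m2 == r2:
            return v
    raise ValueError("no solution; are the moduli co-prime?")

# True (integer) velocity 23, observed modulo 7 and modulo 9:
assert resolve_ambiguity(23 % 7, 7, 23 % 9, 9) == 23
```

The unambiguous range grows to the product of the two intervals, which is why co-prime delay shifts between sequences are chosen in the first place.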

[19] 2312.01128

SPEEDNet: Salient Pyramidal Enhancement Encoder-Decoder Network for Colonoscopy Images

Accurate identification and precise delineation of regions of significance, such as tumors or lesions, is a pivotal goal in medical imaging analysis. This paper proposes SPEEDNet, a novel architecture for precisely segmenting lesions within colonoscopy images. SPEEDNet uses a novel block named Dilated-Involutional Pyramidal Convolution Fusion (DIPC). A DIPC block combines the dilated involution layers pairwise into a pyramidal structure to convert the feature maps into a compact space. This lowers the total number of parameters while improving the learning of representations across an optimal receptive field, thereby reducing the blurring effect. On the EBHISeg dataset, SPEEDNet outperforms three previous networks: UNet, FeedNet, and AttesResDUNet. Specifically, SPEEDNet attains an average Dice score of 0.952 and a recall of 0.971. Qualitative results and ablation studies provide additional insights into the effectiveness of SPEEDNet. The model size of SPEEDNet is 9.81 MB, significantly smaller than that of UNet (22.84 MB), FeedNet (185.58 MB), and AttesResDUNet (140.09 MB).
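
For reference, the Dice score reported above is the standard Dice coefficient between binary masks:

```python
def dice_score(pred, target):
    """Dice coefficient between two flat binary (0/1) masks:
    2|A∩B| / (|A| + |B|); defined as 1.0 when both masks are empty."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2.0 * inter / total if total else 1.0
```

A score of 0.952 therefore means the predicted lesion mask and the ground-truth mask overlap almost completely relative to their combined area.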

[20] 2312.01152

Ultra-Resolution Cascaded Diffusion Model for Gigapixel Image Synthesis in Histopathology

Diagnoses from histopathology images rely on information from both high and low resolutions of Whole Slide Images. Ultra-Resolution Cascaded Diffusion Models (URCDMs) allow for the synthesis of high-resolution images that are realistic at all magnification levels, focusing not only on fidelity but also on long-distance spatial coherency. Our model beats existing methods, improving the pFID-50k [2] score by 110.63, to 39.52. Additionally, a human expert evaluation study was performed, reaching a weighted Mean Absolute Error (MAE) of 0.11 for the Lower Resolution Diffusion Models and a weighted MAE of 0.22 for the URCDM.

[21] 2312.01193

Vehicle path and traffic flow optimization via lane changing of automated or semi-automated vehicles on motorways

Emerging vehicle automation and communication systems (VACS) may contribute to the improvement of vehicle travel time and the mitigation of motorway traffic congestion on the basis of appropriate control strategies. This work considers the possibility that automated, or semi-automated, vehicles are equipped with devices that perform (or recommend) lane-changing tasks. The lane-changing strategy MOBIL (minimizing overall braking induced by lane changing) has been chosen for its simplicity and flexibility, as well as for the reduced number of parameters that need to be specified (namely, the politeness factor and the threshold). A wide set of simulations has been performed, with MOBIL implemented within the microscopic traffic simulator Aimsun for a calibrated motorway network representing a stretch of motorway A12 in the Netherlands. The simulations revealed the impact that the choice of different parameters has on the travel time of different vehicles, also allowing their behaviour to be analysed under different traffic conditions (with or without traffic congestion).
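
MOBIL's decision rule weighs the ego vehicle's acceleration gain against the politeness-weighted acceleration changes imposed on neighbours, subject to a safety limit on the braking forced on the new follower. A sketch of the usual formulation (the parameter names and the `b_safe` default are illustrative):

```python
def mobil_lane_change_ok(acc_gain_self, acc_change_new_follower,
                         acc_change_old_follower, politeness,
                         threshold, new_follower_decel, b_safe=4.0):
    """MOBIL-style lane-change decision (values in m/s^2; positive
    acceleration changes are gains, negative are losses).

    Safety criterion: the deceleration forced on the new follower
    must not exceed b_safe. Incentive criterion: the ego's gain plus
    the politeness-weighted changes imposed on the old and new
    followers must exceed the switching threshold."""
    if new_follower_decel > b_safe:
        return False
    incentive = acc_gain_self + politeness * (
        acc_change_new_follower + acc_change_old_follower)
    return incentive > threshold
```

A politeness factor of 0 yields purely selfish lane changes, while higher values suppress changes that would force neighbours to brake, which is exactly the trade-off the simulations sweep over.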

[22] 2312.01212

A Comparative Analysis Towards Melanoma Classification Using Transfer Learning by Analyzing Dermoscopic Images

Melanoma is a type of skin cancer that starts in the cells known as melanocytes. It is more dangerous than other types of skin cancer because it can spread to other organs, and it can be fatal once it does. Early detection is the key to a cure, but diagnosis requires skilled doctors. This paper presents a system that combines deep learning techniques with established transfer learning methods to enable skin lesion classification and the diagnosis of melanoma. Using Convolutional Neural Networks (CNNs), it presents a method for categorizing melanoma images as benign or malignant. Deep neural networks generally need to be trained on a large number of photos, with a huge number of parameters, to achieve the expected results, as dermoscopic images are sensitive and very hard to classify. This paper emphasizes building models with less complexity and comparatively better accuracy from limited datasets and comparatively shallower networks, so that the system can predict melanoma from input dermoscopic images as correctly as possible on devices with less computational power. The dataset has been obtained from the ISIC Archive. Multiple pre-trained models (ResNet101, DenseNet, EfficientNet, InceptionV3) have been implemented using transfer learning techniques for the comparative analysis, and every model achieved good accuracy. Before training the models, the data was augmented along multiple parameters to improve accuracy. Moreover, the results are better than previous state-of-the-art approaches and are adequate for predicting melanoma. Among these architectures, DenseNet performed best, with a validation accuracy of 96.64%, a validation loss of 9.43%, and a test set accuracy of 99.63%.

[23] 2312.01239

Motion-aware Needle Segmentation in Ultrasound Images

Segmenting a moving needle in ultrasound images is challenging due to the presence of artifacts, noise, and needle occlusion. This task becomes even more demanding in scenarios where data availability is limited. Convolutional Neural Networks (CNNs) have been successful in many computer vision applications, but struggle to accurately segment needles without considering their motion. In this paper, we present a novel approach for needle segmentation that combines classical Kalman Filter (KF) techniques with data-driven learning, incorporating both needle features and needle motion. Our method offers three key contributions. First, we propose a compatible framework that seamlessly integrates into commonly used encoder-decoder style architectures. Second, we demonstrate superior performance compared to recent state-of-the-art needle segmentation models using our novel CNN-based KF-inspired block, achieving a 15\% reduction in pixel-wise needle tip error and an 8\% reduction in length error. Third, to our knowledge, we are the first to implement a learnable filter that incorporates non-linear needle motion to improve needle segmentation.
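
For context, the classical Kalman filter that the block is inspired by alternates predict and update steps; a minimal 1-D constant-position version (generic, not the paper's learnable variant):

```python
def kalman_1d(measurements, q=0.01, r=1.0):
    """Constant-position 1-D Kalman filter: predict, then update.
    q: process-noise variance, r: measurement-noise variance."""
    x, p = measurements[0], 1.0   # initial state estimate and variance
    out = [x]
    for z in measurements[1:]:
        p = p + q                 # predict: uncertainty grows
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with the innovation z - x
        p = (1 - k) * p
        out.append(x)
    return out
```

The paper's KF-inspired block replaces this fixed linear model with learned components, which is how it accommodates non-linear needle motion while keeping the same predict/update intuition.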

[24] 2312.01251

Stochastic Resource Allocation via Dual Tail Waterfilling

Optimal resource allocation in wireless systems still stands as a rather challenging task due to the inherent statistical characteristics of channel fading. On the one hand, minimax/outage-optimal policies are often overconservative and analytically intractable, despite advertising maximally reliable system performance. On the other hand, ergodic-optimal resource allocation policies are often susceptible to the statistical dispersion of heavy-tailed fading channels, leading to relatively frequent drastic performance drops. We investigate a new risk-aware formulation of the classical stochastic resource allocation problem for point-to-point power-constrained communication networks over fading channels with no cross-interference, by leveraging the Conditional Value-at-Risk (CV@R) as a coherent measure of risk. We rigorously derive closed-form expressions for the CV@R-optimal risk-aware resource allocation policy, as well as the optimal associated quantiles of the corresponding user rate functions by capitalizing on the underlying fading distribution, parameterized by dual variables. We then develop a purely dual tail waterfilling scheme, achieving significantly more rapid and assured convergence of dual variables, as compared with the primal-dual tail waterfilling algorithm, recently proposed in the literature. The effectiveness of the proposed scheme is also readily confirmed via detailed numerical simulations.
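Empirically, CV@R at level $\alpha$ is the expected loss in the worst $(1-\alpha)$ tail of the distribution; a simple discrete approximation (illustrative only, not the paper's closed-form policy):

```python
def cvar(losses, alpha=0.95):
    """Empirical Conditional Value-at-Risk: the average of the worst
    (1 - alpha) fraction of losses (simple discrete approximation)."""
    tail = sorted(losses, reverse=True)
    n = max(1, round(len(losses) * (1 - alpha)))
    return sum(tail[:n]) / n
```

At `alpha=0` this reduces to the plain mean, and as `alpha` approaches 1 it approaches the worst case, mirroring the abstract's contrast between ergodic-optimal (mean) and minimax/outage-optimal (worst-case) resource allocation.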

[25] 2312.01285

A Literature Review on the Smart Wheelchair Systems

This study offers an in-depth analysis of smart wheelchair (SW) systems, charting their progression from early developments to future innovations. It delves into various Brain-Computer Interface (BCI) systems, including mu rhythm, event-related potential, and steady-state visual evoked potential. The paper addresses challenges in signal categorization, proposing the sparse Bayesian extreme learning machine as an innovative solution. Additionally, it explores the integration of emotional states in BCI systems, the application of alternative control methods such as EMG-based systems, and the deployment of intelligent adaptive interfaces utilizing recurrent quantum neural networks. The study also covers advancements in autonomous navigation, assistance, and mapping, emphasizing their importance in SW systems. The human aspect of SW interaction receives considerable attention, specifically in terms of privacy, physiological factors, and the refinement of control mechanisms. The paper acknowledges the commercial challenges faced, like the limitations of indoor usage and the necessity for user training. For future applications, the research explores the potential of autonomous systems adept at adapting to changing environments and user needs. This exploration includes reinforcement learning and various control methods, such as eye and voice control, to improve adaptability and interaction. The potential integration with smart home technologies, including advanced features such as robotic arms, is also considered, aiming to further enhance user accessibility and independence. Ultimately, this study seeks to provide a thorough overview of SW systems, presenting extensive research to detail their historical evolution, current state, and future prospects.

[26] 2312.01336

Integrating Communication, Sensing and Computing in Satellite Internet of Things: Challenges and Opportunities

Satellite Internet of Things (IoT) uses satellites as access points for IoT devices to achieve the global coverage of future IoT systems, and is expected to support burgeoning IoT applications, including communication, sensing, and computing. However, the complex and dynamic satellite environments and limited network resources raise new challenges in the design of satellite IoT systems. In this article, we focus on the joint design of communication, sensing, and computing to improve the performance of satellite IoT, which differs substantially from the case of terrestrial IoT systems. We describe how the integration of the three functions can enhance system capabilities, and summarize the state-of-the-art solutions. Furthermore, we discuss the main challenges of integrating communication, sensing, and computing in satellite IoT that most urgently remain to be solved.

[27] 2312.01338

Enhancing and Adapting in the Clinic: Source-free Unsupervised Domain Adaptation for Medical Image Enhancement

Medical imaging provides many valuable clues involving anatomical structure and pathological characteristics. However, image degradation is a common issue in clinical practice, which can adversely impact observation and diagnosis by physicians and algorithms. Although extensive enhancement models have been developed, these models require thorough pre-training before deployment and fail to exploit the potential value of inference data after deployment. In this paper, we propose an algorithm for source-free unsupervised domain adaptive medical image enhancement (SAME), which adapts and optimizes enhancement models using test data in the inference phase. A structure-preserving enhancement network is first constructed to learn a robust source model from synthesized training data. Then a teacher-student model is initialized with the source model and conducts source-free unsupervised domain adaptation (SFUDA) by knowledge distillation with the test data. Additionally, a pseudo-label picker is developed to boost the knowledge distillation of enhancement tasks. Experiments were implemented on ten datasets from three medical image modalities to validate the advantage of the proposed algorithm, and setting analyses and ablation studies were also carried out to interpret the effectiveness of SAME. The remarkable enhancement performance and benefits for downstream tasks demonstrate the potential and generalizability of SAME. The code is available at

[28] 2312.01345

Introducing Modelling, Analysis and Control of Three-Phase Electrical Systems Using Geometric Algebra

State-of-the-art techniques for modeling, analysis and control of three-phase electrical systems belong to the real-valued multi-input/multi-output (MIMO) domain, or to the complex-valued nonlinear single-input/single-output (SISO) domain. In order to complement both domains while simplifying complexity and offering new analysis and design perspectives, this paper introduces the application of geometric algebra (GA) principles to the modeling, analysis and control of three-phase electrical systems. The key contribution for the modeling part is the identification of the transformation that allows transferring real-valued linear MIMO systems into GA-valued linear SISO representations (with independence of having a balanced or unbalanced system). Closed-loop stability analysis in the new space is addressed by using intrinsic properties of GA. In addition, a recipe for designing stabilizing and decoupling GA-valued controllers is provided. Numerical examples illustrate key developments and experiments corroborate the main findings.

[29] 2312.01351

Deep learning and traditional-based CAD schemes for the pulmonary embolism diagnosis: A survey

Nowadays, pulmonary Computed Tomography Angiography (CTA) is the main tool for detecting Pulmonary Embolism (PE). However, manual interpretation of CTA volumes requires a radiologist and is time-consuming and error-prone due to the specific conditions of lung tissue, the large volume of data, lack of experience, and eye fatigue. Therefore, Computer-Aided Diagnosis (CAD) systems are used as a second opinion for the diagnosis of PE. The purpose of this article is to review, evaluate, and compare the performance of deep learning and traditional CAD systems for PE diagnosis, and to help physicians and researchers in this field. In this study, articles available in databases such as IEEE, ScienceDirect, Wiley, Springer, Nature, and Wolters Kluwer on PE diagnosis using traditional and deep learning methods were examined. From 2002 to 2023, 23 papers meeting the considered criteria were selected for study. Each paper presents an automatic PE detection system that we evaluate using criteria such as sensitivity, False Positives (FP), and the number of datasets. This research work includes recent studies, state-of-the-art research works, and a more comprehensive overview compared to previously published review articles in this research area.

[30] 2312.01403

OplixNet: Towards Area-Efficient Optical Split-Complex Networks with Real-to-Complex Data Assignment and Knowledge Distillation

Having the potential for high speed, high throughput, and low energy cost, optical neural networks (ONNs) have emerged as a promising candidate for accelerating deep learning tasks. In conventional ONNs, light amplitudes are modulated at the input and detected at the output. However, light phases are ignored in conventional structures, although they can also carry information for computing. To address this issue, in this paper we propose a framework called OplixNet to compress the areas of ONNs by modulating input image data into both the amplitude and phase parts of light signals. The input and output parts of the ONNs are redesigned to make full use of both amplitude and phase information. Moreover, mutual learning across different ONN structures is introduced to maintain accuracy. Experimental results demonstrate that the proposed framework significantly reduces the areas of ONNs while keeping accuracy within an acceptable range. For instance, 75.03% of the area is reduced with a 0.33% accuracy decrease on a fully connected neural network (FCNN), and 74.88% of the area is reduced with a 2.38% accuracy decrease on ResNet-32.
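The real-to-complex idea, carrying one real value in the amplitude and another in the phase of a single light signal, can be sketched as a simple packing/unpacking rule. The phase range used below is an assumption for illustration only, not OplixNet's actual data assignment.

```python
import cmath
import math

# Pack two nonnegative real inputs into one complex "light signal":
# one in the amplitude, one in the phase (scaled to [0, pi] so it is
# uniquely recoverable). The mapping is illustrative, not the paper's.

def pack(a, b, b_max=1.0):
    phase = math.pi * b / b_max        # phase in [0, pi]
    return a * cmath.exp(1j * phase)

def unpack(z, b_max=1.0):
    return abs(z), b_max * cmath.phase(z) / math.pi

z = pack(0.8, 0.4)
a, b = unpack(z)
print(round(a, 6), round(b, 6))  # both values recovered from one signal
```

One complex signal thus carries two real inputs, which is what lets the framework shrink the photonic array area roughly in half.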

[31] 2312.01423

Self-Critical Alternate Learning based Semantic Broadcast Communication

Semantic communication (SemCom) has been deemed a promising communication paradigm to break through the bottleneck of traditional communications. Nonetheless, most existing works focus on point-to-point communication scenarios, and their extension to multi-user scenarios is not straightforward, since directly scaling the JSCC framework to a multi-user communication system is cost-inefficient. Meanwhile, previous methods optimize the system by differentiable bit-level supervision, easily leading to a "semantic gap". Therefore, we delve into multi-user broadcast communication (BC) based on the universal transformer (UT) and propose a reinforcement learning (RL) based self-critical alternate learning (SCAL) algorithm, named SemanticBC-SCAL, to capably adapt to the different BC channels from one transmitter (TX) to multiple receivers (RXs) for the sentence generation task. In particular, to enable stable optimization via a non-differentiable semantic metric, we regard sentence similarity as a reward and formulate this learning process as an RL problem. Considering the huge decision space, we adopt lightweight but efficient self-critical supervision to guide the learning process. Meanwhile, an alternate learning mechanism is developed to provide cost-effective learning, in which the encoder and decoders are updated asynchronously with different iterations. Notably, the incorporation of RL makes SemanticBC-SCAL compliant with any user-defined semantic similarity metric and simultaneously addresses the channel non-differentiability issue by alternate learning. Besides, the convergence of SemanticBC-SCAL is also theoretically established. Extensive simulations have been conducted to verify the effectiveness and superiority of our approach, especially at low SNRs.

[32] 2312.01427

Novel KLD-based Resource Allocation for Integrated Sensing and Communication

In this paper, we introduce a novel resource allocation approach for integrated sensing-communication (ISAC) using the Kullback-Leibler divergence (KLD) metric. Specifically, we consider a base-station with limited power and antenna resources serving a number of communication users and detecting multiple targets simultaneously. First, we analyze the KLD for two possible antenna deployments, which are the separated and shared deployments, then use the results to optimize the resources of the base-station through minimising the average KLD for the network while satisfying a minimum predefined KLD requirement for each user equipment (UE) and target. To this end, the optimisation is formulated and presented as a mixed integer nonlinear programming (MINLP) problem and then solved using two approaches. In the first approach, we employ a genetic algorithm, which offers remarkable performance but demands substantial computational resources; and in the second approach, we propose a rounding-based interior-point method (RIPM) that provides a more computationally-efficient alternative solution at a negligible performance loss. The results demonstrate that the KLD metric can be an effective means for optimising ISAC networks, and that both optimisation solutions presented offer superior performance compared to uniform power and antenna allocation.
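As a concrete reference for the metric this paper optimizes, the KLD between two univariate Gaussians has a well-known closed form, which illustrates why it is a tractable measure for both detection and communication quality. The numbers below are illustrative, unrelated to the paper's network setup.

```python
import math

# Closed-form KL divergence KL( N(m0, s0^2) || N(m1, s1^2) ).
def kld_gauss(m0, s0, m1, s1):
    return math.log(s1 / s0) + (s0**2 + (m0 - m1)**2) / (2 * s1**2) - 0.5

print(kld_gauss(0.0, 1.0, 0.0, 1.0))            # identical distributions -> 0
print(round(kld_gauss(0.0, 1.0, 1.0, 1.0), 3))  # unit mean shift -> 0.5
```

Because the KLD grows with the separation between the two hypothesized distributions, allocating power/antennas to raise per-user and per-target KLD directly improves both detectability and communication reliability, which is the intuition behind the paper's formulation.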

[33] 2312.01441

Koopman-based feedback design with stability guarantees

We present a method to design a state-feedback controller ensuring exponential stability for nonlinear systems using only measurement data. Our approach relies on Koopman operator theory and uses robust control to explicitly account for approximation errors due to finitely many data samples. To simplify practical usage across various applications, we provide a tutorial-style exposition of the feedback design and its stability guarantees for single-input systems. Moreover, we extend this controller design to multi-input systems and more flexible nonlinear state-feedback controllers using gain-scheduling techniques to increase the guaranteed region of attraction. As the proposed controller design is framed as a semidefinite program, it allows for an efficient solution. Finally, we validate the proposed feedback design procedure by means of numerical examples.
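A minimal data-driven flavor of the Koopman idea is to approximate the operator's action on a chosen observable by least squares over snapshot pairs. The scalar system, observable, and noise-free data below are illustrative and far simpler than the paper's robust, error-aware design.

```python
# Least-squares Koopman approximation on a single observable g(x) = x,
# from snapshot pairs (x_k, x_{k+1}) of a scalar linear system x+ = a*x.
# System and data are illustrative; no noise or robustness handling here.

a_true = 0.8
xs = [1.0, 0.5, -0.3, 2.0, -1.5]
pairs = [(x, a_true * x) for x in xs]

# Koopman matrix (here a scalar) via least squares:
# K = sum(x_next * x) / sum(x * x)
num = sum(xp * x for x, xp in pairs)
den = sum(x * x for x, xp in pairs)
K = num / den
print(K)  # recovers a_true on this noise-free data
```

The paper's contribution is precisely what this sketch omits: with finitely many noisy samples, K carries an approximation error, and the feedback design must account for it via robust control to certify exponential stability.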

[34] 2312.01460

Towards an accurate and generalizable multiple sclerosis lesion segmentation model using self-ensembled lesion fusion

Automatic multiple sclerosis (MS) lesion segmentation using multi-contrast magnetic resonance (MR) images provides improved efficiency and reproducibility compared to manual delineation. Current state-of-the-art automatic MS lesion segmentation methods utilize modified U-Net-like architectures. However, in the literature, dedicated architecture modifications were always required to maximize their performance. In addition, the best-performing methods have not proven to be generalizable to diverse test datasets with contrast variations and image artifacts. In this work, we developed an accurate and generalizable MS lesion segmentation model using the well-known U-Net architecture without further modification. A novel test-time self-ensembled lesion fusion strategy is proposed that not only achieved the best performance using the ISBI 2015 MS segmentation challenge data but also demonstrated robustness across various self-ensemble parameter choices. Moreover, equipped with instance normalization rather than batch normalization widely used in literature, the model trained on the ISBI challenge data generalized well on clinical test datasets from different scanners.

[35] 2312.01499

Towards Decentralized Task Offloading and Resource Allocation in User-Centric Mobile Edge Computing

In traditional cellular-based mobile edge computing (MEC), users at the edge of the cell are prone to suffer severe inter-cell interference and signal attenuation, leading to low throughput and even transmission interruptions. Such an edge effect severely obstructs offloading of tasks to MEC servers. To address this issue, we propose user-centric mobile edge computing (UCMEC), a novel MEC architecture integrating user-centric transmission, which can ensure high throughput and reliable communication for task offloading. We then formulate an optimization problem with joint consideration of task offloading, power control, and computing resource allocation in UCMEC, aiming at obtaining the optimal performance in terms of long-term average total delay. To solve this intractable problem, we propose two decentralized joint optimization schemes based on multi-agent deep reinforcement learning (MADRL) and convex optimization, which consider both cooperation and non-cooperation among network nodes. Simulation results demonstrate that the proposed schemes in UCMEC can significantly improve the uplink transmission rate by up to 343.56% and reduce the long-term average total delay by up to 45.57% compared to traditional cellular-based MEC.

[36] 2312.01573

Survey on deep learning in multimodal medical imaging for cancer detection

The task of multimodal cancer detection is to determine the locations and categories of lesions by using different imaging techniques, which is one of the key research methods for cancer diagnosis. Recently, deep learning-based object detection has made significant developments due to its strength in semantic feature extraction and nonlinear function fitting. However, multimodal cancer detection remains challenging due to morphological differences in lesions, interpatient variability, difficulty in annotation, and imaging artifacts. In this survey, we mainly investigate over 150 papers in recent years with respect to multimodal cancer detection using deep learning, with a focus on datasets and solutions to various challenges such as data annotation, variance between classes, small-scale lesions, and occlusion. We also provide an overview of the advantages and drawbacks of each approach. Finally, we discuss the current scope of work and provide directions for the future development of multimodal cancer detection.

[37] 2312.01610

Accelerated Parallel Magnetic Resonance Imaging with Compressed Sensing using Structured Sparsity

Compressed sensing is an imaging paradigm that allows one to invert an underdetermined linear system by imposing the a priori knowledge that the sought-after solution is sparse (i.e., mostly zeros). Previous works have shown that if one also knows something about the sparsity pattern (the locations where non-zero entries exist), one can take advantage of this structure to improve the quality of the result. A significant application of compressed sensing is magnetic resonance imaging (MRI), where samples are acquired in the Fourier domain. Compressed sensing allows one to reconstruct a high-quality image with fewer samples, which can be collected with a faster scan. This increases the robustness of MRI to patient motion since less motion is possible during the shorter scan. Parallel imaging, where multiple coils are used to gather data, is another, more ubiquitously used method for accelerating MRI. Existing combinations of these acceleration methods, such as Sparse SENSE, yield high-quality images with an even shorter scan time than either technique alone. In this work, we show how to modify Sparse SENSE with structured sparsity to reconstruct a high-quality image with even fewer samples.
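The structured-sparsity idea can be illustrated with a weighted soft-thresholding step, the proximal operator of a weighted L1 penalty: entries believed to lie on the known support receive a smaller threshold than the rest. The signal and weights below are illustrative; the paper's Sparse SENSE model is considerably more involved.

```python
# Weighted soft-thresholding: the proximal step of a weighted L1 penalty.
# Entries on the assumed support (weight < 1) are shrunk less aggressively.

def soft(v, t):
    """Scalar soft-threshold with threshold t >= 0."""
    return (abs(v) - t) * (1 if v > 0 else -1) if abs(v) > t else 0.0

def weighted_soft_threshold(x, base_t, weights):
    return [soft(v, base_t * w) for v, w in zip(x, weights)]

x = [0.9, 0.3, -0.25, 0.05]
support_weights = [0.1, 1.0, 0.1, 1.0]   # low weight = likely non-zero entry
out = weighted_soft_threshold(x, base_t=0.2, weights=support_weights)
print([round(v, 3) for v in out])
```

In an iterative reconstruction, such a weighted shrinkage step preserves small coefficients at likely support locations that a uniform threshold would erase, which is the mechanism by which knowledge of the sparsity pattern buys additional undersampling.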

[38] 2312.01625

Interference-Constrained Scheduling of a Cognitive Multi-hop Underwater Acoustic Network

This paper investigates optimal scheduling for a cognitive multi-hop underwater acoustic network with a primary user interference constraint. The network consists of primary and secondary users, with multi-hop transmission adopted for both user types to provide reliable communications. Critical characteristics of underwater acoustic channels, including significant propagation delay, distance-and-frequency dependent attenuation, half-duplex modems, and inter-hop interference, are taken into account in the design and analysis. In particular, time-slot allocation is found to be more effective than frequency-slot allocation due to the underwater channel model. The goal of the network scheduling problem is to maximize the end-to-end throughput of the overall system while limiting the throughput loss of primary users. Both centralized and decentralized approaches are considered. The Partially Observable Markov Decision Process (POMDP) framework is applied to formulate the optimization problem, and an optimal dynamic programming algorithm is derived. However, the optimal dynamic programming solution is computationally intractable. Key properties of the objective function are shown, enabling the design of approximate schemes with significant complexity reduction. Numerical results show that the proposed schemes significantly increase system throughput while maintaining the primary throughput loss constraint. Under certain traffic conditions, the throughput gain over frequency-slot allocation schemes can be as high as 50%.

[39] 2312.01638

J-Net: Improved U-Net for Terahertz Image Super-Resolution

Terahertz (THz) waves are electromagnetic waves in the 0.1 to 10 THz frequency range, and THz imaging is utilized in a range of applications, including security inspections, biomedical fields, and the non-destructive examination of materials. However, THz images have low resolution due to the long wavelength of THz waves, so improving the resolution of THz images is one of the current hot research topics. We propose a novel network architecture called J-Net, an improved version of U-Net, for THz image super-resolution. It employs simple baseline blocks which can extract low-resolution (LR) image features and learn the mapping of LR images to high-resolution (HR) images efficiently. All training was conducted using the DIV2K+Flickr2K dataset, and we employed the peak signal-to-noise ratio (PSNR) for quantitative comparison. In our comparisons with other THz image super-resolution methods, J-Net achieved a PSNR of 32.52 dB, surpassing other techniques by more than 1 dB. J-Net also demonstrates superior PSNR and visual quality on real THz images compared to other methods.

[40] 2312.01644

TMSR: Tiny Multi-path CNNs for Super Resolution

In this paper, we propose a tiny multi-path CNN-based Super-Resolution (SR) method, called TMSR, drawing mainly on recent tiny CNN-based SR methods with under 5k parameters. The main contributions of the proposed method are improved multi-path learning and a self-defined activation function. The experimental results show that TMSR obtains competitive image quality (i.e., PSNR and SSIM) compared to related works under 5k parameters.

[41] 2312.01679

Adversarial Medical Image with Hierarchical Feature Hiding

Deep learning based methods for medical images can be easily compromised by adversarial examples (AEs), posing a serious security flaw in clinical decision-making. It has been discovered that conventional adversarial attacks like PGD, which optimize the classification logits, are easy to distinguish in the feature space, resulting in accurate reactive defenses. To better understand this phenomenon and reassess the reliability of reactive defenses for medical AEs, we thoroughly investigate the characteristics of conventional medical AEs. Specifically, we first theoretically prove that conventional adversarial attacks change the outputs by continuously optimizing vulnerable features in a fixed direction, thereby leading to outlier representations in the feature space. Then, a stress test is conducted to reveal the vulnerability of medical images by comparing them with natural images. Interestingly, this vulnerability is a double-edged sword which can be exploited to hide AEs. We then propose a simple-yet-effective hierarchical feature constraint (HFC), a novel add-on to conventional white-box attacks, which helps hide the adversarial feature in the target feature distribution. The proposed method is evaluated on three medical datasets, both 2D and 3D, with different modalities. The experimental results demonstrate the superiority of HFC, i.e., it bypasses an array of state-of-the-art adversarial medical AE detectors more efficiently than competing adaptive attacks, which reveals the deficiencies of medical reactive defenses and enables the development of more robust defenses in the future.

[42] 2312.01689

Fast and accurate sparse-view CBCT reconstruction using meta-learned neural attenuation field and hash-encoding regularization

Cone beam computed tomography (CBCT) is an emerging medical imaging technique to visualize the internal anatomical structures of patients. During a CBCT scan, several projection images of different angles or views are collectively utilized to reconstruct a tomographic image. However, reducing the number of projections in a CBCT scan while preserving the quality of the reconstructed image is challenging due to the nature of an ill-posed inverse problem. Recently, a neural attenuation field (NAF) method was proposed, adopting a neural radiance field algorithm as a new way for CBCT reconstruction and demonstrating fast and promising results using only 50 views. However, decreasing the number of projections is still preferable to reduce potential radiation exposure, and a faster reconstruction time is required considering a typical scan time. In this work, we propose a fast and accurate sparse-view CBCT reconstruction (FACT) method to provide better reconstruction quality and faster optimization speed with a minimal number of view acquisitions (< 50 views). In the FACT method, we meta-train a neural network and a hash-encoder using a few scans (= 15), and a new regularization technique is utilized to reconstruct the details of an anatomical structure. In conclusion, we show that the FACT method produces better and faster reconstruction results than other conventional algorithms, based on CBCT scans of different body parts (chest, head, and abdomen) and CT vendors (Siemens, Philips, and GE).

[43] 2312.01726

Simultaneous Alignment and Surface Regression Using Hybrid 2D-3D Networks for 3D Coherent Layer Segmentation of Retinal OCT Images with Full and Sparse Annotations

Layer segmentation is important to quantitative analysis of retinal optical coherence tomography (OCT). Recently, deep learning based methods have been developed to automate this task and yield remarkable performance. However, due to the large spatial gap and potential mismatch between the B-scans of an OCT volume, all of them were based on 2D segmentation of individual B-scans, which may lose the continuity and diagnostic information of the retinal layers in 3D space. Besides, most of these methods required dense annotation of the OCT volumes, which is labor-intensive and expertise-demanding. This work presents a novel framework based on hybrid 2D-3D convolutional neural networks (CNNs) to obtain continuous 3D retinal layer surfaces from OCT volumes, which works well with both full and sparse annotations. The 2D features of individual B-scans are extracted by an encoder consisting of 2D convolutions. These 2D features are then used to produce the alignment displacement vectors and layer segmentation by two 3D decoders coupled via a spatial transformer module. Two losses are proposed to utilize the retinal layers' natural property of being smooth for B-scan alignment and layer segmentation, respectively, and are the key to the semi-supervised learning with sparse annotation. The entire framework is trained end-to-end. To the best of our knowledge, this is the first work that attempts 3D retinal layer segmentation in volumetric OCT images based on CNNs. Experiments on a synthetic dataset and three public clinical datasets show that our framework can effectively align the B-scans for potential motion correction, and achieves superior performance to state-of-the-art 2D deep learning methods in terms of both layer segmentation accuracy and cross-B-scan 3D continuity in both fully and semi-supervised settings, thus offering more clinical value than previous works.

[44] 2312.01727

Deep learning acceleration of iterative model-based light fluence correction for photoacoustic tomography

Photoacoustic tomography (PAT) is a promising imaging technique that can visualize the distribution of chromophores within biological tissue. However, the accuracy of PAT imaging is compromised by light fluence (LF), which hinders the quantification of light absorbers. Currently, model-based iterative methods are widely used for LF correction but require significant computational resources due to repeated LF estimation using differential light transport models. To improve LF correction efficiency, we propose to use the Fourier neural operator (FNO), a neural network specially designed for solving differential equations, to learn the forward projection of light transport during PAT imaging. Trained using paired finite-element-based LF simulation data, our FNO model replaces the traditional computationally heavy LF estimator during iterative correction, such that the correction procedure can be significantly accelerated. Simulation and experimental results demonstrate that our proposed method achieves comparable LF correction quality to traditional iterative methods while reducing the correction time by over 30 times.

[45] 2312.01740

MobileUtr: Revisiting the relationship between light-weight CNN and Transformer for efficient medical image segmentation

Due to the scarcity and specific imaging characteristics of medical images, light-weighting Vision Transformers (ViTs) for efficient medical image segmentation is a significant challenge, and current studies have not yet paid attention to this issue. This work revisits the relationship between CNNs and Transformers in lightweight universal networks for medical image segmentation, aiming to integrate the advantages of both worlds at the infrastructure design level. In order to leverage the inductive bias inherent in CNNs, we abstract a Transformer-like lightweight CNN block (ConvUtr) as the patch embedding of ViTs, feeding the Transformer with denoised, non-redundant, and highly condensed semantic information. Moreover, an adaptive Local-Global-Local (LGL) block is introduced to facilitate efficient local-to-global information flow, maximizing the Transformer's global context extraction capabilities. Finally, we build an efficient medical image segmentation model (MobileUtr) based on CNN and Transformer. Extensive experiments on five public medical image datasets with three different modalities demonstrate the superiority of MobileUtr over state-of-the-art methods, while boasting lighter weights and lower computational cost. Code is available at

[46] 2312.01744

SEFGAN: Harvesting the Power of Normalizing Flows and GANs for Efficient High-Quality Speech Enhancement

This paper proposes SEFGAN, a Deep Neural Network (DNN) combining maximum likelihood training and Generative Adversarial Networks (GANs) for efficient speech enhancement (SE). For this, a DNN is trained to synthesize the enhanced speech conditioned on noisy speech using a Normalizing Flow (NF) as generator in a GAN framework. While the combination of likelihood models and GANs is not trivial, SEFGAN demonstrates that a hybrid adversarial and maximum likelihood training approach enables the model to maintain high quality audio generation and log-likelihood estimation. Our experiments indicate that this approach strongly outperforms the baseline NF-based model without introducing additional complexity to the enhancement network. A comparison using computational metrics and a listening experiment reveals that SEFGAN is competitive with other state-of-the-art models.

[47] 2312.01777

Doubly 1-Bit Quantized Massive MIMO

Enabling communications in the (sub-)THz band will call for massive multiple-input multiple-output (MIMO) arrays at either the transmit- or receive-side, or at both. To scale down the complexity and power consumption when operating across massive frequency and antenna dimensions, a sacrifice in the resolution of the digital-to-analog/analog-to-digital converters (DACs/ADCs) will be inevitable. In this paper, we analyze the extreme scenario where both the transmit- and receive-side are equipped with fully digital massive MIMO arrays and 1-bit DACs/ADCs, which leads to a system with minimum radio-frequency complexity, cost, and power consumption. Building upon the Bussgang decomposition, we derive a tractable approximation of the mean squared error (MSE) between the transmitted data symbols and their soft estimates. Numerical results show that, despite its simplicity, a doubly 1-bit quantized massive MIMO system with very large antenna arrays can deliver an impressive performance in terms of MSE and symbol error rate.
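For readers unfamiliar with the Bussgang decomposition used here, it writes the 1-bit quantizer output as a linear gain times the input plus distortion uncorrelated with the input; for a zero-mean unit-variance Gaussian input the gain is sqrt(2/pi). A quick seeded Monte-Carlo check (illustrative, scalar real-valued case rather than the paper's MIMO setting):

```python
import math
import random

# Bussgang decomposition of a 1-bit quantizer: sign(x) = G*x + d, with
# G = E[sign(x)*x] / E[x^2] and d uncorrelated with x. For x ~ N(0, 1),
# the theoretical gain is sqrt(2/pi) ~= 0.798.

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(100_000)]

G = sum(math.copysign(1.0, x) * x for x in xs) / sum(x * x for x in xs)
print(round(G, 3), round(math.sqrt(2 / math.pi), 3))

# The distortion d = sign(x) - G*x is (empirically) uncorrelated with x.
corr = sum((math.copysign(1.0, x) - G * x) * x for x in xs) / len(xs)
print(abs(corr) < 1e-6)
```

Treating the quantizer as the linear gain G plus an uncorrelated noise term is what makes a tractable MSE expression possible despite the hard nonlinearity at both ends of the link.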

[48] 2312.01785

Closed-Form Solutions for Grid-Forming Converters: A Design-Oriented Study

This paper derives closed-form solutions for grid-forming converters with power synchronization control (PSC) by subtly simplifying and factorizing the complex closed-loop models. The solutions can offer clear analytical insights into control-loop interactions, enabling guidelines for robust controller design. It is proved that 1) the proportional gains of PSC and alternating voltage control (AVC) can introduce negative resistance, which aggravates synchronous resonance (SR) of power control, 2) the integral gain of AVC is the cause of sub-synchronous resonance (SSR) in stiff-grid interconnections, although the proportional gain of AVC can help dampen the SSR, and 3) surprisingly, the current controller that dampens SR actually exacerbates SSR. Controller design guidelines are given based on analytical insights. The findings are verified by simulations and experimental results.

[49] 2312.01808

Head Orientation Estimation with Distributed Microphones Using Speech Radiation Patterns

Determining the head orientation of a talker is not only beneficial for various speech signal processing applications, such as source localization or speech enhancement, but also facilitates intuitive voice control and interaction with smart environments or modern car assistants. Most approaches to head orientation estimation are based on visual cues; however, these require camera systems, which are often not available. We present an approach that uses only audio signals, captured with just a few distributed microphones around the talker. Specifically, we propose a novel method that directly incorporates measured or modeled speech radiation patterns to infer the talker's orientation during active speech periods based on a cosine similarity measure. Moreover, an automatic gain adjustment technique is proposed for uncalibrated, irregular microphone setups, such as ad-hoc sensor networks. In experiments with signals recorded in both anechoic and reverberant environments, the proposed method outperforms state-of-the-art approaches, using either measured or modeled speech radiation patterns.
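The core matching step can be sketched as follows. This is a hypothetical toy version (the cardioid pattern and angles are illustrative stand-ins, not the paper's measured radiation data): per-microphone signal energies are compared against modeled radiation-pattern gains for a grid of candidate orientations, and the candidate with the highest cosine similarity wins.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two gain vectors
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Microphone azimuths around the talker and candidate head orientations
mic_angles = np.deg2rad([0, 60, 120, 180, 240, 300])
candidates = np.deg2rad(np.arange(0, 360, 5))

def cardioid_gain(mic_angle, head_angle):
    # Simple cardioid stand-in for a speech radiation pattern
    return 0.5 * (1.0 + np.cos(mic_angle - head_angle))

true_orientation = np.deg2rad(60)
observed = cardioid_gain(mic_angles, true_orientation)  # per-mic energies

# Score every candidate orientation by cosine similarity, pick the best
scores = [cosine(observed, cardioid_gain(mic_angles, c)) for c in candidates]
estimate = np.rad2deg(candidates[int(np.argmax(scores))])
print(f"estimated head orientation: {estimate:.0f} deg")
```

In practice the observed energies would come from active speech periods and an automatic gain adjustment would precede the comparison, as the abstract describes.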

[50] 2312.01831

Equivariant plug-and-play image reconstruction

Plug-and-play algorithms constitute a popular framework for solving inverse imaging problems that rely on the implicit definition of an image prior via a denoiser. These algorithms can leverage powerful pre-trained denoisers to solve a wide range of imaging tasks, circumventing the need to train models on a per-task basis. Unfortunately, plug-and-play methods often show unstable behavior, hampering their promise of versatility and leading to suboptimal quality of reconstructed images. In this work, we show that enforcing equivariance to certain groups of transformations (rotations, reflections, and/or translations) on the denoiser strongly improves the stability of the algorithm as well as its reconstruction quality. We provide a theoretical analysis that illustrates the role of equivariance in improving performance and stability. We present a simple algorithm that enforces equivariance on any existing denoiser by applying a random transformation to the input of the denoiser and the inverse transformation to the output at each iteration of the algorithm. Experiments on multiple imaging modalities and denoising networks show that the equivariant plug-and-play algorithm improves both reconstruction performance and stability compared to its non-equivariant counterpart.
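The wrapper described above is simple enough to sketch directly. The following is a minimal illustration with a hypothetical toy denoiser (a real application would plug in a pre-trained network): at each call, a random element of a transformation group (here, 90-degree rotations) is applied to the input, the denoiser runs, and the inverse transform is applied to the output.

```python
import numpy as np

def toy_denoiser(img):
    # Stand-in for a learned denoiser: a 3x3 box blur built from rolls.
    # This particular blur is itself rotation-equivariant, which makes
    # the wrapper's output identical to a direct call.
    return sum(np.roll(np.roll(img, dy, 0), dx, 1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0

def equivariant_denoiser(img, denoiser, rng):
    k = rng.integers(0, 4)                  # random element of the rotation group
    rotated = np.rot90(img, k)              # transform the input
    return np.rot90(denoiser(rotated), -k)  # inverse transform on the output

rng = np.random.default_rng(0)
noisy = rng.normal(0.0, 1.0, (8, 8))
out = equivariant_denoiser(noisy, toy_denoiser, rng)
print(out.shape)
```

Averaging this randomized wrapper over iterations of a plug-and-play scheme is what enforces (approximate) equivariance on an arbitrary denoiser.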

[51] 2312.01888

Highly Accelerated Weighted MMSE Algorithms for Designing Precoders in FDD Systems with Incomplete CSI

In this work, we derive a lower bound on the training-based achievable downlink (DL) sum rate (SR) of a multi-user multiple-input single-output (MISO) system operating in frequency-division duplex (FDD) mode. Assuming linear minimum mean square error (LMMSE) channel estimation is used, we establish a connection between the derived lower bound on the signal-to-interference-plus-noise ratio (SINR) and an average MSE, which allows us to reformulate the SR maximization problem as the minimization of the augmented weighted average MSE (AWAMSE). We propose an iterative precoder design with three alternating steps, all given in closed form, drastically reducing the computation time. We numerically show the effectiveness of the proposed approach in challenging scenarios with limited channel knowledge, i.e., scenarios with a very limited number of pilots. We additionally propose a more efficient version of the well-known stochastic iterative WMMSE (SIWMMSE) approach, where the precoder update is given in closed form.

[52] 2312.01920

Strong stabilization in classical control via adjustment of fractional powers into integers

We address stabilization of linear time-invariant (LTI), single-input single-output (SISO) systems in the Laplace domain, with a stable controller in a single feedback loop. Such stabilization is called strong. Plants that satisfy a parity interlacing property are known to be strongly stabilizable, but finding such controllers is a well-known, difficult problem. Existing general methods are based on either manual search or a clever use of Nevanlinna-Pick interpolation with polynomials of possibly high integer order. Here we present a new, simple, and general method for strongly stabilizing systems of relative degree less than 3. We call our method adjustment of fractional powers (AFP). Our theoretical contributions consist of proposing the functional form used, which involves a product of several terms of the form $\displaystyle \left ( \frac{s+a}{s+b} \right )^m$; showing that real $m$'s will arise whenever the plant is strongly stabilizable; and proving that integer $m$'s can be obtained by continuously varying the free parameters (i.e., the $a$'s and $b$'s). Our practical contributions include demonstrating a simple way, based on a trigonometric trick, to adjust the fractional powers until they take reasonable integer values. We include brief but necessary associated discussion to make the paper accessible to a broad audience. We also present ten numerical examples of successful control design with varying levels of difficulty, including plants whose transfer functions have relative degrees of 0, 1, or 2, and with right half plane zeros of multiplicity possibly exceeding one.

[53] 2312.01928

Consensus-Based Distributed Nonlinear Filtering with Kernel Mean Embedding

This paper proposes a consensus-based distributed nonlinear filter with kernel mean embedding (KME). This fills the gap of posterior density approximation with KME for distributed nonlinear dynamic systems. To approximate the posterior density, the system state is embedded into a higher-dimensional reproducing kernel Hilbert space (RKHS), where the nonlinear measurement function becomes linear. As a result, an update rule for the KME of the posterior distribution is established in the RKHS. To show that the proposed distributed filter can achieve centralized estimation accuracy, a centralized filter, serving as an extension of the standard Kalman filter from the state space to the RKHS, is developed first. Benefiting from the KME, the proposed distributed filter converges to the centralized one while maintaining the distributed pattern. Two examples demonstrate the effectiveness of the developed filters in target tracking scenarios, including a nearly constant-velocity target and a turning target, with bearing-only and range-and-bearing measurements, respectively.

[54] 2312.01940

Intelligent Reflecting Surface-Aided Electromagnetic Stealth Against Radar Detection

While traditional electromagnetic stealth materials/metasurfaces can render a target virtually invisible to some extent, they lack flexibility and adaptability, and can only operate within a limited frequency and angle/direction range, making it challenging to ensure the expected stealth performance. In view of this, we propose in this paper a new intelligent reflecting surface (IRS)-aided electromagnetic stealth system mounted on targets to evade radar detection, by utilizing the tunable passive reflecting elements of IRS to achieve flexible and adaptive electromagnetic stealth in a cost-effective manner. Specifically, we optimize the IRS's reflection at the target to minimize the sum received signal power of all adversary radars. We first address the IRS's reflection optimization problem using the Lagrange multiplier method and derive a semi-closed-form optimal solution for the single-radar setup, which is then generalized to the multi-radar case. To meet real-time processing requirements, we further propose low-complexity closed-form solutions based on the reverse alignment/cancellation and minimum mean-square error (MMSE) criteria for the single-radar and multi-radar cases, respectively. Additionally, we propose practical low-complexity estimation schemes at the target to acquire angle-of-arrival (AoA) and/or path gain information via a small number of receive sensing devices. Simulation results validate the performance advantages of our proposed IRS-aided electromagnetic stealth system with the proposed IRS reflection designs.

[55] 2312.01996

Tuning of Online Feedback Optimization for setpoint tracking in centrifugal compressors

Online Feedback Optimization (OFO) controllers steer a system to its optimal operating point by treating optimization algorithms as auxiliary dynamic systems. Implementing OFO controllers requires setting the parameters of the optimization algorithm so that it converges, which is challenging because the convergence of the optimization algorithm is often decoupled from the performance of the controlled system. OFO controllers are also typically designed to ensure steady-state tracking by fixing the sampling time to be longer than the time constants of the system. In this paper, we first quantify the impact of the OFO parameters and the sampling time on the tracking error and the number of oscillations of the controlled system, showing that adjusting them allows good tracking without reaching steady state. We then propose a method for tuning the sampling time of the OFO controller together with its parameters to track fast trajectories while reducing oscillations. We validate the proposed tuning approach on a pressure controller in a centrifugal compressor, tracking trajectories faster than the compressor's time to reach steady state. The validation results confirm that simultaneously tuning the sampling time and the parameters of OFO yields up to 87% better tracking performance than manual tuning based on steady state.

[56] 2312.01999

SRTransGAN: Image Super-Resolution using Transformer based Generative Adversarial Network

Image super-resolution aims to synthesize a high-resolution image from a low-resolution image. It is an active area of research for overcoming resolution limitations in several applications, such as low-resolution object recognition and medical image enhancement. Generative adversarial network (GAN)-based methods have been the state of the art for image super-resolution, utilizing convolutional neural network (CNN)-based generator and discriminator networks. However, CNNs cannot exploit global information as effectively as transformers, a recent breakthrough in deep learning built on the self-attention mechanism. Motivated by the success of transformers in language and vision applications, we propose SRTransGAN, a transformer-based GAN for image super-resolution. Specifically, we propose a novel transformer-based encoder-decoder network as the generator to generate 2x and 4x images. We design the discriminator network using a vision transformer, which treats the image as a sequence of patches and is hence well suited for binary classification between synthesized and real high-resolution images. The proposed SRTransGAN outperforms the existing methods by 4.38% on average in terms of PSNR and SSIM scores. We also analyze the saliency maps to understand the learning ability of the proposed method.

[57] 2312.02017

A multi-channel cycleGAN for CBCT to CT synthesis

Image synthesis is used to generate synthetic CTs (sCTs) from on-treatment cone-beam CTs (CBCTs) with a view to improving image quality and enabling accurate dose computation to facilitate a CBCT-based adaptive radiotherapy workflow. As this area of research gains momentum, developments in sCT generation methods are difficult to compare due to the lack of large public datasets and sizeable variation in training procedures. To compare and assess the latest advancements in sCT generation, the SynthRAD2023 challenge provides a public dataset and evaluation framework for both MR and CBCT to sCT synthesis. Our contribution focuses on the second task, CBCT-to-sCT synthesis. By leveraging a multi-channel input to emphasize specific image features, our approach effectively addresses some of the challenges inherent in CBCT imaging, whilst restoring the contrast necessary for accurate visualisation of patients' anatomy. Additionally, we introduce an auxiliary fusion network to further enhance the fidelity of generated sCT images.

[58] 2312.02050

Optimal Dual-Polarized Planar Arrays for Massive Capacity Over Point-to-Point MIMO Channels

Future wireless networks must provide ever higher data rates. The available bandwidth increases roughly linearly as we increase the carrier frequency, but the range shrinks drastically. This paper explores whether we can instead reach massive capacities using spatial multiplexing over multiple-input multiple-output (MIMO) channels. In line-of-sight (LOS) scenarios, the rank of the MIMO channel matrix depends on the polarization and antenna arrangement. We optimize the rank and condition number by identifying the optimal antenna spacing in dual-polarized planar antenna arrays with imperfect isolation. The result is sparsely spaced antenna arrays that exploit radiative near-field properties. We further optimize the array geometry for minimum aperture length and aperture area, which leads to different configurations. Moreover, we prove analytically that for fixed-sized arrays, the MIMO rank grows quadratically with the carrier frequency in LOS scenarios, if the antennas are appropriately designed. Hence, MIMO technology contributes more to the capacity growth than the bandwidth. The numerical results show that massive data rates, far beyond 1 Tbps, can be reached both over fixed point-to-point links and when a large base station serves a practically sized mobile device.

[59] 2312.02082

Joint State and Input Estimation for Linear Dynamical Systems with Sparse Control

Sparsity constraints on the control inputs of a linear dynamical system naturally arise in several practical applications such as networked control, computer vision, seismic signal processing, and cyber-physical systems. In this work, we consider the problem of jointly estimating the states and sparse inputs of such systems from low-dimensional (compressive) measurements. Due to the low-dimensional measurements, conventional Kalman filtering and smoothing algorithms fail to accurately estimate the states and inputs. We present a Bayesian approach that exploits the input sparsity to significantly improve estimation accuracy. Sparsity in the input estimates is promoted by using different prior distributions on the input. We investigate two main approaches: regularizer-based MAP estimation and Bayesian learning-based estimation. We also extend the approaches to handle control inputs with common support and analyze the time and memory complexities of the presented algorithms. Finally, using numerical simulations, we show that our algorithms outperform the state-of-the-art methods in terms of accuracy and time/memory complexities, especially in the low-dimensional measurement regime.
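The regularizer-based MAP idea can be illustrated with a generic sketch (hypothetical, not the paper's algorithm): under a Laplacian prior on a sparse input, the MAP estimate from compressive measurements is an l1-regularized least squares, solved here with ISTA (iterative soft-thresholding).

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 20, 50, 3                     # compressive regime: m < n measurements
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = np.array([2.0, -1.5, 3.0])
y = A @ x_true + 0.01 * rng.normal(size=m)

lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)            # gradient of the data-fit term
    z = x - step * grad
    # Soft-thresholding = proximal step for the l1 (Laplacian-prior) term
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative error {err:.3f}")
```

In the joint state-and-input problem, a step of this kind would be interleaved with the Kalman-style state updates rather than run in isolation.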

[60] 2312.00790

Low-Rank Solution Operator for Forced Linearized Dynamics with Unsteady Base Flows

Understanding the linear growth of disturbances due to external forcing is crucial for flow stability analysis, flow control, and uncertainty quantification. These applications typically require a large number of forward simulations of the forced linearized dynamics, often in a brute-force fashion. When dealing with simple steady-state or periodic base flows, there exist powerful and cost-effective solution operator techniques. Once constructed, these operators can be used to determine the response to various forcings with negligible computational cost. However, these methods are not applicable to problems with arbitrarily time-dependent base flows. This paper develops and investigates reduced-order modeling with time-dependent bases (TDBs) to build low-rank solution operators for forced linearized dynamics with arbitrarily time-dependent base flows. In particular, we use forced optimally time-dependent decomposition (f-OTD), which extracts the time-dependent correlated structures of the flow response to various excitations. We also demonstrate that in the case of a steady-state mean flow subject to harmonic forcing, the f-OTD subspace converges to the dominant resolvent analysis modes. The demonstration includes four cases: a toy model, the Burgers equation, the 2D temporally evolving jet, and two-dimensional decaying isotropic turbulence. In these cases, we demonstrate the utility of the low-rank operator for (i) identifying the excitation that leads to maximum amplification, and (ii) reconstructing the full-state flow without incurring additional cost.

[61] 2312.00795

Talent-Interview: Web-Client Cheating Detection for Online Exams

Online exams have become more attractive since the Covid-19 pandemic and are now also used during recruitment. However, online exams offer more opportunities for cheating, and assigning a proctor to each exam increases cost. Automatic proctoring systems address this by detecting possible cheating. This article proposes an end-to-end system and submodules to improve online proctoring, combining object detection, face recognition, human voice detection, and segmentation. Furthermore, the proposed model runs on users' PCs, i.e., it is a client-based system, which eliminates server cost. To our knowledge, this is the first time a client-based online proctoring system has been used for recruitment.

[62] 2312.00797

Multi-mode OAM Convergent Transmission with Co-divergent Angle Tailored by Airy Wavefront

Wireless backhaul offers a more cost-effective, time-efficient, and reconfigurable solution than wired backhaul to connect edge-computing cells to the core network. As the amount of transmitted data increases, the low-rank characteristic of the Line-of-Sight (LoS) channel severely limits the growth of channel capacity in the point-to-point backhaul transmission scenario. Orbital Angular Momentum (OAM), also known as the vortex beam, is considered a potentially effective solution for high-capacity LoS wireless transmission. However, due to its energy divergence and the mode-specific divergence angles, OAM beams have long been difficult to apply in practical communication systems. In this work, a novel multi-mode convergent transmission with co-scale reception scheme is proposed. OAM beams of different modes can be transmitted with the same beam divergence angle, while the wavefronts are tailored by a ring-shaped Airy compensation lens during propagation, so that the energy converges to the same spatial area for reception. Based on this scheme, not only is the Signal-to-Noise Ratio (SNR) greatly improved, but it also becomes possible to simultaneously receive and demodulate OAM channels multiplexed with different modes in a limited spatial area. Through prototype experiments, we demonstrate that three OAM modes are tunable and that the different channels can be separated simultaneously with increased receive power. The measured isolation between channels exceeds 11 dB, which ensures a reliable 16-QAM multiplexed wireless transmission demo system. This work may explore the potential applications of OAM-based multi-mode convergent transmission in LoS wireless communications.

[63] 2312.00812

Empowering Autonomous Driving with Large Language Models: A Safety Perspective

Autonomous Driving (AD) faces crucial hurdles for commercial launch, notably in the form of diminished public trust and safety concerns from long-tail unforeseen driving scenarios. This predicament is due to the limitation of deep neural networks in AD software, which struggle with interpretability and exhibit poor generalization capabilities in out-of-distribution and uncertain scenarios. To this end, this paper advocates for the integration of Large Language Models (LLMs) into the AD system, leveraging their robust common-sense knowledge, reasoning abilities, and human-interaction capabilities. The proposed approach deploys the LLM as an intelligent decision-maker in planning, incorporating safety verifiers for contextual safety learning to enhance overall AD performance and safety. We present results from two case studies that affirm the efficacy of our approach. We further discuss the potential integration of LLM for other AD software components including perception, prediction, and simulation. Despite the observed challenges in the case studies, the integration of LLMs is promising and beneficial for reinforcing both safety and performance in AD.

[64] 2312.00857

Latent Space Explorer: Visual Analytics for Multimodal Latent Space Exploration

Machine learning models built on training data with multiple modalities can reveal new insights that are not accessible through unimodal datasets. For example, cardiac magnetic resonance images (MRIs) and electrocardiograms (ECGs) are both known to capture useful information about subjects' cardiovascular health status. A multimodal machine learning model trained from large datasets can potentially predict the onset of heart-related diseases and provide novel medical insights about the cardiovascular system. Despite the potential benefits, it is difficult for medical experts to explore multimodal representation models without visual aids and to test the predictive performance of the models on various subpopulations. To address these challenges, we developed a visual analytics system called Latent Space Explorer. Latent Space Explorer provides interactive visualizations that enable users to explore the multimodal representation of subjects, define subgroups of interest, interactively decode data with different modalities for the selected subjects, and inspect the accuracy of the embedding in downstream prediction tasks. A user study was conducted with medical experts, and their feedback provided useful insights into how Latent Space Explorer can help their analysis and possible new directions for further development in the medical domain.

[65] 2312.00942

Survey of Security Issues in Memristor-based Machine Learning Accelerators for RF Analysis

We explore security aspects of a new computing paradigm that combines novel memristors and traditional Complementary Metal Oxide Semiconductor (CMOS) technology to construct a highly efficient analog and/or digital fabric that is especially well-suited to Machine Learning (ML) inference processors for Radio Frequency (RF) signals. Memristors have different properties than traditional CMOS, which can potentially be exploited by attackers. In addition, the mixed-signal approximate computing model has different vulnerabilities than traditional digital implementations. However, both the memristor and the ML computation can be leveraged to create security mechanisms and countermeasures ranging from lightweight cryptography, identifiers (e.g., Physically Unclonable Functions (PUFs), fingerprints, and watermarks), entropy sources, and hardware obfuscation to leakage/attack detection methods. Three threat models are proposed: 1) Supply Chain, 2) Physical Attacks, and 3) Remote Attacks. For each threat model, potential vulnerabilities and defenses are identified. This survey reviews a variety of recent work from the hardware and ML security literature and proposes open problems for both attack and defense. The survey emphasizes the growing area of RF signal analysis and identification in terms of the commercial space, as well as military applications and threat models. We differ from other recent surveys, which target ML in general and neglect RF applications.

[66] 2312.00951

AV4EV: Open-Source Modular Autonomous Electric Vehicle Platform to Make Mobility Research Accessible

When academic researchers develop and validate autonomous driving algorithms, there is a challenge in balancing high-performance capabilities with the cost and complexity of the vehicle platform. Much of today's research on autonomous vehicles (AV) is limited to experimentation on expensive commercial vehicles that require large teams with diverse skills to retrofit the vehicles and test them in dedicated testing facilities. Testing the limits of safety and performance on such vehicles is costly and hazardous. It is also outside the reach of most academic departments and research groups. On the other hand, scaled-down 1/10th-1/16th scale vehicle platforms are more affordable but have limited similitude in dynamics, control, and drivability. To address this issue, we present the design of a one-third-scale autonomous electric go-kart platform with open-source mechatronics design along with fully-functional autonomous driving software. The platform's multi-modal driving system is capable of manual, autonomous, and teleoperation driving modes. It also features a flexible sensing suite for development and deployment of algorithms across perception, localization, planning, and control. This development serves as a bridge between full-scale vehicles and reduced-scale cars while accelerating cost-effective algorithmic advancements in autonomous systems research. Our experimental results demonstrate the AV4EV platform's capabilities and ease-of-use for developing new AV algorithms. All materials are available at to stimulate collaborative efforts within the AV and electric vehicle (EV) communities.

[67] 2312.00977

Optimal Placement of Transmissive RIS in the Near Field for Capacity Maximization in THz Communications

This study centers on Line-of-Sight (LoS) MIMO communication enabled by a Transmissive Reconfigurable Intelligent Surface (RIS) operating in the Terahertz (THz) frequency bands. The study demonstrates that the introduction of RIS can render the curvature of the wavefront apparent over the transmit and receive arrays, even when they are positioned in the far field from each other. This phenomenon contributes to an enhancement in spatial multiplexing. Notably, simulation results underline that the optimal placement of the RIS in the near-field is not solely contingent on proximity to the transmitter (Tx) or receiver (Rx) but relies on the inter-antenna spacing of the Tx and Rx.

[68] 2312.01005

Generating Images of the M87* Black Hole Using GANs

In this paper, we introduce a novel data augmentation methodology based on Conditional Progressive Generative Adversarial Networks (CPGAN) to generate diverse black hole (BH) images, accounting for variations in spin and electron temperature prescriptions. These generated images are valuable resources for training deep learning algorithms to accurately estimate black hole parameters from observational data. Our model can generate BH images for any spin value within the range of [-1, 1], given an electron temperature distribution. To validate the effectiveness of our approach, we employ a convolutional neural network to predict the BH spin using both the GRMHD images and the images generated by our proposed model. Our results demonstrate a significant performance improvement when training is conducted with the augmented dataset while testing is performed using GRMHD simulated data, as indicated by the high R2 score. Consequently, we propose that GANs can be employed as cost-effective models for black hole image generation and can reliably augment training datasets for other parameterization algorithms.

[69] 2312.01042

Covert Communications in STAR-RIS-Aided Rate-Splitting Multiple Access Systems

In this paper, we investigate covert communications in a simultaneously transmitting and reflecting reconfigurable intelligent surface (STAR-RIS)-aided rate-splitting multiple access (RSMA) system. Under the RSMA principles, the messages for the covert user (Bob) and the public user (Grace) are converted to common and private streams at the legitimate transmitter (Alice) to realize downlink transmissions, while the STAR-RIS is deployed not only to aid the public transmissions from Alice to Grace, but also to shield the covert transmissions from Alice to Bob against the warden (Willie). To characterize the covert performance of the considered STAR-RIS-aided RSMA (STAR-RIS-RSMA) system, we derive an analytical expression for the minimum average detection error probability of Willie, based on which a covert rate maximization problem is formulated. To maximize Bob's covert rate while confusing Willie's monitoring, the transmit power allocation, common rate allocation, and STAR-RIS reflection/transmission beamforming are jointly optimized subject to Grace's quality of service (QoS) requirements. The non-convex covert rate maximization problem, which involves highly coupled system parameters, is decoupled into three sub-problems of transmit power allocation, common rate allocation, and STAR-RIS reflection/transmission beamforming, respectively. To obtain the rank-one constrained optimal solution for the sub-problem of optimizing the STAR-RIS reflection/transmission beamforming, a penalty-based successive convex approximation scheme is developed. Moreover, an alternating optimization (AO) algorithm is designed to determine the optimal solution for the sub-problem of optimizing the transmit power allocation, while the overall problem is solved by a new AO algorithm.

[70] 2312.01053

End-to-End Speech-to-Text Translation: A Survey

Speech-to-text translation pertains to the task of converting speech signals in one language to text in another language. It finds application in various domains, such as hands-free communication, dictation, video lecture transcription, and translation, to name a few. Automatic Speech Recognition (ASR) and Machine Translation (MT) models play crucial roles in traditional speech translation (ST), enabling the conversion of spoken language in its original form to written text and facilitating seamless cross-lingual communication: ASR recognizes the spoken words, while MT translates the transcribed text into the target language. Such disintegrated models suffer from cascaded error propagation and high resource and training costs. As a result, researchers have been exploring end-to-end (E2E) models for ST. However, to our knowledge, there is no comprehensive review of existing work on E2E ST. The present survey therefore discusses the work in this direction. We aim to provide a comprehensive review of the models, metrics, and datasets used for ST tasks, and to discuss challenges and future research directions with new insights. We believe this review will be helpful to researchers working on various applications of ST models.

[71] 2312.01071

Hybrid Hierarchical DRL Enabled Resource Allocation for Secure Transmission in Multi-IRS-Assisted Sensing-Enhanced Spectrum Sharing Networks

Secure communications are of paramount importance in spectrum sharing networks due to the allocation and sharing characteristics of spectrum resources. To further explore the potential of intelligent reflecting surfaces (IRSs) in enhancing spectrum sharing and secure transmission performance, a multiple intelligent reflecting surface (multi-IRS)-assisted sensing-enhanced wideband spectrum sharing network is investigated by considering physical layer security techniques. An intelligent resource allocation scheme based on the double deep Q-network (D3QN) algorithm and the soft actor-critic (SAC) algorithm is proposed to maximize the secure transmission rate of the secondary network by jointly optimizing IRS pairings, subchannel assignment, transmit beamforming of the secondary base station, reflection coefficients of the IRSs, and the sensing time. To tackle the sparse reward problem caused by the large number of reflection elements of multiple IRSs, hierarchical reinforcement learning is exploited. An alternating optimization (AO)-based conventional mathematical scheme is introduced to verify the computational complexity advantage of our proposed intelligent scheme. Simulation results demonstrate the efficiency of our proposed intelligent scheme as well as the superiority of the multi-IRS design in enhancing the secrecy rate and spectrum utilization. It is shown that inappropriate deployment of IRSs can reduce the security performance in the presence of multiple eavesdroppers (Eves), and the arrangement of IRSs deserves further consideration.

[72] 2312.01092

A Semi-Supervised Deep Learning Approach to Dataset Collection for Query-By-Humming Task

Query-by-Humming (QbH) is a task that involves finding the most relevant song based on a hummed or sung fragment. Despite recent successful commercial solutions, implementing QbH systems remains challenging due to the lack of high-quality datasets for training machine learning models. In this paper, we propose a deep learning data collection technique and introduce Covers and Hummings Aligned Dataset (CHAD), a novel dataset that contains 18 hours of short music fragments, paired with time-aligned hummed versions. To expand our dataset, we employ a semi-supervised model training pipeline that leverages the QbH task as a specialized case of cover song identification (CSI) task. Starting with a model trained on the initial dataset, we iteratively collect groups of fragments of cover versions of the same song and retrain the model on the extended data. Using this pipeline, we collect over 308 hours of additional music fragments, paired with time-aligned cover versions. The final model is successfully applied to the QbH task and achieves competitive results on benchmark datasets. Our study shows that the proposed dataset and training pipeline can effectively facilitate the implementation of QbH systems.

[73] 2312.01100

Prior-Aware Robust Beam Alignment for Low-SNR Millimeter-Wave Communications

This paper presents a robust beam alignment technique for millimeter-wave communications in low signal-to-noise ratio (SNR) environments. The core strategy of our technique is to repeatedly transmit the most probable beam candidates to reduce beam misalignment probability induced by noise. Specifically, for a given beam training overhead, both the selection of candidates and the number of repetitions for each beam candidate are optimized based on channel prior information. To achieve this, a deep neural network is employed to learn the prior probability of the optimal beam at each location. The beam misalignment probability is then analyzed based on the channel prior, forming the basis for an optimization problem aimed at minimizing the analyzed beam misalignment probability. A closed-form solution is derived for a special case with two beam candidates, and an efficient algorithm is developed for general cases with multiple beam candidates. Simulation results using the DeepMIMO dataset demonstrate the superior performance of our technique in dynamic low-SNR communication environments when compared to existing beam alignment techniques.
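The prior-weighted repetition idea can be sketched with a toy greedy allocator. The per-repetition miss probability `miss` and the priors below are illustrative assumptions; the paper instead analyzes the misalignment probability from the learned channel prior and derives a closed-form solution for two candidates and an efficient algorithm in general:

```python
def allocate_repetitions(priors, budget, miss=0.3):
    """Toy greedy allocation of a beam-training budget: assume each
    repetition of beam i independently misses with probability `miss`,
    so r repetitions detect it with probability 1 - miss**r. Each of the
    `budget` slots goes to the beam with the largest marginal gain
    p_i * miss**r * (1 - miss). (Illustrative, not the paper's method.)"""
    reps = [0] * len(priors)
    for _ in range(budget):
        gains = [p * (miss ** r) * (1 - miss) for p, r in zip(priors, reps)]
        i = max(range(len(priors)), key=gains.__getitem__)
        reps[i] += 1
    return reps
```

With a skewed prior, the allocator naturally repeats the most probable beam more often, which is the qualitative behavior the paper exploits in low-SNR regimes.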

[74] 2312.01125

Design and Performance Analysis of Index Modulation Empowered AFDM System

In this letter, we incorporate index modulation (IM) into affine frequency division multiplexing (AFDM), called AFDM-IM, to enhance bit error rate (BER) and energy efficiency (EE) performance. In this scheme, the information bits are conveyed not only by $M$-ary constellation symbols, but also by the activation of chirp subcarrier (SC) indices, which are determined by the incoming bit stream. Two power allocation strategies, namely the power reallocation (PR) strategy and the power saving (PS) strategy, are then proposed to enhance BER and EE performance, respectively. Furthermore, the average bit error probability (ABEP) is theoretically analyzed. Simulation results demonstrate that the proposed AFDM-IM scheme achieves better BER performance than the conventional AFDM scheme.
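The standard bit-counting behind index modulation can be sketched as follows; the subblock size n, number of active subcarriers k, and modulation order M below are illustrative parameters, not values from the letter:

```python
from math import comb, floor, log2

def afdm_im_bits_per_subblock(n, k, M):
    """Bits carried by one IM subblock of n chirp subcarriers, k of which
    are activated, each active SC carrying an M-ary constellation symbol.
    Index bits come from the choice of the active-subcarrier pattern."""
    index_bits = floor(log2(comb(n, k)))   # activation-pattern bits
    symbol_bits = k * int(log2(M))         # constellation-symbol bits
    return index_bits + symbol_bits
```

For example, with n = 4 subcarriers, k = 2 active, and QPSK (M = 4), a subblock carries floor(log2(6)) + 2*2 = 6 bits.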

[75] 2312.01126

BER Analysis of SCMA-OFDM Systems in the Presence of Carrier Frequency Offset

Sparse code multiple access (SCMA) building upon orthogonal frequency division multiplexing (OFDM) is a promising wireless technology for supporting massive connectivity in future machine-type communication networks. However, the sensitivity of OFDM to carrier frequency offset (CFO) poses a major challenge because it leads to orthogonality loss and incurs intercarrier interference (ICI). In this paper, we investigate the bit error rate (BER) performance of SCMA-OFDM systems in the presence of CFO over both Gaussian and multipath Rayleigh fading channels. We first model the ICI in SCMA-OFDM as Gaussian variables conditioned on a single channel realization for fading channels. The BER is then evaluated by averaging over all codeword pairs considering the fading statistics. Through simulations, we validate the accuracy of our BER analysis and reveal that there is a significant BER degradation for SCMA-OFDM systems when the normalized CFO exceeds 0.02.
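The ICI mechanism can be illustrated with the textbook OFDM ICI coefficient for a normalized CFO; this is the standard single-subcarrier leakage model, not the paper's SCMA-specific conditional-Gaussian BER analysis:

```python
import math, cmath

def ici_coefficient(l, eps, N):
    """Classical N-subcarrier OFDM ICI coefficient S(l) under a normalized
    CFO eps: the leakage from a subcarrier l bins away (l = 0 is the
    desired carrier)."""
    x = l + eps
    num = math.sin(math.pi * x)
    den = N * math.sin(math.pi * x / N)
    phase = cmath.exp(1j * math.pi * x * (N - 1) / N)
    return (num / den) * phase

# Even a small normalized CFO leaks power into neighboring subcarriers.
N, eps = 64, 0.02
desired_power = abs(ici_coefficient(0, eps, N)) ** 2
ici_power = sum(abs(ici_coefficient(l, eps, N)) ** 2 for l in range(1, N))
```

Since the coefficients satisfy a unit power constraint, the ICI power is exactly the power lost from the desired subcarrier, which grows quickly with eps and drives the BER degradation noted above.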

[76] 2312.01137

Fast and Robust Sparsity-Aware Block Diagonal Representation

The block diagonal structure of an affinity matrix is a commonly desired property in cluster analysis because it represents clusters of feature vectors by non-zero coefficients that are concentrated in blocks. However, recovering a block diagonal affinity matrix is challenging in real-world applications, in which the data may be subject to outliers and heavy-tailed noise that obscure the hidden cluster structure. To address this issue, we first analyze the effect of different fundamental outlier types in graph-based cluster analysis. A key idea that simplifies the analysis is to introduce a vector that represents a block diagonal matrix as a piece-wise linear function of the similarity coefficients that form the affinity matrix. We reformulate the problem as a robust piece-wise linear fitting problem and propose a Fast and Robust Sparsity-Aware Block Diagonal Representation (FRS-BDR) method, which jointly estimates cluster memberships and the number of blocks. Comprehensive experiments on a variety of real-world applications demonstrate the effectiveness of FRS-BDR in terms of clustering accuracy, robustness against corrupted features, computation time and cluster enumeration performance.

[77] 2312.01214

Model-Based Sensor Diagnostics for Robotic Manipulators

Ensuring the safe and reliable operation of collaborative robots demands robust sensor diagnostics. This paper introduces a methodology for formulating model-based constraints tailored for sensor diagnostics, featuring analytical relationships that extend across the mechanical and electrical domains. While applicable to various robotic systems, the study centers on a robotic joint employing a series elastic actuator. Three distinct constraints are imposed on the series elastic actuator: the Torsional Spring Constraint, the Joint Dynamics Constraint, and the Electrical Motor Constraint. Through a simulation example, we demonstrate the efficacy of the proposed model-based sensor diagnostics methodology. The study addresses two distinct types of sensor faults that may arise in the torque sensor of a robot joint and delves into their respective detection methods. The proposed sensor diagnostic methodology is customizable and applicable across various components of robots, offering fault diagnosis and isolation capabilities. This research contributes insights aimed at enhancing the diagnostic capabilities essential for the optimal performance of robotic manipulators in collaborative environments.
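As a minimal sketch of the residual-based idea, a torsional spring constraint of the form tau = k (theta_motor - theta_joint) can be checked against sensor measurements; the threshold-and-persistence logic below is an assumption for illustration, not the paper's detection rule:

```python
def spring_constraint_residual(tau_measured, k_spring, theta_motor, theta_joint):
    """Residual of the torsional-spring relation tau = k * (theta_m - theta_j).
    Near zero when the torque sensor and encoders agree with the model."""
    return tau_measured - k_spring * (theta_motor - theta_joint)

def is_faulty(residuals, threshold):
    """Flag a fault only if the constraint is persistently violated,
    which guards against one-off noise spikes (illustrative rule)."""
    return all(abs(r) > threshold for r in residuals)
```

A healthy sensor yields residuals near zero; a bias or gain fault in the torque sensor shifts the residual away from zero across consecutive samples.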

[78] 2312.01227

Distributed Bayesian Estimation in Sensor Networks: Consensus on Marginal Densities

In this paper, we aim to design and analyze distributed Bayesian estimation algorithms for sensor networks. The challenges we address are to (i) derive a distributed provably-correct algorithm in the functional space of probability distributions over continuous variables, and (ii) leverage these results to obtain new distributed estimators restricted to subsets of variables observed by individual agents. This relates to applications such as cooperative localization and federated learning, where the data collected at any agent depends on a subset of all variables of interest. We present Bayesian density estimation algorithms using data from non-linear likelihoods at agents in centralized, distributed, and marginal distributed settings. After setting up a distributed estimation objective, we prove almost-sure convergence to the optimal set of pdfs at each agent. Then, we prove the same for a storage-aware algorithm estimating densities only over relevant variables at each agent. Finally, we present a Gaussian version of these algorithms and implement it in a mapping problem using variational inference to handle non-linear likelihood models associated with LiDAR sensing.
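For the Gaussian version, the centralized Bayesian combination that the distributed algorithm approximates by consensus reduces to precision-weighted fusion; a minimal scalar sketch, assuming independent Gaussian estimates:

```python
def fuse_gaussians(means, variances):
    """Precision-weighted fusion of independent Gaussian estimates of the
    same variable: the fused precision is the sum of precisions, and the
    fused mean is the precision-weighted average of the means."""
    precisions = [1.0 / v for v in variances]
    total_precision = sum(precisions)
    fused_mean = sum(p * m for p, m in zip(precisions, means)) / total_precision
    return fused_mean, 1.0 / total_precision
```

Two equally confident agents observing 0.0 and 2.0 fuse to mean 1.0 with half the variance, which is the behavior the distributed estimators recover at consensus.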

[79] 2312.01249

A Multifidelity Sim-to-Real Pipeline for Verifiable and Compositional Reinforcement Learning

We propose and demonstrate a compositional framework for training and verifying reinforcement learning (RL) systems within a multifidelity sim-to-real pipeline, in order to deploy reliable and adaptable RL policies on physical hardware. By decomposing complex robotic tasks into component subtasks and defining mathematical interfaces between them, the framework allows for the independent training and testing of the corresponding subtask policies, while simultaneously providing guarantees on the overall behavior that results from their composition. By verifying the performance of these subtask policies using a multifidelity simulation pipeline, the framework not only allows for efficient RL training, but also for a refinement of the subtasks and their interfaces in response to challenges arising from discrepancies between simulation and reality. In an experimental case study we apply the framework to train and deploy a compositional RL system that successfully pilots a Warthog unmanned ground robot.

[80] 2312.01288

Task-Oriented Edge Networks: Decentralized Learning Over Wireless Fronthaul

This paper studies task-oriented edge networks where multiple edge internet-of-things nodes execute machine learning tasks with the help of powerful deep neural networks (DNNs) at a network cloud. Separate edge nodes (ENs) result in a partially observable system where they can only get partitioned features of the global network states. These local observations need to be forwarded to the cloud via resource-constrained wireless fronthaul links. Individual ENs compress their local observations into uplink fronthaul messages using task-oriented encoder DNNs. The cloud then carries out a remote inference task by leveraging the received signals. Such a distributed topology calls for a decentralized training and decentralized execution (DTDE) learning framework for designing edge-cloud cooperative inference rules and their decentralized training strategies. First, we develop a fronthaul-cooperative DNN architecture along with uplink coordination protocols suitable for wireless fronthaul interconnection. Inspired by the nomographic function, an efficient cloud inference model becomes an integration of a number of shallow DNNs. This modularized architecture yields versatile computations that are independent of the number of ENs. Next, we present a decentralized training algorithm for the separate edge-cloud DNNs over downlink wireless fronthaul channels. An appropriate downlink coordination protocol is proposed, which backpropagates gradient vectors wirelessly from the cloud to the ENs.

[81] 2312.01292

Joint Beam Scheduling and Power Optimization for Beam Hopping LEO Satellite Systems

Low earth orbit (LEO) satellite communications can provide ubiquitous and reliable services, making it an essential part of the Internet of Everything network. Beam hopping (BH) is an emerging technology for effectively addressing the issue of low resource utilization caused by the non-uniform spatio-temporal distribution of traffic demands. However, how to allocate multi-dimensional resources in a timely and efficient way for the highly dynamic LEO satellite systems remains a challenge. This paper proposes a joint beam scheduling and power optimization beam hopping (JBSPO-BH) algorithm considering the differences in the geographic distribution of sink nodes. The JBSPO-BH algorithm decouples the original problem into two sub-problems. The beam scheduling problem is modelled as a potential game, and the Nash equilibrium (NE) point is obtained as the beam scheduling strategy. Moreover, the penalty function interior point method is applied to optimize the power allocation. Simulation results show that the JBSPO-BH algorithm has low time complexity and fast convergence and achieves better performance both in throughput and fairness. Compared with greedy-based BH, greedy-based BH with the power optimization, round-robin BH, Max-SINR BH and satellite resource allocation algorithm, the throughput of the proposed algorithm is improved by 44.99%, 20.79%, 156.06%, 15.39% and 8.17%, respectively.

[82] 2312.01313

Observer-based Periodic Event-triggered and Self-triggered Boundary Control of a Class of Parabolic PDEs

This paper introduces the first observer-based periodic event-triggered control (PETC) and self-triggered control (STC) for boundary control of a class of parabolic PDEs using PDE backstepping control. We introduce techniques to convert a certain class of continuous-time event-triggered control into PETC and STC, eliminating the need for continuous monitoring of the event-triggering function. For the PETC, the event-triggering function requires only periodic evaluations to detect events, while the STC proactively computes the time of the next event right at the current event time using the system model and the continuously available measurements. For both strategies, the control input is updated exclusively at events and is maintained using a zero-order hold between events. We demonstrate that the closed-loop system is Zeno-free. We offer criteria for selecting an appropriate sampling period for the PETC and for determining the time until the next event under the STC. We prove the system's global exponential convergence to zero in the spatial $L^2$ norm for both anti-collocated and collocated sensing and actuation under the PETC. For the STC, local exponential convergence to zero in the spatial $L^2$ norm for collocated sensing and actuation is proven. Simulations are provided to illustrate the theoretical claims.
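A scalar toy simulation conveys the PETC mechanics: the triggering function is checked only once per period h, and the control input is held between events by a zero-order hold. The dynamics, gain, and relative trigger threshold below are illustrative assumptions, not the paper's PDE setting:

```python
def petc_simulate(x0, a, K, h, sigma, steps):
    """Toy scalar PETC sketch for x' = a*x + u with u = -K*x_hat:
    every period h the trigger |x - x_hat| > sigma*|x| is evaluated;
    on an event the held sample x_hat is refreshed, otherwise the input
    is held (zero-order hold). Euler-discretized for illustration."""
    x, x_hat, events = x0, x0, 0
    for _ in range(steps):
        if abs(x - x_hat) > sigma * abs(x):  # periodic trigger check
            x_hat, events = x, events + 1    # event: refresh held state
        u = -K * x_hat                       # input held between events
        x = x + h * (a * x + u)              # one Euler step of length h
    return x, events
```

The state converges while the input is updated only at a fraction of the sampling instants, which is the point of event-triggered over time-triggered control.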

[83] 2312.01361

MoEC: Mixture of Experts Implicit Neural Compression

Emerging Implicit Neural Representation (INR) is a promising data compression technique that represents the data using the parameters of a Deep Neural Network (DNN). Existing methods manually partition a complex scene into local regions and overfit the INRs to those regions. However, manually designing the partition scheme for a complex scene is very challenging and fails to jointly learn the partition and the INRs. To solve the problem, we propose MoEC, a novel implicit neural compression method based on the theory of mixture of experts. Specifically, we use a gating network to automatically assign a specific INR to each 3D point in the scene. The gating network is trained jointly with the INRs of the different local regions. Compared with block-wise and tree-structured partitions, our learnable partition can adaptively find the optimal partition in an end-to-end manner. We conduct detailed experiments on massive and diverse biomedical data to demonstrate the advantages of MoEC against existing approaches. In most experimental settings, we achieve state-of-the-art results. Especially at extreme compression ratios, such as 6000x, we are able to uphold a PSNR of 48.16.
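The gating idea can be sketched as follows; the linear gate and hard top-1 routing are simplifying assumptions standing in for MoEC's trained gating network and expert INRs:

```python
import math

def gate_scores(point, gate_weights):
    """Toy gating: a linear layer followed by a softmax scores each expert
    for a 3D point. (MoEC's actual gating network is a trained DNN.)"""
    logits = [sum(w * x for w, x in zip(row, point)) for row in gate_weights]
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def moe_predict(point, gate_weights, experts):
    """Hard routing: evaluate only the top-scoring expert INR at the point."""
    scores = gate_scores(point, gate_weights)
    k = max(range(len(scores)), key=scores.__getitem__)
    return experts[k](point)
```

Because the assignment is produced by a network rather than a fixed block or tree layout, the partition boundaries can move during training, which is what "learnable partition" refers to above.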

[84] 2312.01456

Compositional Policy Learning in Stochastic Control Systems with Formal Guarantees

Reinforcement learning has shown promising results in learning neural network policies for complicated control tasks. However, the lack of formal guarantees about the behavior of such policies remains an impediment to their deployment. We propose a novel method for learning a composition of neural network policies in stochastic environments, along with a formal certificate which guarantees that a specification over the policy's behavior is satisfied with the desired probability. Unlike prior work on verifiable RL, our approach leverages the compositional nature of logical specifications provided in SpectRL, to learn over graphs of probabilistic reach-avoid specifications. The formal guarantees are provided by learning neural network policies together with reach-avoid supermartingales (RASM) for the graph's sub-tasks and then composing them into a global policy. We also derive a tighter lower bound compared to previous work on the probability of reach-avoidance implied by a RASM, which is required to find a compositional policy with an acceptable probabilistic threshold for complex tasks with multiple edge policies. We implement a prototype of our approach and evaluate it on a Stochastic Nine Rooms environment.

[85] 2312.01464

Diffusion Posterior Sampling for Nonlinear CT Reconstruction

Diffusion models have been demonstrated as powerful deep learning tools for image generation in CT reconstruction and restoration. Recently, diffusion posterior sampling, where a score-based diffusion prior is combined with a likelihood model, has been used to produce high quality CT images given low-quality measurements. This technique is attractive since it permits a one-time, unsupervised training of a CT prior, which can then be incorporated with an arbitrary data model. However, current methods only rely on a linear model of x-ray CT physics to reconstruct or restore images. While it is common to linearize the transmission tomography reconstruction problem, this is an approximation to the true and inherently nonlinear forward model. We propose a new method that solves the inverse problem of nonlinear CT image reconstruction via diffusion posterior sampling. We implement a traditional unconditional diffusion model by training a prior score function estimator, and apply Bayes' rule to combine this prior with a measurement likelihood score function derived from the nonlinear physical model, arriving at a posterior score function that can be used to sample the reverse-time diffusion process. This plug-and-play method allows incorporation of a diffusion-based prior with generalized nonlinear CT image reconstruction into multiple CT system designs with different forward models, without the need for any additional training. We develop the algorithm that performs this reconstruction, including an ordered-subsets variant for accelerated processing, and demonstrate the technique on both fully sampled low-dose data and sparse-view geometries using a single unsupervised training of the prior.
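The core score combination is just Bayes' rule in gradient-of-log form: grad log p(x|y) = grad log p(x) + grad log p(y|x). A scalar toy sketch of one guided reverse step follows; the VP-SDE-style discretization and the score functions here are illustrative stand-ins, not the paper's CT models:

```python
import random

def posterior_score(x, prior_score, likelihood_score):
    """Bayes' rule in score form: the posterior score is the sum of the
    prior score and the measurement-likelihood score."""
    return prior_score(x) + likelihood_score(x)

def reverse_step(x, dt, prior_score, likelihood_score, beta=1.0):
    """One Euler-Maruyama step of a VP-SDE-style reverse diffusion,
    guided by the posterior score (scalar toy; constant beta assumed)."""
    score = posterior_score(x, prior_score, likelihood_score)
    drift = -0.5 * beta * x - beta * score
    noise = random.gauss(0.0, 1.0)
    return x - drift * dt + (beta * dt) ** 0.5 * noise
```

The "plug-and-play" property above corresponds to swapping `likelihood_score` for a different forward model while `prior_score` (the one-time trained prior) stays fixed.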

[86] 2312.01479

OpenVoice: Versatile Instant Voice Cloning

We introduce OpenVoice, a versatile voice cloning approach that requires only a short audio clip from the reference speaker to replicate their voice and generate speech in multiple languages. OpenVoice represents a significant advancement in addressing the following open challenges in the field: 1) Flexible Voice Style Control. OpenVoice enables granular control over voice styles, including emotion, accent, rhythm, pauses, and intonation, in addition to replicating the tone color of the reference speaker. The voice styles are not directly copied from, or constrained by, the style of the reference speaker. Previous approaches lacked the ability to flexibly manipulate voice styles after cloning. 2) Zero-Shot Cross-Lingual Voice Cloning. OpenVoice achieves zero-shot cross-lingual voice cloning for languages not included in the massive-speaker training set. Unlike previous approaches, which typically require an extensive massive-speaker multi-lingual (MSML) dataset for all languages, OpenVoice can clone voices into a new language without any massive-speaker training data for that language. OpenVoice is also computationally efficient, costing tens of times less than commercially available APIs that offer inferior performance. To foster further research in the field, we have made the source code and trained model publicly accessible. We also provide qualitative results on our demo website. Prior to its public release, our internal version of OpenVoice was used tens of millions of times by users worldwide between May and October 2023, serving as the backend of

[87] 2312.01515

Bigger is not Always Better: The Effect of Context Size on Speech Pre-Training

It has been generally assumed in the automatic speech recognition (ASR) literature that it is better for models to have access to wider context windows. Yet, many of the potential reasons this might be true in the supervised setting do not necessarily transfer over to the case of unsupervised learning. We investigate how much context is necessary to achieve high-quality pre-trained acoustic models using self-supervised learning. We principally investigate contrastive predictive coding (CPC), which we adapt to be able to precisely control the amount of context visible to the model during training and inference. We find that phone discriminability in the resulting model representations peaks at around 40 ms of preceding context, and that having too much context (beyond around 320 ms) substantially degrades the quality of the representations. Surprisingly, we find that this pattern also transfers to supervised ASR when the pre-trained representations are used as frozen input features. Our results point to potential changes in the design of current upstream architectures to better facilitate a variety of downstream tasks.

[88] 2312.01529

T3D: Towards 3D Medical Image Understanding through Vision-Language Pre-training

Expert annotation of 3D medical images for downstream analysis is resource-intensive, posing challenges in clinical applications. Visual self-supervised learning (vSSL), though effective for learning visual invariance, neglects the incorporation of domain knowledge from medicine. To incorporate medical knowledge into visual representation learning, vision-language pre-training (VLP) has shown promising results on 2D images. However, existing VLP approaches become generally impractical when applied to high-resolution 3D medical images due to GPU hardware constraints and the potential loss of critical details caused by downsampling, which is the intuitive solution to hardware constraints. To address the above limitations, we introduce T3D, the first VLP framework designed for high-resolution 3D medical images. T3D incorporates two text-informed pretext tasks: (i) text-informed contrastive learning; (ii) text-informed image restoration. These tasks focus on learning 3D visual representations from high-resolution 3D medical images and integrating clinical knowledge from radiology reports, without distorting information through forced alignment of downsampled volumes with detailed anatomical text. Trained on a newly curated large-scale dataset of 3D medical images and radiology reports, T3D significantly outperforms current vSSL methods in tasks like organ and tumor segmentation, as well as disease classification. This underlines T3D's potential in representation learning for 3D medical image analysis. All data and code will be available upon acceptance.

[89] 2312.01544

KEEC: Embed to Control on An Equivariant Geometry

This paper investigates how representation learning can enable optimal control in unknown and complex dynamics, such as chaotic and non-linear systems, without relying on prior domain knowledge of the dynamics. The core idea is to establish an equivariant geometry that is diffeomorphic to the manifold defined by a dynamical system and to perform optimal control within this corresponding geometry, which is a non-trivial task. To address this challenge, Koopman Embed to Equivariant Control (KEEC) is introduced for model learning and control. Inspired by Lie theory, KEEC begins by learning a non-linear dynamical system defined on a manifold and embedding trajectories into a Lie group. Subsequently, KEEC formulates an equivariant value function equation in reinforcement learning on the equivariant geometry, ensuring an invariant effect as the value function on the original manifold. By deriving analytical-form optimal actions on the equivariant value function, KEEC theoretically achieves quadratic convergence for the optimal equivariant value function by leveraging the differential information on the equivariant geometry. The effectiveness of KEEC is demonstrated in challenging dynamical systems, including chaotic ones like Lorenz-63. Notably, our findings indicate that isometric and isomorphic loss functions, ensuring the compactness and smoothness of geometry, outperform loss functions without these properties.

[90] 2312.01546

Learning Channel Capacity with Neural Mutual Information Estimator Based on Message Importance Measure

Channel capacity estimation plays a crucial role in beyond-5G intelligent communications. Despite its significance, this task is challenging for a majority of channels, especially for complex channels not modeled as the well-known typical ones. Recently, neural networks have been used in mutual information estimation and optimization. They are particularly considered as efficient tools for learning channel capacity. In this paper, we propose a cooperative framework to simultaneously estimate channel capacity and design the optimal codebook. First, we leverage MIM-based GAN, a novel form of generative adversarial network (GAN) using the message importance measure (MIM) as the information distance, for mutual information estimation, and develop a novel method, named the MIM-based mutual information estimator (MMIE). Then, we design a generalized cooperative framework for channel capacity learning, in which a generator is regarded as an encoder producing the channel input, while a discriminator is the mutual information estimator that assesses the performance of the generator. Through adversarial training, the generator automatically learns the optimal codebook and the discriminator estimates the channel capacity. Numerical experiments demonstrate that, compared with several conventional estimators, the MMIE achieves state-of-the-art performance in terms of accuracy and stability.

[91] 2312.01554

Building Ears for Robots: Machine Hearing in the Age of Autonomy

Robot hearing systems are becoming an important topic due to the increasing number of field robots operating in uncertain environments. This study discusses what a hearing system means to a robot and why it is important. In particular, hardware design principles are introduced using the example of a robotaxi, on which exterior microphone arrays are used for the detection of sirens and other abnormal sound events. After that, a preliminary robot hearing software design framework is developed based on the taxonomy of modern probabilistic robotics as a part of decision processes.

[92] 2312.01558

Hyperspectral Image Compression Using Sampling and Implicit Neural Representations

Hyperspectral images, which record the electromagnetic spectrum for each pixel in the image of a scene, often store hundreds of channels per pixel and contain an order of magnitude more information than a similarly sized RGB color image. Consequently, concomitant with the decreasing cost of capturing these images, there is a need to develop efficient techniques for storing, transmitting, and analyzing hyperspectral images. This paper develops a method for hyperspectral image compression using implicit neural representations, where a multilayer perceptron network F with sinusoidal activation functions "learns" to map pixel locations to pixel intensities for a given hyperspectral image I. F thus acts as a compressed encoding of the image, and the original image is reconstructed by evaluating F at each pixel location. We use a sampling method with two factors, window size and sampling rate, to reduce the compression time. We evaluate our method on four benchmarks -- Indian Pines, Jasper Ridge, Pavia University, and Cuprite -- using PSNR and SSIM, and show that the proposed method achieves better compression than JPEG, JPEG2000, and PCA-DCT at low bitrates. In addition, we compare our results with learning-based methods such as PCA+JPEG2000, FPCA+JPEG2000, 3D DCT, 3D DWT+SVR, and WSRC, and report the corresponding results in the "Compression Results" section. We also show that our method with sampling achieves better speed and performance than our method without sampling.
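The coordinate-network idea can be sketched with a tiny sine-activated MLP mapping a pixel location to an intensity; the layer widths, weights, and the frequency factor omega below are illustrative assumptions, and no training loop is shown:

```python
import math

def siren_layer(inputs, weights, biases, omega=30.0):
    """One sine-activated layer: sin(omega * (W x + b)). omega = 30 is the
    frequency scaling commonly used for sinusoidal (SIREN-style) networks."""
    return [math.sin(omega * (sum(w * x for w, x in zip(row, inputs)) + b))
            for row, b in zip(weights, biases)]

def siren_forward(coords, layers, out_weights, out_bias):
    """Tiny coordinate network F: pixel location -> intensity. The network
    parameters, not the pixels, are what gets stored and transmitted."""
    h = coords
    for weights, biases in layers:
        h = siren_layer(h, weights, biases)
    return sum(w * x for w, x in zip(out_weights, h)) + out_bias
```

Decompression is simply evaluating the network at every pixel coordinate, so the compression ratio is governed by the parameter count relative to the image size.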

[93] 2312.01566

Coronary Atherosclerotic Plaque Characterization with Photon-counting CT: a Simulation-based Feasibility Study

Recent development of photon-counting CT (PCCT) brings great opportunities for plaque characterization with much-improved spatial resolution and spectral imaging capability. While existing coronary plaque PCCT imaging results are based on detectors made of CZT or CdTe materials, deep-silicon photon-counting detectors have unique performance characteristics and promise distinct imaging capabilities. In this work, we report a systematic simulation study of a deep-silicon PCCT scanner with a new clinically-relevant digital plaque phantom with realistic geometrical parameters and chemical compositions. This work investigates the effects of spatial resolution, noise, motion artifacts, radiation dose, and spectral characterization. Our simulation results suggest that the deep-silicon PCCT design provides adequate spatial resolution for visualizing a necrotic core and quantitation of key plaque features. Advanced denoising techniques and aggressive bowtie filter designs can keep image noise to acceptable levels at this resolution while keeping radiation dose comparable to that of a conventional CT scan. The ultrahigh resolution of PCCT also means an elevated sensitivity to motion artifacts. It is found that a tolerance of less than 0.4 mm residual movement range requires the application of accurate motion correction methods for best plaque imaging quality with PCCT.

[94] 2312.01568

Multimodal Speech Emotion Recognition Using Modality-specific Self-Supervised Frameworks

Emotion recognition is a topic of significant interest in assistive robotics due to the need to equip robots with the ability to comprehend human behavior, facilitating their effective interaction in our society. Consequently, efficient and dependable emotion recognition systems supporting optimal human-machine communication are required. Multi-modality (including speech, audio, text, images, and videos) is typically exploited in emotion recognition tasks. Much relevant research is based on merging multiple data modalities and training deep learning models utilizing low-level data representations. However, most existing emotion databases are not large (or complex) enough to allow machine learning approaches to learn detailed representations. This paper explores modality-specific pre-trained transformer frameworks for self-supervised learning of speech and text representations for data-efficient emotion recognition, achieving state-of-the-art performance in recognizing emotions. The model applies feature-level fusion using nonverbal cue data points from motion capture to provide multimodal speech emotion recognition. The model was trained on the publicly available IEMOCAP dataset, achieving an overall accuracy of 77.58% for four emotions, outperforming state-of-the-art approaches.

[95] 2312.01586

On the Maximization of Long-Run Reward CVaR for Markov Decision Processes

This paper studies the optimization of Markov decision processes (MDPs) from a risk-seeking perspective, where the risk is measured by conditional value-at-risk (CVaR). The objective is to find a policy that maximizes the long-run CVaR of instantaneous rewards over an infinite horizon across all history-dependent randomized policies. By establishing two optimality inequalities of opposing directions, we prove that the maximum of long-run CVaR of MDPs over the set of history-dependent randomized policies can be found within the class of stationary randomized policies. In contrast to classical MDPs, we find that there may not exist an optimal stationary deterministic policy for maximizing CVaR. Instead, we prove the existence of an optimal stationary randomized policy that requires randomizing over at most two actions. Via a convex optimization representation of CVaR, we convert the long-run CVaR maximization MDP into a minimax problem, where we prove the interchangeability of minimum and maximum and the related existence of saddle point solutions. Furthermore, we propose an algorithm that finds the saddle point solution by solving two linear programs. These results are then extended to objectives that involve maximizing some combination of mean and CVaR of rewards simultaneously. Finally, we conduct numerical experiments to demonstrate the main results.
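The convex representation used above to convert the CVaR maximization into a minimax problem can be checked numerically on a discrete sample; for the upper (risk-seeking) tail it reads CVaR_alpha(R) = min_b { b + E[(R - b)^+] / alpha }, a Rockafellar-Uryasev-style formula:

```python
def upper_cvar(rewards, alpha):
    """Upper-tail CVaR of a discrete reward sample via the convex
    representation CVaR_alpha(R) = min_b { b + E[(R - b)^+] / alpha }.
    For a discrete sample it suffices to search b over the sample values."""
    n = len(rewards)
    def objective(b):
        return b + sum(max(r - b, 0.0) for r in rewards) / (alpha * n)
    return min(objective(b) for b in rewards)
```

For a uniform sample {1, 2, 3, 4}, alpha = 0.25 picks out the single best outcome (4), and alpha = 0.5 gives the mean of the top half (3.5), matching the interpretation of CVaR as the expected reward in the best alpha-fraction of outcomes.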

[96] 2312.01645

A text-dependent speaker verification application framework based on Chinese numerical string corpus

Research indicates that text-dependent speaker verification (TD-SV) often outperforms text-independent verification (TI-SV) in short speech scenarios. However, collecting large-scale fixed-text speech data is challenging, and as speech length increases, factors like sentence rhythm and pauses affect TD-SV's sensitivity to text sequence. Based on these factors, we propose the hypothesis that strategies such as more fine-grained pooling methods on time scales and decoupled representations of speech speaker embedding and text embedding are more suitable for TD-SV. We have introduced an end-to-end TD-SV system based on a dataset comprising longer Chinese numerical string texts. The system comprises a text embedding network, a speaker embedding network, and back-end fusion. First, we recorded a dataset of long Chinese numerical strings, named SHAL, which is publicly available on the Open-SLR website. We addressed the issue of dataset scarcity by augmenting it using Tacotron2 and HiFi-GAN. Next, we introduced a dual representation of speech with text embedding and speaker embedding. In the text embedding network, we employed an enhanced Transformer and introduced a triple loss that includes text classification loss, CTC loss, and decoder loss. For the speaker embedding network, we enhanced a sliding window attentive statistics pooling (SWASP), combined with attentive statistics pooling (ASP) to create a multi-scale pooling method. Finally, we fused text embedding and speaker embedding. Our pooling methods achieved equal error rate (EER) improvements of 49.2% on Hi-Mia and 75.0% on SHAL.
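The attentive statistics pooling (ASP) building block can be sketched as below. This is a simplified single-head version operating on precomputed attention logits; the paper's SWASP variant additionally applies such pooling over sliding windows, which is not shown here:

```python
import numpy as np

def attentive_stats_pool(H, logits):
    # H: (T, D) frame-level features; logits: (T,) attention scores.
    # Returns a (2D,) embedding of attention-weighted mean and std --
    # a simplified single-head sketch of attentive statistics pooling.
    a = np.exp(logits - logits.max())
    a /= a.sum()
    mu = (a[:, None] * H).sum(axis=0)
    var = (a[:, None] * (H - mu) ** 2).sum(axis=0)
    return np.concatenate([mu, np.sqrt(np.maximum(var, 1e-12))])

H = np.random.default_rng(1).standard_normal((50, 8))
emb = attentive_stats_pool(H, np.zeros(50))  # uniform attention as a sanity check
```

With uniform attention the pooled embedding reduces to the plain per-dimension mean and standard deviation, which is a useful check of the weighting logic.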

[97] 2312.01662

Universal Deoxidation of Semiconductor Substrates Assisted by Machine-Learning and Real-Time-Feedback-Control

Thin film deposition is an essential step in the semiconductor process. During preparation or loading, the substrate is unavoidably exposed to air, which has motivated studies of process control to remove the surface oxide before thin film deposition. Optimizing the deoxidation process in molecular beam epitaxy (MBE) for an arbitrary substrate is a multidimensional challenge and sometimes controversial. Due to variations in semiconductor materials and growth processes, the determination of substrate deoxidation temperature is highly dependent on the grower's expertise; the same substrate may yield inconsistent results when evaluated by different growers. Here, we employ a machine learning (ML) hybrid convolution and vision transformer (CNN-ViT) model. This model utilizes reflection high-energy electron diffraction (RHEED) video as input to determine the deoxidation status of the substrate as output, enabling automated substrate deoxidation under a controlled architecture. The approach also extends successfully to the deoxidation of other substrates. Furthermore, we showcase the potential of models trained on data from a single MBE system to achieve high-accuracy deployment on other systems. In contrast to traditional methods, our approach holds exceptional practical value. It standardizes deoxidation temperatures across various equipment and substrate materials, advancing the standardization research process in semiconductor preparation, a significant milestone in thin film growth technology. The concepts and methods demonstrated in this work are anticipated to revolutionize semiconductor manufacturing in the optoelectronics and microelectronics industries by applying them to diverse material growth processes.

[98] 2312.01795

Distributed Continual Learning with CoCoA in High-dimensional Linear Regression

We consider estimation under scenarios where the signals of interest exhibit changing characteristics over time. In particular, we consider the continual learning problem where different tasks, e.g., data with different distributions, arrive sequentially and the aim is to perform well on the newly arrived task without performance degradation on the previously seen tasks. In contrast to the continual learning literature focusing on the centralized setting, we investigate the problem from a distributed estimation perspective. We consider the well-established distributed learning algorithm CoCoA, which distributes the model parameters and the corresponding features over the network. We provide exact analytical characterization for the generalization error of CoCoA under continual learning for linear regression in a range of scenarios, where overparameterization is of particular interest. These analytical results characterize how the generalization error depends on the network structure, the task similarity and the number of tasks, and show how these dependencies are intertwined. In particular, our results show that the generalization error can be significantly reduced by adjusting the network size, where the most favorable network size depends on task similarity and the number of tasks. We present numerical results verifying the theoretical analysis and illustrate the continual learning performance of CoCoA with a digit classification task.
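The feature-partitioned setup can be sketched as a CoCoA-flavored iteration for a single linear-regression task: each worker holds a block of features, solves its local least-squares subproblem against a shared residual, and the coordinator averages the updates. This is an illustrative single-task sketch with exact local solvers and averaging parameter 1/K; the paper's contribution, the generalization-error analysis across sequential tasks, is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, K = 40, 8, 2                         # samples, features, workers
A = rng.standard_normal((n, d))
y = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)

blocks = np.array_split(np.arange(d), K)   # feature partition across workers
w = np.zeros(d)
for _ in range(300):
    r = y - A @ w                          # shared residual: one communication round
    for blk in blocks:
        # each worker solves its local least-squares subproblem exactly
        dw, *_ = np.linalg.lstsq(A[:, blk], r, rcond=None)
        w[blk] += dw / K                   # conservative averaging (gamma = 1/K)
```

The residual iteration r -> (I - (1/K) sum of block projections) r is nonexpansive, so the fit converges to the least-squares residual floor.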

[99] 2312.01842

Exploring the Viability of Synthetic Audio Data for Audio-Based Dialogue State Tracking

Dialogue state tracking (DST) plays a crucial role in extracting information in task-oriented dialogue systems. However, preceding research is limited to the textual modality, primarily due to the shortage of authentic human audio datasets. We address this by investigating synthetic audio data for audio-based DST. To this end, we develop cascading and end-to-end models, train them with our synthetic audio dataset, and test them on actual human speech data. To facilitate evaluation tailored to audio modalities, we introduce a novel metric, PhonemeF1, to capture pronunciation similarity. Experimental results showed that models trained solely on synthetic datasets can generalize their performance to human voice data. By eliminating the dependency on human speech data collection, these insights pave the way for significant practical advancements in audio-based DST. Data and code are available at
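The abstract does not spell out how PhonemeF1 is computed; a simple phoneme-multiset F1 is one plausible proxy and can be sketched as follows (the phoneme symbols are illustrative ARPAbet):

```python
from collections import Counter

def phoneme_f1(ref, hyp):
    """F1 over phoneme multisets -- an illustrative proxy for a
    pronunciation-similarity metric, not necessarily the paper's definition."""
    overlap = sum((Counter(ref) & Counter(hyp)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(hyp)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)
```

Comparing predicted and reference slot values at the phoneme level is more forgiving of ASR-style spelling variation than exact string match.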

[100] 2312.01869

TCP Slice: A semi-distributed TCP algorithm for Delay-constrained Applications

The TCP congestion control protocol serves as the cornerstone of reliable internet communication. However, as new applications require more specific guarantees regarding data rate and delay, network management must adapt. Thus, service providers are shifting from decentralized to centralized control of the network using a software-defined networking (SDN) controller. The SDN controller classifies applications and allocates logically separate resources, called slices, over the physical network. We propose TCP Slice, a congestion control algorithm that meets specific delay and bandwidth guarantees. Obtaining closed-form delay bounds for a client is challenging due to dependencies on other clients and their traffic stochasticity. We use network calculus to derive the client's delay bound and incorporate it as a constraint in the Network Utility Maximization problem. We solve the resulting optimization using dual decomposition and obtain a semi-distributed TCP protocol that can be implemented with the help of the SDN controller and the use of an Explicit Congestion Notification (ECN) bit. Additionally, we propose a proactive approach to congestion control using a digital twin. TCP Slice represents a significant step towards accommodating evolving internet traffic patterns and the need for better network management in the face of increasing application diversity.
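The flavor of a network-calculus delay bound can be shown with the textbook single-flow case: a token-bucket (sigma, rho) arrival curve served by a rate-latency (R, T) service curve yields the closed-form bound D <= T + sigma / R. The paper's derivation additionally handles dependencies between clients; this sketch is only the single-flow special case:

```python
def delay_bound(sigma, rho, R, T):
    """Delay bound: horizontal deviation between the arrival curve rho*t + sigma
    and the service curve R * max(t - T, 0). Single-flow textbook case only."""
    assert rho <= R, "stability requires service rate >= sustained arrival rate"
    return T + sigma / R

# illustrative numbers: 1000-bit burst, 100 bit/s sustained rate,
# 200 bit/s service rate, 10 ms scheduling latency
D = delay_bound(sigma=1000.0, rho=100.0, R=200.0, T=0.01)
```

Such a bound, as a function of the allocated slice rate R, is what can be imposed as a constraint in the utility-maximization problem.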

[101] 2312.01887

Non-Intrusive Load Monitoring for Feeder-Level EV Charging Detection: Sliding Window-based Approaches to Offline and Online Detection

Understanding electric vehicle (EV) charging on the distribution network is key to effective EV charging management and aiding decarbonization across the energy and transport sectors. Advanced metering infrastructure has allowed distribution system operators and utility companies to collect high-resolution load data from their networks. These advancements enable the non-intrusive load monitoring (NILM) technique to detect EV charging using load measurement data. While existing studies primarily focused on NILM for EV charging detection in individual households, there is a research gap on EV charging detection at the feeder level, presenting unique challenges due to the combined load measurement from multiple households. In this paper, we develop a novel and effective approach for EV detection at the feeder level, involving sliding-window feature extraction and classical machine learning techniques, specifically models like XGBoost and Random Forest. Our developed method offers a lightweight and efficient solution, capable of quick training. Moreover, our developed method is versatile, supporting both offline and online EV charging detection. Our experimental results demonstrate high-accuracy EV charging detection at the feeder level, achieving an F-Score of 98.88% in offline detection and 93.01% in online detection.
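The sliding-window feature extraction stage can be sketched as below; the particular summary statistics chosen here (mean, deviation, range, mean absolute difference) are illustrative assumptions rather than the paper's exact feature set, and the resulting matrix would feed a classifier such as XGBoost or Random Forest:

```python
import numpy as np

def window_features(load, win, step):
    # per-window summary statistics of the feeder-level load signal;
    # the exact feature set is an illustrative assumption
    feats = []
    for s in range(0, len(load) - win + 1, step):
        w = load[s:s + win]
        feats.append([w.mean(), w.std(), w.max() - w.min(),
                      np.abs(np.diff(w)).mean()])
    return np.array(feats)

# e.g. one day of minute-resolution load, 60-min windows, 30-min stride
X = window_features(np.random.default_rng(2).standard_normal(1440), win=60, step=30)
```

For online detection the same function applies to the most recent window only, which is what makes the approach lightweight.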

[102] 2312.01904

Unsupervised Anomaly Detection using Aggregated Normative Diffusion

Early detection of anomalies in medical images such as brain MRI is highly relevant for diagnosis and treatment of many conditions. Supervised machine learning methods are limited to a small number of pathologies where there is good availability of labeled data. In contrast, unsupervised anomaly detection (UAD) has the potential to identify a broader spectrum of anomalies by spotting deviations from normal patterns. Our research demonstrates that existing state-of-the-art UAD approaches do not generalise well to diverse types of anomalies in realistic multi-modal MR data. To overcome this, we introduce a new UAD method named Aggregated Normative Diffusion (ANDi). ANDi operates by aggregating differences between predicted denoising steps and ground truth backwards transitions in Denoising Diffusion Probabilistic Models (DDPMs) that have been trained on pyramidal Gaussian noise. We validate ANDi against three recent UAD baselines, and across three diverse brain MRI datasets. We show that ANDi, in some cases, substantially surpasses these baselines and shows increased robustness to varying types of anomalies. Particularly in detecting multiple sclerosis (MS) lesions, ANDi achieves improvements of up to 178% in terms of AUPRC.

[103] 2312.01907

Model Predictive Control Approach to Autonomous Formation Flight

Formation flight is the coordinated flight of multiple vehicles. Various automatic control methods have been used for the autonomous execution of formation flight of aerial vehicles. In this paper, the capacity of the model predictive control (MPC) approach in the autonomous execution of formation flight is examined. MPC is a controller capable of performing formation flight, tracking a desired trajectory while avoiding collisions between aerial vehicles and with obstacles. Through this approach, aerial vehicle models with six degrees of freedom perform formation flight autonomously in a three-dimensional environment, mostly in a triangular formation. Not only can the trajectory for the formation flight be tracked through the MPC architecture, but the collision avoidance strategies of the aerial vehicles can also be executed by it. Simulation studies show that MPC has sufficient capability in both cases. Therefore, it is concluded that this method can deal with constraints and avoid obstacles as well as collisions between aerial vehicles. However, implementing MPC on aerial vehicles in real time remains challenging.
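The receding-horizon idea behind MPC can be sketched with the simplest possible case: an unconstrained finite-horizon regulator for a double integrator, solved in closed form at every step. This is a minimal sketch; the paper's controller additionally handles six-DOF dynamics plus obstacle and inter-vehicle collision constraints, which require a constrained solver:

```python
import numpy as np

def mpc_input(A, B, x0, N=10, r=0.1):
    """First input of an unconstrained finite-horizon MPC (receding horizon).

    Stacks predictions X = Phi x0 + Gam U and minimizes ||X||^2 + r ||U||^2
    in closed form via the normal equations."""
    n, m = B.shape
    Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    Gam = np.zeros((N * n, N * m))
    for i in range(N):
        for j in range(i + 1):
            Gam[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    U = np.linalg.solve(Gam.T @ Gam + r * np.eye(N * m), -Gam.T @ Phi @ x0)
    return U[:m]  # apply only the first input, then re-plan

# double-integrator example: drive position and velocity to the origin
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
x = np.array([1.0, 0.0])
for _ in range(100):
    x = A @ x + B @ mpc_input(A, B, x)
```

Re-solving at every step is what lets MPC react to disturbances and, in the constrained version, to moving obstacles and neighboring vehicles.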

[104] 2312.01968

Augmenting Channel Charting with Classical Wireless Source Localization Techniques

Channel Charting aims to construct a map of the radio environment by leveraging similarity relationships found in high-dimensional channel state information. Although resulting channel charts usually accurately represent local neighborhood relationships, even under conditions with strong multipath propagation, they often fall short in capturing global geometric features. On the other hand, classical model-based localization methods, such as triangulation and multilateration, can easily localize signal sources in the global coordinate frame. However, these methods rely heavily on the assumption of line-of-sight channels and distributed antenna deployments. Based on measured data, we compare classical source localization techniques to channel charts with respect to localization performance. We suggest and evaluate methods to enhance Channel Charting with model-based localization approaches: One approach involves using information derived from classical localization methods to map channel chart locations to physical positions after conventional training of the forward charting function. Foremost, though, we suggest incorporating information from model-based approaches during the training of the forward charting function in what we call "augmented Channel Charting". We demonstrate that Channel Charting can outperform classical localization methods on the considered dataset.
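The classical multilateration baseline mentioned above has a standard linearized solution: subtracting one range equation ||x - a_i||^2 = d_i^2 from the others yields a linear system in the source position. A minimal sketch, assuming exact ranges and a 2D deployment:

```python
import numpy as np

def multilaterate(anchors, d):
    # linearize by subtracting the first range equation from the others:
    # 2 (a_i - a_0) . x = d_0^2 - d_i^2 + |a_i|^2 - |a_0|^2
    a0, d0 = anchors[0], d[0]
    A = 2.0 * (anchors[1:] - a0)
    b = d0**2 - d[1:]**2 + (anchors[1:]**2).sum(axis=1) - (a0**2).sum()
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
p_true = np.array([3.0, 4.0])
p_hat = multilaterate(anchors, np.linalg.norm(anchors - p_true, axis=1))
```

With noisy or non-line-of-sight ranges this estimate degrades, which is precisely the weakness that augmented Channel Charting is meant to compensate.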

[105] 2312.01970

CaRL: Cascade Reinforcement Learning with State Space Splitting for O-RAN based Traffic Steering

The Open Radio Access Network (O-RAN) architecture empowers intelligent and automated optimization of the RAN through applications deployed on the RAN Intelligent Controller (RIC) platform, enabling capabilities beyond what is achievable with traditional RAN solutions. Within this paradigm, Traffic Steering (TS) emerges as a pivotal RIC application that focuses on optimizing cell-level mobility settings in near-real-time, aiming to significantly improve network spectral efficiency. In this paper, we design a novel TS algorithm based on a Cascade Reinforcement Learning (CaRL) framework. We propose state space factorization and policy decomposition to reduce the need for large models and well-labeled datasets. For each sub-state space, an RL sub-policy will be trained to learn an optimized mapping onto the action space. To apply CaRL on new network regions, we propose a knowledge transfer approach to initialize a new sub-policy based on knowledge learned by the trained policies. To evaluate CaRL, we build a data-driven and scalable RIC digital twin (DT) that is modeled using real-world data, including network configuration, user geo-distribution, and traffic demand, among others, from a tier-1 mobile operator in the US. We evaluate CaRL on two DT scenarios representing two network clusters in two different cities and compare its performance with the business-as-usual (BAU) policy and other competing optimization approaches using heuristic and Q-table algorithms. Benchmarking results show that CaRL performs the best and improves the average cluster-aggregated downlink throughput over the BAU policy by 24% and 18% in these two scenarios, respectively.

[106] 2312.01994

A Generative Self-Supervised Framework using Functional Connectivity in fMRI Data

Deep neural networks trained on Functional Connectivity (FC) networks extracted from functional Magnetic Resonance Imaging (fMRI) data have gained popularity due to the increasing availability of data and advances in model architectures, including Graph Neural Network (GNN). Recent research on the application of GNN to FC suggests that exploiting the time-varying properties of the FC could significantly improve the accuracy and interpretability of the model prediction. However, the high cost of acquiring high-quality fMRI data and corresponding phenotypic labels poses a hurdle to their application in real-world settings, such that a model naïvely trained in a supervised fashion can suffer from insufficient performance or a lack of generalization on a small amount of data. In addition, most Self-Supervised Learning (SSL) approaches for GNNs to date adopt a contrastive strategy, which tends to lose appropriate semantic information when the graph structure is perturbed or does not leverage both spatial and temporal information simultaneously. In light of these challenges, we propose a generative SSL approach that is tailored to effectively harness spatio-temporal information within dynamic FC. Our empirical results, from experiments with large-scale (>50,000) fMRI datasets, demonstrate that our approach learns valuable representations and enables the construction of accurate and robust models when fine-tuned for downstream tasks.
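Dynamic FC is commonly constructed as a stack of sliding-window correlation matrices over the regional time series; the window length and stride below are illustrative choices, not the paper's settings:

```python
import numpy as np

def dynamic_fc(ts, win, step):
    # ts: (T, R) time series for R brain regions; returns a (W, R, R) stack of
    # sliding-window correlation matrices -- a common dynamic-FC construction
    mats = [np.corrcoef(ts[s:s + win].T)
            for s in range(0, ts.shape[0] - win + 1, step)]
    return np.stack(mats)

# toy example: 120 time points, 5 regions, non-overlapping 30-point windows
fc = dynamic_fc(np.random.default_rng(3).standard_normal((120, 5)), win=30, step=30)
```

A spatio-temporal GNN would then treat each windowed matrix as one graph snapshot in the sequence.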

[107] 2312.02025

Self-Synchronized Trichel Pulse Trains in Multi-Point Corona Discharge Systems

Evidence of self-synchronization has been observed in multi-electrode corona discharge systems, where the application of high negative DC voltages induces a self-sustained mode of current pulse trains. These pulses, historically referred to as Trichel pulses, characterize the operation of a two-electrode system where the discharge electrode is subjected to a high negative DC voltage. Numerical modeling reveals that in a system of two discharge pairs, each operating in a pulsed mode, synchronization occurs due to weak yet significant interactions. These interactions arise from the mutual influence of electric fields and space charges generated by each discharge pair. This influence extends beyond the individual pairs, leading to synchronization between them. A three-species discharge model, formulated using the finite element method, was employed to simulate this process. Two different numerical models were investigated: a 2D model, consisting of two discharge electrodes and a third grounded electrode, and two 1D-axisymmetric models, consisting of dual and triple pairs of discharge systems. Experiments show a multi-stable nature of the coupled pulsed discharge systems, indicating that under appropriate conditions the pulse trains exhibit two distinct modes of synchronization: in-phase synchronization and anti-phase synchronization. The occurrence of each mode depends on factors such as interaction strength, applied voltage level, and various system parameters. Furthermore, variations in these factors can lead to additional outcomes, including out-of-phase synchronization, as well as scenarios involving near-harmonic oscillations and quenching.

[108] 2312.02042

Kirchhoff Meets Johnson: In Pursuit of Unconditionally Secure Communication

Noise: an enemy to be dealt with and a major factor limiting communication system performance. However, what if there is gold in that garbage? In conventional engineering, our focus is primarily on eliminating, suppressing, combating, or even ignoring noise and its detrimental impacts. Conversely, could we exploit it similarly to biology, which utilizes noise-like carrier signals to convey information? In this context, the utilization of noise, or noise-like signals in general, has been put forward as a means to realize unconditionally secure communication systems in the future. In this tutorial article, we begin by tracing the origins of thermal noise-based communication and highlighting one of its significant applications for ensuring unconditionally secure networks: the Kirchhoff-law-Johnson-noise (KLJN) secure key exchange scheme. We then delve into the inherent challenges tied to secure communication and discuss the imperative need for physics-based key distribution schemes in pursuit of unconditional security. Concurrently, we provide a concise overview of quantum key distribution (QKD) schemes and draw comparisons with their KLJN-based counterparts. Finally, extending beyond wired communication loops, we explore the transmission of noise signals over-the-air and evaluate their potential for stealth and secure wireless communication systems.
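The key-exchange logic of the KLJN scheme can be sketched at the protocol level. Each party privately connects a low or high resistor to the shared wire; the Johnson-noise spectra on the wire depend only on the unordered resistor pair, so mixed (low, high) rounds are indistinguishable to an eavesdropper and yield one secure bit each. The sketch below models only this logic; the noise physics itself is not simulated:

```python
import random

def kljn_exchange(n_rounds, seed=0):
    """Toy KLJN round logic: keep only the mixed (L,H)/(H,L) rounds,
    which an eavesdropper observing the line noise cannot distinguish."""
    rng = random.Random(seed)
    bits_a, bits_b = [], []
    for _ in range(n_rounds):
        ra, rb = rng.choice("LH"), rng.choice("LH")
        if ra != rb:                    # mixed pair: one secure shared bit
            bits_a.append(ra == "H")    # Alice's bit
            bits_b.append(rb == "H")    # Bob holds the complement
    return bits_a, bits_b

bits_a, bits_b = kljn_exchange(1000)
```

On average half the rounds are discarded (both parties picked the same resistor, which leaks the choice), so the secure key rate is about one bit per two rounds.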

[109] 2312.02080

Fixed-point methods for long-term power control and beamforming design in large-scale MIMO

This study presents novel applications of fixed-point methods to solve previously open joint power control and beamforming design problems in modern large-scale MIMO systems, e.g., based on the cell-free massive MIMO and XL-MIMO concepts. In particular, motivated by the need for scalable system architectures, we revisit the classical sum power minimization and max-min fair design criteria by considering long-term power control and beamforming design based on channel statistics and possibly limited channel state information (CSI) sharing across distributed processing units. This approach is believed to mitigate the severe scalability issues of competing short-term optimal algorithms in the literature, which must be executed for every channel realization by a central controller endowed with global CSI, hence imposing very demanding requirements in terms of computation and interconnection capabilities. The obtained optimal algorithms are then illustrated and compared against existing short-term and long-term approaches via numerical simulations in a cell-free massive MIMO setup.
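A classical fixed-point power-control iteration illustrates the kind of machinery involved. The sketch below is the short-term Foschini-Miljanic baseline for fixed SINR targets, included only to show the fixed-point structure; the paper's long-term designs instead operate on channel statistics and different objectives:

```python
import numpy as np

def power_control(G, sinr_target, noise, iters=200):
    """Foschini-Miljanic fixed point: p_i <- gamma_i * I_i(p) / G_ii,
    where I_i(p) is interference plus noise at receiver i. Converges when
    the target SINRs are feasible (spectral-radius condition)."""
    g = np.diag(G)
    p = np.ones(len(sinr_target))
    for _ in range(iters):
        interference = G @ p - g * p + noise
        p = sinr_target * interference / g
    return p

G = np.array([[1.0, 0.1], [0.1, 1.0]])   # toy two-user link-gain matrix
gamma = np.array([1.0, 1.0])             # SINR targets
p = power_control(G, gamma, noise=0.1)
sinr = np.diag(G) * p / (G @ p - np.diag(G) * p + 0.1)
```

The update is a standard interference function, so the iteration converges monotonically to the minimum-power solution when one exists.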

[110] 2312.02102

Mitigating Data Injection Attacks on Federated Learning

Federated learning is a technique that allows multiple entities to collaboratively train models using their data without compromising data privacy. However, despite its advantages, federated learning can be susceptible to false data injection attacks. In these scenarios, a malicious entity with control over specific agents in the network can manipulate the learning process, leading to a suboptimal model. Consequently, addressing these data injection attacks presents a significant research challenge in federated learning systems. In this paper, we propose a novel technique to detect and mitigate data injection attacks on federated learning systems. Our mitigation method is a local scheme, performed during a single instance of training by the coordinating node, allowing mitigation during the convergence of the algorithm. Whenever an agent is suspected to be an attacker, its data is ignored for a certain period; this decision is periodically re-evaluated. We prove that with probability 1, after a finite time, all attackers will be ignored while the probability of ignoring a trustful agent becomes 0, provided that there is a majority of truthful agents. Simulations show that when the coordinating node detects and isolates all the attackers, the model recovers and converges to the truthful model.
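The detect-and-ignore idea can be sketched with a simple robust screen at the coordinating node: flag agents whose update lies far from the coordinate-wise median of all updates. This is an illustrative screen in the spirit of the scheme, not the paper's exact statistical test, and the threshold is an assumption:

```python
import numpy as np

def screen_updates(updates, thresh=2.0):
    """Return a boolean mask of agents to keep: those whose update is within
    `thresh` times the median distance from the coordinate-wise median update.
    Illustrative only; the paper's decision rule and guarantees differ."""
    U = np.asarray(updates, dtype=float)
    med = np.median(U, axis=0)
    dist = np.linalg.norm(U - med, axis=1)
    return dist <= thresh * np.median(dist) + 1e-9

# nine honest agents reporting near-zero updates, one injecting a large update
honest = [[0.0, 0.0]] * 9
keep = screen_updates(honest + [[100.0, 100.0]])
```

Because the median is used as the reference, the screen remains meaningful as long as truthful agents form a majority, mirroring the paper's majority assumption.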

[111] 2312.02112

Distributed Optimization with Feasible Set Privacy

We consider the setup of a constrained optimization problem with two agents $E_1$ and $E_2$ who jointly wish to learn the optimal solution set while keeping their feasible sets $\mathcal{P}_1$ and $\mathcal{P}_2$ private from each other. The objective function $f$ is globally known and each feasible set is a collection of points from a global alphabet. We adopt a sequential symmetric private information retrieval (SPIR) framework where one of the agents (say $E_1$) privately checks, in $\mathcal{P}_2$, for the presence of candidate solutions of the problem constrained to $\mathcal{P}_1$ only, while learning no further information on $\mathcal{P}_2$ than the solution alone. Further, we extract an information-theoretically private threshold PSI (ThPSI) protocol from our scheme and characterize its download cost. We show that, compared to privately acquiring the feasible set $\mathcal{P}_1\cap \mathcal{P}_2$ using an SPIR-based private set intersection (PSI) protocol, and finding the optimum, our scheme is better as it incurs less information leakage and less download cost than the former. Over all possible uniform mappings of $f$ to a fixed range of values, our scheme outperforms the former with a high probability.