### SOLBP: Second-Order Loopy Belief Propagation for Inference in Uncertain Bayesian Networks

In second-order uncertain Bayesian networks, the conditional probabilities are known only via distributions, i.e., probabilities over probabilities. The delta method has been applied to extend exact first-order inference methods to propagate both means and variances through sum-product networks derived from Bayesian networks, thereby characterizing epistemic uncertainty, or the uncertainty in the model itself. Alternatively, second-order belief propagation has been demonstrated for polytrees but not for general directed acyclic graph structures. In this work, we extend Loopy Belief Propagation to the setting of second-order Bayesian networks, giving rise to Second-Order Loopy Belief Propagation (SOLBP). For second-order Bayesian networks, SOLBP generates inferences consistent with those generated by sum-product networks, while being more computationally efficient and scalable.
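As a hedged illustration of the delta-method idea (a first-order Taylor sketch, not the paper's exact machinery), consider propagating a mean and variance through a single product node of a sum-product network with independent inputs:

```python
def product_node(mu1, var1, mu2, var2):
    """First-order delta-method propagation through Y = X1 * X2 for
    independent inputs: E[Y] ~= mu1*mu2 and
    Var[Y] ~= mu2^2*var1 + mu1^2*var2 (linearizing about the means)."""
    mean = mu1 * mu2
    var = (mu2 ** 2) * var1 + (mu1 ** 2) * var2
    return mean, var

# Two uncertain conditional probabilities with means 0.7 and 0.5.
mean, var = product_node(0.7, 0.01, 0.5, 0.02)  # mean 0.35, var 0.0123
```

Repeating this at every sum and product node is what lets a second-order method carry variances, not just means, through the network.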

### Reproduction and Replication of an Adversarial Stylometry Experiment

Maintaining anonymity while communicating using natural language remains a challenge. Standard authorship attribution techniques that analyze candidate authors' writing styles achieve uncomfortably high accuracy even when the number of candidate authors is high. Adversarial stylometry defends against authorship attribution with the goal of preventing unwanted deanonymization. This paper reproduces and replicates experiments in a seminal study of defenses against authorship attribution (Brennan et al., 2012). We are able to successfully reproduce and replicate the original results, although we conclude that the effectiveness of the defenses studied is overstated due to a lack of a control group in the original study. In our replication, we find new evidence suggesting that an entirely automatic method, round-trip translation, merits re-examination as it appears to reduce the effectiveness of established authorship attribution methods.

### Modeling thermal regulation in thin vascular systems: A mathematical analysis

Mimicking vascular systems in living beings, designers have realized microvascular composites to achieve thermal regulation and other functionalities, such as electromagnetic modulation, sensing, and healing. Material systems with these functionalities benefit various aerospace, military, and civilian applications. Although heat transfer is a mature field, the control of thermal characteristics in synthetic microvascular systems is new, and the fundamental understanding is still hazy. What would benefit designers are predictive mathematical models and an in-depth qualitative understanding of vasculature-based active cooling/heating. The central focus of this paper is therefore to address this knowledge gap. First, we present a reduced-order model with broad applicability, allowing the inlet temperature to differ from the ambient temperature. Second, we apply mathematical analysis tools to this reduced-order model and reveal many heat transfer properties of fluid-sequestered vasculature systems. We derive point-wise properties (minimum, maximum, and comparison principles) and global properties (such as bounds on performance metrics, including the mean surface temperature and cooling efficiency). These newfound results deepen our understanding of active cooling/heating and support the refinement of thermal regulation systems.

### A Survey of Recommender System Techniques and the Ecommerce Domain

In the era of big data, it is hard for users to find the right information among the huge amounts of data hosted on online platforms. In such a situation, an information filtering system is needed to help them find the information they are looking for. In recent years, recommender systems have emerged as a research field to fill this role, and they have become important because of their many real-life applications. This paper reviews the different techniques and developments of recommender systems in e-commerce, e-tourism, e-resources, e-government, e-learning, and e-library. By analyzing recent work on this topic, we provide a detailed overview of current developments and identify existing difficulties in recommendation systems. The results give practitioners and researchers the necessary guidance and insights into recommendation systems and their applications.

### SynKB: Semantic Search for Synthetic Procedures

In this paper we present SynKB, an open-source, automatically extracted knowledge base of chemical synthesis protocols. Similar to proprietary chemistry databases such as Reaxys, SynKB allows chemists to retrieve structured knowledge about synthetic procedures. By taking advantage of recent advances in natural language processing for procedural texts, SynKB supports more flexible queries about reaction conditions, and thus has the potential to help chemists search the literature for conditions used in relevant reactions as they design new synthetic routes. Using customized Transformer models to automatically extract information from 6 million synthesis procedures described in U.S. and EU patents, we show that for many queries, SynKB has higher recall than Reaxys, while maintaining high precision. We plan to make SynKB available as an open-source tool; in contrast, proprietary chemistry databases require costly subscriptions.

### Dürfen Maschinen denken (können)? Warum Künstliche Intelligenz eine Ethik braucht. (Are Machines Allowed to (be able to) Think? Why Artificial Intelligence Needs Ethics)

Speech manuscript (German + English) of the impulse lecture for the panel discussion "May machines (be able to) think?" at the 102nd Katholikentag on May 28, 2022 in Stuttgart. Panel: Winfried Kretschmann (MdL, Prime Minister Baden-Württemberg, Stuttgart), Ursula Nothelle-Wildfeuer (Freiburg), Michael Resch (Stuttgart), Karsten Wendland (Aalen). Moderation: Stefanie Rentsch (Fulda). Advocate of the audience: Verena Neuhausen (Stuttgart).

### Combining Predictions under Uncertainty: The Case of Random Decision Trees

A common approach to aggregating classification estimates in an ensemble of decision trees is to either use voting or to average the probabilities for each class. The latter takes uncertainty into account, but not the reliability of the uncertainty estimates (so to say, the "uncertainty about the uncertainty"). More generally, much remains unknown about how to best combine probabilistic estimates from multiple sources. In this paper, we investigate a number of alternative prediction methods. Our methods are inspired by the theories of probability, belief functions, and reliable classification, as well as a principle that we call evidence accumulation. Our experiments on a variety of data sets are based on random decision trees, which guarantee high diversity in the predictions to be combined. Somewhat unexpectedly, we found that taking the average over the probabilities is actually hard to beat. However, evidence accumulation showed consistently better results on all but very small leaves.
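The two baseline aggregation schemes above can be sketched on a toy example (the tree outputs here are hypothetical, not the paper's experimental data); note how the two rules can disagree:

```python
import numpy as np

# Each row: one tree's class-probability estimate for a single instance.
probs = np.array([
    [0.60, 0.40],
    [0.20, 0.80],
    [0.55, 0.45],
])

# Majority voting: each tree votes for its argmax class.
votes = np.bincount(probs.argmax(axis=1), minlength=probs.shape[1])
vote_pred = votes.argmax()   # two of three trees favor class 0

# Averaging: mean of the probability vectors, then argmax.
avg = probs.mean(axis=0)     # [0.45, 0.55]
avg_pred = avg.argmax()      # the confident tree tips the average to class 1
```

Averaging lets one highly confident estimate outweigh two lukewarm ones, which is exactly the behavior the reliability-aware methods in the paper aim to scrutinize.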

### Reward Design For An Online Reinforcement Learning Algorithm Supporting Oral Self-Care

Dental disease is one of the most common chronic diseases despite being largely preventable. However, professional advice on optimal oral hygiene practices is often forgotten or abandoned by patients. Therefore, patients may benefit from timely and personalized encouragement to engage in oral self-care behaviors. In this paper, we develop an online reinforcement learning (RL) algorithm for use in optimizing the delivery of mobile-based prompts to encourage oral hygiene behaviors. One of the main challenges in developing such an algorithm is ensuring that the algorithm considers the impact of the current action on the effectiveness of future actions (i.e., delayed effects), especially when the algorithm has been made simple in order to run stably and autonomously in a constrained, real-world setting (i.e., highly noisy, sparse data). We address this challenge by designing a quality reward which maximizes the desired health outcome (i.e., high-quality brushing) while minimizing user burden. We also highlight a procedure for optimizing the hyperparameters of the reward by building a simulation environment test bed and evaluating candidates using the test bed. The RL algorithm discussed in this paper will be deployed in Oralytics, an oral self-care app that provides behavioral strategies to boost patient engagement in oral hygiene practices.

### SemAug: Semantically Meaningful Image Augmentations for Object Detection Through Language Grounding

Data augmentation is an essential technique for improving the generalization of deep neural networks. The majority of existing image-domain augmentations either rely on geometric and structural transformations, or apply different kinds of photometric distortions. In this paper, we propose an effective technique for image augmentation by injecting contextually meaningful knowledge into the scenes. Our method of semantically meaningful image augmentation for object detection via language grounding, SemAug, starts by calculating semantically appropriate new objects that can be placed into relevant locations in the image (the what and where problems). Then it embeds these objects into their relevant target locations, thereby promoting diversity of the object instance distribution. Our method allows for introducing new object instances and categories that may not even exist in the training set. Furthermore, it does not require the additional overhead of training a context network, so it can be easily added to existing architectures. Our comprehensive set of evaluations shows that the proposed method is very effective in improving generalization, while the overhead is negligible. In particular, for a wide range of model architectures, our method achieved ~2-4% and ~1-2% mAP improvements for the task of object detection on the Pascal VOC and COCO datasets, respectively.

### Private Query Release via the Johnson-Lindenstrauss Transform

We introduce a new method for releasing answers to statistical queries with differential privacy, based on the Johnson-Lindenstrauss lemma. The key idea is to randomly project the query answers to a lower dimensional space so that the distance between any two vectors of feasible query answers is preserved up to an additive error. Then we answer the projected queries using a simple noise-adding mechanism, and lift the answers up to the original dimension. Using this method, we give, for the first time, purely differentially private mechanisms with optimal worst case sample complexity under average error for answering a workload of $k$ queries over a universe of size $N$. As other applications, we give the first purely private efficient mechanisms with optimal sample complexity for computing the covariance of a bounded high-dimensional distribution, and for answering 2-way marginal queries. We also show that, up to the dependence on the error, a variant of our mechanism is nearly optimal for every given query workload.
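A minimal sketch of the project-perturb-lift pipeline described above; the Gaussian projection, the noise scale, and the transpose-based lifting here are illustrative simplifications under assumed parameters, not the paper's exact mechanism or calibration:

```python
import numpy as np

rng = np.random.default_rng(0)

def private_query_release(answers, r, epsilon):
    """Illustrative project/noise/lift pipeline: randomly project the
    k query answers to r dimensions (JL-style), add Laplace noise in
    the low-dimensional space, then lift back with the transpose."""
    k = len(answers)
    # JL-style random projection with variance 1/r preserves distances
    # in expectation.
    P = rng.normal(0.0, 1.0 / np.sqrt(r), size=(r, k))
    noisy = P @ answers + rng.laplace(scale=1.0 / epsilon, size=r)
    return P.T @ noisy  # lift answers back to the original dimension

est = private_query_release(np.ones(100), r=20, epsilon=1.0)
```

The point of the projection is that noise need only be added in the r-dimensional space, which is what drives the sample-complexity gains claimed in the abstract.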

### Deep Unsupervised Domain Adaptation: A Review of Recent Advances and Perspectives

Deep learning has become the method of choice to tackle real-world problems in different domains, partly because of its ability to learn from data and achieve impressive performance on a wide range of applications. However, its success usually relies on two assumptions: (i) vast troves of labeled datasets are required for accurate model fitting, and (ii) training and testing data are independent and identically distributed. Its performance on unseen target domains, thus, is not guaranteed, especially when encountering out-of-distribution data at the adaptation stage. The performance drop on data in a target domain is a critical problem in deploying deep neural networks that are successfully trained on data in a source domain. Unsupervised domain adaptation (UDA) is proposed to counter this, by leveraging both labeled source domain data and unlabeled target domain data to carry out various tasks in the target domain. UDA has yielded promising results in natural image processing, video analysis, natural language processing, time-series data analysis, medical image analysis, etc. As UDA is a rapidly evolving topic, this review provides a systematic comparison of its methods and applications. In addition, we discuss the connection of UDA with its closely related tasks, e.g., domain generalization and out-of-distribution detection. Furthermore, deficiencies in current methods and promising directions for future work are highlighted.

### Retrospective Cost Parameter Estimation with Application to Space Weather Modeling

This chapter reviews standard parameter-estimation techniques and presents a novel gradient-, ensemble-, and adjoint-free data-driven parameter estimation technique in the DDDAS framework. This technique, called retrospective cost parameter estimation (RCPE), is motivated by large-scale complex estimation models characterized by high-dimensional nonlinear dynamics, nonlinear parameterizations, and representational models. RCPE is illustrated by estimating unknown parameters in three examples. In the first example, salient features of RCPE are investigated by considering a parameter estimation problem in a low-order nonlinear system. In the second example, RCPE is used to estimate the convective coefficient and the viscosity in the generalized Burgers equation by using a scalar measurement. In the final example, RCPE is used to estimate thermal conductivity coefficients that relate the temporal temperature variation with the vertical gradient of the temperature in the atmosphere.

### WatchPed: Pedestrian Crossing Intention Prediction Using Embedded Sensors of Smartwatch

The pedestrian intention prediction problem is to estimate whether or not the target pedestrian will cross the street. State-of-the-art approaches heavily rely on visual information collected with the front camera of the ego-vehicle to make a prediction of the pedestrian's intention. As such, the performance of existing methods significantly degrades when the visual information is not accurate, e.g., when the distance between the pedestrian and the ego-vehicle is large, or the lighting conditions are poor. In this paper, we design, implement, and evaluate the first pedestrian intention prediction model based on the integration of motion sensor data gathered with the smartwatch (or smartphone) of the pedestrian. A novel machine learning architecture is proposed to effectively incorporate the motion sensor data to reinforce the visual information and significantly improve performance in adverse situations where the visual information may be unreliable. We also conduct a large-scale data collection and present the first pedestrian intention prediction dataset integrated with time-synchronized motion sensor data. The dataset consists of a total of 128 video clips with different distances and varying levels of lighting conditions. We trained our model using the widely-used JAAD dataset and our own dataset, and compared the performance with a state-of-the-art model. The results demonstrate that our model outperforms the state-of-the-art method, particularly when the distance to the pedestrian is large (over 70 m) and the lighting conditions are insufficient.

### Exploring the Viability of Robot-supported Flipped Classes in English for Medical Purposes Reading Comprehension

This study delved into the viability of robot-supported flipped classes in English for Medical Purposes (EMP) reading comprehension. In a 16-session course, the reading comprehension and subsequent workplace performance of 444 students taught with Commercially-Off-The-Shelf and Self-Generated robot flipped classes were compared. The results indicated that the flipped classes brought about a good instructional-learning ambience in postsecondary education for EMP reading comprehension and fostered a proactive approach to workplace performance. In addition, a Mixed Effect Model revealed that student participation in the Self-Generated robot-supported flipped classes yielded a larger effect size (+17.6%) than in the Commercially-Off-The-Shelf robot-supported flipped classes. Analyses produced five moderators contributing to EMP reading comprehension and workplace performance: reading proficiency, attitude, manner of practicing, and student and teacher roles.

### Entity Anchored ICD Coding

Medical coding is a complex task, requiring assignment of a subset of over 72,000 ICD codes to a patient's notes. Modern natural language processing approaches to this task have been challenged by the length of the input and the size of the output space. We limit our model inputs to a small window around medical entities found in our documents. From those local contexts, we build contextualized representations of both ICD codes and entities, and aggregate over these representations to form document-level predictions. In contrast to existing methods, which use a representation fixed either in size or by the codes seen in training, we represent ICD codes by encoding the code description with local context. We discuss metrics appropriate to deploying coding systems in practice. We show that our approach is superior to existing methods in both standard and deployable measures, including performance on rare and unseen codes.
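The input-restriction step can be sketched as follows; the function name, toy tokens, and spans are hypothetical illustrations of the idea, not the paper's implementation:

```python
def entity_windows(tokens, entity_spans, window=5):
    """Keep only a small token window around each detected medical
    entity; spans are (start, end) token indices, end-exclusive."""
    windows = []
    for start, end in entity_spans:
        lo = max(0, start - window)
        hi = min(len(tokens), end + window)
        windows.append(tokens[lo:hi])
    return windows

tokens = "the patient has acute renal failure today".split()
# Suppose an entity tagger flagged tokens 3..6 ("acute renal failure").
ctx = entity_windows(tokens, [(3, 6)], window=2)
```

Feeding only these short windows, rather than the full note, is what keeps the encoder's input length tractable.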

### Comments on SPD-41 software licensing requirements

The proposed changes to Science Missions Directorate (SMD) Policy Document 41 (SPD-41) include requirements that SMD-funded software should be released using a permissive software license and following best practices from the open source software community. This is a welcome change that will lead to greater acceptance and adoption of SMD-funded software. However, ambiguities exist in the policy text around what licenses will be allowed. This could lead to problems in the solicitation process and friction with the open source software community. Moreover, some additional clarifications are warranted around what exceptions might be allowed to the software licensing policy.

### Self-Supervised Learning for Anomalous Channel Detection in EEG Graphs: Application to Seizure Analysis

Electroencephalogram (EEG) signals are effective tools for seizure analysis, where one of the most important challenges is the accurate detection of seizure events and of the brain regions in which a seizure happens or initiates. However, all existing machine learning-based algorithms for seizure analysis require access to labeled seizure data, while acquiring labeled data is labor-intensive, expensive, and clinician-dependent given the subjective nature of the visual qualitative interpretation of EEG signals. In this paper, we propose to detect seizure channels and clips in a self-supervised manner where no access to seizure data is needed. The proposed method considers local structural and contextual information embedded in EEG graphs by employing positive and negative sub-graphs. We train our method by minimizing contrastive and generative losses. The use of local EEG sub-graphs makes the algorithm an appropriate choice when access to all EEG channels is impossible due to complications such as skull fractures. We conduct an extensive set of experiments on the largest seizure dataset and demonstrate that our proposed framework outperforms the state-of-the-art methods in EEG-based seizure study. Ours is the only method that requires no access to seizure data in its training phase, yet it establishes a new state of the art in the field, outperforming all related supervised methods.

### Joint Communication and Channel Discrimination

We consider a basic joint communication and sensing setup comprising a transmitter, a receiver and a sensor. The transmitter sends a codeword to the receiver through a discrete memoryless channel, and the receiver is interested in retrieving the underlying encoded message. On the other hand, the sensor picks up a noisy version of the transmitted codeword through one of two possible discrete memoryless channels. The sensor knows the codeword and wishes to discriminate between the two possible channels, i.e. to identify the channel that has generated the output given the input. We study the trade-off between communication and sensing in the asymptotic regime, captured in terms of the message coding rate against the two types of discrimination error exponents. We characterize the optimal trade-off between the rate and the exponents for general discrete memoryless channels with an input cost constraint.

### Invariant Inference With Provable Complexity From the Monotone Theory

Invariant inference algorithms such as interpolation-based inference and IC3/PDR show that it is feasible, in practice, to find inductive invariants for many interesting systems, but non-trivial upper bounds on the computational complexity of such algorithms are scarce, and limited to simple syntactic forms of invariants. In this paper we obtain invariant inference algorithms, in the domain of propositional transition systems, with provable upper bounds on the number of SAT calls. We do this by building on the monotone theory, developed by Bshouty for exact learning of Boolean formulas. We prove results for two invariant inference frameworks: (i) model-based interpolation, where we show an algorithm that, under certain conditions about reachability, efficiently infers invariants when they have both short CNF and DNF representations (transcending previous results about monotone invariants); and (ii) abstract interpretation in a domain based on the monotone theory that was previously studied in relation to property-directed reachability, where we propose an efficient implementation of the best abstract transformer, leading to overall complexity bounds on the number of SAT calls. These results build on a novel procedure for computing least monotone overapproximations.

### Hypergraphs with Edge-Dependent Vertex Weights: p-Laplacians and Spectral Clustering

We study p-Laplacians and spectral clustering for a recently proposed hypergraph model that incorporates edge-dependent vertex weights (EDVWs). These weights can reflect the different importance of vertices within a hyperedge, thus conferring higher expressivity and flexibility on the hypergraph model. By constructing submodular EDVWs-based splitting functions, we convert hypergraphs with EDVWs into submodular hypergraphs, for which the spectral theory is better developed. In this way, existing concepts and theorems such as p-Laplacians and Cheeger inequalities proposed under the submodular hypergraph setting can be directly extended to hypergraphs with EDVWs. For submodular hypergraphs with EDVWs-based splitting functions, we propose an efficient algorithm to compute the eigenvector associated with the second smallest eigenvalue of the hypergraph 1-Laplacian. We then utilize this eigenvector to cluster the vertices, achieving higher clustering accuracy than traditional spectral clustering based on the 2-Laplacian. More broadly, the proposed algorithm works for all submodular hypergraphs that are graph reducible. Numerical experiments using real-world data demonstrate the effectiveness of combining spectral clustering based on the 1-Laplacian and EDVWs.
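For contrast, the traditional 2-Laplacian spectral bipartition that serves as the baseline above can be sketched on a toy weighted graph (an ordinary graph, not a hypergraph with EDVWs; the weights are illustrative):

```python
import numpy as np

# Toy graph: two tightly coupled pairs {0,1} and {2,3}, weakly linked.
W = np.array([[0.00, 1.00, 0.05, 0.00],
              [1.00, 0.00, 0.00, 0.05],
              [0.05, 0.00, 0.00, 1.00],
              [0.00, 0.05, 1.00, 0.00]])
D = np.diag(W.sum(axis=1))
L = D - W                       # combinatorial 2-Laplacian

# Eigenvector of the second smallest eigenvalue (Fiedler vector);
# its sign pattern bipartitions the vertices.
eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]
labels = (fiedler > 0).astype(int)
```

The 1-Laplacian variant studied in the paper replaces this quadratic eigenproblem with a nonsmooth one whose optimum relates more tightly to the Cheeger cut, which is where the reported accuracy gains come from.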

### Learnable Filters for Geometric Scattering Modules

We propose a new graph neural network (GNN) module, based on relaxations of recently proposed geometric scattering transforms, which consist of a cascade of graph wavelet filters. Our learnable geometric scattering (LEGS) module enables adaptive tuning of the wavelets to encourage band-pass features to emerge in learned representations. The incorporation of our LEGS module in GNNs enables the learning of longer-range graph relations compared to many popular GNNs, which often rely on encoding graph structure via smoothness or similarity between neighbors. Further, its wavelet priors result in simplified architectures with significantly fewer learned parameters compared to competing GNNs. We demonstrate the predictive performance of LEGS-based networks on graph classification benchmarks, as well as the descriptive quality of their learned features in biochemical graph data exploration tasks. Our results show that LEGS-based networks match or outperform popular GNNs, as well as the original geometric scattering construction, on many datasets, in particular in biochemical domains, while retaining certain mathematical properties of handcrafted (non-learned) geometric scattering.

### A Research Software Engineering Workflow for Computational Science and Engineering

University research groups in Computational Science and Engineering (CSE) generally lack dedicated funding and personnel for Research Software Engineering (RSE), which, combined with the pressure to maximize the number of scientific publications, shifts the focus away from sustainable research software development and reproducible results. The neglect of RSE in CSE at university research groups negatively impacts the scientific output: research data, including research software, related to a CSE publication cannot be found, reproduced, or re-used; different ideas are not combined easily into new ideas; and published methods must very often be re-implemented to be investigated further. This slows down CSE research significantly, resulting in considerable losses in time and, consequently, public funding. We propose an RSE workflow for CSE that addresses these challenges and improves the quality of research output in CSE. Our workflow applies established software engineering practices adapted for CSE: software testing, result visualization, and periodical cross-linking of software with reports/publications and data, timed by milestones in the scientific publication process. The workflow introduces minimal work overhead, crucial for university research groups, and delivers modular and tested software linked to publications whose results can easily be reproduced. We define research software quality from the perspective of a pragmatic researcher: the ability to quickly find the publication, data, and software related to a published research idea, quickly reproduce results, understand or re-use a CSE method, and finally extend the method with new research ideas.

### A Library for Representing Python Programs as Graphs for Machine Learning

Graph representations of programs are commonly a central element of machine learning for code research. We introduce an open-source Python library, python_graphs, that applies static analysis to construct graph representations of Python programs suitable for training machine learning models. Our library admits the construction of control-flow graphs, data-flow graphs, and "composite program graphs" that combine control-flow, data-flow, syntactic, and lexical information about a program. We present the capabilities and limitations of the library, perform a case study applying the library to millions of competitive programming submissions, and showcase the library's utility for machine learning research.
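As a minimal, library-independent illustration of viewing a program as a graph (this uses only the standard-library ast module; it is not the python_graphs API), one can extract the parent-child edges of a program's syntax tree:

```python
import ast

def ast_edges(source):
    """Parse a Python program and emit parent->child edges of its
    abstract syntax tree, labeled by node type; one simple way to
    treat a program as a graph."""
    tree = ast.parse(source)
    edges = []
    for parent in ast.walk(tree):
        for child in ast.iter_child_nodes(parent):
            edges.append((type(parent).__name__, type(child).__name__))
    return edges

edges = ast_edges("x = 1\ny = x + 1\n")
```

Libraries like python_graphs go further by layering control-flow and data-flow edges on top of this syntactic skeleton.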

### An Overview and Prospective Outlook on Robust Training and Certification of Machine Learning Models

In this discussion paper, we survey recent research surrounding the robustness of machine learning models. As learning algorithms become increasingly popular in data-driven control systems, their robustness to data uncertainty must be ensured in order to maintain reliable safety-critical operations. We begin by reviewing common formalisms for such robustness, and then move on to discuss popular and state-of-the-art techniques for training robust machine learning models as well as methods for provably certifying such robustness. From this unification of robust machine learning, we identify and discuss pressing directions for future research in the area.

### Encoding Integers and Rationals on Neuromorphic Computers using Virtual Neuron

Neuromorphic computers perform computations by emulating the human brain, and use extremely low power. They are expected to be indispensable for energy-efficient computing in the future. While they are primarily used in spiking neural network-based machine learning applications, neuromorphic computers are known to be Turing-complete, and thus, capable of general-purpose computation. However, to fully realize their potential for general-purpose, energy-efficient computing, it is important to devise efficient mechanisms for encoding numbers. Current encoding approaches have limited applicability and may not be suitable for general-purpose computation. In this paper, we present the virtual neuron as an encoding mechanism for integers and rational numbers. We evaluate the performance of the virtual neuron on physical and simulated neuromorphic hardware and show that it can perform an addition operation using 23 nJ of energy on average using a mixed-signal memristor-based neuromorphic processor. We also demonstrate its utility by using it in some of the mu-recursive functions, which are the building blocks of general-purpose computation.

### On the Adoption and Effects of Source Code Reuse on Defect Proneness and Maintenance Effort

Context. Software reusability mechanisms, like inheritance and delegation in Object-Oriented programming, are widely recognized as key instruments of software design. They are used to reduce the risk of source code being affected by defects, as well as to reduce the effort required to maintain and evolve source code. Previous work has traditionally employed source code reuse metrics for prediction purposes, e.g., in the context of defect prediction. Objective. However, our research identifies two noticeable limitations of the current literature. First, little is still known about the extent to which developers actually employ code reuse mechanisms over time. Second, it is still unclear how these mechanisms may contribute to explaining defect proneness and maintenance effort during software evolution. We aim at bridging this knowledge gap, as an improved understanding of these aspects might provide insights into the actual support provided by these mechanisms, e.g., by suggesting whether and how to use them for prediction purposes. Method. We propose an exploratory study aiming at (1) assessing how developers use inheritance and delegation during software evolution; and (2) statistically analyzing the impact of inheritance and delegation on fault proneness and maintenance effort. The study will be conducted on the commits of 17 Java projects from the DEFECTS4J dataset.

### Towards Inclusive HRI: Using Sim2Real to Address Underrepresentation in Emotion Expression Recognition

Robots and artificial agents that interact with humans should be able to do so without bias and inequity, but facial perception systems have notoriously been found to work more poorly for certain groups of people than others. In our work, we aim to build a system that can perceive humans in a more transparent and inclusive manner. Specifically, we focus on dynamic expressions on the human face, which are difficult to collect for a broad set of people due to privacy concerns and the fact that faces are inherently identifiable. Furthermore, datasets collected from the Internet are not necessarily representative of the general population. We address this problem by offering a Sim2Real approach in which we use a suite of 3D simulated human models that enables us to create an auditable synthetic dataset covering 1) underrepresented facial expressions, outside of the six basic emotions, such as confusion; 2) ethnic or gender minority groups; and 3) a wide range of viewing angles from which a robot may encounter a human in the real world. By augmenting a small dynamic emotional expression dataset containing 123 samples with a synthetic dataset containing 4536 samples, we achieved an improvement in accuracy of 15% on our own dataset and 11% on an external benchmark dataset, compared to the performance of the same model architecture without synthetic training data. We also show that this additional step improves accuracy specifically for racial minorities when the architecture's feature extraction weights are trained from scratch.

### BoW3D: Bag of Words for Real-time Loop Closing in 3D LiDAR SLAM

Loop closing is a fundamental part of simultaneous localization and mapping (SLAM) for autonomous mobile systems. In the field of visual SLAM, bag of words (BoW) has achieved great success in loop closure. The BoW features used for loop searching can also be used in the subsequent 6-DoF loop correction. However, for 3D LiDAR SLAM, the state-of-the-art methods may fail to effectively recognize the loop in real time, and usually cannot correct the full 6-DoF loop pose. To address this limitation, we present a novel Bag of Words for real-time loop closing in 3D LiDAR SLAM, called BoW3D. The novelty of our method lies in that it not only efficiently recognizes revisited loop places, but also corrects the full 6-DoF loop pose in real time. BoW3D builds the bag of words based on the 3D feature LinK3D, which is efficient, pose-invariant, and can be used for accurate point-to-point matching. We furthermore embed our proposed method into a 3D LiDAR odometry system to evaluate loop closing performance. We test our method on public datasets and compare it against other state-of-the-art algorithms. BoW3D shows better performance in terms of F1 max and extended precision scores in most scenarios, with superior real-time performance. Notably, BoW3D takes an average of 50 ms to recognize and correct the loops in KITTI 00 (which includes more than 4K 64-beam LiDAR scans) when executed on a notebook with an Intel Core i7 @ 2.2 GHz processor.
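
The core bag-of-words mechanism can be sketched generically with an inverted index: quantized features ("words") vote for previously seen frames, and a revisited place is declared when enough votes accumulate for a sufficiently old frame. This is an illustrative sketch only; BoW3D's actual structure is built on LinK3D descriptors and additionally recovers the 6-DoF loop pose, which is omitted here.

```python
from collections import defaultdict

class BoWLoopDetector:
    """Minimal bag-of-words place recognition (illustrative sketch;
    not BoW3D's actual data structure)."""

    def __init__(self):
        self.inverted_index = defaultdict(set)  # word -> frame ids
        self.frames = {}                        # frame id -> word set

    def add_frame(self, frame_id, words):
        self.frames[frame_id] = set(words)
        for w in words:
            self.inverted_index[w].add(frame_id)

    def query(self, words, min_votes=3, exclude_recent=10, current_id=0):
        """Return the best-voted old frame, or None if no loop is found."""
        votes = defaultdict(int)
        for w in set(words):
            for fid in self.inverted_index[w]:
                # ignore recent frames: they are trivially similar
                if fid < current_id - exclude_recent:
                    votes[fid] += 1
        candidates = [(v, f) for f, v in votes.items() if v >= min_votes]
        return max(candidates)[1] if candidates else None

det = BoWLoopDetector()
det.add_frame(0, ["a", "b", "c", "d"])
det.add_frame(1, ["e", "f", "g"])
# frame 20 revisits the place first seen at frame 0
loop = det.query(["a", "b", "c", "x"], current_id=20)
```

In a real system the words would come from quantized geometric descriptors rather than strings, and the matched word correspondences would feed a 6-DoF pose solver.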

### CTI4AI: Threat Intelligence Generation and Sharing after Red Teaming AI Models

As the practicality of Artificial Intelligence (AI) and Machine Learning (ML) based techniques grows, there is an ever-increasing threat of adversarial attacks. There is a need to red team this ecosystem to identify system vulnerabilities and potential threats, characterize properties that will enhance system robustness, and encourage the creation of effective defenses. A secondary need is to share this AI security threat intelligence among different stakeholders, such as model developers, users, and AI/ML security professionals. In this paper, we create and describe a prototype system, CTI4AI, to address the need to methodically identify and share AI/ML-specific vulnerabilities and threat intelligence.

### Low Rank Tensor Decompositions and Approximations

There exist linear relations among the entries of low rank tensors. These linear relations can be expressed by multi-linear polynomials, which are called generating polynomials. We use generating polynomials to compute tensor rank decompositions and low rank tensor approximations. We prove that this gives a quasi-optimal low rank tensor approximation if the given tensor is sufficiently close to a low rank one.
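
The existence of polynomial relations among the entries of a low rank tensor can be illustrated with a simpler, well-known consequence of low rank: every matricization (flattening) of a rank-$r$ tensor has matrix rank at most $r$, so all $(r+1)\times(r+1)$ minors, which are polynomials in the entries, vanish. The generating polynomials used in the paper are multi-linear and more refined; this sketch only illustrates the general phenomenon.

```python
import numpy as np

rng = np.random.default_rng(0)
r = 2  # target CP rank

# Build a rank-2 tensor T = sum_k a_k (x) b_k (x) c_k
A = rng.standard_normal((4, r))
B = rng.standard_normal((5, r))
C = rng.standard_normal((6, r))
T = np.einsum("ik,jk,lk->ijl", A, B, C)

# Every mode-n flattening of a rank-r tensor has matrix rank <= r:
# all (r+1)x(r+1) minors (polynomials in the entries) vanish.
for mode in range(3):
    M = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
    assert np.linalg.matrix_rank(M) <= r
```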

### Context-Aware Streaming Perception in Dynamic Environments

Efficient vision works maximize accuracy under a latency budget. These works evaluate accuracy offline, one image at a time. However, real-time vision applications like autonomous driving operate in streaming settings, where the ground truth changes between the start and finish of inference, resulting in a significant accuracy drop. Therefore, recent work proposed to maximize accuracy in streaming settings on average. In this paper, we propose to maximize streaming accuracy for every environment context. We posit that scenario difficulty influences the initial (offline) accuracy difference, while obstacle displacement in the scene affects the subsequent accuracy degradation. Our method, Octopus, uses these scenario properties to select configurations that maximize streaming accuracy at test time. Our method improves tracking performance (S-MOTA) by 7.4% over the conventional static approach. Further, the performance improvement from our method comes in addition to, and not instead of, advances in offline accuracy.

### Deep convolutional surrogates and degrees of freedom in thermal design

We present surrogate models for heat transfer and pressure drop prediction of complex fin geometries generated using composite Bezier curves. The thermal design process involves iterative high-fidelity simulations that are complex, computationally expensive, and time-consuming. With the advancement of machine learning algorithms and Graphics Processing Units (GPUs), we can utilize the parallel processing architecture of GPUs, rather than relying solely on CPUs, to accelerate thermo-fluid simulation. In this study, Convolutional Neural Networks (CNNs) are used to predict the results of Computational Fluid Dynamics (CFD) directly from topologies saved as images. Cases with a single fin as well as with multiple morphable fins are studied. A comparison of the Xception network and a regular CNN is presented for the single-fin design. Results show high prediction accuracy for the single-fin design, particularly with the Xception network. Increasing the design freedom to multiple fins increases the prediction error. This error, however, remains within three percent for pressure drop and heat transfer estimation, which is valuable for design purposes.

### Core-shell enhanced single particle model for lithium iron phosphate batteries: model formulation and analysis of numerical solutions

In this paper, a core-shell enhanced single particle model for iron-phosphate battery cells is formulated, implemented, and verified. Starting from the description of the charge and mass transport dynamics of the positive and negative electrodes, the positive electrode intercalation and deintercalation phenomena and associated phase transitions are described with the core-shell modeling paradigm. Assuming two phases are formed in the positive electrode, one rich and one poor in lithium, a core-shrinking problem is formulated and the phase transition is modeled through a shell phase that covers the core one. A careful discretization of the coupled partial differential equations is proposed and used to convert the model into a system of ordinary differential equations. To ensure robust and accurate numerical solutions of the governing equations, a sensitivity analysis of numerical solutions is performed and the best setting, in terms of solver tolerances, solid phase concentration discretization points, and input current sampling time, is determined in a newly developed probabilistic framework. Finally, unknown model parameters are identified at different C-rate scenarios and the model is verified against experimental data.

### Single Round-trip Hierarchical ORAM via Succinct Indices

Accesses to data stored remotely create a side channel that is known to leak information even if the content is encrypted. Oblivious RAM is a cryptographic primitive that provides confidentiality of access patterns in remote storage settings. To outsource a database of $n$ blocks of $B$ bits, traditional solutions restrict the client to $\mathcal{O}(B)$ bits of private memory. A class of solutions, known as Hierarchical ORAM, has achieved theoretically optimal bandwidth performance in this setting. Hierarchical ORAM distributes data blocks at the server across a hierarchy of hash tables, with the most recently accessed blocks in the lower levels of the hierarchy. Locating a block in the hierarchy requires a large number of round-trips of communication with the server per virtual access. Furthermore, rebuilding the hierarchy incurs a large concrete bandwidth overhead. Thus, Hierarchical ORAMs are seen as theoretical artefacts and have not been deployed in practice. For many applications, such as cloud storage, the client can afford a larger, $\omega(B)$-bit, private memory allocation. With this premise, we introduce Rank ORAM, the first practical Hierarchical ORAM that takes advantage of a larger client memory. We construct a compact client-side data structure that keeps track of how recently data blocks have been accessed. Leveraging this information, Rank ORAM reduces both the number of round-trips of communication and the concrete bandwidth overhead of Hierarchical ORAM. In addition, Rank ORAM achieves a smaller client memory allocation than existing (non-Hierarchical) state-of-the-art practical ORAM schemes while maintaining comparable bandwidth performance. Our experiments on real network file-system traces demonstrate a factor-of-$100$ reduction in client memory compared to existing approaches.

### HetVis: A Visual Analysis Approach for Identifying Data Heterogeneity in Horizontal Federated Learning

Horizontal federated learning (HFL) enables distributed clients to train a shared model while keeping their data private. In training high-quality HFL models, the data heterogeneity among clients is one of the major concerns. However, due to security constraints and the complexity of deep learning models, it is challenging to investigate data heterogeneity across different clients. To address this issue, based on a requirement analysis, we developed a visual analytics tool, HetVis, for participating clients to explore data heterogeneity. We identify data heterogeneity by comparing the prediction behaviors of the global federated model and the stand-alone model trained with local data. A context-aware clustering of the inconsistent records is then performed to provide a summary of data heterogeneity. Combined with the proposed comparison techniques, we develop a novel set of visualizations to identify heterogeneity issues in HFL. We designed three case studies to show how HetVis can assist client analysts in understanding different types of heterogeneity issues. Expert reviews and a comparative study demonstrate the effectiveness of HetVis.

### Temporal Action Localization with Multi-temporal Scales

Temporal action localization plays an important role in video analysis, which aims to localize and classify actions in untrimmed videos. Previous methods often predict actions on a feature space of a single temporal scale. However, the temporal features at a low-level scale lack sufficient semantics for action classification, while a high-level scale cannot provide rich details of the action boundaries. To address this issue, we propose to predict actions on a feature space of multiple temporal scales. Specifically, we use refined feature pyramids of different scales to pass semantics from high-level to low-level scales. In addition, to capture the long temporal scale of the entire video, we use a spatial-temporal transformer encoder to model the long-range dependencies of video frames. The refined features with long-range dependencies are then fed into a classifier for coarse action prediction. Finally, to further improve the prediction accuracy, we propose a frame-level self-attention module to refine the classification and boundaries of each action instance. Extensive experiments show that the proposed method outperforms state-of-the-art approaches on the THUMOS14 dataset and achieves comparable performance on the ActivityNet1.3 dataset. Compared with A2Net (TIP20, Avg\{0.3:0.7\}), Sub-Action (CSVT2022, Avg\{0.1:0.5\}), and AFSD (CVPR21, Avg\{0.3:0.7\}) on the THUMOS14 dataset, the proposed method achieves improvements of 12.6\%, 17.4\%, and 2.2\%, respectively.

### SGM-Net: Semantic Guided Matting Net

Human matting refers to extracting human parts from natural images with high quality, including fine details such as hair, glasses, and hats. This technology plays an essential role in image synthesis and visual effects in the film industry. When a green screen is not available, existing human matting methods either need additional inputs (such as a trimap or background image) or rely on models with high computational cost and complex network structures, which brings great difficulties to the application of human matting in practice. To alleviate these problems, most existing methods (such as MODNet) use multiple branches to pave the way for matting through segmentation, but these methods do not make full use of the image features and only utilize the prediction results of the network as guidance information. Therefore, we propose a module that generates a foreground probability map and add it to MODNet to obtain the Semantic Guided Matting Net (SGM-Net). Given only a single image, our method can perform the human matting task. We verify our method on the P3M-10k dataset. Compared with the benchmark, our method shows significant improvement on various evaluation metrics.

### Active Bucketized Learning for ACOPF Optimization Proxies

This paper considers optimization proxies for Optimal Power Flow (OPF), i.e., machine-learning models that approximate the input/output relationship of OPF. Recent work has focused on showing that such proxies can be of high fidelity. However, their training requires significant data, each instance necessitating the (offline) solving of an OPF for a sample of the input distribution. To meet the requirements of market-clearing applications, this paper proposes Active Bucketized Sampling (ABS), a novel active learning framework that aims at training the best possible OPF proxy within a time limit. ABS partitions the input distribution into buckets and uses an acquisition function to determine where to sample next. It relies on an adaptive learning rate that increases and decreases over time. Experimental results demonstrate the benefits of ABS.
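
The bucketized acquisition idea can be illustrated with a toy sketch: partition the input range into buckets, and at each step spend the labeling budget in the bucket where a simple per-bucket proxy currently has the largest validation error. The acquisition function, adaptive learning rate, and offline OPF labeling of the actual ABS framework are all replaced by stand-ins here; the names below are hypothetical.

```python
import random

def bucketized_active_sampling(f, buckets, budget, seed=0):
    """Toy sketch of bucketized active sampling: each step, label one new
    point (by calling the expensive labeler f) from the bucket with the
    largest current error of a per-bucket mean predictor."""
    rng = random.Random(seed)
    # seed each bucket with one labeled sample
    data = {b: [f(rng.uniform(*b))] for b in buckets}
    for _ in range(budget):
        def err(b):
            # proxy: predict the mean label seen so far in the bucket,
            # estimate its error on a few fresh probe points
            mean = sum(data[b]) / len(data[b])
            probe = [f(rng.uniform(*b)) for _ in range(4)]
            return sum((y - mean) ** 2 for y in probe) / len(probe)
        worst = max(buckets, key=err)          # acquisition step
        data[worst].append(f(rng.uniform(*worst)))
    return {b: len(v) for b, v in data.items()}

# a target that is flat on [0,1) but varies steeply on [1,2):
# sampling should concentrate in the harder bucket
counts = bucketized_active_sampling(lambda x: 0.0 if x < 1 else 10 * x,
                                    buckets=[(0.0, 1.0), (1.0, 2.0)],
                                    budget=20)
```

In the OPF setting, `f` would be the offline OPF solver producing a label for a sampled input instance, which is exactly the expensive step the acquisition tries to ration.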

### Universal Solutions of Feedforward ReLU Networks for Interpolations

This paper provides a theoretical framework for the solutions of feedforward ReLU networks for interpolation, in terms of what we call an interpolation matrix. The framework summarizes, extends, and generalizes our three preceding works, with the expectation that the solutions encountered in engineering practice could be included in this framework and finally understood. For three-layer networks, we classify different kinds of solutions and model them in a normalized form; the solution finding is investigated along three dimensions, namely the data, the network, and the training; and the mechanism of overparameterized solutions is interpreted. For deep-layer networks, we present a general result called the sparse-matrix principle, which can describe some basic behaviors of deep layers and explain the phenomenon of sparse activation that appears in engineering applications associated with brain science; an advantage of deep layers compared to shallower ones is manifested in this principle. As applications, a general solution of deep neural networks for classification is constructed via this principle, and we also use the principle to study the data-disentangling property of encoders. Analogous to the three-layer case, the solutions of deep layers are also explored through several dimensions. Finally, the mechanism of multi-output neural networks is explained from the perspective of interpolation matrices.

### On GSOR, the Generalized Successive Overrelaxation Method for Double Saddle-Point Problems

We consider the generalized successive overrelaxation (GSOR) method for solving a class of block three-by-three saddle-point problems. Based on the necessary and sufficient conditions for all roots of a real cubic polynomial to have modulus less than one, we derive convergence results under reasonable assumptions. We also analyze a class of block lower triangular preconditioners induced from GSOR and derive explicit and sharp spectral bounds for the preconditioned matrices. We report numerical experiments on test problems from the liquid crystal director model and the coupled Stokes-Darcy flow, demonstrating the usefulness of GSOR.
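
For readers unfamiliar with the underlying iteration, classical point SOR, which GSOR generalizes with blockwise relaxation parameters adapted to the block three-by-three saddle-point structure, sweeps through the unknowns and overrelaxes each Gauss-Seidel update. The following sketch is the textbook scalar method, not the paper's GSOR:

```python
import numpy as np

def sor(A, b, omega=1.2, tol=1e-10, max_iter=500):
    """Classical successive overrelaxation for Ax = b: each sweep performs
    a Gauss-Seidel update and overrelaxes it by the factor omega."""
    x = np.zeros_like(b, dtype=float)
    n = len(b)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # use already-updated entries x[:i] and old entries x_old[i+1:]
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old) < tol:
            break
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])  # diagonally dominant, so SOR converges
b = np.array([1.0, 2.0, 3.0])
x = sor(A, b)
```

In the GSOR setting, the scalar divisions by $A_{ii}$ become solves with diagonal blocks of the three-by-three block system, and the convergence analysis reduces to locating the roots of a real cubic polynomial inside the unit disk.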

### Identifying Source Code File Experts

In software development, the identification of source code file experts is an important task. Identifying these experts helps to improve software maintenance and evolution activities, such as developing new features, code reviews, and bug fixes. Although some studies have proposed repository-mining techniques to automatically identify source code experts, there are still gaps in this area that can be explored, for example, investigating new variables related to source code knowledge and applying machine learning to improve the performance of techniques that identify source code experts. The goal of this study is to investigate opportunities to improve the performance of existing techniques for recommending source code file experts. We built an oracle by collecting data from the development history and surveying developers of 113 software projects. We then used this oracle to (i) analyze the correlation between measures extracted from the development history and the developers' source code knowledge and (ii) investigate the use of machine learning classifiers by evaluating their performance in identifying source code file experts. First Authorship and Recency of Modification are the variables with the highest positive and negative correlations with source code knowledge, respectively. Machine learning classifiers outperformed the linear techniques (F-Measure = 71% to 73%) on the public dataset, but this advantage is not clear on the private dataset, with F-Measure ranging from 55% to 68% for the linear techniques and from 58% to 67% for the ML techniques. Overall, the linear techniques and the machine learning classifiers achieved similar performance, particularly in terms of F-Measure; however, the machine learning classifiers usually obtained higher precision, while the linear techniques obtained the highest recall values.

### Color Image Edge Detection using Multi-scale and Multi-directional Gabor filter

In this paper, a color edge detection method is proposed in which multi-scale Gabor filters are used to obtain edges from input color images. The main advantage of the proposed method is that high edge detection accuracy is attained while maintaining good noise robustness. The proposed method consists of three steps. First, the RGB color image is converted to the CIE L*a*b* space because of its wide color gamut and uniform color distribution. Second, a set of Gabor filters is used to smooth the input images, and color edge strength maps (ESMs) are extracted and fused into a new ESM that combines noise robustness with accurate edge extraction. Third, embedding the fused ESM in the route of the Canny detector yields a noise-robust color edge detector. The results show that the proposed detector performs better in terms of detection accuracy and noise robustness.
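
A Gabor filter is a Gaussian envelope modulating an oriented sinusoid; an edge strength map can then be formed by fusing the filter responses over orientations. The sketch below takes a simple maximum over eight orientations on a grayscale step edge; the paper's multi-scale, multi-directional fusion on the L*a*b* channels is more elaborate, and the parameter values here are arbitrary illustrations.

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, lambd, gamma=0.5, psi=0.0):
    """Real part of a 2-D Gabor filter: a Gaussian envelope times a
    sinusoid of wavelength lambd oriented at angle theta."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2)) \
        * np.cos(2 * np.pi * xr / lambd + psi)

def filter2d(img, kern):
    """'Same'-size 2-D correlation with edge padding."""
    half = kern.shape[0] // 2
    padded = np.pad(img, half, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, kern.shape)
    return np.einsum("ijkl,kl->ij", windows, kern)

# Fuse responses over orientations into an edge strength map (ESM).
image = np.zeros((32, 32))
image[:, 16:] = 1.0  # vertical step edge at column 16
esm = np.max([np.abs(filter2d(image, gabor_kernel(9, 2.0, th, 6.0)))
              for th in np.arange(8) * np.pi / 8], axis=0)
# esm peaks near the edge and vanishes in flat regions
```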

### Evaluate Quantum Combinatorial Optimization for Distribution Network Reconfiguration

This paper aims to implement and evaluate the performance of quantum computing on combinatorial optimization problems arising from the operation of the power grid. To this end, we construct a novel mixed-integer conic programming formulation for the reconfiguration of a radial distribution network in response to faults in distribution lines. Compared to the existing bus injection model in the literature, our formulation, based on the branch flow model, is theoretically equivalent without needing non-explainable variables, and is thus more numerically stable. The network reconfiguration model is then used as a benchmark to evaluate the performance of quantum computing algorithms on real quantum computers. We show that while current quantum computing algorithms, with their fast execution times on quantum computers, can be a promising solution candidate, the heuristic nature stemming from their theoretical foundation should be considered carefully when applying them to power grid optimization problems.

### Reliable Decision from Multiple Subtasks through Threshold Optimization: Content Moderation in the Wild

Social media platforms struggle to protect users from harmful content through content moderation. These platforms have recently leveraged machine learning models to cope with the vast amount of user-generated content produced daily. Since moderation policies vary depending on countries and types of products, it is common to train and deploy the models per policy. However, this approach is highly inefficient, especially when the policies change, requiring dataset re-labeling and model re-training on the shifted data distribution. To alleviate this cost inefficiency, social media platforms often employ third-party content moderation services that provide prediction scores for multiple subtasks, such as predicting the existence of underage personnel, rude gestures, or weapons, instead of directly providing final moderation decisions. However, making a reliable automated moderation decision for a specific target policy from the prediction scores of the multiple subtasks has not been widely explored yet. In this study, we formulate real-world scenarios of content moderation and introduce a simple yet effective threshold optimization method that searches the optimal thresholds of the multiple subtasks to make a reliable moderation decision in a cost-effective way. Extensive experiments demonstrate that our approach shows better performance in content moderation compared to existing threshold optimization methods and heuristics.
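
The threshold-search idea can be sketched as follows: given per-item subtask scores and policy labels on a validation set, flag an item when any subtask score exceeds its threshold, and pick the threshold vector that maximizes F1. This exhaustive grid search is only illustrative; the paper's method searches more efficiently, and its decision rule need not be a simple OR over subtasks.

```python
from itertools import product

def f1(preds, labels):
    tp = sum(p and l for p, l in zip(preds, labels))
    fp = sum(p and not l for p, l in zip(preds, labels))
    fn = sum(not p and l for p, l in zip(preds, labels))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def search_thresholds(scores, labels, grid=(0.3, 0.5, 0.7, 0.9)):
    """Exhaustive threshold search: flag an item if ANY subtask score
    reaches its threshold; return the threshold vector with best F1."""
    n_tasks = len(scores[0])
    best = None
    for thr in product(grid, repeat=n_tasks):
        preds = [any(s >= t for s, t in zip(row, thr)) for row in scores]
        score = f1(preds, labels)
        if best is None or score > best[0]:
            best = (score, thr)
    return best[1]

# two subtasks (e.g., weapons, rude gestures); label 1 = violates policy
scores = [(0.9, 0.1), (0.2, 0.8), (0.4, 0.3), (0.1, 0.2)]
labels = [1, 1, 0, 0]
thr = search_thresholds(scores, labels)
```

When the policy changes, only the labels change and the search is re-run, which is exactly the cost saving over re-labeling data and re-training per-policy models.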

### The Correlated Arc Orienteering Problem

This paper introduces the correlated arc orienteering problem (CAOP), where the task is to find routes for a team of robots to maximize the collection of rewards associated with features in the environment. These features can be points or one-dimensional arcs in the environment, and can have spatial correlation, i.e., visiting a feature in the environment may provide a portion of the reward associated with a correlated feature. A robot incurs costs as it traverses the environment, and the total cost for its route is limited by a resource constraint such as battery life or operation time. As environments are often large, we permit multiple depots where the robots must start and end their routes. The CAOP generalizes the correlated orienteering problem (COP), where the rewards are only associated with point features, and the arc orienteering problem (AOP), where the rewards are not spatially correlated. We formulate a mixed integer quadratic program (MIQP) that formalizes the problem and gives optimal solutions. However, the problem is NP-hard, and therefore we develop an efficient greedy constructive algorithm. We illustrate the problem with two different applications: informative path planning for methane gas leak detection and coverage of road networks.

### Integrating Satellites and Mobile Edge Computing for 6G Wide-Area Edge Intelligence: Minimal Structures and Systematic Thinking

The sixth-generation (6G) network will shift its focus to supporting everything including various machine-type devices (MTDs) in an everyone-centric manner. To ubiquitously cover the MTDs working in rural and disastrous areas, satellite communications become indispensable, while mobile edge computing (MEC) also plays an increasingly crucial role. Their sophisticated integration enables wide-area edge intelligence which promises to facilitate globally-distributed customized services. In this article, we present typical use cases of integrated satellite-MEC networks and discuss the main challenges therein. Inspired by the protein structure and the systematic engineering methodology, we propose three minimal integrating structures, based on which a complex integrated satellite-MEC network can be treated as their extension and combination. We discuss the unique characteristics and key problems of each minimal structure. Accordingly, we establish an on-demand network orchestration framework to enrich the hierarchy of network management, which further leads to a process-oriented network optimization method. On that basis, a case study is utilized to showcase the benefits of on-demand network orchestration and process-oriented network optimization. Finally, we outline potential research issues to envision a more intelligent, more secure, and greener integrated network.

### Understanding the Challenges of Team-Based Live Streaming for First-person Shooter Games

First-person shooter (FPS) game tournaments take place across the globe. A growing number of people choose to watch FPS games online instead of attending the game events in person. However, live streaming might miss critical highlight moments in the game, such as kills and tactics. To reduce such live streaming mistakes, which we name jarring observations, we identify how and why the live streaming team fails to capture highlight moments. We conducted a field study of live streaming competitions of Game For Peace, a popular FPS mobile game, to summarize five typical jarring observations and identify three primary reasons that cause them. We further studied how to improve the live streaming system to prevent jarring observations by conducting semi-structured interviews with two professional streaming teams for Game For Peace. The study showed that a better system should (1) add a new sub-team role to share the director's responsibility of managing observers; (2) provide interfaces customized for the three roles of live streamers in the team; (3) abstract more geographical information; (4) predict the priority of observation targets; and (5) provide non-verbal interfaces for synchronization between sub-teams. Our work provides insights for esports streaming system researchers and developers to improve the system for a smoother audience experience.

### Knowledge-Injected Federated Learning

Federated learning is an emerging technique for training models from decentralized data sets. In many applications, data owners participating in the federated learning system hold not only the data but also a set of domain knowledge. Such knowledge includes human know-how and craftsmanship that can be extremely helpful to the federated learning task. In this work, we propose a federated learning framework that allows the injection of participants' domain knowledge, where the key idea is to refine the global model with knowledge locally. The scenario we consider is motivated by a real industry-level application, and we demonstrate the effectiveness of our approach to this application.

### FeedLens: Polymorphic Lenses for Personalizing Exploratory Search over Knowledge Graphs

The vast scale and open-ended nature of knowledge graphs (KGs) make exploratory search over them cognitively demanding for users. We introduce a new technique, polymorphic lenses, that improves exploratory search over a KG by obtaining new leverage from the existing preference models that KG-based systems maintain for recommending content. The approach is based on a simple but powerful observation: in a KG, preference models can be re-targeted to recommend not only entities of a single base entity type (e.g., papers in the scientific literature KG, products in an e-commerce KG), but also all other types (e.g., authors, conferences, institutions; sellers, buyers). We implement our technique in a novel system, FeedLens, which is built over Semantic Scholar, a production system for navigating the scientific literature KG. FeedLens reuses the existing preference models on Semantic Scholar -- people's curated research feeds -- as lenses for exploratory search. Semantic Scholar users can curate multiple feeds/lenses for different topics of interest, e.g., one for human-centered AI and another for document embeddings. Although these lenses are defined in terms of papers, FeedLens re-purposes them to also guide search over authors, institutions, venues, etc. Our system design is based on feedback from intended users via two pilot surveys (n=17 and n=13, respectively). We compare FeedLens and Semantic Scholar via a third (within-subjects) user study (n=15) and find that FeedLens increases user engagement while reducing the cognitive effort required to complete a short literature review task. Our qualitative results also highlight people's preference for this more effective exploratory search experience enabled by FeedLens.

### Large-Scale Minimization of the Pseudospectral Abscissa

This work concerns the minimization of the pseudospectral abscissa of a matrix-valued function that depends analytically on parameters. The problem is motivated by robust stability and transient behavior considerations for a linear control system with optimization parameters. We describe a subspace procedure to cope with the setting in which the matrix-valued function is of large size. The proposed procedure solves a sequence of reduced problems obtained by restricting the matrix-valued function to small subspaces whose dimensions increase gradually. It possesses desirable features such as global convergence of the minimal values of the reduced problems to the minimal value of the original problem, and superlinear convergence exhibited by the decay in the errors of the minimizers of the reduced problems. In mathematical terms, the problem we consider is a large-scale nonconvex minimax eigenvalue optimization problem in which the eigenvalue function appears in the constraint of the inner maximization problem. Devising and analyzing a subspace framework for this minimax eigenvalue optimization problem, with the eigenvalue function in the constraint, requires special treatment that makes use of a Lagrangian and dual variables. There are notable advantages in minimizing the pseudospectral abscissa over maximizing the distance to instability or minimizing the $\mathcal{H}_\infty$ norm: the optimized pseudospectral abscissa provides quantitative information about the worst-case transient behavior, and the initial guesses for the parameter values can be arbitrary, unlike the cases of optimizing the distance to instability and the $\mathcal{H}_\infty$ norm, which normally require initial guesses yielding asymptotically stable systems.

### Social Interactions for Autonomous Driving: A Review and Perspective

No human drives a car in a vacuum; drivers must negotiate with other road users to achieve their goals in social traffic scenes. A rational human driver can interact with other road users in a socially compatible way through implicit communication to complete driving tasks smoothly in interaction-intensive, safety-critical environments. This paper aims to review the existing approaches and theories to help understand and rethink the interactions among human drivers toward social autonomous driving. We take this survey to seek answers to a series of fundamental questions: 1) What is social interaction in road traffic scenes? 2) How do we measure and evaluate social interaction? 3) How do we model and reveal the process of social interaction? 4) How do human drivers reach an implicit agreement and negotiate smoothly in social interaction? This paper reviews various approaches to modeling and learning the social interactions between human drivers, ranging from optimization theory and graphical models to social force theory and behavioral & cognitive science. We also highlight some new directions, critical challenges, and open questions for future research.

### An Immersed Weak Galerkin Method for Elliptic Interface Problems on Polygonal Meshes

In this paper we present an immersed weak Galerkin method for solving second-order elliptic interface problems on polygonal meshes, where the meshes do not need to be aligned with the interface. The discrete space consists of constants on each edge and broken linear polynomials satisfying the interface conditions in each element. For triangular meshes, such broken linear polynomials coincide with the basis functions in immersed finite element methods [26]. We establish some approximation properties of the broken linear polynomials and the discrete weak gradient of a certain projection of the solution on polygonal meshes. We then prove an optimal error estimate of our scheme in the discrete $H^1$-seminorm under some assumptions on the exact solution. Numerical experiments are provided to confirm our theoretical analysis.

### Multi-level Contrast Network for Wearables-based Joint Activity Segmentation and Recognition

Human activity recognition (HAR) with wearables is a promising research area that can be widely adopted in many smart healthcare applications. In recent years, deep learning-based HAR models have achieved impressive recognition performance. However, most HAR algorithms are susceptible to the multi-class windows problem, which is essential yet rarely exploited. In this paper, we propose to relieve this challenging problem by introducing segmentation technology into HAR, yielding joint activity segmentation and recognition. Specifically, we introduce the Multi-Stage Temporal Convolutional Network (MS-TCN) architecture for sample-level activity prediction to jointly segment and recognize the activity sequence. Furthermore, to enhance the robustness of HAR against inter-class similarity and intra-class heterogeneity, a multi-level contrastive loss, comprising sample-level and segment-level contrast, is proposed to learn a well-structured embedding space for better activity segmentation and recognition performance. Finally, with comprehensive experiments, we verify the effectiveness of the proposed method on two public HAR datasets, achieving significant improvements on various evaluation metrics.

### pyCANON: A Python library to check the level of anonymity of a dataset

Openly sharing data with sensitive attributes under privacy restrictions is a challenging task. In this document we present the implementation of pyCANON, a Python library and command line interface (CLI) to check and assess the level of anonymity of a dataset according to some of the most common anonymization techniques: k-anonymity, ($\alpha$,k)-anonymity, $\ell$-diversity, entropy $\ell$-diversity, recursive (c,$\ell$)-diversity, basic $\beta$-likeness, enhanced $\beta$-likeness, t-closeness and $\delta$-disclosure privacy. For the case of more than one sensitive attribute, two approaches are proposed for evaluating these techniques. The main strength of this library is that it produces a full report of the parameters that are fulfilled for each of the techniques mentioned above, with the only requirement being the set of quasi-identifiers and that of sensitive attributes. We present the methods implemented together with the attacks they prevent, a description of the library, usage examples of the different functions, as well as the impact and the possible applications that can be developed. Finally, some aspects to be incorporated in future updates are proposed.
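As a concrete illustration of the kind of check such a library automates, the sketch below computes the k of k-anonymity directly: the smallest number of records sharing any combination of quasi-identifier values. This is a minimal stand-alone re-implementation for illustration, not the pyCANON API itself, and the toy records and column names are invented.

```python
from collections import Counter

def k_anonymity(records, quasi_idents):
    """Return the k of k-anonymity: the size of the smallest equivalence
    class induced by the quasi-identifier columns. A dataset is
    k-anonymous if every combination of quasi-identifier values is
    shared by at least k records."""
    groups = Counter(tuple(r[q] for q in quasi_idents) for r in records)
    return min(groups.values())

# Toy generalized dataset (hypothetical columns)
data = [
    {"age": "30-40", "zip": "123**", "disease": "flu"},
    {"age": "30-40", "zip": "123**", "disease": "cold"},
    {"age": "20-30", "zip": "456**", "disease": "flu"},
    {"age": "20-30", "zip": "456**", "disease": "asthma"},
]
print(k_anonymity(data, ["age", "zip"]))  # -> 2
```

A report like the one the library generates would repeat this kind of evaluation for each technique (e.g., checking the diversity of the sensitive column within each equivalence class for $\ell$-diversity).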

### Traffic Analytics Development Kits (TADK): Enable Real-Time AI Inference in Networking Apps

Sophisticated traffic analytics, such as encrypted traffic analytics and unknown malware detection, emphasize the need for advanced methods to analyze network traffic. Traditional methods of using fixed patterns, signature matching, and rules to detect known patterns in network traffic are being replaced with AI (Artificial Intelligence)-driven algorithms. However, the absence of a high-performance, networking-specific AI framework makes deploying real-time AI-based processing within networking workloads impossible. In this paper, we describe the design of Traffic Analytics Development Kits (TADK), an industry-standard framework for AI-based networking workload processing. TADK can provide real-time AI-based networking workload processing in networking equipment from the data center out to the edge without the need for specialized hardware (e.g., GPUs, Neural Processing Units, and so on). We have deployed TADK in commodity WAF and 5G UPF systems, and the evaluation results show that TADK can achieve a throughput of up to 35.3 Gbps per core on traffic feature extraction, 6.5 Gbps per core on traffic classification, and can decrease SQLi/XSS detection latency down to 4.5 µs per request with higher accuracy than fixed-pattern solutions.

### A Graph-Based Modelling of Epidemics: Properties, Simulation, and Continuum Limit

This work is concerned with epidemiological models defined on networks, which highlight the prominent role of the social contact network of a given population in the spread of infectious diseases. In particular, we address the modelling and analysis of very large networks. As a basic epidemiological model, we focus on a SEIR (Susceptible-Exposed-Infective-Removed) model governing the behaviour of an infectious disease in a population of individuals partitioned into sub-populations. We study the long-time behaviour of the dynamics of this model, which accounts for the heterogeneity of the infections and of the social network. By relying on the theory of graphons, we explore the natural question of the large population limit and investigate the behaviour of the model as the size of the network tends to infinity. After establishing the existence and uniqueness of solutions to the models that we consider, we discuss the possibility of using the graphon-based limit model as a generative model for a network with particular statistical properties relating to the distribution of connections. We also provide some preliminary numerical tests.
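As a minimal illustration of an SEIR model on a network of sub-populations, the sketch below couples nodes through an assumed contact matrix and integrates the equations with explicit Euler. It is a toy finite-network discretization with invented parameters, not the graphon-based limit model studied in the paper.

```python
import numpy as np

def seir_network(W, beta, sigma, gamma, S0, E0, I0, R0, dt=0.1, steps=1000):
    """SEIR dynamics on N coupled sub-populations.

    W[i, j] is the contact weight from sub-population j to i; the force
    of infection on node i is beta * sum_j W[i, j] * I[j]."""
    S, E, I, R = (np.array(x, dtype=float) for x in (S0, E0, I0, R0))
    for _ in range(steps):
        lam = beta * (W @ I)          # force of infection per node
        dS = -lam * S
        dE = lam * S - sigma * E
        dI = sigma * E - gamma * I
        dR = gamma * I
        S += dt * dS; E += dt * dE; I += dt * dI; R += dt * dR
    return S, E, I, R

# Two sub-populations on a small symmetric contact graph (toy values)
W = np.array([[0.5, 0.5], [0.5, 0.5]])
S, E, I, R = seir_network(W, beta=0.3, sigma=0.2, gamma=0.1,
                          S0=[0.99, 1.0], E0=[0.0, 0.0],
                          I0=[0.01, 0.0], R0=[0.0, 0.0])
print(np.allclose(S + E + I + R, [1.0, 1.0]))  # mass is conserved per node
```

In the graphon limit discussed above, the finite matrix `W` is replaced by an integral operator against a symmetric kernel, but the compartment structure stays the same.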

### Towards Blockchain-based Trust and Reputation Management for Trustworthy 6G Networks

6G is envisioned to enable futuristic technologies, which exhibit more complexities than the previous generations, as it aims to bring connectivity to a large number of devices, many of which may not be trustworthy. Proper authentication can protect the network from unauthorized adversaries. However, it cannot guarantee in situ reliability and trustworthiness of authorized network nodes, as they can be compromised post-authentication and impede the reliability and resilience of the network. Trust and Reputation Management (TRM) is an effective approach to continuously evaluate the trustworthiness of each participant by collecting and processing evidence of their interactions with other nodes and the infrastructure. In this article, we argue that blockchain-based TRM is critical to build trustworthy 6G networks, where blockchain acts as a decentralized platform for collaboratively managing and processing interaction evidence with the end goal of quantifying trust. We present a case study of resource management in 6G networks, where blockchain-based TRM quantifies and maintains reputation scores by evaluating fulfillment of resource owner's obligations and facilitating resource consumers to provide feedback. We also discuss inherent challenges and future directions for the development of blockchain-based TRM for next-generation 6G networks.

### Reinforcement Learning to Rank with Coarse-grained Labels

Ranking lies at the core of many Information Retrieval (IR) tasks. While existing research on Learning to Rank (LTR) using Deep Neural Networks (DNNs) has achieved great success, it is somewhat limited by its dependence on fine-grained labels. In practice, fine-grained labels are often expensive to acquire (e.g., explicit relevance judgements) or suffer from biases (e.g., click logs). Compared to fine-grained labels, coarse-grained labels are easier and cheaper to collect. Some recent works propose utilizing only coarse-grained labels for LTR tasks. One representative line of work introduces Reinforcement Learning (RL) algorithms, which can help train the LTR model with little reliance on fine-grained labels compared to Supervised Learning. To study the effectiveness of RL-based LTR algorithms on coarse-grained labels, in this paper we implement four different RL paradigms and conduct extensive experiments on two well-established LTR datasets. The results on simulated coarse-grained labeled datasets show that while using coarse-grained labels to train an RL model for LTR tasks cannot yet outperform traditional approaches using fine-grained labels, it still achieves promising results and is potentially helpful for future research in LTR. Our code implementations will be released after this work is accepted.

### Prediction of Seismic Intensity Distributions Using Neural Networks

The ground motion prediction equation is commonly used to predict the seismic intensity distribution. However, it is not easy to apply this method to seismic intensity distributions affected by underground plate structures, which are commonly known as abnormal seismic distributions. This study proposes a hybrid of regression and classification approaches using neural networks. The proposed model treats the distributions as two-dimensional data, like an image. Our method can accurately predict seismic intensity distributions, even abnormal ones.

### Computing Smallest Convex Intersecting Polygons

A polygon C is an intersecting polygon for a set O of objects in the plane if C intersects each object in O, where the polygon includes its interior. We study the problem of computing the minimum-perimeter intersecting polygon and the minimum-area convex intersecting polygon for a given set O of objects. We present an FPTAS for both problems for the case where O is a set of possibly intersecting convex polygons in the plane of total complexity n. Furthermore, we present an exact polynomial-time algorithm for the minimum-perimeter intersecting polygon for the case where O is a set of n possibly intersecting segments in the plane. So far, polynomial-time exact algorithms were only known for the minimum perimeter intersecting polygon of lines or of disjoint segments.

### Inhale: Enabling High-Performance and Energy-Efficient In-SRAM Cryptographic Hash for IoT

In the age of big data, information security has become a major issue of debate, especially with the rise of the Internet of Things (IoT), where attackers can effortlessly obtain physical access to edge devices. The hash algorithm is the current foundation for data integrity and authentication. However, it is challenging to provide a high-performance, high-throughput, and energy-efficient solution on resource-constrained edge devices. In this paper, we propose Inhale, an in-SRAM architecture to effectively compute hash algorithms with innovative data alignment and efficient read/write strategies to implicitly execute data shift operations through the in-situ controller. We present two variations of Inhale: Inhale-Opt, which is optimized for latency, throughput, and area-overhead; and Inhale-Flex, which offers flexibility in repurposing a part of last-level caches for hash computation. We thoroughly evaluate our proposed architectures on both SRAM and ReRAM memories and compare them with the state-of-the-art in-memory and ASIC accelerators. Our performance evaluation confirms that Inhale can achieve 1.4x - 14.5x higher throughput-per-area and about two-orders-of-magnitude higher throughput-per-area-per-energy compared to the state-of-the-art solutions.

### MSE-Based Transceiver Designs for RIS-Aided Communications With Hardware Impairments

It is challenging to precisely configure the phase shifts of the reflecting elements at the reconfigurable intelligent surface (RIS) due to inherent hardware impairments (HIs). In this paper, the mean square error (MSE) performance is investigated in an RIS-aided single-user multiple-input multiple-output (MIMO) communication system with transceiver HIs and RIS phase noise. We aim to jointly optimize the transmit precoder, linear receive equalizer, and RIS reflecting matrices to minimize the MSE. To tackle this problem, an iterative algorithm is proposed, wherein the beamforming matrices are alternately optimized. Specifically, for the beamforming optimization subproblem, we derive closed-form expressions for the optimal precoder and equalizer matrices. Then, for the phase shift optimization subproblem, an efficient algorithm based on the majorization-minimization (MM) method is proposed. Simulation results show that the proposed MSE-based RIS-aided transceiver scheme dramatically outperforms conventional algorithms that do not account for HIs at the transceiver and the RIS.

### Fine-Grained Complexity Lower Bounds for Families of Dynamic Graphs

A dynamic graph algorithm is a data structure that answers queries about a property of the current graph while supporting graph modifications such as edge insertions and deletions. Prior work has shown strong conditional lower bounds for general dynamic graphs, yet graph families that arise in practice often exhibit structural properties that the existing lower bound constructions do not possess. We study three specific graph families that are ubiquitous, namely constant-degree graphs, power-law graphs, and expander graphs, and give the first conditional lower bounds for them. Our results show that even when restricting attention to one of these graph classes, any algorithm for fundamental graph problems, such as distance computation or approximation, or maximum matching, cannot simultaneously achieve sub-polynomial update time and query time. For example, we show that the same lower bounds as for general graphs hold for maximum matching and ($s,t$)-distance in constant-degree graphs, power-law graphs, or expanders. Namely, in an $m$-edge graph, there exists no dynamic algorithm with both $O(m^{1/2 - \epsilon})$ update time and $O(m^{1 -\epsilon})$ query time, for any small $\epsilon > 0$. Note that for ($s,t$)-distance the trivial dynamic algorithm achieves an almost matching upper bound of constant update time and $O(m)$ query time. We prove similar bounds for the other graph families and for other fundamental problems such as densest subgraph detection and perfect matching.

### Machine Learning-Based Test Smell Detection

Context: Test smells are symptoms of sub-optimal design choices adopted when developing test cases. Previous studies have proved their harmfulness for test code maintainability and effectiveness. Therefore, researchers have been proposing automated, heuristic-based techniques to detect them. However, the performance of such detectors is still limited and dependent on thresholds to be tuned. Objective: We propose the design and experimentation of a novel test smell detection approach based on machine learning to detect four test smells. Method: We plan to develop the largest dataset of manually-validated test smells. This dataset will be leveraged to train six machine learners and assess their capabilities in within- and cross-project scenarios. Finally, we plan to compare our approach with state-of-the-art heuristic-based techniques.

### Object Discovery via Contrastive Learning for Weakly Supervised Object Detection

Weakly Supervised Object Detection (WSOD) is a task that detects objects in an image using a model trained only on image-level annotations. Current state-of-the-art models benefit from self-supervised instance-level supervision, but since weak supervision does not include count or location information, the most common "argmax" labeling method often ignores many instances of objects. To alleviate this issue, we propose a novel multiple instance labeling method called object discovery. We further introduce a new contrastive loss under weak supervision, where no instance-level information is available for sampling, called weakly supervised contrastive loss (WSCL). WSCL aims to construct a credible similarity threshold for object discovery by leveraging consistent features of embedding vectors in the same class. As a result, we achieve new state-of-the-art results on MS-COCO 2014 and 2017 as well as PASCAL VOC 2012, and competitive results on PASCAL VOC 2007.

### Order-Invariance of Two-Variable Logic is coNExpTime-complete

We establish coNExpTime-completeness of the problem of deciding order-invariance of a given two-variable first-order formula, improving upon and significantly simplifying the co2NExpTime upper bound of Zeume and Harwath.

### Deletion Robust Non-Monotone Submodular Maximization over Matroids

Maximizing a submodular function is a fundamental task in machine learning and in this paper we study the deletion robust version of the problem under the classic matroids constraint. Here the goal is to extract a small size summary of the dataset that contains a high value independent set even after an adversary deleted some elements. We present constant-factor approximation algorithms, whose space complexity depends on the rank $k$ of the matroid and the number $d$ of deleted elements. In the centralized setting we present a $(4.597+O(\varepsilon))$-approximation algorithm with summary size $O( \frac{k+d}{\varepsilon^2}\log \frac{k}{\varepsilon})$ that is improved to a $(3.582+O(\varepsilon))$-approximation with $O(k + \frac{d}{\varepsilon^2}\log \frac{k}{\varepsilon})$ summary size when the objective is monotone. In the streaming setting we provide a $(9.435 + O(\varepsilon))$-approximation algorithm with summary size and memory $O(k + \frac{d}{\varepsilon^2}\log \frac{k}{\varepsilon})$; the approximation factor is then improved to $(5.582+O(\varepsilon))$ in the monotone case.

### HVS-Inspired Signal Degradation Network for Just Noticeable Difference Estimation

Significant improvement has been made in just noticeable difference (JND) modelling due to the development of deep neural networks, especially for the recently developed unsupervised JND generation models. However, they have a major drawback: the generated JND is assessed in the real-world signal domain instead of in the perceptual domain in the human brain. There is an obvious difference between JND assessed in these two domains, since the visual signal in the real world is encoded by the human visual system (HVS) before it is delivered to the brain. Hence, we propose an HVS-inspired signal degradation network for JND estimation. To achieve this, we carefully analyze the HVS perceptual process in JND subjective viewing to obtain relevant insights, and then design an HVS-inspired signal degradation (HVS-SD) network to represent the signal degradation in the HVS. On the one hand, the well-learned HVS-SD enables us to assess the JND in the perceptual domain. On the other hand, it provides more accurate prior information for better guiding JND generation. Additionally, considering the requirement that reasonable JND should not lead to visual attention shifting, a visual attention loss is proposed to control JND generation. Experimental results demonstrate that the proposed method achieves SOTA performance in accurately estimating the redundancy of the HVS. Source code will be available at https://github.com/jianjin008/HVS-SD-JND.

### Neural network fragile watermarking with no model performance degradation

Deep neural networks are vulnerable to malicious fine-tuning attacks such as data poisoning and backdoor attacks. Recent research has therefore proposed methods to detect malicious fine-tuning of neural network models. However, these methods usually degrade the performance of the protected model. Thus, we propose a novel neural network fragile watermarking scheme with no model performance degradation. In the watermarking process, we train a generative model with a specific loss function and a secret key to generate triggers that are sensitive to fine-tuning of the target classifier. In the verification process, we use the watermarked classifier to obtain a label for each fragile trigger. Malicious fine-tuning can then be detected by comparing the secret keys and the labels. Experiments on classic datasets and classifiers show that the proposed method can effectively detect malicious fine-tuning of the model with no model performance degradation.

### Making Reinforcement Learning Work on Swimmer

The SWIMMER environment is a standard benchmark in reinforcement learning (RL). In particular, it is often used in papers comparing or combining RL methods with direct policy search methods such as genetic algorithms or evolution strategies. Many of these papers report poor performance on SWIMMER from RL methods and much better performance from direct policy search methods. In this technical report, we show that the low performance of RL methods on SWIMMER simply comes from the inadequate tuning of an important hyper-parameter, and that, by setting this hyper-parameter to a correct value, the issue can be very easily fixed.

### Human-to-Robot Manipulability Domain Adaptation with Parallel Transport and Manifold-Aware ICP

Manipulability ellipsoids efficiently capture the human pose and reveal information about the task at hand. Their use in task-dependent robot teaching - particularly their transfer from a teacher to a learner - can advance emulation of human-like motion. Although recent literature has shifted focus towards manipulability transfer between two robots, adaptation to the capabilities of the other kinematic system has to date not been addressed, and research on transfer from human to robot is still in its infancy. This work presents a novel manipulability domain adaptation method for the transfer of manipulability information to the domain of another kinematic system. As manipulability matrices/ellipsoids are symmetric positive-definite (SPD), they can be viewed as points on the Riemannian manifold of SPD matrices. We are the first to address the problem of manipulability transfer from the perspective of point cloud registration. We propose a manifold-aware Iterative Closest Point (ICP) algorithm with parallel transport initialization. Furthermore, we introduce a correspondence matching heuristic for manipulability ellipsoids based on inherent geometric features. We validate our method in simulation experiments with 2-DoF manipulators as well as 7-DoF models representing the human-arm kinematics.

### Efficient Multimodal Transformer with Dual-Level Feature Restoration for Robust Multimodal Sentiment Analysis

With the proliferation of user-generated online videos, Multimodal Sentiment Analysis (MSA) has attracted increasing attention recently. Despite significant progress, there are still two major challenges on the way towards robust MSA: 1) inefficiency when modeling cross-modal interactions in unaligned multimodal data; and 2) vulnerability to random modality feature missing which typically occurs in realistic settings. In this paper, we propose a generic and unified framework to address them, named Efficient Multimodal Transformer with Dual-Level Feature Restoration (EMT-DLFR). Concretely, EMT employs utterance-level representations from each modality as the global multimodal context to interact with local unimodal features and mutually promote each other. It not only avoids the quadratic scaling cost of previous local-local cross-modal interaction methods but also leads to better performance. To improve model robustness in the incomplete modality setting, on the one hand, DLFR performs low-level feature reconstruction to implicitly encourage the model to learn semantic information from incomplete data. On the other hand, it innovatively regards complete and incomplete data as two different views of one sample and utilizes siamese representation learning to explicitly attract their high-level representations. Comprehensive experiments on three popular datasets demonstrate that our method achieves superior performance in both complete and incomplete modality settings.

### Uncertainty-Guided Source-Free Domain Adaptation

Source-free domain adaptation (SFDA) aims to adapt a classifier to an unlabelled target data set by only using a pre-trained source model. However, the absence of the source data and the domain shift makes the predictions on the target data unreliable. We propose quantifying the uncertainty in the source model predictions and utilizing it to guide the target adaptation. For this, we construct a probabilistic source model by incorporating priors on the network parameters inducing a distribution over the model predictions. Uncertainties are estimated by employing a Laplace approximation and incorporated to identify target data points that do not lie in the source manifold and to down-weight them when maximizing the mutual information on the target data. Unlike recent works, our probabilistic treatment is computationally lightweight, decouples source training and target adaptation, and requires no specialized source training or changes of the model architecture. We show the advantages of uncertainty-guided SFDA over traditional SFDA in the closed-set and open-set settings and provide empirical evidence that our approach is more robust to strong domain shifts even without tuning.

### Multi-Point Integrated Sensing and Communication: Fusion Model and Functionality Selection

Integrated sensing and communication (ISAC) represents a paradigm shift, where previously competing wireless transmissions are jointly designed to operate in harmony via the shared use of the hardware platform for improving the spectral, energy, and hardware efficiencies. However, due to adversarial factors such as fading and blockages, ISAC without fusion may suffer from high sensing uncertainties. This paper presents a multi-point ISAC (MPISAC) system that fuses the outputs from multiple ISAC devices for achieving higher sensing performance by exploiting multi-radar data redundancy. Furthermore, we propose to effectively explore the performance trade-off between sensing and communication via a functionality selection module that adaptively determines the working state (i.e., sensing or communication) of an ISAC device. The crux of our approach is to adopt a fusion model that predicts the fusion accuracy via hypothesis testing and optimal voting analysis. Simulation results demonstrate the superiority of MPISAC over various benchmark schemes and show that the proposed approach can effectively span the trade-off region in ISAC systems.

### The Low Emission Oil&Gas Open (LEOGO) Reference Platform of an Off-Grid Energy System for Renewable Integration Studies

This article introduces and describes the integrated energy system of the Low Emission Oil&Gas Open (LEOGO) reference platform. It is a hypothetical case meant to represent a typical oil&gas installation in the North Sea. The aim of this detailed specification is to serve as an open reference case where all information can be publicly shared, facilitating benchmarking and collaboration. The relevance of this reference case of an off-grid energy system is not limited to the oil&gas industry, since it can also be seen as a special kind of electrical microgrid. The remote offshore location makes it especially relevant for studying offshore wind power and ocean energy sources like wave power. The specification emphasizes the energy system and electrical configuration, but also includes a basic description of the oil field and processing system. The intention is that it will serve as a basis for energy system studies and related power system stability analyses regarding the integration of renewable energy sources. This allows for comparisons of a base case with different design modifications, new operational planning methods, power management strategies, and control concepts. Examples of possible modifications are the replacement of gas turbines by wind turbines, the addition of energy storage systems, a more variable operation of loads, etc. The last part of the article demonstrates the behaviour of the reference platform implemented in two software tools: one for operational planning and one for dynamic power system analysis.

### The Moment Passing Method for Wireless Channel Capacity Estimation

Wireless network capacity can be regarded as the most important performance metric for wireless communication systems. With the fast development of wireless communication technology, future wireless systems will become more and more complicated. As a result, the channel gain matrix will become a large-dimensional random matrix, leading to an extremely high computational cost to obtain the capacity. In this paper, we propose a moment passing method (MPM) to realize fast and accurate capacity estimation for future ultra-dense wireless systems. It can determine the capacity with quadratic complexity, which is optimal considering that the cost of a single matrix operation is not less than quadratic. Moreover, it has high accuracy: the simulation results show that the estimation error of this method is below 2 percent. Finally, our method is highly general, as it is independent of the distribution of base stations (BSs) and users, and of the shape of the network area. More importantly, it can be applied not only to conventional multi-user multiple-input multiple-output (MU-MIMO) networks, but also to the capacity-centric networks designed for B5G/6G.

### Manual-Guided Dialogue for Flexible Conversational Agents

How to build and use dialogue data efficiently, and how to deploy models across different domains at scale, are two critical issues in building a task-oriented dialogue system. In this paper, we propose a novel manual-guided dialogue scheme to alleviate these problems, in which the agent learns its tasks from both dialogues and manuals. The manual is an unstructured textual document that guides the agent in interacting with users and the database during the conversation. Our proposed scheme reduces the dependence of dialogue models on fine-grained domain ontology and makes them more flexible in adapting to various domains. We then contribute a fully-annotated multi-domain dataset, MagDial, to support our scheme. It introduces three dialogue modeling subtasks: instruction matching, argument filling, and response generation. Modeling these subtasks is consistent with the human agent's behavior patterns. Experiments demonstrate that the manual-guided dialogue scheme improves data efficiency and domain scalability in building dialogue systems. The dataset and benchmark will be publicly available to promote future research.

### The Parareal Algorithm Applied to the FESOM 2 Ocean Circulation Model

In this work the parallel-in-time algorithm Parareal was applied to the ocean-circulation and sea-ice model FESOM2 developed by the Alfred Wegener Institute (AWI). The climate model provides a single time-integration method; hence, the coarse and fine propagators were distinguished by time-step width. The coarse method was executed at the CFL condition limit, while the fine step sizes were gradually refined. As a first assessment of the performance of Parareal, a low-resolution test mesh was used with the default settings provided by the AWI. An introduction to FESOM2 and the straightforward implementation of Parareal on the DKRZ cluster is given. The evaluation of numerical results for different simulation intervals and fine propagator configurations shows a strong dependence on the simulation time and the time-step size of the fine propagator. Increasing the latter leads to stagnation and eventually divergence of the Parareal algorithm.
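The Parareal iteration described above can be sketched on a scalar test equation. The sketch below assumes explicit Euler for both propagators, with the fine one simply using a smaller step, mirroring the single-integrator setting of the report; the test problem and all parameters are illustrative, not the FESOM2 configuration.

```python
import numpy as np

def parareal(f, y0, t0, t1, n_slices, coarse_steps, fine_steps, k_iter):
    """Parareal for y' = f(t, y): U_{i+1}^{k+1} = G(U_i^{k+1}) + F(U_i^k) - G(U_i^k),
    with explicit Euler as both coarse (G) and fine (F) propagators."""
    def euler(y, ta, tb, n):
        h = (tb - ta) / n
        t = ta
        for _ in range(n):
            y = y + h * f(t, y)
            t += h
        return y

    T = np.linspace(t0, t1, n_slices + 1)
    U = [y0]                                 # initial coarse sweep
    for i in range(n_slices):
        U.append(euler(U[i], T[i], T[i + 1], coarse_steps))
    for _ in range(k_iter):
        G_old = [euler(U[i], T[i], T[i + 1], coarse_steps) for i in range(n_slices)]
        # The fine sweeps are independent -> run in parallel in practice
        F = [euler(U[i], T[i], T[i + 1], fine_steps) for i in range(n_slices)]
        U_new = [y0]
        for i in range(n_slices):            # serial correction sweep
            G_new = euler(U_new[i], T[i], T[i + 1], coarse_steps)
            U_new.append(G_new + F[i] - G_old[i])
        U = U_new
    return U

# Dahlquist test problem y' = -y with exact solution exp(-t)
U = parareal(lambda t, y: -y, 1.0, 0.0, 2.0, n_slices=10,
             coarse_steps=1, fine_steps=100, k_iter=5)
print(abs(U[-1] - np.exp(-2.0)))  # small: close to the fine serial solution
```

After k iterations the solution on the first k slices matches the serial fine solver exactly, which is why convergence in few iterations is the figure of merit discussed in the report.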

### Mixed-dimensional linked models of diffusion: mathematical analysis and stable finite element discretizations

In the context of mathematical modeling, it is sometimes convenient to employ models of different dimensionality simultaneously, even for a single physical phenomenon. However, this type of combination might entail difficulties even when individual models are well-understood, particularly in relation to well-posedness. In this article, we focus on combining two diffusive models, one defined over a continuum and the other one over a curve. The resulting problem is of mixed-dimensionality, where here the low-dimensional problem is embedded within the high-dimensional one. We show unconditional stability and convergence of the continuous and discrete linked problems discretized by mixed finite elements. The theoretical results are highlighted with numerical examples illustrating the effects of linking diffusive models. As a side result, we show that the methods introduced in this article can be used to infer the solution of diffusive problems with incomplete data. The findings of this article are of particular interest to engineers and applied mathematicians and open an avenue for further research connected to the field of data science.

### An Optimal Transport Approach to the Computation of the LM Rate

Mismatch capacity characterizes the highest information rate for a channel under a prescribed decoding metric, and is thus a highly relevant fundamental performance metric when dealing with many practically important communication scenarios. Compared with the frequently used generalized mutual information (GMI), the LM rate has been known as a tighter lower bound of the mismatch capacity. The computation of the LM rate, however, has been a difficult task, due to the fact that the LM rate involves a maximization over a function of the channel input, which becomes challenging as the input alphabet size grows, and direct numerical methods (e.g., interior point methods) suffer from intensive memory and computational resource requirements. Noting that the computation of the LM rate can also be formulated as an entropy-based optimization problem with constraints, in this work, we transform the task into an optimal transport (OT) problem with an extra constraint. This allows us to efficiently and accurately accomplish our task by using the well-known Sinkhorn algorithm. Indeed, only a few iterations are required for convergence, due to the fact that the formulated problem does not contain additional regularization terms. Moreover, we convert the extra constraint into a root-finding procedure for a one-dimensional monotonic function. Numerical experiments demonstrate the feasibility and efficiency of our OT approach to the computation of the LM rate.
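For readers unfamiliar with the Sinkhorn algorithm invoked above, a generic entropic-OT version, without the paper's extra constraint or root-finding step, can be sketched as alternating scalings that match the row and column marginals:

```python
import numpy as np

def sinkhorn(C, r, c, eps=0.1, n_iter=1000):
    """Generic entropic-OT Sinkhorn iterations (not the paper's exact variant).
    C: cost matrix; r, c: target row/column marginals; eps: regularization."""
    K = np.exp(-C / eps)          # Gibbs kernel
    u = np.ones_like(r)
    for _ in range(n_iter):
        v = c / (K.T @ u)         # scale to match column marginals
        u = r / (K @ v)           # scale to match row marginals
    return u[:, None] * K * v[None, :]   # transport plan

rng = np.random.default_rng(0)
C = rng.random((4, 5))            # toy cost matrix
r = np.full(4, 1 / 4)
c = np.full(5, 1 / 5)
P = sinkhorn(C, r, c)
```

The resulting plan `P` is nonnegative with the prescribed marginals; in the paper's formulation, the absence of additional regularization terms is what makes only a few such iterations necessary.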

### Achieve Fully Decentralized End to End Encryption Meeting via Blockchain

Zoom Meeting is an enterprise online video conferencing solution with real-time messaging and content sharing. However, it's lack of privacy protection since centralized Zoom servers are capable of monitoring user's messages. Thereby, to solve the privacy problem, in May 2020, Zoom acquired Keybase so that Keybase's team can help it to build end-to-end encryption meeting while remaining Zoom's current scalability and high-performance. Nonetheless, according to the latest released Zoom's whitepaper, even with the new design of E2E (end to end) encryption meeting, the security threats can't be erased completely since the new design is not fully decentralized. In this paper, we introduce a fully decentralized design of E2E encryption meeting via blockchain technology. With this new design, Zoom's E2E meeting privacy can be further improved.

### Does lossy image compression affect racial bias within face recognition?

Yes - This study investigates the impact of commonplace lossy image compression on face recognition algorithms with regard to the racial characteristics of the subject. We adopt a recently proposed racial phenotype-based bias analysis methodology to measure the effect of varying levels of lossy compression across racial phenotype categories. Additionally, we determine the relationship between chroma subsampling and race-related phenotypes for recognition performance. Prior work investigates the impact of the lossy JPEG compression algorithm on contemporary face recognition performance, but there is a gap in how this impact varies across different race-related intersectional groups and in what causes it. Via an extensive experimental setup, we demonstrate that common lossy image compression approaches have a more pronounced negative impact on facial recognition performance for specific racial phenotype categories such as darker skin tones (by up to 34.55\%). Furthermore, removing chroma subsampling during compression improves the false matching rate (by up to 15.95\%) across all phenotype categories affected by the compression, including darker skin tones, wide noses, big lips, and monolid eye categories. In addition, we outline the characteristics that may be attributable as the underlying cause of this phenomenon for lossy compression algorithms such as JPEG.
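The chroma-subsampling step under study can be illustrated with a simplified stand-in for what a JPEG codec does: a BT.601 RGB-to-YCbCr transform followed by 4:2:0 averaging of the chroma channels, which is where color information (and with it, skin-tone detail) is discarded:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Full-range BT.601 RGB -> YCbCr transform used by baseline JPEG."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def subsample_420(chan):
    """4:2:0 chroma subsampling: average each 2x2 block into one sample."""
    h, w = chan.shape
    return chan.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rgb = np.random.default_rng(1).integers(0, 256, (8, 8, 3)).astype(float)
y, cb, cr = rgb_to_ycbcr(rgb)
cb_sub = subsample_420(cb)   # chroma stored at quarter resolution; luma untouched
```

Disabling this step (4:4:4 mode) keeps the chroma channels at full resolution, which corresponds to the "removing chroma-subsampling" condition evaluated in the study.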

### KRACL: Contrastive Learning with Graph Context Modeling for Sparse Knowledge Graph Completion

Knowledge Graph Embeddings (KGE) aim to map entities and relations to low-dimensional spaces and have become the \textit{de-facto} standard for knowledge graph completion. Most existing KGE methods suffer from the sparsity challenge, where it is harder to predict entities that appear less frequently in knowledge graphs. In this work, we propose a novel framework KRACL to alleviate the widespread sparsity in KGs with graph context and contrastive learning. Firstly, we propose the Knowledge Relational Attention Network (KRAT) to leverage the graph context by simultaneously projecting neighboring triples to different latent spaces and jointly aggregating messages with the attention mechanism. KRAT is capable of capturing the subtle semantic information and importance of different context triples as well as leveraging multi-hop information in knowledge graphs. Secondly, we propose the knowledge contrastive loss by combining the contrastive loss with cross entropy loss, which introduces more negative samples and thus enriches the feedback to sparse entities. Our experiments demonstrate that KRACL achieves superior results across various standard knowledge graph benchmarks, especially on WN18RR and NELL-995 which have large numbers of low in-degree entities. Extensive experiments also bear out KRACL's effectiveness in handling sparse knowledge graphs and robustness against noisy triples.

### Don't Reinvent the Wheel: Towards Automatic Replacement of Custom Implementations with APIs

Reusing code is a common practice in software development: it helps developers speed up the implementation task while also reducing the chances of introducing bugs, under the assumption that the reused code has been tested, possibly in production. Despite these benefits, opportunities for reuse are not always in plain sight and, thus, developers may miss them. We present our preliminary steps in building RETIWA, a recommender able to automatically identify custom implementations in a given project that are good candidates to be replaced by open source APIs. RETIWA relies on a ``knowledge base'' consisting of real examples of custom implementation-to-API replacements. In this work, we present the mining strategy we tailored to automatically and reliably extract replacements of custom implementations with APIs from open source projects. This is the first step towards building the envisioned recommender.

### Algorithmic Assistance with Recommendation-Dependent Preferences

When we use algorithms to produce recommendations, we typically think of these recommendations as providing helpful information, such as when risk assessments are presented to judges or doctors. But when a decision-maker obtains a recommendation, they may not only react to the information. The decision-maker may view the recommendation as a default action, making it costly for them to deviate, for example when a judge is reluctant to overrule a high-risk assessment of a defendant or a doctor fears the consequences of deviating from recommended procedures. In this article, we consider the effect and design of recommendations when they affect choices not just by shifting beliefs, but also by altering preferences. We motivate our model from institutional factors, such as a desire to avoid audits, as well as from well-established models in behavioral science that predict loss aversion relative to a reference point, which here is set by the algorithm. We show that recommendation-dependent preferences create inefficiencies where the decision-maker is overly responsive to the recommendation, which changes the optimal design of the algorithm towards providing less conservative recommendations. As a potential remedy, we discuss an algorithm that strategically withholds recommendations, and show how it can improve the quality of final decisions.

### FALCON: Sound and Complete Neural Semantic Entailment over ALC Ontologies

Many ontologies, i.e., Description Logic (DL) knowledge bases, have been developed to provide rich knowledge about various domains, and a lot of them are based on ALC, i.e., a prototypical and expressive DL, or its extensions. The main task that explores ALC ontologies is to compute semantic entailment. Symbolic approaches can guarantee sound and complete semantic entailment but are sensitive to inconsistency and missing information. To this end, we propose FALCON, a Fuzzy ALC Ontology Neural reasoner. FALCON uses fuzzy logic operators to generate single model structures for arbitrary ALC ontologies, and uses multiple model structures to compute semantic entailments. Theoretical results demonstrate that FALCON is guaranteed to be a sound and complete algorithm for computing semantic entailments over ALC ontologies. Experimental results show that FALCON enables not only approximate reasoning (reasoning over incomplete ontologies) and paraconsistent reasoning (reasoning over inconsistent ontologies), but also improves machine learning in the biomedical domain by incorporating background knowledge from ALC ontologies.

### Online Learning for Non-monotone Submodular Maximization: From Full Information to Bandit Feedback

In this paper, we revisit the online non-monotone continuous DR-submodular maximization problem over a down-closed convex set, which finds wide real-world applications in the domains of machine learning, economics, and operations research. First, we present the Meta-MFW algorithm achieving a $1/e$-regret of $O(\sqrt{T})$ at the cost of $T^{3/2}$ stochastic gradient evaluations per round. As far as we know, Meta-MFW is the first algorithm to obtain a $1/e$-regret of $O(\sqrt{T})$ for the online non-monotone continuous DR-submodular maximization problem over a down-closed convex set. Furthermore, in sharp contrast with the ODC algorithm \citep{thang2021online}, Meta-MFW relies on a simple online linear oracle without discretization, lifting, or rounding operations. Considering practical restrictions, we then propose the Mono-MFW algorithm, which reduces the per-function stochastic gradient evaluations from $T^{3/2}$ to 1 and achieves a $1/e$-regret bound of $O(T^{4/5})$. Next, we extend Mono-MFW to the bandit setting and propose the Bandit-MFW algorithm, which attains a $1/e$-regret bound of $O(T^{8/9})$. To the best of our knowledge, Mono-MFW and Bandit-MFW are the first sublinear-regret algorithms to explore the one-shot and bandit settings, respectively, for the online non-monotone continuous DR-submodular maximization problem over a down-closed convex set. Finally, we conduct numerical experiments on both synthetic and real-world datasets to verify the effectiveness of our methods.

### A New Scheme for Image Compression and Encryption Using ECIES, Henon Map, and AEGAN

Providing security in the transmission of images and other multimedia data has become one of the most important scientific and practical issues. In this paper, a method for compressing and encrypting images is proposed, which can safely transmit images over low-bandwidth data transmission channels. First, using the autoencoding generative adversarial network (AEGAN) model, the images are mapped to a low-dimensional vector in the latent space. In the next step, the obtained vector is encrypted using public-key encryption methods. In the proposed method, the Henon chaotic map is used for permutation, which makes the information transfer more secure. To evaluate the results of the proposed scheme, three criteria are used: SSIM, PSNR, and execution time.
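A common way to turn the Henon chaotic map into a permutation, as used for the scrambling step above, is to sort a chaotic trajectory by value. The sketch below uses the classical parameters $a=1.4$, $b=0.3$; the paper's exact key schedule may differ:

```python
import numpy as np

def henon_permutation(n, x0=0.0, y0=0.0, a=1.4, b=0.3, burn_in=100):
    """Derive an index permutation from a Henon-map trajectory.
    Illustrative key schedule; (x0, y0) plays the role of the secret key."""
    xs = np.empty(n)
    x, y = x0, y0
    for _ in range(burn_in):              # discard the transient
        x, y = 1 - a * x * x + y, b * x
    for i in range(n):
        x, y = 1 - a * x * x + y, b * x   # Henon iteration
        xs[i] = x
    return np.argsort(xs)                 # chaotic sequence -> permutation

data = np.arange(16)                      # stand-in for latent-vector entries
perm = henon_permutation(16)
scrambled = data[perm]
inv = np.argsort(perm)                    # inverse permutation for decryption
restored = scrambled[inv]
```

Because the map is sensitive to initial conditions, a receiver needs the same `(x0, y0)` to reproduce the permutation and invert the scrambling.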

### Mask and Reason: Pre-Training Knowledge Graph Transformers for Complex Logical Queries

Knowledge graph (KG) embeddings have been a mainstream approach for reasoning over incomplete KGs. However, limited by their inherently shallow and static architectures, they can hardly deal with the rising focus on complex logical queries, which comprise logical operators, imputed edges, multiple source entities, and unknown intermediate entities. In this work, we present the Knowledge Graph Transformer (kgTransformer) with masked pre-training and fine-tuning strategies. We design a KG triple transformation method to enable Transformer to handle KGs, which is further strengthened by the Mixture-of-Experts (MoE) sparse activation. We then formulate the complex logical queries as masked prediction and introduce a two-stage masked pre-training strategy to improve transferability and generalizability. Extensive experiments on two benchmarks demonstrate that kgTransformer can consistently outperform both KG embedding-based baselines and advanced encoders on nine in-domain and out-of-domain reasoning tasks. Additionally, kgTransformer can reason with explainability via providing the full reasoning paths to interpret given answers.

### Educating Reflective Systems Developers at Scale: Towards productive feedback in a semi-capstone large-scale software engineering course

Feedback is critical in education. This Innovative Practice Full Paper reports lessons learned from improving the quality of feedback in a semi-capstone software engineering course, with a particular focus on how to deliver productive feedback at scale during project work. The bachelor-level introduction to software engineering course is taken by about 500 students from eight study programs, organised into 72 project teams. The course aims to educate reflective systems developers. The teaching staff includes 29 teaching assistants acting as supervisors and product owners for the teams. Project teams get feedback on seven deliverables as part of a formative portfolio assessment. Students expressed frustration that feedback was not aligned, that they received critique on topics not stated in the assignments, and that teaching assistants were reluctant to discuss the feedback. This article provides a description of the course design, an assessment of the quality of feedback, and lessons learned from three main changes: revising assignments and rubrics, reorganising the teaching staff, and increasing the training of teaching assistants. In discussing the changes, we draw on a survey of students with 142 respondents, a survey of teaching assistants with 18 respondents, meeting minutes from a student reference group, and experience reports from teaching assistants, as well as the literature and our own experience. The article concludes with three actionable lessons learned for large-scale semi-capstone courses.

### A Review of the Convergence of 5G/6G Architecture and Deep Learning

The convergence of 5G architecture and deep learning has attracted much research interest in both the fields of wireless communication and artificial intelligence. This is because deep learning has been identified as a potential driver of the technologies that make up the 5G architecture. Hence, there have been extensive surveys on the convergence of 5G architecture and deep learning. However, most of the existing survey papers focus mainly on how deep learning can converge with a specific 5G technology, and thus do not cover the full spectrum of the 5G architecture. Although there is a recent survey paper that appears to be robust, a review of that paper shows that it is not structured to specifically cover the convergence of deep learning and the 5G technologies. Hence, this paper provides a robust overview of the convergence of the key 5G technologies and deep learning. The challenges faced by such convergence are discussed. In addition, a brief overview of the future 6G architecture, and how it can converge with deep learning, is also given.

### Asynchronous Functional Sessions: Cyclic and Concurrent (Extended Version)

We present Concurrent GV (CGV), a functional calculus with message-passing concurrency governed by session types. With respect to prior calculi, CGV has increased support for concurrent evaluation and for cyclic network topologies. The design of CGV draws on APCP, a session-typed asynchronous pi-calculus developed in prior work. Technical contributions are (i) the syntax, semantics, and type system of CGV; (ii) a correct translation of CGV into APCP; (iii) a technique for establishing deadlock-free CGV programs, by resorting to APCP's priority-based type system.

### American cultural regions mapped through the lexical analysis of social media

Cultural areas represent a useful concept that cross-fertilizes diverse fields in the social sciences. Knowledge of how humans organize and relate their ideas and behavior within a society helps to understand their actions and attitudes towards different issues. However, the selection of common traits that shape a cultural area is somewhat arbitrary. What is needed is a method that can leverage the massive amounts of data coming online, especially through social media, to identify cultural regions without ad-hoc assumptions, biases, or prejudices. In this work, we take a crucial step in this direction by introducing a method to infer cultural regions based on the automatic analysis of large datasets of microblogging posts. Our approach is based on the principle that cultural affiliation can be inferred from the topics that people discuss among themselves. Specifically, we measure regional variations in written discourse generated in American social media. From the frequency distributions of content words in geotagged tweets, we identify regional hotspots of word usage, and from there derive the principal components of regional variation. Through a hierarchical clustering of the data in this lower-dimensional space, our method yields clear cultural areas and the topics of discussion that define them. We obtain a manifest North-South separation, which is primarily influenced by the African American culture, and further contiguous (East-West) and non-contiguous (urban-rural) divisions that provide a comprehensive picture of today's cultural areas in the US.

### The Weakest Failure Detector for Genuine Atomic Multicast (Extended Version)

Atomic broadcast is a group communication primitive to order messages across a set of distributed processes. Atomic multicast is its natural generalization where each message $m$ is addressed to $dst(m)$, a subset of the processes called its destination group. A solution to atomic multicast is genuine when a process takes steps only if a message is addressed to it. Genuine solutions are the ones used in practice because they have better performance. Let $G$ be all the destination groups and $F$ be the cyclic families in it, that is, the subsets of $G$ whose intersection graph is Hamiltonian. This paper establishes that the weakest failure detector to solve genuine atomic multicast is $\mu=(\wedge_{g,h \in G}~\Sigma_{g \cap h}) \wedge (\wedge_{g \in G}~\Omega_g) \wedge \gamma$, where (i) $\Sigma_P$ and $\Omega_P$ are the quorum and leader failure detectors restricted to the processes in $P$, and (ii) $\gamma$ is a new failure detector that informs the processes in a cyclic family $f \in F$ when $f$ is faulty. We also study two classical variations of atomic multicast. The first variation requires that message delivery follow the real-time order. In this case, $\mu$ must be strengthened with $1^{g \cap h}$, the indicator failure detector that informs each process in $g \cup h$ when $g \cap h$ is faulty. The second variation requires a message to be delivered when the destination group runs in isolation. We prove that its weakest failure detector is at least $\mu \wedge (\wedge_{g, h \in G}~\Omega_{g \cap h})$. This value is attained when $F=\varnothing$.

### CorpusBrain: Pre-train a Generative Retrieval Model for Knowledge-Intensive Language Tasks

Knowledge-intensive language tasks (KILT) usually require a large body of information to provide correct answers. A popular paradigm for solving this problem is to combine a search system with a machine reader, where the former retrieves supporting evidence and the latter examines it to produce answers. Recently, the reader component has witnessed significant advances with the help of large-scale pre-trained generative models. Meanwhile, most existing solutions in the search component rely on the traditional ``index-retrieve-then-rank'' pipeline, which suffers from a large memory footprint and difficulty in end-to-end optimization. Inspired by recent efforts in constructing model-based IR models, we propose to replace the traditional multi-step search pipeline with a novel single-step generative model, which can dramatically simplify the search process and be optimized in an end-to-end manner. We show that a strong generative retrieval model can be learned with a set of adequately designed pre-training tasks, and be adopted to improve a variety of downstream KILT tasks with further fine-tuning. We name the pre-trained generative retrieval model CorpusBrain, as all information about the corpus is encoded in its parameters without the need to construct an additional index. Empirical results show that CorpusBrain can significantly outperform strong baselines for the retrieval task on the KILT benchmark and establish new state-of-the-art downstream performances. We also show that CorpusBrain works well under zero- and low-resource settings.

### Matching Multiple Perspectives for Efficient Representation Learning

Representation learning approaches typically rely on images of objects captured from a single perspective that are transformed using affine transformations. Additionally, self-supervised learning, a successful paradigm of representation learning, relies on instance discrimination and self-augmentations which cannot always bridge the gap between observations of the same object viewed from a different perspective. Viewing an object from multiple perspectives aids holistic understanding of an object which is particularly important in situations where data annotations are limited. In this paper, we present an approach that combines self-supervised learning with a multi-perspective matching technique and demonstrate its effectiveness on learning higher quality representations on data captured by a robotic vacuum with an embedded camera. We show that the availability of multiple views of the same object combined with a variety of self-supervised pretraining algorithms can lead to improved object classification performance without extra labels.

### DRAGON: Decentralized Fault Tolerance in Edge Federations

Edge Federation is a new computing paradigm that seamlessly interconnects the resources of multiple edge service providers. A key challenge in such systems is the deployment of latency-critical and AI-based resource-intensive applications on constrained devices. To address this challenge, we propose a novel memory-efficient deep learning based model, namely generative optimization networks (GON). Unlike GANs, GONs use a single network to both discriminate input and generate samples, significantly reducing their memory footprint. Leveraging the low memory footprint of GONs, we propose a decentralized fault-tolerance method called DRAGON that runs simulations (as per a digital modeling twin) to quickly predict and optimize the performance of the edge federation. Extensive experiments with real-world edge computing benchmarks on multiple Raspberry-Pi based federated edge configurations show that DRAGON can outperform the baseline methods in fault-detection and Quality of Service (QoS) metrics. Specifically, the proposed method gives higher F1 scores for fault-detection than the best deep learning (DL) method, while consuming lower memory than the heuristic methods. This allows for improvements in energy consumption, response time and service level agreement violations by up to 74, 63 and 82 percent, respectively.

### Representation Learning on Graphs to Identifying Circular Trading in Goods and Services Tax

Circular trading is a form of tax evasion in Goods and Services Tax where a group of fraudulent taxpayers (traders) aims to mask illegal transactions by superimposing several fictitious transactions (where no value is added to the goods or service) among themselves in a short period. Due to the vast database of taxpayers, it is infeasible for authorities to manually identify groups of circular traders and the illegitimate transactions they are involved in. This work uses big data analytics and graph representation learning techniques to propose a framework to identify communities of circular traders and isolate the illegitimate transactions in the respective communities. Our approach is tested on real-life data provided by the Department of Commercial Taxes, Government of Telangana, India, where we uncovered several communities of circular traders.
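While the paper relies on big data analytics and graph representation learning, the basic structure being hunted, a directed cycle of transactions among traders, can be illustrated with a plain depth-first search. Trader names and transactions below are hypothetical:

```python
from collections import defaultdict

def find_cycle(edges):
    """DFS-based detection of one directed cycle in a transaction graph.
    edges: list of (seller, buyer) pairs. Returns the cycle's nodes or None."""
    graph = defaultdict(list)
    for src, dst in edges:
        graph[src].append(dst)
    WHITE, GRAY, BLACK = 0, 1, 2      # unvisited / on current path / done
    color = defaultdict(int)
    stack = []

    def dfs(u):
        color[u] = GRAY
        stack.append(u)
        for v in graph[u]:
            if color[v] == GRAY:      # back edge -> cycle found
                return stack[stack.index(v):]
            if color[v] == WHITE:
                found = dfs(v)
                if found:
                    return found
        color[u] = BLACK
        stack.pop()
        return None

    for node in list(graph):
        if color[node] == WHITE:
            cycle = dfs(node)
            if cycle:
                return cycle
    return None

# Hypothetical transactions: A -> B -> C -> A form a circular-trading ring.
txns = [("A", "B"), ("B", "C"), ("C", "A"), ("C", "D")]
ring = find_cycle(txns)
```

Real tax data adds timestamps, transaction values, and scale, which is why the paper turns to community detection and representation learning rather than exhaustive cycle enumeration.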

### M2HF: Multi-level Multi-modal Hybrid Fusion for Text-Video Retrieval

Videos contain multi-modal content, and exploring multi-level cross-modal interactions with natural language queries can greatly benefit the text-video retrieval (TVR) task. However, new trending methods that apply the large-scale pre-trained model CLIP to TVR do not focus on the multi-modal cues in videos. Furthermore, traditional methods that simply concatenate multi-modal features do not exploit fine-grained cross-modal information in videos. In this paper, we propose a multi-level multi-modal hybrid fusion (M2HF) network to explore comprehensive interactions between text queries and each modality's content in videos. Specifically, M2HF first uses visual features extracted by CLIP for early fusion with audio and motion features extracted from videos, obtaining audio-visual and motion-visual fusion features, respectively. The multi-modal alignment problem is also considered in this process. Then, the visual features, audio-visual fusion features, motion-visual fusion features, and texts extracted from videos establish cross-modal relationships with caption queries in a multi-level way. Finally, the retrieval outputs from all levels are late-fused to obtain the final text-video retrieval results. Our framework provides two kinds of training strategies, an ensemble manner and an end-to-end manner. Moreover, a novel multi-modal balance loss function is proposed to balance the contributions of each modality for efficient end-to-end training. M2HF allows us to obtain state-of-the-art results on various benchmarks, e.g., Rank@1 of 64.9\%, 68.2\%, 33.2\%, 57.1\%, and 57.8\% on MSR-VTT, MSVD, LSMDC, DiDeMo, and ActivityNet, respectively.

### QPQ 1DLT: A system for the rapid deployment of secure and efficient EVM-based blockchains

Limited scalability and transaction costs are, among others, some of the critical issues that hamper a wider adoption of distributed ledger technologies (DLT). That is particularly true for the Ethereum blockchain, which, so far, has been the ecosystem with the highest adoption rate. Quite a few solutions, especially on the Ethereum side of things, have been attempted in the last few years. Most of them adopt the approach of offloading transactions from the blockchain mainnet, a.k.a. Level 1 (L1), to a separate network. Such systems are collectively known as Level 2 (L2) systems. While mitigating the scalability issue, the adoption of L2 introduces additional drawbacks: users have to trust that the L2 system has correctly performed transactions or, conversely, high computational power is required to prove transaction correctness. In addition, significant technical knowledge is needed to set up and manage such an L2 system. To tackle these limitations, we propose 1DLT: a novel system that enables rapid and trustless deployment of an Ethereum Virtual Machine based blockchain that overcomes those drawbacks.

### Random Assignment of Indivisible Goods under Constraints

We consider the problem of random assignment of indivisible goods, in which each agent has an ordinal preference and a constraint. Our goal is to characterize the conditions under which there exists a random assignment that simultaneously achieves efficiency and envy-freeness. The probabilistic serial mechanism ensures the existence of such an assignment for the unconstrained setting. In this paper, we consider a more general setting in which each agent can consume a set of items only if the set satisfies her feasibility constraint. Such constraints must be taken into account in student course placements, employee shift assignments, and so on. We demonstrate that an efficient and envy-free assignment may not exist even for the simple case of partition matroid constraints, where the items are categorized and each agent demands one item from each category. We then identify special cases in which an efficient and envy-free assignment always exists. For these cases, the probabilistic serial cannot be naturally extended; therefore, we provide mechanisms to find the desired assignment using various approaches.
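For reference, the unconstrained probabilistic serial (simultaneous eating) mechanism mentioned above can be sketched as follows; the constrained variants studied in the paper require different mechanisms, so this is only the baseline it generalizes:

```python
def probabilistic_serial(prefs, n_items):
    """Simultaneous-eating algorithm for the unconstrained setting.
    prefs: per-agent ordinal preference lists (best item first).
    Returns the matrix of probability shares (one row per agent)."""
    n = len(prefs)
    supply = [1.0] * n_items
    shares = [[0.0] * n_items for _ in range(n)]
    t = 0.0
    while t < 1.0 - 1e-12:
        # Each agent eats her favorite item that still has supply.
        targets = [next((i for i in p if supply[i] > 1e-12), None) for p in prefs]
        eaters = {}
        for a, i in enumerate(targets):
            if i is not None:
                eaters.setdefault(i, []).append(a)
        if not eaters:
            break
        # Advance time until some item is exhausted (or total time 1 is reached).
        dt = min(min(supply[i] / len(ag) for i, ag in eaters.items()), 1.0 - t)
        for i, ag in eaters.items():
            for a in ag:
                shares[a][i] += dt
            supply[i] -= dt * len(ag)
        t += dt
    return shares

# Two agents who both rank item 0 first split both items equally.
shares = probabilistic_serial([[0, 1], [0, 1]], 2)
```

The resulting random assignment is ordinally efficient and envy-free; the paper's point is that once feasibility constraints such as partition matroids are added, this eating procedure no longer extends naturally.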

### On Optimizing Back-Substitution Methods for Neural Network Verification

With the increasing application of deep learning in mission-critical systems, there is a growing need to obtain formal guarantees about the behaviors of neural networks. Indeed, many approaches for verifying neural networks have been recently proposed, but these generally struggle with limited scalability or insufficient accuracy. A key component in many state-of-the-art verification schemes is computing lower and upper bounds on the values that neurons in the network can obtain for a specific input domain -- and the tighter these bounds, the more likely the verification is to succeed. Many common algorithms for computing these bounds are variations of the symbolic-bound propagation method; and among these, approaches that utilize a process called back-substitution are particularly successful. In this paper, we present an approach for making back-substitution produce tighter bounds. To achieve this, we formulate and then minimize the imprecision errors incurred during back-substitution. Our technique is general, in the sense that it can be integrated into numerous existing symbolic-bound propagation techniques, with only minor modifications. We implement our approach as a proof-of-concept tool, and present favorable results compared to state-of-the-art verifiers that perform back-substitution.
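As background for the bound computation discussed above, here is the simplest scheme: naive interval arithmetic pushed through one linear layer and a ReLU. Back-substitution with symbolic bounds yields tighter results than this, which is precisely the gap the paper's optimization targets; the weights below are made up for illustration:

```python
import numpy as np

def interval_linear(W, b, lo, up):
    """Propagate box bounds through y = W x + b using interval arithmetic:
    positive weights pass lower bounds to lower bounds, negative weights swap."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    new_lo = W_pos @ lo + W_neg @ up + b
    new_up = W_pos @ up + W_neg @ lo + b
    return new_lo, new_up

def interval_relu(lo, up):
    """ReLU is monotone, so it maps bounds to bounds elementwise."""
    return np.maximum(lo, 0.0), np.maximum(up, 0.0)

# Toy layer: y1 = x1 - x2, y2 = 2*x1 + x2 over the input box [-1, 1]^2.
W1 = np.array([[1.0, -1.0], [2.0, 1.0]])
b1 = np.zeros(2)
lo, up = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
lo, up = interval_relu(*interval_linear(W1, b1, lo, up))
```

Over several layers these intervals blow up because each neuron's bounds forget the correlations between inputs; symbolic back-substitution retains linear expressions in the input variables to keep the bounds tight.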

### ConTextual Mask Auto-Encoder for Dense Passage Retrieval

Dense passage retrieval aims to retrieve the relevant passages of a query from a large corpus based on dense representations (i.e., vectors) of the query and the passages. Recent studies have explored improving pre-trained language models to boost dense retrieval performance. This paper proposes CoT-MAE (ConTextual Masked Auto-Encoder), a simple yet effective generative pre-training method for dense passage retrieval. CoT-MAE employs an asymmetric encoder-decoder architecture that learns to compress the sentence semantics into a dense vector through self-supervised and context-supervised masked auto-encoding. Precisely, self-supervised masked auto-encoding learns to model the semantics of the tokens inside a text span, and context-supervised masked auto-encoding learns to model the semantic correlation between text spans. We conduct experiments on large-scale passage retrieval benchmarks and show considerable improvements over strong baselines, demonstrating the high efficiency of CoT-MAE.

### Approximated Doubly Robust Search Relevance Estimation

Extracting query-document relevance from the sparse, biased clickthrough log is among the most fundamental tasks in a web search system. Prior art mainly learns a relevance judgment model from semantic features of the query and document and ignores direct counterfactual relevance evaluation from the click log. The learned semantic matching models can provide relevance signals for tail queries as long as the semantic features are available; however, such a paradigm lacks the capability to introspectively adjust the biased relevance estimation whenever it conflicts with massive implicit user feedback. The counterfactual evaluation methods, on the contrary, ensure unbiased relevance estimation given sufficient click information, but they suffer from the sparse or even missing clicks caused by the long-tailed query distribution. In this paper, we propose to unify the counterfactual evaluating and learning approaches for unbiased relevance estimation on search queries of various popularities. Specifically, we theoretically develop a doubly robust estimator with low bias and variance, which intentionally combines the benefits of existing relevance evaluating and learning approaches. We further instantiate the proposed unbiased relevance estimation framework in Baidu search, with comprehensive practical solutions designed for the click-behavior-tracking data pipeline and for online relevance estimation with an approximated deep neural network. Finally, we present extensive empirical evaluations verifying the effectiveness of our proposed framework, finding that it is robust in practice and manages to improve online ranking performance substantially.
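The textbook doubly robust estimator underlying such a framework can be sketched on synthetic click data; all distributions and parameters below are made up for illustration, not Baidu's. Note how the inverse-propensity correction removes the bias of a deliberately wrong relevance model:

```python
import numpy as np

def doubly_robust_estimate(pred, clicks, observed, propensity):
    """Textbook doubly robust estimator: model prediction plus an
    inverse-propensity-weighted correction on the observed clicks."""
    correction = observed * (clicks - pred) / propensity
    return pred + correction

rng = np.random.default_rng(0)
n = 100_000
true_rel = np.full(n, 0.3)                       # true (unknown) relevance
propensity = np.full(n, 0.2)                     # probability of being examined
observed = rng.random(n) < propensity
clicks = observed & (rng.random(n) < true_rel)   # clicks only when examined
pred = np.full(n, 0.5)                           # deliberately biased model
dr = doubly_robust_estimate(pred, clicks, observed, propensity)
```

In expectation the correction term equals the model's bias with the opposite sign, so `dr.mean()` recovers the true relevance 0.3 even though the model predicts 0.5; the estimator stays unbiased if either the model or the propensities are correct.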

### A non-invasive fault location method for modular multilevel converters under light load conditions

This paper proposes a non-invasive fault location method for modular multilevel converters (MMC) under light load conditions. Prior-art fault location methods for the MMC are often developed and verified under full load conditions. However, it is revealed that the faulty arm current is suppressed to be unipolar when an open-circuit fault happens on a submodule (SM) switch under light load. This causes the capacitor voltages of the healthy and faulty submodules to rise or fall with the same variations, increasing the difficulty of fault location. The proposed approach of injecting a second-order circulating current rebuilds the bipolar arm current of the MMC and enlarges the capacitor voltage deviations between the healthy and faulty SMs. As a result, the fault location time is significantly shortened. Simulations validate the effectiveness of the proposed approach, showing that the fault location time is reduced to one sixth of that without second-order circulating current injection.

### Enhancement to Training of Bidirectional GAN: An Approach to Demystify Tax Fraud

Outlier detection is a challenging activity. Several machine learning techniques have been proposed in the literature for outlier detection. In this article, we propose a new training approach for the bidirectional GAN (BiGAN) to detect outliers. To validate the proposed approach, we train a BiGAN with the proposed training approach to detect taxpayers who are manipulating their tax returns. For each taxpayer, we derive six correlation parameters and three ratio parameters from the tax returns they submit. We train a BiGAN with the proposed training approach on this nine-dimensional ground-truth data set. Next, we generate the latent representation of this data set using the $encoder$ and regenerate the data set using the $generator$ by giving this latent representation as the input. For each taxpayer, we compute the cosine similarity between their ground-truth data and regenerated data. Taxpayers with lower cosine similarity measures are potential return manipulators. We applied our method to analyze the iron and steel taxpayers data set provided by the Commercial Taxes Department, Government of Telangana, India.
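The scoring step can be sketched as follows (a minimal illustration, assuming the ground-truth and regenerated data are available as NumPy arrays; the function name is hypothetical):

```python
import numpy as np

def anomaly_ranking(x, x_rec):
    """Cosine similarity between each taxpayer's nine-dimensional
    ground-truth vector and its BiGAN reconstruction; the lowest
    similarities flag potential return manipulators."""
    num = (x * x_rec).sum(axis=1)
    den = np.linalg.norm(x, axis=1) * np.linalg.norm(x_rec, axis=1)
    sim = num / np.clip(den, 1e-12, None)  # guard against division by zero
    return np.argsort(sim), sim            # most suspicious taxpayers first
```

A usage call would look like `order, sim = anomaly_ranking(ground_truth, regenerated)`, after which the top of `order` lists the candidates for manual audit.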

### FedMR: Federated Learning via Model Recombination

As a promising privacy-preserving machine learning method, Federated Learning (FL) enables global model training across clients without compromising their confidential local data. However, existing FL methods suffer from low inference performance on unevenly distributed data, since most of them rely on Federated Averaging (FedAvg)-based aggregation. By averaging model parameters in a coarse manner, FedAvg eclipses the individual characteristics of local models, which strongly limits the inference capability of FL. Worse still, in each round of FL training, FedAvg dispatches the same initial local models to clients, which can easily leave the search for a globally optimal model stuck in local optima. To address these issues, this paper proposes a novel and effective FL paradigm named FedMR (Federated Model Recombination). Unlike conventional FedAvg-based methods, the cloud server of FedMR shuffles each layer of the collected local models and recombines them into new models for local training on clients. Thanks to the fine-grained model recombination and local training in each FL round, FedMR can quickly converge to a globally effective model for all clients. Comprehensive experimental results demonstrate that, compared with state-of-the-art FL methods, FedMR significantly improves inference accuracy without incurring extra communication overhead.
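The layer-level recombination can be sketched as follows (a simplified illustration under our own assumptions, not FedMR's exact procedure: each local model is represented as a list of per-layer parameter objects, and each layer slot is shuffled independently across clients):

```python
import random

def recombine(client_models, seed=0):
    """Shuffle each layer slot independently across clients, so every
    recombined model mixes layers drawn from different local models."""
    rng = random.Random(seed)
    num_clients = len(client_models)
    num_layers = len(client_models[0])
    recombined = [[None] * num_layers for _ in range(num_clients)]
    for layer in range(num_layers):
        order = list(range(num_clients))
        rng.shuffle(order)                  # random client-to-client mapping
        for dst, src in enumerate(order):
            recombined[dst][layer] = client_models[src][layer]
    return recombined
```

Note that, unlike averaging, this operation preserves every individual layer: the set of layer-$l$ parameters across all models is unchanged, only their grouping into models differs.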

### FEC: Fast Euclidean Clustering for Point Cloud Segmentation

Segmentation of point cloud data is essential in many applications such as remote sensing, mobile robots, and autonomous cars. However, the point clouds captured by 3D range sensors are commonly sparse and unstructured, challenging efficient segmentation. In this paper, we present a fast solution to point cloud instance segmentation with small computational demands. To this end, we propose a novel fast Euclidean clustering (FEC) algorithm which applies a pointwise scheme in place of the clusterwise scheme used in existing works. Our approach is conceptually simple, easy to implement (40 lines in C++), and runs two orders of magnitude faster than classical segmentation methods while producing high-quality results.
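The pointwise idea can be sketched in a few lines (a naive Python illustration under our own assumptions, brute-forcing the neighbor search that a real implementation would accelerate with a k-d tree or octree; this is not the authors' C++ code):

```python
import numpy as np

def euclidean_cluster(points, tol):
    """Pointwise Euclidean clustering sketch: scan points once, label each
    by its already-seen neighbors within `tol`, merging labels with a
    union-find whenever a point bridges two clusters."""
    n = len(points)
    parent = list(range(n))            # one potential label per point

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    labels = [0] * n
    for i in range(n):
        # brute-force neighbor search among already-processed points
        neigh = [j for j in range(i)
                 if np.linalg.norm(points[i] - points[j]) <= tol]
        if not neigh:
            labels[i] = i              # start a new cluster
        else:
            roots = {find(labels[j]) for j in neigh}
            keep = min(roots)
            labels[i] = keep
            for r in roots:            # merge clusters bridged by point i
                parent[r] = keep
    return [find(l) for l in labels]
```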

### How Should We Evaluate Synthesized Environmental Sounds?

Although several methods of environmental sound synthesis have been proposed, there has been no discussion on how synthesized environmental sounds should be evaluated. Conventional studies have conducted only subjective or only objective evaluations, and it is not clear what type of evaluation should be carried out. In this paper, we investigate how to evaluate synthesized environmental sounds. We also propose a subjective evaluation methodology to assess whether a synthesized sound appropriately represents the information input to the environmental sound synthesis system. In our experiments, we compare the proposed and conventional evaluation methods and show that the results of subjective evaluations tend to differ from those of objective evaluations. From these results, we conclude that it is necessary to conduct not only objective but also subjective evaluation.

### Generating a Terrain-Robustness Benchmark for Legged Locomotion: A Prototype via Terrain Authoring and Active Learning

Terrain-aware locomotion has become an emerging topic in legged robotics. However, it is hard to generate challenging and realistic terrains in simulation, which limits the ways researchers can evaluate their locomotion policies. In this paper, we prototype the generation of a terrain dataset via terrain authoring and active learning, and the learned samplers can stably generate diverse high-quality terrains. We hope the generated dataset can serve as a terrain-robustness benchmark for legged locomotion. The dataset and the code implementation are released at https://bit.ly/3bn4j7f.

### The LAM Dataset: A Novel Benchmark for Line-Level Handwritten Text Recognition

Handwritten Text Recognition (HTR) is an open problem at the intersection of Computer Vision and Natural Language Processing. The main challenges, when dealing with historical manuscripts, are due to the preservation of the paper support, the variability of the handwriting -- even of the same author over a wide time-span -- and the scarcity of data from ancient, poorly represented languages. With the aim of fostering the research on this topic, in this paper we present the Ludovico Antonio Muratori (LAM) dataset, a large line-level HTR dataset of Italian ancient manuscripts edited by a single author over 60 years. The dataset comes in two configurations: a basic splitting and a date-based splitting which takes into account the age of the author. The first setting is intended to study HTR on ancient documents in Italian, while the second focuses on the ability of HTR systems to recognize text written by the same writer in time periods for which training data are not available. For both configurations, we analyze quantitative and qualitative characteristics, also with respect to other line-level HTR benchmarks, and present the recognition performance of state-of-the-art HTR architectures. The dataset is available for download at \url{https://aimagelab.ing.unimore.it/go/lam}.

### FALSE: Fake News Automatic and Lightweight Solution

Fake news has existed ever since there was news, spreading from rumors to printed media, then radio and television. Recently, the information age, with its communications and Internet breakthroughs, has exacerbated the spread of fake news. Additionally, aside from e-commerce, the current Internet economy depends on advertisements, views and clicks, which has prompted many developers to bait end users into clicking links or ads. Consequently, the wild spread of fake news through social media networks has impacted real-world issues, from elections to 5G adoption and the handling of the Covid-19 pandemic. Efforts to detect and thwart fake news have existed since the advent of fake news, from fact checkers to artificial-intelligence-based detectors. Solutions are still evolving as more sophisticated techniques are employed by fake news propagators. In this paper, R code is used to study and visualize a modern fake news dataset. We use clustering, classification, correlation and various plots to analyze and present the data. The experiments show high efficiency of classifiers in telling apart real from fake news.

### tile2tile: Learning Game Filters for Platformer Style Transfer

We present tile2tile, an approach for style transfer between levels of tile-based platformer games. Our method involves training models that translate levels from a lower-resolution sketch representation based on tile affordances to the original tile representation for a given game. This enables these models, which we refer to as filters, to translate level sketches into the style of a specific game. Moreover, by converting a level of one game into sketch form and then translating the resulting sketch into the tiles of another game, we obtain a method of style transfer between two games. We use Markov random fields and autoencoders for learning the game filters and apply them to demonstrate style transfer between levels of Super Mario Bros, Kid Icarus, Mega Man and Metroid.

### Secure system based on UAV and BLE for improving SAR missions

This work describes an integrated solution to address a civil security problem in the area of Search And Rescue (SAR) of missing people. The proposal is based on the use of emerging technologies such as Unmanned Aerial Vehicles (UAV), also known as drones, and simulated beacons on smartphones. In particular, in the presented tool, drones fly synchronously over a specific area so that each drone uses on-board sensors to scan for and detect any signal emitted by Bluetooth Low Energy (BLE) beacons from the smartphones of missing people. This technique allows obtaining the GPS position of any detected missing person. This work also addresses some security issues related to possible attacks on perimeter and physical security.

### Using blockchain in the follow-up of emergency situations related to events

This paper describes a decentralized low-cost system designed to reinforce personal security in big events in case of emergency. The proposal consists of using smart contracts supported by blockchain in the management of events. An alternative communication channel that does not require any cloud service is also provided with the aim of improving the coordination of emergency services. Peers may use this emergency support tool to interact with each other through a chat when additional support is required. Since information security is mandatory in this scenario, Identity-Based Signcryption schemes are here used in order to guarantee communication confidentiality, authenticity and integrity. Depending on the communication mode (peer-to-peer or broadcast), different signcryption methods are used. A first implementation of the proposal has produced promising results.

### Priority and collision avoidance system for traffic lights

In this paper, a collision avoidance system is presented to detect red light running and warn nearby vehicles and pedestrians in real time in order to prevent possible accidents. No complex infrastructure-based solution, such as those based on radars or cameras, is required. Instead, a new solution based on smartphones carried by drivers and pedestrians is proposed, in which the device inside the vehicle that violates a traffic light self-reports the offence in order to generate alerts and warn nearby vehicles and pedestrians to prevent accidents. The proposal could also be used by road authorities to collect data on the traffic lights that are most frequently violated in order to define an action plan to investigate causes and look for solutions. It includes a classifier that learns and estimates driver behaviour from collected data, which is used to predict whether a driver is about to run a red light or to detect that this has already happened. In the first case, the system broadcasts warnings directly to close vehicles and pedestrians through Wi-Fi, while in the second case, the proposal warns vehicles and pedestrians in the neighbourhood through a server. The solution also includes a prioritization system that changes traffic lights at intersections according to the needs and characteristics of the traffic at all times, giving top priority to emergency vehicles. Furthermore, the proposal uses cryptographic schemes to protect the authenticity and integrity of messages sent from traffic lights, smartphones and servers, and provides privacy and anonymity to promote the use of the system. A beta version with some parts of the proposal has been implemented and the obtained results are promising.

### QuickSkill: Novice Skill Estimation in Online Multiplayer Games

Matchmaking systems are vital for creating fair matches in online multiplayer games, which directly affects players' satisfaction and game experience. Most matchmaking systems rely largely on precise estimation of players' game skills to construct equitable games. However, the skill rating of a novice is usually inaccurate, as current matchmaking rating algorithms require a considerable number of games to learn the true skill of a new player. Using these unreliable skill scores at early stages for matchmaking usually leads to disparities in team performance, which causes negative game experience. This is known as the ''cold-start'' problem for matchmaking rating algorithms. To overcome this conundrum, this paper proposes QuickSkill, a deep learning based novice skill estimation framework to quickly probe the abilities of new players in online multiplayer games. QuickSkill extracts sequential performance features from the first few games of a player to predict his/her future skill rating with a dedicated neural network, thus delivering accurate skill estimation at the player's early game stage. By employing QuickSkill for matchmaking, game fairness can be dramatically improved in the initial cold-start period. We conduct experiments in a popular mobile multiplayer game in both offline and online scenarios. Results obtained with two real-world anonymized gaming datasets demonstrate that the proposed QuickSkill delivers precise estimation of game skills for novices, leading to significantly lower team skill disparities and better player game experience. To the best of our knowledge, QuickSkill is the first framework that tackles the cold-start problem for traditional skill rating algorithms.

### An algebraically stabilized method for convection-diffusion-reaction problems with optimal experimental convergence rates on general meshes

Algebraically stabilized finite element discretizations of scalar steady-state convection-diffusion-reaction equations often provide accurate approximate solutions satisfying the discrete maximum principle (DMP). However, it was observed that a deterioration of the accuracy and convergence rates may occur for some problems if meshes without local symmetries are used. The paper investigates these phenomena both numerically and analytically and the findings are used to design a new algebraic stabilization called Symmetrized Monotone Upwind-type Algebraically Stabilized (SMUAS) method. It is proved that the SMUAS method is linearity preserving and satisfies the DMP on arbitrary simplicial meshes. Numerical results indicate that the SMUAS method leads to optimal convergence rates on general meshes.

### Existence and Construction of LCD codes over Finite Fields

We show the existence of Euclidean and Hermitian LCD codes with different parameters over finite fields. Further, we provide a method to construct several Hermitian LCD (self-orthogonal, self-dual) codes from a given Hermitian LCD (self-orthogonal, self-dual) code, and a method to construct Euclidean (Hermitian) LCD codes with parameters $[n+1, k+1]$ from a given Euclidean (Hermitian) LCD code with parameters $[n, k]$ over finite fields. We also give some results on $\sigma$-LCD codes over finite fields.

### Deep learning for enhanced free-space optical communications

Atmospheric effects, such as turbulence and background thermal noise, inhibit the propagation of coherent light used in ON-OFF keying free-space optical communication. Here we present and experimentally validate a convolutional neural network to reduce the bit error rate of free-space optical communication in post-processing that is significantly simpler and cheaper than existing solutions based on advanced optics. Our approach consists of two neural networks, the first determining the presence of coherent bit sequences in thermal noise and turbulence and the second demodulating the coherent bit sequences. All data used for training and testing our network is obtained experimentally by generating ON-OFF keying bit streams of coherent light, combining these with thermal light, and passing the resultant light through a turbulent water tank which we have verified mimics turbulence in the air to a high degree of accuracy. Our convolutional neural network improves detection accuracy over threshold classification schemes and has the capability to be integrated with current demodulation and error correction schemes.

### Traditional methods in Edge, Corner and Boundary detection

This is a review of traditional approaches to edge, corner, and boundary detection. These methods have many real-world applications. For instance, in medical image analysis, edge detectors are used to extract features from a given image. In modern innovations like autonomous vehicles, edge detection and segmentation are among the most crucial components. Corner detectors help in motion detection and video tracking. I compare the results of detectors stage-wise wherever possible and also discuss the importance of image preprocessing to minimise noise. Real-world images are used to validate detector performance and limitations.

### Unsupervised domain adaptation semantic segmentation of high-resolution remote sensing imagery with invariant domain-level context memory

Semantic segmentation is a key technique in the automatic interpretation of high-resolution remote sensing (HRS) imagery and has drawn much attention in the remote sensing community. Deep convolutional neural networks (DCNNs) have been successfully applied to the HRS imagery semantic segmentation task due to their hierarchical representation ability. However, the heavy dependency on a large amount of training data with dense annotation and the sensitivity to variation in data distribution severely restrict the potential application of DCNNs to the semantic segmentation of HRS imagery. This study proposes a novel unsupervised domain adaptation semantic segmentation network (MemoryAdaptNet) for the semantic segmentation of HRS imagery. MemoryAdaptNet constructs an output space adversarial learning scheme to bridge the domain distribution discrepancy between the source and target domains and to narrow the influence of domain shift. Specifically, we embed an invariant feature memory module to store invariant domain-level context information, because the features obtained from adversarial learning tend to represent only the variant features of the current limited inputs. A category attention-driven invariant domain-level context aggregation module then integrates this stored context into the current pseudo-invariant features to further augment the pixel representations. An entropy-based pseudo-label filtering strategy is used to update the memory module with high-confidence pseudo-invariant features of the current target images. Extensive experiments under three cross-domain tasks indicate that our proposed MemoryAdaptNet is remarkably superior to state-of-the-art methods.

### O(n log n) algorithm for finding a solution of Erdős-Ginzburg-Ziv theorem

The Erd\H{o}s-Ginzburg-Ziv theorem is a popular theorem in additive number theory, which states that any sequence of $2n-1$ integers contains a subsequence of $n$ elements whose sum is a multiple of $n$. In this article, we provide an algorithm that finds such a subsequence in $O(n \log n)$ time. This is the first known quasi-linear time algorithm for finding a solution of the Erd\H{o}s-Ginzburg-Ziv theorem.
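For intuition only, the theorem's statement can be checked with a naive exhaustive search (this is not the paper's $O(n \log n)$ algorithm, just a small illustration that is feasible for tiny $n$):

```python
from itertools import combinations

def egz_subsequence(seq, n):
    """Given 2n-1 integers, return n of them whose sum is divisible by n.
    The Erdos-Ginzburg-Ziv theorem guarantees such a subsequence exists."""
    assert len(seq) == 2 * n - 1
    for combo in combinations(seq, n):
        if sum(combo) % n == 0:
            return list(combo)
    return None  # unreachable, by the theorem
```

For example, `egz_subsequence([1, 3, 2, 3, 5], 3)` finds three of the five integers summing to a multiple of 3; the exhaustive loop costs $O\binom{2n-1}{n}$, which is exactly the blow-up the paper's quasi-linear algorithm avoids.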

### Direct Sum Theorems From Fortification

We revisit direct sum theorems in communication complexity, which ask whether the resources needed to solve $n$ communication problems together are (approximately) the sum of the resources needed to solve these problems separately. Our work starts with the observation that Dinur and Meir's fortification lemma for protocol size over rectangles can be generalized to a fortification lemma for any sub-additive measure over sets. By applying this lemma to the cover number, we obtain a dual form of the cover number, called a ``$\delta$-fooling set'', which generalizes the fooling set. Given a communication problem $S\subseteq (X\times Y) \times Z$ and a $\delta$-fooling set $\Lambda \subseteq X\times Y$ of $S$, for any subset $\tilde{\Lambda} \subseteq \Lambda$ with $|\tilde{\Lambda}|/|\Lambda| > \delta$, no monochromatic rectangle covers the whole of $\tilde{\Lambda}$. In particular, there is a $\frac{16\log|X||Y|}{\mathsf{Cov}(S)}$-fooling set of the communication problem $S$. With this fact, we are able to reprove the classic direct sum theorem for the cover number with a simple double counting argument, and we prove a new direct sum theorem about protocol size which implies a better direct sum theorem for two functions in terms of protocol size. Formally, letting $\mathsf{L}$ denote protocol size, for a communication problem $F:A \times B \rightarrow \{0,1\}$, $\log\mathsf{L}\left(F\times F\right)\geq \log \mathsf{L}\left(F\right) +\Omega\left(\sqrt{\log\mathsf{L}\left(F\right)}\right)-\log\log|A||B| -4$. We also prove a tight cover number lower bound for the agree problem introduced by Amos Beimel et al.

### Role of Data Augmentation in Unsupervised Anomaly Detection

Self-supervised learning (SSL) has emerged as a promising alternative for creating supervisory signals for real-world tasks, avoiding the extensive cost of careful labeling. SSL is particularly attractive for unsupervised problems such as anomaly detection (AD), where labeled anomalies are costly to secure, difficult to simulate, or even nonexistent. A large catalog of augmentation functions has been used for SSL-based AD (SSAD), and recent works have observed that the type of augmentation has a significant impact on performance. Motivated by these observations, this work puts SSAD under a larger lens and carefully investigates the role of data augmentation in AD through extensive experiments on many testbeds. Our main finding is that self-supervision acts as yet another model hyperparameter and should be chosen carefully with regard to the nature of the true anomalies in the data. That is, the alignment between the augmentation and the underlying anomaly-generating mechanism is the key to the success of SSAD, and in its absence, SSL can even impair (!) detection performance. Moving beyond proposing another SSAD method, our study contributes to a better understanding of this growing area and lays out new directions for future research.

### Rain Removal from Light Field Images with 4D Convolution and Multi-scale Gaussian Process

Existing deraining methods mainly focus on a single input image. With just a single input image, it is extremely difficult to accurately detect rain streaks, remove them, and restore the rain-free image. Compared with a single 2D image, a light field image (LFI) embeds abundant 3D structure and texture information of the target scene by recording the direction and position of each incident ray via a plenoptic camera, which has become a popular device in the computer vision and graphics research communities. In this paper, we propose a novel network, 4D-MGP-SRRNet, for rain streak removal from an LFI. Our method takes as input all sub-views of a rainy LFI. In order to make full use of the LFI, we adopt 4D convolutional layers to build the proposed rain streak removal network so that all sub-views of the LFI are processed simultaneously. In the proposed network, a rain detection model, MGPDNet, with a novel Multi-scale Self-guided Gaussian Process (MSGP) module is proposed to detect rain streaks from all sub-views of the input LFI. Semi-supervised learning is introduced to accurately detect rain streaks by training on both virtual-world and real-world rainy LFIs at multiple scales, via calculating pseudo ground truth for real-world rain streaks. All sub-views, after subtracting the predicted rain streaks, are then fed into a 4D residual model to estimate depth maps. Finally, all sub-views, concatenated with the corresponding rain streaks and fog maps converted from the estimated depth maps, are fed into a rainy LFI restoring model based on an adversarial recurrent neural network to progressively eliminate rain streaks and recover the rain-free LFI. Extensive quantitative and qualitative evaluations conducted on both synthetic and real-world LFIs demonstrate the effectiveness of our proposed method.

### Learning Operators with Ignore Effects for Bilevel Planning in Continuous Domains

Bilevel planning, in which a high-level search over an abstraction of an environment is used to guide low-level decision making, is an effective approach to solving long-horizon tasks in continuous state and action spaces. Recent work has shown that action abstractions that enable such bilevel planning can be learned in the form of symbolic operators and neural samplers given symbolic predicates and demonstrations that achieve known goals. In this work, we show that existing approaches fall short in environments where actions tend to cause a large number of predicates to change. To address this issue, we propose to learn operators with ignore effects. The key idea motivating our approach is that modeling every observed change in the predicates is unnecessary; the only changes that need be modeled are those that are necessary for high-level search to achieve the specified goal. Experimentally, we show that our approach is able to learn operators with ignore effects across six hybrid robotic domains that enable an agent to solve novel variations of a task, with different initial states, goals, and numbers of objects, significantly more efficiently than several baselines.

### Rational Uniform Consensus with General Omission Failures

Generally, system failures, such as crash failures, Byzantine failures and so on, are considered common causes of inconsistencies in distributed consensus and have been extensively studied. In fact, strategic manipulation by rational agents cannot be ignored when reaching consensus in a distributed system. In this paper, we extend the game-theoretic analysis of consensus and design an algorithm for rational uniform consensus with general omission failures, under the assumption that processes are controlled by rational agents who prefer consensus. Unlike a crashing agent, an agent with omission failures may crash, or omit to send or receive messages when it should, which makes detecting faulty agents difficult. By combining the possible failures of the agents at both ends of a link, we convert the omission failure model into a link state model to make failure detection possible. Through analyzing the message passing mechanism in a distributed system with n agents, among which t agents may commit omission failures, we provide an upper bound on the message passing time for reaching consensus on a state among nonfaulty agents, and a message chain mechanism for validating messages. We then prove that our rational uniform consensus is a Nash equilibrium when n > 2t+1 and failure patterns and initial preferences are blind (an assumption of randomness). Thus agents have no motivation to deviate from the consensus. Our research strengthens the reliability of consensus with omission failures from the perspective of game theory.

### FlEC: Enhancing QUIC with application-tailored reliability mechanisms

Packet losses are common events in today's networks. They usually result in longer delivery times for application data, since retransmissions are the de facto technique to recover from such losses. Retransmission is a good strategy for many applications, but it may lead to poor performance for latency-sensitive applications compared to network coding. Although different types of network coding techniques have been proposed to reduce the impact of losses by transmitting redundant information, they are not widely used. Some niche applications include their own variant of Forward Erasure Correction (FEC) techniques, but there is no generic protocol that enables many applications to easily use them. We close this gap by designing, implementing and evaluating a new Flexible Erasure Correction (FlEC) framework inside the newly standardized QUIC protocol. With FlEC, an application can easily select the reliability mechanism that meets its requirements, from pure retransmissions to various forms of FEC. We consider three different use cases: $(i)$ bulk data transfer, $(ii)$ file transfers with restricted buffers and $(iii)$ delay-constrained messages. We demonstrate that modern transport protocols such as QUIC may benefit from application knowledge by leveraging this knowledge in FlEC to provide better loss recovery and stream scheduling. Our evaluation over a wide range of scenarios shows that the FlEC framework outperforms the standard QUIC reliability mechanisms from a latency viewpoint.

### Three New Arnoldi-Type Methods for the Quadratic Eigenvalue Problem in Rotor Dynamics

Three new Arnoldi-type methods are presented to accelerate the modal analysis and critical speed analysis of damped rotor dynamics finite element (FE) models: the linearized quadratic eigenvalue problem (QEP) Arnoldi method, the QEP Arnoldi method, and the truncated generalized standard eigenvalue problem (SEP) Arnoldi method. They correspond to three reduction subspaces, namely the linearized QEP Krylov subspace, the QEP Krylov subspace, and the truncated generalized SEP Krylov subspace, where the first subspace is also used in existing Arnoldi-type methods. Numerical examples built from a turbofan engine low-pressure (LP) rotor demonstrate that the three proposed Arnoldi-type methods are more accurate than existing Arnoldi-type methods.

### Langevin Diffusion Variational Inference

Many methods exist that build powerful variational distributions based on unadjusted Langevin transitions. Most of these were developed using a wide range of different approaches and techniques. Unfortunately, the lack of a unified analysis and derivation makes developing new methods and reasoning about existing ones a challenging task. We address this by giving a single analysis that unifies and generalizes these existing techniques. The main idea is to augment the target and variational distributions by numerically simulating the underdamped Langevin diffusion process and its time reversal. The benefits of this approach are twofold: it provides a unified formulation for many existing methods, and it simplifies the development of new ones. In fact, using our formulation we propose a new method that combines the strengths of previously existing algorithms; it uses underdamped Langevin transitions and powerful augmentations parameterized by a score network. Our empirical evaluation shows that our proposed method consistently outperforms relevant baselines in a wide range of tasks.

### Secrecy Performance Analysis of RIS-aided Communication System with Randomly Flying Eavesdroppers

In this letter, we analyze the secrecy performance of a reconfigurable intelligent surface (RIS)-aided communication system with spatially random unmanned aerial vehicles (UAVs) acting as eavesdroppers. We consider scenarios where the base station (BS) is equipped with single and multiple antennas. The signal-to-noise ratios (SNRs) of the legitimate user and the eavesdroppers are derived analytically and approximated through a computationally effective method. The ergodic secrecy capacity is approximated and derived in closed-form expressions. Simulation results validate the accuracy of the analytical and approximate expressions and show the security-enhancing effect of deploying the RIS.

### Predicting student performance using sequence classification with time-based windows

A growing number of universities worldwide use various forms of online and blended learning as part of their academic curricula. Furthermore, the recent changes caused by the COVID-19 pandemic have led to a drastic increase in the importance and ubiquity of online education. Among the major advantages of e-learning are not only improving students' learning experience and widening their educational prospects, but also the opportunity to gain insights into students' learning processes with learning analytics. This study contributes to the topic of improving and understanding e-learning processes in the following ways. First, we demonstrate that accurate predictive models can be built based on sequential patterns derived from students' behavioral data, which are able to identify underperforming students early in the course. Second, we investigate the specificity-generalizability trade-off in building such predictive models by investigating whether predictive models should be built for every course individually based on course-specific sequential patterns, or across several courses based on more general behavioral patterns. Finally, we present a methodology for capturing temporal aspects in behavioral data and analyze its influence on the predictive performance of the models. Our improved sequence classification technique is capable of predicting student performance with high levels of accuracy, reaching 90 percent for course-specific models.

### Asymmetric Dual-Mode Constellation and Protograph LDPC Code Design for Generalized Spatial MPPM Systems

To achieve reliable and efficient transmissions in free-space optical (FSO) communication, this paper designs a new protograph low-density parity-check (PLDPC) coded generalized spatial multipulse position modulation (GSMPPM) system over weak turbulence channels. Specifically, we investigate the PLDPC code, generalized space shift keying (GSSK) modulation, and MPPM constellation. First, we propose a novel type of GSMPPM constellation that intelligently integrates GSSK into MPPM, referred to as asymmetric dual-mode (ADM) constellations, so as to improve the performance of the PLDPC-coded GSMPPM system. Furthermore, exploiting a protograph extrinsic information transfer (PEXIT) algorithm, we construct a type of improved PLDPC code, referred to as the IM-PLDPC code, which outperforms the existing PLDPC codes over weak turbulence channels. Analytical and simulation results show that the proposed ADM constellations and the proposed IM-PLDPC code obtain noticeable performance gains over their counterparts. Therefore, the proposed PLDPC-coded GSMPPM system with ADM constellations can satisfy the high-reliability requirements of FSO applications.

### Solving the Diffusion of Responsibility Problem in Multiagent Reinforcement Learning with a Policy Resonance Approach

We report a previously undiscovered problem in multiagent reinforcement learning (MARL), named Diffusion of Responsibility (DR). DR causes failures in negotiating a reliable division of responsibilities to complete sophisticated cooperative tasks. It reflects a flaw in how existing algorithms, both value-based and policy-based, deal with the multiagent exploration-exploitation dilemma. The DR problem shares similarities with a phenomenon of the same name in social psychology, also known as the bystander effect. In this work, we start by theoretically analyzing the cause of the DR problem, and we emphasize that it is not related to the reward shaping or credit assignment problems. To deal with the DR problem, we propose a Policy Resonance method that changes the multiagent exploration-exploitation strategy and improves the performance of MARL algorithms on difficult MARL tasks. This method can be combined with most existing MARL algorithms to resolve the performance degradation caused by the DR problem. Experiments are performed on multiple benchmark tasks, including FME, a diagnostic multiagent environment, and ADCA, a competitive multiagent game. Finally, we implement the Policy Resonance method on SOTA MARL algorithms to illustrate the effectiveness of this approach.

### Subtype-Aware Dynamic Unsupervised Domain Adaptation

Unsupervised domain adaptation (UDA) has been successfully applied to transfer knowledge from a labeled source domain to target domains without their labels. Recently introduced transferable prototypical networks (TPN) further address class-wise conditional alignment. In TPN, while the closeness of class centers between source and target domains is explicitly enforced in a latent space, the underlying fine-grained subtype structure and the cross-domain within-class compactness have not been fully investigated. To counter this, we propose a new approach that adaptively performs fine-grained subtype-aware alignment to improve performance in the target domain without subtype labels in either domain. The insight behind our approach is that unlabeled samples within a subtype exhibit local proximity to one another, while subtypes of the same class show disparate characteristics because of different conditional and label shifts. Specifically, we propose to simultaneously enforce subtype-wise compactness and class-wise separation by utilizing intermediate pseudo-labels. In addition, we systematically investigate various scenarios with and without prior knowledge of subtype numbers, and propose to exploit the underlying subtype structure. Furthermore, a dynamic queue framework is developed to evolve the subtype cluster centroids steadily using an alternative processing scheme. Experimental results, carried out with multi-view congenital heart disease data and VisDA and DomainNet, show the effectiveness and validity of our subtype-aware UDA, compared with state-of-the-art UDA methods.

### PoseTrans: A Simple Yet Effective Pose Transformation Augmentation for Human Pose Estimation

Human pose estimation aims to accurately estimate a wide variety of human poses. However, existing datasets often follow a long-tailed distribution in which unusual poses occupy only a small portion, leading to a lack of diversity among rare poses. These issues result in the inferior generalization ability of current pose estimators. In this paper, we present a simple yet effective data augmentation method, termed Pose Transformation (PoseTrans), to alleviate the aforementioned problems. Specifically, we propose the Pose Transformation Module (PTM) to create new training samples with diverse poses and adopt a pose discriminator to ensure the plausibility of the augmented poses. Besides, we propose the Pose Clustering Module (PCM) to measure pose rarity and select the "rarest" poses to help balance the long-tailed distribution. Extensive experiments on three benchmark datasets demonstrate the effectiveness of our method, especially on rare poses. Also, our method is efficient and simple to implement, and can be easily integrated into the training pipeline of existing pose estimation models.

### Style Your Hair: Latent Optimization for Pose-Invariant Hairstyle Transfer via Local-Style-Aware Hair Alignment

Editing hairstyles is unique and challenging due to the complexity and delicacy of hair. Although recent approaches have significantly improved hair details, these models often produce undesirable outputs when the pose of a source image differs considerably from that of a target hair image, limiting their real-world applications. HairFIT, a pose-invariant hairstyle transfer model, alleviates this limitation yet still shows unsatisfactory quality in preserving delicate hair textures. To address these limitations, we propose a high-performing pose-invariant hairstyle transfer model equipped with latent optimization and a newly presented local-style-matching loss. In the StyleGAN2 latent space, we first find a pose-aligned latent code of the target hair with its detailed textures preserved based on local style matching. Then, our model inpaints the occlusions of the source considering the aligned target hair and blends both images to produce a final output. The experimental results demonstrate that our model has strengths in transferring a hairstyle under larger pose differences and in preserving local hairstyle textures.

### Enhancing Dynamic Mode Decomposition Workflow with In-Situ Visualization and Data Compression

Modern computational science and engineering applications are being improved by advances in scientific machine learning. Data-driven methods such as Dynamic Mode Decomposition (DMD) can extract coherent structures from spatio-temporal data generated by dynamical systems and infer different scenarios for said systems. The spatio-temporal data come as snapshots containing spatial information for each time instant. In modern engineering applications, the generation of high-dimensional snapshots can be time- and/or resource-demanding. In the present study, we consider two strategies for enhancing the DMD workflow in large numerical simulations: (i) snapshot compression to relieve disk pressure; (ii) the use of in-situ visualization images to reconstruct the dynamics (or part of them) at runtime. We evaluate our approaches with two 3D fluid dynamics simulations and use DMD to reconstruct the solutions. Results reveal that snapshot compression considerably reduces the required disk space. We have observed that lossy compression reduces storage by almost $50\%$ with low relative errors in the signal reconstructions and other quantities of interest. We also extend our analysis to data generated on the fly, using in-situ visualization tools to generate image files of our state vectors during runtime. On large simulations, the generation of snapshots may be slow enough to use batch algorithms for inference. Streaming DMD takes advantage of the incremental SVD algorithm and updates the modes with the arrival of each new snapshot. We use streaming DMD to reconstruct the dynamics from in-situ generated images. We show that this process is efficient, and the reconstructed dynamics are accurate.
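For readers unfamiliar with DMD itself, the batch (non-streaming) version can be sketched in a few lines of NumPy. This is the textbook exact-DMD algorithm, not the authors' in-situ or streaming pipeline; the demo system and rank choice are illustrative.

```python
import numpy as np

def dmd(X, Y, r):
    """Exact DMD: given snapshot matrices X = [x_0 ... x_{m-1}] and
    Y = [x_1 ... x_m], return the leading r DMD eigenvalues and modes."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    A_tilde = U.conj().T @ Y @ Vh.conj().T / s  # reduced-order operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vh.conj().T / s @ W             # exact DMD modes
    return eigvals, modes

# Demo: snapshots of a known linear system x_{k+1} = A x_k; DMD should
# recover the eigenvalues of A (here 0.9 and 0.5).
A = np.array([[0.9, 1.0], [0.0, 0.5]])
snapshots = np.zeros((2, 10))
x = np.array([1.0, 1.0])
for k in range(10):
    snapshots[:, k] = x
    x = A @ x
eigvals, modes = dmd(snapshots[:, :-1], snapshots[:, 1:], r=2)
```

The streaming variant discussed in the abstract replaces the one-shot SVD with an incremental SVD update as each new snapshot (or in-situ image) arrives.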

### Unsupervised Domain Adaptation for Segmentation with Black-box Source Model

Unsupervised domain adaptation (UDA) has been widely used to transfer knowledge from a labeled source domain to an unlabeled target domain, countering the difficulty of labeling in a new domain. The training of conventional solutions usually relies on the existence of both source and target domain data. However, the privacy of the large-scale, well-labeled source-domain data and of the trained model parameters can become a major concern in cross-center/domain collaborations. To address this, we propose a practical solution to UDA for segmentation with a black-box segmentation model trained in the source domain only, rather than the original source data or a white-box source model. Specifically, we resort to a knowledge distillation scheme with exponential mixup decay (EMD) to gradually learn target-specific representations. In addition, unsupervised entropy minimization is further applied to regularize the target-domain confidence. We evaluated our framework on the BraTS 2018 database, achieving performance on par with white-box source-model adaptation approaches.

### SAT-Inspired Higher-Order Eliminations

We generalize several propositional preprocessing techniques to higher-order logic, building on existing first-order generalizations. These techniques eliminate literals, clauses, or predicate symbols from the problem, with the aim of making it more amenable to automatic proof search. We also introduce a new technique, which we call quasipure literal elimination, that strictly subsumes pure literal elimination. The new techniques are implemented in the Zipperposition theorem prover. Our evaluation shows that they sometimes help prove problems originating from the TPTP library and Isabelle formalizations.

### An Adaptive Repeated-Intersection-Reduction Local Search for the Maximum Independent Set Problem

The maximum independent set (MIS) problem, a classical NP-hard problem with extensive applications in various areas, aims to find a largest set of vertices with no edges among them. Due to its computational intractability, it is difficult to solve the MIS problem effectively, especially on large graphs. Employing heuristic approaches to obtain a good solution within an acceptable amount of time has attracted much attention in the literature. In this paper, we propose an efficient local search algorithm for MIS called ARIR, which consists of two main parts: an adaptive local search framework, and a novel inexact, efficient reduction rule to simplify instances. We conduct experiments on five benchmarks encompassing 92 instances. Compared with four state-of-the-art algorithms, ARIR offers the best accuracy on 89 instances and obtains competitive results on the three remaining ones.
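To make the setting concrete, here is a toy local-search sketch for MIS: a greedy start followed by (1,2)-swaps (remove one vertex, try to insert two). ARIR's adaptive framework and inexact reduction rule are far more sophisticated; every name and parameter below is illustrative.

```python
import random

def local_search_mis(adj, iters=1000, seed=0):
    """Toy local search for MIS on a graph given as an adjacency list.
    Starts greedily (lowest-degree first), then repeats (1,2)-swaps."""
    rng = random.Random(seed)
    n = len(adj)
    # Greedy start: consider vertices in order of increasing degree.
    sol = set()
    for v in sorted(range(n), key=lambda v: len(adj[v])):
        if all(u not in sol for u in adj[v]):
            sol.add(v)
    for _ in range(iters):
        v = rng.choice(tuple(sol))
        sol.remove(v)
        # Free vertices: outside sol and not adjacent to anything in sol.
        free = [u for u in range(n) if u not in sol
                and all(w not in sol for w in adj[u])]
        # Insert two non-adjacent free vertices (a net gain of 1),
        # otherwise put v back.
        gained = False
        for i, a in enumerate(free):
            for b in free[i + 1:]:
                if b not in adj[a]:
                    sol.add(a)
                    sol.add(b)
                    gained = True
                    break
            if gained:
                break
        if not gained:
            sol.add(v)
    return sol
```

On a 5-vertex path `[[1], [0, 2], [1, 3], [2, 4], [3]]`, the search returns an independent set of the optimal size 3.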

### Steps to Knowledge Graphs Quality Assessment

Knowledge Graphs (KGs) have been popularized during the last decade; for instance, they are widely used in the context of the web. In 2012, Google presented its Knowledge Graph, which is used to improve its web search services. The web also hosts other KGs, such as DBpedia and Wikidata, which are used in various applications like personal assistants and question-answering systems. Various web applications rely on KGs to provide concise, complete, accurate, and fresh answers to users. However, what is the quality of those KGs? In which cases should a Knowledge Graph (KG) be used? How might they be evaluated? We reviewed the literature on quality assessment of data, information, linked data, and KGs. We extended current state-of-the-art frameworks by adding various quality dimensions (QDs) and quality metrics (QMs) that are specific to KGs. Furthermore, we propose a general-purpose, practical quality assessment framework, customizable to a domain or task, for assessing the quality of KGs.

### Polynomial kernel for immersion hitting in tournaments

For a fixed simple digraph $H$ without isolated vertices, we consider the problem of deleting arcs from a given tournament to get a digraph which does not contain $H$ as an immersion. We prove that for every $H$, this problem admits a polynomial kernel when parameterized by the number of deleted arcs. The degree of the bound on the kernel size depends on $H$.

### Your ViT is Secretly a Hybrid Discriminative-Generative Diffusion Model

Denoising Diffusion Probabilistic Models (DDPM) and Vision Transformers (ViT) have demonstrated significant progress in generative and discriminative tasks, respectively, and thus far these models have largely been developed in their own domains. In this paper, we establish a direct connection between DDPM and ViT by integrating the ViT architecture into DDPM, and introduce a new generative model called Generative ViT (GenViT). The modeling flexibility of ViT enables us to further extend GenViT to hybrid discriminative-generative modeling, and we introduce a Hybrid ViT (HybViT). Our work is among the first to explore a single ViT for image generation and classification jointly. We conduct a series of experiments to analyze the performance of the proposed models and demonstrate their superiority over the prior state of the art in both generative and discriminative tasks. Our code and pre-trained models can be found at https://github.com/sndnyang/Diffusion_ViT .

### Dismantling Complex Networks by a Neural Model Trained from Tiny Networks

Can we employ one neural model to efficiently dismantle many complex yet unique networks? This article provides an affirmative answer. Diverse real-world systems can be abstracted as complex networks, each consisting of many functional nodes and edges. Percolation theory has indicated that removing only a few vital nodes can cause the collapse of the whole network. However, finding the least number of such vital nodes is a rather challenging task for large networks due to its NP-hardness. Previous studies have proposed many centrality measures and heuristic algorithms to tackle this network dismantling (ND) problem. In contrast, this article approaches the ND task by designing a neural model that is trained on tiny synthetic networks yet applied to various real-world networks. This may seem a discouraging mission at first sight, as network sizes and topologies differ greatly across distinct real-world networks. Nonetheless, this article initiates insightful efforts of designing and training a neural influence ranking model (NIRM). Experiments on fifteen real-world networks validate its effectiveness: compared with state-of-the-art competitors, NIRM mostly requires fewer vital nodes to dismantle a network. The key to its success lies in that our NIRM can efficiently encode both local structural and global topological signals for ranking nodes, in addition to our innovative labelling method used in training dataset construction.

### Counterfactual Supervision-based Information Bottleneck for Out-of-Distribution Generalization

Learning invariant (causal) features for out-of-distribution (OOD) generalization has attracted extensive attention recently, and among the proposals, invariant risk minimization (IRM) (Arjovsky et al., 2019) is a notable solution. In spite of its theoretical promise for linear regression, challenges remain in using IRM for linear classification problems (Rosenfeld et al., 2020; Nagarajan et al., 2021). Along this line, a recent study (Arjovsky et al., 2019) took a first step and proposed the learning principle of information-bottleneck-based invariant risk minimization (IB-IRM). In this paper, we first show that the key assumption of support overlap of invariant features used in (Arjovsky et al., 2019) is rather strong for the guarantee of OOD generalization, and that it is still possible to achieve the optimal solution without this assumption. To further answer the question of whether IB-IRM is sufficient for learning invariant features in linear classification problems, we show that IB-IRM can still fail in two cases, whether or not the invariant features capture all information about the label. To address such failures, we propose a \textit{Counterfactual Supervision-based Information Bottleneck (CSIB)} learning algorithm that provably recovers the invariant features. The proposed algorithm works even when accessing data from a single environment, and has theoretically consistent results for both binary and multi-class problems. We present empirical experiments on three synthetic datasets that verify the efficacy of our proposed method.

### New Parallel Order Maintenance Data Structure

The \emph{Order-Maintenance} (OM) data structure maintains a totally ordered list of items under insertions, deletions, and order comparisons. As a basic data structure, OM has many applications, such as maintaining topological order, core numbers, and trusses in graphs, and maintaining ordered sets in Unified Modeling Language (UML) specifications. The prevalence of multicore machines suggests parallelizing such a basic data structure. This paper proposes a new parallel OM data structure that supports insertions, deletions, and comparisons in parallel. Specifically, parallel insertions and deletions are synchronized efficiently with locks, achieving up to $7$x and $5.6$x speedups with $64$ workers. One big advantage is that the comparisons are lock-free, so they can execute highly in parallel with insertions and deletions, achieving up to $34.4$x speedups with $64$ workers. Typical real applications maintain order lists with a much larger portion of comparisons than insertions and deletions; for example, in core maintenance, the number of comparisons is up to 297 times larger than the number of insertions and deletions on certain graphs. This is why the lock-free order comparison is a breakthrough in practice.
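To illustrate what an OM structure does (ignoring all concurrency), here is a toy sequential version in Python: each item carries an integer label, and an order query is an O(1) label comparison. This is only a sketch of the abstract interface; the paper's parallel, lock-based design is far more involved, and the relabeling here is deliberately naive.

```python
class OrderList:
    """Toy sequential order-maintenance list. Each item carries an
    integer label; order(a, b) is an O(1) label comparison. When the
    gap between neighboring labels runs out, the whole list is
    relabeled (a real OM structure amortizes this locally)."""
    GAP = 1 << 16

    def __init__(self):
        self.label = {}
        self.seq = []

    def insert_after(self, prev, item):
        pos = 0 if prev is None else self.seq.index(prev) + 1
        self.seq.insert(pos, item)
        lo = self.label[self.seq[pos - 1]] if pos > 0 else 0
        hi = (self.label[self.seq[pos + 1]]
              if pos + 1 < len(self.seq) else lo + 2 * self.GAP)
        if hi - lo < 2:  # no room left between neighbors: relabel all
            for i, it in enumerate(self.seq):
                self.label[it] = (i + 1) * self.GAP
        else:
            self.label[item] = (lo + hi) // 2

    def order(self, a, b):
        """True iff a precedes b in the list (O(1))."""
        return self.label[a] < self.label[b]
```

Repeated insertions between the same pair of items exhaust the label gap and trigger relabeling, which is exactly the operation a parallel design must synchronize against concurrent lock-free comparisons.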

### Designing an Artificial Immune System inspired Intrusion Detection System

The Human Immune System (HIS) works to protect a body from infection, illness, and disease. This system can inspire cybersecurity professionals to design an Artificial Immune System (AIS)-based Intrusion Detection System (IDS). These biologically inspired algorithms, using Self/Nonself and Danger Theory, can directly augment IDS designs and implementations. In this paper, we examine the design elements necessary for building an AIS-IDS framework and present an architecture for creating such systems.

### SIERRA: A Modular Framework for Research Automation and Reproducibility

Modern intelligent systems researchers form hypotheses about system behavior and then run experiments using one or more independent variables to test their hypotheses. We present SIERRA, a novel framework structured around that idea for accelerating research development and improving reproducibility of results. SIERRA accelerates research by automating the process of generating executable experiments from queries over independent variable(s), executing experiments, and processing the results to generate deliverables such as graphs and videos. It shifts the paradigm for testing hypotheses from procedural ("Do these steps to answer the query") to declarative ("Here is the query to test--GO!"), reducing the burden on researchers. It employs a modular architecture enabling easy customization and extension for the needs of individual researchers, thereby eliminating manual configuration and processing via throw-away scripts. SIERRA improves reproducibility of research by providing automation independent of the execution environment (HPC hardware, real robots, etc.) and targeted platform (arbitrary simulator or real robots). This enables exact experiment replication, up to the limit of the execution environment and platform, as well as making it easy for researchers to test hypotheses in different computational environments.

### A User-Centered Investigation of Personal Music Tours

Streaming services use recommender systems to surface the right music to users. Playlists are a popular way to present music in a list-like fashion, i.e., as a plain list of songs. An alternative is tours, in which songs alternate with segues that explain the connections between consecutive songs. Tours address the user need of seeking background information about songs, and have been found superior to playlists given the right user context. In this work, we provide, for the first time, a user-centered evaluation of two tour-generation algorithms (Greedy and Optimal) using semi-structured interviews. We assess the algorithms, discuss attributes of the tours they produce, identify which attributes are desirable and which are not, and enumerate several possible improvements to the algorithms, along with practical suggestions on how to implement them. Our main findings are that Greedy generates more likeable tours than Optimal, and that three important attributes of tours are segue diversity, song arrangement, and song familiarity. More generally, we provide insights into how to present music to users, which could inform the design of user-centered recommender systems.

### A Large-Scale Dataset of Twitter Chatter about Online Learning during the Current COVID-19 Omicron Wave

The COVID-19 Omicron variant, reported to be the most immune-evasive variant of COVID-19, is resulting in a surge of COVID-19 cases globally. This has caused schools, colleges, and universities in different parts of the world to transition to online learning. As a result, social media platforms such as Twitter are seeing an increase in conversations related to online learning in the form of tweets. Mining such tweets to develop a dataset can serve as a data resource for different applications and use-cases related to the analysis of interest, views, opinions, perspectives, attitudes, and feedback towards online learning during the current surge of COVID-19 cases caused by the Omicron variant. Therefore, this work presents a large-scale open-access Twitter dataset of conversations about online learning from different parts of the world since the first detected case of the COVID-19 Omicron variant in November 2021. The dataset is compliant with the privacy policy, developer agreement, and guidelines for content redistribution of Twitter, as well as with the FAIR (Findability, Accessibility, Interoperability, and Reusability) principles for scientific data management. The paper also briefly outlines some potential applications in the fields of Big Data, Data Mining, Natural Language Processing, and their related disciplines, with a specific focus on online learning during this Omicron wave, that may be studied, explored, and investigated by using this dataset.

### Towards Informed Design and Validation Assistance in Computer Games Using Imitation Learning

In games, as in many other domains, design validation and testing is a huge challenge as systems grow in size and manual testing becomes infeasible. This paper proposes a new approach to automated game validation and testing. Our method leverages a data-driven imitation learning technique, which requires little effort and time and no knowledge of machine learning or programming, and which designers can use to efficiently train game testing agents. We investigate the validity of our approach through a user study with industry experts. The survey results show that our method is indeed a valid approach to game validation and that data-driven programming would be a useful aid for reducing effort and increasing the quality of modern playtesting. The survey also highlights several open challenges. With the help of the most recent literature, we analyze the identified challenges and propose future research directions suitable for supporting and maximizing the utility of our approach.

### Training Latent Variable Models with Auto-encoding Variational Bayes: A Tutorial

Auto-encoding Variational Bayes (AEVB) is a powerful and general algorithm for fitting latent variable models (a promising direction for unsupervised learning), and is well-known for training the Variational Auto-Encoder (VAE). In this tutorial, we focus on motivating AEVB from the classic Expectation Maximization (EM) algorithm, as opposed to from deterministic auto-encoders. Though natural and somewhat self-evident, the connection between EM and AEVB is not emphasized in the recent deep learning literature, and we believe that emphasizing this connection can improve the community's understanding of AEVB. In particular, we find it especially helpful to view (1) optimizing the evidence lower bound (ELBO) with respect to inference parameters as an approximate E-step and (2) optimizing the ELBO with respect to generative parameters as an approximate M-step; doing both simultaneously as in AEVB is then simply tightening and pushing up the ELBO at the same time. We discuss how the approximate E-step can be interpreted as performing variational inference. Important concepts such as amortization and the reparametrization trick are discussed in great detail. Finally, we derive from scratch the AEVB training procedures of a non-deep and several deep latent variable models, including VAE, Conditional VAE, Gaussian Mixture VAE and Variational RNN. It is our hope that readers will recognize AEVB as a general algorithm that can be used to fit a wide range of latent variable models (not just VAE), and apply AEVB to such models that arise in their own fields of research. PyTorch code for all included models is publicly available.
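The E-step view of the ELBO can be made concrete on a toy model where everything is Gaussian. The sketch below (not from the tutorial itself; the model and names are illustrative) estimates the ELBO with reparametrized samples for the likelihood term and a closed-form KL, as AEVB does.

```python
import numpy as np

def elbo(x, mu, log_sigma, rng, n_samples=1000):
    """Monte Carlo ELBO for the toy model
        p(z) = N(0, 1),   p(x|z) = N(z, 1),   q(z|x) = N(mu, sigma^2).
    The likelihood term uses reparametrized samples z = mu + sigma * eps;
    KL(q || p) is available in closed form for Gaussians."""
    sigma = np.exp(log_sigma)
    z = mu + sigma * rng.standard_normal(n_samples)  # reparametrization
    log_lik = -0.5 * np.log(2 * np.pi) - 0.5 * (x - z) ** 2
    kl = 0.5 * (mu ** 2 + sigma ** 2 - 1.0) - log_sigma
    return log_lik.mean() - kl

rng = np.random.default_rng(0)
# For this model the exact posterior is N(x/2, 1/2), so at that q the
# bound is tight and the ELBO equals log p(x) = log N(x; 0, 2).
tight = elbo(1.0, 0.5, 0.5 * np.log(0.5), rng)
loose = elbo(1.0, 0.0, 0.0, rng)  # a worse q gives a strictly smaller ELBO
```

Maximizing `elbo` over `(mu, log_sigma)` with `x` fixed is exactly the (approximate) E-step; maximizing it over generative parameters (held fixed in this toy) would be the M-step.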

### Deep Reinforcement Learning for RIS-aided Multiuser Full-Duplex Secure Communications with Hardware Impairments

In this paper, we investigate a reconfigurable intelligent surface (RIS)-aided multiuser full-duplex secure communication system with hardware impairments at the transceivers and RIS, where multiple eavesdroppers overhear the two-way transmitted signals simultaneously, and an RIS is applied to enhance the secrecy performance. Aiming at maximizing the sum secrecy rate (SSR), a joint optimization problem of the transmit beamforming at the base station (BS) and the reflecting beamforming at the RIS is formulated under the transmit power constraint of the BS and the unit-modulus constraint of the phase shifters. As the environment is time-varying and the system is high-dimensional, this non-convex optimization problem is mathematically intractable. A deep reinforcement learning (DRL)-based algorithm is explored to obtain a satisfactory solution by repeatedly interacting with and learning from the dynamic environment. Extensive simulation results illustrate that the DRL-based secure beamforming algorithm is significantly effective in improving the SSR. It is also found that the performance of the DRL-based method can be greatly improved, and the convergence of the neural network accelerated, with appropriate neural network parameters.

### A Deep Reinforcement Learning-based Adaptive Charging Policy for Wireless Rechargeable Sensor Networks

Wireless sensor networks consist of randomly distributed sensor nodes for monitoring targets or areas of interest. Maintaining the network for continuous surveillance is a challenge due to the limited battery capacity in each sensor. Wireless power transfer technology is emerging as a reliable solution for energizing the sensors by deploying a mobile charger (MC) to recharge them. However, designing an optimal charging path for the MC is challenging because of uncertainties arising in the networks. The energy consumption rate of the sensors may fluctuate significantly due to unpredictable changes in the network topology, such as node failures. These changes also lead to shifts in the importance of each sensor, which are often assumed to be the same in existing works. We address these challenges in this paper by proposing a novel adaptive charging scheme using a deep reinforcement learning (DRL) approach. Specifically, we endow the MC with a charging policy that determines the next sensor to charge conditioned on the current state of the network. We then use a deep neural network to parametrize this charging policy, which is trained with reinforcement learning techniques. Our model can adapt to spontaneous changes in the network topology. The empirical results show that the proposed algorithm outperforms the existing on-demand algorithms by a significant margin.

### An Adaptive Image Encryption Scheme Guided by Fuzzy Models

A new image encryption scheme using the advanced encryption standard (AES), a chaotic map, a genetic operator, and a fuzzy inference system is proposed in this paper. In this work, plain images were used as input, and the required security level was achieved. Security criteria were computed after running the proposed encryption process. Then an adaptive fuzzy system decided whether to repeat the encryption process, terminate it, or run the next stage, based on the achieved results and user demand. The SHA-512 hash function was employed to increase key sensitivity. Security analysis was conducted to evaluate the security of the proposed scheme, which showed that it had high security and met all the criteria necessary for a good and efficient encryption algorithm. Simulation results and comparisons with similar works showed that the proposed encryptor had a pseudo-noise output and was strongly dependent upon changes in the key and the plain image.

### Learning Facial Liveness Representation for Domain Generalized Face Anti-spoofing

Face anti-spoofing (FAS) aims at distinguishing face spoof attacks from authentic faces, which is typically approached by learning proper models for performing the associated classification task. In practice, one would expect such models to generalize to FAS in different image domains. Moreover, it is not practical to assume that the type of spoof attacks would be known in advance. In this paper, we propose a deep learning model for addressing the aforementioned domain-generalized face anti-spoofing task. In particular, our proposed network is able to disentangle the facial liveness representation from irrelevant ones (i.e., facial content and image domain features). The resulting liveness representation exhibits sufficient domain-invariant properties, and thus it can be applied for performing domain-generalized FAS. We conduct experiments on five benchmark datasets with various settings, and we verify that our model performs favorably against state-of-the-art approaches in identifying novel types of spoof attacks in unseen image domains.

### BERT(s) to Detect Multiword Expressions

Multiword expressions (MWEs) are groups of words in which the meaning of the whole is not derived from the meaning of its parts. The task of processing MWEs is crucial in many natural language processing (NLP) applications, including machine translation and terminology extraction. Therefore, detecting MWEs is a popular research theme. In this paper, we explore state-of-the-art neural transformers for the task of detecting MWEs. We empirically evaluate several transformer models on the dataset from SemEval-2016 Task 10: Detecting Minimal Semantic Units and their Meanings (DiMSUM). We show that transformer models outperform the previous neural models based on long short-term memory (LSTM). The code and pre-trained model will be made freely available to the community.

### What Your Firmware Tells You Is Not How You Should Emulate It: A Specification-Guided Approach for Firmware Emulation

Emulating firmware of microcontrollers is challenging due to the lack of peripheral models. Existing work determines how to respond to peripheral read operations by analyzing the target firmware. This is problematic because the firmware sometimes does not contain enough clues to support the emulation, or even contains misleading information (e.g., buggy firmware). In this work, we propose a new approach that builds peripheral models from the peripheral specification. Using NLP, we translate peripheral behaviors described in human language (documented in chip manuals) into a set of structured condition-action rules. By checking, executing, and chaining them at runtime, we can dynamically synthesize a peripheral model for each firmware execution. The extracted condition-action rules might be incomplete or even wrong, so we propose incorporating symbolic execution to quickly pinpoint the root cause, which assists us in manually correcting the problematic rules. We have implemented our idea for five popular MCU boards spanning three different chip vendors. Using a new edit-distance-based algorithm to calculate trace differences, our evaluation against a large firmware corpus confirmed that our prototype achieves much higher fidelity compared with state-of-the-art solutions. Benefiting from the accurate emulation, our emulator effectively avoids false positives observed in existing fuzzing work. We also designed a new dynamic analysis method to perform driver code compliance checks against the specification, and found instances of non-compliance that we later confirmed to be bugs caused by race conditions.
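The check-execute-chain runtime loop can be illustrated with a toy rule engine. The register fields and rules below are hypothetical, invented for illustration rather than taken from any chip manual.

```python
# Each rule pairs a condition over peripheral register fields with an action
# that mutates them. These UART-like rules are entirely hypothetical.
RULES = [
    ({"CTRL.EN": 1, "TX.EMPTY": 0}, {"TX.EMPTY": 1, "STATUS.IRQ": 1}),
    ({"CTRL.EN": 0}, {"STATUS.IRQ": 0}),
]

def matches(regs, cond):
    """A rule fires when every field in its condition holds."""
    return all(regs.get(field) == value for field, value in cond.items())

def step(regs):
    """Check, execute, and chain rules to a fixed point: the runtime loop
    that synthesizes the peripheral's response to firmware accesses."""
    changed = True
    while changed:
        changed = False
        for cond, action in RULES:
            fires = matches(regs, cond)
            not_applied = any(regs.get(k) != v for k, v in action.items())
            if fires and not_applied:
                regs.update(action)
                changed = True  # an action may enable further rules (chaining)
    return regs

regs = {"CTRL.EN": 1, "TX.EMPTY": 0, "STATUS.IRQ": 0}
step(regs)
assert regs["TX.EMPTY"] == 1 and regs["STATUS.IRQ"] == 1
```

The real system additionally falls back to symbolic execution when no rule explains an observed firmware behavior, to localize the faulty or missing rule.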

### Temporal Concept Drift and Alignment: An empirical approach to comparing Knowledge Organization Systems over time

This research explores temporal concept drift and temporal alignment in knowledge organization systems (KOS). A comparative analysis is pursued using the 1910 Library of Congress Subject Headings, 2020 FAST Topical, and automatic indexing. The use case involves a sample of 90 nineteenth-century Encyclopedia Britannica entries. The entries were indexed using two approaches: 1) full-text indexing; and 2) named entity recognition performed on the entries with Stanza, Stanford's NLP toolkit, followed by automatic indexing of the entities with the Helping Interdisciplinary Vocabulary application (HIVE), using both the 1910 LCSH and FAST Topical. The analysis focused on three goals: 1) identifying results that were exclusive to the 1910 LCSH output; 2) identifying terms in the exclusive set that have been deprecated from the contemporary LCSH, demonstrating temporal concept drift; and 3) exploring the historical significance of these deprecated terms. Results confirm that historical vocabularies can be used to generate anachronistic subject headings representing conceptual drift across time in KOS and historical resources. A methodological contribution is made, demonstrating how to study changes in KOS over time and improve the contextualization of historical humanities resources.
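Goals 1 and 2 amount to set operations over the two indexing outputs. A sketch with entirely invented term sets and an invented deprecation list (none of these are real vocabulary entries):

```python
# Hypothetical subject terms assigned to the same entries by each vocabulary.
lcsh_1910 = {"Aeroplanes", "Electricity", "Mesmerism", "Railways"}
fast_topical = {"Airplanes", "Electricity", "Railways"}
deprecated_from_lcsh = {"Aeroplanes", "Mesmerism"}  # invented deprecation list

# Goal 1: results exclusive to the 1910 LCSH output.
exclusive = lcsh_1910 - fast_topical

# Goal 2: exclusive terms since deprecated -- evidence of temporal concept drift.
drifted = exclusive & deprecated_from_lcsh

assert exclusive == {"Aeroplanes", "Mesmerism"}
assert drifted == {"Aeroplanes", "Mesmerism"}
```

Goal 3, interpreting the historical significance of the drifted terms, is a qualitative step outside this computation.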

### EXTENT: Enabling Approximation-Oriented Energy Efficient STT-RAM Write Circuit

Spin Transfer Torque Random Access Memory (STT-RAM) has garnered interest due to characteristics such as non-volatility, low leakage power, and high density. Its magnetic properties play a vital role in STT switching operations through thermal effectiveness. A key challenge for the industrial adoption of STT-RAM is its high write energy and latency. In this paper, we overcome this challenge by exploiting the stochastic switching activity of STT-RAM cells in tandem with circuit-level approximation. We reinforce the robustness of our technique by analyzing the vulnerability of the write operation to radiation-induced soft errors and applying a low-cost improvement. Due to serious reliability challenges in nanometer-scale technology, the robustness of the proposed circuit is also analyzed in the presence of CMOS and magnetic tunnel junction (MTJ) process variation. Compared to the state-of-the-art, we achieve 33.04% and 5.47% lower STT-RAM write energy and latency, respectively, with a 3.7% area overhead, for memory-centric applications.

### Multi-Pair D2D Communications Aided by An Active RIS over Spatially Correlated Channels with Phase Noise

This paper investigates a multi-pair device-to-device (D2D) communication system aided by an active reconfigurable intelligent surface (RIS) with phase noise and a direct link. The approximate closed-form expression of the ergodic sum rate is derived over spatially correlated Rician fading channels with statistical channel state information (CSI). When the Rician factors go to infinity, asymptotic expressions of the ergodic sum rate are presented to give insights into poor scattering environments. The power scaling law for the special case of a single D2D pair is presented without phase noise under uncorrelated Rician fading. Then, to solve the ergodic sum rate maximization problem, a method based on a genetic algorithm (GA) is proposed for joint power control and discrete phase shift optimization. Simulation results verify the accuracy of our derivations and show that the active RIS outperforms the passive RIS.
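A minimal sketch of a GA for discrete phase-shift selection. The fitness here is an invented surrogate (alignment with a hidden target configuration), since the true ergodic-sum-rate objective depends on the channel model; population sizes and rates are arbitrary.

```python
import random

random.seed(1)

PHASES = [0, 1, 2, 3]               # 2-bit discrete phase shifts (k * pi/2)
N_ELEMENTS = 8
TARGET = [3, 1, 0, 2, 3, 1, 0, 2]   # hidden optimum standing in for the sum-rate peak

def fitness(chromosome):
    """Toy surrogate for the ergodic sum rate: count aligned elements."""
    return sum(int(g == t) for g, t in zip(chromosome, TARGET))

def crossover(a, b):
    """Single-point crossover of two phase configurations."""
    cut = random.randrange(1, N_ELEMENTS)
    return a[:cut] + b[cut:]

def mutate(c, rate=0.1):
    """Randomly resample each gene with a small probability."""
    return [random.choice(PHASES) if random.random() < rate else g for g in c]

pop = [[random.choice(PHASES) for _ in range(N_ELEMENTS)] for _ in range(30)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                 # elitism: keep the best configurations
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(20)]

best = max(pop, key=fitness)
assert fitness(best) >= 6            # the GA should align most elements
```

In the paper's setting, the same loop would evaluate candidate phase/power assignments against the derived closed-form sum-rate expression instead of this toy target.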

### OrthoMAD: Morphing Attack Detection Through Orthogonal Identity Disentanglement

Morphing attacks are one of the many threats that constantly affect deep face recognition systems. A morphing attack consists of selecting two faces from different individuals and fusing them into a final image that contains the identity information of both. In this work, we propose a novel regularisation term that takes into account the identity information present in both faces and promotes the creation of two orthogonal latent vectors. We evaluate our proposed method (OrthoMAD) on five different types of morphing in the FRLL dataset and evaluate the performance of our model when trained on five distinct datasets. With a small ResNet-18 as the backbone, we achieve state-of-the-art results in the majority of the experiments, and competitive results in the others. The code of this paper will be publicly available.

### Parallel Hierarchical Transformer with Attention Alignment for Abstractive Multi-Document Summarization

In comparison to single-document summarization, abstractive Multi-Document Summarization (MDS) brings challenges on the representation and coverage of its lengthy and linked sources. This study develops a Parallel Hierarchical Transformer (PHT) with attention alignment for MDS. By incorporating word- and paragraph-level multi-head attentions, the hierarchical architecture of PHT allows better processing of dependencies at both token and document levels. To guide the decoding towards a better coverage of the source documents, the attention-alignment mechanism is then introduced to calibrate beam search with predicted optimal attention distributions. Based on the WikiSum data, a comprehensive evaluation is conducted to test improvements on MDS by the proposed architecture. By better handling the inner- and cross-document information, results in both ROUGE and human evaluation suggest that our hierarchical model generates summaries of higher quality relative to other Transformer-based baselines at relatively low computational cost.

### TexPrax: A Messaging Application for Ethical, Real-time Data Collection and Annotation

Collecting and annotating task-oriented dialog data is difficult, especially for highly specific domains that require expert knowledge. At the same time, informal communication channels such as instant messengers are increasingly being used at work. This has led to a lot of work-relevant information being disseminated through those channels, which needs to be post-processed manually by the employees. To alleviate this problem, we present TexPrax, a messaging system to collect and annotate problems, causes, and solutions that occur in work-related chats. TexPrax uses a chatbot to directly engage the employees to provide lightweight annotations on their conversations and to ease their documentation work. To comply with data privacy and security regulations, we use end-to-end message encryption and give our users full control over their data, which has various advantages over conventional annotation tools. We evaluate TexPrax in a user study with German factory employees who ask their colleagues for solutions to problems that arise during their daily work. Overall, we collect 201 task-oriented German dialogues containing 1,027 sentences with sentence-level expert annotations. Our data analysis also reveals that real-world conversations frequently contain instances of code-switching, varying abbreviations for the same entity, and dialects, all of which NLP systems should be able to handle.

### Interactive and Visual Prompt Engineering for Ad-hoc Task Adaptation with Large Language Models

State-of-the-art neural language models can now be used to solve ad-hoc language tasks through zero-shot prompting without the need for supervised training. This approach has gained popularity in recent years, and researchers have demonstrated prompts that achieve strong accuracy on specific NLP tasks. However, finding a prompt for new tasks requires experimentation. Different prompt templates with different wording choices lead to significant accuracy differences. PromptIDE allows users to experiment with prompt variations, visualize prompt performance, and iteratively optimize prompts. We developed a workflow that allows users to first focus on model feedback using small data before moving on to a large data regime that allows empirical grounding of promising prompts using quantitative measures of the task. The tool then allows easy deployment of the newly created ad-hoc models. We demonstrate the utility of PromptIDE (demo at this http URL) and our workflow using several real-world use cases.

### Estimating Appearance Models for Image Segmentation via Tensor Factorization

Image segmentation is one of the core tasks in computer vision, and solving it often depends on modeling the image appearance data via the color distributions of each of its constituent regions. Whereas many segmentation algorithms handle the dependence on appearance models using alternation or implicit methods, we propose here a new approach to directly estimate them from the image without prior information on the underlying segmentation. Our method uses local high-order color statistics from the image as input to a tensor-factorization-based estimator for latent variable models. This approach is able to estimate models in multi-region images and automatically output the region proportions without prior user interaction, overcoming the drawbacks of a prior attempt at this problem. We also demonstrate the performance of our proposed method in many challenging synthetic and real imaging scenarios and show that it leads to an efficient segmentation algorithm.

### A Walk in the Park: Learning to Walk in 20 Minutes With Model-Free Reinforcement Learning

Deep reinforcement learning is a promising approach to learning policies in uncontrolled environments without requiring domain knowledge. Unfortunately, due to sample inefficiency, deep RL applications have primarily focused on simulated environments. In this work, we demonstrate that recent advancements in machine learning algorithms and libraries, combined with a carefully tuned robot controller, lead to learning quadruped locomotion in only 20 minutes in the real world. We evaluate our approach on several indoor and outdoor terrains which are known to be challenging for classical model-based controllers. We observe that the robot is able to learn a walking gait consistently on all of these terrains. Finally, we evaluate our design decisions in a simulated environment.

### StyleFaceV: Face Video Generation via Decomposing and Recomposing Pretrained StyleGAN3

Realistic generative face video synthesis has long been a pursuit in both computer vision and graphics community. However, existing face video generation methods tend to produce low-quality frames with drifted facial identities and unnatural movements. To tackle these challenges, we propose a principled framework named StyleFaceV, which produces high-fidelity identity-preserving face videos with vivid movements. Our core insight is to decompose appearance and pose information and recompose them in the latent space of StyleGAN3 to produce stable and dynamic results. Specifically, StyleGAN3 provides strong priors for high-fidelity facial image generation, but the latent space is intrinsically entangled. By carefully examining its latent properties, we propose our decomposition and recomposition designs which allow for the disentangled combination of facial appearance and movements. Moreover, a temporal-dependent model is built upon the decomposed latent features, and samples reasonable sequences of motions that are capable of generating realistic and temporally coherent face videos. Particularly, our pipeline is trained with a joint training strategy on both static images and high-quality video data, which is of higher data efficiency. Extensive experiments demonstrate that our framework achieves state-of-the-art face video generation results both qualitatively and quantitatively. Notably, StyleFaceV is capable of generating realistic $1024\times1024$ face videos even without high-resolution training videos.

### BERTifying Sinhala -- A Comprehensive Analysis of Pre-trained Language Models for Sinhala Text Classification

This research provides the first comprehensive analysis of the performance of pre-trained language models for Sinhala text classification. We test on a set of different Sinhala text classification tasks and our analysis shows that out of the pre-trained multilingual models that include Sinhala (XLM-R, LaBSE, and LASER), XLM-R is the best model by far for Sinhala text classification. We also pre-train two RoBERTa-based monolingual Sinhala models, which are far superior to the existing pre-trained language models for Sinhala. We show that when fine-tuned, these pre-trained language models set a very strong baseline for Sinhala text classification and are robust in situations where labeled data is insufficient for fine-tuning. We further provide a set of recommendations for using pre-trained models for Sinhala text classification. We also introduce new annotated datasets useful for future research in Sinhala text classification and publicly release our pre-trained models.

### Language-guided Semantic Style Transfer of 3D Indoor Scenes

We address the new problem of language-guided semantic style transfer of 3D indoor scenes. The input is a 3D indoor scene mesh and several phrases that describe the target scene. Firstly, 3D vertex coordinates are mapped to RGB residues by a multi-layer perceptron. Secondly, colored 3D meshes are differentiably rendered into 2D images via a viewpoint sampling strategy tailored for indoor scenes. Thirdly, rendered 2D images are compared to phrases via pre-trained vision-language models. Lastly, errors are back-propagated to the multi-layer perceptron to update vertex colors corresponding to certain semantic categories. We conducted large-scale qualitative analyses and A/B user tests with the public ScanNet and SceneNN datasets. We demonstrate that: (1) our results are visually pleasing and potentially useful for multimedia applications; (2) rendering 3D indoor scenes from viewpoints consistent with human priors is important; (3) incorporating semantics significantly improves style transfer quality; and (4) an HSV regularization term leads to results that are more consistent with inputs and generally rated better. Codes and the user study toolbox are available at https://github.com/AIR-DISCOVER/LASST

### Exploring the scaling limitations of the variational quantum eigensolver with the bond dissociation of hydride diatomic molecules

The variational quantum eigensolver (VQE) allows prediction of chemically accurate total energies via variational minimization on a quantum device. The severe scaling of high-fidelity quantum chemical methods can be overcome due to qubit entanglement and an exponentially large computational space, potentially allowing for modeling of the electronic structure of larger systems to an accuracy of 1 kcal/mol or better. Nevertheless, current VQE implementations on existing quantum hardware are limited by qubit error rates, the number of qubits available, and the allowable gate depth. The largest chemical systems modeled to date using VQE include s- and p-block elements. Here we study the scaling of VQE on a quantum emulator to assess the near-term feasibility of modeling a molecule containing d orbitals, specifically the TiH diatomic molecule, in order to better understand the hardware necessary to carry out this calculation on a real device. We model the LiH, NaH, KH, and TiH diatomic molecules to demonstrate that the inclusion of d orbitals and the UCCSD ansatz dramatically increases the cost of this problem. We show that a parallel implementation of VQE along with an optimal choice of quantum circuit measurement basis enables the study of larger systems on a classical emulator, but only provides approximately polynomial savings in the face of exponential growth in classical computational resources. The approximate error rates necessary to perform this calculation on trapped-ion hardware are estimated and shown to likely prohibit the practical study of TiH on current devices using the UCCSD ansatz until significant improvements in both hardware and algorithms are available. The successful treatment of moderate-sized, d-electron molecules such as TiH using the VQE algorithm therefore stands as an important waypoint in the advancement of useful chemical applications of near-term quantum processors.

### An Efficient Multi-Scale Fusion Network for 3D Organ at Risk (OAR) Segmentation

Accurate segmentation of organs-at-risk (OARs) is a precursor to optimizing radiation therapy planning. Existing deep learning-based multi-scale fusion architectures have demonstrated a tremendous capacity for 2D medical image segmentation. The key to their success is aggregating global context and maintaining high-resolution representations. However, when translated to 3D segmentation problems, existing multi-scale fusion architectures might underperform due to their heavy computation overhead and substantial data requirements. To address this issue, we propose a new OAR segmentation framework, called OARFocalFuseNet, which fuses multi-scale features and employs focal modulation for capturing global-local context across multiple scales. Each resolution stream is enriched with features from different resolution scales, and multi-scale information is aggregated to model diverse contextual ranges. As a result, feature representations are further boosted. Comprehensive comparisons in our experimental setup on OAR segmentation as well as multi-organ segmentation show that our proposed OARFocalFuseNet outperforms the recent state-of-the-art methods on the publicly available OpenKBP dataset and Synapse multi-organ segmentation. Both of the proposed methods (3D-MSF and OARFocalFuseNet) showed promising performance in terms of standard evaluation metrics. Our best-performing method (OARFocalFuseNet) obtained a Dice coefficient of 0.7995 and a Hausdorff distance of 5.1435 on the OpenKBP dataset, and a Dice coefficient of 0.8137 on the Synapse multi-organ segmentation dataset.
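The Dice coefficient reported above measures mask overlap. A minimal sketch on toy binary masks (flattened to 1D lists for simplicity):

```python
def dice(pred, gt):
    """Dice similarity coefficient between two binary masks (flat lists):
    2 * |pred AND gt| / (|pred| + |gt|)."""
    inter = sum(p & g for p, g in zip(pred, gt))
    return 2 * inter / (sum(pred) + sum(gt))

pred = [1, 1, 0, 0, 1]
gt   = [1, 0, 0, 1, 1]

# Two foreground voxels overlap out of 3 + 3 total -> Dice = 4/6.
assert abs(dice(pred, gt) - 2 * 2 / (3 + 3)) < 1e-12
```

In 3D segmentation the same formula is applied per organ over the voxel grid; Hausdorff distance, the other metric reported, additionally measures worst-case boundary error.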

### Archimedes Meets Privacy: On Privately Estimating Quantiles in High Dimensions Under Minimal Assumptions

The last few years have seen a surge of work on high-dimensional statistics under privacy constraints, mostly following two main lines of work: the "worst case" line, which does not make any distributional assumptions on the input data; and the "strong assumptions" line, which assumes that the data is generated from specific families, e.g., subgaussian distributions. In this work we take a middle ground, obtaining new differentially private algorithms with polynomial sample complexity for estimating quantiles in high dimensions, as well as estimating and sampling points of high Tukey depth, all working under very mild distributional assumptions. From the technical perspective, our work relies upon deep robustness results in the convex geometry literature, demonstrating how such results can be used in a private context. Our main object of interest is the (convex) floating body (FB), a notion going back to Archimedes, which is a robust and well-studied high-dimensional analogue of the interquantile range. We show how one can privately, and with polynomially many samples, (a) output an approximate interior point of the FB -- e.g., a "typical user" in a high-dimensional database -- by leveraging the robustness of the Steiner point of the FB; and at the expense of polynomially many more samples, (b) produce an approximate uniform sample from the FB, by constructing a private noisy projection oracle.

### C3-DINO: Joint Contrastive and Non-contrastive Self-Supervised Learning for Speaker Verification

Self-supervised learning (SSL) has drawn increased attention in the field of speech processing. Recent studies have demonstrated that contrastive learning is able to learn discriminative speaker embeddings in a self-supervised manner. However, basic contrastive self-supervised learning (CSSL) assumes that the pairs generated from a view of the anchor instance and any view of other instances are all negative, which introduces many false negative pairs when constructing the loss function. This problem, referred to as class-collision, remains one of the major issues impeding CSSL-based speaker verification (SV) systems from achieving better performance. Meanwhile, studies reveal that negative-sample-free SSL frameworks perform well in learning speaker or image representations. In this study, we investigate SSL techniques that lead to improved SV performance. We first analyse the impact of false negative pairs in CSSL systems. Then, a multi-stage Class-Collision Correction (C3) method is proposed, which leads to the state-of-the-art CSSL-based speaker embedding system. On the basis of the pretrained CSSL model, we further propose to employ a negative-sample-free SSL objective (i.e., DINO) to fine-tune the speaker embedding network. The resulting speaker embedding system (C3-DINO) achieves 2.5% EER with a simple Cosine Distance Scoring method on the Voxceleb1 test set, which outperforms the previous SOTA SSL system (4.86%) by a significant +45% relative improvement. With speaker clustering and pseudo-labeling on the Voxceleb2 training set, an LDA/CDS back-end applied to the C3-DINO speaker embeddings is able to further push the EER to 2.2%. Comprehensive experimental investigations on the Voxceleb benchmarks and our internal dataset demonstrate the effectiveness of our proposed methods, and the performance gap between SSL SV and the supervised counterpart narrows further.
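Cosine distance scoring, the simple back-end mentioned above, compares an enrollment embedding with a trial embedding. A sketch with invented toy vectors standing in for learned speaker embeddings:

```python
import math

def cosine_score(a, b):
    """Cosine similarity between two embedding vectors; higher means the
    trial utterance is more likely from the enrolled speaker."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

enroll     = [0.2, 0.9, 0.1]    # invented enrollment embedding
trial_same = [0.25, 0.85, 0.12] # trial from the same (toy) speaker
trial_diff = [0.9, -0.2, 0.4]   # trial from a different (toy) speaker

# Same-speaker trials should score higher; thresholding this score
# (swept to trade off false accepts vs. false rejects) yields the EER.
assert cosine_score(enroll, trial_same) > cosine_score(enroll, trial_diff)
```

The LDA/CDS back-end in the abstract applies linear discriminant analysis to the embeddings before this same cosine comparison.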

### Speeding up random walk mixing by starting from a uniform vertex

The theory of rapid mixing random walks plays a fundamental role in the study of modern randomised algorithms. Usually, the mixing time is measured with respect to the worst initial position. It is well known that the presence of bottlenecks in a graph hampers mixing and, in particular, starting inside a small bottleneck significantly slows down the diffusion of the walk in the first steps of the process. To circumvent this problem, the average mixing time is defined to be the mixing time starting at a uniformly random vertex. In this paper we provide a general framework to show logarithmic average mixing time for random walks on graphs with small bottlenecks. The framework is especially effective on certain families of random graphs with heterogeneous properties. We demonstrate its applicability on two random models for which the mixing time was known to be of order $\log^2n$, speeding up the mixing to order $\log n$. First, in the context of smoothed analysis on connected graphs, we show logarithmic average mixing time for randomly perturbed graphs of bounded degeneracy. A particular instance is the Newman-Watts small-world model. Second, we show logarithmic average mixing time for supercritically percolated expander graphs. When the host graph is complete, this application gives an alternative proof that the average mixing time of the giant component in the supercritical Erd\H{o}s-R\'enyi graph is logarithmic.
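The benefit of a uniform starting vertex can be illustrated with a toy simulation (not the paper's framework) on a graph with a small bottleneck: a clique with a path hanging off it. Walks started at the far end of the path take much longer to reach the well-mixed clique than walks started at a uniformly random vertex, most of which begin inside the clique.

```python
import random

random.seed(0)

# A 20-vertex clique with a 10-vertex path attached at vertex 0;
# the path acts as a small bottleneck region.
CLIQUE = list(range(20))
PATH = list(range(20, 30))

def neighbors(v):
    if v in CLIQUE:
        nbrs = [u for u in CLIQUE if u != v]
        if v == 0:
            nbrs.append(20)      # bridge into the path
        return nbrs
    i = v - 20
    nbrs = [0] if i == 0 else [v - 1]
    if i < 9:
        nbrs.append(v + 1)
    return nbrs

def steps_to_reach_clique(start):
    """Number of simple-random-walk steps until the walk enters the clique."""
    v, t = start, 0
    while v not in CLIQUE:
        v = random.choice(neighbors(v))
        t += 1
    return t

trials = 2000
worst = sum(steps_to_reach_clique(29) for _ in range(trials)) / trials
unif = sum(steps_to_reach_clique(random.choice(CLIQUE + PATH))
           for _ in range(trials)) / trials

# Uniform starts mostly begin outside the bottleneck, so on average the
# walk reaches the well-connected region far sooner than from the worst start.
assert unif < worst
```

Hitting the clique is only a proxy here; the paper's results concern total-variation mixing, but the same bottleneck intuition drives both.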

### Semidefinite Programming versus Burer-Monteiro Factorization for Matrix Sensing

Many fundamental low-rank optimization problems, such as matrix completion, phase synchronization/retrieval, power system state estimation, and robust PCA, can be formulated as the matrix sensing problem. Two main approaches for solving matrix sensing are based on semidefinite programming (SDP) and Burer-Monteiro (B-M) factorization. The SDP method suffers from high computational and space complexities, whereas the B-M method may return a spurious solution due to the non-convexity of the problem. The existing theoretical guarantees for the success of these methods have led to similar conservative conditions, which may wrongly imply that these methods have comparable performances. In this paper, we shed light on some major differences between these two methods. First, we present a class of structured matrix completion problems for which the B-M methods fail with an overwhelming probability, while the SDP method works correctly. Second, we identify a class of highly sparse matrix completion problems for which the B-M method works and the SDP method fails. Third, we prove that although the B-M method exhibits the same performance independent of the rank of the unknown solution, the success of the SDP method is correlated to the rank of the solution and improves as the rank increases. Unlike the existing literature that has mainly focused on those instances of matrix sensing for which both SDP and B-M work, this paper offers the first result on the unique merit of each method over the alternative approach.
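The Burer-Monteiro approach can be sketched with a toy rank-1 factorization solved by gradient descent. The ground truth and the sampling pattern below are invented, and this benign instance is one where B-M happens to succeed; the paper's point is precisely that on other structured instances it fails while the SDP succeeds, and vice versa.

```python
import random

random.seed(0)

# Ground truth: a rank-1 PSD matrix M = u u^T, with only the entries in
# omega observed (a toy stand-in for the matrix sensing measurements).
n = 5
u_true = [1.0, 2.0, 0.5, 1.5, 1.0]
omega = [(i, j) for i in range(n) for j in range(n) if (i + j) % 2 == 0]

def loss_and_grad(u):
    """Least-squares loss over observed entries of u u^T, with its gradient."""
    loss, grad = 0.0, [0.0] * n
    for i, j in omega:
        r = u[i] * u[j] - u_true[i] * u_true[j]
        loss += r * r
        grad[i] += 2 * r * u[j]
        grad[j] += 2 * r * u[i]
    return loss, grad

# Burer-Monteiro: optimize the low-rank factor u directly (non-convex),
# instead of solving an SDP over the full n x n matrix variable.
u = [random.uniform(0.5, 1.5) for _ in range(n)]
for _ in range(20000):
    loss, grad = loss_and_grad(u)
    u = [ui - 0.002 * gi for ui, gi in zip(u, grad)]

assert loss < 1e-6  # converged to the (toy) ground truth up to sign
```

The SDP alternative would instead optimize over a full PSD matrix X with a trace objective, which is convex but costs far more memory and time as n grows.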

### Combinatorial optimization solving by coherent Ising machines based on spiking neural networks

Spiking neural networks are a form of neuromorphic computing that is believed to improve the level of intelligence and to provide advantages for quantum computing. In this work, we address this issue by designing an optical spiking neural network and show that it can be used to accelerate computation, especially on combinatorial optimization problems. Here the spiking neural network is constructed from antisymmetrically coupled degenerate optical parametric oscillator pulses and dissipative pulses. A nonlinear transfer function is chosen to mitigate amplitude inhomogeneities and destabilize the resulting local minima, in accordance with the dynamical behavior of spiking neurons. We numerically show that the spiking neural network coherent Ising machine has excellent performance on combinatorial optimization problems, which is expected to offer new applications for neural computing and optical computing.
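The combinatorial problems targeted by such machines are typically encoded as Ising energy minimization over spin configurations. A brute-force toy sketch of the objective (the couplings are invented; a real coherent Ising machine finds low-energy states through its optical dynamics rather than enumeration):

```python
import itertools

# Toy Ising couplings J_ij; the machine seeks spins s in {-1, +1}^n
# minimizing E(s) = -sum_ij J_ij * s_i * s_j.
J = {(0, 1): -1, (1, 2): -1, (0, 2): 1}

def energy(spins):
    return -sum(j * spins[a] * spins[b] for (a, b), j in J.items())

# Brute force over all 2^3 configurations for this tiny instance.
best = min(itertools.product([-1, 1], repeat=3), key=energy)

# The ground state beats (or ties) the all-up configuration.
assert energy(best) <= energy((1, 1, 1))
```

Problems such as MAX-CUT map directly onto this form, which is why Ising machines are benchmarked on combinatorial optimization.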

### Mean estimation when you have the source code; or, quantum Monte Carlo methods

Suppose $\boldsymbol{y}$ is a real random variable, and one is given access to the "code" that generates it (for example, a randomized or quantum circuit whose output is $\boldsymbol{y}$). We give a quantum procedure that runs the code $O(n)$ times and returns an estimate $\widehat{\boldsymbol{\mu}}$ for $\mu = \mathrm{E}[\boldsymbol{y}]$ that with high probability satisfies $|\widehat{\boldsymbol{\mu}} - \mu| \leq \sigma/n$, where $\sigma = \mathrm{stddev}[\boldsymbol{y}]$. This dependence on $n$ is optimal for quantum algorithms. One may compare with classical algorithms, which can only achieve the quadratically worse $|\widehat{\boldsymbol{\mu}} - \mu| \leq \sigma/\sqrt{n}$. Our method improves upon previous works, which either made additional assumptions about $\boldsymbol{y}$, and/or assumed the algorithm knew an a priori bound on $\sigma$, and/or used additional logarithmic factors beyond $O(n)$. The central subroutine for our result is essentially Grover's algorithm but with complex phases.
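The classical baseline rate $\sigma/\sqrt{n}$ quoted above is easy to demonstrate empirically; the quantum $\sigma/n$ procedure, of course, cannot be reproduced with classical sampling. A toy "code" for $\boldsymbol{y}$ with known mean and standard deviation:

```python
import random
import statistics

random.seed(0)

def sample_y():
    """The 'code' that generates y: any randomized procedure one can rerun.
    Here, an invented Gaussian with mu = 3 and sigma = 2."""
    return random.gauss(mu=3.0, sigma=2.0)

def classical_estimate(n):
    """Classical Monte Carlo: average n independent runs of the code."""
    return statistics.fmean(sample_y() for _ in range(n))

# Averaging n = 400 runs gives error on the order of sigma / sqrt(n) = 0.1;
# the paper's quantum procedure would achieve ~ sigma / n = 0.005 instead.
errs = [abs(classical_estimate(400) - 3.0) for _ in range(200)]
assert statistics.fmean(errs) < 3 * 2.0 / 400 ** 0.5  # a few multiples of sigma/sqrt(n)
```

Doubling the classical accuracy requires four times the samples, whereas the quantum estimator's error shrinks linearly in the number of code invocations.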

### Coil2Coil: Self-supervised MR image denoising using phased-array coil images

Denoising of magnetic resonance images is beneficial in improving the quality of low signal-to-noise ratio images. Recently, denoising using deep neural networks has demonstrated promising results. Most of these networks, however, utilize supervised learning, which requires a large number of training pairs of noise-corrupted and clean images. Obtaining training images, particularly clean images, is expensive and time-consuming. Hence, methods such as Noise2Noise (N2N) that require only pairs of noise-corrupted images have been developed to reduce the burden of obtaining training datasets. In this study, we propose a new self-supervised denoising method, Coil2Coil (C2C), that does not require the acquisition of clean images or paired noise-corrupted images for training. Instead, the method utilizes multichannel data from phased-array coils to generate training images. First, it divides and combines multichannel coil images into two images, one for input and the other for the label. Then, they are processed to impose noise independence and sensitivity normalization such that they can be used as training images for N2N. For inference, the method takes a coil-combined image (e.g., a DICOM image) as input, enabling a wide application of the method. When evaluated using synthetic noise-added images, C2C shows the best performance against several self-supervised methods, reporting outcomes comparable to those of supervised methods. When tested on DICOM images, C2C successfully denoised real noise without showing structure-dependent residuals in the error maps. Because of the significant advantage of not requiring additional scans for clean or paired images, the method can be easily utilized for various clinical applications.
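The coil-splitting idea can be sketched on toy 1D data. Sensitivities and noise levels below are invented, and the real method additionally normalizes coil sensitivities and decorrelates noise before training:

```python
import random

random.seed(0)

# Toy "multichannel coil data": each coil sees the object scaled by its
# own sensitivity, plus independent noise.
signal = [1.0, 2.0, 3.0, 4.0]
sens = [0.9, 1.1, 1.0, 0.8]   # one sensitivity per coil (invented)
coils = [[s * x + random.gauss(0, 0.3) for x in signal] for s in sens]

def combine(group):
    """Naive coil combination: average the coil images in the group."""
    return [sum(img[i] for img in group) / len(group) for i in range(len(signal))]

# Split the coils into two disjoint groups and coil-combine each group.
# Their noise realizations are independent, so the pair can serve as
# Noise2Noise-style (input, label) training data.
inp = combine([coils[0], coils[2]])
lab = combine([coils[1], coils[3]])

# Both halves estimate the same underlying image, differing only in noise.
err = sum(abs(a - b) for a, b in zip(inp, lab)) / len(signal)
assert err < 1.0
```

Training a denoiser to map `inp` to `lab` (and vice versa) then approximates supervised training on clean targets, which is the core N2N argument C2C builds on.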

### A unifying partially-interpretable framework for neural network-based extreme quantile regression

Risk management in many environmental settings requires an understanding of the mechanisms that drive extreme events. Useful metrics for quantifying such risk are extreme quantiles of response variables conditioned on predictor variables that describe, e.g., climate, biosphere, and environmental states. Typically these quantiles lie outside the range of observable data and so, for estimation, require specification of parametric extreme value models within a regression framework. Classical approaches in this context utilise linear or additive relationships between predictor and response variables and suffer in either their predictive capabilities or computational efficiency; moreover, their simplicity is unlikely to capture the truly complex structures that lead to the creation of extreme wildfires. In this paper, we propose a new methodological framework for performing extreme quantile regression using artificial neural networks, which are able to capture complex non-linear relationships and scale well to high-dimensional data. The "black box" nature of neural networks means that they lack the desirable trait of interpretability often favoured by practitioners; thus, we combine aspects of linear, and additive, models with deep learning to create partially interpretable neural networks that can be used for statistical inference but retain high prediction accuracy. To complement this methodology, we further propose a novel point process model for extreme values which overcomes the finite lower-endpoint problem associated with the generalised extreme value class of distributions. Efficacy of our unified framework is illustrated on U.S. wildfire data with a high-dimensional predictor set, and we demonstrate substantial improvements in predictive performance over linear and spline-based regression techniques.
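The "partially interpretable" idea can be sketched at the forward-pass level: an interpretable linear part acting on one subset of predictors plus a small MLP acting on the rest, with their outputs summed into the parameter of the extreme-value model. All names, sizes, and weights below are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(2)

def mlp(x, W1, b1, W2, b2):
    """A tiny one-hidden-layer network standing in for the black-box part."""
    return np.tanh(x @ W1 + b1) @ W2 + b2

n, d_lin, d_nn, h = 200, 3, 5, 16
X_lin = rng.standard_normal((n, d_lin))   # predictors kept interpretable
X_nn = rng.standard_normal((n, d_nn))     # predictors fed to the black box
beta = np.array([1.0, -0.5, 2.0])         # interpretable linear coefficients
W1, b1 = rng.standard_normal((d_nn, h)), np.zeros(h)
W2, b2 = rng.standard_normal((h, 1)), np.zeros(1)

# Predicted parameter of the extreme-value model for each observation:
# an additive split into an interpretable term and a flexible term.
theta = X_lin @ beta + mlp(X_nn, W1, b1, W2, b2).ravel()
print(theta.shape)
```

The coefficients `beta` remain directly readable while the MLP absorbs the complex non-linear structure.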

### $L^p$ sampling numbers for the Fourier-analytic Barron space

In this paper, we consider Barron functions $f : [0,1]^d \to \mathbb{R}$ of smoothness $\sigma > 0$, which are functions that can be written as $f(x) = \int_{\mathbb{R}^d} F(\xi) \, e^{2 \pi i \langle x, \xi \rangle} \, d \xi \quad \text{with} \quad \int_{\mathbb{R}^d} |F(\xi)| \cdot (1 + |\xi|)^{\sigma} \, d \xi < \infty.$ For $\sigma = 1$, these functions play a prominent role in machine learning, since they can be efficiently approximated by (shallow) neural networks without suffering from the curse of dimensionality. For these functions, we study the following question: Given $m$ point samples $f(x_1),\dots,f(x_m)$ of an unknown Barron function $f : [0,1]^d \to \mathbb{R}$ of smoothness $\sigma$, how well can $f$ be recovered from these samples, for an optimal choice of the sampling points and the reconstruction procedure? Denoting the optimal reconstruction error measured in $L^p$ by $s_m (\sigma; L^p)$, we show that $m^{- \frac{1}{\max \{ p,2 \}} - \frac{\sigma}{d}} \lesssim s_m(\sigma;L^p) \lesssim (\ln (e + m))^{\alpha(\sigma,d) / p} \cdot m^{- \frac{1}{\max \{ p,2 \}} - \frac{\sigma}{d}} ,$ where the implied constants only depend on $\sigma$ and $d$ and where $\alpha(\sigma,d)$ stays bounded as $d \to \infty$.

### A two-step approach to Wasserstein distributionally robust chance- and security-constrained dispatch

This paper considers a security-constrained dispatch problem involving generation and line contingencies in the presence of renewable generation. The uncertainty due to renewables is modeled using a joint chance constraint, and the mismatch caused by contingencies and renewables is handled using reserves. We consider a distributionally robust approach to solve the chance-constrained program. We assume that samples of the uncertainty are available. Using them, we construct a set of distributions, termed the ambiguity set, containing all distributions that are close to the empirical distribution under the Wasserstein metric. The chance constraint is imposed for all distributions in the ambiguity set to form the distributionally robust optimization problem. This problem is nonconvex and computationally heavy to solve exactly. We adopt a two-step approach to find an approximate solution. In the first step, we construct a polyhedral set in the space of uncertainty that contains enough probability mass under all distributions in the ambiguity set. This set is constructed by solving several two-dimensional distributionally robust problems. In the second step, we solve a linear robust optimization problem where the uncertain constraint is imposed for all uncertainty values lying in the polyhedral set. We demonstrate the scalability and robustness of our method using numerical experiments.
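The second step exploits a standard fact: a linear constraint robust over a polytope only needs to be checked at the polytope's vertices, since a linear function attains its maximum over a polytope at a vertex. A minimal sketch with made-up numbers (the vertices, coefficients, and bound are illustrative, not from the paper):

```python
import numpy as np

# Vertices of a toy polyhedral uncertainty set (the unit square).
vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
a, b, c = np.array([2.0, 1.0]), np.array([0.5, -0.3]), 5.0

def robust_feasible(x, vertices, a, b, c):
    """True iff a@x + b@w <= c for every uncertainty w in the polytope."""
    worst = max(float(b @ w) for w in vertices)   # max attained at a vertex
    return float(a @ x) + worst <= c

print(robust_feasible(np.array([1.0, 1.0]), vertices, a, b, c))
```

In practice the robust counterpart is dualized rather than enumerating vertices, but the feasibility condition being enforced is the same.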

### A Hybrid Deep Feature-Based Deformable Image Registration Method for Pathological Images

Pathologists need to combine information from differently stained pathological slices to obtain accurate diagnostic results. Deformable image registration is a necessary technique for fusing multi-modal pathological slices. This paper proposes a hybrid deep feature-based deformable image registration framework for stained pathological samples. We first extract dense feature points and perform point matching using two deep learning feature networks. Then, to further reduce false matches, an outlier detection method combining the isolation forest statistical model and a local affine correction model is proposed. Finally, an interpolation method generates the deformation vector field (DVF) for pathology image registration based on the above matching points. We evaluate our method on the dataset of the Automatic Non-rigid Histological Image Registration (ANHIR) challenge, organized in conjunction with the IEEE ISBI 2019 conference. Our technique outperforms the traditional approaches by 17%, with the average-average relative target registration error (rTRE) reaching 0.0034. The proposed method achieved state-of-the-art performance, ranking first in the evaluation on the test dataset. The proposed hybrid deep feature-based registration method can potentially become a reliable method for pathology image registration.
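As a simplified stand-in for the outlier-rejection step, one can fit a single affine transform to the matched points by least squares and discard matches with large residuals. (The paper combines an isolation forest with a *local* affine model; the global fit, point counts, and threshold below are illustrative assumptions.)

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic matches: dst is an affine transform of src, plus 3 gross mismatches.
src = rng.uniform(0, 100, size=(60, 2))
A_true = np.array([[1.02, 0.05], [-0.04, 0.98]])
dst = src @ A_true.T + np.array([3.0, -2.0])
dst[:3] += rng.uniform(80, 120, size=(3, 2))     # three false matches

def filter_matches(src, dst, thresh=30.0):
    """Keep matches whose residual under a least-squares affine fit is small."""
    X = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coordinates
    P, *_ = np.linalg.lstsq(X, dst, rcond=None)   # fitted 2x3 affine transform
    resid = np.linalg.norm(X @ P - dst, axis=1)
    return resid < thresh

inliers = filter_matches(src, dst)
print(inliers.sum(), "of", len(src), "matches kept")
```

The surviving matches would then drive the DVF interpolation.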

### Uconv-Conformer: High Reduction of Input Sequence Length for End-to-End Speech Recognition

Optimization of modern ASR architectures is among the highest priority tasks, since it saves many computational resources for model training and inference. This work proposes a new Uconv-Conformer architecture based on the standard Conformer model that consistently reduces the input sequence length by a factor of 16, which speeds up the intermediate layers. To solve the convergence problem caused by such a significant reduction of the time dimension, we use upsampling blocks similar to those in the U-Net architecture to ensure a correct CTC loss calculation and stabilize network training. The Uconv-Conformer architecture is not only faster in training and inference but also shows better WER compared to the baseline Conformer. Our best Uconv-Conformer model showed a 40.3% reduction in epoch training time, and 47.8% and 23.5% inference acceleration on CPU and GPU, respectively. Relative WER on LibriSpeech test_clean and test_other decreased by 7.3% and 9.2%, respectively.
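At the shape level the idea is: downsample the time axis aggressively for the expensive intermediate layers, then upsample back so the CTC head sees a frame rate compatible with the transcript length. The sketch below uses strided slicing and repetition as stand-ins for the convolutional blocks; the exact factors and where upsampling happens are illustrative, not the paper's configuration.

```python
import numpy as np

T, D = 1024, 80                            # input frames x features
x = np.random.default_rng(4).standard_normal((T, D))

def downsample(x, factor):
    return x[::factor]                     # stand-in for strided convolutions

def upsample(x, factor):
    return np.repeat(x, factor, axis=0)    # stand-in for transposed convolutions

h = downsample(x, 16)                      # intermediate layers see T/16 frames
y = upsample(h, 4)                         # U-Net-style upsampling before CTC
print(x.shape, "->", h.shape, "->", y.shape)
```

Running attention-heavy layers on `T/16` frames is where the training and inference speedups come from.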

### Score-Based Diffusion meets Annealed Importance Sampling

More than twenty years after its introduction, Annealed Importance Sampling (AIS) remains one of the most effective methods for marginal likelihood estimation. It relies on a sequence of distributions interpolating between a tractable initial distribution and the target distribution of interest, from which we simulate approximately using a non-homogeneous Markov chain. To obtain an importance sampling estimate of the marginal likelihood, AIS introduces an extended target distribution to reweight the Markov chain proposal. While much effort has been devoted to improving the proposal distribution used by AIS, by changing the intermediate distributions and corresponding Markov kernels, an underappreciated issue is that AIS uses a convenient but suboptimal extended target distribution. This can hinder its performance. We here leverage recent progress in score-based generative modeling (SGM) to approximate the optimal extended target distribution for AIS proposals corresponding to the discretization of Langevin and Hamiltonian dynamics. We demonstrate these novel, differentiable, AIS procedures on a number of synthetic benchmark distributions and variational auto-encoders.
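For readers unfamiliar with the baseline, here is vanilla AIS (not the SGM-based variant proposed above) on a toy 1-D problem with a known normalizing constant: anneal from a standard normal to an unnormalized Gaussian target using geometric intermediate distributions and one Metropolis step per temperature. All particle counts, step sizes, and temperatures are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(5)

log_pi0 = lambda x: -0.5 * x**2 - 0.5 * np.log(2 * np.pi)  # N(0,1), normalized
log_gamma = lambda x: -0.5 * (x - 2.0) ** 2                # unnormalized target
true_Z = np.sqrt(2 * np.pi)                                # integral of gamma

def ais(n_particles=2000, n_temps=50, step=0.5):
    betas = np.linspace(0.0, 1.0, n_temps + 1)
    x = rng.standard_normal(n_particles)        # exact sample from pi_0
    logw = np.zeros(n_particles)
    for b_prev, b in zip(betas[:-1], betas[1:]):
        # incremental weight: ratio of consecutive geometric intermediates
        logw += (b - b_prev) * (log_gamma(x) - log_pi0(x))
        # one Metropolis step leaving pi_b ~ pi0^(1-b) * gamma^b invariant
        def log_t(z):
            return (1 - b) * log_pi0(z) + b * log_gamma(z)
        prop = x + step * rng.standard_normal(n_particles)
        accept = np.log(rng.uniform(size=n_particles)) < log_t(prop) - log_t(x)
        x = np.where(accept, prop, x)
    return float(np.exp(logw).mean())

Z_hat = ais()
print(f"AIS estimate {Z_hat:.3f} vs true {true_Z:.3f}")
```

The extended target distribution the paper improves on is implicit in this weight recursion: it is the one that makes the incremental weights tractable, not the one minimizing estimator variance.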

### A sustainable waste-to-protein system to maximise waste resource utilisation for developing food- and feed-grade protein solutions

A waste-to-protein system that integrates a range of waste-to-protein upgrading technologies has the potential to converge innovations on zero-waste and protein security to ensure a sustainable protein future. We present a global overview of food-safe and feed-safe waste resource potential and technologies to sort and transform such waste streams with compositional quality characteristics into food-grade or feed-grade protein. The identified streams are rich in carbon and nutrients and absent of pathogens and hazardous contaminants, including food waste streams, lignocellulosic waste from agricultural residues and forestry, and contaminant-free waste from the food and drink industry. A wide range of chemical, physical, and biological treatments can be applied to extract nutrients and convert waste-carbon to fermentable sugars or other platform chemicals for subsequent conversion to protein. Our quantitative analyses suggest that the waste-to-protein system has the potential to maximise recovery of various low-value resources and catalyse the transformative solutions toward a sustainable protein future. However, novel protein regulation processes remain expensive and resource intensive in many countries, with protracted timelines for approval. This poses a significant barrier to market expansion, despite accelerated research and development in waste-to-protein technologies and novel protein sources. Thus, the waste-to-protein system is an important initiative to promote metabolic health across the lifespan and tackle the global hunger crisis.

### Hyperparameter Optimization of Generative Adversarial Network Models for High-Energy Physics Simulations

The Generative Adversarial Network (GAN) is a powerful and flexible tool that can generate high-fidelity synthesized data by learning. It has seen many applications in simulating events in High Energy Physics (HEP), including simulating detector responses and physics events. However, training GANs is notoriously hard and optimizing their hyperparameters even more so. It normally requires many trial-and-error training attempts to achieve stable training and reach reasonable fidelity. Significant tuning work has to be done to achieve the accuracy required by physics analyses. This work uses the physics-agnostic and high-performance-computer-friendly hyperparameter optimization tool HYPPO to optimize and examine the sensitivities of the hyperparameters of a GAN for two independent HEP datasets. This work provides the first insights into efficiently tuning GANs for Large Hadron Collider data. We show that given proper hyperparameter tuning, we can find GANs that provide high-quality approximations of the desired quantities. We also provide guidelines for how to go about GAN architecture tuning using the analysis tools in HYPPO.

### Scalable Quantum Neural Networks for Classification

Many recent machine learning tasks resort to quantum computing to improve classification accuracy and training efficiency by taking advantage of quantum mechanics, a field known as quantum machine learning (QML). The variational quantum circuit (VQC) is frequently utilized to build a quantum neural network (QNN), a counterpart to the conventional neural network. Due to hardware limitations, however, current quantum devices only allow one to use a few qubits to represent data and perform simple quantum computations. The limited quantum resources on a single quantum device degrade data usage and limit the scale of the quantum circuits, preventing quantum advantage to some extent. To alleviate this constraint, we propose an approach to implementing a scalable quantum neural network (SQNN) by cooperatively utilizing the quantum resources of multiple small-size quantum devices. In an SQNN system, several quantum devices are used as quantum feature extractors, extracting local features from an input instance in parallel, and one quantum device works as a quantum predictor, performing prediction over the local features collected through classical communication channels. The quantum feature extractors in the SQNN system are independent of each other, so one can flexibly use quantum devices of varying sizes, with larger quantum devices extracting more local features. Notably, an SQNN can also be executed on a single quantum device in a modular fashion. Our work is exploratory and carried out on a quantum system simulator using the TensorFlow Quantum library. We evaluate the approach on a binary classification task on the MNIST dataset and show that the SQNN model achieves classification accuracy comparable to a regular QNN model of the same scale. Furthermore, we demonstrate that an SQNN model with more quantum resources can significantly improve classification accuracy.
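The SQNN data flow can be sketched classically: split the input into patches, run each patch through its own feature extractor in parallel, collect the local features over a classical channel, and feed them to a single predictor. The linear maps below are classical stand-ins for the variational quantum circuits; all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

x = rng.standard_normal(16)                     # input instance
patches = x.reshape(4, 4)                       # one patch per small device

# One stand-in "quantum feature extractor" per device: 4 inputs -> 2 features.
extractors = [rng.standard_normal((4, 2)) for _ in range(4)]
local_feats = np.concatenate(
    [p @ W for p, W in zip(patches, extractors)]  # extracted in parallel
)                                                 # collected classically

# Stand-in "quantum predictor" over the concatenated local features.
predictor = rng.standard_normal(8)
score = float(np.tanh(local_feats @ predictor))   # binary prediction score
print(score)
```

Because each extractor only ever sees its own patch, the extractors can run on separate small devices, or sequentially on one device in the modular fashion mentioned above.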

### A Latent Feature Analysis-based Approach for Spatio-Temporal Traffic Data Recovery

Missing data is an inevitable and common problem in data-driven intelligent transportation systems (ITS). In the past decade, much research has been devoted to the recovery of missing traffic data; however, how to make full use of spatio-temporal traffic patterns to improve recovery performance remains an open problem. Targeting the spatio-temporal characteristics of traffic speed data, this paper regards the recovery of missing data as a matrix completion problem and proposes a spatio-temporal traffic data completion method based on latent feature analysis, which discovers spatio-temporal patterns and underlying structures from incomplete data to complete the recovery task. We introduce spatial and temporal correlations to capture the main underlying features of each dimension, and these latent features are then applied to recover traffic data through latent feature analysis. Experimental evaluation shows that the model achieves small values of the evaluation criteria, indicating that it can accurately estimate continuously missing data.
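A toy version of latent-feature-based completion: factor the partially observed road-by-time matrix as `U @ V.T` and fit the factors by gradient descent on the observed entries only, then read the missing entries off the reconstruction. This is a plain matrix-factorization sketch, not the paper's model (which additionally encodes spatio-temporal correlations); all sizes and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

n_roads, n_times, r = 20, 30, 3
U_true = rng.standard_normal((n_roads, r))
V_true = rng.standard_normal((n_times, r))
M = U_true @ V_true.T                       # complete "traffic speed" matrix
mask = rng.uniform(size=M.shape) < 0.7      # ~70% of entries observed

U = 0.1 * rng.standard_normal((n_roads, r))
V = 0.1 * rng.standard_normal((n_times, r))
lr, lam = 0.01, 0.01                        # step size, L2 regularization
for _ in range(3000):
    E = mask * (U @ V.T - M)                # residual on observed entries only
    U, V = U - lr * (E @ V + lam * U), V - lr * (E.T @ U + lam * V)

rmse_missing = np.sqrt(((U @ V.T - M)[~mask] ** 2).mean())
print("RMSE on missing entries:", round(float(rmse_missing), 4))
```

The low-rank structure learned from observed entries is what lets the missing entries be filled in.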

### Delaunay-Triangulation-Based Learning with Hessian Total-Variation Regularization

Regression is one of the core problems tackled in supervised learning. Rectified linear unit (ReLU) neural networks generate continuous and piecewise-linear (CPWL) mappings and are the state-of-the-art approach for solving regression problems. In this paper, we propose an alternative method that leverages the expressivity of CPWL functions. In contrast to deep neural networks, our CPWL parameterization guarantees stability and is interpretable. Our approach relies on the partitioning of the domain of the CPWL function by a Delaunay triangulation. The function values at the vertices of the triangulation are our learnable parameters and identify the CPWL function uniquely. Formulating the learning scheme as a variational problem, we use the Hessian total variation (HTV) as regularizer to favor CPWL functions with few affine pieces. In this way, we control the complexity of our model through a single hyperparameter. By developing a computational framework to compute the HTV of any CPWL function parameterized by a triangulation, we discretize the learning problem as the generalized least absolute shrinkage and selection operator (LASSO). Our experiments validate the usage of our method in low-dimensional scenarios.
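A one-dimensional analogue makes the parameterization concrete: a "triangulation" of $[0,1]$ is a sorted set of knots, the learnable parameters are the function values at the knots, and the Hessian total variation reduces to the total variation of the slope, i.e., the sum of absolute slope changes. The knots and values below are arbitrary illustrative numbers.

```python
import numpy as np

knots = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
values = np.array([0.0, 1.0, 0.5, 0.5, 2.0])    # the learnable parameters

def cpwl(x):
    """Evaluate the CPWL function identified by (knots, values)."""
    return np.interp(x, knots, values)

def htv_1d(knots, values):
    """Sum of absolute slope changes -- penalizing it favors few affine pieces."""
    slopes = np.diff(values) / np.diff(knots)
    return float(np.abs(np.diff(slopes)).sum())

print(cpwl(0.125), htv_1d(knots, values))
```

Driving `htv_1d` toward zero during training would flatten slope changes, merging adjacent affine pieces, which is exactly the sparsity mechanism the LASSO formulation exploits in higher dimensions.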

### Diagnosis of COVID-19 disease using CT scan images and pre-trained models

Diagnosis of COVID-19 is necessary to prevent and control the disease. Deep learning methods are considered fast and accurate. In this paper, using a parallel combination of three well-known pre-trained networks, we attempt to distinguish coronavirus-infected samples from healthy samples. The negative log-likelihood loss function is used for model training. CT scan images from the SARS-CoV-2 dataset were used for diagnosis. The SARS-CoV-2 dataset contains 2482 lung CT scan images, of which 1252 belong to COVID-19-infected samples. The proposed model achieved an accuracy close to 97%.

### Optimal algorithms for learning quantum phase states

We analyze the complexity of learning $n$-qubit quantum phase states. A degree-$d$ phase state is defined as a superposition of all $2^n$ basis vectors $x$ with amplitudes proportional to $(-1)^{f(x)}$, where $f$ is a degree-$d$ Boolean polynomial over $n$ variables. We show that the sample complexity of learning an unknown degree-$d$ phase state is $\Theta(n^d)$ if we allow separable measurements and $\Theta(n^{d-1})$ if we allow entangled measurements. Our learning algorithm based on separable measurements has runtime $\textsf{poly}(n)$ (for constant $d$) and is well-suited for near-term demonstrations as it requires only single-qubit measurements in the Pauli $X$ and $Z$ bases. We show similar bounds on the sample complexity for learning generalized phase states with complex-valued amplitudes. We further consider learning phase states when $f$ has sparsity-$s$, degree-$d$ in its $\mathbb{F}_2$ representation (with sample complexity $O(2^d sn)$), $f$ has Fourier-degree-$t$ (with sample complexity $O(2^{2t})$), and learning quadratic phase states with $\varepsilon$-global depolarizing noise (with sample complexity $O(n^{1+\varepsilon})$). These learning algorithms give us a procedure to learn the diagonal unitaries of the Clifford hierarchy and IQP circuits.
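For small $n$, a phase state can be written out explicitly as a classical vector: the amplitude on basis vector $x$ is $(-1)^{f(x)}/\sqrt{2^n}$. The sketch below builds a degree-2 example for $n=3$ (the polynomial is an arbitrary illustrative choice); the paper's learning algorithms, of course, operate on measurement outcomes rather than on this vector.

```python
import numpy as np
from itertools import product

n = 3
# f(x) = x0*x1 + x2 over F_2: a degree-2 Boolean polynomial.
f = lambda x: (x[0] * x[1] + x[2]) % 2

# Amplitudes (-1)^f(x) / sqrt(2^n), ordered lexicographically over x.
amps = np.array([(-1) ** f(x) for x in product([0, 1], repeat=n)],
                dtype=float) / np.sqrt(2 ** n)
print(amps)
```

The state is an equal-magnitude superposition; all the information about $f$ lives in the $\pm 1$ sign pattern, which is what the single-qubit $X$/$Z$ measurements of the separable-measurement algorithm probe.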

### Deep Learning for Size and Microscope Feature Extraction and Classification in Oral Cancer: Enhanced Convolution Neural Network

Background and Aim: Overfitting has hindered the successful application of deep learning to the classification of oral cancer images. The aim of this research was to reduce overfitting while accurately producing the required dimension-reduced feature maps with a convolutional neural network. Methodology: The proposed system is an enhanced convolutional neural network that uses an autoencoder to increase the efficiency of feature extraction and to compress information. Unpooling and deconvolution are performed to reconstruct the input data, minimizing the difference between input and output. The network thereby learns to extract characteristic features from the input and to regenerate the input from those features, which reduces overfitting. Results: Accuracy and processing time were measured on different groups of confocal laser endomicroscopy (CLE) images. The proposed solution outperformed the current system, improving classification accuracy by 5-5.5% on average and reducing average processing time by 20-30 milliseconds. Conclusion: The proposed system focuses on the accurate classification of oral cancer cells from different anatomical locations in CLE images. The autoencoder-based approach mitigates overfitting while improving both accuracy and processing time.