New articles on Computer Science


[1] 2409.09039

AutoGeo: Automating Geometric Image Dataset Creation for Enhanced Geometry Understanding

With the rapid advancement of large language models, there has been a growing interest in their capabilities in mathematical reasoning. However, existing research has primarily focused on text-based algebra problems, neglecting the study of geometry due to the lack of high-quality geometric datasets. To address this gap, this paper introduces AutoGeo, a novel approach for automatically generating mathematical geometric images to fulfill the demand for large-scale and diverse geometric datasets. AutoGeo facilitates the creation of AutoGeo-100k, an extensive repository comprising 100k high-quality geometry image-text pairs. By leveraging precisely defined geometric clauses, AutoGeo-100k contains a wide variety of geometric shapes, including lines, polygons, circles, and complex spatial relationships. Furthermore, this paper demonstrates the efficacy of AutoGeo-100k in enhancing the performance of multimodal large language models through fine-tuning. Experimental results indicate significant improvements in the model's ability to handle geometric images, as evidenced by enhanced accuracy in tasks such as geometric captioning and mathematical reasoning. This research not only fills a critical gap in the availability of geometric datasets but also paves the way for the advancement of sophisticated AI-driven tools in education and research. Project page: https://autogeo-official.github.io/.


[2] 2409.09040

ChatSUMO: Large Language Model for Automating Traffic Scenario Generation in Simulation of Urban MObility

Large Language Models (LLMs), capable of handling multi-modal input and outputs such as text, voice, images, and video, are transforming the way we process information. Beyond just generating textual responses to prompts, they can integrate with different software platforms to offer comprehensive solutions across diverse applications. In this paper, we present ChatSUMO, an LLM-based agent that integrates language processing skills to generate abstract and real-world simulation scenarios in the widely-used traffic simulator - Simulation of Urban MObility (SUMO). Our methodology begins by leveraging the LLM to convert user input into the relevant keywords needed to run Python scripts. These scripts are designed to convert specified regions into coordinates, fetch data from OpenStreetMap, transform it into a road network, and subsequently run SUMO simulations with the designated traffic conditions. The outputs of the simulations are then interpreted by the LLM, resulting in informative comparisons and summaries. Users can continue the interaction and generate a variety of customized scenarios without prior traffic simulation expertise. For simulation generation, we created a real-world simulation for the city of Albany with an accuracy of 96%. ChatSUMO also effectively supports user-driven customization, including edge editing, traffic light optimization, and vehicle editing.
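For readers unfamiliar with SUMO's tooling, the scripted portion of such a pipeline can be sketched in a few lines; the bounding box, file names, and the omission of route generation below are illustrative assumptions, not the authors' implementation:

```python
# Minimal sketch of an OSM-to-SUMO pipeline of the kind the abstract
# describes: fetch OpenStreetMap data for a bounding box, convert it to
# a SUMO road network with netconvert, and launch a simulation.
import subprocess
import urllib.request

def build_and_run(bbox: str, osm_file: str = "region.osm",
                  net_file: str = "region.net.xml") -> None:
    # 1. Fetch raw OSM data (bbox = "left,bottom,right,top").
    url = f"https://api.openstreetmap.org/api/0.6/map?bbox={bbox}"
    urllib.request.urlretrieve(url, osm_file)
    # 2. Transform the OSM extract into a SUMO road network.
    subprocess.run(["netconvert", "--osm-files", osm_file, "-o", net_file],
                   check=True)
    # 3. Run SUMO on the generated network (routes omitted for brevity;
    #    traffic conditions would come from the LLM-selected keywords,
    #    e.g. via SUMO's randomTrips.py tool).
    subprocess.run(["sumo", "-n", net_file], check=True)

build_and_run(bbox="-73.78,42.64,-73.74,42.67")  # roughly central Albany, NY
```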


[3] 2409.09041

Acceptable Use Policies for Foundation Models

As foundation models have accumulated hundreds of millions of users, developers have begun to take steps to prevent harmful types of uses. One salient intervention that foundation model developers adopt is acceptable use policies: legally binding policies that prohibit users from using a model for specific purposes. This paper identifies acceptable use policies from 30 foundation model developers, analyzes the use restrictions they contain, and argues that acceptable use policies are an important lens for understanding the regulation of foundation models. Taken together, developers' acceptable use policies include 127 distinct use restrictions; the wide variety in the number and type of use restrictions may create fragmentation across the AI supply chain. Developers also employ acceptable use policies to prevent competitors or specific industries from making use of their models. Developers alone decide what constitutes acceptable use, and rarely provide transparency about how they enforce their policies. In practice, acceptable use policies are difficult to enforce, and scrupulous enforcement can act as a barrier to researcher access and limit beneficial uses of foundation models. Nevertheless, acceptable use policies for foundation models are an early example of self-regulation that has a significant impact on the market for foundation models and the overall AI ecosystem.


[4] 2409.09042

Semantic Communication for Cooperative Perception using HARQ

Cooperative perception, offering a wider field of view than standalone perception, is becoming increasingly crucial in autonomous driving. This perception is enabled through vehicle-to-vehicle (V2V) communication, allowing connected automated vehicles (CAVs) to exchange sensor data, such as light detection and ranging (LiDAR) point clouds, thereby enhancing the collective understanding of the environment. In this paper, we leverage an importance map to distill critical semantic information, introducing a cooperative perception semantic communication framework that employs intermediate fusion. To counter the challenges posed by time-varying multipath fading, our approach incorporates the use of orthogonal frequency-division multiplexing (OFDM) along with channel estimation and equalization strategies. Furthermore, recognizing the necessity for reliable transmission, especially in low-SNR scenarios, we introduce a novel semantic error detection method that is integrated with our semantic communication framework in the spirit of hybrid automatic repeat request (HARQ). Simulation results show that our model surpasses the traditional separate source-channel coding methods in perception performance, both with and without HARQ. Additionally, in terms of throughput, our proposed HARQ schemes demonstrate superior efficiency to the conventional coding approaches.


[5] 2409.09043

Strengthening Interpretability: An Investigative Study of Integrated Gradient Methods

We conducted a reproducibility study on Integrated Gradients (IG) based methods and the Important Direction Gradient Integration (IDGI) framework. IDGI eliminates the explanation noise in each step of the computation of IG-based methods that use Riemann integration for integrated gradient computation. We perform a rigorous theoretical analysis of IDGI and raise a few critical questions that we later address through our study. We also experimentally verify the authors' claims concerning the performance of IDGI over IG-based methods. Additionally, we varied the number of steps used in the Riemann approximation, an essential parameter in all IG methods, and analyzed the corresponding change in results. We also studied the numerical instability of the attribution methods to check the consistency of the saliency maps produced. Since the available code was insufficient for this study, we developed complete code to implement IDGI over the baseline IG methods and evaluated them using three metrics.
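For context, the Riemann-sum approximation that all IG-based methods share, and whose step count the study varies, can be sketched as follows (a generic PyTorch sketch, not the reproduced codebase; `model` is any differentiable classifier):

```python
import torch

def integrated_gradients(model, x, baseline, target, steps=50):
    """Riemann-sum approximation of IG along the straight-line path."""
    total_grad = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps):
        point = (baseline + alpha * (x - baseline)).requires_grad_(True)
        score = model(point.unsqueeze(0))[0, target]
        grad, = torch.autograd.grad(score, point)
        total_grad += grad
    # Average path gradient, scaled by the input-baseline difference.
    return (x - baseline) * total_grad / steps
```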


[6] 2409.09044

ElasticAI: Creating and Deploying Energy-Efficient Deep Learning Accelerator for Pervasive Computing

Deploying Deep Learning (DL) on embedded end devices is a rapidly growing trend in pervasive computing. Since most microcontrollers on embedded devices have limited computing power, it is necessary to add a DL accelerator. Embedded Field Programmable Gate Arrays (FPGAs) are suitable for deploying DL accelerators for embedded devices, but developing an energy-efficient DL accelerator on an FPGA is not easy. Therefore, we propose the ElasticAI-Workflow, which aims to help DL developers create and deploy DL models as hardware accelerators on embedded FPGAs. This workflow consists of two key components: the ElasticAI-Creator and the Elastic Node. The former is a toolchain for automatically generating DL accelerators on FPGAs. The latter is a hardware platform for verifying the performance of the generated accelerators. With this combination, the performance of the accelerator can be sufficiently guaranteed. We will demonstrate the potential of our approach through a case study.


[7] 2409.09045

United in Diversity? Contextual Biases in LLM-Based Predictions of the 2024 European Parliament Elections

Large language models (LLMs) are perceived by some as having the potential to revolutionize social science research, considering their training data includes information on human attitudes and behavior. If these attitudes are reflected in LLM output, LLM-generated "synthetic samples" could be used as a viable and efficient alternative to surveys of real humans. However, LLM-synthetic samples might exhibit coverage bias due to training data and fine-tuning processes being unrepresentative of diverse linguistic, social, political, and digital contexts. In this study, we examine to what extent LLM-based predictions of public opinion exhibit context-dependent biases by predicting voting behavior in the 2024 European Parliament elections using a state-of-the-art LLM. We prompt GPT-4-Turbo with anonymized individual-level background information, varying prompt content and language, ask the LLM to predict each person's voting behavior, and compare the weighted aggregates to the real election results. Our findings emphasize the limited applicability of LLM-synthetic samples to public opinion prediction. We show that (1) the LLM-based prediction of future voting behavior largely fails, (2) prediction accuracy is unequally distributed across national and linguistic contexts, and (3) improving LLM predictions requires detailed attitudinal information about individuals for prompting. In investigating the contextual differences of LLM-based predictions of public opinion, our research contributes to the understanding and mitigation of biases and inequalities in the development of LLMs and their applications in computational social science.


[8] 2409.09046

HyPA-RAG: A Hybrid Parameter Adaptive Retrieval-Augmented Generation System for AI Legal and Policy Applications

While Large Language Models (LLMs) excel in text generation and question-answering, their effectiveness in AI legal and policy applications is limited by outdated knowledge, hallucinations, and inadequate reasoning in complex contexts. Retrieval-Augmented Generation (RAG) systems improve response accuracy by integrating external knowledge but struggle with retrieval errors, poor context integration, and high costs, particularly in interpreting qualitative and quantitative AI legal texts. This paper introduces a Hybrid Parameter-Adaptive RAG (HyPA-RAG) system tailored for AI legal and policy applications, exemplified by NYC Local Law 144 (LL144). HyPA-RAG uses a query complexity classifier for adaptive parameter tuning, a hybrid retrieval strategy combining dense, sparse, and knowledge graph methods, and an evaluation framework with specific question types and metrics. By dynamically adjusting parameters, HyPA-RAG significantly improves retrieval accuracy and response fidelity. Testing on LL144 shows enhanced correctness, faithfulness, and contextual precision, addressing the need for adaptable NLP systems in complex, high-stakes AI legal and policy applications.
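The parameter-adaptive hybrid retrieval idea can be illustrated schematically: dense and sparse scores are normalized and fused with a mixing weight that a query-complexity classifier would choose per query. The weights, scores, and two-regime setup below are illustrative assumptions, not the paper's components:

```python
import numpy as np

def hybrid_scores(dense: np.ndarray, sparse: np.ndarray, alpha: float) -> np.ndarray:
    # Min-max normalize each score list so they are comparable, then mix.
    norm = lambda s: (s - s.min()) / (s.max() - s.min() + 1e-9)
    return alpha * norm(dense) + (1.0 - alpha) * norm(sparse)

dense = np.array([0.82, 0.75, 0.40])   # e.g. embedding cosine similarities
sparse = np.array([12.1, 3.4, 9.8])    # e.g. BM25 scores
print(hybrid_scores(dense, sparse, alpha=0.7))  # "complex" query: lean dense
print(hybrid_scores(dense, sparse, alpha=0.3))  # "simple" query: lean sparse
```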


[9] 2409.09047

AI Meets the Classroom: When Does ChatGPT Harm Learning?

In this paper, we study how generative AI, and specifically large language models (LLMs), impact learning in coding classes. We show across three studies that LLM usage can have positive and negative effects on learning outcomes. Using observational data from university-level programming courses, we establish such effects in the field. We replicate these findings in subsequent experimental studies, which closely resemble typical learning scenarios, to show causality. We find evidence for two contrasting mechanisms that determine the overall effect of LLM usage on learning. Students who use LLMs as personal tutors by conversing about the topic and asking for explanations benefit from usage. However, learning is impaired for students who excessively rely on LLMs to solve practice exercises for them and thus do not invest sufficient mental effort of their own. Those who have never used LLMs before are particularly prone to such adverse behavior. Students without prior domain knowledge gain more from having access to LLMs. Finally, we show that the self-perceived benefits of using LLMs for learning exceed the actual benefits, potentially resulting in an overestimation of one's own abilities. Overall, our findings show the promising potential of LLMs as learning support, but also that students have to be very cautious of possible pitfalls.


[10] 2409.09054

Evaluating the Performance of Large Language Models in Competitive Programming: A Multi-Year, Multi-Grade Analysis

This study explores the performance of large language models (LLMs) in solving competitive programming problems from the Romanian Informatics Olympiad at the county level. Romania, a leading nation in computer science competitions, provides an ideal environment for evaluating LLM capabilities due to its rich history and stringent competition standards. We collected and analyzed a dataset comprising 304 challenges from 2002 to 2023, focusing on solutions written by LLMs in C++ and Python for these problems. Our primary goal is to understand why LLMs perform well or poorly on different tasks. We evaluated various models, including closed-source models like GPT-4 and open-weight models such as CodeLlama and RoMistral, using a standardized process involving multiple attempts and feedback rounds. The analysis revealed significant variations in LLM performance across different grades and problem types. Notably, GPT-4 showed strong performance, indicating its potential use as an educational tool for middle school students. We also observed differences in code quality and style across various LLMs.


[11] 2409.09056

Identifying Factors to Help Improve Existing Decomposition-Based PMI Estimation Methods

Accurately assessing the postmortem interval (PMI) is an important task in forensic science. Some existing techniques use regression models that rely on a decomposition score to predict the PMI or accumulated degree days (ADD); however, the provided formulas are based on very small samples and their accuracy is low. With the advent of Big Data, much larger samples can be used to improve PMI estimation methods. We, therefore, aim to investigate ways to improve PMI prediction accuracy by (a) using a much larger sample size, (b) employing more advanced linear models, and (c) enhancing models with factors known to affect the human decay process. Specifically, this study involved the curation of a sample of 249 human subjects from a large-scale decomposition dataset, followed by evaluating pre-existing PMI/ADD formulas and fitting increasingly sophisticated models to estimate the PMI/ADD. Results showed that including the total decomposition score (TDS), demographic factors (age, biological sex, and BMI), and weather-related factors (season of discovery, temperature history, and humidity history) increased the accuracy of the PMI/ADD models. Furthermore, the best performing PMI estimation model using the TDS, demographic, and weather-related features as predictors resulted in an adjusted R-squared of 0.34 and an RMSE of 0.95. It had a 7% lower RMSE than a model using only the TDS to predict the PMI and a 48% lower RMSE than the pre-existing PMI formula. The best ADD estimation model, also using the TDS, demographic, and weather-related features as predictors, resulted in an adjusted R-squared of 0.52 and an RMSE of 0.89. It had an 11% lower RMSE than the model using only the TDS to predict the ADD and a 52% lower RMSE than the pre-existing ADD formula. This work demonstrates the need (and way) to incorporate demographic and environmental factors into PMI/ADD estimation models.
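The modelling recipe evaluated here follows a standard supervised-regression pattern; a minimal scikit-learn sketch is below, where the file and column names are hypothetical placeholders and the exact feature encodings differ from the study's:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("decomposition_sample.csv")  # hypothetical curated sample
features = ["tds", "age", "sex", "bmi", "season",
            "temperature_history", "humidity_history"]
X = pd.get_dummies(df[features], drop_first=True)  # encode categoricals
y = df["add"]                                      # or "pmi"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_train, y_train)
rmse = np.sqrt(mean_squared_error(y_test, model.predict(X_test)))
print(f"RMSE: {rmse:.2f}")
```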


[12] 2409.09058

Redefining Data-Centric Design: A New Approach with a Domain Model and Core Data Ontology for Computational Systems

This paper presents an innovative data-centric paradigm for designing computational systems by introducing a new informatics domain model. The proposed model moves away from the conventional node-centric framework and focuses on data-centric categorization, using a multimodal approach that incorporates objects, events, concepts, and actions. By drawing on interdisciplinary research and establishing a foundational ontology based on these core elements, the model promotes semantic consistency and secure data handling across distributed ecosystems. We also explore the implementation of this model as an OWL 2 ontology, discuss its potential applications, and outline its scalability and future directions for research. This work aims to serve as a foundational guide for system designers and data architects in developing more secure, interoperable, and scalable data systems.


[13] 2409.09059

SDP Synthesis of Distributionally Robust Backward Reachable Trees for Probabilistic Planning

The paper presents Maximal Ellipsoid Backward Reachable Trees (MAXELLIPSOID BRT), a multi-query algorithm for planning of dynamic systems under stochastic motion uncertainty and constraints on the control input. In contrast to existing probabilistic planning methods that grow a roadmap of distributions, our proposed method introduces a framework to construct a roadmap of ambiguity sets of distributions such that each edge in our proposed roadmap provides a feasible control sequence for a family of distributions at once, leading to efficient multi-query planning. Specifically, we construct a backward reachable tree of maximal-size ambiguity sets and the corresponding distributionally robust edge controllers. Experiments show that the computation of these sets of distributions, in a backwards fashion from the goal, leads to efficient planning at a fraction of the size of the roadmap required for state-of-the-art methods. The computation of these maximal ambiguity sets and edges is carried out via a convex semidefinite relaxation to a novel nonlinear program. We also formally prove a theorem on maximum coverage for a technique proposed in our prior work.


[14] 2409.09061

Eliminating Timing Anomalies in Scheduling Periodic Segmented Self-Suspending Tasks with Release Jitter

Ensuring timing guarantees for every individual task is critical in real-time systems. Even for periodic tasks, providing timing guarantees for tasks with segmented self-suspending behavior is challenging due to timing anomalies, i.e., situations where reducing the execution or suspension time of some jobs increases the response time of another job. The release jitter of tasks can add further complexity to the situation, affecting the predictability and timing guarantees of real-time systems. The existing worst-case response time analyses for sporadic self-suspending tasks are only over-approximations and lead to overly pessimistic results. In this work, we address timing anomalies without compromising the worst-case response time (WCRT) analysis when scheduling periodic segmented self-suspending tasks with release jitter. We propose two treatments: segment release time enforcement and segment priority modification, and prove their effectiveness in eliminating timing anomalies. Our evaluation demonstrates that the proposed treatments achieve higher acceptance ratios in terms of schedulability compared to state-of-the-art scheduling algorithms. Additionally, we implement the segment-level fixed-priority scheduling mechanism on RTEMS and verify the validity of our segment priority modification treatment. This work expands our previous conference publication at the 29th IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS 2023), which considers only periodic segmented self-suspending tasks without release jitter.


[15] 2409.09062

The ART of Sharing Points-to Analysis (Extended Abstract)

Data-flow analyses like points-to analysis can vastly improve the precision of other analyses, and help perform powerful code optimizations. However, whole-program points-to analysis of large programs tends to be expensive - both in terms of time and memory. Consequently, many compilers (both static and JIT) and program-analysis tools tend to employ faster - but more conservative - points-to analyses to improve usability. As an alternative to such trading of precision for performance, various techniques have been proposed to perform precise yet expensive fixed-point points-to analyses ahead of time in a static analyzer, store the results, and then transmit them to independent compilation/program-analysis stages that may need them. However, an underlying concern of safety affects all such techniques - can a compiler (or program-analysis tool) trust the points-to analysis results generated by another compiler/tool? In this work, we address this issue of trust, while keeping the issues of performance efficiency in mind. We propose ART: Analysis-results Representation Template - a novel scheme to efficiently and concisely encode results of flow-sensitive, context-insensitive points-to analysis computed by a static analyzer for use in any independent system that may benefit from such a highly precise points-to analysis. Our scheme has two components: (i) a producer that can statically perform expensive points-to analysis and encode the results concisely; and (ii) a consumer that, on receiving such encoded results, can regenerate the points-to analysis results encoded by the artwork if it is deemed safe. We demonstrate the usage of ART by implementing a producer (in Soot) and two consumers (in Soot and the Eclipse OpenJ9 JIT compiler). We evaluate our implementation over various benchmarks from the DaCapo and SPECjvm2008 suites.


[16] 2409.09063

TS-EoH: An Edge Server Task Scheduling Algorithm Based on Evolution of Heuristic

With the widespread adoption of 5G and Internet of Things (IoT) technologies, the low latency provided by edge computing has great importance for real-time processing. However, managing numerous simultaneous service requests poses a significant challenge to maintaining low latency. Current edge server task scheduling methods often fail to balance multiple optimization goals effectively. This paper introduces a novel task-scheduling approach based on Evolutionary Computing (EC) theory and heuristic algorithms. We model service requests as task sequences and evaluate various scheduling schemes during each evolutionary process using Large Language Model (LLM) services. Experimental results show that our task-scheduling algorithm outperforms existing heuristic and traditional reinforcement learning methods. Additionally, we investigate the effects of different heuristic strategies and compare the evolutionary outcomes across various LLM services.


[17] 2409.09068

3D System Design: A Case for Building Customized Modular Systems in 3D

3D promises a new dimension in composing systems by aggregating chips. Literally. While the most common uses are still tightly connected with its early forms as a packaging technology, new application domains have been emerging. As the underlying technology continues to evolve, the unique leverages of 3D have become increasingly appealing to a larger range of applications: from embedded mobile applications to servers and memory systems. In this paper we focus on the system-level implications of 3D technology, trying to differentiate the unique advantages that it provides to different market segments and applications.


[18] 2409.09069

Temporal Many-valued Conditional Logics: a Preliminary Report

In this paper we propose a many-valued temporal conditional logic. We start from a many-valued logic with typicality, and extend it with the temporal operators of Linear Time Temporal Logic (LTL), thus providing a formalism which is able to capture the dynamics of a system, through strict and defeasible temporal properties. We also consider an instantiation of the formalism for gradual argumentation.


[19] 2409.09071

ELMS: Elasticized Large Language Models On Mobile Devices

On-device Large Language Models (LLMs) are revolutionizing mobile AI, enabling applications such as UI automation while addressing privacy concerns. Currently, the standard approach involves deploying a single, robust LLM as a universal solution for various applications, often referred to as LLM-as-a-Service (LLMaaS). However, this approach faces a significant system challenge: existing LLMs lack the flexibility to accommodate the diverse Service-Level Objectives (SLOs) regarding inference latency across different applications. To address this issue, we introduce ELMS, an on-device LLM service designed to provide elasticity in both the model and prompt dimensions of an LLMaaS. This system includes: (1) a one-time neuron reordering technique, which utilizes the inherent permutation consistency within transformer models to create high-quality, elastic sub-models with minimal runtime switching costs; and (2) a dual-head compact language model, which efficiently refines prompts and coordinates the elastic adaptation between the model and the prompt. We have implemented this elastic on-device LLM service on several commercial off-the-shelf (COTS) smartphones and evaluated ELMS using both standalone NLP/mobile-agent datasets and synthesized end-to-end traces. Across a range of SLOs, ELMS surpasses four strong baselines by up to 16.83% and 11.04% in absolute accuracy on average, with less than 1% Time-To-First-Token (TTFT) switching overhead, comparable memory usage, and fewer than 100 offline GPU hours.
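The permutation consistency that the neuron-reordering technique relies on is easy to verify on a toy feed-forward block: permuting the hidden units (rows of W1/b1 and matching columns of W2) leaves the output unchanged, so units sorted by importance can be truncated into elastic sub-models. The magnitude-based importance score below is a simplifying assumption, not the paper's criterion:

```python
import torch

def reorder_ffn(w1, b1, w2):
    # w1: (hidden, d_model), b1: (hidden,), w2: (d_model, hidden)
    importance = w1.abs().sum(dim=1) + w2.abs().sum(dim=0)  # proxy score
    order = torch.argsort(importance, descending=True)
    return w1[order], b1[order], w2[:, order]

d, h = 8, 32
w1, b1, w2 = torch.randn(h, d), torch.randn(h), torch.randn(d, h)
x = torch.randn(d)
full = w2 @ torch.relu(w1 @ x + b1)
w1r, b1r, w2r = reorder_ffn(w1, b1, w2)
assert torch.allclose(full, w2r @ torch.relu(w1r @ x + b1r), atol=1e-5)
# A prefix of the reordered hidden units acts as an elastic sub-model:
sub = w2r[:, :h // 2] @ torch.relu(w1r[:h // 2] @ x + b1r[:h // 2])
```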


[20] 2409.09072

Joint Model Assignment and Resource Allocation for Cost-Effective Mobile Generative Services

Artificial Intelligence Generated Content (AIGC) services can efficiently satisfy user-specified content creation demands, but the high computational requirements pose various challenges to supporting mobile users at scale. In this paper, we present our design of an edge-enabled AIGC service provisioning system to properly assign computing tasks of generative models to edge servers, thereby improving overall user experience and reducing content generation latency. Specifically, once the edge server receives user-requested task prompts, it dynamically assigns appropriate models and allocates computing resources based on features of each category of prompts. The generated contents are then delivered to users. The key to this system is a proposed probabilistic model assignment approach, which estimates the quality score of generated contents for each prompt based on category labels. Next, we introduce a heuristic algorithm that enables adaptive configuration of both generation steps and resource allocation, according to the various task requests received by each generative model on the edge. Simulation results demonstrate that the designed system can effectively enhance the quality of generated content by up to 4.7% while reducing response delay by up to 39.1% compared to benchmarks.


[21] 2409.09073

An Optimization Algorithm for Customer Topological Paths Identification in Electrical Distribution Networks

A customer topological path represents the sequence of network elements connecting an MV/LV transformer to a customer. Accurate knowledge of these paths is crucial for distribution system operators (DSOs) in digitalization, analysis, and network planning. This paper introduces an innovative approach to address the challenge of customer topological path identification (TPI) using only the limited and often inaccurate data available to DSOs. Specifically, our method relies only on geographic information system (GIS) data of network elements and information on customer connections to MV/LV transformers. We introduce an integer linear programming (ILP) optimization algorithm designed to identify customer topological paths that closely approximate the real electricity paths. The effectiveness of the proposed approach is demonstrated through its application to both an academic and a real-world electrical distribution network. Results show that the method effectively addresses data inaccuracies and successfully identifies customer topological paths, providing a valuable tool for DSOs in developing accurate digital twins of their distribution networks.
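To make the ILP flavor concrete, a toy sketch is shown below: choose exactly one candidate path per customer while minimizing a mismatch cost against the GIS geometry. The candidates and costs are made up, and a real formulation would add network-consistency constraints:

```python
import pulp

customers = ["c1", "c2"]
# candidate paths per customer with a hypothetical GIS-mismatch cost
candidates = {"c1": {"p1": 3.2, "p2": 1.1}, "c2": {"p3": 0.7, "p4": 2.5}}

prob = pulp.LpProblem("tpi", pulp.LpMinimize)
x = {(c, p): pulp.LpVariable(f"x_{c}_{p}", cat="Binary")
     for c in customers for p in candidates[c]}

# objective: total mismatch of the selected paths
prob += pulp.lpSum(cost * x[c, p]
                   for c in customers for p, cost in candidates[c].items())
# each customer is connected through exactly one path
for c in customers:
    prob += pulp.lpSum(x[c, p] for p in candidates[c]) == 1

prob.solve()
print({c: p for (c, p), v in x.items() if v.value() == 1})
```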


[22] 2409.09074

Fair Reinforcement Learning Algorithm for PV Active Control in LV Distribution Networks

The increasing adoption of distributed energy resources, particularly photovoltaic (PV) panels, has presented new and complex challenges for power network control. With significant energy production from PV panels, voltage violations in the network have become a problem. Currently, PV smart inverters (SIs) are used to mitigate voltage problems by controlling their active power generation and reactive power injection or absorption. However, reducing the active power output of PV panels can be perceived as unfair to some customers, discouraging future installations. To solve this issue, in this paper, a reinforcement learning technique is proposed to address voltage issues in a distribution network, while considering fairness in active power curtailment among customers. The feasibility of the proposed approach is explored through experiments, demonstrating its ability to effectively control voltage in a fair and efficient manner.


[23] 2409.09075

A Systematic Procedure for Topological Path Identification with Raw Data Transformation in Electrical Distribution Networks

This paper introduces a systematic approach to address the topological path identification (TPI) problem in power distribution networks. Our approach starts by listing the DSO's raw information coming from several sources. The raw information undergoes a transformation process using a set of transformation functions. This process converts the raw information into well-defined information exploitable by an algorithm. Then a set of hypothetical paths is generated, considering any potential connections between the elements of the power distribution system. This set of hypothetical paths is processed by the algorithm that identifies the hypothetical paths that are compatible with the well-defined information. This procedure operates iteratively, adapting the set of transformation functions based on the result obtained: if the identified paths fail to meet the DSO's expectations, new data is collected, and/or the transformation functions found to be responsible for the discrepancies are modified. The systematic procedure offers practical advantages for DSOs, including improved accuracy in path identification and high adaptability to diverse network configurations, even with incomplete or inaccurate data. Consequently, it emerges as a useful tool for the construction of digital twins of power distribution networks that aligns with DSO expectations.


[24] 2409.09076

A Dynamic Cooler Model for Cement Clinker Production

We present a 2D model for a grate belt cooler in the pyro-section of a cement plant. The model is formulated as an index-1 differential-algebraic equation (DAE) model based on first engineering principles. The model systematically integrates thermo-physical aspects, transport phenomena, reaction kinetics, mass and energy balances, and algebraic volume and energy relations. The model is used for dynamic simulation of the cooler, and the paper provides dynamic and steady-state simulation results matching the expected behavior. The cooler model is one part of a full pyro-section model for dynamical simulations. The model can serve as a basis for the design of optimization and control systems towards improving energy efficiency and reducing CO2 emissions.


[25] 2409.09079

D3-GNN: Dynamic Distributed Dataflow for Streaming Graph Neural Networks

Graph Neural Network (GNN) models on streaming graphs entail algorithmic challenges to continuously capture the graph's dynamic state, as well as systems challenges to optimize latency, memory, and throughput during both inference and training. We present D3-GNN, the first distributed, hybrid-parallel, streaming GNN system designed to handle real-time graph updates under an online query setting. Our system addresses data management, algorithmic, and systems challenges, enabling continuous capturing of the dynamic state of the graph and updating node representations with fault-tolerance and optimal latency, load-balance, and throughput. D3-GNN utilizes streaming GNN aggregators and an unrolled, distributed computation graph architecture to handle cascading graph updates. To counteract data skew and neighborhood explosion issues, we introduce inter-layer and intra-layer windowed forward pass solutions. Experiments on large-scale graph streams demonstrate that D3-GNN achieves high efficiency and scalability. Compared to DGL, D3-GNN achieves a significant throughput improvement of about 76x for streaming workloads. The windowed enhancement further reduces running times by around 10x and message volumes by up to 15x at higher parallelism.


[26] 2409.09080

Parallel Reduced Order Modeling for Digital Twins using High-Performance Computing Workflows

The integration of Reduced Order Models (ROMs) with High-Performance Computing (HPC) is critical for developing digital twins, particularly for real-time monitoring and predictive maintenance of industrial systems. This paper describes a comprehensive, HPC-enabled workflow for developing and deploying projection-based ROMs (PROMs). We use PyCOMPSs' parallel framework to efficiently execute ROM training simulations, employing parallel Singular Value Decomposition (SVD) algorithms such as randomized SVD, Lanczos SVD, and full SVD based on Tall-Skinny QR. In addition, we introduce a partitioned version of the hyper-reduction scheme known as the Empirical Cubature Method. Despite the widespread use of HPC for PROMs, there is a significant lack of publications detailing comprehensive workflows for building and deploying end-to-end PROMs in HPC environments. Our workflow is validated through a case study focusing on the thermal dynamics of a motor. The PROM is designed to deliver a real-time prognosis tool that could enable rapid and safe motor restarts post-emergency shutdowns under different operating conditions for further integration into digital twins or control systems. To facilitate deployment, we use the HPC Workflow as a Service strategy and Functional Mock-Up Units to ensure compatibility and ease of integration across HPC, edge, and cloud environments. The outcomes illustrate the efficacy of combining PROMs and HPC, establishing a precedent for scalable, real-time digital twin applications across multiple industries.
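One of the parallel SVD building blocks named above, randomized SVD, is compact enough to sketch; this is the textbook Halko-style range finder in plain NumPy, not the paper's PyCOMPSs-parallelized version, and the matrix sizes are illustrative:

```python
import numpy as np

def randomized_svd(A, rank, oversample=10):
    m, n = A.shape
    omega = np.random.standard_normal((n, rank + oversample))
    Q, _ = np.linalg.qr(A @ omega)        # orthonormal basis for range(A)
    B = Q.T @ A                           # small projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]

A = np.random.standard_normal((2000, 300)) @ np.random.standard_normal((300, 500))
U, s, Vt = randomized_svd(A, rank=50)
print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))  # relative residual
```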


[27] 2409.09082

Shadowed AHP for multi-criteria supplier selection

Numerous techniques of multi-criteria decision-making (MCDM) have been proposed in a variety of business domains. One of the well-known methods is the Analytical Hierarchical Process (AHP). Various uncertain numbers are commonly used to represent preference values in AHP problems. In the case of multi-granularity linguistic information, several methods have been proposed to address this type of AHP problem. This paper introduces a novel method to solve this problem using shadowed fuzzy numbers (SFNs), which can approximate different types of fuzzy numbers while preserving their uncertainty properties. The new Shadowed AHP method is proposed to handle preference values that are represented by multiple types of uncertain numbers. The new approach converts multi-granular preference values into a unified model of shadowed fuzzy numbers and utilizes their properties. A new ranking approach is introduced to order the results of the aggregated preferences. The new approach is applied to solve a supplier selection problem in which multi-granular information is used. The features of the new approach are significant for decision-making applications.


[28] 2409.09083

Distributed Convolutional Neural Network Training on Mobile and Edge Clusters

The training of deep and/or convolutional neural networks (DNNs/CNNs) is traditionally done on servers with powerful CPUs and GPUs. Recent efforts have emerged to localize machine learning tasks fully on the edge. This brings advantages in reduced latency and increased privacy, but necessitates working with resource-constrained devices. Approaches for inference and training in mobile and edge devices based on pruning, quantization, or incremental and transfer learning require trading off accuracy. Several works have explored distributing inference operations on mobile and edge clusters instead. However, there is limited literature on distributed training on the edge. Existing approaches all require a central, potentially powerful edge or cloud server for coordination or offloading. In this paper, we describe an approach for distributed CNN training exclusively on mobile and edge devices. Our approach is beneficial for the initial CNN layers that are feature-map dominated. It is based on partitioning forward inference and back-propagation operations among devices through tiling and fusing to maximize locality and expose communication and memory-aware parallelism. We also introduce the concept of layer grouping to further fine-tune performance based on computation and communication trade-offs. Results show that for a cluster of 2-6 quad-core Raspberry Pi 3 devices, training of an object-detection CNN provides a 2x-15x speedup with respect to a single core and up to 8x reduction in memory usage per device, all without sacrificing accuracy. Grouping offers up to 1.5x speedup depending on the reference profile and batch size.


[29] 2409.09084

Optimal Design of Vehicle Dynamics Using Gradient-Based, Mixed-Fidelity Multidisciplinary Optimization

In automotive engineering, designing for optimal vehicle dynamics is challenging due to the complexities involved in analysing the behaviour of a multibody system. Typically, a simplified set of dynamics equations for only the key bodies of the vehicle such as the chassis and wheels are formulated while reducing their degrees of freedom. In contrast, one could employ high-fidelity multibody dynamics simulation and include more intricate details such as the individual suspension components while considering full degrees of freedom for all bodies; however, this is more computationally demanding. Also, for gradient-based design optimization, computing adjoints for different objective functions can be more challenging for the latter approach, and often not feasible if an existing multibody dynamics solver is used. We propose a mixed-fidelity multidisciplinary approach, in which a simplified set of dynamics equations are used to model the whole vehicle while incorporating a high-fidelity multibody suspension module as an additional coupled discipline. We then employ MAUD (modular analysis and unified derivatives) to combine analytical derivatives based on the dynamics equations and finite differences obtained using an existing multibody solver. Also, we use a collocation method for time integration, which solves for both the system trajectory and optimal design variables simultaneously. The benefits of our approach are shown in an experiment conducted to find optimal vehicle parameters that optimize ride comfort and driving performance considering vertical vehicle dynamics.


[30] 2409.09085

HESSO: Towards Automatic Efficient and User Friendly Any Neural Network Training and Pruning

Structured pruning is one of the most popular approaches to effectively compress heavy deep neural networks (DNNs) into compact sub-networks while retaining performance. Existing methods suffer from multi-stage procedures along with significant engineering efforts and human expertise. The Only-Train-Once (OTO) series has been recently proposed to resolve many of these pain points by streamlining the workflow: automatically conducting (i) search space generation, (ii) structured sparse optimization, and (iii) sub-network construction. However, the built-in sparse optimizers in the OTO series, i.e., the Half-Space Projected Gradient (HSPG) family, have limitations: they require hyper-parameter tuning and offer only implicit control over sparsity exploration, and consequently require intervention by human expertise. To address these limitations, we propose the Hybrid Efficient Structured Sparse Optimizer (HESSO). HESSO can automatically and efficiently train a DNN to produce a high-performing subnetwork. Meanwhile, it is almost tuning-free and enjoys user-friendly integration for generic training applications. To address another common issue of irreversible performance collapse observed in pruning DNNs, we further propose the Corrective Redundant Identification Cycle (CRIC) for reliably identifying indispensable structures. We numerically demonstrate the efficacy of HESSO and its enhanced version HESSO-CRIC on a variety of applications ranging from computer vision to natural language processing, including large language models. The numerical results showcase that HESSO can achieve competitive or even superior performance compared to various state-of-the-art methods and supports most DNN architectures. Meanwhile, CRIC can effectively prevent irreversible performance collapse and further enhance the performance of HESSO on certain applications. The code is available at https://github.com/microsoft/only_train_once.


[31] 2409.09086

Inf-MLLM: Efficient Streaming Inference of Multimodal Large Language Models on a Single GPU

Multimodal Large Language Models (MLLMs) are distinguished by their comprehensive multimodal abilities and are widely used in many real-world applications, including GPT-4o, autonomous driving, and robotics. Despite their impressive performance, multimodal inputs always incur long contexts. Inference under long context requires caching massive Key and Value states (KV cache) of previous tokens, which introduces high latency and excessive memory consumption. For this reason, it is challenging to deploy streaming inference of MLLMs on edge devices, which largely constrains the power and usage of MLLMs in real-world applications. In this paper, we introduce Inf-MLLM, an efficient inference framework for MLLMs, which enables streaming inference of MLLMs on a single GPU with infinite context. Inf-MLLM is based on our key observation of the attention pattern in both LLMs and MLLMs called "attention saddles". Thanks to the newly discovered attention pattern, Inf-MLLM maintains a size-constrained KV cache by dynamically caching recent tokens and relevant tokens. Furthermore, Inf-MLLM proposes attention bias, a novel approach to enable MLLMs to capture long-term dependency. We show that Inf-MLLM enables multiple LLMs and MLLMs to achieve stable performance over 4M-token long texts and multi-round conversations with 1-hour-long videos on a single GPU. In addition, Inf-MLLM exhibits superior streaming reasoning quality compared to existing methods such as StreamingLLM, as well as a 2x speedup over H2O.
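The size-constrained cache policy can be sketched generically: keep the most recent tokens plus the highest-relevance older tokens and evict the rest. Scoring relevance by accumulated attention mass is an illustrative stand-in for the paper's attention-saddle-based selection:

```python
import torch

def evict(keys, values, attn_mass, budget, recent):
    # keys/values: (seq, ...) cached states; attn_mass: (seq,) accumulated
    # attention each cached token has received so far.
    seq = keys.shape[0]
    if seq <= budget:
        return keys, values, attn_mass
    old = seq - recent
    # Among older tokens, keep the (budget - recent) highest-mass ones.
    keep_old = torch.topk(attn_mass[:old], budget - recent).indices.sort().values
    keep = torch.cat([keep_old, torch.arange(old, seq)])
    return keys[keep], values[keep], attn_mass[keep]
```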


[32] 2409.09088

Y-Drop: A Conductance based Dropout for fully connected layers

In this work, we introduce Y-Drop, a regularization method that biases the dropout algorithm towards dropping more important neurons with higher probability. The backbone of our approach is neuron conductance, an interpretable measure of neuron importance that calculates the contribution of each neuron towards the end-to-end mapping of the network. We investigate the impact of the uniform dropout selection criterion on performance by assigning higher dropout probability to the more important units. We show that forcing the network to solve the task at hand in the absence of its important units yields a strong regularization effect. Further analysis indicates that Y-Drop yields solutions where more neurons are important, i.e., have high conductance, and yields robust networks. In our experiments we show that the regularization effect of Y-Drop scales better than vanilla dropout w.r.t. the architecture size and consistently yields superior performance over multiple datasets and architecture combinations, with little tuning.
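The core mechanism admits a compact sketch: replace dropout's uniform rate with per-unit probabilities that grow with an importance score, keeping the mean rate fixed. A plain activation-magnitude score stands in here for conductance, as a simplifying assumption:

```python
import torch

def importance_dropout(h, scores, p=0.5, training=True):
    if not training:
        return h
    # Higher importance -> higher drop probability; mean drop rate stays ~p.
    probs = (p * scores / scores.mean()).clamp(0.0, 1.0)
    mask = (torch.rand_like(h) >= probs).float()
    return h * mask / (1.0 - probs + 1e-8)   # inverted-dropout rescaling

h = torch.randn(4, 16).abs()
out = importance_dropout(h, scores=h.detach(), p=0.3)
```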


[33] 2409.09090

An Evaluation of GPT-4V for Transcribing the Urban Renewal Hand-Written Collection

Between 1960 and 1980, urban renewal transformed many cities, creating vast handwritten records. These documents posed a significant challenge for researchers due to their volume and handwritten nature. The launch of GPT-4V in November 2023 offered a breakthrough, enabling large-scale, efficient transcription and analysis of these historical urban renewal documents.


[34] 2409.09092

Data-driven Virtual Test-bed of the Blown Powder Directed Energy Deposition Process

Digital twins in manufacturing serve as a crucial bridge between the industrial age and the digital age, offering immense value. Current additive manufacturing processes are able to generate vast amounts of in-process data, which, when effectively ingested, can be transformed into insightful decisions. Data-driven methods from reduced order modeling and system identification are particularly promising in managing this data deluge. This study focuses on Laser Powder Directed Energy Deposition (LP-DED) equipped with in-situ process measurements to develop a compact virtual test-bed. This test-bed can accurately ingest arbitrary process inputs and report in-process observables as outputs. This virtual test-bed is derived using Dynamic Mode Decomposition with Control (DMDc) and is coupled with uncertainty quantification techniques to ensure robust predictions.
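Dynamic Mode Decomposition with Control fits the best linear operators relating consecutive snapshots and process inputs; a minimal dense sketch (without the usual rank truncation or the paper's uncertainty quantification) is:

```python
import numpy as np

def dmdc(X, Xp, U):
    """Fit x_{k+1} ~ A x_k + B u_k from snapshot matrices.

    X, Xp: (n_states, m) states at steps k and k+1; U: (n_inputs, m).
    """
    G = Xp @ np.linalg.pinv(np.vstack([X, U]))  # G = [A | B]
    n = X.shape[0]
    return G[:, :n], G[:, n:]

# toy verification: recover a known system from simulated data
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
B_true = np.array([[0.0], [1.0]])
X, U = np.random.randn(2, 200), np.random.randn(1, 200)
A, B = dmdc(X, A_true @ X + B_true @ U, U)
```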


[35] 2409.09093

Response Surface Methodology coupled with desirability functions for multi-objective optimization: minimizing indoor overheating hours and maximizing useful daylight illuminance

Response Surface Methodology (RSM) and desirability functions were employed in a case study to optimize the thermal and daylight performance of a computational model of a tropical housing typology. Specifically, this approach simultaneously optimized the Indoor Overheating Hours (IOH) and Useful Daylight Illuminance (UDI) metrics through an Overall Desirability (D). The lack of significant association between IOH and other annual daylight metrics enabled a focused optimization of IOH and UDI. Each response required only 138 simulation runs (~30 hours for 276 runs) to determine the optimal values for passive strategies: window-to-wall ratio (WWR) and roof overhang depth across four orientations, totalling eight factors. First, an initial screening based on a $2_V^{8-2}$ fractional factorial design identified four key factors using stepwise and Lasso regression, which were narrowed down to three: roof overhang depth on the south and west, WWR on the west, and WWR on the south. Then, RSM optimization yielded an optimal solution (roof overhang: 3.78 meters, west WWR: 3.76%, south WWR: 29.3%) with a D of 0.625 (IOH: 8.33%, UDI: 79.67%). Finally, robustness analysis with 1,000 bootstrap replications provided 95% confidence intervals for the optimal values. This study optimally balances thermal comfort and daylight with few experiments using a computationally efficient multi-objective approach.
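The Overall Desirability follows the standard Derringer-Suich construction: map each response onto [0, 1] (IOH smaller-is-better, UDI larger-is-better) and combine them with a geometric mean. The bounds in this sketch are illustrative assumptions, not the study's specification:

```python
import numpy as np

def desirability_min(y, low, high):   # smaller-is-better (e.g. IOH)
    return float(np.clip((high - y) / (high - low), 0.0, 1.0))

def desirability_max(y, low, high):   # larger-is-better (e.g. UDI)
    return float(np.clip((y - low) / (high - low), 0.0, 1.0))

d_ioh = desirability_min(8.33, low=0.0, high=25.0)
d_udi = desirability_max(79.67, low=50.0, high=100.0)
D = (d_ioh * d_udi) ** 0.5            # geometric mean of the two responses
print(round(D, 3))
```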


[36] 2409.09095

meds_reader: A fast and efficient EHR processing library

The growing demand for machine learning in healthcare requires processing increasingly large electronic health record (EHR) datasets, but existing pipelines are not computationally efficient or scalable. In this paper, we introduce meds_reader, an optimized Python package for efficient EHR data processing that is designed to take advantage of many intrinsic properties of EHR data for improved speed. We then demonstrate the benefits of meds_reader by reimplementing key components of two major EHR processing pipelines, achieving 10-100x improvements in memory, speed, and disk usage. The code for meds_reader can be found at https://github.com/som-shahlab/meds_reader.


[37] 2409.09098

AccentBox: Towards High-Fidelity Zero-Shot Accent Generation

While recent Zero-Shot Text-to-Speech (ZS-TTS) models have achieved high naturalness and speaker similarity, they fall short in accent fidelity and control. To address this issue, we propose zero-shot accent generation that unifies Foreign Accent Conversion (FAC), accented TTS, and ZS-TTS, with a novel two-stage pipeline. In the first stage, we achieve state-of-the-art (SOTA) results on Accent Identification (AID) with an F1 score of 0.56 on unseen speakers. In the second stage, we condition the ZS-TTS system on pretrained speaker-agnostic accent embeddings extracted by the AID model. The proposed system achieves higher accent fidelity on inherent/cross accent generation, and enables unseen accent generation.


[38] 2409.09099

S-STE: Continuous Pruning Function for Efficient 2:4 Sparse Pre-training

Training deep neural networks (DNNs) is costly. Fortunately, Nvidia Ampere and Hopper GPUs can execute matrix multiplications up to twice as fast as their dense equivalents by implementing 2:4 sparsity. However, previous STE-based 2:4 pre-training methods (e.g. STE with hard-thresholding, SR-STE) suffer from optimization difficulties because of the discontinuity of the pruning function. In this study, we comprehensively analyse the bottleneck of traditional N:M sparse training and recognize three drawbacks of this discontinuity: an incorrect descent direction, an inability to predict the amount of descent, and sparse-mask oscillation. In light of these findings, we propose S-STE, a simple yet powerful 2:4 training method that contains two parts: continuously projecting weights to be 2:4 sparse, and rescaling sparse weights with a per-tensor fixed scaling factor. Besides, we adopt minimum-variance unbiased estimation for the activation gradient and FP8 quantization for the whole process. Results show that our method surpasses previous 2:4 pre-training recipes and is comparable even with full-parameter models.
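For readers new to 2:4 sparsity, the two ingredients named in the abstract look as follows in their simplest hard form: keep the two largest-magnitude weights in every group of four, then rescale the sparse tensor with one per-tensor factor (a norm-preserving choice below, as an assumption; S-STE's actual projection is a continuous relaxation of this hard mask):

```python
import torch

def project_2to4(w):
    groups = w.reshape(-1, 4)
    idx = groups.abs().topk(2, dim=1).indices          # top-2 per group of 4
    mask = torch.zeros_like(groups).scatter_(1, idx, 1.0)
    sparse = (groups * mask).reshape_as(w)
    beta = w.norm() / sparse.norm()                    # per-tensor rescale
    return beta * sparse

w = torch.randn(8, 16)
w_sparse = project_2to4(w)
assert (w_sparse.reshape(-1, 4) != 0).sum(dim=1).max() <= 2  # 2:4 pattern
```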


[39] 2409.09102

Measurability and continuity of parametric low-rank approximation in Hilbert spaces: linear operators and random variables

We develop a unified theoretical framework for low-rank approximation techniques in parametric settings, where traditional methods like Singular Value Decomposition (SVD), Proper Orthogonal Decomposition (POD), and Principal Component Analysis (PCA) face significant challenges due to repeated queries. Applications include, e.g., the numerical treatment of parameter-dependent partial differential equations (PDEs), where operators vary with parameters, and the statistical analysis of longitudinal data, where complex measurements like audio signals and images are collected over time. Although the applied literature has introduced partial solutions through adaptive algorithms, these advancements lack a comprehensive mathematical foundation. As a result, key theoretical questions -- such as the existence and parametric regularity of optimal low-rank approximants -- remain inadequately addressed. Our goal is to bridge this gap between theory and practice by establishing a rigorous framework for parametric low-rank approximation under minimal assumptions, specifically focusing on cases where parameterizations are either measurable or continuous. The analysis is carried out within the context of separable Hilbert spaces, ensuring applicability to both finite and infinite-dimensional settings. Finally, connections to recently emerging trends in the Deep Learning literature, relevant for engineering and data science, are also discussed.


[40] 2409.09104

Hybrid LSMR algorithms for large-scale general-form regularization

The hybrid LSMR algorithm is proposed for large-scale general-form regularization. It is based on a Krylov subspace projection method where the matrix $A$ is first projected onto a subspace, typically a Krylov subspace, which is implemented via the Golub-Kahan bidiagonalization process applied to $A$, with starting vector $b$. Then a regularization term is applied to the projections. Finally, an iterative algorithm is exploited to solve a least squares problem with constraints; the resulting algorithm is called the hybrid LSMR algorithm. At every step, we exploit the LSQR algorithm to solve the inner least squares problem, which is proven to become better conditioned as the iteration number $k$ increases, so that the LSQR algorithm converges faster. We show how to select the stopping tolerances for LSQR in order to guarantee that the regularized solution obtained by iteratively computing the inner least squares problems and the one obtained by exactly computing the inner least squares problems have the same accuracy. Numerical experiments illustrate that the best regularized solution by the hybrid LSMR algorithm is as accurate as that obtained by JBDQR, a joint bidiagonalization based algorithm.
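The Golub-Kahan bidiagonalization at the heart of such hybrid methods is short enough to sketch: $k$ steps with starting vector $b$ yield orthonormal $U$, $V$ and a lower-bidiagonal $B$ with $AV_k = U_{k+1}B_k$, after which regularization is applied to the small projected problem. Breakdown checks are omitted in this illustrative sketch:

```python
import numpy as np

def golub_kahan(A, b, k):
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
    U[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        v = A.T @ U[:, j] - (B[j, j - 1] * V[:, j - 1] if j > 0 else 0.0)
        B[j, j] = np.linalg.norm(v); V[:, j] = v / B[j, j]
        u = A @ V[:, j] - B[j, j] * U[:, j]
        B[j + 1, j] = np.linalg.norm(u); U[:, j + 1] = u / B[j + 1, j]
    return U, B, V

A, b = np.random.randn(50, 30), np.random.randn(50)
U, B, V = golub_kahan(A, b, k=10)
assert np.allclose(A @ V, U @ B)   # the fundamental GKB relation
```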


[41] 2409.09106

Recent Trends in Modelling the Continuous Time Series using Deep Learning: A Survey

Continuous-time series are essential for many modern application areas, e.g. healthcare, automobiles, energy, finance, the Internet of Things (IoT), and other related areas. Different applications need to process and analyse massive amounts of data in time series structure in order to determine data-driven results, for example, financial trend prediction, identifying the probability of the occurrence of a particular event, patient health record processing, and many more. However, modelling real-time data using a continuous-time series is challenging since the dynamical systems behind the data could be differential equations. Several research works have tried to solve the challenges of modelling the continuous-time series using different neural network models and approaches for data processing and learning. The existing deep learning models are not free from challenges and limitations due to diversity among different attributes, behaviour, duration of steps, energy, and data sampling rate. This paper describes the general problem domain of time series and reviews the challenges of modelling continuous time series. We present a comparative analysis of recent developments in deep learning models and their contributions to solving the difficulties of modelling continuous time series. We also identify the limitations of existing neural network models and open issues. The main goal of this review is to understand the recent trends in neural network models used in different real-world applications with continuous-time data.


[42] 2409.09107

Proactive and Reactive Constraint Programming for Stochastic Project Scheduling with Maximal Time-Lags

This study investigates scheduling strategies for the stochastic resource-constrained project scheduling problem with maximal time lags (SRCPSP/max). Recent advances in Constraint Programming (CP) and Temporal Networks have renewed interest in evaluating the advantages and drawbacks of various proactive and reactive scheduling methods. First, we present a new, CP-based fully proactive method. Second, we show how a reactive approach can be constructed using an online rescheduling procedure. A third contribution is based on partial order schedules and uses Simple Temporal Networks with Uncertainty (STNUs). Our statistical analysis shows that the STNU-based algorithm performs best in terms of solution quality, while also showing good relative offline and online computation time.


[43] 2409.09108

Trimming the Risk: Towards Reliable Continuous Training for Deep Learning Inspection Systems

The industry increasingly relies on deep learning (DL) technology for manufacturing inspections, which are challenging to automate with rule-based machine vision algorithms. DL-powered inspection systems derive defect patterns from labeled images, combining human-like agility with the consistency of a computerized system. However, finite labeled datasets often fail to encompass all natural variations, necessitating Continuous Training (CT) to regularly adjust their models with recent data. Effective CT requires fresh labeled samples from the original distribution; otherwise, self-generated labels can lead to silent performance degradation. To mitigate this risk, we develop a robust CT-based maintenance approach that updates DL models using reliable data selections through a two-stage filtering process. The initial stage filters out low-confidence predictions, as the model inherently discredits them. The second stage uses variational auto-encoders and histograms to generate image embeddings that capture latent and pixel characteristics, then rejects the inputs of substantially shifted embeddings as drifted data with erroneous overconfidence. The original DL model is then fine-tuned on the filtered inputs while validating on a mixture of recent production and original datasets. This strategy mitigates catastrophic forgetting and ensures the model adapts effectively to new operational conditions. Evaluations on industrial inspection systems for popsicle stick prints and glass bottles using critical real-world datasets showed that less than 9% of erroneous self-labeled data are retained after filtering and used for fine-tuning, improving model performance on production data by up to 14% without compromising its results on original validation data.
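The two-stage selection logic can be sketched as a single filter over a batch: stage one drops low-confidence predictions, stage two drops inputs whose embeddings drift far from the reference distribution. The Mahalanobis-style distance below is an illustrative stand-in for the paper's VAE/histogram embeddings:

```python
import numpy as np

def filter_batch(probs, embeds, ref_mean, ref_cov_inv,
                 conf_thr=0.9, dist_thr=3.0):
    conf_ok = probs.max(axis=1) >= conf_thr              # stage 1: confidence
    diff = embeds - ref_mean
    dist = np.sqrt(np.einsum("ij,jk,ik->i", diff, ref_cov_inv, diff))
    drift_ok = dist <= dist_thr                          # stage 2: drift check
    return conf_ok & drift_ok    # mask of samples kept for fine-tuning
```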


[44] 2409.09111

Neural Message Passing Induced by Energy-Constrained Diffusion

Learning representations for structured data with certain geometries (observed or unobserved) is a fundamental challenge, wherein message passing neural networks (MPNNs) have become a de facto class of model solutions. In this paper, we propose an energy-constrained diffusion model as a principled, interpretable framework for understanding the mechanism of MPNNs and navigating novel architectural designs. The model, inspired by physical systems, combines the inductive bias of diffusion on manifolds with layer-wise constraints of energy minimization. As shown by our analysis, the diffusion operators have a one-to-one correspondence with the energy functions implicitly descended by the diffusion process, and the finite-difference iteration for solving the energy-constrained diffusion system induces the propagation layers of various types of MPNNs operated on observed or latent structures. On top of these findings, we devise a new class of neural message passing models, dubbed diffusion-inspired Transformers, whose global attention layers are induced by the principled energy-constrained diffusion. Across diverse datasets ranging from real-world networks to images and physical particles, we show that the new model can yield promising performance for cases where the data structures are observed (as a graph), partially observed or completely unobserved.
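
To see the correspondence in its simplest form, one explicit finite-difference step of graph diffusion already has the shape of a propagation layer. The sketch below (toy graph and step size are our choices) iterates $x \leftarrow x + \tau(Px - x)$ with a random-walk-normalized adjacency $P$:

    import numpy as np

    A = np.array([[0, 1, 1],
                  [1, 0, 0],
                  [1, 0, 0]], dtype=float)       # adjacency of a toy graph
    P = A / A.sum(axis=1, keepdims=True)          # random-walk normalization
    x = np.array([[1.0], [0.0], [0.0]])           # node features
    tau = 0.5                                     # diffusion step size

    for _ in range(3):
        x = x + tau * (P @ x - x)                 # one "propagation layer"
    print(x.ravel())  # features smooth toward the graph's stationary pattern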


[45] 2409.09130

FAST: Boosting Uncertainty-based Test Prioritization Methods for Neural Networks via Feature Selection

Due to the vast testing space, the increasing demand for effective and efficient testing of deep neural networks (DNNs) has led to the development of various DNN test case prioritization techniques. However, the fact that DNNs can deliver high-confidence predictions for incorrectly predicted examples, known as the over-confidence problem, causes these methods to fail to reveal high-confidence errors. To address this limitation, in this work, we propose FAST, a method that boosts existing prioritization methods through guided FeAture SelecTion. FAST is based on the insight that certain features may introduce noise that affects the model's output confidence, thereby contributing to high-confidence errors. It quantifies the importance of each feature for the model's correct predictions, and then dynamically prunes the information from the noisy features during inference to derive a new probability vector for the uncertainty estimation. With the help of FAST, the high-confidence errors and correctly classified examples become more distinguishable, resulting in higher APFD (Average Percentage of Fault Detection) values for test prioritization, and higher generalization ability for model enhancement. We conduct extensive experiments to evaluate FAST across a diverse set of model structures on multiple benchmark datasets to validate the effectiveness, efficiency, and scalability of FAST compared to the state-of-the-art prioritization techniques.
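
A hedged sketch of the core mechanism (the linear head, the importance scores, and the threshold are illustrative assumptions, not FAST's exact procedure): pruning a noisy-but-dominant feature at inference and recomputing the softmax deflates the spurious confidence.

    import numpy as np

    def softmax(z):
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()

    W = np.array([[2.0, -1.0, 1.0],
                  [-2.0, 1.0, -1.0]])     # class weights over 3 features
    feat = np.array([0.5, 0.2, 5.0])      # feature 2 is noisy but dominant
    importance = np.array([0.9, 0.8, 0.1])  # e.g. contribution to correct preds

    p_raw = softmax(W @ feat)                          # over-confident
    p_pruned = softmax(W @ (feat * (importance > 0.5)))  # noisy feature removed
    print(p_raw, p_pruned)  # pruning deflates the spurious confidence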


[46] 2409.09135

Multimodal Fusion with LLMs for Engagement Prediction in Natural Conversation

Over the past decade, wearable computing devices (``smart glasses'') have undergone remarkable advancements in sensor technology, design, and processing power, ushering in a new era of opportunity for high-density human behavior data. Equipped with wearable cameras, these glasses offer a unique opportunity to analyze non-verbal behavior in natural settings as individuals interact. Our focus lies in predicting engagement in dyadic interactions by scrutinizing verbal and non-verbal cues, aiming to detect signs of disinterest or confusion. Leveraging such analyses may revolutionize our understanding of human communication, foster more effective collaboration in professional environments, provide better mental health support through empathetic virtual interactions, and enhance accessibility for those with communication barriers. In this work, we collect a dataset featuring 34 participants engaged in casual dyadic conversations, each providing self-reported engagement ratings at the end of each conversation. We introduce a novel fusion strategy using Large Language Models (LLMs) to integrate multiple behavior modalities into a ``multimodal transcript'' that can be processed by an LLM for behavioral reasoning tasks. Remarkably, this method achieves performance comparable to established fusion techniques even in its preliminary implementation, indicating strong potential for further research and optimization. This fusion method is one of the first to approach ``reasoning'' about real-world human behavior through a language model. Smart glasses provide us the ability to unobtrusively gather high-density multimodal data on human behavior, paving the way for new approaches to understanding and improving human communication with the potential for important societal benefits. The features and data collected during the studies will be made publicly available to promote further research.
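
As a hedged illustration of what such a ``multimodal transcript'' might look like (the field names and format are our assumptions, not the paper's exact schema), behavior streams can be rendered as time-aligned text annotations so that a single LLM prompt reasons over all modalities at once:

    # Illustrative multimodal transcript: each behavioral event becomes a
    # time-stamped, modality-tagged line of text for the LLM to reason over.
    events = [
        (0.0, "speech", "A: So how was the conference?"),
        (1.2, "gaze",   "B looks away from A"),
        (2.0, "speech", "B: Um... it was fine, I guess."),
        (2.1, "face",   "B: brow furrowed"),
    ]
    transcript = "\n".join(f"[{t:5.1f}s] ({mod}) {desc}" for t, mod, desc in events)
    prompt = ("Rate B's engagement from 1 to 5 given this multimodal "
              "transcript:\n" + transcript)
    print(prompt)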


[47] 2409.09137

Robust optimal design of large-scale Bayesian nonlinear inverse problems

We consider robust optimal experimental design (ROED) for nonlinear Bayesian inverse problems governed by partial differential equations (PDEs). An optimal design is one that maximizes some utility quantifying the quality of the solution of an inverse problem. However, the optimal design depends on elements of the inverse problem such as the simulation model, the prior, or the measurement error model. ROED aims to produce an optimal design that is aware of the additional uncertainties encoded in the inverse problem and remains optimal even under variations in them. We follow a worst-case scenario approach to develop a new framework for robust optimal design of nonlinear Bayesian inverse problems. The proposed framework a) is scalable and designed for infinite-dimensional Bayesian nonlinear inverse problems constrained by PDEs; b) develops efficient approximations of the utility, namely, the expected information gain; c) employs eigenvalue sensitivity techniques to develop analytical forms and efficient evaluation methods for the gradient of the utility with respect to the uncertainties we wish to be robust against; and d) employs a probabilistic optimization paradigm that properly defines and efficiently solves the resulting combinatorial max-min optimization problem. The effectiveness of the proposed approach is illustrated for an optimal sensor placement problem in an inverse problem governed by an elliptic PDE.
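
Schematically, and with notation we are assuming rather than quoting from the paper, the worst-case ROED problem pairs a max over designs $d$ with a min over the auxiliary uncertainties $\theta$, with the expected information gain as the utility:

\[
d^\star \;=\; \arg\max_{d \in \mathcal{D}} \; \min_{\theta \in \Theta} \; U(d;\theta),
\qquad
U(d;\theta) \;=\; \mathbb{E}_{y \mid d,\theta}\!\left[ D_{\mathrm{KL}}\!\left( \pi_{\mathrm{post}}(\cdot \mid y, d, \theta) \,\middle\|\, \pi_{\mathrm{prior}} \right) \right].
\]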


[48] 2409.09140

ResPilot: Teleoperated Finger Gaiting via Gaussian Process Residual Learning

Dexterous robot hand teleoperation allows for long-range transfer of human manipulation expertise, and could simultaneously provide a way for humans to teach these skills to robots. However, current methods struggle to reproduce the functional workspace of the human hand, often limiting them to simple grasping tasks. We present a novel method for finger-gaited manipulation with multi-fingered robot hands. Our method provides the operator enhanced flexibility in making contacts by expanding the reachable workspace of the robot hand through residual Gaussian Process learning. We also assist the operator in maintaining stable contacts with the object by allowing them to constrain fingertips of the hand to move in concert. Extensive quantitative evaluations show that our method significantly increases the reachable workspace of the robot hand and enables the completion of novel dexterous finger gaiting tasks. Project website: this http URL


[49] 2409.09141

Sequential infinite-dimensional Bayesian optimal experimental design with derivative-informed latent attention neural operator

In this work, we develop a new computational framework to solve sequential Bayesian experimental design (SBOED) problems constrained by large-scale partial differential equations with infinite-dimensional random parameters. We propose an adaptive terminal formulation of the optimality criteria for SBOED to achieve adaptive global optimality. We also establish an equivalent optimization formulation to achieve computational simplicity enabled by Laplace and low-rank approximations of the posterior. To accelerate the solution of the SBOED problem, we develop a derivative-informed latent attention neural operator (LANO), a new neural network surrogate model that leverages (1) derivative-informed dimension reduction for latent encoding, (2) an attention mechanism to capture the dynamics in the latent space, (3) an efficient training in the latent space augmented by projected Jacobian, which collectively lead to an efficient, accurate, and scalable surrogate in computing not only the parameter-to-observable (PtO) maps but also their Jacobians. We further develop the formulation for the computation of the MAP points, the eigenpairs, and the sampling from posterior by LANO in the reduced spaces and use these computations to solve the SBOED problem. We demonstrate the superior accuracy of LANO compared to two other neural architectures and the high accuracy of LANO compared to the finite element method (FEM) for the computation of MAP points in solving the SBOED problem with application to the experimental design of the time to take MRI images in monitoring tumor growth.


[50] 2409.09143

DomURLs_BERT: Pre-trained BERT-based Model for Malicious Domains and URLs Detection and Classification

Detecting and classifying suspicious or malicious domain names and URLs is a fundamental task in cybersecurity. To leverage such indicators of compromise, cybersecurity vendors and practitioners often maintain and update blacklists of known malicious domains and URLs. However, blacklists frequently fail to identify emerging and obfuscated threats. Over the past few decades, there has been significant interest in developing machine learning models that automatically detect malicious domains and URLs, addressing the limitations of blacklist maintenance and updating. In this paper, we introduce DomURLs_BERT, a pre-trained BERT-based encoder adapted for detecting and classifying suspicious/malicious domains and URLs. DomURLs_BERT is pre-trained using the Masked Language Modeling (MLM) objective on a large multilingual corpus of URLs, domain names, and Domain Generation Algorithm (DGA) datasets. In order to assess the performance of DomURLs_BERT, we have conducted experiments on several binary and multi-class classification tasks involving domain names and URLs, covering phishing, malware, DGA, and DNS tunneling. The evaluation results show that the proposed encoder outperforms state-of-the-art character-based deep learning models and cybersecurity-focused BERT models across multiple tasks and datasets. The pre-training dataset, the pre-trained DomURLs_BERT encoder, and the experiments source code are publicly available.
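
A hedged usage sketch with the Hugging Face Transformers API, assuming the released encoder is published with a classification head; the checkpoint id below is a placeholder, not a confirmed hub name:

    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    name = "DomURLs_BERT"  # placeholder id: substitute the released checkpoint
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name).eval()

    # Classify a suspicious-looking URL string.
    inputs = tok("paypal-secure-login.example.ru/verify", return_tensors="pt")
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1)
    print(probs)  # e.g. [p(benign), p(malicious)] for a binary head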


[51] 2409.09144

PrimeDepth: Efficient Monocular Depth Estimation with a Stable Diffusion Preimage

This work addresses the task of zero-shot monocular depth estimation. A recent advance in this field has been the idea of utilising Text-to-Image foundation models, such as Stable Diffusion. Foundation models provide a rich and generic image representation, and therefore, little training data is required to reformulate them as a depth estimation model that predicts highly-detailed depth maps and has good generalisation capabilities. However, the realisation of this idea has so far led to approaches which are, unfortunately, highly inefficient at test-time due to the underlying iterative denoising process. In this work, we propose a different realisation of this idea and present PrimeDepth, a method that is highly efficient at test time while keeping, or even enhancing, the positive aspects of diffusion-based approaches. Our key idea is to extract from Stable Diffusion a rich, but frozen, image representation by running a single denoising step. This representation, which we term the preimage, is then fed into a refiner network with an architectural inductive bias, before entering the downstream task. We validate experimentally that PrimeDepth is two orders of magnitude faster than the leading diffusion-based method, Marigold, while being more robust for challenging scenarios and quantitatively marginally superior. Thereby, we reduce the gap to the currently leading data-driven approach, Depth Anything, which is still quantitatively superior, but predicts less detailed depth maps and requires 20 times more labelled data. Due to the complementary nature of our approach, even a simple averaging between PrimeDepth and Depth Anything predictions can improve upon both methods and sets a new state-of-the-art in zero-shot monocular depth estimation. In the future, data-driven approaches may also benefit from integrating our preimage.


[52] 2409.09149

Adaptive Multi-Modal Control of Digital Human Hand Synthesis Using a Region-Aware Cycle Loss

Diffusion models have shown a remarkable ability to synthesize images, including the generation of humans in specific poses. However, current models face challenges in adequately expressing conditional control for detailed hand pose generation, leading to significant distortion in the hand regions. To tackle this problem, we first curate the How2Sign dataset to provide richer and more accurate hand pose annotations. In addition, we introduce adaptive, multi-modal fusion to integrate characters' physical features expressed in different modalities such as skeleton, depth, and surface normals. Furthermore, we propose a novel Region-Aware Cycle Loss (RACL) that enables the diffusion model training to focus on improving the hand region, resulting in improved quality of generated hand gestures. More specifically, the proposed RACL computes a weighted keypoint distance between the full-body pose keypoints of the generated image and the ground truth, to generate higher-quality hand poses while balancing overall pose accuracy. Moreover, we use two hand-region metrics, hand-PSNR and hand-Distance, to evaluate hand pose generation. Our experimental evaluations demonstrate the effectiveness of our proposed approach in improving the quality of digital human pose generation using diffusion models, especially the quality of the hand region. The source code is available at https://github.com/fuqifan/Region-Aware-Cycle-Loss.
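
A hedged sketch of the weighted keypoint distance at the heart of RACL (the weights, indices, and aggregation below are our illustrative choices, not the paper's exact formulation): hand keypoints receive a larger weight than body keypoints, so hand-region errors dominate the loss.

    import numpy as np

    def weighted_keypoint_distance(pred, gt, hand_idx, w_hand=5.0, w_body=1.0):
        w = np.full(len(gt), w_body)
        w[hand_idx] = w_hand                         # emphasize the hand region
        per_kp = np.linalg.norm(pred - gt, axis=1)   # Euclidean error per keypoint
        return float((w * per_kp).sum() / w.sum())

    gt = np.zeros((6, 2))                            # 6 keypoints, last 2 are hand
    pred = np.zeros((6, 2)); pred[5] = [0.2, 0.1]    # small error on a hand keypoint
    print(weighted_keypoint_distance(pred, gt, hand_idx=[4, 5]))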


[53] 2409.09152

Distributed Binary Optimization with In-Memory Computing: An Application for the SAT Problem

In-memory computing (IMC) has been shown to be a promising approach for solving binary optimization problems while significantly reducing energy and latency. Building on the advantages of parallel computation, we propose an IMC-compatible parallelism framework inspired by parallel tempering (PT), enabling cross-replica communication to improve the performance of IMC solvers. This framework enables an IMC solver not only to improve performance beyond what can be achieved through parallelization, but also affords greater flexibility for the search process with low hardware overhead. We justify that the framework can be applied to almost any IMC solver. We demonstrate the effectiveness of the framework for the Boolean satisfiability (SAT) problem, using the WalkSAT heuristic as a proxy for existing IMC solvers. The resulting PT-inspired cooperative WalkSAT (PTIC-WalkSAT) algorithm outperforms the traditional WalkSAT heuristic in terms of the iterations-to-solution in 76.3% of the tested problem instances and its na\"ive parallel variant (PA-WalkSAT) does so in 68.4% of the instances. An estimate of the energy overhead of the PTIC framework for two hardware accelerator architectures indicates that in both cases the overhead of running the PTIC framework would be less than 1% of the total energy required to run each accelerator.
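
For reference, the WalkSAT heuristic used here as a proxy solver repeatedly repairs a random unsatisfied clause, flipping either a random variable or the one that leaves the fewest clauses unsatisfied. A minimal Python rendering (parameterization is illustrative):

    import random

    # Clauses are lists of signed literals: 2 means x2, -2 means NOT x2.
    def walksat(clauses, n_vars, p=0.5, max_flips=10_000, seed=0):
        rng = random.Random(seed)
        assign = [rng.random() < 0.5 for _ in range(n_vars + 1)]  # index 0 unused
        sat = lambda lit: assign[abs(lit)] == (lit > 0)
        for _ in range(max_flips):
            unsat = [c for c in clauses if not any(sat(l) for l in c)]
            if not unsat:
                return assign[1:]
            clause = rng.choice(unsat)
            if rng.random() < p:                     # random-walk move
                var = abs(rng.choice(clause))
            else:                                    # greedy: minimize unsat count
                def cost(v):
                    assign[v] = not assign[v]
                    c = sum(not any(sat(l) for l in cl) for cl in clauses)
                    assign[v] = not assign[v]
                    return c
                var = min((abs(l) for l in clause), key=cost)
            assign[var] = not assign[var]
        return None                                  # no solution found in budget

    # (x1 or x2) and (not x1 or x2) and (x1 or not x2): unique solution x1=x2=True
    print(walksat([[1, 2], [-1, 2], [1, -2]], n_vars=2))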


[54] 2409.09154

Management and Visualization Tools for Emergency Medical Services

This paper describes an online tool for the visualization of medical emergency locations, randomly generated sample paths of medical emergencies, and the animation of ambulance movements under the control of various dispatch methods in response to these emergencies. The tool incorporates statistical models for forecasting emergency locations and call arrival times, the simulation of emergency arrivals and ambulance movement trajectories, and the computation and visualization of performance metrics such as ambulance response time distributions. Data for the Rio de Janeiro Emergency Medical Service are available on the website. A user can upload emergency data for any Emergency Medical Service, and can then use the visualization tool to explore the uploaded data. A user can also use the statistical tools and/or the simulation tool with any of the dispatch methods provided, and can then use the visualization tool to explore the computational output. Future enhancements include the ability of a user to embed additional dispatch algorithms into the simulation; the tool can then be used to visualize the simulation results obtained with the newly embedded algorithms.


[55] 2409.09155

Critical Thresholds for Maximum Cardinality Matching on General Hypergraphs

Significant work has been done on computing the ``average'' optimal solution value for various $\mathsf{NP}$-complete problems using the Erd\"{o}s-R\'{e}nyi model to establish \emph{critical thresholds}. Critical thresholds define narrow bounds for the optimal solution of a problem instance such that the probability that the solution value lies outside these bounds vanishes as the instance size approaches infinity. In this paper, we extend the Erd\"{o}s-R\'{e}nyi model to general hypergraphs on $n$ vertices and $M$ hyperedges. We consider the problem of determining critical thresholds for the largest cardinality matching, and we show that for $M=o(1.155^n)$ the size of the maximum cardinality matching is almost surely 1. On the other hand, if $M=\Theta(2^n)$ then the size of the maximum cardinality matching is $\Omega(n^{\frac12-\gamma})$ for an arbitrary $\gamma >0$. Lastly, we address the gap where $M=\Omega(1.155^n)$ and $M=o(2^n)$ empirically through computer simulations.
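
In the spirit of the empirical part of the study, a quick Monte Carlo sketch (our illustration; the paper's exact random-hypergraph model may differ) samples $M$ random hyperedges over $n$ vertices and greedily lower-bounds the maximum matching:

    import random

    def greedy_matching_size(n, M, rng):
        used, size = set(), 0
        for _ in range(M):
            k = rng.randint(2, 4)                     # small hyperedges (illustrative)
            edge = set(rng.sample(range(n), k))
            if used.isdisjoint(edge):                 # greedily add disjoint edges
                used |= edge
                size += 1
        return size                                   # lower bound on max matching

    rng = random.Random(1)
    n = 16
    for M in (10, 100, 1000):
        avg = sum(greedy_matching_size(n, M, rng) for _ in range(200)) / 200
        print(f"n={n}, M={M}: avg greedy matching ~ {avg:.2f}")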


[56] 2409.09164

Measure Preserving Flows for Ergodic Search in Convoluted Environments

Autonomous robotic search has important applications in robotics, such as the search for signs of life after a disaster. When \emph{a priori} information is available, for example in the form of a distribution, a planner can use that distribution to guide the search. Ergodic search is one method that uses the information distribution to generate a trajectory that minimizes the ergodic metric, in that it encourages the robot to spend more time in regions with high information and proportionally less time in the remaining regions. Unfortunately, prior works in ergodic search do not perform well in complex environments with obstacles such as a building's interior or a maze. To address this, our work presents a modified ergodic metric using the Laplace-Beltrami eigenfunctions to capture map geometry and obstacle locations within the ergodic metric. Further, we introduce an approach to generate trajectories that minimize the ergodic metric while guaranteeing obstacle avoidance using measure-preserving vector fields. Finally, we leverage the divergence-free nature of these vector fields to generate collision-free trajectories for multiple agents. We demonstrate our approach via simulations with single and multi-agent systems on maps representing interior hallways and long corridors with non-uniform information distribution. In particular, we illustrate the generation of feasible trajectories in complex environments where prior methods fail.
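
For reference, the standard Euclidean-domain ergodic metric that this work generalizes (per Mathew and Mezi\'c) compares the basis coefficients $c_k(t)$ of the trajectory's time-averaged distribution with those of the information distribution, $\phi_k$; the Laplace-Beltrami variant replaces the Fourier basis functions $F_k$ with the eigenfunctions mentioned above:

\[
\mathcal{E}(t) \;=\; \sum_{k} \Lambda_k \,\bigl| c_k(t) - \phi_k \bigr|^2,
\qquad
\Lambda_k = \bigl(1 + \lVert k \rVert^2\bigr)^{-\frac{d+1}{2}},
\qquad
c_k(t) = \frac{1}{t}\int_0^t F_k\bigl(x(\tau)\bigr)\,d\tau .
\]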


[57] 2409.09169

Curricula for Learning Robust Policies over Factored State Representations in Changing Environments

Robust policies enable reinforcement learning agents to effectively adapt to and operate in unpredictable, dynamic, and ever-changing real-world environments. Factored representations, which break down complex state and action spaces into distinct components, can improve generalization and sample efficiency in policy learning. In this paper, we explore how the curriculum of an agent using a factored state representation affects the robustness of the learned policy. We experimentally demonstrate three simple curricula (for example, varying only the variable of highest regret between episodes) that can significantly enhance policy robustness, offering practical insights for reinforcement learning in complex environments.


[58] 2409.09170

Towards Precision Characterization of Communication Disorders using Models of Perceived Pragmatic Similarity

The diagnosis and treatment of individuals with communication disorders offers many opportunities for the application of speech technology, but research so far has not adequately considered: the diversity of conditions, the role of pragmatic deficits, and the challenges of limited data. This paper explores how a general-purpose model of perceived pragmatic similarity may overcome these limitations. It explains how it might support several use cases for clinicians and clients, and presents evidence that a simple model can provide value, and in particular can capture utterance aspects that are relevant to diagnoses of autism and specific language impairment.


[59] 2409.09171

The Challenges of Effective AGM Belief Contraction

Despite the significant interest in extending the AGM paradigm of belief change beyond finitary logics, the computational aspects of AGM have remained almost untouched. We investigate the computability of AGM contraction on non-finitary logics, and show an intriguing negative result: there are infinitely many uncomputable AGM contraction functions in such logics. Strikingly, even if we restrict the theories used to represent epistemic states, the uncomputability remains in all non-trivial cases. On the positive side, we identify an infinite class of computable AGM contraction functions on Linear Temporal Logic (LTL). We use B\"uchi automata to construct such functions as well as to represent and reason about LTL knowledge.


[60] 2409.09174

Incorporation of Verifier Functionality in the Software for Operations and Network Attack Results Review and the Autonomous Penetration Testing System

The software for operations and network attack results review (SONARR) and the autonomous penetration testing system (APTS) use facts and common properties in digital twin networks to represent real-world entities. However, in some cases fact values will change regularly, making it difficult for objects in SONARR and APTS to consistently and accurately represent their real-world counterparts. This paper proposes and evaluates the addition of verifiers, which check real-world conditions and update network facts, to SONARR. This inclusion allows SONARR to retrieve fact values from its executing environment and update its network, providing a consistent method of ensuring that the operations and, therefore, the results align with the real-world systems being assessed. Verifiers allow arbitrary scripts and dynamic arguments to be added to normal SONARR operations. This provides a layer of flexibility and consistency that results in more reliable output from the software.


[61] 2409.09175

Cybersecurity Software Tool Evaluation Using a 'Perfect' Network Model

Cybersecurity software tool evaluation is difficult due to the inherently adversarial nature of the field. A penetration testing (or offensive) tool must be tested against a viable defensive adversary and a defensive tool must, similarly, be tested against a viable offensive adversary. Characterizing the tool's performance inherently depends on the quality of the adversary, which can vary from test to test. This paper proposes the use of a 'perfect' network, representing computing systems, a network and the attack pathways through it as a methodology to use for testing cybersecurity decision-making tools. This facilitates testing by providing a known and consistent standard for comparison. It also allows testing to include researcher-selected levels of error, noise and uncertainty to evaluate cybersecurity tools under these experimental conditions.


[62] 2409.09177

Transformer with Controlled Attention for Synchronous Motion Captioning

In this paper, we address a challenging task, synchronous motion captioning, which aims to generate a language description synchronized with human motion sequences. This task pertains to numerous applications, such as aligned sign language transcription, unsupervised action segmentation, and temporal grounding. Our method introduces mechanisms to control the self- and cross-attention distributions of the Transformer, allowing interpretability and time-aligned text generation. We achieve this through masking strategies and structuring losses that push the model to maximize attention only on the most important frames contributing to the generation of a motion word. These constraints aim to prevent undesired mixing of information in attention maps and to provide a monotonic attention distribution across tokens. Thus, the cross-attentions of tokens are used for progressive text generation in synchronization with human motion sequences. We demonstrate the superior performance of our approach through evaluation on the two available benchmark datasets, KIT-ML and HumanML3D. As visual evaluation is essential for this task, we provide a comprehensive set of animated visual illustrations in the code repository: https://github.com/rd20karim/Synch-Transformer.


[63] 2409.09183

Quantum-inspired Reinforcement Learning for Synthesizable Drug Design

Synthesizable molecular design (also known as synthesizable molecular optimization) is a fundamental problem in drug discovery, and involves designing novel molecular structures to improve their properties according to drug-relevant oracle functions (i.e., objectives) while ensuring synthetic feasibility. However, existing methods are mostly based on random search. To address this issue, we introduce a novel approach that uses reinforcement learning with a quantum-inspired simulated annealing policy neural network to navigate the vast discrete space of chemical structures intelligently. Specifically, we employ a deterministic REINFORCE algorithm with policy neural networks that output transition probabilities to guide state transitions, together with a local search using a genetic algorithm to refine solutions to a local optimum within each iteration. Our methods are evaluated with the Practical Molecular Optimization (PMO) benchmark framework with a 10K query budget. We further showcase the competitive performance of our method by comparing it against a state-of-the-art genetic-algorithm-based method.


[64] 2409.09184

Stability Margins of Neural Network Controllers

We present a method to train neural network controllers with guaranteed stability margins. The method is applicable to linear time-invariant plants interconnected with uncertainties and nonlinearities that are described by integral quadratic constraints. The type of stability margin we consider is the disk margin. Our training method alternates between a training step to maximize reward and a stability margin-enforcing step. In the stability margin enforcing-step, we solve a semidefinite program to project the controller into the set of controllers for which we can certify the desired disk margin.


[65] 2409.09186

Quantitative Insights into Language Model Usage and Trust in Academia: An Empirical Study

Language models (LMs) are revolutionizing knowledge retrieval and processing in academia. However, concerns regarding their misuse and erroneous outputs, such as hallucinations and fabrications, are reasons for distrust in LMs within academic communities. Consequently, there is a pressing need to deepen the understanding of how actual practitioners use and trust these models. There is a notable gap in quantitative evidence regarding the extent of LM usage, user trust in their outputs, and issues to prioritize for real-world development. This study addresses these gaps by providing data and analysis of LM usage and trust. Specifically, our study surveyed 125 individuals at a private school and secured 88 data points after pre-processing. Through both quantitative analysis and qualitative evidence, we found a significant variation in trust levels, which are strongly related to usage time and frequency. Additionally, we discover through a polling process that fact-checking is the most critical issue limiting usage. These findings inform several actionable insights: distrust can be overcome by providing exposure to the models, policies should be developed that prioritize fact-checking, and user trust can be enhanced by increasing engagement. By addressing these critical gaps, this research not only adds to the understanding of user experiences and trust in LMs but also informs the development of more effective LMs.


[66] 2409.09187

Matrix perturbation analysis of methods for extracting singular values from approximate singular subspaces

Given (orthonormal) approximations $\tilde{U}$ and $\tilde{V}$ to the left and right subspaces spanned by the leading singular vectors of a matrix $A$, we discuss methods to approximate the leading singular values of $A$ and study their accuracy. In particular, we focus our analysis on the generalized Nystr\"om approximation, as surprisingly, it is able to obtain significantly better accuracy than classical methods, namely Rayleigh-Ritz and (one-sided) projected SVD. A key idea of the analysis is to view the methods as finding the exact singular values of a perturbation of $A$. In this context, we derive a matrix perturbation result that exploits the structure of such $2\times2$ block matrix perturbation. Furthermore, we extend it to block tridiagonal matrices. We then obtain bounds on the accuracy of the extracted singular values. This leads to sharp bounds that predict well the approximation error trends and explain the difference in the behavior of these methods. Finally, we present an approach to derive an a-posteriori version of those bounds, which are more amenable to computation in practice.
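
A small numerical sketch of the comparison (the sizes, noise level, and perturbation model are our illustrative choices) contrasts the Rayleigh-Ritz values $\sigma(\tilde{U}^\top A \tilde{V})$ with the generalized Nystr\"om values, i.e. the singular values of $A\tilde{V}(\tilde{U}^\top A \tilde{V})^{+}\tilde{U}^\top A$:

    import numpy as np

    rng = np.random.default_rng(0)
    m = n = 200; r = 5
    A = (rng.normal(size=(m, r)) * 10) @ rng.normal(size=(r, n))
    A += 1e-3 * rng.normal(size=(m, n))              # small noise off the rank-r model

    U, s, Vt = np.linalg.svd(A)
    perturb = lambda Q: np.linalg.qr(Q + 1e-2 * rng.normal(size=Q.shape))[0]
    Ut, Vtld = perturb(U[:, :r]), perturb(Vt[:r].T)  # approximate singular subspaces

    rr = np.linalg.svd(Ut.T @ A @ Vtld, compute_uv=False)          # Rayleigh-Ritz
    core = np.linalg.pinv(Ut.T @ A @ Vtld)
    nys = np.linalg.svd(A @ Vtld @ core @ (Ut.T @ A), compute_uv=False)[:r]

    # Compare extraction errors; the paper analyzes why the generalized
    # Nystrom values are typically the more accurate of the two.
    print(np.abs(rr - s[:r]).max(), np.abs(nys - s[:r]).max())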


[67] 2409.09191

ProcessTBench: An LLM Plan Generation Dataset for Process Mining

Large Language Models (LLMs) have shown significant promise in plan generation. Yet, existing datasets often lack the complexity needed for advanced tool use scenarios - such as handling paraphrased query statements, supporting multiple languages, and managing actions that can be done in parallel. These scenarios are crucial for evaluating the evolving capabilities of LLMs in real-world applications. Moreover, current datasets don't enable the study of LLMs from a process perspective, particularly in scenarios where understanding typical behaviors and challenges in executing the same process under different conditions or formulations is crucial. To address these gaps, we present the ProcessTBench dataset, an extension of the TaskBench dataset specifically designed to evaluate LLMs within a process mining framework.


[68] 2409.09194

Hierarchical Hypercomplex Network for Multimodal Emotion Recognition

Emotion recognition is relevant in various domains, ranging from healthcare to human-computer interaction. Physiological signals, being beyond voluntary control, offer reliable information for this purpose, unlike speech and facial expressions, which can be controlled at will. They reflect genuine emotional responses, devoid of conscious manipulation, thereby enhancing the credibility of emotion recognition systems. Nonetheless, multimodal emotion recognition with deep learning models remains a relatively unexplored field. In this paper, we introduce a fully hypercomplex network with a hierarchical learning structure to fully capture correlations. Specifically, at the encoder level, the model learns intra-modal relations among the different channels of each input signal. Then, a hypercomplex fusion module learns inter-modal relations among the embeddings of the different modalities. The main novelty lies in exploiting intra-modal relations by endowing the encoders with parameterized hypercomplex convolutions (PHCs) which, thanks to hypercomplex algebra, can capture inter-channel interactions within single modalities. In turn, the fusion module comprises parameterized hypercomplex multiplications (PHMs) that can model inter-modal correlations. The proposed architecture surpasses state-of-the-art models on the MAHNOB-HCI dataset for emotion recognition, specifically in classifying valence and arousal from electroencephalograms (EEGs) and peripheral physiological signals. The code of this study is available at https://github.com/ispamm/MHyEEG.


[69] 2409.09196

Are Sparse Neural Networks Better Hard Sample Learners?

While deep learning has demonstrated impressive progress, it remains a daunting challenge to learn from hard samples as these samples are usually noisy and intricate. These hard samples play a crucial role in the optimal performance of deep neural networks. Most research on Sparse Neural Networks (SNNs) has focused on standard training data, leaving gaps in understanding their effectiveness on complex and challenging data. This paper's extensive investigation across scenarios reveals that most SNNs trained on challenging samples can often match or surpass dense models in accuracy at certain sparsity levels, especially with limited data. We observe that layer-wise density ratios tend to play an important role in SNN performance, particularly for methods that train from scratch without pre-trained initialization. These insights enhance our understanding of SNNs' behavior and potential for efficient learning approaches in data-centric AI. Our code is publicly available at: \url{https://github.com/QiaoXiao7282/hard_sample_learners}.


[70] 2409.09198

Throughput-Optimal Scheduling via Rate Learning

We study the problem of designing scheduling policies for communication networks. This problem is often addressed with max-weight-type approaches since they are throughput-optimal. However, max-weight policies make scheduling decisions based on the network congestion, which can be sometimes unnecessarily restrictive. In this paper, we present a ``schedule as you learn'' (SYL) approach, where we learn an average rate, and then select schedules that generate such a rate in expectation. This approach is interesting because scheduling decisions do not depend on the size of the queue backlogs, and so it provides increased flexibility to select schedules based on other criteria or rules, such as serving high-priority queues. We illustrate the results with numerical experiments for a cross-bar switch and show that, compared to max-weight, SYL can achieve lower latency to certain flows without compromising throughput optimality.


[71] 2409.09199

Batched Online Contextual Sparse Bandits with Sequential Inclusion of Features

Multi-armed Bandits (MABs) are increasingly employed in online platforms and e-commerce to optimize decision making for personalized user experiences. In this work, we focus on the Contextual Bandit problem with linear rewards, under conditions of sparsity and batched data. We address the challenge of fairness by excluding irrelevant features from decision-making processes using a novel algorithm, Online Batched Sequential Inclusion (OBSI), which sequentially includes features as confidence in their impact on the reward increases. Our experiments on synthetic data show the superior performance of OBSI compared to other algorithms in terms of regret, relevance of features used, and compute.


[72] 2409.09201

Contextual Evaluation of Large Language Models for Classifying Tropical and Infectious Diseases

While large language models (LLMs) have shown promise for medical question answering, there is limited work focused on tropical and infectious disease-specific exploration. We build on an open-source tropical and infectious diseases (TRINDs) dataset, expanding it to include demographic and semantic clinical and consumer augmentations, yielding 11000+ prompts. We evaluate LLM performance on these prompts, comparing generalist and medical LLMs, as well as LLM outcomes to human experts. We demonstrate, through systematic experimentation, the benefit of contextual information such as demographics, location, gender, and risk factors for optimal LLM responses. Finally, we develop a prototype of TRINDs-LM, a research tool that provides a playground to explore how context impacts LLM outputs for health.


[73] 2409.09202

WarmSwap: Sharing Dependencies for Accelerating Cold Starts in Serverless Functions

This work presents WarmSwap, a novel provider-side cold-start optimization for serverless computing. This optimization reduces cold-start time when booting and loading dependencies at runtime inside a function container. Previous approaches to the optimization of cold starts tend to fall into two categories: optimizing the infrastructure of serverless computing to benefit all serverless functions; or function-specific tuning for individual serverless functions. In contrast, WarmSwap offers a broad middle ground, which optimizes entire categories of serverless functions. WarmSwap eliminates the need to initialize middleware or software dependencies when launching a new serverless container, by migrating a pre-initialized live dependency image to the new function instance. WarmSwap respects the provider's cache constraints, as a single pre-warmed dependency image in the cache is shared among all serverless functions requiring that software dependency image. WarmSwap has been tested on seven representative functions from FunctionBench. The functions are chosen to compare with previous work. In those tests, WarmSwap accelerates cold-start executions for those serverless functions with large dependency requirements by a factor ranging from 1.2 to 2.2.


[74] 2409.09203

Pinto: A latched spring actuated robot for jumping and perching

Arboreal environments challenge current robots but are deftly traversed by many familiar animal locomotors such as squirrels. We present a small, 450 g robot "Pinto" developed for tree-jumping, a behavior seen in squirrels but rarely in legged robots: jumping from the ground onto a vertical tree trunk. We develop a powerful and lightweight latched series-elastic actuator using a twisted string and carbon fiber springs. We consider the effects of scaling down conventional quadrupeds and experimentally show how storing energy in a parallel-elastic fashion using a latch increases jump energy compared to series-elastic or springless strategies. By switching between series and parallel-elastic modes with our latched 5-bar leg mechanism, Pinto executes energetic jumps as well as maintains continuous control during shorter bounding motions. We also develop sprung 2-DoF arms equipped with spined grippers to grasp tree bark for high-speed perching following a jump.


[75] 2409.09204

A Systematic Review on Process Mining for Curricular Analysis

Educational Process Mining (EPM) is a data analysis technique that is used to improve educational processes. It is based on Process Mining (PM), which involves gathering records (logs) of events to discover process models and analyze the data from a process-centric perspective. One specific application of EPM is curriculum mining, which focuses on understanding the learning program students follow to achieve educational goals. This is important for institutional curriculum decision-making and quality improvement. Therefore, academic institutions can benefit from organizing the existing techniques, capabilities, and limitations. We conducted a systematic literature review to identify works on applying PM to curricular analysis and provide insights for further research. From the analysis of 22 primary studies, we found that results can be classified into five categories concerning the objectives they pursue: the discovery of educational trajectories, the identification of deviations in the observed behavior of students, the analysis of bottlenecks, the analysis of stopout and dropout problems, and the generation of recommendations. Moreover, we identified some open challenges and opportunities, such as standardizing replication studies to enable cross-university curricular analysis, and strengthening the connection between PM and data mining to improve curricular analysis.


[76] 2409.09205

To Shelter or Not To Shelter: Exploring the Influence of Different Modalities in Virtual Reality on Individuals' Tornado Mitigation Behaviors

Timely and adequate risk communication before natural hazards can reduce losses from extreme weather events and provide more resilient disaster preparedness. However, existing natural hazard risk communications have been abstract, ineffective, not immersive, and sometimes counterproductive. The implementation of virtual reality (VR) for natural hazard risk communication presents a promising alternative to the existing risk communication system by offering immersive and engaging experiences. However, it is still unknown how different modalities in VR could affect individuals' mitigation behaviors related to incoming natural hazards. In addition, it is also not clear how the repetitive risk communication of different modalities in the VR system leads to the effect of risk habituation. To fill the knowledge gap, we developed a VR system with a tornado risk communication scenario and conducted a mixed-design human subject experiment (N = 24). We comprehensively investigated our research using both quantitative and qualitative results.


[77] 2409.09207

FB-HyDON: Parameter-Efficient Physics-Informed Operator Learning of Complex PDEs via Hypernetwork and Finite Basis Domain Decomposition

Deep operator networks (DeepONet) and neural operators have gained significant attention for their ability to map infinite-dimensional function spaces and perform zero-shot super-resolution. However, these models often require large datasets for effective training. While physics-informed operators offer a data-agnostic learning approach, they introduce additional training complexities and convergence issues, especially in highly nonlinear systems. To overcome these challenges, we introduce Finite Basis Physics-Informed HyperDeepONet (FB-HyDON), an advanced operator architecture featuring intrinsic domain decomposition. By leveraging hypernetworks and finite basis functions, FB-HyDON effectively mitigates the training limitations associated with existing physics-informed operator learning methods. We validated our approach on the high-frequency harmonic oscillator, Burgers' equation at different viscosity levels, and the Allen-Cahn equation, demonstrating substantial improvements over other operator learning models.


[78] 2409.09210

ORS: A novel Olive Ridley Survival inspired Meta-heuristic Optimization Algorithm

Meta-heuristic algorithm development has been a thrust area of research since its inception. In this paper, a novel meta-heuristic optimization algorithm, Olive Ridley Survival (ORS), is proposed, inspired by the survival challenges faced by hatchlings of the Olive Ridley sea turtle. A major fact about Olive Ridley survival is that, of the one thousand hatchlings that emerge from a nest, only one survives at sea due to various environmental and other factors. This fact is the backbone of the proposed algorithm. The algorithm has two major phases: hatchling survival through environmental factors, and the impact of the movement trajectory on survival. The phases are mathematically modelled and implemented, along with suitable input representations and a fitness function. The algorithm is analysed theoretically. To validate the algorithm, fourteen mathematical benchmark functions from standard CEC test suites are evaluated and statistically tested. Also, to study the efficacy of ORS on recent complex benchmark functions, ten benchmark functions of CEC-06-2019 are evaluated. Further, three well-known engineering problems are solved by ORS and compared with other state-of-the-art meta-heuristics. Simulation results show that in many cases the proposed ORS algorithm outperforms some state-of-the-art meta-heuristic optimization algorithms. The sub-optimal behaviour of ORS on some recent benchmark functions is also observed.


[79] 2409.09212

Extending predictive process monitoring for collaborative processes

Process mining on business process execution data has focused primarily on orchestration-type processes performed within a single organization (intra-organizational). Collaborative (inter-organizational) processes, unlike orchestration-type ones, span several organizations (for example, in e-Government), adding complexity and various challenges both to their implementation and to the discovery, prediction, and analysis of their execution. Predictive process monitoring exploits execution data from past instances to predict the execution of current cases. It is possible to make predictions on the next activity and the remaining time, among others, to anticipate possible deviations, violations, and delays in the processes and to take preventive measures (e.g., re-allocation of resources). In this work, we propose an extension of traditional process prediction for collaborative processes, considering the particularities of this type of process, which add information of interest in this context, for example, which participant performs the next activity, or the next message to be exchanged between two participants.


[80] 2409.09214

Seed-Music: A Unified Framework for High Quality and Controlled Music Generation

We introduce Seed-Music, a suite of music generation systems capable of producing high-quality music with fine-grained style control. Our unified framework leverages both auto-regressive language modeling and diffusion approaches to support two key music creation workflows: \textit{controlled music generation} and \textit{post-production editing}. For controlled music generation, our system enables vocal music generation with performance controls from multi-modal inputs, including style descriptions, audio references, musical scores, and voice prompts. For post-production editing, it offers interactive tools for editing lyrics and vocal melodies directly in the generated audio. We encourage readers to listen to demo audio examples at https://team.doubao.com/seed-music .


[81] 2409.09217

Rational-WENO: A lightweight, physically-consistent three-point weighted essentially non-oscillatory scheme

Conventional WENO3 methods are known to be highly dissipative at lower resolutions, introducing significant errors in the pre-asymptotic regime. In this paper, we employ a rational neural network to accurately estimate the local smoothness of the solution, dynamically adapting the stencil weights based on local solution features. As rational neural networks can represent fast transitions between smooth and sharp regimes, this approach achieves a granular reconstruction with significantly reduced dissipation, improving the accuracy of the simulation. The network is trained offline on a carefully chosen dataset of analytical functions, bypassing the need for differentiable solvers. We also propose a robust model selection criterion based on estimates of the interpolation's convergence order on a set of test functions, which correlates better with the model performance in downstream tasks. We demonstrate the effectiveness of our approach on several one-, two-, and three-dimensional fluid flow problems: our scheme generalizes across grid resolutions while handling smooth and discontinuous solutions. In most cases, our rational network-based scheme achieves higher accuracy than conventional WENO3 with the same stencil size, and in a few of them, it achieves accuracy comparable to WENO5, which uses a larger stencil.
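
For orientation, the conventional WENO3 baseline computes its nonlinear weights from two-point smoothness indicators; the paper's contribution replaces this algebraic smoothness estimate with a rational neural network. A minimal sketch with the standard coefficients (the data and epsilon are illustrative):

    import numpy as np

    # Conventional WENO3 reconstruction of u at the cell face i+1/2
    # from the 3-point stencil {u_{i-1}, u_i, u_{i+1}}.
    def weno3_left(um1, u0, up1, eps=1e-6):
        beta0 = (up1 - u0) ** 2            # smoothness of stencil {u_i, u_{i+1}}
        beta1 = (u0 - um1) ** 2            # smoothness of stencil {u_{i-1}, u_i}
        a0 = (2 / 3) / (eps + beta0) ** 2  # ideal linear weight 2/3
        a1 = (1 / 3) / (eps + beta1) ** 2  # ideal linear weight 1/3
        w0, w1 = a0 / (a0 + a1), a1 / (a0 + a1)
        return w0 * (0.5 * u0 + 0.5 * up1) + w1 * (-0.5 * um1 + 1.5 * u0)

    x = np.linspace(0, 1, 6)
    u = np.where(x < 0.5, 1.0, 0.0)        # discontinuous data
    print(weno3_left(u[1], u[2], u[3]))    # weights sharply prefer the smooth side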


[82] 2409.09218

AI as Extraherics: Fostering Higher-order Thinking Skills in Human-AI Interaction

As artificial intelligence (AI) technologies, including generative AI, continue to evolve, concerns have arisen about over-reliance on AI, which may lead to human deskilling and diminished cognitive engagement. Over-reliance on AI can also lead users to accept information given by AI without performing critical examinations, causing negative consequences, such as misleading users with hallucinated contents. This paper introduces extraheric AI, a human-AI interaction conceptual framework that fosters users' higher-order thinking skills, such as creativity, critical thinking, and problem-solving, during task completion. Unlike existing human-AI interaction designs, which replace or augment human cognition, extraheric AI fosters cognitive engagement by posing questions or providing alternative perspectives to users, rather than direct answers. We discuss interaction strategies, evaluation methods aligned with cognitive load theory and Bloom's taxonomy, and future research directions to ensure that human cognitive skills remain a crucial element in AI-integrated environments, promoting a balanced partnership between humans and AI.


[83] 2409.09220

Market Implications of Alternative Operating Reserve Modeling in Wholesale Electricity Markets

Pricing and settlement mechanisms are crucial for efficient resource allocation, investment incentives, market competition, and regulatory oversight. In the United States, Regional Transmission Operators (RTOs) adopt a uniform pricing scheme that hinges on the marginal costs of supplying additional electricity. This study investigates the pricing and settlement impacts of alternative reserve constraint modeling, highlighting how even slight variations in the modeling of constraints can drastically alter market clearing prices, reserve quantities, and revenue outcomes. Focusing on the diverse market designs and assumptions in ancillary services by U.S. RTOs, particularly in relation to capacity sharing and reserve substitutions, the research examines four distinct models that combine these elements, based on large-scale synthetic power system test data. Our study provides critical insight into the economic implications and the underlying factors of these alternative reserve constraints through market simulations and data analysis.


[84] 2409.09221

Multi-modal Speech Transformer Decoders: When Do Multiple Modalities Improve Accuracy?

Decoder-only discrete-token language models have recently achieved significant success in automatic speech recognition. However, systematic analyses of how different modalities impact performance in specific scenarios remain limited. In this paper, we investigate the effects of multiple modalities on recognition accuracy on both synthetic and real-world datasets. Our experiments suggest that: (1) Integrating more modalities can increase accuracy; in particular, our paper is, to our best knowledge, the first to show the benefit of combining audio, image context, and lip information; (2) Images as a supplementary modality for speech recognition provide the greatest benefit at moderate noise levels; moreover, they exhibit a different trend compared to inherently synchronized modalities like lip movements; (3) Performance improves on both synthetic and real-world datasets when the most relevant visual information is filtered as a preprocessing step.


[85] 2409.09222

Dark Patterns in the Opt-Out Process and Compliance with the California Consumer Privacy Act (CCPA)

To protect consumer privacy, the California Consumer Privacy Act (CCPA) mandates that businesses provide consumers with a straightforward way to opt out of the sale and sharing of their personal information. However, the control that businesses enjoy over the opt-out process allows them to impose hurdles on consumers aiming to opt out, including by employing dark patterns. Motivated by the enactment of the California Privacy Rights Act (CPRA), which strengthens the CCPA and explicitly forbids certain dark patterns in the opt-out process, we investigate how dark patterns are used in opt-out processes and assess their compliance with CCPA regulations. Our research reveals that websites employ a variety of dark patterns. Some of these patterns are explicitly prohibited under the CCPA; others evidently take advantage of legal loopholes. Despite the initial efforts to restrict dark patterns by policymakers, there is more work to be done.


[86] 2409.09223

Diagnosis via Proofs of Unsatisfiability for First-Order Logic with Relational Objects

Satisfiability-based automated reasoning is an approach that is being successfully used in software engineering to validate complex software, including for safety-critical systems. Such reasoning underlies many validation activities, from requirements analysis to design consistency to test coverage. While generally effective, the back-end constraint solvers are often complex and inevitably error-prone, which threatens the soundness of their application. Thus, such solvers need to be validated, which includes checking correctness and explaining (un)satisfiability results returned by them. In this work, we consider satisfiability analysis based on First-Order Logic with relational objects (FOL*) which has been shown to be effective for reasoning about time- and data-sensitive early system designs. We tackle the challenge of validating the correctness of FOL* unsatisfiability results and deriving diagnoses to explain the causes of the unsatisfiability. Inspired by the concept of proofs of UNSAT from SAT/SMT solvers, we define a proof format and proof rules to track the solvers' reasoning steps as sequences of derivations towards UNSAT. We also propose an algorithm to verify the correctness of FOL* proofs while filtering unnecessary derivations and develop a proof-based diagnosis to explain the cause of unsatisfiability. We implemented the proposed proof support on top of the state-of-the-art FOL* satisfiability checker to generate proofs of UNSAT and validated our approach by applying the proof-based diagnoses to explain the causes of well-formedness issues of normative requirements of software systems.


[87] 2409.09224

Optimal Control Approach for Gait Transition with Riemannian Splines

Robotic locomotion often relies on sequenced gaits to efficiently convert control input into desired motion. Despite extensive studies on gait optimization, achieving smooth and efficient gait transitions remains challenging. In this paper, we propose a general solver based on geometric optimal control methods, leveraging insights from previous works on gait efficiency. Building upon our previous work, we express the effort to execute the trajectory as distinct geometric objects, transforming the optimization problems into boundary value problems. To validate our approach, we generate gait transition trajectories for three-link swimmers across various fluid environments. This work provides insights into optimal trajectory geometries and mechanical considerations for robotic locomotion.


[88] 2409.09225

Solid-Fluid Interaction on Particle Flow Maps

We propose a novel solid-fluid interaction method for coupling elastic solids with impulse flow maps. Our key idea is to unify the representation of fluid and solid components as particle flow maps with different lengths and dynamics. The solid-fluid coupling is enabled by implementing two novel mechanisms: first, we developed an impulse-to-velocity transfer mechanism to unify the exchanged physical quantities; second, we devised a particle path integral mechanism to accumulate coupling forces along each flow-map trajectory. Our framework integrates these two mechanisms into an Eulerian-Lagrangian impulse fluid simulator to accommodate traditional coupling models, exemplified by the Material Point Method (MPM) and Immersed Boundary Method (IBM), within a particle flow map framework. We demonstrate our method's efficacy by simulating solid-fluid interactions exhibiting strong vortical dynamics, including various vortex shedding and interaction examples across swimming, falling, breezing, and combustion.


[89] 2409.09239

Autoregressive + Chain of Thought (CoT) $\simeq$ Recurrent: Recurrence's Role in Language Models and a Revisit of Recurrent Transformer

The Transformer architecture excels in a variety of language modeling tasks, outperforming traditional neural architectures such as RNN and LSTM. This is partially due to its elimination of recurrent connections, which allows for parallel training and a smoother flow of gradients. However, this move away from recurrent structures places the Transformer model at the lower end of Chomsky's computational hierarchy, imposing limitations on its computational abilities. Consequently, even advanced Transformer-based models face considerable difficulties in tasks like counting, string reversal, bracket pairing, and multiplication. These tasks, though seemingly elementary, require a level of computational complexity that exceeds the capabilities of the Transformer architecture. Concurrently, the emergence of ``Chain of Thought" (CoT) prompting has enabled Transformer-based language models to tackle tasks that were previously impossible or poorly executed. Despite some previous research primarily interpreting CoT from a psychological perspective, a comprehensive understanding of \textit{why} CoT proves so effective in the reasoning process remains elusive. In this work, we thoroughly investigate the influence of recurrent structures in language models on their reasoning abilities, shedding light on how the CoT approach can mimic recurrent computation and act as a bridge between autoregression and recurrence. It is this approximated recurrence that notably improves the model's performance and computational capacity. Moreover, we revisit recent recurrent-based Transformer model designs, focusing on their computational abilities through our proposed concept of ``recurrence-completeness" and identify key theoretical limitations in models like Linear Transformer and RWKV. Through this, we aim to provide insight into the neural model architectures and prompt better model design.


[90] 2409.09240

Cross-Entropy Optimization for Hyperparameter Optimization in Stochastic Gradient-based Approaches to Train Deep Neural Networks

In this paper, we present a cross-entropy optimization method for hyperparameter optimization in stochastic gradient-based approaches to training deep neural networks. The value of a hyperparameter of a learning algorithm often has a great impact on the performance of a model, such as its convergence speed and generalization performance. While in some cases the hyperparameters of a learning algorithm can be part of the learning parameters, in other scenarios the hyperparameters of a stochastic optimization algorithm such as Adam and its variants are either fixed as constants or varied monotonically over time. We give an in-depth analysis of the presented method in the framework of expectation maximization (EM). The presented algorithm, cross-entropy optimization for hyperparameter optimization of a learning algorithm (CEHPO), is equally applicable to other areas of optimization problems in deep learning. We hope that the presented methods can provide different perspectives and offer some insights for optimization problems in different areas of machine learning and beyond.
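
As a rough illustration of the general cross-entropy idea (a minimal sketch under our own assumptions, not the authors' CEHPO algorithm), one can maintain a Gaussian sampling distribution over a hyperparameter such as the learning rate, score sampled candidates by validation loss, and refit the distribution on the elite fraction; val_loss_after_short_run below is a hypothetical evaluation routine:

    import numpy as np

    def cross_entropy_search(evaluate, mu=-3.0, sigma=1.0, iters=10, pop=20, elite_frac=0.25):
        # Gaussian sampling distribution over log10(learning rate)
        n_elite = max(1, int(pop * elite_frac))
        for _ in range(iters):
            samples = np.random.normal(mu, sigma, size=pop)    # candidate hyperparameters
            losses = np.array([evaluate(10.0 ** s) for s in samples])
            elites = samples[np.argsort(losses)[:n_elite]]     # keep the lowest-loss candidates
            mu, sigma = elites.mean(), elites.std() + 1e-6     # refit on the elite set
        return 10.0 ** mu

    # best_lr = cross_entropy_search(lambda lr: val_loss_after_short_run(lr))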


[91] 2409.09242

A Dynamic Weighting Strategy to Mitigate Worker Node Failure in Distributed Deep Learning

The increasing complexity of deep learning models and the demand for processing vast amounts of data make the utilization of large-scale distributed systems for efficient training essential. These systems, however, face significant challenges such as communication overhead, hardware limitations, and node failure. This paper investigates various optimization techniques in distributed deep learning, including Elastic Averaging SGD (EASGD) and the second-order method AdaHessian. We propose a dynamic weighting strategy to mitigate the problem of straggler nodes due to failure, enhancing the performance and efficiency of the overall training process. We conduct experiments with different numbers of workers and communication periods to demonstrate improved convergence rates and test performance using our strategy.
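
The abstract does not spell out the weighting rule, but a minimal staleness-aware aggregation sketch (our assumption, not necessarily the paper's exact strategy) conveys the idea of down-weighting workers that have fallen behind:

    import numpy as np

    def aggregate_updates(updates, staleness):
        # updates: list of parameter-delta arrays, one per worker
        # staleness[i]: synchronization rounds worker i is behind (0 = fresh)
        w = 1.0 / (1.0 + np.asarray(staleness, dtype=float))  # straggling workers count less
        w /= w.sum()
        return sum(wi * ui for wi, ui in zip(w, updates))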


[92] 2409.09244

Investigation of Hierarchical Spectral Vision Transformer Architecture for Classification of Hyperspectral Imagery

In the past three years, there has been significant interest in hyperspectral imagery (HSI) classification using vision Transformers for analysis of remotely sensed data. Previous research predominantly focused on the empirical integration of convolutional neural networks (CNNs) to augment the network's capability to extract local feature information. Yet, the theoretical justification for vision Transformers outperforming CNN architectures in HSI classification remains an open question. To address this issue, a unified hierarchical spectral vision Transformer architecture, specifically tailored for HSI classification, is investigated. In this streamlined yet effective vision Transformer architecture, multiple mixer modules are strategically integrated, each yielding a separate model variant. These include the CNN-mixer, which executes convolution operations; the spatial self-attention (SSA)-mixer and channel self-attention (CSA)-mixer, both of which are adaptations of classical self-attention blocks; and hybrid models such as the SSA+CNN-mixer and CSA+CNN-mixer, which merge convolution with self-attention operations. This integration facilitates the development of a broad spectrum of vision Transformer-based models tailored for HSI classification. In terms of the training process, a comprehensive analysis is performed, contrasting classical CNN models and vision Transformer-based counterparts, with particular attention to disturbance robustness and the distribution of the largest eigenvalue of the Hessian. From the evaluations conducted on various mixer models rooted in the unified architecture, it is concluded that the unique strength of vision Transformers can be attributed to their overarching architecture, rather than being exclusively reliant on individual multi-head self-attention (MSA) components.


[93] 2409.09245

Robust Training of Neural Networks at Arbitrary Precision and Sparsity

The discontinuous operations inherent in quantization and sparsification introduce obstacles to backpropagation. This is particularly challenging when training deep neural networks in ultra-low precision and sparse regimes. We propose a novel, robust, and universal solution: a denoising affine transform that stabilizes training under these challenging conditions. By formulating quantization and sparsification as perturbations during training, we derive a perturbation-resilient approach based on ridge regression. Our solution employs a piecewise constant backbone model to ensure a performance lower bound and features an inherent noise reduction mechanism to mitigate perturbation-induced corruption. This formulation allows existing models to be trained at arbitrarily low precision and sparsity levels with off-the-shelf recipes. Furthermore, our method provides a novel perspective on training temporal binary neural networks, contributing to ongoing efforts to narrow the gap between artificial and biological neural networks.


[94] 2409.09247

A differentiable structural analysis framework for high-performance design optimization

Fast, gradient-based structural optimization has long been limited to a highly restricted subset of problems -- namely, density-based compliance minimization -- for which gradients can be analytically derived. For other objective functions, constraints, and design parameterizations, computing gradients has remained inaccessible, requiring the use of derivative-free algorithms that scale poorly with problem size. This has restricted the applicability of optimization to abstracted and academic problems, and has limited the uptake of these potentially impactful methods in practice. In this paper, we bridge the gap between computational efficiency and the freedom of problem formulation through a differentiable analysis framework designed for general structural optimization. We achieve this by leveraging Automatic Differentiation (AD) to manage the complex computational graph of structural analysis programs, and implementing specific derivation rules for performance-critical functions along this graph. This paper provides a complete overview of gradient computation for arbitrary structural design objectives, identifies the barriers to their practical use, and derives key intermediate derivative operations that resolve these bottlenecks. Our framework is then tested against a series of structural design problems of increasing complexity: two highly constrained minimum-volume problems, a multi-stage shape and section design problem, and an embodied carbon minimization problem. We benchmark our framework against other common optimization approaches, and show that our method outperforms others in terms of speed, stability, and solution quality.
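
To make the AD idea concrete, here is a toy differentiable analysis (our sketch, not the authors' framework), assuming JAX: a serial chain of bar elements is assembled into a stiffness matrix, the displacements are solved for, and the gradient of compliance with respect to the member areas comes out of automatic differentiation:

    import jax
    import jax.numpy as jnp

    def compliance(areas, E=210e9, L=1.0, load=1e4):
        # serial chain of n bar elements, node 0 fixed, point load at the free end
        n = areas.shape[0]
        k = E * areas / L                         # element stiffnesses
        idx = jnp.arange(n)
        K = jnp.zeros((n, n))
        K = K.at[idx, idx].add(k)                 # assemble the reduced stiffness matrix
        K = K.at[idx[:-1], idx[:-1]].add(k[1:])
        K = K.at[idx[:-1], idx[1:]].add(-k[1:])
        K = K.at[idx[1:], idx[:-1]].add(-k[1:])
        f = jnp.zeros(n).at[-1].set(load)
        u = jnp.linalg.solve(K, f)                # linear structural analysis
        return f @ u                              # compliance f^T u

    grad_fn = jax.grad(compliance)                # exact design gradients via AD
    print(grad_fn(jnp.ones(5) * 1e-4))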


[95] 2409.09249

NovAScore: A New Automated Metric for Evaluating Document Level Novelty

The rapid expansion of online content has intensified the issue of information redundancy, underscoring the need for solutions that can identify genuinely new information. Despite this challenge, the research community has seen a decline in focus on novelty detection, particularly with the rise of large language models (LLMs). Additionally, previous approaches have relied heavily on human annotation, which is time-consuming, costly, and particularly challenging when annotators must compare a target document against a vast number of historical documents. In this work, we introduce NovAScore (Novelty Evaluation in Atomicity Score), an automated metric for evaluating document-level novelty. NovAScore aggregates the novelty and salience scores of atomic information, providing high interpretability and a detailed analysis of a document's novelty. With its dynamic weight adjustment scheme, NovAScore offers enhanced flexibility and an additional dimension to assess both the novelty level and the importance of information within a document. Our experiments show that NovAScore strongly correlates with human judgments of novelty, achieving a 0.626 Point-Biserial correlation on the TAP-DLND 1.0 dataset and a 0.920 Pearson correlation on an internal human-annotated dataset.
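
The abstract suggests a salience-weighted aggregation of atomic novelty judgments; one plausible reading (our sketch, not the authors' exact formula) is:

    def document_novelty(units):
        # units: (is_novel, salience) per atomic content unit,
        # with is_novel in {0, 1} and salience in [0, 1]
        weights = [0.5 + sal for _, sal in units]   # salient units dominate the score
        return sum(w * nov for w, (nov, _) in zip(weights, units)) / sum(weights)

    # document_novelty([(1, 0.9), (0, 0.2), (1, 0.5)])  # weighted share of novel content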


[96] 2409.09251

ETAGE: Enhanced Test Time Adaptation with Integrated Entropy and Gradient Norms for Robust Model Performance

Test time adaptation (TTA) equips deep learning models to handle unseen test data that deviates from the training distribution, even when source data is inaccessible. While traditional TTA methods often rely on entropy as a confidence metric, its effectiveness can be limited, particularly in biased scenarios. Extending existing approaches like the Pseudo Label Probability Difference (PLPD), we introduce ETAGE, a refined TTA method that integrates entropy minimization with gradient norms and PLPD to enhance sample selection and adaptation. Our method excludes from adaptation the samples that combine high entropy with high gradient norms, as they are likely to cause instability, thus avoiding the overfitting to noise often observed in previous methods. Extensive experiments on CIFAR-10-C and CIFAR-100-C datasets demonstrate that our approach outperforms existing TTA techniques, particularly in challenging and biased scenarios, leading to more robust and consistent model performance across diverse test scenarios. The codebase for ETAGE is available at https://github.com/afsharshamsi/ETAGE.
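
A minimal sketch of the selection criterion (entropy combined with the per-sample gradient norm; the PLPD term from the paper is omitted), assuming a PyTorch classifier and user-chosen thresholds:

    import torch
    import torch.nn.functional as F

    def select_for_adaptation(model, batch, ent_thresh, grad_thresh):
        # exclude samples whose predictions have both high entropy and a high
        # gradient norm, since adapting on them tends to amplify noise
        keep = []
        for x in batch:
            model.zero_grad()
            probs = F.softmax(model(x.unsqueeze(0)), dim=-1)
            entropy = -(probs * probs.log()).sum()
            entropy.backward()
            gnorm = torch.sqrt(sum((p.grad ** 2).sum()
                                   for p in model.parameters() if p.grad is not None))
            keep.append(not (entropy.item() > ent_thresh and gnorm.item() > grad_thresh))
        return torch.tensor(keep)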


[97] 2409.09253

Unleash LLMs' Potential for Recommendation by Coordinating Twin-Tower Dynamic Semantic Token Generator

Owing to their unprecedented capability in semantic understanding and logical reasoning, pre-trained large language models (LLMs) have shown fantastic potential in developing the next-generation recommender systems (RSs). However, the static index paradigm adopted by current methods greatly restricts the utilization of LLMs' capacity for recommendation, leading not only to insufficient alignment between semantic and collaborative knowledge, but also to the neglect of high-order user-item interaction patterns. In this paper, we propose the Twin-Tower Dynamic Semantic Recommender (TTDS), the first generative RS to adopt a dynamic semantic index paradigm, aiming to resolve the above problems simultaneously. More specifically, we contrive, for the first time, a dynamic knowledge fusion framework that integrates a twin-tower semantic token generator into the LLM-based recommender, hierarchically allocating meaningful semantic indices for items and users, and accordingly predicting the semantic index of the target item. Furthermore, a dual-modality variational auto-encoder is proposed to facilitate multi-grained alignment between semantic and collaborative knowledge. Eventually, a series of novel tuning tasks specially customized for capturing high-order user-item interaction patterns are proposed to take advantage of users' historical behavior. Extensive experiments across three public datasets demonstrate the superiority of the proposed methodology in developing LLM-based generative RSs. The proposed TTDS recommender achieves an average improvement of 19.41% in Hit-Rate and 20.84% in NDCG, compared with the leading baseline methods.


[98] 2409.09254

VSFormer: Mining Correlations in Flexible View Set for Multi-view 3D Shape Understanding

View-based methods have demonstrated promising performance in 3D shape understanding. However, they tend to make strong assumptions about the relations between views or learn the multi-view correlations indirectly, which limits the flexibility of exploring inter-view correlations and the effectiveness of target tasks. To overcome the above problems, this paper investigates flexible organization and explicit correlation learning for multiple views. In particular, we propose to incorporate different views of a 3D shape into a permutation-invariant set, referred to as \emph{View Set}, which removes rigid relation assumptions and facilitates adequate information exchange and fusion among views. Based on that, we devise a nimble Transformer model, named \emph{VSFormer}, to explicitly capture pairwise and higher-order correlations of all elements in the set. Meanwhile, we theoretically reveal a natural correspondence between the Cartesian product of a view set and the correlation matrix in the attention mechanism, which supports our model design. Comprehensive experiments suggest that VSFormer has better flexibility, higher inference efficiency, and superior performance. Notably, VSFormer reaches state-of-the-art results on various 3D recognition datasets, including ModelNet40, ScanObjectNN and RGBD. It also establishes new records on the SHREC'17 retrieval benchmark. The code and datasets are available at \url{https://github.com/auniquesun/VSFormer}.


[99] 2409.09256

Audio-text Retrieval with Transformer-based Hierarchical Alignment and Disentangled Cross-modal Representation

Most existing audio-text retrieval (ATR) approaches typically rely on a single-level interaction to associate audio and text, limiting their ability to align different modalities and leading to suboptimal matches. In this work, we present a novel ATR framework that leverages two-stream Transformers in conjunction with a Hierarchical Alignment (THA) module to identify multi-level correspondences of different Transformer blocks between audio and text. Moreover, current ATR methods mainly focus on learning a global-level representation, missing out on intricate details to capture audio occurrences that correspond to textual semantics. To bridge this gap, we introduce a Disentangled Cross-modal Representation (DCR) approach that disentangles high-dimensional features into compact latent factors to grasp fine-grained audio-text semantic correlations. Additionally, we develop a confidence-aware (CA) module to estimate the confidence of each latent factor pair and adaptively aggregate cross-modal latent factors to achieve local semantic alignment. Experiments show that our THA effectively boosts ATR performance, with the DCR approach further contributing to consistent performance gains.


[100] 2409.09258

Active Learning to Guide Labeling Efforts for Question Difficulty Estimation

In recent years, there has been a surge in research on Question Difficulty Estimation (QDE) using natural language processing techniques. Transformer-based neural networks achieve state-of-the-art performance, primarily through supervised methods, with only an isolated study in unsupervised learning. While supervised methods focus on predictive performance, they require abundant labeled data. On the other hand, unsupervised methods do not require labeled data but rely on a different evaluation metric that is also computationally expensive in practice. This work bridges the research gap by exploring active learning for QDE, a supervised human-in-the-loop approach striving to minimize the labeling effort while matching the performance of state-of-the-art models. The active learning process iteratively trains on a labeled subset, acquiring labels from human experts only for the most informative unlabeled data points. Furthermore, we propose a novel acquisition function, PowerVariance, to add the most informative samples to the labeled set, a regression extension to the PowerBALD function popular in classification. We employ DistilBERT for QDE and identify informative samples by applying Monte Carlo dropout to capture epistemic uncertainty in unlabeled samples. The experiments demonstrate that active learning with PowerVariance acquisition achieves a performance close to fully supervised models after labeling only 10% of the training data. The proposed methodology promotes the responsible use of educational resources, makes QDE tools more accessible to course instructors, and is promising for other applications such as personalized support systems and question-answering tools.
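
One plausible reading of the PowerVariance acquisition (a sketch under our assumptions, mirroring PowerBALD's stochastic top-k via Gumbel noise) ranks unlabeled items by the variance of MC-dropout predictions:

    import torch

    def power_variance_acquire(model, pool, k, passes=20, beta=1.0):
        model.train()                               # keep dropout active at inference
        with torch.no_grad():
            preds = torch.stack([model(pool).squeeze(-1) for _ in range(passes)])
        var = preds.var(dim=0)                      # epistemic uncertainty per item
        gumbel = -torch.log(-torch.log(torch.rand_like(var)))
        scores = beta * torch.log(var + 1e-12) + gumbel   # softened top-k selection
        return torch.topk(scores, k).indices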


[101] 2409.09260

Analyzing Correlations Between Intrinsic and Extrinsic Bias Metrics of Static Word Embeddings With Their Measuring Biases Aligned

We examine the abilities of intrinsic bias metrics of static word embeddings to predict whether Natural Language Processing (NLP) systems exhibit biased behavior. A word embedding is one of the fundamental NLP technologies that represents the meanings of words through real vectors, and problematically, it also learns social biases such as stereotypes. An intrinsic bias metric measures bias by examining a characteristic of vectors, while an extrinsic bias metric checks whether an NLP system trained with a word embedding is biased. A previous study found that a common intrinsic bias metric usually does not correlate with extrinsic bias metrics. However, the intrinsic and extrinsic bias metrics did not measure the same bias in most cases, which makes us question whether the lack of correlation is genuine. In this paper, we extract characteristic words from the datasets of extrinsic bias metrics and compute intrinsic bias metrics with those words, ensuring that both kinds of metrics measure the same bias, and then analyze the correlations between them. We observed moderate to high correlations with some extrinsic bias metrics but little to no correlations with the others. This result suggests that intrinsic bias metrics can predict biased behavior in particular settings but not in others. Experiment codes are available at GitHub.


[102] 2409.09261

What Is Wrong with My Model? Identifying Systematic Problems with Semantic Data Slicing

Machine learning models make mistakes, yet sometimes it is difficult to identify the systematic problems behind the mistakes. Practitioners engage in various activities, including error analysis, testing, auditing, and red-teaming, to form hypotheses of what can go (or has gone) wrong with their models. To validate these hypotheses, practitioners employ data slicing to identify relevant examples. However, traditional data slicing is limited by available features and programmatic slicing functions. In this work, we propose SemSlicer, a framework that supports semantic data slicing, which identifies a semantically coherent slice, without the need for existing features. SemSlicer uses Large Language Models to annotate datasets and generate slices from any user-defined slicing criteria. We show that SemSlicer generates accurate slices with low cost, allows flexible trade-offs between different design dimensions, reliably identifies under-performing data slices, and helps practitioners identify useful data slices that reflect systematic problems.


[103] 2409.09262

Informative Subgraphs Aware Masked Auto-Encoder in Dynamic Graphs

Generative self-supervised learning (SSL), especially masked autoencoders (MAE), has greatly succeeded and garnered substantial research interest in graph machine learning. However, research on MAE for dynamic graphs is still scant. This gap arises primarily because a dynamic graph not only possesses topological structure information but also encapsulates temporal evolution dependencies. Applying the random masking strategy that most MAE methods adopt to dynamic graphs can remove the crucial subgraphs that guide the evolution of dynamic graphs, resulting in the loss of crucial spatio-temporal information in node representations. To bridge this gap, in this paper, we propose a novel Informative Subgraphs Aware Masked Auto-Encoder in Dynamic Graphs, namely DyGIS. Specifically, we introduce a constrained probabilistic generative model to generate informative subgraphs that guide the evolution of dynamic graphs, successfully alleviating the issue of missing dynamic evolution subgraphs. The informative subgraphs identified by DyGIS serve as the input of the dynamic graph masked autoencoder (DGMAE), effectively ensuring the integrity of the evolutionary spatio-temporal information within dynamic graphs. Extensive experiments on eleven datasets demonstrate that DyGIS achieves state-of-the-art performance across multiple tasks.


[104] 2409.09263

Operational Wind Speed Forecasts for Chile's Electric Power Sector Using a Hybrid ML Model

As Chile's electric power sector advances toward a future powered by renewable energy, accurate forecasting of renewable generation is essential for managing grid operations. The integration of renewable energy sources is particularly challenging due to the operational difficulties of managing their power generation, which is highly variable compared to fossil fuel sources, delaying the availability of clean energy. To mitigate this, we quantify the impact of increasing intermittent generation from wind and solar on thermal power plants in Chile and introduce a hybrid wind speed forecasting methodology which combines two custom ML models for Chile. The first model is based on TiDE, an MLP-based ML model for short-term forecasts, and the second is based on a graph neural network, GraphCast, for medium-term forecasts up to 10 days. Our hybrid approach outperforms the most accurate operational deterministic systems by 4-21% for short-term forecasts and 5-23% for medium-term forecasts and can directly lower the impact of wind generation on thermal ramping, curtailment, and system-level emissions in Chile.


[105] 2409.09266

TransformerMPC: Accelerating Model Predictive Control via Transformers

In this paper, we address the problem of reducing the computational burden of Model Predictive Control (MPC) for real-time robotic applications. We propose TransformerMPC, a method that enhances the computational efficiency of MPC algorithms by leveraging the attention mechanism in transformers for both online constraint removal and better warm start initialization. Specifically, TransformerMPC accelerates the computation of optimal control inputs by selecting only the active constraints to be included in the MPC problem, while simultaneously providing a warm start to the optimization process. This approach ensures that the original constraints are satisfied at optimality. TransformerMPC is designed to be seamlessly integrated with any MPC solver, irrespective of its implementation. To guarantee constraint satisfaction after removing inactive constraints, we perform an offline verification to ensure that the optimal control inputs generated by the MPC solver meet all constraints. The effectiveness of TransformerMPC is demonstrated through extensive numerical simulations on complex robotic systems, achieving up to 35x improvement in runtime without any loss in performance.
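
A sketch of the downstream solve (assuming cvxpy, a positive-definite Hessian H, and hypothetical active_mask and u_warm outputs from the transformer): keep only the predicted-active inequality constraints, warm-start the solver, and verify the dropped constraints afterwards:

    import numpy as np
    import cvxpy as cp

    def solve_reduced_mpc(H, g, A, b, active_mask, u_warm=None):
        u = cp.Variable(H.shape[0])
        if u_warm is not None:
            u.value = u_warm                        # transformer-provided warm start
        cons = [A[active_mask] @ u <= b[active_mask]]
        prob = cp.Problem(cp.Minimize(0.5 * cp.quad_form(u, H) + g @ u), cons)
        prob.solve(warm_start=True)
        # the constraints removed as "inactive" must still hold at the optimum
        assert np.all(A @ u.value <= b + 1e-6), "a dropped constraint is violated"
        return u.value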


[106] 2409.09267

Cross-Disciplinary Perspectives on Youth Digital Well-Being Research: Identifying Notable Developments, Persistent Gaps, and Future Directions

This paper provides a broad, multi-disciplinary overview of key insights, persistent gaps, and future paths in youth digital well-being research from the perspectives of researchers who are conducting this work.


[107] 2409.09269

Guiding Vision-Language Model Selection for Visual Question-Answering Across Tasks, Domains, and Knowledge Types

Visual Question-Answering (VQA) has become a key use-case in several applications to aid user experience, particularly after Vision-Language Models (VLMs) achieved good results in zero-shot inference. But evaluating different VLMs for an application requirement using a standardized framework in practical settings is still challenging. This paper introduces a comprehensive framework for evaluating VLMs tailored to VQA tasks in practical settings. We present a novel dataset derived from established VQA benchmarks, annotated with task types, application domains, and knowledge types, three key practical aspects on which tasks can vary. We also introduce GoEval, a multimodal evaluation metric developed using GPT-4o, achieving a correlation factor of 56.71% with human judgments. Our experiments with ten state-of-the-art VLMs reveal that no single model excels universally, making appropriate selection a key design decision. Proprietary models such as Gemini-1.5-Pro and GPT-4o-mini generally outperform others, though open-source models like InternVL-2-8B and CogVLM-2-Llama-3-19B demonstrate competitive strengths in specific contexts, while providing additional advantages. This study guides the selection of VLMs based on specific task requirements and resource constraints, and can also be extended to other vision-language tasks.


[108] 2409.09270

Error estimates of finite element methods for nonlocal problems using exact or approximated interaction neighborhoods

We study the asymptotic error between the finite element solutions of nonlocal models with a bounded interaction neighborhood and the exact solution of the limiting local model. The limit corresponds to the case when the horizon parameter, the radius of the spherical nonlocal interaction neighborhood of the nonlocal model, and the mesh size simultaneously approach zero. Two important cases are discussed: one involving the original nonlocal models and the other for nonlocal models with polygonal approximations of the nonlocal interaction neighborhood. Results of numerical experiments are also reported to substantiate the theoretical studies.


[109] 2409.09271

Python Symbolic Execution with LLM-powered Code Generation

Symbolic execution is a key technology in software testing, which generates test cases by collecting symbolic path constraints and then solving the constraints with SMT solvers. Symbolic execution has been proven helpful in generating high-coverage test cases, but its limitations, e.g., the difficulties in solving path constraints, prevent it from broader usage in software testing. Moreover, symbolic execution has encountered many difficulties when applied to dynamically typed languages like Python, because it is extremely challenging to translate the flexible Python grammar into rigid solvers. To overcome the main challenges of applying symbolic execution to Python, we propose an LLM-empowered agent, LLM-Sym, that automatically calls an SMT solver, Z3, to solve execution path constraints. Building on an introductory-level symbolic execution engine, our LLM agent extends it to support programs with the complex data type `list'. The core contribution of LLM-Sym is translating complex Python path constraints into Z3 code. To enable accurate path-to-Z3 translation, we design a multi-step code generation pipeline including type inference, retrieval, and self-refinement. Our experiments demonstrate that LLM-Sym is capable of solving path constraints on Leetcode problems with complicated control flows and list data structures, which is impossible for the backbone symbolic execution engine. Our approach paves the way for combining the generation ability of LLMs with the reasoning ability of symbolic solvers, and opens up new opportunities in LLM-augmented test case generation.
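
The target of that translation is ordinary z3py code; for instance, a path condition collected along one branch of a program can be handed to Z3 as follows (list constraints would additionally require Z3's sequence or array theories):

    from z3 import Ints, Solver, sat

    # path condition from, e.g., `if a + b > 10 and a < b:` taken along the true branch
    a, b = Ints("a b")
    s = Solver()
    s.add(a + b > 10, a < b)
    if s.check() == sat:
        m = s.model()
        print({"a": m[a].as_long(), "b": m[b].as_long()})  # concrete inputs covering this path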


[110] 2409.09272

SafeEar: Content Privacy-Preserving Audio Deepfake Detection

Text-to-Speech (TTS) and Voice Conversion (VC) models have exhibited remarkable performance in generating realistic and natural audio. However, their dark side, audio deepfakes, poses a significant threat to both society and individuals. Existing countermeasures largely focus on determining the genuineness of speech based on complete original audio recordings, which however often contain private content. This oversight may keep deepfake detection out of many applications, particularly in scenarios involving sensitive information like business secrets. In this paper, we propose SafeEar, a novel framework that aims to detect deepfake audio without relying on access to the speech content within. Our key idea is to turn a neural audio codec into a novel decoupling model that separates the semantic and acoustic information of audio samples, and to use only the acoustic information (e.g., prosody and timbre) for deepfake detection. In this way, no semantic content is exposed to the detector. To overcome the challenge of identifying diverse deepfake audio without semantic clues, we enhance our deepfake detector with real-world codec augmentation. Extensive experiments conducted on four benchmark datasets demonstrate SafeEar's effectiveness in detecting various deepfake techniques with an equal error rate (EER) down to 2.02%. Simultaneously, it shields five-language speech content from being deciphered by both machine and human auditory analysis, demonstrated by word error rates (WERs) all above 93.93% and our user study. Furthermore, our benchmark constructed for anti-deepfake and anti-content recovery evaluation helps provide a basis for future research in the realms of audio privacy preservation and deepfake detection.


[111] 2409.09273

Leveraging Foundation Models for Efficient Federated Learning in Resource-restricted Edge Networks

Recently, pre-trained Foundation Models (FMs) have been combined with Federated Learning (FL) to improve the training of downstream tasks while preserving privacy. However, deploying FMs over edge networks with resource-constrained Internet of Things (IoT) devices is under-explored. This paper proposes a novel framework, namely, Federated Distilling knowledge to Prompt (FedD2P), for leveraging the robust representation abilities of a vision-language FM without deploying it locally on edge devices. This framework distills the aggregated knowledge of IoT devices to a prompt generator to efficiently adapt the frozen FM for downstream tasks. To eliminate the dependency on a public dataset, our framework leverages per-class local knowledge from IoT devices and linguistic descriptions of classes to train the prompt generator. Our experiments on the diverse image classification datasets CIFAR, OxfordPets, SVHN, EuroSAT, and DTD show that FedD2P outperforms the baselines in terms of model performance.


[112] 2409.09274

LabellessFace: Fair Metric Learning for Face Recognition without Attribute Labels

Demographic bias is one of the major challenges for face recognition systems. The majority of existing studies on demographic bias are heavily dependent on specific demographic groups or a demographic classifier, making it difficult to address performance for unrecognised groups. This paper introduces ``LabellessFace'', a novel framework that mitigates demographic bias in face recognition without requiring the demographic group labels typically needed for fairness considerations. We propose a novel fairness enhancement metric called the class favoritism level, which assesses the extent of favoritism towards specific classes across the dataset. Leveraging this metric, we introduce the fair class margin penalty, an extension of existing margin-based metric learning. This method dynamically adjusts learning parameters based on class favoritism levels, promoting fairness across all attributes. By treating each class as an individual in facial recognition systems, we facilitate learning that minimizes biases in authentication accuracy among individuals. Comprehensive experiments have demonstrated that our proposed method is effective for enhancing fairness while maintaining authentication accuracy.
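
A sketch of what a per-class margin penalty can look like on top of an ArcFace-style head (our illustration; the margins vector is assumed to be maintained from the paper's class favoritism level, whose update rule is not reproduced here):

    import torch
    import torch.nn.functional as F

    def fair_margin_logits(embeddings, weight, labels, margins, scale=64.0):
        # cosine similarity between normalized embeddings and class prototypes
        cos = F.normalize(embeddings) @ F.normalize(weight).t()
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        onehot = F.one_hot(labels, num_classes=weight.size(0)).bool()
        # additive angular margin applied only to each sample's own class,
        # with a class-specific size instead of ArcFace's global constant
        margined = torch.cos(theta + margins[labels].unsqueeze(1))
        return scale * torch.where(onehot, margined, cos)

    # loss = F.cross_entropy(fair_margin_logits(emb, W, y, margins), y)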


[113] 2409.09276

Visuo-Tactile Zero-Shot Object Recognition with Vision-Language Model

Tactile perception is vital, especially when distinguishing visually similar objects. We propose an approach to incorporate tactile data into a Vision-Language Model (VLM) for visuo-tactile zero-shot object recognition. Our approach leverages the zero-shot capability of VLMs to infer tactile properties from the names of tactilely similar objects. The proposed method translates tactile data into a textual description solely by annotating object names for each tactile sequence during training, making it adaptable to various contexts with low training costs. The proposed method was evaluated on the FoodReplica and Cube datasets, demonstrating its effectiveness in recognizing objects that are difficult to distinguish by vision alone.


[114] 2409.09278

Evaluating the Impact of Inter-cluster Communications in Edge Computing

Distributed applications based on micro-services in edge computing are becoming increasingly popular due to the rapid evolution of mobile networks. While Kubernetes is the default framework when it comes to orchestrating and managing micro-service-based applications in mobile networks, the requirement to run applications between multiple sites at cloud and edge poses new challenges. Since Kubernetes does not natively provide tools to abstract inter-cluster communications at the application level, inter-cluster communication in edge computing is becoming increasingly critical to the application performance. In this paper, we evaluate for the first time the impact of inter-cluster communication on edge computing performance by using three prominent, open source inter-cluster communication projects and tools, i.e., Submariner, ClusterLink and Skupper. We develop a fully open-source testbed that integrates these tools in a modular fashion, and experimentally benchmark sample applications, including the ML class of applications, on their performance running in the multi-cluster edge computing system under varying networking conditions. We experimentally analyze two classes of envisioned mobile applications, i.e., a) industrial automation, b) vehicle decision drive assist. Our results show that Submariner performs best out of the three tools in scenarios with small payloads, regardless of the underlying networking conditions or transmission direction between clusters. When sending larger data to a service, ClusterLink outperforms Submariner once the inter-node networking conditions deteriorate, which may be the case in highly mobile scenarios in edge computing. Finally, Skupper significantly outperforms others in a variety of scenarios with larger payloads.


[115] 2409.09280

An empirical evaluation of using ChatGPT to summarize disputes for recommending similar labor and employment cases in Chinese

We present a hybrid mechanism for recommending similar cases of labor and employment litigations. The classifier determines the similarity between two cases based on their itemized disputes, which the courts prepared. We cluster the disputes, compute the cosine similarity between the disputes, and use the results as the features for the classification tasks. Experimental results indicate that this hybrid approach outperformed our previous system, which considered only the information about the clusters of the disputes. We then replaced the disputes that were prepared by the courts with itemized disputes that were generated by GPT-3.5 and GPT-4, and repeated the same experiments. Using the disputes generated by GPT-4 led to better results. Although our classifier did not perform as well when using the disputes that ChatGPT generated, the results were satisfactory. Hence, we hope that future large language models will become practically useful.


[116] 2409.09281

Language Models "Grok" to Copy

We examine the pre-training dynamics of language models, focusing on their ability to copy text from preceding context--a fundamental skill for various LLM applications, including in-context learning (ICL) and retrieval-augmented generation (RAG). We propose a novel perspective that Transformer-based language models develop copying abilities similarly to grokking, which refers to sudden generalization on the test set long after the model has fit the training set. Our experiments yield three arguments: (1) The pre-training loss decreases rapidly, while the context copying ability of models initially lags and then abruptly saturates. (2) The speed of developing copying ability is independent of the number of tokens trained on, similarly to how grokking speed is unaffected by dataset size as long as the data distribution is preserved. (3) Induction heads, the attention heads responsible for copying, form from shallow to deep layers during training, mirroring the development of circuits in deeper layers during grokking. We contend that the connection between grokking and context copying can provide valuable insights for more effective language model training, ultimately improving in-context performance. For example, we demonstrate that techniques that enhance grokking, such as regularization, either accelerate or enhance the development of context copying.


[117] 2409.09282

Turbo your multi-modal classification with contrastive learning

Contrastive learning has become one of the most impressive approaches for multi-modal representation learning. However, previous multi-modal works mainly focused on cross-modal understanding, ignoring in-modal contrastive learning, which limits the representation of each modality. In this paper, we propose a novel contrastive learning strategy, called $Turbo$, to promote multi-modal understanding by joint in-modal and cross-modal contrastive learning. Specifically, multi-modal data pairs are sent through the forward pass twice with different hidden dropout masks to get two different representations for each modality. With these representations, we obtain multiple in-modal and cross-modal contrastive objectives for training. Finally, we combine the self-supervised Turbo with the supervised multi-modal classification and demonstrate its effectiveness on two audio-text classification tasks, where the state-of-the-art performance is achieved on a speech emotion recognition benchmark dataset.
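
The two-pass trick is straightforward to sketch in PyTorch (our reading of the strategy, not the released implementation): encode each batch twice with dropout active so the passes see different masks, then form in-modal and cross-modal InfoNCE terms:

    import torch
    import torch.nn.functional as F

    def turbo_style_loss(text_enc, audio_enc, text, audio, tau=0.07):
        # encoders must be in train mode so the two passes use different dropout masks
        t1, t2 = text_enc(text), text_enc(text)      # two views of the text modality
        a1, a2 = audio_enc(audio), audio_enc(audio)  # two views of the audio modality

        def nce(q, k):
            q, k = F.normalize(q, dim=-1), F.normalize(k, dim=-1)
            logits = q @ k.t() / tau
            labels = torch.arange(q.size(0), device=q.device)
            return F.cross_entropy(logits, labels)   # matched pairs are positives

        return nce(t1, t2) + nce(a1, a2) + nce(t1, a1) + nce(a2, t2)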


[118] 2409.09284

M$^{3}$V: A multi-modal multi-view approach for Device-Directed Speech Detection

With the goal of more natural and human-like interaction with virtual voice assistants, recent research in the field has focused on full duplex interaction mode without relying on repeated wake-up words. This requires that in scenes with complex sound sources, the voice assistant must classify utterances as device-oriented or non-device-oriented. The dual-encoder structure, which is jointly modeled by text and speech, has become the paradigm of device-directed speech detection. However, in practice, these models often produce incorrect predictions for unaligned input pairs due to the unavoidable errors of automatic speech recognition (ASR). To address this challenge, we propose M$^{3}$V, a multi-modal multi-view approach for device-directed speech detection, which frames the problem as a multi-view learning task, introducing unimodal views and a text-audio alignment view into the network besides the multi-modal view. Experimental results show that M$^{3}$V significantly outperforms models trained using only single or multi-modality and surpasses human judgment performance on ASR error data for the first time.


[119] 2409.09285

Capability Augmentation for Heterogeneous Dynamic Teaming with Temporal Logic Tasks

This paper considers how heterogeneous multi-agent teams can leverage their different capabilities to mutually improve individual agent performance. We present Capability-Augmenting Tasks (CATs), which encode how agents can augment their capabilities based on interactions with other teammates. Our framework integrates CAT into the semantics of Metric Temporal Logic (MTL), which defines individual spatio-temporal tasks for all agents. A centralized Mixed-Integer Program (MIP) is used to synthesize trajectories for all agents. We compare the expressivity of our approach to a baseline of Capability Temporal Logic Plus (CaTL+). Case studies demonstrate that our approach allows for simpler specifications and improves individual performance when agents leverage the capabilities of their teammates.


[120] 2409.09286

SAM-OCTA2: Layer Sequence OCTA Segmentation with Fine-tuned Segment Anything Model 2

Segmentation of indicated targets aids in the precise analysis of optical coherence tomography angiography (OCTA) samples. Existing segmentation methods typically operate on 2D projection targets, making it challenging to capture the variance of segmented objects through the 3D volume. To address this limitation, the low-rank adaptation technique is adopted to fine-tune the Segment Anything Model (SAM) version 2, enabling the tracking and segmentation of specified objects across the OCTA scanning layer sequence. To further this work, a prompt point generation strategy for frame sequences and a sparse annotation method to acquire retinal vessel (RV) layer masks are proposed. This method is named SAM-OCTA2 and has been experimented on the OCTA-500 dataset. It achieves state-of-the-art performance in segmenting the foveal avascular zone (FAZ) on regular 2D en-face images and effectively tracks local vessels across scanning layer sequences. The code is available at: https://github.com/ShellRedia/SAM-OCTA2.


[121] 2409.09287

Panoramic Direct LiDAR-assisted Visual Odometry

Enhancing visual odometry by exploiting sparse depth measurements from LiDAR is a promising solution for improving the tracking accuracy of odometry. Most existing works utilize a monocular pinhole camera, yet could suffer from poor robustness due to the limited information available from a narrow field-of-view (FOV). This paper proposes a panoramic direct LiDAR-assisted visual odometry, which fully associates the 360-degree FOV LiDAR points with the 360-degree FOV panoramic images. 360-degree FOV panoramic images can provide more information, which can compensate for inaccurate pose estimation caused by insufficient texture or motion blur from a single view. In addition to constraints between a specific view at different times, constraints can also be built between different views at the same moment. Experimental results on public datasets demonstrate the benefit of the large FOV of our panoramic direct LiDAR-assisted visual odometry over state-of-the-art approaches.


[122] 2409.09288

Generating API Parameter Security Rules with LLM for API Misuse Detection

In this paper, we present a new framework, named GPTAid, for the automatic generation of API parameter security rules (APSRs) by analyzing API source code with an LLM, and for detecting API misuse caused by incorrect parameter use. To validate the correctness of the LLM-generated APSRs, we propose an execution feedback-checking approach based on the observation that security-critical API misuse is often caused by APSR violations, and most of them result in runtime errors. Specifically, GPTAid first uses the LLM to generate raw APSRs and the Right calling code, and then generates Violation code for each raw APSR by modifying the Right calling code using the LLM. Subsequently, GPTAid performs dynamic execution on each piece of Violation code and filters out the incorrect APSRs based on runtime errors. To further generate concrete APSRs, GPTAid employs a code differential analysis to refine the filtered ones. In particular, as programming languages are more precise than natural language, GPTAid identifies the key operations within Violation code by differential analysis, and then generates the corresponding concrete APSR based on the aforementioned operations. These concrete APSRs can be precisely interpreted into applicable detection code, which proves effective in API misuse detection. On a dataset containing 200 randomly selected APIs from eight popular libraries, GPTAid achieves a precision of 92.3%. Moreover, it generates 6 times more APSRs than state-of-the-art detectors on a comparison dataset of previously reported bugs and APSRs. We further evaluated GPTAid on 47 applications and found 210 previously unknown security bugs that could potentially result in severe security issues (e.g., system crashes); 150 of them have been confirmed by developers after our reports.


[123] 2409.09289

DSCLAP: Domain-Specific Contrastive Language-Audio Pre-Training

Analyzing real-world multimodal signals is an essential and challenging task for intelligent voice assistants (IVAs). Mainstream approaches have achieved remarkable performance on various downstream tasks of IVAs with pre-trained audio models and text models. However, these models are pre-trained independently and usually on tasks different from the target domains, resulting in sub-optimal modality representations for downstream tasks. Moreover, in many domains, collecting enough language-audio pairs is extremely hard, and transcribing raw audio also requires professional skills, making joint pre-training difficult or even infeasible. To address these pain points, we propose DSCLAP, a simple and effective framework that enables language-audio pre-training with only raw audio signal input. Specifically, DSCLAP converts raw audio signals into text via an ASR system and combines a contrastive learning objective and a language-audio matching objective to align the audio and the ASR transcriptions. We pre-train DSCLAP on 12,107 hours of in-vehicle domain audio. Empirical results on two downstream tasks show that while conceptually simple, DSCLAP significantly outperforms the baseline models in all metrics, showing great promise for domain-specific IVA applications.


[124] 2409.09291

Infrared and Visible Image Fusion with Hierarchical Human Perception

Image fusion combines images from multiple domains into one image, containing complementary information from the source domains. Existing methods take pixel intensity, texture, and high-level vision task information as the standards to determine the preservation of information, lacking enhancement for human perception. We introduce an image fusion method, Hierarchical Perception Fusion (HPFusion), which leverages a Large Vision-Language Model to incorporate hierarchical human semantic priors, preserving complementary information that satisfies the human visual system. We propose multiple questions that humans focus on when viewing an image pair, and answers are generated via the Large Vision-Language Model according to the images. The texts of the answers are encoded into the fusion network, and the optimization also aims to guide the human semantic distribution of the fused image closer to that of the source images, exploring complementary information within the human perception domain. Extensive experiments demonstrate that our HPFusion can achieve high-quality fusion results both for information preservation and human visual enhancement.


[125] 2409.09292

StyleTalk++: A Unified Framework for Controlling the Speaking Styles of Talking Heads

Individuals have unique facial expression and head pose styles that reflect their personalized speaking styles. Existing one-shot talking head methods cannot capture such personalized characteristics and therefore fail to produce diverse speaking styles in the final videos. To address this challenge, we propose a one-shot style-controllable talking face generation method that can obtain speaking styles from reference speaking videos and drive the one-shot portrait to speak with the reference speaking styles and another piece of audio. Our method aims to synthesize the style-controllable coefficients of a 3D Morphable Model (3DMM), including facial expressions and head movements, in a unified framework. Specifically, the proposed framework first leverages a style encoder to extract the desired speaking styles from the reference videos and transform them into style codes. Then, the framework uses a style-aware decoder to synthesize the coefficients of 3DMM from the audio input and style codes. During decoding, our framework adopts a two-branch architecture, which generates the stylized facial expression coefficients and stylized head movement coefficients, respectively. After obtaining the coefficients of 3DMM, an image renderer renders the expression coefficients into a specific person's talking-head video. Extensive experiments demonstrate that our method generates visually authentic talking head videos with diverse speaking styles from only one portrait image and an audio clip.


[126] 2409.09293

Associate Everything Detected: Facilitating Tracking-by-Detection to the Unknown

Multi-object tracking (MOT) emerges as a pivotal and highly promising branch in the field of computer vision. Classical closed-vocabulary MOT (CV-MOT) methods aim to track objects of predefined categories. Recently, some open-vocabulary MOT (OV-MOT) methods have successfully addressed the problem of tracking unknown categories. However, we found that the CV-MOT and OV-MOT methods each struggle to excel in the tasks of the other. In this paper, we present a unified framework, Associate Everything Detected (AED), that simultaneously tackles CV-MOT and OV-MOT by integrating with any off-the-shelf detector and supports unknown categories. Different from existing tracking-by-detection MOT methods, AED gets rid of prior knowledge (e.g. motion cues) and relies solely on highly robust feature learning to handle complex trajectories in OV-MOT tasks while keeping excellent performance in CV-MOT tasks. Specifically, we model the association task as a similarity decoding problem and propose a sim-decoder with an association-centric learning mechanism. The sim-decoder calculates similarities in three aspects: spatial, temporal, and cross-clip. Subsequently, association-centric learning leverages these threefold similarities to ensure that the extracted features are appropriate for continuous tracking and robust enough to generalize to unknown categories. Compared with existing powerful OV-MOT and CV-MOT methods, AED achieves superior performance on TAO, SportsMOT, and DanceTrack without any prior knowledge. Our code is available at https://github.com/balabooooo/AED.


[127] 2409.09294

Subband Splitting: Simple, Efficient and Effective Technique for Solving Block Permutation Problem in Determined Blind Source Separation

Solving the permutation problem is essential for determined blind source separation (BSS). Existing methods, such as independent vector analysis (IVA) and independent low-rank matrix analysis (ILRMA), tackle the permutation problem by modeling the co-occurrence of the frequency components of source signals. One of the remaining challenges in these methods is the block permutation problem, which may lead to poor separation results. In this paper, we propose a simple and effective technique for solving the block permutation problem. The proposed technique splits the entire frequencies into overlapping subbands and sequentially applies a BSS method (e.g., IVA, ILRMA, or any other method) to each subband. Since the problem size is reduced by the splitting, the BSS method can effectively work in each subband. Then, the permutations between the subbands are aligned by using the separation result in one subband as the initial values for the other subbands. Experimental results showed that the proposed technique remarkably improved the separation performance without increasing the total computational cost.
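
A minimal sketch of the technique, assuming a hypothetical bss_method(Xb, W_init) interface (e.g., wrapping IVA or ILRMA) that returns per-frequency demixing matrices:

    import numpy as np

    def subband_bss(X, bss_method, n_bands=4, overlap=0.25):
        # X: complex STFT tensor of shape (freq, time, channels)
        F_, _, M = X.shape
        width = F_ // n_bands
        hop = max(1, int(width * (1 - overlap)))       # overlapping subbands
        W = np.tile(np.eye(M, dtype=complex), (width, 1, 1))   # identity init, first band
        demix = np.tile(np.eye(M, dtype=complex), (F_, 1, 1))
        for lo in range(0, F_ - width + 1, hop):
            W = bss_method(X[lo:lo + width], W_init=W) # previous band warm-starts this one,
            demix[lo:lo + width] = W                   # aligning permutations across bands
        return demix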


[128] 2409.09295

GEVO: Memory-Efficient Monocular Visual Odometry Using Gaussians

Constructing a high-fidelity representation of the 3D scene using a monocular camera can enable a wide range of applications on mobile devices, such as micro-robots, smartphones, and AR/VR headsets. On these devices, memory is often limited in capacity and its access often dominates the consumption of compute energy. Although Gaussian Splatting (GS) allows for high-fidelity reconstruction of 3D scenes, current GS-based SLAM is not memory efficient, as a large number of past images is stored to retrain Gaussians to reduce catastrophic forgetting. These images often require two orders of magnitude more memory than the map itself and thus dominate the total memory usage. In this work, we present GEVO, a GS-based monocular SLAM framework that achieves comparable fidelity to prior methods by rendering past images from the existing map instead of storing them. Novel Gaussian initialization and optimization techniques are proposed to remove artifacts from the map and delay the degradation of the rendered images over time. Across a variety of environments, GEVO achieves comparable map fidelity while reducing the memory overhead to around 58 MBs, which is up to 94x lower than prior works.


[129] 2409.09296

Developing an Interactive OpenMP Programming Book with Large Language Models

This paper presents an approach to authoring a textbook titled Interactive OpenMP Programming with the assistance of Large Language Models (LLMs). The writing process utilized state-of-the-art LLMs, including Gemini Pro 1.5, Claude 3, and ChatGPT-4, to generate the initial structure and outline of the book, as well as the initial content for specific chapters. This content included detailed descriptions of individual OpenMP constructs and practical programming examples. The outline and content have then undergone extensive manual revisions to meet our book goals. In this paper, we report our findings about the capabilities and limitations of these LLMs. We address critical questions concerning the necessity of textbook resources and the effectiveness of LLMs in creating fundamental and practical programming content. Our findings suggest that while LLMs offer significant advantages in generating textbook content, they require careful integration with traditional educational methodologies to ensure depth, accuracy, and pedagogical effectiveness. The Interactive OpenMP Programming book is developed with the framework of Jupyter Book, enabling the execution of code within the book from the web browser, providing instant feedback and a dynamic learning experience that stands in contrast to traditional educational resources. The book represents a significant step towards modernizing programming education, offering insights into practical strategies for generating the textbook through advanced AI tools.


[130] 2409.09298

Matrix Profile for Anomaly Detection on Multidimensional Time Series

The Matrix Profile (MP), a versatile tool for time series data mining, has been shown effective in time series anomaly detection (TSAD). This paper delves into the problem of anomaly detection in multidimensional time series, a common occurrence in real-world applications. For instance, in a manufacturing factory, multiple sensors installed across the site collect time-varying data for analysis. The Matrix Profile, named for its role in profiling the matrix storing pairwise distances between subsequences of a univariate time series, becomes complex in multidimensional scenarios. If the input univariate time series has n subsequences, the pairwise distance matrix is an n x n matrix. In a multidimensional time series with d dimensions, the pairwise distance information must be stored in an n x n x d tensor. In this paper, we first analyze different strategies for condensing this tensor into a profile vector. We then investigate the potential of extending the MP to efficiently find k-nearest neighbors for anomaly detection. Finally, we benchmark the multidimensional MP against 19 baseline methods on 119 multidimensional TSAD datasets. The experiments cover three learning setups: unsupervised, supervised, and semi-supervised. MP is the only method that consistently delivers high performance across all setups.
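
To illustrate the condensation step, a naive sketch (assuming the n x n x d distance tensor is already computed and trivial self-matches are masked to infinity):

    import numpy as np

    def condensed_profile(dist_tensor, strategy="mean"):
        # dist_tensor[i, j, k]: distance between subsequences i and j in dimension k
        if strategy == "mean":
            D = dist_tensor.mean(axis=2)   # average agreement across dimensions
        elif strategy == "max":
            D = dist_tensor.max(axis=2)    # a match must be close in every dimension
        else:
            D = dist_tensor.min(axis=2)    # a match in any one dimension suffices
        return D.min(axis=1)               # profile: distance to each nearest neighbor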


[131] 2409.09299

Kernel-Based Regularized Continuous-Time System Identification from Sampled Data

The identification of continuous-time (CT) systems from discrete-time (DT) input and output signals, i.e., the sampled data, has received considerable attention for half a century. The state-of-the-art methods are parametric methods and thus subject to the typical issues of parametric methods. In the last decade, a major advance in system identification is the so-called kernel-based regularization method (KRM), which is free of the issues of parametric methods. It is interesting to test the potential of KRM on CT system identification. However, very few results have been reported, mainly because the estimators have no closed forms for general CT input signals, except for some very special cases. In this paper, we show for KRM that the estimators have closed forms when the DT input signal has the typical intersample behavior, i.e., zero-order hold or band-limited, and this paves the way for the application of KRM for CT system identification. Numerical Monte Carlo simulations show that the proposed method is more robust than the state-of-the-art methods and more accurate when the sample size is small.


[132] 2409.09300

ManiDext: Hand-Object Manipulation Synthesis via Continuous Correspondence Embeddings and Residual-Guided Diffusion

Dynamic and dexterous manipulation of objects presents a complex challenge, requiring the synchronization of hand motions with the trajectories of objects to achieve seamless and physically plausible interactions. In this work, we introduce ManiDext, a unified hierarchical diffusion-based framework for generating hand manipulation and grasp poses based on 3D object trajectories. Our key insight is that accurately modeling the contact correspondences between objects and hands during interactions is crucial. Therefore, we propose a continuous correspondence embedding representation that specifies detailed hand correspondences at the vertex level between the object and the hand. This embedding is optimized directly on the hand mesh in a self-supervised manner, with the distance between embeddings reflecting the geodesic distance. Our framework first generates contact maps and correspondence embeddings on the object's surface. Based on these fine-grained correspondences, we introduce a novel approach that integrates the iterative refinement process into the diffusion process during the second stage of hand pose generation. At each step of the denoising process, we incorporate the current hand pose residual as a refinement target into the network, guiding the network to correct inaccurate hand poses. Introducing residuals into each denoising step inherently aligns with the traditional optimization process, effectively merging generation and refinement into a single unified framework. Extensive experiments demonstrate that our approach can generate physically plausible and highly realistic motions for various tasks, including single and bimanual hand grasping as well as manipulating both rigid and articulated objects. Code will be available for research purposes.


[133] 2409.09302

Heterogeneous Roles against Assignment Based Policies in Two vs Two Target Defense Game

In this paper, we consider a target defense game in which the attacker team seeks to reach a high-value target while the defender team seeks to prevent that by capturing them away from the target. To address the curse of dimensionality, a popular approach to solving such a team-vs-team game is to decompose it into a set of one-vs-one games. Such an approximation assumes independence between teammates assigned to different one-vs-one games, ignoring the possibility of a richer set of cooperative behaviors and ultimately leading to suboptimality. We provide teammate-aware strategies for the attacker team and show that they can outperform the assignment-based strategy if the defenders still employ an assignment-based strategy. More specifically, the attacker strategy involves heterogeneous roles where one attacker actively intercepts a defender to help its teammate reach the target. We provide sufficient conditions under which such a strategy benefits the attackers, and we validate the results using numerical simulations.


[134] 2409.09304

Consistent Spectral Clustering in Hyperbolic Spaces

Clustering, as an unsupervised technique, plays a pivotal role in various data analysis applications. Among clustering algorithms, spectral clustering on Euclidean spaces has been extensively studied. However, with the rapid evolution of data complexity, Euclidean space is proving inefficient for both data representation and learning algorithms. Although deep neural networks on hyperbolic spaces have gained recent traction, clustering algorithms and non-deep machine learning models on non-Euclidean spaces remain underexplored. In this paper, we propose a spectral clustering algorithm on hyperbolic spaces to address this gap. Hyperbolic spaces offer advantages in representing complex data structures, such as hierarchical and tree-like structures, which cannot be embedded efficiently in Euclidean spaces. Our proposed algorithm replaces the Euclidean similarity matrix with an appropriate hyperbolic similarity matrix, demonstrating improved efficiency compared to clustering in Euclidean spaces. Our contributions include the development of the spectral clustering algorithm on hyperbolic spaces and a proof of its weak consistency. We show that our algorithm converges at least as fast as spectral clustering on Euclidean spaces. To illustrate the efficacy of our approach, we present experimental results on the Wisconsin Breast Cancer Dataset, highlighting the superior performance of hyperbolic spectral clustering over its Euclidean counterpart. This work opens up avenues for utilizing non-Euclidean spaces in clustering algorithms, offering new perspectives for handling complex data structures and improving clustering efficiency.
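
A minimal sketch of the core idea, assuming the data already live in the Poincare ball model: build the similarity matrix from hyperbolic rather than Euclidean distances and hand it to an off-the-shelf spectral clustering routine (scikit-learn here). The paper's own algorithm and its consistency analysis are not reproduced.

    import numpy as np
    from sklearn.cluster import SpectralClustering

    def poincare_dist(X):
        """Pairwise distances in the Poincare ball (rows of X, ||x|| < 1)."""
        sq = (X ** 2).sum(1)
        D2 = np.clip(sq[:, None] + sq[None, :] - 2 * X @ X.T, 0, None)
        denom = np.clip((1 - sq)[:, None] * (1 - sq)[None, :], 1e-12, None)
        return np.arccosh(np.clip(1 + 2 * D2 / denom, 1.0, None))

    def hyperbolic_spectral_clustering(X, k, gamma=1.0):
        W = np.exp(-gamma * poincare_dist(X) ** 2)  # hyperbolic similarity matrix
        return SpectralClustering(n_clusters=k, affinity="precomputed").fit_predict(W)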


[135] 2409.09305

The T05 System for The VoiceMOS Challenge 2024: Transfer Learning from Deep Image Classifier to Naturalness MOS Prediction of High-Quality Synthetic Speech

We present our system (denoted as T05) for the VoiceMOS Challenge (VMC) 2024. Our system was designed for VMC 2024 Track 1, which focused on the accurate prediction of the naturalness mean opinion score (MOS) of high-quality synthetic speech. In addition to a pretrained self-supervised learning (SSL)-based speech feature extractor, our system incorporates a pretrained image feature extractor to capture the differences among synthetic speech samples that are observable in speech spectrograms. We first separately train two MOS predictors that use either an SSL-based or a spectrogram-based feature. Then, we fine-tune the two predictors for better MOS prediction using the fusion of the two extracted features. In VMC 2024 Track 1, our T05 system achieved first place in 7 out of 16 evaluation metrics and second place in the remaining 9 metrics, with a significant margin over the systems ranked third and below. We also report the results of our ablation study to investigate essential factors of our system.
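
A minimal sketch of the late-fusion stage, with illustrative feature dimensions and layer sizes; the actual T05 extractors, training recipe, and fusion details are in the paper.

    import torch
    import torch.nn as nn

    class FusionMOSPredictor(nn.Module):
        """Concatenate SSL-based and spectrogram-based embeddings, regress MOS."""
        def __init__(self, ssl_dim=768, spec_dim=512, hidden=256):
            super().__init__()
            self.head = nn.Sequential(
                nn.Linear(ssl_dim + spec_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, 1))

        def forward(self, ssl_feat, spec_feat):
            return self.head(torch.cat([ssl_feat, spec_feat], dim=-1))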


[136] 2409.09306

Keypoints-Integrated Instruction-Following Data Generation for Enhanced Human Pose Understanding in Multimodal Models

Current multimodal models are well-suited for general visual understanding tasks. However, they perform inadequately when handling complex visual tasks related to human poses and actions, primarily due to the lack of specialized instruction-following data. We introduce a new method for generating such data by integrating human keypoints with traditional visual features like captions and bounding boxes. Our approach produces datasets designed for fine-tuning models to excel in human-centric activities, focusing on three specific types: conversation, detailed description, and complex reasoning. We fine-tuned the LLaVA-7B model with this novel dataset, achieving significant improvements across various human pose-related tasks. Experimental results show an overall improvement of 21.18% compared to the original LLaVA-7B model. These findings demonstrate the effectiveness of keypoints-assisted data in enhancing multimodal models.


[137] 2409.09312

Registration between Point Cloud Streams and Sequential Bounding Boxes via Gradient Descent

In this paper, we propose an algorithm for registering sequential bounding boxes with point cloud streams. Unlike in popular point cloud registration techniques, the alignment of a point cloud with a bounding box can rely on properties of the bounding box, such as its size, shape, and temporal information, which provide substantial support and performance gains. Motivated by this, we propose a new approach to tackle this problem. Specifically, we model the registration process through an overall objective function that includes the final goal and all constraints. We then optimize the function using gradient descent. Our experiments show that the proposed method performs remarkably well, with a 40\% improvement in IoU, and demonstrates more robust registration between point cloud streams and sequential bounding boxes.
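
A toy sketch of the optimization loop, assuming a rigid 2D transform and using box centers as the alignment target; the paper's objective additionally encodes box size, shape, and temporal constraints.

    import torch

    def register(points, centers, iters=200, lr=0.05):
        """Fit rotation angle theta and translation t aligning points (N, 2)
        to per-frame box centers (N, 2) by gradient descent."""
        theta = torch.zeros(1, requires_grad=True)
        t = torch.zeros(2, requires_grad=True)
        opt = torch.optim.Adam([theta, t], lr=lr)
        for _ in range(iters):
            opt.zero_grad()
            R = torch.stack([torch.cat([torch.cos(theta), -torch.sin(theta)]),
                             torch.cat([torch.sin(theta), torch.cos(theta)])])
            loss = ((points @ R.T + t - centers) ** 2).mean()  # toy objective
            loss.backward()
            opt.step()
        return theta.detach(), t.detach()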


[138] 2409.09313

Tensor-Based Synchronization and the Low-Rankness of the Block Trifocal Tensor

The block tensor of trifocal tensors provides crucial geometric information on the three-view geometry of a scene. The underlying synchronization problem seeks to recover camera poses (locations and orientations up to a global transformation) from the block trifocal tensor. We establish an explicit Tucker factorization of this tensor, revealing a low multilinear rank of $(6,4,4)$ independent of the number of cameras under appropriate scaling conditions. We prove that this rank constraint provides sufficient information for camera recovery in the noiseless case. The constraint motivates a synchronization algorithm based on the higher-order singular value decomposition of the block trifocal tensor. Experimental comparisons with state-of-the-art global synchronization methods on real datasets demonstrate the potential of this algorithm for significantly improving location estimation accuracy. Overall this work suggests that higher-order interactions in synchronization problems can be exploited to improve performance, beyond the usual pairwise-based approaches.
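
The $(6,4,4)$ constraint can be imposed via the truncated higher-order SVD; a minimal NumPy sketch of that step follows (illustrative, not the paper's full synchronization pipeline).

    import numpy as np

    def hosvd_truncate(T, ranks=(6, 4, 4)):
        """Project each mode of tensor T onto its leading singular subspace."""
        Us = []
        for mode, r in enumerate(ranks):
            M = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)  # unfolding
            U, _, _ = np.linalg.svd(M, full_matrices=False)
            Us.append(U[:, :r])
        core = T
        for mode, U in enumerate(Us):  # core = T contracted with each U^T
            core = np.moveaxis(np.tensordot(U.T, core, axes=(1, mode)), 0, mode)
        approx = core
        for mode, U in enumerate(Us):  # low-multilinear-rank reconstruction
            approx = np.moveaxis(np.tensordot(U, approx, axes=(1, mode)), 0, mode)
        return approx, core, Us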


[139] 2409.09316

Discrete-time Indirect Adaptive Control for Systems with State-Dependent Disturbances via Directional Forgetting: Concurrent Learning Approach

An adaptive controller design for cases with disturbances is critical in practical applications for preventing unexpected control performance degradation and instability. Recently, adaptive control systems with relaxed persistent excitation (PE) conditions have been proposed to solve this problem; however, most discussions have focused on continuous-time systems. In this study, we propose a novel adaptive control method for discrete-time systems with disturbances that combines directional forgetting and concurrent learning. The proposed method does not require the PE condition, information on disturbances, unknown parameters, or matching conditions, and it guarantees exponential uniform ultimate boundedness (UUB). We also theoretically demonstrate that the ultimate bound can be designed via the forgetting factor, which is a design parameter. Numerical simulation results illustrate the effectiveness of the proposed method.
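
A minimal sketch of one common directional-forgetting recursive least-squares update, in which old information is discounted only along the incoming regressor direction; this illustrates the forgetting mechanism alone, under assumed conventions, not the paper's combined algorithm with concurrent learning.

    import numpy as np

    def df_rls_step(R, theta, phi, y, lam=0.98):
        """One update of the information matrix R and parameter estimate theta."""
        denom = phi @ R @ phi
        if denom > 1e-12:
            # discount only the component of R along the new regressor phi
            R = R - (1 - lam) * np.outer(R @ phi, R @ phi) / denom
        R = R + np.outer(phi, phi)  # add the new information
        theta = theta + np.linalg.solve(R, phi) * (y - phi @ theta)
        return R, theta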


[140] 2409.09318

ODE: Open-Set Evaluation of Hallucinations in Multimodal Large Language Models

Hallucination poses a significant challenge for multimodal large language models (MLLMs). However, existing benchmarks for evaluating hallucinations are static, which can lead to potential data contamination. This paper introduces ODE, an open-set, dynamic protocol for evaluating object existence hallucinations in MLLMs. Our framework employs graph structures to model associations between real-world concepts and generates novel samples for both general and domain-specific scenarios. The dynamic combination of concepts, along with various combination principles, ensures a broad sample distribution. Experimental results show that MLLMs exhibit higher hallucination rates on ODE-generated samples, which indicates that ODE effectively avoids data contamination. Moreover, these samples can also be used for fine-tuning to improve MLLM performance on existing benchmarks.


[141] 2409.09319

ChildPlay-Hand: A Dataset of Hand Manipulations in the Wild

Hand-Object Interaction (HOI) is gaining significant attention, particularly with the creation of numerous egocentric datasets driven by AR/VR applications. However, third-person view HOI has received less attention, especially in terms of datasets. Most third-person view datasets are curated for action recognition tasks and feature pre-segmented clips of high-level daily activities, leaving a gap for in-the-wild datasets. To address this gap, we propose ChildPlay-Hand, a novel dataset that includes person and object bounding boxes, as well as manipulation actions. ChildPlay-Hand is unique in: (1) providing per-hand annotations; (2) featuring videos in uncontrolled settings with natural interactions, involving both adults and children; (3) including gaze labels from the ChildPlay-Gaze dataset for joint modeling of manipulations and gaze. The manipulation actions cover the main stages of an HOI cycle, such as grasping, holding or operating, and different types of releasing. To illustrate the value of the dataset, we study two tasks: object-in-hand detection (OiH), i.e., whether a person has an object in their hand, and manipulation stages (ManiS), which is more fine-grained and targets the main stages of manipulation. We benchmark various spatio-temporal and segmentation networks, exploring body vs. hand-region information and comparing pose and RGB modalities. Our findings suggest that ChildPlay-Hand is a challenging new benchmark for modeling HOI in the wild.


[142] 2409.09322

A Compressive Memory-based Retrieval Approach for Event Argument Extraction

Recent works have demonstrated the effectiveness of retrieval augmentation in the Event Argument Extraction (EAE) task. However, existing retrieval-based EAE methods have two main limitations: (1) input length constraints and (2) the gap between the retriever and the inference model. These issues limit the diversity and quality of the retrieved information. In this paper, we propose a Compressive Memory-based Retrieval (CMR) mechanism for EAE, which addresses both limitations. Our compressive memory, designed as a dynamic matrix that effectively caches retrieved information and supports continuous updates, overcomes the input length constraint. Additionally, after pre-loading all candidate demonstrations into the compressive memory, the model further retrieves and filters relevant information from memory based on the input query, bridging the gap between the retriever and the inference model. Extensive experiments show that our method achieves new state-of-the-art performance on three public datasets (RAMS, WikiEvents, ACE05), significantly outperforming existing retrieval-based EAE methods.
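
A toy sketch of a compressive memory as a dynamic matrix with rank-1 writes and linear-attention-style reads; the class name, feature map, and normalization below are illustrative assumptions, not the paper's design.

    import torch

    class CompressiveMemory:
        """Fixed-size key-value cache: write by outer product, read linearly."""
        def __init__(self, dim):
            self.M = torch.zeros(dim, dim)  # memory matrix
            self.z = torch.zeros(dim)       # normalization accumulator

        def write(self, k, v):
            phi = torch.relu(k)             # simple nonnegative feature map
            self.M += torch.outer(phi, v)
            self.z += phi

        def read(self, q):
            phi = torch.relu(q)
            return (phi @ self.M) / (phi @ self.z + 1e-6)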


[143] 2409.09323

Implicit Neural Representations with Fourier Kolmogorov-Arnold Networks

Implicit neural representations (INRs) use neural networks to provide continuous and resolution-independent representations of complex signals with a small number of parameters. However, existing INR models often fail to capture important frequency components specific to each task. To address this issue, we propose a Fourier Kolmogorov-Arnold network (FKAN) for INRs. The proposed FKAN utilizes learnable activation functions modeled as Fourier series in the first layer to effectively control and learn the task-specific frequency components. In addition, the activation functions with learnable Fourier coefficients improve the ability of the network to capture complex patterns and details, which is beneficial for high-resolution and high-dimensional data. Experimental results show that our proposed FKAN model outperforms three state-of-the-art baseline schemes, improving the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) on the image representation task and the intersection over union (IoU) on the 3D occupancy volume representation task.
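
A minimal PyTorch sketch of a learnable Fourier-series activation of the kind used in the first FKAN layer; the harmonic count and initialization are assumptions.

    import torch
    import torch.nn as nn

    class FourierActivation(nn.Module):
        """phi(x) = sum_k a_k sin(k x) + b_k cos(k x), applied elementwise."""
        def __init__(self, num_harmonics=8):
            super().__init__()
            self.a = nn.Parameter(0.1 * torch.randn(num_harmonics))
            self.b = nn.Parameter(0.1 * torch.randn(num_harmonics))
            self.register_buffer("k", torch.arange(1, num_harmonics + 1).float())

        def forward(self, x):
            kx = x.unsqueeze(-1) * self.k  # (..., num_harmonics)
            return (self.a * torch.sin(kx) + self.b * torch.cos(kx)).sum(-1)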


[144] 2409.09324

Efficient Fine-Tuning of Large Language Models for Automated Medical Documentation

Scientific research indicates that for every hour spent in direct patient care, physicians spend nearly two additional hours on administrative tasks, particularly on electronic health records (EHRs) and desk work. This excessive administrative burden not only reduces the time available for patient care but also contributes to physician burnout and inefficiencies in healthcare delivery. To address these challenges, this study introduces MediGen, a fine-tuned large language model (LLM) designed to automate the generation of medical reports from medical dialogues. By leveraging state-of-the-art methodologies for fine-tuning open-source pretrained models, including LLaMA3-8B, MediGen achieves high accuracy in transcribing and summarizing clinical interactions. The fine-tuned LLaMA3-8B model demonstrated promising results, achieving a ROUGE score of 58% and a BERTScore-F1 of 72%, indicating its effectiveness in generating accurate and clinically relevant medical reports. These findings suggest that MediGen has the potential to significantly reduce the administrative workload on physicians, improving both healthcare efficiency and physician well-being.


[145] 2409.09326

LawDNet: Enhanced Audio-Driven Lip Synthesis via Local Affine Warping Deformation

In the domain of photorealistic avatar generation, the fidelity of audio-driven lip motion synthesis is essential for realistic virtual interactions. Existing methods face two key challenges: a lack of vivacity due to limited diversity in generated lip poses, and noticeable anamorphic motions caused by poor temporal coherence. To address these issues, we propose LawDNet, a novel deep-learning architecture enhancing lip synthesis through a Local Affine Warping Deformation mechanism. This mechanism models the intricate lip movements in response to the audio input through controllable non-linear warping fields. These fields consist of local affine transformations focused on abstract keypoints within deep feature maps, offering a novel universal paradigm for feature warping in networks. Additionally, LawDNet incorporates a dual-stream discriminator for improved frame-to-frame continuity and employs face normalization techniques to handle pose and scene variations. Extensive evaluations demonstrate LawDNet's superior robustness and more dynamic lip movements compared to previous methods. The advancements presented in this paper, including the methodologies, training data, source codes, and pre-trained models, will be made accessible to the research community.


[146] 2409.09329

Reputation-Driven Peer-to-Peer Live Streaming Architecture for Preventing Free-Riding

We present a peer-to-peer (P2P) live-streaming architecture designed to address challenges such as free-riding, malicious peers, churn, and network instability through the integration of a reputation system. The proposed algorithm incentivizes active peer participation while discouraging opportunistic behaviors, with a reputation mechanism that rewards altruistic peers and penalizes free-riders and malicious actors. To manage peer dynamics, the algorithm continuously updates peer strategies and adjusts to changing neighbors. It also implements a request-to-join mechanism for flash crowd scenarios, allowing the source node to delegate requests to child nodes, forming an interconnected tree structure that efficiently handles high demand and maintains system stability. The decentralized reputation mechanism promotes the long-term sustainability of the P2P live-streaming system.


[147] 2409.09330

VOMTC: Vision Objects for Millimeter and Terahertz Communications

Recent advances in sensing and computer vision (CV) technologies have opened the door for the application of deep learning (DL)-based CV technologies in the realm of 6G wireless communications. For the successful application of this emerging technology, it is crucial to have a qualified vision dataset tailored for wireless applications (e.g., RGB images containing wireless devices such as laptops and cell phones). The aim of this paper is to propose a large-scale vision dataset referred to as Vision Objects for Millimeter and Terahertz Communications (VOMTC). The VOMTC dataset consists of 20,232 pairs of RGB and depth images obtained from a camera attached to the base station (BS), with each pair labeled with three representative object categories (person, cell phone, and laptop) and bounding boxes of the objects. Through experimental studies of the VOMTC dataset, we show that the beamforming technique exploiting the VOMTC-trained object detector outperforms conventional beamforming techniques.


[148] 2409.09331

Efficient Online Inference and Learning in Partially Known Nonlinear State-Space Models by Learning Expressive Degrees of Freedom Offline

Intelligent real-world systems critically depend on expressive information about their system state and changing operation conditions, e.g., due to variation in temperature, location, wear, or aging. To provide this information, online inference and learning attempts to perform state estimation and (partial) system identification simultaneously. Current works combine tailored estimation schemes with flexible learning-based models but suffer from convergence problems and computational complexity due to many degrees of freedom in the inference problem (i.e., parameters to determine). To resolve these issues, we propose a procedure for data-driven offline conditioning of a highly flexible Gaussian Process (GP) formulation such that online learning is restricted to a subspace, spanned by expressive basis functions. Due to the simplicity of the transformed problem, a standard particle filter can be employed for Bayesian inference. In contrast to most existing works, the proposed method enables online learning of target functions that are nested nonlinearly inside a first-principles model. Moreover, we provide a theoretical quantification of the error, introduced by restricting learning to a subspace. A Monte-Carlo simulation study with a nonlinear battery model shows that the proposed approach enables rapid convergence with significantly fewer particles compared to a baseline and a state-of-the-art method.


[149] 2409.09334

Probabilistic Reachability of Discrete-Time Nonlinear Stochastic Systems

In this paper we study the reachability problem for discrete-time nonlinear stochastic systems. Our goal is to present a unified framework for calculating the probabilistic reachable set of discrete-time systems in the presence of both deterministic input and stochastic noise. By adopting a suitable separation strategy, the probabilistic reachable set is decoupled into a deterministic reachable set and the effect of the stochastic noise. To capture the effect of the stochastic noise, in particular sub-Gaussian noise, we provide a probabilistic bound on the distance between a stochastic trajectory and its deterministic counterpart. The key to our approach is a novel energy function called the Averaged Moment Generating Function, which we leverage to provide a high probability bound on this distance. We show that this probabilistic bound is tight for a large class of discrete-time nonlinear stochastic systems and is exact for linear stochastic dynamics. By combining this tight probabilistic bound with the existing methods for deterministic reachability analysis, we propose a flexible framework that can efficiently compute probabilistic reachable sets of stochastic systems. We also provide two case studies for applying our framework to Lipschitz bound reachability and interval-based reachability. Three numerical experiments are conducted to validate the theoretical results.
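
As an empirical companion to such bounds, one can Monte Carlo estimate a high-probability deviation between a stochastic trajectory and its deterministic counterpart; the sketch below does this for a generic map f under Gaussian noise and is not the paper's AMGF-based bound.

    import numpy as np

    def deviation_quantile(f, x0, noise_std, T=20, trials=1000, q=0.95):
        """Empirical q-quantile of max_t ||x_t - x_t^det|| for x_{t+1} = f(x_t) + w_t."""
        devs = []
        for _ in range(trials):
            x = np.array(x0, dtype=float)
            x_det = np.array(x0, dtype=float)
            worst = 0.0
            for _ in range(T):
                w = np.random.normal(0.0, noise_std, size=x.shape)
                x, x_det = f(x) + w, f(x_det)
                worst = max(worst, np.linalg.norm(x - x_det))
            devs.append(worst)
        return np.quantile(devs, q)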


[150] 2409.09338

What you say or how you say it? Predicting Conflict Outcomes in Real and LLM-Generated Conversations

When conflicts escalate, is it due to what is said or how it is said? In the conflict literature, two theoretical approaches take opposing views: one focuses on the content of the disagreement, while the other focuses on how it is expressed. This paper aims to integrate these two perspectives through a computational analysis of 191 communication features -- 128 related to expression and 63 to content. We analyze 1,200 GPT-4 simulated conversations and 12,630 real-world discussions from Reddit. We find that expression features more reliably predict destructive conflict outcomes across both settings, although the most important features differ. In the Reddit data, conversational dynamics such as turn-taking and conversational equality are highly predictive, but they are not predictive in simulated conversations. These results suggest a possible limitation in simulating social interactions with language models, and we discuss the implications of our findings for building social computing systems.


[151] 2409.09339

Quantum data encoding as a distinct abstraction layer in the design of quantum circuits

Complex quantum circuits are constituted by combinations of quantum subroutines. The computation is possible as long as the quantum data encoding is consistent throughout the circuit. Despite its fundamental importance, the formalization of quantum data encoding has never been systematically addressed. We formalize the concept of quantum data encoding, namely the format providing a representation of a data set through a quantum state, as a distinct abstraction layer with respect to the associated data loading circuit. We survey existing encoding methods and their respective strategies for classical-to-quantum exact and approximate data loading, for the quantum-to-classical extraction of information from states, and for quantum-to-quantum encoding conversion. Next, we show how major quantum algorithms find a natural interpretation in terms of data loading. For instance, the Quantum Fourier Transform is described as a quantum encoding converter, while Quantum Amplitude Estimation is described as an extraction routine. The new conceptual framework is exemplified by considering its application to quantum-based Monte Carlo simulations, thus showcasing the power of the proposed formalism for the description of complex quantum circuits. Indeed, the approach clarifies the structure of complex quantum circuits and enables their efficient design.
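
For instance, amplitude encoding -- one of the surveyed formats -- represents a classical vector as the amplitudes of a quantum state over $\lceil \log_2 N \rceil$ qubits; a minimal sketch of the classical-side state preparation:

    import numpy as np

    def amplitude_encode(data):
        """Pad a real vector to a power-of-two length and normalize it,
        yielding the amplitude vector of the encoding state."""
        n = max(1, int(np.ceil(np.log2(len(data)))))
        padded = np.zeros(2 ** n)
        padded[:len(data)] = data
        norm = np.linalg.norm(padded)
        if norm == 0:
            raise ValueError("cannot encode the zero vector")
        return padded / norm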


[152] 2409.09340

Egocentric Speaker Classification in Child-Adult Dyadic Interactions: From Sensing to Computational Modeling

Autism spectrum disorder (ASD) is a neurodevelopmental condition characterized by challenges in social communication, repetitive behavior, and sensory processing. One important research area in ASD is evaluating children's behavioral changes over time during treatment. The standard protocol for this objective is BOSCC, which involves dyadic interactions between a child and clinicians performing a pre-defined set of activities. A fundamental aspect of understanding children's behavior in these interactions is automatic speech understanding, particularly identifying who speaks and when. Conventional approaches in this area heavily rely on speech samples recorded from a spectator perspective, and there is limited research on egocentric speech modeling. In this study, we design an experiment to perform speech sampling in BOSCC interviews from an egocentric perspective using wearable sensors and explore pre-training on Ego4D speech samples to enhance child-adult speaker classification in dyadic interactions. Our findings highlight the potential of egocentric speech collection and pre-training to improve speaker classification accuracy.


[153] 2409.09343

Generative AI in Data Center Networking: Fundamentals, Perspectives, and Case Study

Generative AI (GenAI), exemplified by Large Language Models (LLMs) such as OpenAI's ChatGPT, is revolutionizing various fields. Central to this transformation is Data Center Networking (DCN), which not only provides the computational power necessary for GenAI training and inference but also delivers GenAI-driven services to users. This article examines an interplay between GenAI and DCNs, highlighting their symbiotic relationship and mutual advancements. We begin by reviewing current challenges within DCNs and discuss how GenAI contributes to enhancing DCN capabilities through innovations, such as data augmentation, process automation, and domain transfer. We then focus on analyzing the distinctive characteristics of GenAI workloads on DCNs, gaining insights that catalyze the evolution of DCNs to more effectively support GenAI and LLMs. Moreover, to illustrate the seamless integration of GenAI with DCNs, we present a case study on full-lifecycle DCN digital twins. In this study, we employ LLMs equipped with Retrieval Augmented Generation (RAG) to formulate optimization problems for DCNs and adopt Diffusion-Deep Reinforcement Learning (DRL) for optimizing the RAG knowledge placement strategy. This approach not only demonstrates the application of advanced GenAI methods within DCNs but also positions the digital twin as a pivotal GenAI service operating on DCNs. We anticipate that this article can promote further research into enhancing the virtuous interaction between GenAI and DCNs.


[154] 2409.09345

Enhancing Decision-Making for LLM Agents via Step-Level Q-Value Models

Agents significantly enhance the capabilities of standalone Large Language Models (LLMs) by perceiving environments, making decisions, and executing actions. However, LLM agents still face challenges in tasks that require multiple decision-making steps. Estimating the value of actions in specific tasks is difficult when intermediate actions are neither appropriately rewarded nor penalized. In this paper, we propose leveraging a task-relevant Q-value model to guide action selection. Specifically, we first collect decision-making trajectories annotated with step-level Q values via Monte Carlo Tree Search (MCTS) and construct preference data. We then use another LLM to fit these preferences through step-level Direct Preference Optimization (DPO), which serves as the Q-value model. During inference, at each decision-making step, LLM agents select the action with the highest Q value before interacting with the environment. We apply our method to various open-source and API-based LLM agents, demonstrating that Q-value models significantly improve their performance. Notably, the performance of the agent built with Phi-3-mini-4k-instruct improved by 103% on WebShop and 75% on HotPotQA when enhanced with Q-value models, even surpassing GPT-4o-mini. Additionally, Q-value models offer several advantages, such as generalization to different LLM agents and seamless integration with existing prompting strategies.
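
A minimal sketch of how step-level Q annotations might be turned into preference pairs for DPO-style training; the data layout and selection rule are illustrative assumptions, not the authors' exact construction.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class Step:
        state: str   # serialized interaction history
        action: str  # candidate action taken at this step
        q: float     # Q value estimated via MCTS

    def build_preferences(trajectories: List[List[Step]]) -> List[Tuple[str, str, str]]:
        """Group steps by state; pair the highest-Q action (preferred)
        with the lowest-Q action (rejected) for each state."""
        by_state = {}
        for traj in trajectories:
            for s in traj:
                by_state.setdefault(s.state, []).append(s)
        prefs = []
        for state, steps in by_state.items():
            steps.sort(key=lambda s: s.q, reverse=True)
            if len(steps) >= 2 and steps[0].q > steps[-1].q:
                prefs.append((state, steps[0].action, steps[-1].action))
        return prefs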


[155] 2409.09347

Schrödinger Bridge Flow for Unpaired Data Translation

Mass transport problems arise in many areas of machine learning whereby one wants to compute a map transporting one distribution to another. Generative modeling techniques like Generative Adversarial Networks (GANs) and Denoising Diffusion Models (DDMs) have been successfully adapted to solve such transport problems, resulting in CycleGAN and Bridge Matching respectively. However, these methods do not approximate Optimal Transport (OT) maps, which are known to have desirable properties. Existing techniques approximating OT maps for high-dimensional data-rich problems, such as DDM-based Rectified Flow and Schr\"odinger Bridge procedures, require fully training a DDM-type model at each iteration, or use mini-batch techniques which can introduce significant errors. We propose a novel algorithm to compute the Schr\"odinger Bridge, a dynamic entropy-regularised version of OT, that eliminates the need to train multiple DDM-like models. This algorithm corresponds to a discretisation of a flow of path measures, which we call the Schr\"odinger Bridge Flow, whose only stationary point is the Schr\"odinger Bridge. We demonstrate the performance of our algorithm on a variety of unpaired data translation tasks.


[156] 2409.09348

QTG-VQA: Question-Type-Guided Architectural for VideoQA Systems

In the domain of video question answering (VideoQA), the impact of question types on VQA systems, despite its critical importance, has been relatively under-explored to date. However, the richness of question types directly determines the range of concepts a model needs to learn, thereby affecting the upper limit of its learning capability. This paper focuses on exploring the significance of different question types for VQA systems and their impact on performance, revealing a series of issues such as insufficient learning and model degradation due to the uneven distribution of question types. In particular, the dependency on temporal information varies significantly across question types, and the representation of such information is a principal challenge and difficulty for VideoQA as opposed to ImageQA. To address these challenges, we propose QTG-VQA, a novel architecture that incorporates question-type-guided attention and an adaptive learning mechanism. Specifically, for temporal-type questions, we design a Masking Frame Modeling technique to enhance temporal modeling, aimed at encouraging the model to grasp richer visual-language relationships and manage more intricate temporal dependencies. Furthermore, a novel evaluation metric tailored to question types is introduced. Experimental results confirm the effectiveness of our approach.


[157] 2409.09350

OPUS: Occupancy Prediction Using a Sparse Set

Occupancy prediction, aiming at predicting the occupancy status within a voxelized 3D environment, is quickly gaining momentum within the autonomous driving community. Mainstream occupancy prediction works first discretize the 3D environment into voxels, then perform classification on such dense grids. However, inspection of sample data reveals that the vast majority of voxels are unoccupied. Performing classification on these empty voxels leads to suboptimal allocation of computation resources, and reducing such empty voxels necessitates complex algorithm designs. To this end, we present a novel perspective on the occupancy prediction task: formulating it as a streamlined set prediction paradigm without the need for explicit space modeling or complex sparsification procedures. Our proposed framework, called OPUS, utilizes a transformer encoder-decoder architecture to simultaneously predict occupied locations and classes using a set of learnable queries. Firstly, we employ the Chamfer distance loss to scale the set-to-set comparison problem to unprecedented magnitudes, making end-to-end training of such a model a reality. Subsequently, semantic classes are adaptively assigned using nearest-neighbor search based on the learned locations. In addition, OPUS incorporates a suite of non-trivial strategies to enhance model performance, including coarse-to-fine learning, consistent point sampling, and adaptive re-weighting. Finally, compared with current state-of-the-art methods, our lightest model achieves superior RayIoU on the Occ3D-nuScenes dataset at nearly 2x the FPS, while our heaviest model surpasses previous best results by 6.1 RayIoU.
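
A minimal PyTorch sketch of the two set-prediction ingredients named above -- the symmetric Chamfer loss and nearest-neighbor class assignment -- illustrative only, not the OPUS implementation:

    import torch

    def chamfer_loss(pred, gt):
        """Symmetric Chamfer distance between point sets pred (N, 3) and gt (M, 3)."""
        d = torch.cdist(pred, gt)  # (N, M) pairwise Euclidean distances
        return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

    def assign_classes(pred_xyz, gt_xyz, gt_cls):
        """Assign each predicted point the class of its nearest ground-truth point."""
        nn = torch.cdist(pred_xyz, gt_xyz).argmin(dim=1)
        return gt_cls[nn]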


[158] 2409.09352

MacST: Multi-Accent Speech Synthesis via Text Transliteration for Accent Conversion

In accented voice conversion or accent conversion, we seek to convert speech from one accent to another while preserving speaker identity and semantic content. In this study, we formulate a novel method for creating multi-accented speech samples, that is, pairs of accented speech samples from the same speaker, through text transliteration for training accent conversion systems. We begin by generating transliterated text with Large Language Models (LLMs), which is then fed into multilingual TTS models to synthesize accented English speech. As a reference system, we built a sequence-to-sequence model on the synthetic parallel corpus for accent conversion. We validated the proposed method for both native and non-native English speakers. Subjective and objective evaluations further validate our dataset's effectiveness in accent conversion studies.


[159] 2409.09353

Overcoming linguistic barriers in code assistants: creating a QLoRA adapter to improve support for Russian-language code writing instructions

This paper describes an approach to training and evaluating an adapter model for the popular language model "zephyr-7b-beta". The adapter was developed to improve the performance of the base model in tasks related to programming and understanding the Russian language. Considering the high quality of the original model in English-language tasks, the goal of the research was to expand its linguistic and technical spectrum. The proposed adapter was trained using a large and diverse dataset, including question-answer pairs related to programming, as well as code-related texts in Russian. The applied training methodology improves the quality of the model's answers when understanding and generating Python code from Russian-language instructions. We evaluated the performance of the base model with the installed adapter using various metrics, comparing it to the base model as well as other state-of-the-art models in this field. The obtained results showed significant improvement, both in tasks related to writing Python code and in processing the Russian language, confirming the effectiveness of the proposed adapter.


[160] 2409.09354

PeriGuru: A Peripheral Robotic Mobile App Operation Assistant based on GUI Image Understanding and Prompting with LLM

Smartphones have significantly enhanced our daily learning, communication, and entertainment, becoming an essential component of modern life. However, certain populations, including the elderly and individuals with disabilities, encounter challenges in utilizing smartphones, thus necessitating mobile app operation assistants, a.k.a. mobile app agents. With considerations for privacy, permissions, and cross-platform compatibility, in this work we devise and develop PeriGuru, a peripheral robotic mobile app operation assistant based on GUI image understanding and prompting with a Large Language Model (LLM). PeriGuru leverages a suite of computer vision techniques to analyze GUI screenshot images and employs the LLM to inform action decisions, which are then executed by robotic arms. PeriGuru achieves a success rate of 81.94% on the test task set, more than double that of the method without PeriGuru's GUI image interpretation and prompting design. Our code is available at https://github.com/Z2sJ4t/PeriGuru.


[161] 2409.09356

Towards Robust Detection of Open Source Software Supply Chain Poisoning Attacks in Industry Environments

The exponential growth of open-source package ecosystems, particularly NPM and PyPI, has led to an alarming increase in software supply chain poisoning attacks. Existing static analysis methods struggle with high false positive rates and are easily thwarted by obfuscation and dynamic code execution techniques. While dynamic analysis approaches offer improvements, they often capture non-package behaviors and employ simplistic testing strategies that fail to trigger sophisticated malicious behaviors. To address these challenges, we present OSCAR, a robust dynamic code poisoning detection pipeline for NPM and PyPI ecosystems. OSCAR fully executes packages in a sandbox environment, employs fuzz testing on exported functions and classes, and implements aspect-based behavior monitoring with tailored API hook points. We evaluate OSCAR against six existing tools using a comprehensive benchmark dataset of real-world malicious and benign packages. OSCAR achieves an F1 score of 0.95 in NPM and 0.91 in PyPI, confirming that OSCAR is as effective as the current state-of-the-art technologies. Furthermore, for benign packages exhibiting characteristics typical of malicious packages, OSCAR reduces the false positive rate by an average of 32.06% in NPM (from 34.63% to 2.57%) and 39.87% in PyPI (from 41.10% to 1.23%), compared to other tools, significantly reducing the workload of manual reviews in real-world deployments. In cooperation with Ant Group, a leading financial technology company, we have deployed OSCAR on its NPM and PyPI mirrors since January 2023, identifying 10,404 malicious NPM packages and 1,235 malicious PyPI packages over 18 months. This work not only bridges the gap between academic research and industrial application in code poisoning detection but also provides a robust and practical solution that has been thoroughly tested in a real-world industrial setting.


[162] 2409.09357

Joint Semantic Knowledge Distillation and Masked Acoustic Modeling for Full-band Speech Restoration with Improved Intelligibility

Speech restoration aims at restoring full-band speech with high quality and intelligibility, considering a diverse set of distortions. MaskSR is a recently proposed generative model for this task. Like other models of its kind, MaskSR attains high quality, but, as we show, its intelligibility can be substantially improved. We do so by boosting the speech encoder component of MaskSR with predictions of semantic representations of the target speech, using a pre-trained self-supervised teacher model. Then, a masked language model is conditioned on the learned semantic features to predict acoustic tokens that encode low-level spectral details of the target speech. We show that, with the same MaskSR model capacity and inference time, the proposed model, MaskSR2, significantly reduces the word error rate, a typical metric for intelligibility. MaskSR2 also achieves a competitive word error rate among other models, while providing superior quality. An ablation study shows the effectiveness of various semantic representations.


[163] 2409.09359

Symbolic Regression with a Learned Concept Library

We present a novel method for symbolic regression (SR), the task of searching for compact programmatic hypotheses that best explain a dataset. The problem is commonly solved using genetic algorithms; we show that we can enhance such methods by inducing a library of abstract textual concepts. Our algorithm, called LaSR, uses zero-shot queries to a large language model (LLM) to discover and evolve concepts occurring in known high-performing hypotheses. We discover new hypotheses using a mix of standard evolutionary steps and LLM-guided steps (obtained through zero-shot LLM queries) conditioned on discovered concepts. Once discovered, hypotheses are used in a new round of concept abstraction and evolution. We validate LaSR on the Feynman equations, a popular SR benchmark, as well as a set of synthetic tasks. On these benchmarks, LaSR substantially outperforms a variety of state-of-the-art SR approaches based on deep learning and evolutionary algorithms. Moreover, we show that LaSR can be used to discover a novel and powerful scaling law for LLMs.


[164] 2409.09360

LACOSTE: Exploiting stereo and temporal contexts for surgical instrument segmentation

Surgical instrument segmentation is instrumental to minimally invasive surgeries and related applications. Most previous methods formulate this task as single-frame-based instance segmentation while ignoring the natural temporal and stereo attributes of a surgical video. As a result, these methods are less robust to appearance variation caused by temporal motion and view changes. In this work, we propose a novel LACOSTE model that exploits Location-Agnostic COntexts in Stereo and TEmporal images for improved surgical instrument segmentation. Leveraging a query-based segmentation model as its core, we design three performance-enhancing modules. Firstly, we design a disparity-guided feature propagation module to explicitly enhance depth-aware features. To generalize well even to monocular-only videos, we apply a pseudo-stereo scheme to generate complementary right images. Secondly, we propose a stereo-temporal set classifier, which aggregates stereo-temporal contexts in a universal way to make a consolidated prediction and mitigate transient failures. Finally, we propose a location-agnostic classifier to decouple the location bias from mask prediction and enhance the feature semantics. We extensively validate our approach on three public surgical video datasets, including two benchmarks from the EndoVis Challenges and one real radical prostatectomy surgery dataset, GraSP. Experimental results demonstrate the promising performance of our method, which consistently achieves comparable or favorable results against previous state-of-the-art approaches.


[165] 2409.09361

Beta-Sigma VAE: Separating beta and decoder variance in Gaussian variational autoencoder

Variational autoencoder (VAE) is an established generative model but is notorious for its blurriness. In this work, we investigate the blurry output problem of VAE and resolve it by exploiting the variance of the Gaussian decoder and the $\beta$ of beta-VAE. Specifically, we reveal that the indistinguishability of the decoder variance and $\beta$ yields arbitrary likelihood values that hinder appropriate analysis of the model, and limits performance improvement by omitting the gain from $\beta$. To address the problem, we propose Beta-Sigma VAE (BS-VAE), which explicitly separates $\beta$ and the decoder variance $\sigma^2_x$ in the model. Our method demonstrates not only superior performance in natural image synthesis but also controllable parameters and predictable analysis compared to conventional VAE. In our experimental evaluation, we employ rate-distortion curve analysis and proxy metrics on computer vision datasets. The code is available on https://github.com/overnap/BS-VAE
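
A minimal sketch of the separation idea: keep $\beta$ and the decoder variance $\sigma^2_x$ as two explicit, independent knobs in the Gaussian-decoder ELBO (constant terms dropped; an illustration, not the authors' exact objective).

    import torch

    def bs_vae_loss(x, x_hat, mu, logvar, beta=1.0, sigma_x=0.1):
        """Negative ELBO with explicit beta and decoder std sigma_x."""
        rec = ((x - x_hat) ** 2).sum() / (2 * sigma_x ** 2)  # Gaussian NLL term
        kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum()
        return rec + beta * kl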


[166] 2409.09362

Generating Event-oriented Attribution for Movies via Two-Stage Prefix-Enhanced Multimodal LLM

The prosperity of social media platforms has raised the urgent demand for semantic-rich services, e.g., event and storyline attribution. However, most existing research focuses on clip-level event understanding, primarily through basic captioning tasks, without analyzing the causes of events across an entire movie. This is a significant challenge, as even advanced multimodal large language models (MLLMs) struggle with extensive multimodal information due to limited context length. To address this issue, we propose a Two-Stage Prefix-Enhanced MLLM (TSPE) approach for event attribution, i.e., connecting associated events with their causal semantics, in movie videos. In the local stage, we introduce an interaction-aware prefix that guides the model to focus on the relevant multimodal information within a single clip, briefly summarizing the single event. Correspondingly, in the global stage, we strengthen the connections between associated events using an inferential knowledge graph, and design an event-aware prefix that directs the model to focus on associated events rather than all preceding clips, resulting in accurate event attribution. Comprehensive evaluations on two real-world datasets demonstrate that our framework outperforms state-of-the-art methods.


[167] 2409.09363

Security and Privacy Perspectives of People Living in Shared Home Environments

Security and privacy perspectives of people in a multi-user home are a growing area of research, with many researchers reflecting on the complicated power imbalance and challenging access control issues of the devices involved. However, these studies primarily focused on the multi-user scenarios in traditional family home settings, leaving other types of multi-user home environments, such as homes shared by co-habitants without a familial relationship, under-studied. This paper closes this research gap via quantitative and qualitative analysis of results from an online survey and content analysis of sampled online posts on Reddit. It explores the complex roles of shared home users, which depend on various factors unique to the shared home environment, e.g., who owns what home devices, how home devices are used by multiple users, and more complicated relationships between the landlord and people in the shared home and among co-habitants. Half (50.7%) of our survey participants thought that devices in a shared home are less secure than in a traditional family home. This perception was found statistically significantly associated with factors such as the fear of devices being tampered with in their absence and (lack of) trust in other co-habitants and their visitors. Our study revealed new user types and relationships in a multi-user environment such as ExternalPrimary-InternalPrimary while analysing the landlord and shared home resident relationship with regard to shared home device use. We propose a threat actor model for shared home environments, which has a focus on possible malicious behaviours of current and past co-habitants of a shared home, as a special type of insider threat in a home environment. We also recommend further research to understand the complex roles co-habitants can play in navigating and adapting to a shared home environment's security and privacy landscape.


[168] 2409.09366

MHAD: Multimodal Home Activity Dataset with Multi-Angle Videos and Synchronized Physiological Signals

Video-based physiology, exemplified by remote photoplethysmography (rPPG), extracts physiological signals such as pulse and respiration by analyzing subtle changes in video recordings. This non-contact, real-time monitoring method holds great potential for home settings. Despite the valuable contributions of public benchmark datasets to this technology, there is currently no dataset specifically designed for passive home monitoring. Existing datasets are often limited to close-up, static, frontal recordings and typically include only 1-2 physiological signals. To advance video-based physiology in real home settings, we introduce the MHAD dataset. It comprises 1,440 videos from 40 subjects, capturing 6 typical activities from 3 angles in a real home environment. Additionally, 5 physiological signals were recorded, making it a comprehensive video-based physiology dataset. MHAD is compatible with the rPPG-toolbox and has been validated using several unsupervised and supervised methods. Our dataset is publicly available at https://github.com/jdh-algo/MHAD-Dataset.


[169] 2409.09368

Models Are Codes: Towards Measuring Malicious Code Poisoning Attacks on Pre-trained Model Hubs

The proliferation of pre-trained models (PTMs) and datasets has led to the emergence of centralized model hubs like Hugging Face, which facilitate collaborative development and reuse. However, recent security reports have uncovered vulnerabilities and instances of malicious attacks within these platforms, highlighting growing security concerns. This paper presents the first systematic study of malicious code poisoning attacks on pre-trained model hubs, focusing on the Hugging Face platform. We conduct a comprehensive threat analysis, develop a taxonomy of model formats, and perform root cause analysis of vulnerable formats. While existing tools like Fickling and ModelScan offer some protection, they face limitations in semantic-level analysis and comprehensive threat detection. To address these challenges, we propose MalHug, an end-to-end pipeline tailored for Hugging Face that combines dataset loading script extraction, model deserialization, in-depth taint analysis, and heuristic pattern matching to detect and classify malicious code poisoning attacks in datasets and models. In collaboration with Ant Group, a leading financial technology company, we have implemented and deployed MalHug on a mirrored Hugging Face instance within their infrastructure, where it has been operational for over three months. During this period, MalHug has monitored more than 705K models and 176K datasets, uncovering 91 malicious models and 9 malicious dataset loading scripts. These findings reveal a range of security threats, including reverse shell, browser credential theft, and system reconnaissance. This work not only bridges a critical gap in understanding the security of the PTM supply chain but also provides a practical, industry-tested solution for enhancing the security of pre-trained model hubs.


[170] 2409.09369

Interpretable Vision-Language Survival Analysis with Ordinal Inductive Bias for Computational Pathology

Histopathology Whole-Slide Images (WSIs) provide an important tool to assess cancer prognosis in computational pathology (CPATH). While existing survival analysis (SA) approaches have made exciting progress, they are generally limited to adopting highly-expressive architectures and only coarse-grained patient-level labels to learn prognostic visual representations from gigapixel WSIs. Such a learning paradigm suffers from critical performance bottlenecks, given the scarce training data and the standard multi-instance learning (MIL) framework in CPATH. To break through this bottleneck, this paper, for the first time, proposes a new Vision-Language-based SA (VLSA) paradigm. Concretely, (1) VLSA is driven by pathology VL foundation models. It no longer relies on high-capability networks and shows the advantage of data efficiency. (2) On the vision end, VLSA encodes a prognostic language prior and then employs it as auxiliary signals to guide the aggregation of prognostic visual features at the instance level, thereby compensating for the weak supervision in MIL. Moreover, given the characteristics of SA, we propose i) ordinal survival prompt learning to transform continuous survival labels into textual prompts; and ii) the ordinal incidence function as the prediction target to make SA compatible with VL-based prediction. VLSA's predictions can be interpreted intuitively by our Shapley values-based method. The extensive experiments on five datasets confirm the effectiveness of our scheme. Our VLSA could pave a new way for SA in CPATH by offering weakly-supervised MIL an effective means to learn valuable prognostic clues from gigapixel WSIs. Our source code is available at https://github.com/liupei101/VLSA.


[171] 2409.09376

BM$^2$: Coupled Schrödinger Bridge Matching

A Schr\"{o}dinger bridge establishes a dynamic transport map between two target distributions via a reference process, simultaneously solving an associated entropic optimal transport problem. We consider the setting where samples from the target distributions are available, and the reference diffusion process admits tractable dynamics. We thus introduce Coupled Bridge Matching (BM$^2$), a simple \emph{non-iterative} approach for learning Schr\"{o}dinger bridges with neural networks. A preliminary theoretical analysis of the convergence properties of BM$^2$ is carried out, supported by numerical experiments that demonstrate the effectiveness of our proposal.


[172] 2409.09378

Prevailing Research Areas for Music AI in the Era of Foundation Models

In tandem with the recent advancements in foundation model research, there has been a surge of generative music AI applications within the past few years. As the idea of AI-generated or AI-augmented music becomes more mainstream, many researchers in the music AI community may be wondering what avenues of research are left. With regard to music generative models, we outline the current areas of research with significant room for exploration. Firstly, we pose the question of the foundational representation of these generative models and investigate approaches towards explainability. Next, we discuss the current state of music datasets and their limitations. We then overview different generative models, forms of evaluating these models, and their computational constraints and limitations. Subsequently, we highlight applications of these generative models towards extensions to multiple modalities and integration with artists' workflows as well as music education systems. Finally, we survey the potential copyright implications of generative music and discuss strategies for protecting the rights of musicians. While it is not meant to be exhaustive, our survey calls attention to a variety of research directions enabled by music foundation models.


[173] 2409.09380

The Midas Touch: Triggering the Capability of LLMs for RM-API Misuse Detection

In this paper, we propose an LLM-empowered RM-API misuse detection solution, ChatDetector, which fully automates LLM-based documentation understanding for RM-API constraint retrieval and RM-API misuse detection. To correctly retrieve the RM-API constraints, ChatDetector is inspired by the ReAct framework, which is optimized based on Chain-of-Thought (CoT), to decompose the complex task into allocation-API identification, RM-object (allocated/released by RM APIs) extraction, and RM-API pairing (RM APIs usually exist in pairs). It first verifies the semantics of allocation APIs based on the retrieved RM sentences from API documentation through LLMs. Inspired by the LLMs' performance on various prompting methods, ChatDetector adopts a two-dimensional prompting approach for cross-validation. At the same time, an inconsistency-checking approach between the LLMs' output and the reasoning process is adopted to confirm the allocation APIs with an off-the-shelf Natural Language Processing (NLP) tool. To accurately pair the RM-APIs, ChatDetector decomposes the task again and identifies the RM-object type first, with which it can then accurately pair the releasing APIs and further construct the RM-API constraints for misuse detection. With diminished hallucinations, ChatDetector identifies 165 pairs of RM-APIs with a precision of 98.21% compared with the state-of-the-art API detectors. By employing the static detector CodeQL, we ethically report to the developers 115 security bugs in applications that integrate six popular libraries; these bugs may result in severe issues, such as Denial-of-Service (DoS) and memory corruption. Compared with the end-to-end benchmark method, the results show that ChatDetector can retrieve at least 47% more RM sentences and 80.85% more RM-API constraints.


[174] 2409.09383

LLM-Powered Ensemble Learning for Paper Source Tracing: A GPU-Free Approach

We participated in the KDD CUP 2024 paper source tracing competition and achieved 3rd place. This competition tasked participants with identifying the reference sources (i.e., ref-sources, as referred to by the organizers of the competition) of given academic papers. Unlike most teams, which addressed this challenge by fine-tuning pre-trained neural language models such as BERT or ChatGLM, our primary approach utilized closed-source large language models (LLMs). With recent advancements in LLM technology, closed-source LLMs have demonstrated the capability to tackle complex reasoning tasks in zero-shot or few-shot scenarios. Consequently, in the absence of GPUs, we employed closed-source LLMs to directly generate predicted reference sources from the provided papers. We further refined these predictions through ensemble learning. Notably, our method was the only one among the award-winning approaches that did not require the use of GPUs for model training. Code available at https://github.com/Cklwanfifa/KDDCUP2024-PST.


[175] 2409.09386

AMBER -- Advanced SegFormer for Multi-Band Image Segmentation: an application to Hyperspectral Imaging

Deep learning has revolutionized the field of hyperspectral image (HSI) analysis, enabling the extraction of complex and hierarchical features. While convolutional neural networks (CNNs) have been the backbone of HSI classification, their limitations in capturing global contextual features have led to the exploration of Vision Transformers (ViTs). This paper introduces AMBER, an advanced SegFormer specifically designed for multi-band image segmentation. AMBER enhances the original SegFormer by incorporating three-dimensional convolutions to handle hyperspectral data. Our experiments, conducted on the Indian Pines, Pavia University, and PRISMA datasets, show that AMBER outperforms traditional CNN-based methods in terms of Overall Accuracy, Kappa coefficient, and Average Accuracy on the first two datasets, and achieves state-of-the-art performance on the PRISMA dataset.


[176] 2409.09391

Tran-GCN: A Transformer-Enhanced Graph Convolutional Network for Person Re-Identification in Monitoring Videos

Person Re-Identification (Re-ID) has gained popularity in computer vision, enabling cross-camera pedestrian recognition. Although the development of deep learning has provided a robust technical foundation for person Re-ID research, most existing person Re-ID methods overlook the potential relationships among local person features, failing to adequately address the impact of pedestrian pose variations and local body parts occlusion. Therefore, we propose a Transformer-enhanced Graph Convolutional Network (Tran-GCN) model to improve Person Re-Identification performance in monitoring videos. The model comprises four key components: (1) A Pose Estimation Learning branch is utilized to estimate pedestrian pose information and inherent skeletal structure data, extracting pedestrian key point information; (2) A Transformer learning branch learns the global dependencies between fine-grained and semantically meaningful local person features; (3) A Convolution learning branch uses the basic ResNet architecture to extract the person's fine-grained local features; (4) A Graph Convolutional Module (GCM) integrates local feature information, global feature information, and body information for more effective person identification after fusion. Quantitative and qualitative analysis experiments conducted on three different datasets (Market-1501, DukeMTMC-ReID, and MSMT17) demonstrate that the Tran-GCN model can more accurately capture discriminative person features in monitoring videos, significantly improving identification accuracy.


[177] 2409.09401

Towards Diverse and Efficient Audio Captioning via Diffusion Models

We introduce Diffusion-based Audio Captioning (DAC), a non-autoregressive diffusion model tailored for diverse and efficient audio captioning. Although existing captioning models relying on language backbones have achieved remarkable success in various captioning tasks, their insufficient generation speed and diversity impede progress in audio understanding and multimedia applications. Our diffusion-based framework offers unique advantages stemming from its inherent stochasticity and holistic context modeling in captioning. Through rigorous evaluation, we demonstrate that DAC not only achieves SOTA caption quality on existing benchmarks but also significantly outperforms prior models in generation speed and diversity. The success of DAC illustrates that text generation can be seamlessly integrated with audio and visual generation tasks using a diffusion backbone, paving the way for a unified, audio-related generative model across different modalities.


[178] 2409.09403

AI-Driven Virtual Teacher for Enhanced Educational Efficiency: Leveraging Large Pretrained Models for Autonomous Error Analysis and Correction

Students frequently make mistakes while solving mathematical problems, and traditional error correction methods are both time-consuming and labor-intensive. This paper introduces an innovative Virtual AI Teacher system designed to autonomously analyze and correct student Errors (VATE). Leveraging advanced large language models (LLMs), the system uses student drafts as a primary source for error analysis, which enhances understanding of the student's learning process. It incorporates sophisticated prompt engineering and maintains an error pool to reduce computational overhead. The AI-driven system also features a real-time dialogue component for efficient student interaction. Our approach demonstrates significant advantages over traditional and machine learning-based error correction methods, including reduced educational costs, high scalability, and superior generalizability. The system has been deployed on the Squirrel AI learning platform for elementary mathematics education, where it achieves 78.3% accuracy in error analysis and shows a marked improvement in student learning efficiency. Satisfaction surveys indicate a strong positive reception, highlighting the system's potential to transform educational practices.


[179] 2409.09406

Real-world Adversarial Defense against Patch Attacks based on Diffusion Model

Adversarial patches present significant challenges to the robustness of deep learning models, making the development of effective defenses critical for real-world applications. This paper introduces DIFFender, a novel DIFfusion-based DeFender framework that leverages the power of a text-guided diffusion model to counter adversarial patch attacks. At the core of our approach is the discovery of the Adversarial Anomaly Perception (AAP) phenomenon, which enables the diffusion model to accurately detect and locate adversarial patches by analyzing distributional anomalies. DIFFender seamlessly integrates the tasks of patch localization and restoration within a unified diffusion model framework, enhancing defense efficacy through their close interaction. Additionally, DIFFender employs an efficient few-shot prompt-tuning algorithm, facilitating the adaptation of the pre-trained diffusion model to defense tasks without extensive retraining. Our comprehensive evaluation, covering image classification and face recognition tasks as well as real-world scenarios, demonstrates DIFFender's robust performance against adversarial attacks. The framework's versatility and generalizability across various settings, classifiers, and attack methodologies mark a significant advancement in adversarial patch defense strategies. Beyond the widely studied visible domain, we identify a further advantage of DIFFender: it extends readily to the infrared domain. We thus demonstrate DIFFender's flexibility as a universal framework that can defend against both infrared and visible adversarial patch attacks.


[180] 2409.09410

Distributed Invariant Kalman Filter for Object-level Multi-robot Pose SLAM

Cooperative localization and target tracking are essential for multi-robot systems to carry out high-level tasks. To this end, we propose a distributed invariant Kalman filter based on covariance intersection (CI) for effective multi-robot pose estimation. The paper utilizes object-level measurement models, which condense information and further reduce the communication burden. Moreover, by modeling states on special Lie groups, the improved linearity and consistency of the invariant Kalman filter structure can be exploited. We also combine CI with the Kalman filter to avoid overly confident or conservative estimates in multi-robot systems with intricate and unknown correlations, so that some level of degradation of individual robots can be tolerated through multi-robot collaboration. Simulation and real-data experiments validate the practicability and superiority of the proposed algorithm.
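
For reference, the covariance intersection rule that such a filter builds on fuses two estimates whose cross-correlation is unknown. A minimal numpy sketch (the weight search below is the usual trace-minimization heuristic, stated as standard background rather than the paper's exact procedure):

# Covariance intersection (CI): consistent fusion under unknown correlation.
import numpy as np

def covariance_intersection(x1, P1, x2, P2, omega):
    """Fuse estimates (x1, P1) and (x2, P2) with weight omega in [0, 1]."""
    P_inv = omega * np.linalg.inv(P1) + (1 - omega) * np.linalg.inv(P2)
    P = np.linalg.inv(P_inv)
    x = P @ (omega * np.linalg.inv(P1) @ x1
             + (1 - omega) * np.linalg.inv(P2) @ x2)
    return x, P

def fuse_ci(x1, P1, x2, P2):
    # omega is typically chosen to minimize trace(P); simple grid search.
    cands = [covariance_intersection(x1, P1, x2, P2, w)
             for w in np.linspace(0.01, 0.99, 99)]
    return min(cands, key=lambda xp: np.trace(xp[1]))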


[181] 2409.09412

Label Convergence: Defining an Upper Performance Bound in Object Recognition through Contradictory Annotations

Annotation errors are a challenge not only during training of machine learning models, but also during their evaluation. Label variations and inaccuracies in datasets often manifest as contradictory examples that deviate from established labeling conventions. Such inconsistencies, when significant, prevent models from achieving optimal performance on metrics such as mean Average Precision (mAP). We introduce the notion of "label convergence" to describe the highest achievable performance under the constraint of contradictory test annotations, essentially defining an upper bound on model accuracy. Recognizing that noise is an inherent characteristic of all data, our study analyzes five real-world datasets, including the LVIS dataset, to investigate the phenomenon of label convergence. We estimate with 95% confidence that label convergence for LVIS lies between 62.63 and 67.52 mAP@[0.5:0.95:0.05], attributing these bounds to the presence of real annotation errors. With current state-of-the-art (SOTA) models at the upper end of the label convergence interval for the well-studied LVIS dataset, we conclude that model capacity is sufficient to solve current object detection problems. Therefore, future efforts should focus on three key aspects: (1) updating the problem specification and adjusting evaluation practices to account for unavoidable label noise, (2) creating cleaner data, especially test data, and (3) including multi-annotated data to investigate annotation variation and make these issues visible from the outset.


[182] 2409.09413

Constructive Approach to Bidirectional Causation between Qualia Structure and Language Emergence

This paper presents a novel perspective on the bidirectional causation between language emergence and the relational structure of subjective experiences, termed qualia structure, and lays out a constructive approach to the intricate dependency between the two. We hypothesize that languages with distributional semantics, e.g., syntactic-semantic structures, may have emerged through the process of aligning internal representations among individuals, and that such alignment of internal representations facilitates more structured language. This mutual dependency is suggested by recent advances in AI and symbol emergence robotics, and in particular by the collective predictive coding (CPC) hypothesis. Computational studies show that neural network-based language models form systematically structured internal representations, and that multimodal language models can share representations between language and perceptual information. This perspective suggests that language emergence serves not only as a mechanism for creating a communication tool but also as a mechanism for allowing people to realize shared understanding of qualitative experiences. The paper discusses the implications of this bidirectional causation in the context of consciousness studies, linguistics, and cognitive science, and outlines future constructive research directions to further explore this dynamic relationship between language emergence and qualia structure.


[183] 2409.09414

Weather Prediction Using CNN-LSTM for Time Series Analysis: A Case Study on Delhi Temperature Data

As global climate change intensifies, accurate weather forecasting is increasingly crucial for sectors such as agriculture, energy management, and environmental protection. Traditional methods, which rely on physical and statistical models, often struggle with complex, nonlinear, and time-varying data, underscoring the need for more advanced techniques. This study explores a hybrid CNN-LSTM model to enhance temperature forecasting accuracy for the Delhi region, using historical meteorological data from 1996 to 2017. We employed both direct and indirect methods, including comprehensive data preprocessing and exploratory analysis, to construct and train our model. The CNN component effectively extracts spatial features, while the LSTM captures temporal dependencies, leading to improved prediction accuracy. Experimental results indicate that the CNN-LSTM model significantly outperforms traditional forecasting methods in terms of both accuracy and stability, with a mean square error (MSE) of 3.26217 and a root mean square error (RMSE) of 1.80615. The hybrid model demonstrates its potential as a robust tool for temperature prediction, offering valuable insights for meteorological forecasting and related fields. Future research should focus on optimizing model architecture, exploring additional feature extraction techniques, and addressing challenges such as overfitting and computational complexity. This approach not only advances temperature forecasting but also provides a foundation for applying deep learning to other time series forecasting tasks.
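
The hybrid pattern described above can be sketched in a few lines: a convolutional stage extracts local patterns from each input window, and an LSTM models the temporal dependencies across it. This is a minimal sketch; the layer sizes, window length, and Conv-before-LSTM ordering are assumptions consistent with the description, not the study's exact architecture.

# Minimal CNN-LSTM for one-step temperature forecasting (illustrative sizes).
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_features=1, hidden=64):
        super().__init__()
        # CNN extracts local patterns within the input window...
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1), nn.ReLU())
        # ...and the LSTM models temporal dependencies across it.
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # next-day temperature

    def forward(self, x):                  # x: (batch, window, n_features)
        z = self.conv(x.transpose(1, 2))   # (batch, 32, window)
        out, _ = self.lstm(z.transpose(1, 2))
        return self.head(out[:, -1])       # predict from the last time step

model = CNNLSTM()
y_hat = model(torch.randn(8, 30, 1))       # 8 samples, 30-day windows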


[184] 2409.09415

Enhancing LLM Problem Solving with REAP: Reflection, Explicit Problem Deconstruction, and Advanced Prompting

Large Language Models (LLMs) have transformed natural language processing, yet improving their problem-solving capabilities, particularly for complex, reasoning-intensive tasks, remains a persistent challenge. This paper introduces the REAP (Reflection, Explicit Problem Deconstruction, and Advanced Prompting) method, an innovative approach within the dynamic context generation framework. REAP guides LLMs through reflection on the query, deconstructing it into manageable components, and generating relevant context to enhance the solution process. We evaluated REAP using a dataset designed to expose LLM limitations, comparing zero-shot prompting with REAP-enhanced prompts across six state-of-the-art models: OpenAI's o1-preview, o1-mini, GPT-4o, GPT-4o-mini, Google's Gemini 1.5 Pro, and Claude 3.5 Sonnet. The results demonstrate notable performance gains, with o1-mini improving by 40.97%, GPT-4o by 66.26%, and GPT-4o-mini by 112.93%. Despite the already strong baseline performance of OpenAI's o1-preview, modest gains were observed. Beyond performance improvements, REAP offers a cost-effective solution; for example, GPT-4o-mini, which is approximately 100 times cheaper than o1-preview, delivered competitive results. REAP also improves the clarity of model outputs, making it easier for humans to understand the reasoning behind the results and simplifying the process of identifying and addressing any issues. These findings demonstrate REAP's potential to greatly improve the capabilities of LLMs, providing both better performance and increased cost-efficiency across a wide range of applications.
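
A hedged sketch of what a REAP-style prompt wrapper could look like; the authors' exact prompt wording is not given in the abstract, so this template is purely illustrative.

# Illustrative REAP-style template: reflect, deconstruct, generate context,
# then solve (wording is an assumption, not the paper's prompt).
REAP_TEMPLATE = """You will solve a problem in four stages.
1. Reflection: restate the query and note ambiguities or traps.
2. Explicit problem deconstruction: break the query into sub-problems.
3. Context generation: list facts and constraints relevant to each part.
4. Solution: solve each sub-problem, then combine into a final answer.

Problem: {query}
"""

def reap_prompt(query):
    return REAP_TEMPLATE.format(query=query)

print(reap_prompt("A bat and a ball cost $1.10 in total; the bat costs "
                  "$1.00 more than the ball. What does the ball cost?"))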


[185] 2409.09417

Resources on the Move for Smart City: A Disruptive Perspective on the Grand Convergence of Sensing, Communications, Computing, Storage, and Intelligence

The most commonly seen things on streets in any city are vehicles. However, most of them are used to transport people or goods. What if they also carried resources and capabilities for sensing, communications, computing, storage, and intelligence (SCCSI)? We would have a web of sensors to monitor the city, a network of powerful communicators to transport data around, a grid of computing power to conduct data analytics and machine learning (ML), a network of distributed storage to buffer/cache data/jobs for optimization, and a set of movable AI/ML toolboxes made available for specialized smart applications. This perspective article presents how to leverage SCCSI-empowered vehicles to design such a service network, simply called the SCCSI network, to help build a smart city with a cost-effective and sustainable solution. It showcases how multi-dimensional technologies, namely sensing, communications, computing, storage, and intelligence, converge into a unifying technology to solve grand challenges posed by the resource demands of emerging large-scale applications. Thus, with SCCSI-empowered vehicles on the ground, over the air, and on the sea, the SCCSI network can put resources and capabilities on the move, practically pushing SCCSI services to the edge! We hope this article serves as a spark to stimulate more disruptive thinking to address grand challenges of paramount importance.


[186] 2409.09418

Distributed Clustering based on Distributional Kernel

This paper introduces a new framework for clustering in a distributed network, called Distributed Clustering based on Distributional Kernel (KDC), which produces the final clusters based on the similarity with respect to the distributions of the initial clusters, as measured by a distributional kernel K. It is the only framework that satisfies all three of the following properties. First, KDC guarantees that the combined clustering outcome from all sites is equivalent to the clustering outcome of its centralized counterpart on the combined dataset from all sites. Second, the maximum runtime cost of any site in distributed mode is smaller than the runtime cost in centralized mode. Third, it is designed to discover clusters of arbitrary shapes, sizes, and densities. To the best of our knowledge, this is the first distributed clustering framework that employs a distributional kernel. The distribution-based clustering leads directly to significantly better clustering outcomes than existing methods of distributed clustering. In addition, we introduce a new clustering algorithm called Kernel Bounded Cluster Cores, which, among existing clustering algorithms, performs best within KDC. We also show that KDC is a generic framework that enables a quadratic-time clustering algorithm to deal with large datasets that would otherwise be impossible.


[187] 2409.09424

NBBOX: Noisy Bounding Box Improves Remote Sensing Object Detection

Data augmentation has seen significant advancements in computer vision to improve model performance over the years, particularly in scenarios with limited and insufficient data. Currently, most studies focus on adjusting the image or its features to expand the size, quality, and variety of samples during training in various tasks, including object detection. However, we argue that it is necessary to investigate bounding box transformations as a model regularization technique rather than image-level transformations, especially in aerial imagery, due to potentially inconsistent bounding box annotations. Hence, this letter presents a thorough investigation of bounding box transformation in terms of scaling, rotation, and translation for remote sensing object detection. We call this augmentation strategy NBBOX (Noise Injection into Bounding Box). We conduct extensive experiments on DOTA and DIOR-R, both well-known datasets that include a variety of rotated generic objects in aerial images. Experimental results show that our approach significantly improves remote sensing object detection without bells and whistles, and it is more time-efficient than other state-of-the-art augmentation strategies.
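
The augmentation is simple to sketch: perturb each ground-truth box with small random scaling, rotation, and translation during training. The noise ranges below are illustrative assumptions, not the paper's tuned values.

# NBBOX-style annotation noise for an oriented box (cx, cy, w, h, angle).
import numpy as np

def jitter_box(cx, cy, w, h, angle, rng):
    s = rng.uniform(0.9, 1.1, size=2)        # scale noise on width/height
    t = rng.uniform(-0.05, 0.05, size=2)     # translation, fraction of size
    a = rng.uniform(-5.0, 5.0)               # rotation noise in degrees
    return (cx + t[0] * w, cy + t[1] * h,    # translated center
            w * s[0], h * s[1],              # scaled extent
            angle + a)                       # rotated box

rng = np.random.default_rng(0)
print(jitter_box(100.0, 50.0, 40.0, 20.0, 30.0, rng))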


[188] 2409.09426

Cislunar Communication Performance and System Analysis with Uncharted Phenomena

The Moon and its surrounding cislunar space have numerous unknowns, uncertainties, or partially charted phenomena that need to be investigated to determine the extent to which they affect cislunar communication. These include temperature fluctuations, spacecraft distance and velocity dynamics, surface roughness, and the diversity of propagation mechanisms. To develop robust and dynamically operative cislunar space networks (CSNs), we need to analyze the communication system by incorporating inclusive models that account for the wide range of possible propagation environments and noise characteristics. In this paper, we consider communication signals subject not only to Gaussian and non-Gaussian noise but also to different fading conditions. First, we analyze the communication link by showing the relationship between the brightness temperatures of the Moon and the equivalent noise temperature at the receiver of the Lunar Gateway. We propose to analyze the ergodic capacity and the outage probability, as they are essential metrics for the development of reliable communication. In particular, we model the noise with the additive symmetric alpha-stable distribution, which allows a generic analysis for Gaussian and non-Gaussian signal characteristics. Then, we present closed-form bounds for the ergodic capacity and the outage probability. Finally, the results show the theoretically and operationally achievable performance bounds for cislunar communication. To give insight into further designs, we also provide our results with comprehensive system settings that include mission objectives as well as orbital and system dynamics.
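
For readers who want to experiment with this noise model, symmetric alpha-stable samples can be drawn with SciPy (beta=0 gives the symmetric case, and alpha=2 recovers the Gaussian); the parameter values below are illustrative only.

# Sampling symmetric alpha-stable (SaS) noise with SciPy.
import numpy as np
from scipy.stats import levy_stable

alpha, scale = 1.5, 1.0          # illustrative parameters
noise = levy_stable.rvs(alpha, beta=0.0, loc=0.0, scale=scale,
                        size=10_000, random_state=0)
# Heavy tails: extreme samples are far more common than under a Gaussian.
print(np.max(np.abs(noise)))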


[189] 2409.09427

Prototypical Prompting for Text-to-image Person Re-identification

In this paper, we study the problem of Text-to-Image Person Re-identification (TIReID), which aims to find images of the same identity described by a text sentence from a pool of candidate images. Benefiting from Vision-Language Pre-training, such as CLIP (Contrastive Language-Image Pretraining), the TIReID techniques have achieved remarkable progress recently. However, most existing methods only focus on instance-level matching and ignore identity-level matching, which involves associating multiple images and texts belonging to the same person. In this paper, we propose a novel prototypical prompting framework (Propot) designed to simultaneously model instance-level and identity-level matching for TIReID. Our Propot transforms the identity-level matching problem into a prototype learning problem, aiming to learn identity-enriched prototypes. Specifically, Propot works by 'initialize, adapt, enrich, then aggregate'. We first use CLIP to generate high-quality initial prototypes. Then, we propose a domain-conditional prototypical prompting (DPP) module to adapt the prototypes to the TIReID task using task-related information. Further, we propose an instance-conditional prototypical prompting (IPP) module to update prototypes conditioned on intra-modal and inter-modal instances to ensure prototype diversity. Finally, we design an adaptive prototype aggregation module to aggregate these prototypes, generating final identity-enriched prototypes. With identity-enriched prototypes, we diffuse their rich identity information to instances through a prototype-to-instance contrastive loss to facilitate identity-level matching. Extensive experiments conducted on three benchmarks demonstrate the superiority of Propot compared to existing TIReID methods.


[190] 2409.09428

Harnessing Lightweight Ciphers for PDF Encryption

Portable Document Format (PDF) is a file format used worldwide as the de-facto standard for exchanging documents. In fact, this document that you are currently reading has been uploaded as a PDF. Confidential information is also exchanged through PDFs. According to the PDF standard ISO 32000-2:2020, PDF supports encryption to provide confidentiality of the information contained in it, along with digital signatures to ensure authenticity. At present, PDF encryption only supports the Advanced Encryption Standard (AES) to encrypt and decrypt information. However, lightweight cryptography, i.e., cryptography for resource-constrained environments, has gained a lot of popularity, especially due to the NIST Lightweight Cryptography (LWC) competition announced in 2018, for which ASCON was announced as the winner in February 2023. The current work constitutes the first attempt to benchmark Java implementations of the NIST LWC winner ASCON and finalist XOODYAK against the current PDF encryption standard AES. Our research reveals that ASCON emerges as a clear winner with regard to throughput when profiled using two state-of-the-art benchmarking tools, YourKit and JMH.


[191] 2409.09429

Real-Time Adaptive Industrial Robots: Improving Safety And Comfort In Human-Robot Collaboration

Industrial robots are becoming increasingly prevalent, resulting in a growing need for intuitive, comfortable human-robot collaboration. We present a user-aware robotic system that adapts to operator behavior in real time while non-intrusively monitoring physiological signals to create a more responsive and empathetic environment. Our prototype dynamically adjusts robot speed and movement patterns while measuring operator pupil dilation and proximity. Our user study compares this adaptive system to a non-adaptive counterpart and demonstrates that the adaptive system significantly reduces both perceived and physiologically measured cognitive load while enhancing usability. Participants reported increased feelings of comfort, safety, trust, and a stronger sense of collaboration when working with the adaptive robot. This highlights the potential of integrating real-time physiological data into human-robot interaction paradigms. This novel approach creates more intuitive and collaborative industrial environments where robots effectively 'read' and respond to human cognitive states, and we release all data and code for future use.


[192] 2409.09430

Evaluating Pre-trained Convolutional Neural Networks and Foundation Models as Feature Extractors for Content-based Medical Image Retrieval

Medical image retrieval refers to the task of finding similar images for given query images in a database, with applications such as diagnosis support, treatment planning, and educational tools for inexperienced medical practitioners. While traditional medical image retrieval was performed using clinical metadata, content-based medical image retrieval (CBMIR) relies on the characteristic features of the images, such as color, texture, shape, and spatial features. Many approaches have been proposed for CBMIR, and among them, using pre-trained convolutional neural networks (CNNs) is a widely utilized approach. However, considering the recent advances in the development of foundation models for various computer vision tasks, their application to CBMIR can also be investigated for its potentially superior performance. In this study, we used several pre-trained feature extractors from well-known pre-trained CNNs (VGG19, ResNet-50, DenseNet121, and EfficientNetV2M) and pre-trained foundation models (MedCLIP, BioMedCLIP, OpenCLIP, CONCH and UNI) and investigated the CBMIR performance on a subset of the MedMNIST V2 dataset, including eight types of 2D and 3D medical images. Furthermore, we also investigated the effect of image size on the CBMIR performance. Our results show that, overall, for the 2D datasets, foundation models deliver superior performance by a large margin compared to CNNs, with UNI providing the best overall performance across all datasets and image sizes. For 3D datasets, CNNs and foundation models deliver more competitive performance, with CONCH achieving the best overall performance. Moreover, our findings confirm that while using larger image sizes (especially for 2D datasets) yields slightly better performance, competitive CBMIR performance can still be achieved even with smaller image sizes. Our codes to generate and reproduce the results are available on GitHub.
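
The basic CBMIR recipe evaluated here is easy to sketch: embed every image with a frozen pre-trained extractor, then rank the database by cosine similarity to the query embedding. The backbone choice (ResNet-50), pooling, and image size below are assumptions for illustration, not the study's exact setup.

# Frozen pre-trained CNN as feature extractor + cosine-similarity retrieval.
import torch
import torch.nn.functional as F
import torchvision.models as models

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()      # keep the 2048-d pooled features
backbone.eval()

@torch.no_grad()
def embed(batch):                      # batch: (N, 3, 224, 224), normalized
    return F.normalize(backbone(batch), dim=1)

def retrieve(query_vec, database_vecs, k=5):
    sims = database_vecs @ query_vec   # cosine similarity on unit vectors
    return sims.topk(k).indices        # indices of the k most similar images

db = embed(torch.randn(100, 3, 224, 224))      # stand-in for a real database
idx = retrieve(embed(torch.randn(1, 3, 224, 224))[0], db)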


[193] 2409.09432

Detecting Looted Archaeological Sites from Satellite Image Time Series

Archaeological sites are the physical remains of past human activity and one of the main sources of information about past societies and cultures. However, they are also the target of malevolent human actions, especially in countries having experienced inner turmoil and conflicts. Because monitoring these sites from space is a key step towards their preservation, we introduce the DAFA Looted Sites dataset, a labeled multi-temporal remote sensing dataset containing 55,480 images acquired monthly over 8 years across 675 Afghan archaeological sites, including 135 sites looted during the acquisition period. The dataset is particularly challenging because of the limited number of training samples, the class imbalance, the weak binary annotations only available at the level of the time series, and the subtlety of relevant changes coupled with important irrelevant ones over a long time period. It is also an interesting playground to assess the performance of satellite image time series (SITS) classification methods on a real and important use case. We evaluate a large set of baselines, outline the substantial benefits of using foundation models and show the additional boost that can be provided by using complete time series instead of using a single image.


[194] 2409.09434

Factorization method for inverse elastic cavity scattering

This paper is concerned with the inverse elastic scattering problem of determining the shape and location of an elastic cavity. By establishing a one-to-one correspondence between the Herglotz wave function and its kernel, we introduce the far-field operator, which is crucial in the factorization method. We present a theoretical factorization of the far-field operator and rigorously prove the properties of the associated operators involved in the factorization. Unlike the Dirichlet problem, where the boundary integral operator of the single-layer potential involved in the factorization of the far-field operator is weakly singular, the boundary integral operator of the conormal derivative of the double-layer potential involved in the factorization of the far-field operator with Neumann boundary conditions is hypersingular, which forces us to prove that this operator is an isomorphism using Fredholm's theorem. Meanwhile, we present theoretical analyses of the factorization method for various illumination and measurement cases, including compression-wave illumination and compression-wave measurement, shear-wave illumination and shear-wave measurement, and full-wave illumination and full-wave measurement. In addition, we also consider the limited-aperture problem and provide a rigorous theoretical analysis of the factorization method in this case. Numerous numerical experiments are carried out to demonstrate the effectiveness of the proposed method, and to analyze the influence of various factors, such as polarization direction, frequency, wavenumber, and multi-scale scatterers, on the reconstructed results.


[195] 2409.09435

Behavior Tree Generation using Large Language Models for Sequential Manipulation Planning with Human Instructions and Feedback

In this work, we propose an LLM-based behavior tree (BT) generation framework to leverage the strengths of both LLMs and BTs for sequential manipulation planning. To enable human-robot collaborative task planning and enhance intuitive robot programming by nonexperts, the framework takes human instructions to initiate the generation of action sequences and human feedback to refine BT generation at runtime. All presented methods within the framework are tested on a real robotic assembly example, which uses a gear set model from the Siemens Robot Assembly Challenge. We use a single manipulator with a tool-changing mechanism, a common practice in flexible manufacturing, to facilitate robust grasping of a large variety of objects. Experimental results are evaluated regarding success rate, logical coherence, executability, time consumption, and token consumption. To our knowledge, this is the first human-guided LLM-based BT generation framework that unifies various plausible ways of using LLMs to fully generate BTs that are executable on a real testbed while taking into account granular knowledge of tool use.


[196] 2409.09441

PIP-Loco: A Proprioceptive Infinite Horizon Planning Framework for Quadrupedal Robot Locomotion

A core strength of Model Predictive Control (MPC) for quadrupedal locomotion has been its ability to enforce constraints and provide interpretability of the sequence of commands over the horizon. However, despite being able to plan, MPC struggles to scale with task complexity, often failing to achieve robust behavior on rapidly changing surfaces. On the other hand, model-free Reinforcement Learning (RL) methods have outperformed MPC on multiple terrains, showing emergent motions but inherently lack any ability to handle constraints or perform planning. To address these limitations, we propose a framework that integrates proprioceptive planning with RL, allowing for agile and safe locomotion behaviors through the horizon. Inspired by MPC, we incorporate an internal model that includes a velocity estimator and a Dreamer module. During training, the framework learns an expert policy and an internal model that are co-dependent, facilitating exploration for improved locomotion behaviors. During deployment, the Dreamer module solves an infinite-horizon MPC problem, adapting actions and velocity commands to respect the constraints. We validate the robustness of our training framework through ablation studies on internal model components and demonstrate improved robustness to training noise. Finally, we evaluate our approach across multi-terrain scenarios in both simulation and hardware.


[197] 2409.09442

Two-grid convergence theory for symmetric positive semidefinite linear systems

This paper is devoted to the convergence theory of two-grid methods for symmetric positive semidefinite linear systems, with particular focus on the singular case. In the case where the Moore--Penrose inverse of coarse-grid matrix is used as a coarse solver, we derive a succinct identity for characterizing the convergence factor of two-grid methods. More generally, we present some convergence estimates for two-grid methods with approximate coarse solvers, including both linear and general cases. A key feature of our analysis is that it does not require any additional assumptions on the system matrix, especially on its null space.
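
For orientation, here is the standard two-grid setting as background (my summary, not the paper's exact identity): with a smoother $M$, prolongation $P$, and coarse matrix $A_c = P^{T}AP$, and the Moore--Penrose inverse $A_c^{+}$ as coarse solver, the two-grid iteration operator reads

$$E_{\mathrm{TG}} = \bigl(I - M^{-T}A\bigr)\,\bigl(I - P A_c^{+} P^{T} A\bigr)\,\bigl(I - M^{-1}A\bigr),$$

and the convergence factor studied in such analyses is the (semi)norm of $E_{\mathrm{TG}}$ induced by $A$; in the singular case this seminorm is taken over the complement of the null space of $A$.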


[198] 2409.09444

KAN-HyperpointNet for Point Cloud Sequence-Based 3D Human Action Recognition

Point cloud sequence-based 3D action recognition has achieved impressive performance and efficiency. However, existing point cloud sequence modeling methods cannot adequately balance the precision of limb micro-movements with the integrity of posture macro-structure, leading to the loss of crucial information cues in action inference. To overcome this limitation, we introduce D-Hyperpoint, a novel data type generated through a D-Hyperpoint Embedding module. D-Hyperpoint encapsulates both regional-momentary motion and global-static posture, effectively summarizing the unit human action at each moment. In addition, we present a D-Hyperpoint KANsMixer module, which is recursively applied to nested groupings of D-Hyperpoints to learn action discrimination information and creatively integrates Kolmogorov-Arnold Networks (KAN) to enhance spatio-temporal interaction within D-Hyperpoints. Finally, we propose KAN-HyperpointNet, a spatio-temporal decoupled network architecture for 3D action recognition. Extensive experiments on two public datasets, MSR Action3D and NTU-RGB+D 60, demonstrate the state-of-the-art performance of our method.


[199] 2409.09446

MulCPred: Learning Multi-modal Concepts for Explainable Pedestrian Action Prediction

Pedestrian action prediction is of great significance for many applications such as autonomous driving. However, state-of-the-art methods lack the explainability needed to make trustworthy predictions. In this paper, a novel framework called MulCPred is proposed that explains its predictions based on multi-modal concepts represented by training samples. Previous concept-based methods have limitations including: 1) they cannot be directly applied to multi-modal cases; 2) they lack the locality to attend to details in the inputs; 3) they suffer from mode collapse. These limitations are tackled accordingly through the following approaches: 1) a linear aggregator to integrate the activation results of the concepts into predictions, which associates concepts of different modalities and provides ante-hoc explanations of the relevance between the concepts and the predictions; 2) a channel-wise recalibration module that attends to local spatiotemporal regions, which equips the concepts with locality; 3) a feature regularization loss that encourages the concepts to learn diverse patterns. MulCPred is evaluated on multiple datasets and tasks. Both qualitative and quantitative results demonstrate that MulCPred is promising in improving the explainability of pedestrian action prediction without obvious performance degradation. Furthermore, by removing unrecognizable concepts from MulCPred, cross-dataset prediction performance improves, indicating the feasibility of further generalization of MulCPred.


[200] 2409.09447

Fast and Adaptive Bulk Loading of Multidimensional Points

Existing methods for bulk loading disk-based multidimensional points involve multiple applications of external sorting. In this paper, we propose techniques that apply linear scan, and are therefore significantly faster. The resulting FMBI Index possesses several desirable properties, including almost full and square nodes with zero overlap, and has excellent query performance. As a second contribution, we develop an adaptive version AMBI, which utilizes the query workload to build a partial index only for parts of the data space that contain query results. Finally, we extend FMBI and AMBI to parallel bulk loading and query processing in distributed systems. An extensive experimental evaluation with real datasets confirms that FMBI and AMBI clearly outperform competitors in terms of combined index construction and query processing cost, sometimes by orders of magnitude.


[201] 2409.09451

On the Generalizability of Foundation Models for Crop Type Mapping

Foundation models pre-trained using self-supervised and weakly-supervised learning have shown powerful transfer learning capabilities on various downstream tasks, including language understanding, text generation, and image recognition. Recently, the Earth observation (EO) field has produced several foundation models pre-trained directly on multispectral satellite imagery (e.g., Sentinel-2) for applications like precision agriculture, wildfire and drought monitoring, and natural disaster response. However, few studies have investigated the ability of these models to generalize to new geographic locations, and potential concerns of geospatial bias -- models trained on data-rich developed countries not transferring well to data-scarce developing countries -- remain. We investigate the ability of popular EO foundation models to transfer to new geographic regions in the agricultural domain, where differences in farming practices and class imbalance make transfer learning particularly challenging. We first select six crop classification datasets across five continents, normalizing for dataset size and harmonizing classes to focus on four major cereal grains: maize, soybean, rice, and wheat. We then compare three popular foundation models, pre-trained on SSL4EO-S12, SatlasPretrain, and ImageNet, using in-distribution (ID) and out-of-distribution (OOD) evaluation. Experiments show that pre-trained weights designed explicitly for Sentinel-2, such as SSL4EO-S12, outperform general pre-trained weights like ImageNet. Furthermore, the benefits of pre-training on OOD data are the most significant when only 10--100 ID training samples are used. Transfer learning and pre-training with OOD and limited ID data show promising applications, as many developing regions have scarce crop type labels. All harmonized datasets and experimental code are open-source and available for download.


[202] 2409.09453

How persuadees' psychological states and traits shape digital persuasion: Lessons learnt from mobile burglary prevention encounters

Persuasion can be a complex process. Persuaders may need to use a high degree of sensitivity to understand a persuadee's states, traits, and values. They must navigate the nuanced field of human interaction. Research on persuasive systems often overlooks the delicate nature of persuasion, favoring "one-size-fits-all" approaches and risking the alienation of certain users. This study examines the considerations made by professional burglary prevention advisors when persuading clients to enhance their home security. It illustrates how advisors adapt their approaches based on each advisee's states and traits. Specifically, the study reveals how advisors deviate from intended and technologically supported practices to accommodate the individual attributes of their advisees. It identifies multiple advisee-specific aspects likely to moderate the effectiveness of persuasive efforts and suggests strategies for addressing these differences. These findings are relevant for designing personalized persuasive systems that rely on conversational modes of persuasion.


[203] 2409.09455

Learning Keypoints for Multi-Agent Behavior Analysis using Self-Supervision

The study of social interactions and collective behaviors through multi-agent video analysis is crucial in biology. While self-supervised keypoint discovery has emerged as a promising solution to reduce the need for manual keypoint annotations, existing methods often struggle with videos containing multiple interacting agents, especially those of the same species and color. To address this, we introduce B-KinD-multi, a novel approach that leverages pre-trained video segmentation models to guide keypoint discovery in multi-agent scenarios. This eliminates the need for time-consuming manual annotations on new experimental settings and organisms. Extensive evaluations demonstrate improved keypoint regression and downstream behavioral classification in videos of flies, mice, and rats. Furthermore, our method generalizes well to other species, including ants, bees, and humans, highlighting its potential for broad applications in automated keypoint annotation for multi-agent behavior analysis. Code available at: https://danielpkhalil.github.io/B-KinD-Multi


[204] 2409.09457

When the System does not Fit: Coping Strategies of Employment Consultants

Case and knowledge management systems are spread at the frontline across public agencies. However, such systems are dedicated for the collaboration within the agency rather than for the face-to-face interaction with the clients. If used as a collaborative resource at the frontline, case and knowledge management systems might disturb the service provision by displaying unfiltered internal information, disclosing private data of other clients, or revealing the limits of frontline employees' competence (if they cannot explain something) or their authority (if they cannot override something). Observation in the German Public Employment Agency shows that employment consultants make use of various coping strategies during face-to-face consultations to extend existing boundaries set by the case and knowledge management systems and by the rules considering their usage. The analysis of these coping strategies unveils the forces that shape the conduct of employment consultants during their contacts with clients: the consultants' own understanding of work, the actual and the perceived needs of the clients, and the political mission as well as the internal rules of the employment agency. The findings form a twofold contribution: First, they contribute to the discourse on work in employment agencies by illustrating how the complexities of social welfare apparatus demonstrate themselves in singular behavioural patterns. Second, they contribute to the discourse on screen-level bureaucracy by depicting the consultants as active and conscious mediators rather than passive interfaces between the system and the client.


[205] 2409.09460

Distribution network reconfiguration for operational objectives: reducing voltage violation incidents and network losses

As the share of distributed energy resources (DER) in the low-voltage distribution network (DN) is expected to rise, higher and more variable electric load and generation could stress the DNs, leading to increased congestion and power losses. To address these challenges, distribution system operators (DSOs) will have to invest in strengthening the network infrastructure in the coming decade. This paper looks to minimize the need for flexibility through dynamic DN reconfiguration. European DNs predominantly use manual switches, so the network configuration is set for long periods of time and the opportunity to benefit from short-term dynamic switching is missed. In this paper, a method is proposed that identifies the best manual switches to replace with remotely controlled switches, based on their performance in terms of avoided voltage congestion incidents and DN power losses. The developed method is an exhaustive search algorithm that divides the problem into three subsequent parts: radial configuration identification, multi-period power flow, and impact assessment of reconfigurable switch replacement on DN operation. A numerical evaluation shows that replacing the two top-ranked switches in the test case reduced the power losses by 4.51% and the voltage constraint violations by 38.17%. Thus, investing in only a few reconfigurable switches can substantially improve the operational efficiency of DNs.


[206] 2409.09461

TX-Gen: Multi-Objective Optimization for Sparse Counterfactual Explanations for Time-Series Classification

In time-series classification, understanding model decisions is crucial for their application in high-stakes domains such as healthcare and finance. Counterfactual explanations, which provide insights by presenting alternative inputs that change model predictions, offer a promising solution. However, existing methods for generating counterfactual explanations for time-series data often struggle with balancing key objectives like proximity, sparsity, and validity. In this paper, we introduce TX-Gen, a novel algorithm for generating counterfactual explanations based on the Non-dominated Sorting Genetic Algorithm II (NSGA-II). TX-Gen leverages evolutionary multi-objective optimization to find a diverse set of counterfactuals that are both sparse and valid, while maintaining minimal dissimilarity to the original time series. By incorporating a flexible reference-guided mechanism, our method improves the plausibility and interpretability of the counterfactuals without relying on predefined assumptions. Extensive experiments on benchmark datasets demonstrate that TX-Gen outperforms existing methods in generating high-quality counterfactuals, making time-series models more transparent and interpretable.
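
To make the three objectives concrete, here is a sketch of how a candidate counterfactual might be scored before NSGA-II's non-dominated sorting; the definitions below are common choices assumed for illustration, not TX-Gen's exact formulation, and model is any classifier with an sklearn-style predict.

# Scoring one candidate counterfactual on proximity, sparsity, validity.
import numpy as np

def objectives(x_orig, x_cf, model, target_class):
    proximity = np.linalg.norm(x_cf - x_orig)      # stay close to original
    sparsity = np.count_nonzero(~np.isclose(x_cf, x_orig, atol=1e-6))
    # validity as a loss: 0 when the model predicts the target class
    validity = 0.0 if model.predict(x_cf[None])[0] == target_class else 1.0
    return proximity, sparsity, validity

def dominates(a, b):
    """Pareto dominance used by NSGA-II's non-dominated sorting."""
    return (all(ai <= bi for ai, bi in zip(a, b))
            and any(ai < bi for ai, bi in zip(a, b)))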


[207] 2409.09462

Pen-and-paper Rituals in Service Interaction: Combining High-touch and High-tech in Financial Advisory Encounters

Advisory services are ritualized encounters between an expert and an advisee. An empathetic, high-touch relationship between those two parties has been identified as the key to a successful advisory encounter. To facilitate this high-touch interaction, advisors established rituals that stress the unique, individual character of each client and each single encounter. Simultaneously, organizations like banks or insurance companies rolled out tools and technologies for use in advisory services to offer a uniform experience and consistent quality across branches and advisors. As a consequence, advisors were caught between the high-touch and high-tech aspects of an advisory service. This manuscript presents a system that accommodates high-touch rituals and practices and combines them with high-tech collaboration. The proposed solution augments pen-and-paper practices with digital content and affords new material performances coherent with the existing rituals. The evaluation in realistic mortgage advisory services unveils the potential of mixed reality approaches for application in professional, institutional settings. The blow-by-blow analysis of the conversations reveals how an advisory service can become equally high-tech and high-touch thanks to a careful, ritual-oriented system design. In doing so, this paper presents a solution to the tension between the high-touch and high-tech tendencies in advisory services.


[208] 2409.09464

Rethinking the Influence of Source Code on Test Case Generation

Large language models (LLMs) have been widely applied to assist test generation with the source code under test provided as the context. This paper aims to answer the question: if the source code under test is incorrect, will LLMs be misguided when generating tests? The effectiveness of test cases is measured by their accuracy, coverage, and bug detection effectiveness. Our evaluation results with five open- and six closed-source LLMs on four datasets demonstrate that incorrect code can significantly mislead LLMs in generating correct, high-coverage, and bug-revealing tests. For instance, on the HumanEval dataset, LLMs achieve 80.45% test accuracy when provided with task descriptions and correct code, but only 57.12% when given task descriptions and incorrect code. For the APPS dataset, prompts with correct code yield tests that detect 39.85% of the bugs, while prompts with incorrect code detect only 19.61%. These findings have important implications for the deployment of LLM-based testing: using it on mature code may help protect against future regressions, but on early-stage, immature code it may simply bake in errors. Our findings also underscore the need for further research to improve LLMs' resilience against incorrect code when generating reliable and bug-revealing tests.


[209] 2409.09466

Improved Physics-Informed Neural Network based AC Power Flow for Distribution Networks

Power flow analysis plays a critical role in the control and operation of power systems. The high computational burden of traditional solution methods has led to a shift towards data-driven approaches, exploiting the availability of digital metering data. However, data-driven approaches such as deep learning have not yet won the trust of operators, as they are agnostic to the underlying physical model and perform poorly in regimes with limited observability. To address these challenges, this paper proposes a new physics-informed model. More specifically, a novel physics-informed loss function is developed that can be used to train (deep) neural networks aimed at power flow simulation. The loss function is not only based on the theoretical AC power flow equations that govern the problem but also incorporates real physical line losses, resulting in higher loss accuracy and increased learning potential. The proposed model is used to train a Graph Neural Network (GNN) and is evaluated on a small 3-bus test case, both against another physics-informed GNN that does not incorporate physical losses and against a model-free technique. The validation results show that the proposed model outperforms the conventional physics-informed network on all performance metrics used. Even more interesting, the model shows strong prediction capabilities when tested on scenarios outside the training sample set, something that is a substantial deficiency of model-free techniques.
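
For concreteness, here is a minimal sketch of a physics-informed loss built from the standard polar-form AC power flow equations; the paper's actual loss additionally incorporates real line-loss terms, which are not reproduced here. G and B denote the real and imaginary parts of the bus admittance matrix.

# Physics residual from the AC power flow equations (standard form):
#   P_i = V_i * sum_j V_j (G_ij cos(th_i - th_j) + B_ij sin(th_i - th_j))
#   Q_i = V_i * sum_j V_j (G_ij sin(th_i - th_j) - B_ij cos(th_i - th_j))
import torch

def pf_residual(V, theta, G, B, P_inj, Q_inj):
    """Mismatch of predicted voltage magnitudes/angles vs. injections."""
    dth = theta[:, None] - theta[None, :]            # theta_i - theta_j
    P = V * ((G * torch.cos(dth) + B * torch.sin(dth)) @ V)
    Q = V * ((G * torch.sin(dth) - B * torch.cos(dth)) @ V)
    return ((P - P_inj) ** 2 + (Q - Q_inj) ** 2).mean()

# During training: loss = mse(prediction, labels) + lam * pf_residual(...)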


[210] 2409.09467

Keeping Humans in the Loop: Human-Centered Automated Annotation with Generative AI

Automated text annotation is a compelling use case for generative large language models (LLMs) in social media research. Recent work suggests that LLMs can achieve strong performance on annotation tasks; however, these studies evaluate LLMs on a small number of tasks and likely suffer from contamination due to a reliance on public benchmark datasets. Here, we test a human-centered framework for responsibly evaluating artificial intelligence tools used in automated annotation. We use GPT-4 to replicate 27 annotation tasks across 11 password-protected datasets from recently published computational social science articles in high-impact journals. For each task, we compare GPT-4 annotations against human-annotated ground-truth labels and against annotations from separate supervised classification models fine-tuned on human-generated labels. Although the quality of LLM labels is generally high, we find significant variation in LLM performance across tasks, even within datasets. Our findings underscore the importance of a human-centered workflow and careful evaluation standards: Automated annotations significantly diverge from human judgment in numerous scenarios, despite various optimization strategies such as prompt tuning. Grounding automated annotation in validation labels generated by humans is essential for responsible evaluation.


[211] 2409.09471

Randomized sketched TT-GMRES for linear systems with tensor structure

In the last decade, tensors have shown their potential as valuable tools for various tasks in numerical linear algebra. While most of the research has focused on how to compress a given tensor in order to maintain information as well as reduce the storage demand for its allocation, the solution of linear tensor equations is a less explored avenue. Even if many of the routines available in the literature are based on alternating minimization schemes (ALS), we pursue a different path and utilize Krylov methods instead. The use of Krylov methods in the tensor realm is not new. However, these routines often turn out to be rather expensive in terms of computational cost, and ALS procedures are preferred in practice. We enhance Krylov methods for linear tensor equations with a panel of diverse randomization-based strategies that remarkably increase the efficiency of these solvers, making them competitive with state-of-the-art ALS schemes. The up-to-date randomized approaches we employ range from sketched Krylov methods with incomplete orthogonalization and structured sketching transformations to streaming algorithms for tensor rounding. The promising performance of our new solver for linear tensor equations is demonstrated by many numerical results.


[212] 2409.09473

Learning to enhance multi-legged robot on rugged landscapes

Navigating rugged landscapes poses significant challenges for legged locomotion. Multi-legged robots (those with six or more legs) offer a promising solution for such terrains, largely due to their inherent high static stability, resulting from a low center of mass and wide base of support; such systems require minimal effort to maintain balance. Recent studies have shown that a linear controller, which modulates the vertical body undulation of a multi-legged robot in response to shifts in terrain roughness, can ensure reliable mobility on challenging terrains. However, the potential of a learning-based control framework that adjusts multiple parameters to address terrain heterogeneity remains underexplored. We posit that the development of an experimentally validated physics-based simulator for this robot can rapidly advance capabilities by allowing wide parameter-space exploration. Here we develop a MuJoCo-based simulator tailored to this robotic platform and use the simulation to develop a reinforcement learning-based control framework that dynamically adjusts horizontal and vertical body undulation and limb stepping in real time. Our approach improves robot performance in simulation, laboratory experiments, and outdoor tests. Notably, our real-world experiments reveal that the learning-based controller achieves a 30% to 50% increase in speed compared to a linear controller, which only modulates vertical body waves. We hypothesize that the superior performance of the learning-based controller arises from its ability to adjust multiple parameters simultaneously, including limb stepping, horizontal body wave, and vertical body wave.


[213] 2409.09475

MALADY: Multiclass Active Learning with Auction Dynamics on Graphs

Active learning enhances the performance of machine learning methods, particularly in semi-supervised cases, by judiciously selecting a limited number of unlabeled data points for labeling, with the goal of improving the performance of an underlying classifier. In this work, we introduce the Multiclass Active Learning with Auction Dynamics on Graphs (MALADY) framework which leverages the auction dynamics algorithm on similarity graphs for efficient active learning. In particular, we generalize the auction dynamics algorithm on similarity graphs for semi-supervised learning in [24] to incorporate a more general optimization functional. Moreover, we introduce a novel active learning acquisition function that uses the dual variable of the auction algorithm to measure the uncertainty in the classifier to prioritize queries near the decision boundaries between different classes. Lastly, using experiments on classification tasks, we evaluate the performance of our proposed method and show that it exceeds that of comparison algorithms.
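
A generic sketch of the margin-style uncertainty rule described: query the unlabeled points whose top-two class scores are closest. In the paper the scores come from the auction algorithm's dual variables; plain class scores stand in for them here as an assumption.

# Smallest-margin acquisition over a matrix of per-class scores.
import numpy as np

def margin_acquisition(scores, batch_size):
    """scores: (n_unlabeled, n_classes); returns indices to label next."""
    part = np.partition(scores, -2, axis=1)
    margin = part[:, -1] - part[:, -2]       # best minus second-best score
    return np.argsort(margin)[:batch_size]   # smallest margins first

queries = margin_acquisition(np.random.rand(1000, 4), batch_size=10)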


[214] 2409.09479

MAC-VO: Metrics-aware Covariance for Learning-based Stereo Visual Odometry

We propose MAC-VO, a novel learning-based stereo VO that leverages learned metrics-aware matching uncertainty for dual purposes: selecting keypoints and weighing residuals in pose graph optimization. Compared to traditional geometric methods that prioritize texture-affluent features like edges, our keypoint selector employs the learned uncertainty to filter out low-quality features based on global inconsistency. In contrast to learning-based algorithms that model a scale-agnostic diagonal weight matrix for covariance, we design a metrics-aware covariance model to capture the spatial error during keypoint registration and the correlations between different axes. Integrating this covariance model into pose graph optimization enhances the robustness and reliability of pose estimation, particularly in challenging environments with varying illumination, feature density, and motion patterns. On public benchmark datasets, MAC-VO outperforms existing VO algorithms and even some SLAM algorithms in challenging environments. The covariance map also provides valuable information about the reliability of the estimated poses, which can benefit decision-making for autonomous systems.


[215] 2409.09481

Scabbard: An Exploratory Study on Hardware Aware Design Choices of Learning with Rounding-based Key Encapsulation Mechanisms

Recently, the construction of cryptographic schemes based on hard lattice problems has gained immense popularity. Apart from being quantum resistant, lattice-based cryptography allows a wide range of variations in the underlying hard problem. As cryptographic schemes can work in different environments under different operational constraints such as memory footprint, silicon area, efficiency, power requirement, etc., such variations in the underlying hard problem are very useful for designers to construct different cryptographic schemes. In this work, we explore various design choices of lattice-based cryptography and their impact on performance in the real world. In particular, we propose a suite of key-encapsulation mechanisms based on the learning with rounding problem with a focus on improving different performance aspects of lattice-based cryptography. Our suite consists of three schemes. Our first scheme is Florete, which is designed for efficiency. The second scheme is Espada, which is aimed at improving parallelization, flexibility, and memory footprint. The last scheme is Sable, which can be considered an improved version in terms of key sizes and parameters of the Saber key-encapsulation mechanism, one of the finalists in the National Institute of Standards and Technology's post-quantum standardization procedure. In this work, we have described our design rationale behind each scheme. Further, to demonstrate the justification of our design decisions, we have provided software and hardware implementations. Our results show Florete is faster than most state-of-the-art KEMs on software and hardware platforms. The scheme Espada requires less memory and area than the implementations of most state-of-the-art schemes. The implementations of Sable maintain a trade-off between Florete and Espada regarding performance and memory requirements on hardware and software platforms.


[216] 2409.09485

Enumerating Minimal Unsatisfiable Cores of LTLf formulas

Linear Temporal Logic over finite traces ($\text{LTL}_f$) is a widely used formalism with applications in AI, process mining, model checking, and more. The primary reasoning task for $\text{LTL}_f$ is satisfiability checking; yet, the recent focus on explainable AI has increased interest in analyzing inconsistent formulas, making the enumeration of minimal explanations for infeasibility a relevant task also for $\text{LTL}_f$. This paper introduces a novel technique for enumerating minimal unsatisfiable cores (MUCs) of an $\text{LTL}_f$ specification. The main idea is to encode an $\text{LTL}_f$ formula into an Answer Set Programming (ASP) specification, such that the minimal unsatisfiable subsets (MUSes) of the ASP program directly correspond to the MUCs of the original $\text{LTL}_f$ specification. Leveraging recent advancements in ASP solving yields a MUC enumerator achieving good performance in experiments conducted on established benchmarks from the literature.
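
As background on the underlying primitive, here is a minimal sketch of deletion-based shrinking of an unsatisfiable core against an abstract satisfiability oracle; the paper instead enumerates MUSes of the ASP encoding, which map back to MUCs of the $\text{LTL}_f$ specification.

```python
# A minimal sketch of deletion-based MUS shrinking, assuming an
# abstract `is_sat(constraints)` oracle supplied by the caller; the
# paper's enumerator works on the ASP encoding rather than raw clauses.
def shrink_to_mus(constraints, is_sat):
    """Reduce an unsatisfiable set of constraints to a minimal one."""
    core = list(constraints)
    for c in list(core):
        trial = [d for d in core if d is not c]
        if not is_sat(trial):     # still unsatisfiable without c?
            core = trial          # then c is redundant; drop it
    return core                   # every remaining constraint is necessary
```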


[217] 2409.09491

Robot Learning as an Empirical Science: Best Practices for Policy Evaluation

The robot learning community has made great strides in recent years, proposing new architectures and showcasing impressive new capabilities; however, the dominant metric used in the literature, especially for physical experiments, is "success rate", i.e., the percentage of runs that were successful. Furthermore, it is common for papers to report this number with little to no information regarding the number of runs, the initial conditions, and the success criteria, little to no narrative description of the behaviors and failures observed, and little to no statistical analysis of the findings. In this paper we argue that to move the field forward, researchers should provide a nuanced evaluation of their methods, especially when evaluating and comparing learned policies on physical robots. To do so, we propose best practices for future evaluations: explicitly reporting the experimental conditions, evaluating several metrics designed to complement success rate, conducting statistical analysis, and adding a qualitative description of failure modes. We illustrate these through an evaluation on physical robots of several learned policies for manipulation tasks.
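
As one concrete example of the kind of statistical reporting advocated here, a raw success rate can be accompanied by a binomial confidence interval; the sketch below computes the standard Wilson score interval (the 95% z-value is an illustrative choice, not taken from the paper).

```python
# A minimal sketch: a Wilson score interval around a binomial success
# rate, so "17/20 successes" is reported as an interval, not a point.
import math

def wilson_interval(successes: int, trials: int, z: float = 1.96):
    """95% Wilson score interval for a binomial success rate."""
    if trials == 0:
        raise ValueError("need at least one trial")
    p_hat = successes / trials
    denom = 1 + z ** 2 / trials
    centre = (p_hat + z ** 2 / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(
        p_hat * (1 - p_hat) / trials + z ** 2 / (4 * trials ** 2))
    return centre - half, centre + half

print(wilson_interval(17, 20))   # roughly (0.64, 0.95), not just "85%"
```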


[218] 2409.09493

Hacking, The Lazy Way: LLM Augmented Pentesting

Security researchers are continually challenged by the need to stay current with rapidly evolving cybersecurity research, tools, and techniques. This constant cycle of learning, unlearning, and relearning, combined with the repetitive tasks of sifting through documentation and analyzing data, often hinders productivity and innovation. This has led to a disparity where only organizations with substantial resources can access top-tier security experts, while others rely on firms with less skilled researchers who focus primarily on compliance rather than actual security. We introduce "LLM Augmented Pentesting," demonstrated through a tool named "Pentest Copilot," to address this gap. This approach integrates Large Language Models into penetration testing workflows. Our research includes a "chain of thought" mechanism to streamline token usage and boost performance, as well as a unique Retrieval-Augmented Generation implementation to minimize hallucinations and keep models aligned with the latest techniques. Additionally, we propose a novel file analysis approach, enabling LLMs to understand files. Furthermore, we highlight a unique infrastructure system that, if implemented, can support in-browser assisted penetration testing, offering a robust platform for cybersecurity professionals. These advancements mark a significant step toward bridging the gap between automated tools and human expertise, offering a powerful solution to the challenges faced by modern cybersecurity teams.


[219] 2409.09495

Protecting Vehicle Location Privacy with Contextually-Driven Synthetic Location Generation

Geo-obfuscation is a Location Privacy Protection Mechanism used in location-based services that allows users to report obfuscated locations instead of exact ones. A formal privacy criterion, geo-indistinguishability (Geo-Ind), requires real locations to be hard to distinguish from nearby locations (by attackers) based on their obfuscated representations. However, Geo-Ind often fails to consider context, such as road networks and vehicle traffic conditions, making it less effective in protecting the location privacy of vehicles, whose mobility is heavily influenced by these factors. In this paper, we introduce VehiTrack, a new threat model to demonstrate the vulnerability of Geo-Ind in protecting vehicle location privacy from context-aware inference attacks. Our experiments demonstrate that VehiTrack can accurately determine exact vehicle locations from obfuscated data, reducing average inference errors by 61.20% with Laplacian noise and 47.35% with linear programming (LP) compared to traditional Bayesian attacks. By using contextual data like road networks and traffic flow, VehiTrack effectively eliminates a significant number of seemingly "impossible" locations during its search for the actual location of the vehicles. Based on these insights, we propose TransProtect, a new geo-obfuscation approach that limits obfuscation to realistic vehicle movement patterns, complicating attackers' ability to differentiate obfuscated from actual locations. Our results show that TransProtect increases VehiTrack's inference error by 57.75% with Laplacian noise and 27.21% with LP, significantly enhancing protection against these attacks.


[220] 2409.09497

Multi-Scale Grouped Prototypes for Interpretable Semantic Segmentation

Prototypical part learning is emerging as a promising approach for making semantic segmentation interpretable. The model selects real patches seen during training as prototypes and constructs the dense prediction map based on the similarity between parts of the test image and the prototypes. This improves interpretability since the user can inspect the link between the predicted output and the patterns learned by the model in terms of prototypical information. In this paper, we propose a method for interpretable semantic segmentation that leverages multi-scale image representation for prototypical part learning. First, we introduce a prototype layer that explicitly learns diverse prototypical parts at several scales, leading to multi-scale representations in the prototype activation output. Then, we propose a sparse grouping mechanism that produces multi-scale sparse groups of these scale-specific prototypical parts. This provides a deeper understanding of the interactions between multi-scale object representations while enhancing the interpretability of the segmentation model. The experiments conducted on Pascal VOC, Cityscapes, and ADE20K demonstrate that the proposed method increases model sparsity, improves interpretability over existing prototype-based methods, and narrows the performance gap with the non-interpretable counterpart models. Code is available at github.com/eceo-epfl/ScaleProtoSeg.


[221] 2409.09500

A Data-Informed Analysis of Scalable Supervision for Safety in Autonomous Vehicle Fleets

Autonomous driving is a highly anticipated approach toward eliminating roadway fatalities. At the same time, the bar for safety is both high and costly to verify. This work considers the role of remotely-located human operators supervising a fleet of autonomous vehicles (AVs) for safety. Such a 'scalable supervision' concept was previously proposed to bridge the gap between still-maturing autonomy technology and the pressure to begin commercial offerings of autonomous driving. The present article proposes DISCES, a framework for Data-Informed Safety-Critical Event Simulation, to investigate the practicality of this concept from a dynamic network loading standpoint. With a focus on the safety-critical context of AVs merging into mixed-autonomy traffic, vehicular arrival processes at 1,097 highway merge points are modeled using microscopic traffic reconstruction with historical data from interstates across three California counties. Combined with a queuing theoretic model, these results characterize the dynamic supervision requirements and thereby scalability of the teleoperation approach. Across all scenarios we find reductions in operator requirements greater than 99% as compared to in-vehicle supervisors for the time period analyzed. The work also demonstrates two methods for reducing these empirical supervision requirements: (i) the use of cooperative connected AVs -- which are shown to produce an average 3.67 orders-of-magnitude system reliability improvement across the scenarios studied -- and (ii) aggregation across larger regions.
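
For background on the kind of queuing computation such an analysis involves, the sketch below sizes an operator pool with the classical Erlang-C (M/M/c) formula; this generic textbook model is only illustrative and is far simpler than DISCES's data-informed simulation.

```python
# A minimal Erlang-C (M/M/c) illustration: the smallest operator pool c
# such that the probability an incoming event must wait stays below a
# target. The offered load and wait threshold are illustrative inputs.
import math

def erlang_c(c, offered_load):
    a = offered_load                      # a = arrival rate / service rate
    rho = a / c
    if rho >= 1:
        return 1.0                        # unstable: events always queue
    top = a ** c / math.factorial(c) / (1 - rho)
    bottom = sum(a ** k / math.factorial(k) for k in range(c)) + top
    return top / bottom                   # P(wait > 0)

def min_operators(offered_load, p_wait_max=0.01):
    c = max(1, math.ceil(offered_load))
    while erlang_c(c, offered_load) > p_wait_max:
        c += 1
    return c

print(min_operators(offered_load=5.0))    # minimal pool size for this load
```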


[222] 2409.09501

Synthetic4Health: Generating Annotated Synthetic Clinical Letters

Since clinical letters contain sensitive information, clinically related datasets cannot be widely applied in model training, medical research, and teaching. This work aims to generate reliable, diverse, and de-identified synthetic clinical letters. To achieve this goal, we explored different pre-trained language models (PLMs) for masking and generating text. After that, we worked on Bio\_ClinicalBERT, a high-performing model, and experimented with different masking strategies. Both qualitative and quantitative methods were used for evaluation. Additionally, a downstream task, Named Entity Recognition (NER), was implemented to assess the usability of these synthetic letters. The results indicate that: 1) encoder-only models outperform encoder-decoder models; 2) among encoder-only models, those trained on general corpora perform comparably to those trained on clinical data when clinical information is preserved; 3) preserving clinical entities and document structure aligns better with our objectives than simply fine-tuning the model; 4) different masking strategies affect the quality of synthetic clinical letters: masking stopwords has a positive impact, while masking nouns or verbs has a negative effect; 5) for evaluation, BERTScore should be the primary quantitative metric, with other metrics serving as supplementary references; 6) contextual information does not significantly impact the models' understanding, so the synthetic clinical letters have the potential to replace the original ones in downstream tasks.
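
A minimal sketch of the mask-and-infill generation step follows, assuming the public Bio_ClinicalBERT checkpoint on Hugging Face exposes a masked-language-modeling head; the paper's specific masking strategies (e.g., masking stopwords while preserving clinical entities) are not reproduced here.

```python
# A minimal mask-and-infill sketch with a pre-trained PLM; the
# checkpoint name (and its MLM head) is an assumption for illustration.
from transformers import pipeline

fill = pipeline("fill-mask", model="emilyalsentzer/Bio_ClinicalBERT")
letter = "The patient was admitted with [MASK] chest pain."
for candidate in fill(letter, top_k=3):
    print(candidate["sequence"], round(candidate["score"], 3))
```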


[223] 2409.09502

One missing piece in Vision and Language: A Survey on Comics Understanding

Vision-language models have recently evolved into versatile systems capable of high performance across a range of tasks, such as document understanding, visual question answering, and grounding, often in zero-shot settings. Comics Understanding, a complex and multifaceted field, stands to greatly benefit from these advances. Comics, as a medium, combine rich visual and textual narratives, challenging AI models with tasks that span image classification, object detection, instance segmentation, and deeper narrative comprehension through sequential panels. However, the unique structure of comics -- characterized by creative variations in style, reading order, and non-linear storytelling -- presents a set of challenges distinct from those in other visual-language domains. In this survey, we present a comprehensive review of Comics Understanding from both dataset and task perspectives. Our contributions are fivefold: (1) We analyze the structure of the comics medium, detailing its distinctive compositional elements; (2) We survey the widely used datasets and tasks in comics research, emphasizing their role in advancing the field; (3) We introduce the Layer of Comics Understanding (LoCU) framework, a novel taxonomy that redefines vision-language tasks within comics and lays the foundation for future work; (4) We provide a detailed review and categorization of existing methods following the LoCU framework; (5) Finally, we highlight current research challenges and propose directions for future exploration, particularly in the context of vision-language models applied to comics. This survey is the first to propose a task-oriented framework for comics intelligence and aims to guide future research by addressing critical gaps in data availability and task definition. A project associated with this survey is available at https://github.com/emanuelevivoli/awesome-comics-understanding.


[224] 2409.09504

Uddessho: An Extensive Benchmark Dataset for Multimodal Author Intent Classification in Low-Resource Bangla Language

With the increasing popularity of daily information sharing and acquisition on the Internet, this paper introduces an innovative approach for intent classification in the Bangla language, focusing on social media posts where individuals share their thoughts and opinions. The proposed method leverages multimodal data with particular emphasis on authorship identification, aiming to understand the underlying purpose behind textual content, especially in the context of varied user-generated posts on social media. Current methods often face challenges in low-resource languages like Bangla, particularly when author traits intricately link with intent, as observed in social media posts. To address this, we present the Multimodal-based Author Bangla Intent Classification (MABIC) framework, utilizing text and images to gain deeper insights into the conveyed intentions. We have created a dataset named "Uddessho," comprising 3,048 instances sourced from social media. Our methodology comprises two approaches for classifying textual intent and multimodal author intent, incorporating early fusion and late fusion techniques. In our experiments, the unimodal approach achieved an accuracy of 64.53% in interpreting Bangla textual intent. In contrast, our multimodal approach significantly outperformed traditional unimodal methods, achieving an accuracy of 76.19%, an improvement of 11.66 percentage points. To the best of our knowledge, this is the first research work on multimodal-based author intent classification for low-resource Bangla social media posts.
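
To illustrate the two fusion strategies mentioned above, a minimal sketch follows; the feature dimensions, layer sizes, and logit-averaging rule are illustrative assumptions, not the MABIC architecture.

```python
# A minimal sketch of early vs. late fusion over precomputed text and
# image feature vectors; all sizes here are illustrative.
import torch
import torch.nn as nn

class EarlyFusion(nn.Module):
    def __init__(self, text_dim=768, img_dim=2048, n_classes=6):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(text_dim + img_dim, 256), nn.ReLU(),
            nn.Linear(256, n_classes))

    def forward(self, text_feat, img_feat):
        # Fuse at the feature level, then classify once.
        return self.head(torch.cat([text_feat, img_feat], dim=-1))

class LateFusion(nn.Module):
    def __init__(self, text_dim=768, img_dim=2048, n_classes=6):
        super().__init__()
        self.text_head = nn.Linear(text_dim, n_classes)
        self.img_head = nn.Linear(img_dim, n_classes)

    def forward(self, text_feat, img_feat):
        # Classify each modality separately, then average the logits.
        return (self.text_head(text_feat) + self.img_head(img_feat)) / 2
```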


[225] 2409.09506

ESPnet-EZ: Python-only ESPnet for Easy Fine-tuning and Integration

We introduce ESPnet-EZ, an extension of the open-source speech processing toolkit ESPnet, aimed at quick and easy development of speech models. ESPnet-EZ focuses on two major aspects: (i) easy fine-tuning and inference of existing ESPnet models on various tasks and (ii) easy integration with popular deep neural network frameworks such as PyTorch-Lightning, Hugging Face transformers and datasets, and Lhotse. By replacing ESPnet design choices inherited from Kaldi with a Python-only, Bash-free interface, we dramatically reduce the effort required to build, debug, and use a new model. For example, to fine-tune a speech foundation model, ESPnet-EZ, compared to ESPnet, reduces the amount of newly written code by 2.7x and the amount of dependent code by 6.7x while dramatically reducing Bash script dependencies. The codebase of ESPnet-EZ is publicly available.


[226] 2409.09509

Learning Nudges for Conditional Cooperation: A Multi-Agent Reinforcement Learning Model

The public goods game describes a social dilemma in which a large proportion of agents act as conditional cooperators (CC): they act cooperatively only if they see others acting cooperatively, satisficing with the social norm of being in line with what others are doing rather than optimizing cooperation. CCs are driven by aspiration-based reinforcement learning shaped by past experiences of interactions with others and by satisficing aspirations. In many real-world settings, reinforcing social norms do not emerge. In this paper, we propose that an optimizing reinforcement learning agent can facilitate cooperation through nudges, i.e., indirect mechanisms that make cooperation happen. The agent's goal is to motivate CCs into cooperation through its own actions, creating social norms that signal that others are cooperating. We introduce a multi-agent reinforcement learning model for public goods games, with 3 CC learning agents using aspirational reinforcement learning and 1 nudging agent using deep reinforcement learning to learn nudges that optimize cooperation. For our nudging agent, we model two distinct reward functions, one maximizing the total game return (sum DRL) and one maximizing the number of cooperative contributions higher than a proportional threshold (prop DRL). Our results show that our aspiration-based RL model for CC agents is consistent with empirically observed CC behavior. Games combining 3 CC RL agents and one nudging RL agent outperform the baseline consisting of 4 CC RL agents only. The sum DRL nudging agent increases the total sum of contributions by 8.22% and the total proportion of cooperative contributions by 12.42%, while the prop DRL nudging agent increases the total sum of contributions by 8.85% and the total proportion of cooperative contributions by 14.87%. Our findings advance the literature on public goods games and reinforcement learning.
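
A minimal sketch of an aspiration-based (satisficing) update in the spirit of Bush-Mosteller learning is given below; the stimulus clipping, learning rate, and exact form are illustrative assumptions rather than the paper's CC agent.

```python
# A minimal Bush-Mosteller-style satisficing update: reinforce the last
# action when its payoff beats the aspiration level, weaken it otherwise.
def bush_mosteller(p_coop, cooperated, payoff, aspiration, lr=0.1):
    """Return the updated probability of cooperating next round."""
    s = max(-1.0, min(1.0, payoff - aspiration))   # clipped stimulus
    if cooperated:
        p_coop += lr * s * ((1 - p_coop) if s >= 0 else p_coop)
    else:
        p_coop -= lr * s * (p_coop if s >= 0 else (1 - p_coop))
    return min(max(p_coop, 0.0), 1.0)

# Example: a satisfied cooperator becomes more likely to cooperate again.
print(bush_mosteller(p_coop=0.5, cooperated=True, payoff=1.2, aspiration=1.0))
```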


[227] 2409.09510

Comparing Retrieval-Augmentation and Parameter-Efficient Fine-Tuning for Privacy-Preserving Personalization of Large Language Models

Privacy-preserving methods for personalizing large language models (LLMs) are relatively under-explored. There are two schools of thought on this topic: (1) generating personalized outputs by personalizing the input prompt through retrieval augmentation from the user's personal information (RAG-based methods), and (2) parameter-efficient fine-tuning of LLMs per user that considers efficiency and space limitations (PEFT-based methods). This paper presents the first systematic comparison between the two approaches on a wide range of personalization tasks using seven diverse datasets. Our results indicate that RAG-based and PEFT-based personalization methods on average yield 14.92% and 1.07% improvements over the non-personalized LLM, respectively. We find that combining RAG with PEFT elevates these improvements to 15.98%. Additionally, we identify a positive correlation between the amount of user data and PEFT's effectiveness, indicating that RAG is a better choice for cold-start users (i.e., users with limited personal data).
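
A minimal sketch of the RAG-based route: retrieve the user's most relevant profile entries and prepend them to the prompt. The token-overlap scorer below is a crude stand-in for the dense retriever such systems would use in practice.

```python
# A minimal sketch of RAG-style personalization via prompt augmentation;
# the overlap scorer and prompt template are illustrative stand-ins.
def retrieve(profile_docs, query, k=3):
    def overlap(doc):
        return len(set(doc.lower().split()) & set(query.lower().split()))
    return sorted(profile_docs, key=overlap, reverse=True)[:k]

def personalized_prompt(profile_docs, query):
    context = "\n".join(retrieve(profile_docs, query))
    return f"User history:\n{context}\n\nTask: {query}"

docs = ["prefers concise summaries", "works in oncology",
        "often asks about Python"]
print(personalized_prompt(docs, "Summarize this Python tutorial"))
```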


[228] 2409.09511

Explaining Deep Learning Embeddings for Speech Emotion Recognition by Predicting Interpretable Acoustic Features

Pre-trained deep learning embeddings have consistently shown superior performance over handcrafted acoustic features in speech emotion recognition (SER). However, unlike acoustic features with clear physical meaning, these embeddings lack clear interpretability. Explaining these embeddings is crucial for building trust in healthcare and security applications and advancing the scientific understanding of the acoustic information that is encoded in them. This paper proposes a modified probing approach to explain deep learning embeddings in the SER space. We predict interpretable acoustic features (e.g., f0, loudness) from (i) the complete set of embeddings and (ii) a subset of the embedding dimensions identified as most important for predicting each emotion. If the subset of the most important dimensions better predicts a given emotion than all dimensions and also predicts specific acoustic features more accurately, we infer those acoustic features are important for the embedding model for the given task. We conducted experiments using the WavLM embeddings and eGeMAPS acoustic features as audio representations, applying our method to the RAVDESS and SAVEE emotional speech datasets. Based on this evaluation, we demonstrate that Energy, Frequency, Spectral, and Temporal categories of acoustic features provide diminishing information to SER in that order, demonstrating the utility of the probing classifier method to relate embeddings to interpretable acoustic features.
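
A minimal sketch of the probing idea, with synthetic stand-ins for the WavLM embeddings and an acoustic target: compare how well all embedding dimensions versus a chosen "important" subset predict the feature.

```python
# A minimal probing sketch: predict an acoustic feature (e.g., mean F0)
# from embeddings with a linear model; data here is synthetic.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
emb = rng.normal(size=(200, 768))        # stand-in for WavLM embeddings
f0 = emb[:, :8].sum(axis=1) + rng.normal(scale=0.1, size=200)  # toy target

important_dims = np.arange(8)            # e.g., dims ranked for one emotion
for name, X in [("all dims", emb), ("subset", emb[:, important_dims])]:
    r2 = cross_val_score(Ridge(alpha=1.0), X, f0, cv=5, scoring="r2").mean()
    print(name, round(r2, 3))            # higher R^2 -> feature is encoded
```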


[229] 2409.09513

Planning Transformer: Long-Horizon Offline Reinforcement Learning with Planning Tokens

Supervised learning approaches to offline reinforcement learning, particularly those utilizing the Decision Transformer, have shown effectiveness in continuous environments and for sparse rewards. However, they often struggle with long-horizon tasks due to the high compounding error of auto-regressive models. To overcome this limitation, we go beyond next-token prediction and introduce Planning Tokens, which contain high-level, long time-scale information about the agent's future. Predicting dual time-scale tokens at regular intervals enables our model to use these long-horizon Planning Tokens as a form of implicit planning to guide its low-level policy and reduce compounding error. This architectural modification significantly enhances performance on long-horizon tasks, establishing a new state-of-the-art in complex D4RL environments. Additionally, we demonstrate that Planning Tokens improve the interpretability of the model's policy through interpretable plan visualisations and attention maps.


[230] 2409.09517

Deep Learning Under Siege: Identifying Security Vulnerabilities and Risk Mitigation Strategies

The wholesale adoption of Deep Learning (DL) models across nearly all aspects of society imposes a unique set of challenges. Centered primarily on the architectures of these models, these risks pose a significant challenge, and addressing them is key to the successful implementation and usage of DL in the future. In this research, we present the security challenges associated with the current DL models deployed into production, and anticipate the challenges of future DL technologies based on advancements in computing, AI, and hardware technologies. In addition, we propose risk mitigation techniques to inhibit these challenges and provide metric-based evaluations to measure the effectiveness of these techniques.


[231] 2409.09520

Enhancing Skin Disease Diagnosis: Interpretable Visual Concept Discovery with SAM Empowerment

Current AI-assisted skin image diagnosis has achieved dermatologist-level performance in classifying skin cancer, driven by rapid advancements in deep learning architectures. However, unlike traditional vision tasks, skin images in general present unique challenges due to the limited availability of well-annotated datasets, complex variations in conditions, and the necessity for detailed interpretations to ensure patient safety. Previous segmentation methods have sought to reduce image noise and enhance diagnostic performance, but these techniques require fine-grained, pixel-level ground truth masks for training. In contrast, with the rise of foundation models, the Segment Anything Model (SAM) has been introduced to facilitate promptable segmentation, enabling the automation of the segmentation process with simple yet effective prompts. Efforts applying SAM predominantly focus on dermatoscopy images, which present more easily identifiable lesion boundaries than clinical photos taken with smartphones. This limitation constrains the practicality of these approaches to real-world applications. To overcome the challenges posed by noisy clinical photos acquired via non-standardized protocols and to improve diagnostic accessibility, we propose a novel Cross-Attentive Fusion framework for interpretable skin lesion diagnosis. Our method leverages SAM to generate visual concepts for skin diseases using prompts, integrating local visual concepts with global image features to enhance model performance. Extensive evaluation on two skin disease datasets demonstrates our proposed method's effectiveness on lesion diagnosis and interpretability.


[232] 2409.09523

Lab2Car: A Versatile Wrapper for Deploying Experimental Planners in Complex Real-world Environments

Human-level autonomous driving is an ever-elusive goal, with planning and decision making -- the cognitive functions that determine driving behavior -- posing the greatest challenge. Despite a proliferation of promising approaches, progress is stifled by the difficulty of deploying experimental planners in naturalistic settings. In this work, we propose Lab2Car, an optimization-based wrapper that can take a trajectory sketch from an arbitrary motion planner and convert it to a safe, comfortable, dynamically feasible trajectory that the car can follow. This allows motion planners that do not provide such guarantees to be safely tested and optimized in real-world environments. We demonstrate the versatility of Lab2Car by using it to deploy a machine learning (ML) planner and a search-based planner on self-driving cars in Las Vegas. The resulting systems handle challenging scenarios, such as cut-ins, overtaking, and yielding, in complex urban environments like casino pick-up/drop-off areas. Our work paves the way for quickly deploying and evaluating candidate motion planners in realistic settings, ensuring rapid iteration and accelerating progress towards human-level autonomy.


[233] 2409.09525

Foundations of Vision-Based Localization: A New Approach to Localizability Analysis Using Stochastic Geometry

Despite significant algorithmic advances in vision-based positioning, a comprehensive probabilistic framework to study its performance has remained unexplored. The main objective of this paper is to develop such a framework using ideas from stochastic geometry. Due to limitations in sensor resolution, the level of detail in prior information, and computational resources, we may not be able to differentiate between landmarks with similar appearances in the vision data, such as trees, lampposts, and bus stops. While one cannot accurately determine the absolute target position using a single indistinguishable landmark, obtaining an approximate position fix is possible if the target can see multiple landmarks whose geometric placement on the map is unique. Modeling the locations of these indistinguishable landmarks as a Poisson point process (PPP) $\Phi$ on $\mathbb{R}^2$, we develop a new approach to analyze the localizability in this setting. From the target location $\mathbf{x}$, the measurements are obtained from landmarks within the visibility region. These measurements, including ranges and angles to the landmarks, denoted as $f(\mathbf{x})$, can be treated as mappings from the target location. We are interested in understanding the probability that the measurements $f(\mathbf{x})$ are sufficiently distinct from the measurement $f(\mathbf{x}_0)$ at the given location, which we term localizability. Expressions of localizability probability are derived for specific vision-inspired measurements, such as ranges to landmarks and snapshots of their locations. Our analysis reveals that the localizability probability approaches one when the landmark intensity tends to infinity, which means that error-free localization is achievable in this limiting regime.
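
As a simple consistency check of the limiting claim, suppose the visibility region is a disk of radius $R$ (an illustrative assumption; the paper treats more general vision-inspired measurements). The number of visible landmarks $N$ is then Poisson with mean $\lambda \pi R^2$, so the probability of observing at least $k$ landmarks is

\[
\Pr[N \ge k] \;=\; 1 - \sum_{j=0}^{k-1} e^{-\lambda \pi R^2}\, \frac{(\lambda \pi R^2)^j}{j!},
\]

which tends to one as the intensity $\lambda \to \infty$, in line with the error-free limiting regime described above.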


[234] 2409.09530

An Augmentation-based Model Re-adaptation Framework for Robust Image Segmentation

Image segmentation is a crucial task in computer vision, with wide-ranging applications in industry. The Segment Anything Model (SAM) has recently attracted intense attention; however, its application in industrial inspection, particularly for segmenting commercial anti-counterfeit codes, remains challenging. Unlike settings with large open-source datasets, industrial settings often face issues such as small sample sizes and complex textures. Additionally, computational cost is a key concern due to the varying number of trainable parameters. To address these challenges, we propose an Augmentation-based Model Re-adaptation Framework (AMRF). This framework leverages data augmentation techniques during training to enhance the generalisation of segmentation models, allowing them to adapt to newly released datasets with temporal disparity. By observing segmentation masks from conventional models (FCN and U-Net) and a pre-trained SAM model, we determine a minimal augmentation set that optimally balances training efficiency and model performance. Our results demonstrate that the fine-tuned FCN surpasses its baseline by 3.29% and 3.02% in cropping accuracy, and 5.27% and 4.04% in classification accuracy on two temporally continuous datasets. Similarly, the fine-tuned U-Net improves upon its baseline by 7.34% and 4.94% in cropping, and 8.02% and 5.52% in classification. Both models outperform the top-performing SAM models (ViT-Large and ViT-Base) by an average of 11.75% and 9.01% in cropping accuracy, and 2.93% and 4.83% in classification accuracy, respectively.


[235] 2409.09532

Using Synthetic Data to Mitigate Unfairness and Preserve Privacy through Single-Shot Federated Learning

To address unfairness issues in federated learning (FL), contemporary approaches typically use frequent model parameter updates and transmissions between the clients and server. In such a process, client-specific information (e.g., local dataset size or data-related fairness metrics) must be sent to the server to compute, e.g., aggregation weights. All of this results in high transmission costs and the potential leakage of client information. As an alternative, we propose a strategy that promotes fair predictions across clients without the need to pass information between the clients and server iteratively and prevents client data leakage. For each client, we first use their local dataset to obtain a synthetic dataset by solving a bilevel optimization problem that addresses unfairness concerns during the learning process. We then pass each client's synthetic dataset to the server, the collection of which is used to train the server model using conventional machine learning techniques (that do not take fairness metrics into account). Thus, we eliminate the need to handle fairness-specific aggregation weights while preserving client privacy. Our approach requires only a single communication round between the clients and the server, thus making it computationally cost-effective while maintaining privacy and ensuring fairness. We present empirical evidence to demonstrate the advantages of our approach. The results illustrate that our method effectively uses synthetic data as a means to mitigate unfairness and preserve client privacy.
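
A minimal sketch of the single-shot pipeline: each client ships one synthetic dataset and the server trains a conventional model on the pooled collection. The subsampling "synthesizer" below is only a placeholder for the paper's bilevel fairness-aware optimization.

```python
# A minimal single-shot sketch: clients send synthetic data once; the
# server trains a standard model. Subsampling stands in for the paper's
# bilevel fairness-aware synthesis step.
import numpy as np
from sklearn.linear_model import LogisticRegression

def client_synthesize(X, y, n_synth=50, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=n_synth, replace=False)  # placeholder
    return X[idx], y[idx]

def server_train(synthetic_sets):
    Xs = np.vstack([X for X, _ in synthetic_sets])
    ys = np.concatenate([y for _, y in synthetic_sets])
    return LogisticRegression(max_iter=1000).fit(Xs, ys)
```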


[236] 2409.09533

Towards Verified Polynomial Factorisation

Computer algebra systems are remarkably good at factoring polynomials, i.e., writing $f$ as a product of irreducible factors. It is relatively easy to verify that we have a factorisation, but verifying that these factors are irreducible is a much harder problem. This paper reports work in progress on performing such verification in Lean.
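
To illustrate the asymmetry the abstract points to: in Lean 4 with Mathlib, the "easy direction" (checking the factors multiply back to $f$) is a single tactic call, whereas proving each factor irreducible, the paper's subject, has no such one-liner. A minimal sketch:

```lean
import Mathlib

open Polynomial

-- Verifying a claimed factorisation is mechanical: `ring` checks that
-- the product of the factors equals the original polynomial.
example : (X ^ 2 - 1 : ℤ[X]) = (X - 1) * (X + 1) := by ring
-- Verifying irreducibility of each factor is the hard part.
```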


[237] 2409.09536

VernaCopter: Disambiguated Natural-Language-Driven Robot via Formal Specifications

It has been a long-standing ambition to control a robot for a complex task using natural language (NL). The rise of large language models (LLMs) brings this closer to reality. However, an LLM-powered system still suffers from the ambiguity inherent in NL and the uncertainty introduced by LLMs. This paper proposes a novel LLM-based robot motion planner, named \textit{VernaCopter}, with signal temporal logic (STL) specifications serving as a bridge between NL commands and specific task objectives. The rigorous and abstract nature of formal specifications allows the planner to generate high-quality and highly consistent paths to guide the motion control of a robot. Compared to a conventional NL-prompting-based planner, the proposed VernaCopter planner is more stable and reliable because it suffers less from ambiguity and uncertainty. Its efficacy and advantages have been validated in two small but challenging experimental scenarios, demonstrating its potential for designing NL-driven robots.


[238] 2409.09537

Deep Fast Machine Learning Utils: A Python Library for Streamlined Machine Learning Prototyping

Machine learning (ML) research and application often involve time-consuming steps such as model architecture prototyping, feature selection, and dataset preparation. To support these tasks, we introduce the Deep Fast Machine Learning Utils (DFMLU) library, which provides tools designed to automate and enhance aspects of these processes. Compatible with frameworks like TensorFlow, Keras, and Scikit-learn, DFMLU offers functionalities that support model development and data handling. The library includes methods for dense neural network search, advanced feature selection, and utilities for data management and visualization of training outcomes. This manuscript presents an overview of DFMLU's functionalities, providing Python examples for each tool.


[239] 2409.09539

Ensuring System-Level Protection against Eavesdropping Adversaries in Distributed Dynamical Systems

In this work, we address the objective of protecting the states of a distributed dynamical system from eavesdropping adversaries. We prove that state-of-the-art distributed algorithms, which rely on communicating the agents' states, are vulnerable in that the final states can be perfectly estimated by any adversary including those with arbitrarily small eavesdropping success probability. While existing literature typically adds an extra layer of protection, such as encryption or differential privacy techniques, we demonstrate the emergence of a fundamental protection quotient in distributed systems when innovation signals are communicated instead of the agents' states.


[240] 2409.09541

Autonomous Goal Detection and Cessation in Reinforcement Learning: A Case Study on Source Term Estimation

Reinforcement Learning has revolutionized decision-making processes in dynamic environments, yet it often struggles with autonomously detecting and achieving goals without clear feedback signals. For example, in a Source Term Estimation problem, the lack of precise environmental information makes it challenging to provide clear feedback signals and to define and evaluate how the source's location is determined. To address this challenge, the Autonomous Goal Detection and Cessation (AGDC) module was developed, enhancing various RL algorithms by incorporating a self-feedback mechanism for autonomous goal detection and cessation upon task completion. Our method effectively identifies and ceases undefined goals by approximating the agent's belief, significantly enhancing the capabilities of RL algorithms in environments with limited feedback. To validate the effectiveness of our approach, we integrated AGDC with deep Q-Network, proximal policy optimization, and deep deterministic policy gradient algorithms, and evaluated its performance on the Source Term Estimation problem. The experimental results showed that AGDC-enhanced RL algorithms significantly outperformed traditional statistical methods such as infotaxis, entrotaxis, and dual control for exploitation and exploration, as well as a non-statistical random action selection method. These improvements were evident in terms of success rate, mean traveled distance, and search time, highlighting AGDC's effectiveness and efficiency in complex, real-world scenarios.


[241] 2409.09545

Multi-Microphone and Multi-Modal Emotion Recognition in Reverberant Environment

This paper presents a Multi-modal Emotion Recognition (MER) system designed to enhance emotion recognition accuracy in challenging acoustic conditions. Our approach combines a modified and extended Hierarchical Token-semantic Audio Transformer (HTS-AT) for multi-channel audio processing with an R(2+1)D Convolutional Neural Network (CNN) model for video analysis. We evaluate our proposed method on a reverberated version of the Ryerson audio-visual database of emotional speech and song (RAVDESS) dataset using synthetic and real-world Room Impulse Responses (RIRs). Our results demonstrate that integrating audio and video modalities yields superior performance compared to uni-modal approaches, especially in challenging acoustic conditions. Moreover, we show that the multimodal (audiovisual) approach that utilizes multiple microphones outperforms its single-microphone counterpart.


[242] 2409.09549

COMFORT: A Continual Fine-Tuning Framework for Foundation Models Targeted at Consumer Healthcare

Wearable medical sensors (WMSs) are revolutionizing smart healthcare by enabling continuous, real-time monitoring of user physiological signals, especially in the field of consumer healthcare. The integration of WMSs and modern machine learning (ML) enables unprecedented solutions to efficient early-stage disease detection. Despite the success of Transformers in various fields, their application to sensitive domains, such as smart healthcare, remains underexplored due to limited data accessibility and privacy concerns. To bridge the gap between Transformer-based foundation models and WMS-based disease detection, we propose COMFORT, a continual fine-tuning framework for foundation models targeted at consumer healthcare. COMFORT introduces a novel approach for pre-training a Transformer-based foundation model on a large dataset of physiological signals exclusively collected from healthy individuals with commercially available WMSs. We adopt a masked data modeling (MDM) objective to pre-train this health foundation model. We then fine-tune the model using various parameter-efficient fine-tuning (PEFT) methods, such as low-rank adaptation (LoRA) and its variants, to adapt it to various downstream disease detection tasks that rely on WMS data. In addition, COMFORT continually stores the low-rank decomposition matrices obtained from the PEFT algorithms to construct a library for multi-disease detection. The COMFORT library enables scalable and memory-efficient disease detection on edge devices. Our experimental results demonstrate that COMFORT achieves highly competitive performance while reducing memory overhead by up to 52% relative to conventional methods. Thus, COMFORT paves the way for personalized and proactive solutions to efficient and effective early-stage disease detection for consumer healthcare.
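
A minimal sketch of the LoRA-plus-library idea in plain PyTorch appears below: a frozen base layer with trainable low-rank factors whose matrices are stored per disease task. Ranks, shapes, and the task name are illustrative assumptions, not COMFORT's configuration.

```python
# A minimal LoRA-style adapter and per-task library sketch; the frozen
# base layer stands in for COMFORT's pre-trained foundation model.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False           # freeze pre-trained weights
        self.A = nn.Parameter(torch.zeros(rank, base.in_features))
        self.B = nn.Parameter(torch.randn(base.out_features, rank) * 0.01)

    def forward(self, x):
        # Frozen base output plus the trainable low-rank update.
        return self.base(x) + x @ self.A.T @ self.B.T

# Library of low-rank matrices, one entry per downstream disease task.
library = {}
layer = LoRALinear(nn.Linear(256, 256))
# ... fine-tune layer.A, layer.B on task data ...
library["diabetes"] = (layer.A.detach().clone(), layer.B.detach().clone())
```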


[243] 2409.09550

Swarm Algorithms for Dynamic Task Allocation in Unknown Environments

Robot swarms, systems of many robots that operate in a distributed fashion, have many applications in areas such as search-and-rescue, natural disaster response, and self-assembly. Several of these applications can be abstracted to the general problem of task allocation in an environment, in which robots must assign themselves to and complete tasks. While several algorithms for task allocation have been proposed, most of them assume either prior knowledge of task locations or a static set of tasks. Operating under a discrete general model where tasks dynamically appear in unknown locations, we present three new swarm algorithms for task allocation. We demonstrate that when tasks appear slowly, our variant of a distributed algorithm based on propagating task information completes tasks more efficiently than a Levy random walk algorithm, which is a strategy used by many organisms in nature to efficiently search an environment. We also propose a division of labor algorithm where some agents are using our algorithm based on propagating task information while the remaining agents are using the Levy random walk algorithm. Finally, we introduce a hybrid algorithm where each agent dynamically switches between using propagated task information and following a Levy random walk. We show that our division of labor and hybrid algorithms can perform better than both our algorithm based on propagated task information and the Levy walk algorithm, especially at low and medium task rates. When tasks appear fast, we observe the Levy random walk strategy performs as well or better when compared to these novel approaches. Our work demonstrates the relative performance of these algorithms on a variety of task rates and also provides insight into optimizing our algorithms based on environment parameters.
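
For reference, one Levy-walk step combines a heavy-tailed step length with a uniformly random heading; the Pareto tail index and minimum step below are illustrative choices, not parameters from the paper.

```python
# A minimal sketch of one Levy-walk step via inverse-CDF sampling of a
# Pareto-tailed step length; alpha and min_step are illustrative.
import math
import random

def levy_step(x, y, alpha=1.5, min_step=1.0):
    u = 1.0 - random.random()                   # uniform in (0, 1]
    step = min_step * u ** (-1.0 / alpha)       # heavy-tailed step length
    theta = random.uniform(0.0, 2.0 * math.pi)  # random heading
    return x + step * math.cos(theta), y + step * math.sin(theta)

print(levy_step(0.0, 0.0))
```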


[244] 2409.09554

ASR Error Correction using Large Language Models

Error correction (EC) models play a crucial role in refining Automatic Speech Recognition (ASR) transcriptions, enhancing the readability and quality of transcriptions. Without requiring access to the underlying code or model weights, EC can improve performance and provide domain adaptation for black-box ASR systems. This work investigates the use of large language models (LLMs) for error correction across diverse scenarios. 1-best ASR hypotheses are commonly used as the input to EC models. We propose building high-performance EC models using ASR N-best lists which should provide more contextual information for the correction process. Additionally, the generation process of a standard EC model is unrestricted in the sense that any output sequence can be generated. For some scenarios, such as unseen domains, this flexibility may impact performance. To address this, we introduce a constrained decoding approach based on the N-best list or an ASR lattice. Finally, most EC models are trained for a specific ASR system requiring retraining whenever the underlying ASR system is changed. This paper explores the ability of EC models to operate on the output of different ASR systems. This concept is further extended to zero-shot error correction using LLMs, such as ChatGPT. Experiments on three standard datasets demonstrate the efficacy of our proposed methods for both Transducer and attention-based encoder-decoder ASR systems. In addition, the proposed method can serve as an effective method for model ensembling.
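
A minimal sketch of how an N-best list might be packed into a zero-shot correction prompt follows; the template is illustrative, not the paper's.

```python
# A minimal sketch of a zero-shot EC prompt built from an ASR N-best
# list; the wording of the template is an illustrative assumption.
def ec_prompt(nbest):
    hyps = "\n".join(f"{i + 1}. {h}" for i, h in enumerate(nbest))
    return ("The following are ASR hypotheses of the same utterance, "
            "ranked by confidence:\n" + hyps +
            "\nReturn the most likely correct transcription.")

print(ec_prompt(["ice cream cone", "i scream cone", "ice cream comb"]))
```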


[245] 2409.09555

Enhancing Printed Circuit Board Defect Detection through Ensemble Learning

The quality control of printed circuit boards (PCBs) is paramount in advancing electronic device technology. While numerous machine learning methodologies have been utilized to augment defect detection efficiency and accuracy, previous studies have predominantly focused on optimizing individual models for specific defect types, often overlooking the potential synergies between different approaches. This paper introduces a comprehensive inspection framework leveraging an ensemble learning strategy to address this gap. Initially, we build four distinct PCB defect detection models using state-of-the-art methods: EfficientDet, MobileNet SSDv2, Faster RCNN, and YOLOv5. Each method is capable of identifying PCB defects independently. Subsequently, we integrate these models into an ensemble learning framework to enhance detection performance. A comparative analysis reveals that our ensemble learning framework significantly outperforms individual methods, achieving a 95% accuracy in detecting diverse PCB defects. These findings underscore the efficacy of our proposed ensemble learning framework in enhancing PCB quality control processes.


[246] 2409.09557

Adaptable, shape-conforming robotic endoscope

This paper introduces a size-adaptable robotic endoscope design, which aims to improve the efficiency and comfort of colonoscopy. The robotic endoscope proposed in this paper combines an expansion mechanism and an external drive system, which can adjust its shape according to different pipe diameters, thus improving stability and propulsion force during locomotion. As the actuator in the expansion mechanism, flexible bellows can provide a normal force of 3.89 N and an axial deformation of nearly 10 mm at maximum pressure, with a 53% expansion in the size of the expandable tip. In locomotion tests, we characterized the prototype's propulsion as a function of the pipe's friction coefficient and the motor's angular velocity. In experiments with artificial bowel tissues, the prototype generates a propelling force of 2.83 N and an average maximum linear speed of 29.29 m/s, and produces effective propulsion when passing through pipes of different sizes. The results show that the prototype can adapt its shape in order to obtain more propulsion. The relationship between propelling force and traction force, as well as structural optimization and miniaturization, still requires further exploration.


[247] 2409.09558

A Statistical Viewpoint on Differential Privacy: Hypothesis Testing, Representation and Blackwell's Theorem

Differential privacy is widely considered the formal standard for privacy-preserving data analysis due to its robust and rigorous guarantees, with increasingly broad adoption in public services, academia, and industry. Despite originating in the cryptographic context, in this review paper we argue that, fundamentally, differential privacy can be considered a \textit{pure} statistical concept. By leveraging a theorem due to David Blackwell, our focus is to demonstrate that the definition of differential privacy can be formally motivated from a hypothesis testing perspective, thereby showing that hypothesis testing is not merely convenient but also the right language for reasoning about differential privacy. This insight leads to the definition of $f$-differential privacy, which extends other differential privacy definitions through a representation theorem. We review techniques that render $f$-differential privacy a unified framework for analyzing privacy bounds in data analysis and machine learning. Applications of this differential privacy definition to private deep learning, private convex optimization, shuffled mechanisms, and U.S.~Census data are discussed to highlight the benefits of analyzing privacy bounds under this framework compared to existing alternatives.
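
For readers new to this framework, the hypothesis-testing formulation can be summarized as follows (notation per the $f$-DP literature). For distributions $P$ and $Q$, the trade-off function maps each achievable type I error to the smallest achievable type II error,

\[
T(P, Q)(\alpha) \;=\; \inf \{\, \beta_\phi : \alpha_\phi \le \alpha \,\},
\]

where the infimum is over rejection rules $\phi \in [0,1]$ with type I error $\alpha_\phi = \mathbb{E}_P[\phi]$ and type II error $\beta_\phi = 1 - \mathbb{E}_Q[\phi]$. A mechanism $M$ is then $f$-differentially private if $T(M(S), M(S')) \ge f$ for all neighboring datasets $S, S'$.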


[248] 2409.09560

Evaluating authenticity and quality of image captions via sentiment and semantic analyses

The growth of deep learning (DL) relies heavily on huge amounts of labelled data for tasks such as natural language processing and computer vision. Specifically, in image-to-text or image-to-image pipelines, opinion (sentiment) may be inadvertently learned by a model from human-generated image captions. Additionally, learning may be affected by the variety and diversity of the provided captions. While labelling large datasets has largely relied on crowd-sourcing or data-worker pools, evaluating the quality of such training data is crucial. This study proposes an evaluation method focused on sentiment and semantic richness. That method was applied to the COCO-MS dataset, comprising approximately 150K images with segmented objects and corresponding crowd-sourced captions. We employed pre-trained models (Twitter-RoBERTa-base and BERT-base) to extract sentiment scores and variability of semantic embeddings from captions. The relation of the sentiment score and semantic variability with object categories was examined using multiple linear regression. Results indicate that while most captions were neutral, about 6% of the captions exhibited strong sentiment influenced by specific object categories. Semantic variability of within-image captions remained low and uncorrelated with object categories. Model-generated captions showed less than 1.5% of strong sentiment which was not influenced by object categories and did not correlate with the sentiment of the respective human-generated captions. This research demonstrates an approach to assess the quality of crowd- or worker-sourced captions informed by image content.


[249] 2409.09564

TG-LLaVA: Text Guided LLaVA via Learnable Latent Embeddings

Currently, inspired by the success of vision-language models (VLMs), an increasing number of researchers are focusing on improving VLMs and have achieved promising results. However, most existing methods concentrate on optimizing the connector and enhancing the language model component, while neglecting improvements to the vision encoder itself. In contrast, we propose Text Guided LLaVA (TG-LLaVA) in this paper, which optimizes VLMs by guiding the vision encoder with text, offering a new and orthogonal optimization direction. Specifically, inspired by the purpose-driven logic inherent in human behavior, we use learnable latent embeddings as a bridge to analyze textual instruction and add the analysis results to the vision encoder as guidance, refining it. Subsequently, another set of latent embeddings extracts additional detailed text-guided information from high-resolution local patches as auxiliary information. Finally, with the guidance of text, the vision encoder can extract text-related features, similar to how humans focus on the most relevant parts of an image when considering a question. This results in better answers. Experiments on various datasets validate the effectiveness of the proposed method. Remarkably, without the need for additional training data, our proposed method brings greater gains to the baseline (LLaVA-1.5) than other concurrent methods. Furthermore, the proposed method consistently brings improvement in different settings.


[250] 2409.09566

Learning Transferable Features for Implicit Neural Representations

Implicit neural representations (INRs) have demonstrated success in a variety of applications, including inverse problems and neural rendering. An INR is typically trained to capture one signal of interest, resulting in learned neural features that are highly attuned to that signal. Although such features are often assumed not to generalize, we explore their transferability for fitting similar signals. We introduce a new INR training framework, STRAINER, that learns transferable features for fitting INRs to new signals from a given distribution, faster and with better reconstruction quality. Owing to the sequential layer-wise affine operations in an INR, we propose to learn transferable representations by sharing initial encoder layers across multiple INRs with independent decoder layers. At test time, the learned encoder representations are transferred as initialization for an otherwise randomly initialized INR. We find STRAINER to yield extremely powerful initialization for fitting images from the same domain and allow for $\approx +10dB$ gain in signal quality early on compared to an untrained INR itself. STRAINER also provides a simple way to encode data-driven priors in INRs. We evaluate STRAINER on multiple in-domain and out-of-domain signal fitting tasks and inverse problems and further provide detailed analysis and discussion on the transferability of STRAINER's features. Our demo can be accessed at https://colab.research.google.com/drive/1fBZAwqE8C_lrRPAe-hQZJTWrMJuAKtG2?usp=sharing .
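
A minimal sketch of the shared-encoder, per-signal-decoder layout for coordinate MLPs follows; depths, widths, and the ReLU activations are illustrative stand-ins for STRAINER's exact configuration.

```python
# A minimal sketch of a shared INR encoder with per-signal decoders;
# all sizes and activations here are illustrative.
import torch
import torch.nn as nn

def mlp(dims):
    layers = []
    for i in range(len(dims) - 2):
        layers += [nn.Linear(dims[i], dims[i + 1]), nn.ReLU()]
    layers.append(nn.Linear(dims[-2], dims[-1]))
    return nn.Sequential(*layers)

shared_encoder = mlp([2, 256, 256, 256])           # trained across signals
decoders = [mlp([256, 256, 3]) for _ in range(5)]  # one per training image

def fit_loss(coords, images):
    feats = shared_encoder(coords)                 # shared features
    return sum(((dec(feats) - img) ** 2).mean()
               for dec, img in zip(decoders, images))

coords = torch.rand(1024, 2)
images = [torch.rand(1024, 3) for _ in range(5)]
print(fit_loss(coords, images))
# At test time: copy shared_encoder as the initialization of a new INR
# and attach a freshly initialized decoder for the unseen signal.
```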


[251] 2409.09568

Thesis proposal: Are We Losing Textual Diversity to Natural Language Processing?

This thesis argues that the currently widely used Natural Language Processing algorithms possibly have various limitations related to the properties of the texts they handle and produce. With the wide adoption of these tools in rapid progress, we must ask what these limitations are and what the possible implications of integrating such tools even more deeply into our daily lives might be. As a testbed, we have chosen the task of Neural Machine Translation (NMT). Nevertheless, we aim for general insights and outcomes, applicable even to current Large Language Models (LLMs). We ask whether the algorithms used in NMT have inherent inductive biases that are beneficial for most types of inputs but might harm the processing of untypical texts. To explore this hypothesis, we define a set of measures to quantify text diversity based on its statistical properties, like uniformity or rhythmicity of word-level surprisal, on multiple scales (sentence, discourse, language). We then conduct a series of experiments to investigate whether NMT systems struggle with maintaining the diversity of such texts, potentially reducing the richness of the language generated by these systems, compared to human translators. We search for potential causes of these limitations rooted in training objectives and decoding algorithms. Our ultimate goal is to develop alternatives that do not enforce uniformity in the distribution of statistical properties in the output and that allow for better global planning of the translation, taking into account the intrinsic ambiguity of the translation task.
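
A minimal sketch of one such measure family: per-token surprisal statistics under a language model, where the mean captures predictability and the standard deviation captures (non-)uniformity; using GPT-2 here is an assumption for illustration, not the thesis's chosen model.

```python
# A minimal sketch of word-level surprisal statistics under GPT-2;
# mean = predictability, std = (non-)uniformity of surprisal.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def surprisal_stats(text):
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = lm(ids).logits
    logp = torch.log_softmax(logits[0, :-1], dim=-1)
    s = -logp[torch.arange(ids.size(1) - 1), ids[0, 1:]]  # nats per token
    return s.mean().item(), s.std().item()

print(surprisal_stats("The translation reads smoothly and predictably."))
```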


[252] 2409.09569

Bias Begets Bias: The Impact of Biased Embeddings on Diffusion Models

With the growing adoption of Text-to-Image (TTI) systems, the social biases of these models have come under increased scrutiny. Herein we conduct a systematic investigation of one such source of bias for diffusion models: embedding spaces. First, because traditional classifier-based fairness definitions require true labels not present in generative modeling, we propose statistical group fairness criteria based on a model's internal representation of the world. Using these definitions, we demonstrate theoretically and empirically that an unbiased text embedding space for input prompts is a necessary condition for representationally balanced diffusion models, meaning the distribution of generated images satisfies diversity requirements with respect to protected attributes. Next, we investigate the impact of biased embeddings on evaluating the alignment between generated images and prompts, a process which is commonly used to assess diffusion models. We find that biased multimodal embeddings like CLIP can result in lower alignment scores for representationally balanced TTI models, thus rewarding unfair behavior. Finally, we develop a theoretical framework through which biases in alignment evaluation can be studied and propose bias mitigation methods. By specifically adapting the perspective of embedding spaces, we establish new fairness conditions for diffusion model development and evaluation.


[253] 2409.09570

MindScape Study: Integrating LLM and Behavioral Sensing for Personalized AI-Driven Journaling Experiences

Mental health concerns are prevalent among college students, highlighting the need for effective interventions that promote self-awareness and holistic well-being. MindScape pioneers a novel approach to AI-powered journaling by integrating passively collected behavioral patterns such as conversational engagement, sleep, and location with Large Language Models (LLMs). This integration creates a highly personalized and context-aware journaling experience, enhancing self-awareness and well-being by embedding behavioral intelligence into AI. We present an 8-week exploratory study with 20 college students, demonstrating the MindScape app's efficacy in enhancing positive affect (7%), reducing negative affect (11%), loneliness (6%), and anxiety and depression, with a significant week-over-week decrease in PHQ-4 scores (-0.25 coefficient), alongside improvements in mindfulness (7%) and self-reflection (6%). The study highlights the advantages of contextual AI journaling, with participants particularly appreciating the tailored prompts and insights provided by the MindScape app. Our analysis also includes a comparison of responses to AI-driven contextual versus generic prompts, participant feedback insights, and proposed strategies for leveraging contextual AI journaling to improve well-being on college campuses. By showcasing the potential of contextual AI journaling to support mental health, we provide a foundation for further investigation into the effects of contextual AI journaling on mental health and well-being.


[254] 2409.09572

A Novel Aerial-Aquatic Locomotion Robot with Variable Stiffness Propulsion Module

In recent years, the development of robots capable of operating in both aerial and aquatic environments has gained significant attention. This study presents the design and fabrication of a novel aerial-aquatic locomotion robot (AALR). Inspired by the diving beetle, the AALR incorporates a biomimetic propulsion mechanism with power and recovery strokes. The variable stiffness propulsion module (VSPM) uses low melting point alloy (LMPA) and variable stiffness joints (VSJ) to achieve efficient aquatic locomotion while reducing harm to marine life. The AALR's innovative design integrates the VSPM into the arms of a traditional quadrotor, allowing for effective aerial-aquatic locomotion. The VSPM adjusts joint stiffness through temperature control, meeting locomotion requirements in both aerial and aquatic modes. A dynamic model for the VSPM was developed, with optimized dimensional parameters to increase propulsion force. Experiments focused on aquatic-mode analysis and demonstrated the AALR's swimming capability, achieving a maximum swimming speed of 77 mm/s underwater. The results confirm the AALR's effective performance in aquatic environments, highlighting its potential for versatile, eco-friendly operations.


[255] 2409.09573

Decentralized Safe and Scalable Multi-Agent Control under Limited Actuation

To deploy safe and agile robots in cluttered environments, there is a need to develop fully decentralized controllers that guarantee safety, respect actuation limits, prevent deadlocks, and scale to thousands of agents. Current approaches fall short of meeting all these goals: optimization-based methods ensure safety but lack scalability, while learning-based methods scale but do not guarantee safety. We propose a novel algorithm to achieve safe and scalable control for multiple agents under limited actuation. Specifically, our approach includes: $(i)$ learning a decentralized neural Integral Control Barrier function (neural ICBF) for scalable, input-constrained control, $(ii)$ embedding a lightweight decentralized Model Predictive Control-based Integral Control Barrier Function (MPC-ICBF) into the neural network policy to ensure safety while maintaining scalability, and $(iii)$ introducing a novel method that minimizes deadlocks by applying gradient-based optimization techniques from machine learning to escape the local minima that cause them. Our numerical simulations show that this approach outperforms state-of-the-art multi-agent control algorithms in terms of safety, input constraint satisfaction, and minimizing deadlocks. Additionally, we demonstrate strong generalization across scenarios with varying agent counts, scaling up to 1000 agents.


[256] 2409.09575

Traffic Scene Generation from Natural Language Description for Autonomous Vehicles with Large Language Model

Text-to-scene generation, transforming textual descriptions into detailed scenes, typically relies on generating key scenarios along predetermined paths, constraining environmental diversity and limiting customization flexibility. To address these limitations, we propose a novel text-to-traffic scene framework that leverages a large language model to generate diverse traffic scenarios within the Carla simulator based on natural language descriptions. Users can define specific parameters such as weather conditions, vehicle types, and road signals, while our pipeline can autonomously select the starting point and scenario details, generating scenes from scratch without relying on predetermined locations or trajectories. Furthermore, our framework supports both critical and routine traffic scenarios, enhancing its applicability. Experimental results indicate that our approach promotes diverse agent planning and road selection, enhancing the training of autonomous agents in traffic environments. Notably, our methodology has achieved a 16% reduction in average collision rates. Our work is made publicly available at https://basiclab.github.io/TTSG.


[257] 2409.09582

NEVLP: Noise-Robust Framework for Efficient Vision-Language Pre-training

The success of Vision Language Models (VLMs) on various vision-language tasks heavily relies on pre-training with large scale web-crawled datasets. However, the noisy and incomplete nature of web data makes dataset scale crucial for performance, rendering end-to-end training increasingly prohibitive. In this paper, we propose NEVLP, a noise-robust framework for efficient vision-language pre-training that requires less pre-training data. Specifically, we bridge the modality gap between a frozen image encoder and a large language model with a transformer and introduce two innovative learning strategies: noise-adaptive learning and concept-enhanced learning to mitigate the impact of noise. In noise-adaptive learning, we estimate the noise probability of each image-text pair based on the transformer's memorization effect and employ noise-adaptive regularization on image-text contrastive learning to condition cross-modal alignment. In concept-enhanced learning, we enrich incomplete text by incorporating visual concepts (objects in the image) to provide prior information about existing objects for image-text matching and image-grounded text generation, thereby mitigating text incompleteness. Our framework effectively utilizes noisy web data and achieves state-of-the-art performance with less pre-training data across a wide range of vision-language tasks, including image-text retrieval, image captioning, and visual question answering.


[258] 2409.09584

RethinkMCTS: Refining Erroneous Thoughts in Monte Carlo Tree Search for Code Generation

LLM agents enhanced by tree search algorithms have yielded notable performance in code generation. However, current search algorithms in this domain suffer from low search quality due to several reasons: 1) Ineffective design of the search space for the high-reasoning demands of code generation tasks, 2) Inadequate integration of code feedback with the search algorithm, and 3) Poor handling of negative feedback during the search, leading to reduced search efficiency and quality. To address these challenges, we propose to search for the reasoning process of the code and use the detailed feedback of code execution to refine erroneous thoughts during the search. In this paper, we introduce RethinkMCTS, which employs the Monte Carlo Tree Search (MCTS) algorithm to conduct thought-level searches before generating code, thereby exploring a wider range of strategies. More importantly, we construct verbal feedback from fine-grained code execution feedback to refine erroneous thoughts during the search. This ensures that the search progresses along the correct reasoning paths, thus improving the overall search quality of the tree by leveraging execution feedback. Through extensive experiments, we demonstrate that RethinkMCTS outperforms previous search-based and feedback-based code generation baselines. On the HumanEval dataset, it improves the pass@1 of GPT-3.5-turbo from 70.12 to 89.02 and GPT-4o-mini from 87.20 to 94.51. It effectively conducts more thorough exploration through thought-level searches and enhances the search quality of the entire tree by incorporating the rethink operation.
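
For reference, here is a minimal, self-contained UCT/MCTS skeleton over candidate "thoughts" in the spirit described above. The propose and evaluate callables (e.g., an LLM proposing next thoughts and an executor scoring generated code against tests) are assumptions, and the paper's rethink operation, which rewrites erroneous thoughts using verbal execution feedback, is deliberately omitted:

    import math
    import random

    class Node:
        def __init__(self, thought, parent=None):
            self.thought, self.parent = thought, parent
            self.children, self.visits, self.value = [], 0, 0.0

    def trace(node):
        # Thoughts along the path from the root to this node.
        out = []
        while node is not None:
            out.append(node.thought)
            node = node.parent
        return list(reversed(out))

    def uct_select(node, c=1.4):
        # UCB1: exploitation term plus exploration bonus.
        return max(node.children,
                   key=lambda ch: ch.value / (ch.visits + 1e-9)
                   + c * math.sqrt(math.log(node.visits + 1) / (ch.visits + 1e-9)))

    def mcts(root, propose, evaluate, iters=100):
        for _ in range(iters):
            node = root
            while node.children:                     # 1) selection
                node = uct_select(node)
            for t in propose(trace(node)):           # 2) expansion
                node.children.append(Node(t, parent=node))
            if node.children:
                node = random.choice(node.children)
            reward = evaluate(trace(node))           # 3) scoring via feedback
            while node is not None:                  # 4) backpropagation
                node.visits += 1
                node.value += reward
                node = node.parent
        return max(root.children, key=lambda ch: ch.visits) if root.children else root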


[259] 2409.09585

CSQF-based Time-Sensitive Flow Scheduling in Long-distance Industrial IoT Networks

Booming time-critical services, such as automated manufacturing and remote operations, stipulate increasing demands for facilitating large-scale Industrial Internet of Things (IoT). Recently, a cycle specified queuing and forwarding (CSQF) scheme has been advocated to enhance the Ethernet. However, CSQF only outlines a foundational equipment-level primitive, while how to attain network-wide flow scheduling is not yet determined. Prior endeavors are primarily confined to local areas, rendering them unsuitable for long-distance factory interconnection. This paper devises the cycle tags planning (CTP) mechanism, the first integer programming model for the CSQF, which makes the CSQF practical for efficient global flow scheduling. In the CTP model, the per-hop cycle alignment problem is solved by decoupling the long-distance link delay from cyclic queuing time. To avoid queue overflows, we discretize the underlying network resources into cycle-related queue resource blocks and detail the core constraints within multiple periods. Then, two heuristic algorithms named flow offset and cycle shift (FO-CS) and Tabu FO-CS are designed to calculate the flows' cycle tags and maximize the number of schedulable flows, respectively. Evaluation results show that FO-CS increases the number of scheduled flows by 31.2%. The Tabu FO-CS algorithm can schedule 94.45% of flows at a scale of 2,000 flows.


[260] 2409.09586

ValueCompass: A Framework of Fundamental Values for Human-AI Alignment

As AI systems become more advanced, ensuring their alignment with a diverse range of individuals and societal values becomes increasingly critical. But how can we capture fundamental human values and assess the degree to which AI systems align with them? We introduce ValueCompass, a framework of fundamental values, grounded in psychological theory and a systematic review, to identify and evaluate human-AI alignment. We apply ValueCompass to measure the value alignment of humans and language models (LMs) across four real-world vignettes: collaborative writing, education, public sectors, and healthcare. Our findings uncover risky misalignments between humans and LMs, such as LMs agreeing with values like "Choose Own Goals" that humans largely disagree with. We also observe that values vary across vignettes, underscoring the necessity for context-aware AI alignment strategies. This work provides insights into the design space of human-AI alignment, offering foundations for developing AI that responsibly reflects societal values and ethics.


[261] 2409.09588

GLCONet: Learning Multi-source Perception Representation for Camouflaged Object Detection

Recently, biological perception has been a powerful tool for handling the camouflaged object detection (COD) task. However, most existing methods are heavily dependent on the local spatial information of diverse scales from convolutional operations to optimize initial features. A point commonly neglected by these methods is the long-range dependencies between feature pixels from different scale spaces, which can help the model build a global structure of the object, inducing a more precise image representation. In this paper, we propose a novel Global-Local Collaborative Optimization Network, called GLCONet. Technically, we first design a collaborative optimization strategy from the perspective of multi-source perception to simultaneously model the local details and global long-range relationships, which can provide features with abundant discriminative information to boost the accuracy in detecting camouflaged objects. Furthermore, we introduce an adjacent reverse decoder that contains cross-layer aggregation and reverse optimization to integrate complementary information from different levels for generating high-quality representations. Extensive experiments demonstrate that the proposed GLCONet method with different backbones can effectively activate potentially significant pixels in an image, outperforming twenty state-of-the-art methods on three public COD datasets. The source code is available at: https://github.com/CSYSI/GLCONet.


[262] 2409.09589

On the effectiveness of enrollment speech augmentation for Target Speaker Extraction

Deep learning technologies have significantly advanced the performance of target speaker extraction (TSE) tasks. To enhance the generalization and robustness of these algorithms when training data is insufficient, data augmentation is a commonly adopted technique. Unlike typical data augmentation applied to speech mixtures, this work thoroughly investigates the effectiveness of augmenting the enrollment speech space. We found that for both pretrained and jointly optimized speaker encoders, directly augmenting the enrollment speech leads to consistent performance improvement. In addition to conventional methods such as noise and reverberation addition, we propose a novel augmentation method called self-estimated speech augmentation (SSA). Experimental results on the Libri2Mix test set show that our proposed method can achieve an improvement of up to 2.5 dB.
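
The proposed SSA method cannot be reconstructed from the abstract alone, but the conventional enrollment augmentation it is compared against, noise addition at a target SNR, is standard. A minimal sketch; the function name and SNR convention are illustrative:

    import numpy as np

    def augment_enrollment_with_noise(enroll, noise, snr_db, rng=None):
        """Mix noise into an enrollment utterance at a target SNR (in dB).

        Sketches only the conventional augmentation the abstract mentions;
        the proposed SSA method is not reconstructable from the abstract.
        """
        rng = rng or np.random.default_rng()
        # Tile the noise if it is shorter than the enrollment, then crop.
        if len(noise) < len(enroll):
            noise = np.tile(noise, int(np.ceil(len(enroll) / len(noise))))
        start = rng.integers(0, len(noise) - len(enroll) + 1)
        noise = noise[start:start + len(enroll)]
        # Scale the noise so that 10*log10(P_signal / P_noise) == snr_db.
        p_signal = np.mean(enroll ** 2)
        p_noise = np.mean(noise ** 2) + 1e-12
        scale = np.sqrt(p_signal / (p_noise * 10 ** (snr_db / 10.0)))
        return enroll + scale * noise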


[263] 2409.09590

Feasibility Study of Curvature Effect in Flexible Antenna Arrays for 2-Dimensional Beam Alignment of 6G Wireless Systems

This article investigates the influential role of flexible antenna array curvature on the performance of 6G communication systems with carrier frequencies above 100 GHz. It is demonstrated that the curvature of flexible antenna arrays can be leveraged for 2-dimensional beam alignment in phased arrays with relatively small insertion loss. The effect of antenna array bending on radiation properties such as gain and antenna impedance is analytically studied and simulated for a 4x4 microstrip patch antenna array operating between 97.5-102.5 GHz. Moreover, the deployment of this flexible antenna array in conjunction with state-of-the-art flexible board packaging techniques is examined for 6G wireless transceivers based on 65nm CMOS technology and simulated for three variants of quadrature amplitude modulation (4-QAM, 16-QAM, and 64-QAM). The communication performance in terms of signal-to-noise ratio (SNR) and bit error rate (BER) is evaluated using analytical derivations and simulation results, which exhibit a relatively close match.


[264] 2409.09591

Open-World Test-Time Training: Self-Training with Contrast Learning

Traditional test-time training (TTT) methods, while addressing domain shifts, often assume a consistent class set, limiting their applicability in real-world scenarios characterized by infinite variety. Open-World Test-Time Training (OWTTT) addresses the challenge of generalizing deep learning models to unknown target domain distributions, especially in the presence of strong Out-of-Distribution (OOD) data. Existing TTT methods often struggle to maintain performance when confronted with strong OOD data. In OWTTT, the focus has predominantly been on distinguishing between overall strong and weak OOD data. However, during the early stages of TTT, initial feature extraction is hampered by interference from strong OOD and corruptions, resulting in diminished contrast and premature classification of certain classes as strong OOD. To address this, we introduce Open World Dynamic Contrastive Learning (OWDCL), an innovative approach that utilizes contrastive learning to augment positive sample pairs. This strategy not only bolsters contrast in the early stages but also significantly enhances model robustness in subsequent stages. On standard benchmark datasets, our OWDCL model achieves state-of-the-art performance.


[265] 2409.09592

Programmable Cycle-Specified Queue for Long-Distance Industrial Deterministic Packet Scheduling

Time-critical industrial applications pose intense demands for enabling long-distance deterministic networks. However, previous priority-based and weight-based scheduling methods focus on probabilistically reducing average delay, which ignores strictly guaranteeing task-oriented on-time packet delivery with bounded worst-case delay and jitter. This paper proposes a new Programmable Cycle-Specified Queue (PCSQ) for long-distance industrial deterministic packet scheduling. By implementing the first high-precision rotation dequeuing, PCSQ enables microsecond-level time slot resource reservation (noted as T) and especially jitter control of up to 2T. Then, we propose the cycle tags computation to approximate cyclic scheduling algorithms, which allows packets to actively pick and lock their favorite queue in a sequence of nodes. Accordingly, PCSQ can precisely defer packets to any desired time. Further, the queue coordination and cycle mapping mechanisms are delicately designed to solve the cycle-queue mismatch problem. Evaluation results show that PCSQ can schedule tens of thousands of time-sensitive flows and strictly guarantee $ms$-level delay and $\mu s$-level jitter.


[266] 2409.09593

One-Shot Learning for Pose-Guided Person Image Synthesis in the Wild

Current Pose-Guided Person Image Synthesis (PGPIS) methods depend heavily on large amounts of labeled triplet data to train the generator in a supervised manner. However, they often falter when applied to in-the-wild samples, primarily due to the distribution gap between the training datasets and real-world test samples. While some researchers aim to enhance model generalizability through sophisticated training procedures, advanced architectures, or by creating more diverse datasets, we adopt the test-time fine-tuning paradigm to customize a pre-trained Text2Image (T2I) model. However, naively applying test-time tuning results in inconsistencies in facial identities and appearance attributes. To address this, we introduce a Visual Consistency Module (VCM), which enhances appearance consistency by combining the face, text, and image embedding. Our approach, named OnePoseTrans, requires only a single source image to generate high-quality pose transfer results, offering greater stability than state-of-the-art data-driven methods. For each test case, OnePoseTrans customizes a model in around 48 seconds with an NVIDIA V100 GPU.


[267] 2409.09596

$\mathcal{H}_2/\mathcal{H}_\infty$ Optimal Control with Sparse Sensing and Actuation

In this paper, we present novel convex optimization formulations for designing full-state and output-feedback controllers with sparse actuation that achieve user-specified $\mathcal{H}_2$ and $\mathcal{H}_\infty$ performance criteria. For output-feedback control, we extend these formulations to simultaneously design control laws with sparse actuation and sensing. The sparsity is induced through the minimization of a weighted $\ell_1$ norm, promoting the efficient use of sensors and actuators while maintaining desired closed-loop performance. The proposed methods are applied to a nonlinear structural dynamics problem, demonstrating the advantages of simultaneous optimization of the control law, sensing, and actuation architecture in realizing an efficient closed-loop system.
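
The paper's exact $\mathcal{H}_2/\mathcal{H}_\infty$ formulations are not given in the abstract, but the core mechanism, a weighted $\ell_1$-type penalty inducing sparse actuation inside a convex control design, can be sketched. Below, a plain stabilization LMI stands in for the performance criteria: with the standard change of variables $Y = KP$, row $i$ of $K$ is zero iff row $i$ of $Y$ is zero, so penalizing weighted row norms of $Y$ promotes gains that use few actuators. All dimensions, weights, and tolerances are illustrative, and cvxpy is an assumed dependency:

    import cvxpy as cp
    import numpy as np

    # Random pair (A, B); dimensions are illustrative only.
    rng = np.random.default_rng(0)
    n, m = 6, 6
    A = rng.standard_normal((n, n))
    B = rng.standard_normal((n, m))

    # With Y = K P, the Lyapunov LMI
    #   A P + P A' + B Y + Y' B' < 0,  P > 0
    # guarantees that K = Y P^{-1} stabilizes x' = A x + B u. A weighted
    # group-l1 penalty on the rows of Y promotes sparse actuation.
    P = cp.Variable((n, n), symmetric=True)
    Y = cp.Variable((m, n))
    w = np.ones(m)                    # sparsity weights (could be reweighted)
    eps = 1e-3
    lyap = A @ P + P @ A.T + B @ Y + Y.T @ B.T   # symmetric by construction
    constraints = [P >> eps * np.eye(n), lyap << -eps * np.eye(n)]
    objective = cp.Minimize(cp.sum(cp.multiply(w, cp.norm(Y, 2, axis=1))))
    cp.Problem(objective, constraints).solve()

    K = Y.value @ np.linalg.inv(P.value)
    active = np.linalg.norm(K, axis=1) > 1e-4
    print("active actuators:", int(active.sum()), "of", m)

Iteratively reweighting w by the inverse of the current row norms is the usual refinement that sharpens the $\ell_1$ relaxation toward genuine sparsity.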


[268] 2409.09598

Improving Statistical Significance in Human Evaluation of Automatic Metrics via Soft Pairwise Accuracy

Selecting an automatic metric that best emulates human judgments is often non-trivial, because there is no clear definition of "best emulates." A meta-metric is required to compare the human judgments to the automatic metric judgments, and metric rankings depend on the choice of meta-metric. We propose Soft Pairwise Accuracy (SPA), a new meta-metric that builds on Pairwise Accuracy (PA) but incorporates the statistical significance of both the human judgments and the metric judgments. SPA allows for more fine-grained comparisons between systems than a simplistic binary win/loss, and addresses a number of shortcomings with PA: it is more stable with respect to both the number of systems and segments used for evaluation, it mitigates the issue of metric ties due to quantization, and it produces more statistically significant results. SPA was selected as the official system-level metric for the 2024 WMT metric shared task.
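
A minimal sketch of SPA as described: for each pair of systems, compare the significance levels reached by the human judgments and by the metric judgments. A paired one-sided t-test stands in for the significance test here (the shared-task implementation may use a different test), and the toy data are illustrative:

    import itertools
    import numpy as np
    from scipy.stats import ttest_rel

    def soft_pairwise_accuracy(human, metric):
        """Soft Pairwise Accuracy between human and metric judgments.

        human, metric: arrays of shape (n_systems, n_segments) with
        per-segment scores. For each system pair, compute the one-sided
        p-value that system i beats system j under each scoring, then score
        agreement as 1 - |p_human - p_metric| and average over pairs.
        """
        n = human.shape[0]
        agreements = []
        for i, j in itertools.combinations(range(n), 2):
            p_h = ttest_rel(human[i], human[j], alternative="greater").pvalue
            p_m = ttest_rel(metric[i], metric[j], alternative="greater").pvalue
            agreements.append(1.0 - abs(p_h - p_m))
        return float(np.mean(agreements))

    # Toy usage: 4 systems scored on 100 segments.
    rng = np.random.default_rng(0)
    quality = rng.normal(size=(4, 1))                  # latent system quality
    human = quality + rng.normal(scale=1.0, size=(4, 100))
    metric = quality + rng.normal(scale=1.0, size=(4, 100))
    print(f"SPA = {soft_pairwise_accuracy(human, metric):.3f}")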


[269] 2409.09600

High-order accurate structure-preserving finite volume schemes on adaptive moving meshes for shallow water equations: Well-balancedness and positivity

This paper develops high-order accurate, well-balanced (WB), and positivity-preserving (PP) finite volume schemes for shallow water equations on adaptive moving structured meshes. The mesh movement poses new challenges in maintaining the WB property, which not only depends on the balance between flux gradients and source terms but is also affected by the mesh movement. To address these complexities, the WB property in curvilinear coordinates is decomposed into flux source balance and mesh movement balance. The flux source balance is achieved by suitable decomposition of the source terms, the numerical fluxes based on hydrostatic reconstruction, and appropriate discretization of the geometric conservation laws (GCLs). Concurrently, the mesh movement balance is maintained by integrating additional schemes to update the bottom topography during mesh adjustments. The proposed schemes are rigorously proven to maintain the WB property by using the discrete GCLs and these two balances. We provide rigorous analyses of the PP property under a sufficient condition enforced by a PP limiter. Due to the involvement of mesh metrics and movement, the analyses are nontrivial, while some standard techniques, such as splitting high-order schemes into convex combinations of formally first-order PP schemes, are not directly applicable. Various numerical examples validate the high-order accuracy, high efficiency, WB, and PP properties of the proposed schemes.


[270] 2409.09601

A Survey of Foundation Models for Music Understanding

Music is essential in daily life, fulfilling emotional and entertainment needs, and connecting us personally, socially, and culturally. A better understanding of music can enhance our emotions, cognitive skills, and cultural connections. The rapid advancement of artificial intelligence (AI) has introduced new ways to analyze music, aiming to replicate human understanding of music and provide related services. While traditional models focused on audio features and simple tasks, the recent development of large language models (LLMs) and foundation models (FMs), which excel in various fields by integrating semantic information and demonstrating strong reasoning abilities, makes it possible to capture complex musical features and patterns, integrate music with language, and incorporate rich musical, emotional, and psychological knowledge. Therefore, they have the potential to handle complex music-understanding tasks from a semantic perspective, producing outputs closer to human perception. This work is, to the best of our knowledge, one of the earliest reviews of the intersection of AI techniques and music understanding. We investigated, analyzed, and tested recent large-scale music foundation models with respect to their music-comprehension abilities. We also discussed their limitations and proposed possible future directions, offering insights for researchers in this field.


[271] 2409.09602

Security Testbed for Preempting Attacks against Supercomputing Infrastructure

Preempting attacks that target supercomputing systems before they cause damage remains the top security priority. The main challenge is that noisy attack attempts and unreliable alerts often mask real attacks, causing permanent damage such as system integrity violations and data breaches. This paper describes a security testbed embedded in live traffic of a supercomputer at the National Center for Supercomputing Applications (NCSA). The objective is to demonstrate attack preemption, i.e., stopping system compromise and data breaches at petascale supercomputers. Deployment of our testbed at NCSA enables the following key contributions: 1) Insights from characterizing unique attack patterns found in real security logs of over 200 security incidents curated over the past two decades at NCSA. 2) Deployment of an attack visualization tool to illustrate the challenges of identifying real attacks in HPC environments and to support security operators in interactive attack analyses. 3) Demonstration of the testbed's utility by running novel models, such as Factor Graph-Based models, to preempt a real-world ransomware family.


[272] 2409.09603

Towards Data-Centric RLHF: Simple Metrics for Preference Dataset Comparison

The goal of aligning language models to human preferences requires data that reveal these preferences. Ideally, time and money can be spent carefully collecting and tailoring bespoke preference data to each downstream application. However, in practice, a select few publicly available preference datasets are often used to train reward models for reinforcement learning from human feedback (RLHF). While new preference datasets are being introduced with increasing frequency, there are currently no efforts to measure and compare these datasets. In this paper, we systematically study preference datasets through three perspectives: scale, label noise, and information content. We propose specific metrics for each of these perspectives and uncover different axes of comparison for a better understanding of preference datasets. Our work is a first step towards a data-centric approach to alignment by providing perspectives that aid in training efficiency and iterative data collection for RLHF.


[273] 2409.09605

DreamMover: Leveraging the Prior of Diffusion Models for Image Interpolation with Large Motion

We study the problem of generating intermediate images from image pairs with large motion while maintaining semantic consistency. Due to the large motion, the intermediate semantic information may be absent in the input images. Existing methods are either limited to small motion or focus on topologically similar objects, leading to artifacts and inconsistency in the interpolation results. To overcome this challenge, we delve into pre-trained image diffusion models for their capabilities in semantic cognition and representation, ensuring that the absent intermediate semantic representations are expressed consistently with the input. To this end, we propose DreamMover, a novel image interpolation framework with three main components: 1) A natural flow estimator based on the diffusion model that can implicitly reason about the semantic correspondence between two images. 2) To avoid the loss of detailed information during fusion, our key insight is to fuse information in two parts, high-level space and low-level space. 3) To enhance the consistency between the generated images and the input, we propose the self-attention concatenation and replacement approach. Lastly, we present a challenging benchmark dataset, InterpBench, to evaluate the semantic consistency of generated results. Extensive experiments demonstrate the effectiveness of our method. Our project is available at https://dreamm0ver.github.io.


[274] 2409.09606

BULKHEAD: Secure, Scalable, and Efficient Kernel Compartmentalization with PKS

The endless stream of vulnerabilities urgently calls for principled mitigation to confine the effect of exploitation. However, the monolithic architecture of commodity OS kernels, like the Linux kernel, allows an attacker to compromise the entire system by exploiting a vulnerability in any kernel component. Kernel compartmentalization is a promising approach that follows the least-privilege principle. However, existing mechanisms struggle with the trade-off on security, scalability, and performance, given the challenges stemming from mutual untrustworthiness among numerous and complex components. In this paper, we present BULKHEAD, a secure, scalable, and efficient kernel compartmentalization technique that offers bi-directional isolation for unlimited compartments. It leverages Intel's new hardware feature PKS to isolate data and code into mutually untrusted compartments and benefits from its fast compartment switching. With untrust in mind, BULKHEAD introduces a lightweight in-kernel monitor that enforces multiple important security invariants, including data integrity, execute-only memory, and compartment interface integrity. In addition, it provides a locality-aware two-level scheme that scales to unlimited compartments. We implement a prototype system on Linux v6.1 to compartmentalize loadable kernel modules (LKMs). Extensive evaluation confirms the effectiveness of our approach. In terms of system-wide impact, BULKHEAD incurs an average performance overhead of 2.44% for real-world applications with 160 compartmentalized LKMs. Focusing on a specific compartment, ApacheBench tests on ipv6 show an overhead of less than 2%. Moreover, the performance is almost unaffected by the number of compartments, which makes it highly scalable.


[275] 2409.09609

BaCLNS: A toolbox for fast and efficient control of Linear and Nonlinear Control Affine Systems

Backstepping Control of Linear and Nonlinear Systems (BaCLNS) is a Python package developed to automate the design, simulation, and analysis of backstepping control laws for both linear and nonlinear control-affine systems. By providing a standardized framework, BaCLNS simplifies the process of deriving backstepping controllers, making this powerful control technique more accessible to engineers, researchers, and educators. The package handles complex system dynamics, ensuring robust stabilization even in the presence of significant nonlinearities. BaCLNS's modular design allows users to define custom control systems, simulate their behavior, and visualize the results, all within a user-friendly environment. The effectiveness of the package is demonstrated through a series of illustrative examples, ranging from simple linear systems to chaotic nonlinear systems, including the Vaidyanathan jerk system, the pendulum, and the Van der Pol oscillator.
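
BaCLNS's own API is not shown in the abstract, so the sketch below does not use it; instead it illustrates, in plain NumPy, the backstepping recursion that such a package automates, on the simplest control-affine example (a double integrator):

    import numpy as np

    # Backstepping on the double integrator  x1' = x2,  x2' = u.
    # Step 1: treat x2 as a virtual input and pick alpha(x1) = -k1*x1,
    #         which stabilizes the x1-subsystem.
    # Step 2: define the error z2 = x2 - alpha(x1) and choose u so that
    #         V = x1**2/2 + z2**2/2 decreases:
    #         u = -x1 - k2*z2 + alpha' = -x1 - k2*z2 - k1*x2.
    k1, k2 = 2.0, 2.0

    def backstepping_u(x1, x2):
        z2 = x2 + k1 * x1          # error w.r.t. the virtual control
        return -x1 - k2 * z2 - k1 * x2

    # Forward-Euler simulation from a nonzero initial condition.
    dt, T = 1e-3, 5.0
    x = np.array([1.0, -0.5])
    for _ in range(int(T / dt)):
        u = backstepping_u(*x)
        x = x + dt * np.array([x[1], u])
    print("final state (should be near the origin):", x)

With this choice of u, the Lyapunov derivative is V' = -k1*x1**2 - k2*z2**2 < 0, which is the stability certificate the backstepping construction produces at each step.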


[276] 2409.09610

TextureDiffusion: Target Prompt Disentangled Editing for Various Texture Transfer

Recently, text-guided image editing has achieved significant success. However, existing methods can only apply simple textures like wood or gold when changing the texture of an object. Complex textures such as cloud or fire pose a challenge. This limitation stems from the fact that the target prompt needs to contain both the input image content and the texture description "<texture>", restricting the texture representation. In this paper, we propose TextureDiffusion, a tuning-free image editing method applied to various texture transfer. Initially, the target prompt is directly set to "<texture>", making the texture disentangled from the input image content to enhance texture representation. Subsequently, query features in self-attention and features in residual blocks are utilized to preserve the structure of the input image. Finally, to maintain the background, we introduce an edit localization technique which blends the self-attention results and the intermediate latents. Comprehensive experiments demonstrate that TextureDiffusion can harmoniously transfer various textures with excellent structure and background preservation.


[277] 2409.09611

Integrating Audio Narrations to Strengthen Domain Generalization in Multimodal First-Person Action Recognition

First-person activity recognition is rapidly growing due to the widespread use of wearable cameras but faces challenges from domain shifts across different environments, such as varying objects or background scenes. We propose a multimodal framework that improves domain generalization by integrating motion, audio, and appearance features. Key contributions include analyzing the resilience of audio and motion features to domain shifts, using audio narrations for enhanced audio-text alignment, and applying consistency ratings between audio and visual narrations to optimize the impact of audio in recognition during training. Our approach achieves state-of-the-art performance on the ARGO1M dataset, effectively generalizing across unseen scenarios and locations.


[278] 2409.09613

Rethinking KenLM: Good and Bad Model Ensembles for Efficient Text Quality Filtering in Large Web Corpora

With the increasing demand for substantial amounts of high-quality data to train large language models (LLMs), efficiently filtering large web corpora has become a critical challenge. For this purpose, KenLM, a lightweight n-gram-based language model that operates on CPUs, is widely used. However, the traditional method of training KenLM utilizes only high-quality data and, consequently, does not explicitly learn the linguistic patterns of low-quality data. To address this issue, we propose an ensemble approach that leverages two contrasting KenLMs: (i) Good KenLM, trained on high-quality data; and (ii) Bad KenLM, trained on low-quality data. Experimental results demonstrate that our approach significantly reduces noisy content while preserving high-quality content compared to the traditional KenLM training method. This indicates that our method can be a practical solution with minimal computational overhead for resource-constrained environments.
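
A minimal sketch of the contrastive scoring idea, using the kenlm Python bindings. The model paths and the thresholded margin rule are assumptions; the paper's exact combination of the two models may differ:

    import kenlm  # pip install https://github.com/kpu/kenlm/archive/master.zip

    # Hypothetical model paths: "good" is trained on high-quality text,
    # "bad" on low-quality text, as the abstract describes.
    good = kenlm.Model("good_kenlm.arpa")
    bad = kenlm.Model("bad_kenlm.arpa")

    def quality_margin(text):
        """Length-normalized log10-probability margin between the models.

        A document the good model likes and the bad model dislikes gets a
        high margin; this is one natural contrastive score.
        """
        n = max(len(text.split()), 1)
        return (good.score(text, bos=True, eos=True)
                - bad.score(text, bos=True, eos=True)) / n

    def keep(text, threshold=0.0):
        # Retain documents whose margin clears the (tunable) threshold.
        return quality_margin(text) > threshold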


[279] 2409.09614

HJ-sampler: A Bayesian sampler for inverse problems of a stochastic process by leveraging Hamilton-Jacobi PDEs and score-based generative models

The interplay between stochastic processes and optimal control has been extensively explored in the literature. With the recent surge in the use of diffusion models, stochastic processes have increasingly been applied to sample generation. This paper builds on the log transform, known as the Cole-Hopf transform in Brownian motion contexts, and extends it within a more abstract framework that includes a linear operator. Within this framework, we found that the well-known relationship between the Cole-Hopf transform and optimal transport is a particular instance where the linear operator acts as the infinitesimal generator of a stochastic process. We also introduce a novel scenario where the linear operator is the adjoint of the generator, linking to Bayesian inference under specific initial and terminal conditions. Leveraging this theoretical foundation, we develop a new algorithm, named the HJ-sampler, for Bayesian inference for the inverse problem of a stochastic differential equation with given terminal observations. The HJ-sampler involves two stages: (1) solving the viscous Hamilton-Jacobi partial differential equations, and (2) sampling from the associated stochastic optimal control problem. Our proposed algorithm naturally allows for flexibility in selecting the numerical solver for viscous HJ PDEs. We introduce two variants of the solver: the Riccati-HJ-sampler, based on the Riccati method, and the SGM-HJ-sampler, which utilizes diffusion models. We demonstrate the effectiveness and flexibility of the proposed methods by applying them to solve Bayesian inverse problems involving various stochastic processes and prior distributions, including applications that address model misspecifications and quantifying model uncertainty.


[280] 2409.09615

Enhancing Text Annotation through Rationale-Driven Collaborative Few-Shot Prompting

The traditional data annotation process is often labor-intensive, time-consuming, and susceptible to human bias, which complicates the management of increasingly complex datasets. This study explores the potential of large language models (LLMs) as automated data annotators to improve efficiency and consistency in annotation tasks. By employing rationale-driven collaborative few-shot prompting techniques, we aim to improve the performance of LLMs in text annotation. We conduct a rigorous evaluation of six LLMs across four benchmark datasets, comparing seven distinct methodologies. Our results demonstrate that collaborative methods consistently outperform traditional few-shot techniques and other baseline approaches, particularly in complex annotation tasks. Our work provides valuable insights and a robust framework for leveraging collaborative learning methods to tackle challenging text annotation tasks.


[281] 2409.09616

Enhancing Weakly-Supervised Object Detection on Static Images through (Hallucinated) Motion

While motion has garnered attention in various tasks, its potential as a modality for weakly-supervised object detection (WSOD) in static images remains unexplored. Our study introduces an approach to enhance WSOD methods by integrating motion information. This method involves leveraging hallucinated motion from static images to improve WSOD on image datasets, utilizing a Siamese network for enhanced representation learning with motion, addressing camera motion through motion normalization, and selectively training images based on object motion. Experimental validation on the COCO and YouTube-BB datasets demonstrates improvements over a state-of-the-art method.


[282] 2409.09617

Leveraging Large Language Models for Predicting Cost and Duration in Software Engineering Projects

Accurate estimation of project costs and durations remains a pivotal challenge in software engineering, directly impacting budgeting and resource management. Traditional estimation techniques, although widely utilized, often fall short due to their complexity and the dynamic nature of software development projects. This study introduces an innovative approach using Large Language Models (LLMs) to enhance the accuracy and usability of project cost predictions. We explore the efficacy of LLMs against traditional methods and contemporary machine learning techniques, focusing on their potential to simplify the estimation process and provide higher accuracy. Our research is structured around critical inquiries: whether LLMs can outperform existing models and traditional estimation techniques, how easily they can be integrated into current practices, and why traditional methods still prevail in industry settings. By applying LLMs to a range of real-world datasets and comparing their performance to both state-of-the-art and conventional methods, this study aims to demonstrate that LLMs not only yield more accurate estimates but also offer a user-friendly alternative to complex predictive models, potentially transforming project management strategies within the software industry.


[283] 2409.09619

Compositional Audio Representation Learning

Human auditory perception is compositional in nature -- we identify auditory streams from auditory scenes with multiple sound events. However, such auditory scenes are typically represented using clip-level representations that do not disentangle the constituent sound sources. In this work, we learn source-centric audio representations where each sound source is represented using a distinct, disentangled source embedding in the audio representation. We propose two novel approaches to learning source-centric audio representations: a supervised model guided by classification and an unsupervised model guided by feature reconstruction, both of which outperform the baselines. We thoroughly evaluate the design choices of both approaches using an audio classification task. We find that supervision is beneficial to learn source-centric representations, and that reconstructing audio features is more useful than reconstructing spectrograms to learn unsupervised source-centric representations. Leveraging source-centric models can help unlock the potential of greater interpretability and more flexible decoding in machine listening.


[284] 2409.09620

Robust DG Schemes on Unstructured Triangular Meshes: Oscillation Elimination and Bound Preservation via Optimal Convex Decomposition

Discontinuous Galerkin (DG) schemes on unstructured meshes offer the advantages of compactness and the ability to handle complex computational domains. However, their robustness and reliability in solving hyperbolic conservation laws depend on two critical abilities: suppressing spurious oscillations and preserving intrinsic bounds or constraints. This paper introduces two significant advancements in enhancing the robustness and efficiency of DG methods on unstructured meshes for general hyperbolic conservation laws, while maintaining their accuracy and compactness. First, we investigate the oscillation-eliminating (OE) DG methods on unstructured meshes. These methods not only maintain key features such as conservation, scale invariance, and evolution invariance but also achieve rotation invariance through a novel rotation-invariant OE (RIOE) procedure. Second, we propose, for the first time, the optimal convex decomposition for designing efficient bound-preserving (BP) DG schemes on unstructured meshes. Finding the optimal convex decomposition that maximizes the BP CFL number is an important yet challenging problem. While this challenge was addressed for rectangular meshes, it remains an open problem for triangular meshes. This paper successfully constructs the optimal convex decomposition for the widely used $P^1$ and $P^2$ spaces on triangular cells, significantly improving the efficiency of BP DG methods. The maximum BP CFL numbers are increased by 100%--200% for $P^1$ and 280.38%--350% for $P^2$, compared to classic decomposition. Furthermore, our RIOE procedure and optimal decomposition technique can be integrated into existing DG codes with little and localized modifications. These techniques require only edge-neighboring cell information, thereby retaining the compactness and high parallel efficiency of original DG methods.


[285] 2409.09622

Computing Arrangements of Hypersurfaces

We present a Julia package HypersurfaceRegions.jl for computing all connected components in the complement of an arrangement of real algebraic hypersurfaces in $\mathbb{R}^n$.


[286] 2409.09623

Multi-Slot Tag Assignment Problem in Billboard Advertisement

Nowadays, billboard advertising has emerged as an effective advertising technique due to higher returns on investment. Given a set of selected slots and tags, how to effectively assign the tags to the slots remains an important question. In this paper, we study the problem of assigning tags to the slots such that the number of tags for which the influence demand of each zone is satisfied gets maximized. Formally, we call this problem the Multi-Slot Tag Assignment Problem. The input to the problem is a geographical region partitioned into several zones, a set of selected tags and slots, a trajectory, a billboard database, and the influence demand for every tag for each zone. The task here is to find an assignment of tags to the slots such that the number of tags for which the zonal influence demand is satisfied is maximized. We show that the problem is NP-hard, and we propose an efficient approximation algorithm to solve this problem. A time and space complexity analysis of the proposed methodology has been done. The proposed methodology has been implemented with real-life datasets, and a number of experiments have been carried out to show the effectiveness and efficiency of the proposed approach. The obtained results have been compared with the baseline methods, and we observe that the proposed approach satisfies the zonal influence demand for a larger number of tags.
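
The abstract does not spell out the proposed approximation algorithm, so the following is only a natural greedy baseline for the problem as stated: each slot is assigned to the tag whose remaining zonal demand it reduces the most, and fully satisfied tags are counted at the end. The data layout is hypothetical:

    def greedy_tag_assignment(slots, tags, demand):
        """Greedy baseline for the Multi-Slot Tag Assignment Problem.

        slots:  {slot_id: {zone: influence}} influence a slot provides per zone
        tags:   list of tag ids
        demand: {tag: {zone: required influence}} zonal demand per tag
        Returns the slot-to-tag assignment and the list of satisfied tags.
        """
        remaining = {t: dict(demand[t]) for t in tags}
        assignment = {}
        for s, infl in slots.items():
            def gain(t):
                # Demand actually covered if this slot went to tag t.
                return sum(min(infl.get(z, 0.0), need)
                           for z, need in remaining[t].items() if need > 0)
            best = max(tags, key=gain)
            if gain(best) <= 0:
                continue  # slot helps no tag; leave it unassigned
            assignment[s] = best
            for z, need in remaining[best].items():
                remaining[best][z] = max(0.0, need - infl.get(z, 0.0))
        satisfied = [t for t in tags
                     if all(need <= 0 for need in remaining[t].values())]
        return assignment, satisfied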


[287] 2409.09626

Understanding Simplicity Bias towards Compositional Mappings via Learning Dynamics

Obtaining compositional mappings is important for the model to generalize well compositionally. To better understand when and how to encourage the model to learn such mappings, we study their uniqueness through different perspectives. Specifically, we first show that the compositional mappings are the simplest bijections through the lens of coding length (i.e., an upper bound of their Kolmogorov complexity). This property explains why models having such mappings can generalize well. We further show that the simplicity bias is usually an intrinsic property of neural network training via gradient descent. That partially explains why some models spontaneously generalize well when they are trained appropriately.


[288] 2409.09627

Spatial-Temporal Mamba Network for EEG-based Motor Imagery Classification

Motor imagery (MI) classification is key for brain-computer interfaces (BCIs). In recent years, numerous models have been proposed, ranging from classical algorithms like Common Spatial Pattern (CSP) to deep learning models such as convolutional neural networks (CNNs) and transformers. However, these models have shown limitations in areas such as generalizability, contextuality, and scalability when it comes to effectively extracting the complex spatial-temporal information inherent in electroencephalography (EEG) signals. To address these limitations, we introduce Spatial-Temporal Mamba Network (STMambaNet), an innovative model leveraging the Mamba state space architecture, which excels in processing extended sequences with linear scalability. By incorporating spatial and temporal Mamba encoders, STMambaNet effectively captures the intricate dynamics in both space and time, significantly enhancing the decoding performance of EEG signals for MI classification. Experimental results on BCI Competition IV 2a and 2b datasets demonstrate STMambaNet's superiority over existing models, establishing it as a powerful tool for advancing MI-based BCIs and improving real-world BCI systems.


[289] 2409.09628

Can Large Language Models Grasp Event Signals? Exploring Pure Zero-Shot Event-based Recognition

Recent advancements in event-based zero-shot object recognition have demonstrated promising results. However, these methods heavily depend on extensive training and are inherently constrained by the characteristics of CLIP. To the best of our knowledge, this research is the first study to explore the understanding capabilities of large language models (LLMs) for event-based visual content. We demonstrate that LLMs can achieve event-based object recognition without additional training or fine-tuning in conjunction with CLIP, effectively enabling pure zero-shot event-based recognition. Particularly, we evaluate the ability of GPT-4o / 4turbo and two other open-source LLMs to directly recognize event-based visual content. Extensive experiments are conducted across three benchmark datasets, systematically assessing the recognition accuracy of these models. The results show that LLMs, especially when enhanced with well-designed prompts, significantly improve event-based zero-shot recognition performance. Notably, GPT-4o outperforms the compared models and exceeds the recognition accuracy of state-of-the-art event-based zero-shot methods on N-ImageNet by five orders of magnitude. The implementation of this paper is available at https://github.com/ChrisYu-Zz/Pure-event-based-recognition-based-LLM.


[290] 2409.09629

Confidence Estimation for LLM-Based Dialogue State Tracking

Estimation of a model's confidence on its outputs is critical for Conversational AI systems based on large language models (LLMs), especially for reducing hallucination and preventing over-reliance. In this work, we provide an exhaustive exploration of methods, including approaches proposed for open- and closed-weight LLMs, aimed at quantifying and leveraging model uncertainty to improve the reliability of LLM-generated responses, specifically focusing on dialogue state tracking (DST) in task-oriented dialogue systems (TODS). Regardless of the model type, well-calibrated confidence scores are essential to handle uncertainties, thereby improving model performance. We evaluate four methods for estimating confidence scores based on softmax, raw token scores, verbalized confidences, and a combination of these methods, using the area under the curve (AUC) metric to assess calibration, with higher AUC indicating better calibration. We also enhance these with a self-probing mechanism, proposed for closed models. Furthermore, we assess these methods using an open-weight model fine-tuned for the task of DST, achieving superior joint goal accuracy (JGA). Our findings also suggest that fine-tuning open-weight LLMs can result in enhanced AUC performance, indicating better confidence score calibration.
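
A minimal sketch of the softmax-based variant with AUC as the calibration measure: aggregate the emitted tokens' probabilities into a slot-value confidence, then check how well the confidences separate correct from incorrect predictions. The aggregation rule (geometric mean) and the toy numbers are illustrative:

    import numpy as np
    from sklearn.metrics import roc_auc_score

    def slot_confidence(token_logprobs):
        """Softmax-based confidence for one generated slot value.

        token_logprobs: log-probabilities of the tokens the model emitted
        for the slot value. The geometric mean of token probabilities is one
        simple aggregation; the paper compares several variants.
        """
        return float(np.exp(np.mean(token_logprobs)))

    # Toy calibration check: confidences vs. whether the slot was correct.
    # Well-calibrated scores should separate correct from incorrect
    # predictions, yielding a high AUC (the abstract's calibration measure).
    confidences = [0.92, 0.85, 0.40, 0.33, 0.76, 0.21]
    correct = [1, 1, 0, 1, 1, 0]
    print("AUC =", roc_auc_score(correct, confidences))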


[291] 2409.09632

High-Order Oscillation-Eliminating Hermite WENO Method for Hyperbolic Conservation Laws

This paper proposes high-order accurate, oscillation-eliminating Hermite weighted essentially non-oscillatory (OE-HWENO) finite volume schemes for hyperbolic conservation laws. The OE-HWENO schemes apply an OE procedure after each Runge--Kutta stage, dampening the first-order moments of the HWENO solution to suppress spurious oscillations without any problem-dependent parameters. This OE procedure acts as a filter, derived from the solution operator of a novel damping equation, solved exactly without discretization. As a result, the OE-HWENO method remains stable with a normal CFL number, even for strong shocks producing highly stiff damping terms. To ensure the method's non-oscillatory property across varying scales and wave speeds, we design a scale- and evolution-invariant damping equation and propose a dimensionless transformation for HWENO reconstruction. The OE-HWENO method offers several advantages over existing HWENO methods: the OE procedure is efficient and easy to implement, requiring only simple multiplication of first-order moments; it preserves high-order accuracy, local compactness, and spectral properties. The non-intrusive OE procedure can be integrated seamlessly into existing HWENO codes. Finally, we analyze the bound-preserving (BP) property using optimal cell average decomposition, relaxing the BP time step-size constraint and reducing decomposition points, improving efficiency. Extensive benchmarks validate the method's accuracy, efficiency, resolution, and robustness.


[292] 2409.09633

A Scalable Tabletop Satellite Automation Testbed: Design And Experiments

This paper presents a detailed system design and component selection for the Transforming Proximity Operations and Docking Service (TPODS) module, designed to gain custody of uncontrolled resident space objects (RSOs) via rendezvous and proximity operation (RPO). In addition to serving as a free-flying robotic manipulator to work with cooperative and uncooperative RSOs, the TPODS modules are engineered to have the ability to cooperate with one another to build scaffolding for more complex satellite servicing activities. The structural design of the prototype module is inspired by Tensegrity principles, minimizing the structural mass of the module's frame. The prototype TPODS module is fabricated using lightweight polycarbonate with an aluminum or carbon fiber frame. The inner shell that houses various electronic and pneumatic components is 3-D printed using ABS material. Four OpenMV H7 R1 cameras are used for the pose estimation of RSOs, including other TPODS modules. Compressed air supplied by an external source is used for the initial testing and can be replaced by module-mounted nitrogen pressure vessels for full on-board propulsion later. A Teensy 4.1 single-board computer is used as a central command unit that receives data from the four OpenMV cameras and commands its thrusters based on the control logic.


[293] 2409.09635

A Novel Framework For Text Detection From Natural Scene Images With Complex Background

Recognizing text in camera images is a known hard problem because of the difficulty of detecting text against varied and complicated backgrounds. In this paper, we propose a novel and efficient method to detect text regions in images with complex backgrounds using wavelet transforms. The framework applies a wavelet transform to the original image in its grayscale form, followed by sub-band filtering. Region clustering using region centroids is then applied, and a bounding box is fitted to each region, thus identifying the text regions. This method is more sophisticated and efficient than previous methods, as it is not tied to a particular font size and is therefore more general. The sample set used for experimental purposes consists of 50 images with varying backgrounds. Images with edge prominence are considered. Furthermore, our method can be easily customized for applications with different scopes.


[294] 2409.09636

Towards understanding evolution of science through language model series

We introduce AnnualBERT, a series of language models designed specifically to capture the temporal evolution of scientific text. Deviating from the prevailing paradigms of subword tokenizations and "one model to rule them all", AnnualBERT adopts whole words as tokens and is composed of a base RoBERTa model pretrained from scratch on the full text of 1.7 million arXiv papers published until 2008 and a collection of models progressively trained on arXiv papers on an annual basis. We demonstrate the effectiveness of AnnualBERT models by showing that they not only have comparable performances in standard tasks but also achieve state-of-the-art performances on domain-specific NLP tasks as well as link prediction tasks in the arXiv citation network. We then utilize probing tasks to quantify the models' behavior in terms of representation learning and forgetting as time progresses. Our approach enables the pretrained models to not only improve performances on scientific text processing tasks but also to provide insights into the development of scientific discourse over time. The series of the models is available at https://huggingface.co/jd445/AnnualBERTs.


[295] 2409.09638

Multi-view Hypergraph-based Contrastive Learning Model for Cold-Start Micro-video Recommendation

With the widespread use of mobile devices and the rapid growth of micro-video platforms such as TikTok and Kwai, the demand for personalized micro-video recommendation systems has significantly increased. Micro-videos typically contain diverse information, such as textual metadata, visual cues (e.g., cover images), and dynamic video content, significantly affecting user interaction and engagement patterns. However, most existing approaches often suffer from the problem of over-smoothing, which limits their ability to capture comprehensive interaction information effectively. Additionally, cold-start scenarios present ongoing challenges due to sparse interaction data and the underutilization of available interaction signals. To address these issues, we propose a Multi-view Hypergraph-based Contrastive learning model for cold-start micro-video Recommendation (MHCR). MHCR introduces a multi-view multimodal feature extraction layer to capture interaction signals from various perspectives and incorporates multi-view self-supervised learning tasks to provide additional supervisory signals. Through extensive experiments on two real-world datasets, we show that MHCR significantly outperforms existing video recommendation models and effectively mitigates cold-start challenges. Our code is available at https://anonymous.4open.science/r/MHCR-02EF.


[296] 2409.09641

AACessTalk: Fostering Communication between Minimally Verbal Autistic Children and Parents with Contextual Guidance and Card Recommendation

As minimally verbal autistic (MVA) children communicate with parents through few words and nonverbal cues, parents often struggle to encourage their children to express subtle emotions and needs and to grasp their nuanced signals. We present AACessTalk, a tablet-based, AI-mediated communication system that facilitates meaningful exchanges between an MVA child and a parent. AACessTalk provides real-time guides to the parent to engage the child in conversation and, in turn, recommends contextual vocabulary cards to the child. Through a two-week deployment study with 11 MVA child-parent dyads, we examine how AACessTalk fosters everyday conversation practice and mutual engagement. Our findings show high engagement from all dyads, leading to increased frequency of conversation and turn-taking. AACessTalk also encouraged parents to explore their own interaction strategies and empowered the children to have more agency in communication. We discuss the implications of designing technologies for balanced communication dynamics in parent-MVA child interaction.


[297] 2409.09645

COSCO: A Sharpness-Aware Training Framework for Few-shot Multivariate Time Series Classification

Multivariate time series classification is an important task with widespread domains of applications. Recently, deep neural networks (DNN) have achieved state-of-the-art performance in time series classification. However, they often require large expert-labeled training datasets, which can be infeasible in practice. In few-shot settings, i.e., when only a limited number of samples per class is available in the training data, DNNs show a significant drop in testing accuracy and poor generalization ability. In this paper, we propose to address these problems from an optimization and a loss function perspective. Specifically, we propose a new learning framework named COSCO, consisting of a sharpness-aware minimization (SAM) optimization and a prototypical loss function, to improve the generalization ability of DNNs for multivariate time series classification problems under the few-shot setting. Our experiments demonstrate our proposed method outperforms the existing baseline methods. Our source code is available at: https://github.com/JRB9/COSCO.
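
A minimal sketch of the SAM update at the heart of such a framework: ascend to the worst-case weights within a rho-ball around the current weights, take the gradient there, and step with the base optimizer. The prototypical loss would be supplied as loss_fn; rho and the wiring are assumptions, not COSCO's exact recipe:

    import torch

    def sam_step(model, loss_fn, batch, targets, base_opt, rho=0.05):
        """One step of sharpness-aware minimization (SAM)."""
        # 1) gradient at the current weights
        loss_fn(model(batch), targets).backward()
        grads = [p.grad for p in model.parameters() if p.grad is not None]
        gnorm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12
        eps = []
        with torch.no_grad():
            for p in model.parameters():
                if p.grad is None:
                    eps.append(None)
                    continue
                e = rho * p.grad / gnorm
                p.add_(e)          # ascend to the sharpest nearby point
                eps.append(e)
        model.zero_grad()
        # 2) gradient at the perturbed weights, then undo the perturbation
        loss_fn(model(batch), targets).backward()
        with torch.no_grad():
            for p, e in zip(model.parameters(), eps):
                if e is not None:
                    p.sub_(e)
        base_opt.step()            # descend with the sharpness-aware gradient
        base_opt.zero_grad()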


[298] 2409.09646

A Simple HMM with Self-Supervised Representations for Phone Segmentation

Despite recent advances in self-supervised representations, unsupervised phonetic segmentation remains challenging. Most approaches focus on improving phonetic representations with self-supervised learning, in the hope that the improvement transfers to phonetic segmentation. In this paper, contrary to recent approaches, we show that peak detection on Mel spectrograms is a strong baseline, better than many self-supervised approaches. Based on this finding, we propose a simple hidden Markov model that uses self-supervised representations and features at the boundaries for phone segmentation. Our results demonstrate consistent improvements over previous approaches, with a generalized formulation allowing versatile design adaptations.
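
To make the baseline concrete, here is a rough sketch of Mel-spectrogram peak detection using standard librosa/scipy calls; the hop size, thresholds, and the input file utt.wav are illustrative, not the paper's configuration.

import librosa
import numpy as np
from scipy.signal import find_peaks

wav, sr = librosa.load("utt.wav", sr=16000)       # hypothetical input file
mel = librosa.feature.melspectrogram(y=wav, sr=sr, n_mels=40, hop_length=160)
logmel = np.log(mel + 1e-8)

# Frame-to-frame change; peaks in this curve are candidate phone boundaries.
delta = np.linalg.norm(np.diff(logmel, axis=1), axis=0)
peaks, _ = find_peaks(delta, distance=3, prominence=np.median(delta))
boundaries_sec = peaks * 160 / sr                 # 10 ms per frame at 16 kHz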


[299] 2409.09647

Self-supervised Learning for Acoustic Few-Shot Classification

Labelled data are limited, and self-supervised learning is one of the most important approaches for reducing labelling requirements. While it has been extensively explored in the image domain, it has so far not received the same amount of attention in the acoustic domain. Yet, reducing labelling is a key requirement for many acoustic applications. In bioacoustics specifically, sufficient labels for fully supervised learning are rarely available. This has led to the widespread use of acoustic recognisers that have been pre-trained on unrelated data for bioacoustic tasks. We posit that training on the actual task data and combining self-supervised pre-training with few-shot classification is a superior approach that has the ability to deliver high accuracy even when only a few labels are available. To this end, we introduce and evaluate a new architecture that combines CNN-based preprocessing with feature extraction based on state space models (SSMs). This combination is motivated by the fact that CNN-based networks alone struggle to capture temporal information effectively, which is crucial for classifying acoustic signals. SSMs, specifically S4 and Mamba, on the other hand, have been shown to have an excellent ability to capture long-range dependencies in sequence data. We pre-train this architecture using contrastive learning on the actual task data and subsequently fine-tune it with an extremely small amount of labelled data. We evaluate the performance of this proposed architecture for ($n$-shot, $n$-class) classification on standard benchmarks as well as real-world data. Our evaluation shows that it outperforms state-of-the-art architectures on the few-shot classification problem.


[300] 2409.09649

SparX: A Sparse Cross-Layer Connection Mechanism for Hierarchical Vision Mamba and Transformer Networks

Due to the capability of dynamic state space models (SSMs) in capturing long-range dependencies with near-linear computational complexity, Mamba has shown notable performance in NLP tasks. This has inspired the rapid development of Mamba-based vision models, resulting in promising results in visual recognition tasks. However, such models are not capable of distilling features across layers through feature aggregation, interaction, and selection. Moreover, existing cross-layer feature aggregation methods designed for CNNs or ViTs are not practical in Mamba-based models due to high computational costs. Therefore, this paper aims to introduce an efficient cross-layer feature aggregation mechanism for Mamba-based vision backbone networks. Inspired by the Retinal Ganglion Cells (RGCs) in the human visual system, we propose a new sparse cross-layer connection mechanism termed SparX to effectively improve cross-layer feature interaction and reuse. Specifically, we build two different types of network layers: ganglion layers and normal layers. The former has higher connectivity and complexity, enabling multi-layer feature aggregation and interaction in an input-dependent manner. In contrast, the latter has lower connectivity and complexity. By interleaving these two types of layers, we design a new vision backbone network with sparsely cross-connected layers, achieving an excellent trade-off among model size, computational cost, memory cost, and accuracy in comparison to its counterparts. For instance, with fewer parameters, SparX-Mamba-T improves the top-1 accuracy of VMamba-T from 82.5% to 83.5%, while SparX-Swin-T achieves a 1.3% increase in top-1 accuracy compared to Swin-T. Extensive experimental results demonstrate that our new connection mechanism possesses both superior performance and generalization capabilities on various vision tasks.
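
As a schematic illustration of the interleaving idea (not the paper's modules), the sketch below places sparse "ganglion" layers that fuse features from several earlier layers between normal layers with plain sequential connectivity; all block definitions are placeholders.

import torch
import torch.nn as nn

class GanglionLayer(nn.Module):
    def __init__(self, dim, n_inputs):
        super().__init__()
        self.fuse = nn.Linear(dim * n_inputs, dim)   # multi-layer fusion
    def forward(self, feats):                        # feats: list of (B, dim)
        return self.fuse(torch.cat(feats, dim=-1))

class SparseBackbone(nn.Module):
    def __init__(self, dim=64, depth=8, every=3):
        super().__init__()
        self.every = every
        self.normal = nn.ModuleList([nn.Linear(dim, dim) for _ in range(depth)])
        self.ganglion = GanglionLayer(dim, n_inputs=every)
    def forward(self, x):
        history = [x]
        for i, layer in enumerate(self.normal):
            x = torch.relu(layer(x))
            history.append(x)
            if (i + 1) % self.every == 0:            # sparse cross-layer hop
                x = self.ganglion(history[-self.every:])
        return x

out = SparseBackbone()(torch.randn(2, 64))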


[301] 2409.09652

Unveiling Gender Bias in Large Language Models: Using Teacher's Evaluation in Higher Education As an Example

This paper investigates gender bias in Large Language Model (LLM)-generated teacher evaluations in a higher education setting, focusing on evaluations produced by GPT-4 across six academic subjects. By applying a comprehensive analytical framework that includes Odds Ratio (OR) analysis, the Word Embedding Association Test (WEAT), sentiment analysis, and contextual analysis, this paper identifies patterns of gender-associated language reflecting societal stereotypes. Specifically, words related to approachability and support were used more frequently for female instructors, while words related to entertainment were predominantly used for male instructors, aligning with the concepts of communal and agentic behaviors. The study also finds moderate to strong associations between male-salient adjectives and male names, though career and family words did not distinctly capture gender biases. These findings align with prior research on societal norms and stereotypes, reinforcing the notion that LLM-generated text reflects existing biases.
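
For concreteness, a small sketch of a word-level odds-ratio computation of the kind such an analysis relies on; the smoothing constant and token lists are illustrative.

from collections import Counter

def odds_ratio(word, female_tokens, male_tokens, alpha=0.5):
    """Tokens: flat lists of words from evaluations of each group."""
    f, m = Counter(female_tokens), Counter(male_tokens)
    a = f[word] + alpha                      # word count, female evaluations
    b = sum(f.values()) - f[word] + alpha    # all other words, female
    c = m[word] + alpha                      # word count, male evaluations
    d = sum(m.values()) - m[word] + alpha    # all other words, male
    return (a / b) / (c / d)                 # OR > 1: female-associated word

# The log of this ratio is commonly reported so the scale is symmetric around
# zero for male- vs. female-associated words.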


[302] 2409.09653

KAN v.s. MLP for Offline Reinforcement Learning

Kolmogorov-Arnold Networks (KAN) is an emerging neural network architecture in machine learning. It has sparked great interest in the research community as to whether KAN can be a promising alternative to the commonly used Multi-Layer Perceptron (MLP). Experiments in various fields have demonstrated that KAN-based machine learning can achieve comparable, if not better, performance than MLP-based methods, with much smaller parameter scales and greater explainability. In this paper, we explore the incorporation of KAN into the actor and critic networks for offline reinforcement learning (RL). We evaluated the performance, parameter scales, and training efficiency of various KAN- and MLP-based conservative Q-learning (CQL) models on the classical D4RL benchmark for offline RL. Our study demonstrates that KAN can achieve performance close to the commonly used MLP with significantly fewer parameters. This gives us the option to choose the base network according to the requirements of the offline RL tasks.


[303] 2409.09654

A Simple Study on the Optimality of Hybrid NOMA

The key idea of hybrid non-orthogonal multiple access (NOMA) is to allow users to use the bandwidth resources to which they cannot have access in orthogonal multiple access (OMA) based legacy networks while still guaranteeing compatibility with the legacy network. However, in a conventional hybrid NOMA network, some users have access to more bandwidth resources than others, which leads to a potential performance loss. So what if the users can access the same amount of bandwidth resources? This letter focuses on a simple two-user scenario, and develops analytical and simulation results to reveal that, for the considered scenario, conventional hybrid NOMA is still an optimal transmission strategy.


[304] 2409.09658

Estimation of inertial properties of a rigid structure maneuvered by satellite modules

The LASR Laboratory is investigating the use of free-flying spacecraft modules in several on-orbit servicing and manufacturing (OSAM) activities. Previous work consists of the system development and testing of the aforementioned thrust-capable modules. This study devises, implements, and validates an algorithm for estimating the inertial parameters of a rigid structure maneuvered with the help of Transforming Proximity Operations and Docking Service (TPODS) satellite modules. The primary contribution of this work is an observability analysis to infer an input sequence conducive to estimating the inertial parameters. For the experimental validation of the proposed estimation algorithm, real-time pose measurements are logged through the VICON motion capture system, and the recorded data is utilized to assess the performance of the estimation algorithm in predicting the mass and moments of inertia of an isolated TPODS module.


[305] 2409.09659

Leveraging Open-Source Large Language Models for Native Language Identification

Native Language Identification (NLI) - the task of identifying the native language (L1) of a person based on their writing in the second language (L2) - has applications in forensics, marketing, and second language acquisition. Historically, conventional machine learning approaches that heavily rely on extensive feature engineering have outperformed transformer-based language models on this task. Recently, closed-source generative large language models (LLMs), e.g., GPT-4, have demonstrated remarkable performance on NLI in a zero-shot setting, including promising results in open-set classification. However, closed-source LLMs have many disadvantages, such as high costs and the undisclosed nature of their training data. This study explores the potential of using open-source LLMs for NLI. Our results indicate that open-source LLMs do not reach the accuracy levels of closed-source LLMs when used out of the box. However, when fine-tuned on labeled training data, open-source LLMs can achieve performance comparable to that of commercial LLMs.


[306] 2409.09661

ContractTinker: LLM-Empowered Vulnerability Repair for Real-World Smart Contracts

Smart contracts are susceptible to being exploited by attackers, especially when facing real-world vulnerabilities. To mitigate this risk, developers often rely on third-party audit services to identify potential vulnerabilities before project deployment. Nevertheless, repairing the identified vulnerabilities is still complex and labor-intensive, particularly for developers lacking security expertise. Moreover, existing pattern-based repair tools mostly fail to address real-world vulnerabilities due to their lack of high-level semantic understanding. To fill this gap, we propose ContractTinker, a Large Language Model (LLM)-empowered tool for real-world vulnerability repair. The key insight is our adoption of the Chain-of-Thought approach to break down the entire generation task into sub-tasks. Additionally, to reduce hallucination, we integrate program static analysis to guide the LLM. We evaluate ContractTinker on 48 high-risk vulnerabilities. The experimental results show that among the patches generated by ContractTinker, 23 (48%) are valid patches that fix the vulnerabilities, while 10 (21%) require only minor modifications. A video of ContractTinker is available at https://youtu.be/HWFVi-YHcPE.


[307] 2409.09662

ExploreSelf: Fostering User-driven Exploration and Reflection on Personal Challenges with Adaptive Guidance by Large Language Models

Expressing stressful experiences in words has been shown to improve mental and physical health, but individuals often disengage from writing interventions as they struggle to organize their thoughts and emotions. Reflective prompts have been used to provide direction, and large language models (LLMs) have demonstrated the potential to provide tailored guidance. However, current systems often limit users' flexibility to direct their own reflections. We thus present ExploreSelf, an LLM-driven application designed to empower users to control their reflective journey. ExploreSelf allows users to receive adaptive support through dynamically generated questions. Through an exploratory study with 19 participants, we examine how participants explore and reflect on personal challenges using ExploreSelf. Our findings demonstrate that participants valued the balance between guided support and the freedom to control their reflective journey, leading to deeper engagement and insight. Building on our findings, we discuss implications for designing LLM-driven tools that promote user empowerment through effective reflective practices.


[308] 2409.09665

Proximity operations of CubeSats via sensor fusion of ultra-wideband range measurements with rate gyroscopes, accelerometers and monocular vision

A robust pose estimation algorithm based on an extended Kalman filter using measurements from accelerometers, rate gyroscopes, monocular vision, and ultra-wideband radar is presented. The sensor fusion and pose estimation algorithm incorporates Mahalanobis distance-based outlier rejection and under-weighting of measurements for robust filter performance when sudden range measurements arrive after periods without measurements caused by the range limitations of radar transceivers. The estimator is further validated through an experimental analysis using low-cost radar, IMU, and camera sensors. The pose estimate is utilized to perform proximity operations and docking of Transforming Proximity Operations and Docking Service (TPODS) satellite modules with a fixed target.
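
To illustrate the gating step, a minimal sketch of Mahalanobis-distance-based outlier rejection in a Kalman update; the gate value and matrix shapes are illustrative, and the under-weighting branch is omitted.

import numpy as np

def gated_update(x, P, z, H, R, gate=9.21):   # gate ~ chi^2(2 dof) at 99%
    y = z - H @ x                             # innovation
    S = H @ P @ H.T + R                       # innovation covariance
    d2 = float(y @ np.linalg.solve(S, y))     # squared Mahalanobis distance
    if d2 > gate:                             # implausible: skip measurement
        return x, P
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P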


[309] 2409.09668

EditBoard: Towards A Comprehensive Evaluation Benchmark for Text-based Video Editing Models

The rapid development of diffusion models has significantly advanced AI-generated content (AIGC), particularly in Text-to-Image (T2I) and Text-to-Video (T2V) generation. Text-based video editing, leveraging these generative capabilities, has emerged as a promising field, enabling precise modifications to videos based on text prompts. Despite the proliferation of innovative video editing models, there is a conspicuous lack of comprehensive evaluation benchmarks that holistically assess these models' performance across various dimensions. Existing evaluations are limited and inconsistent, typically summarizing overall performance with a single score, which obscures models' effectiveness on individual editing tasks. To address this gap, we propose EditBoard, the first comprehensive evaluation benchmark for text-based video editing models. EditBoard encompasses nine automatic metrics across four dimensions, evaluating models on four task categories and introducing three new metrics to assess fidelity. This task-oriented benchmark facilitates objective evaluation by detailing model performance and providing insights into each model's strengths and weaknesses. By open-sourcing EditBoard, we aim to standardize evaluation and advance the development of robust video editing models.


[310] 2409.09670

Unsupervised Hyperspectral and Multispectral Image Blind Fusion Based on Deep Tucker Decomposition Network with Spatial-Spectral Manifold Learning

Hyperspectral and multispectral image fusion aims to generate high spectral and spatial resolution hyperspectral images (HR-HSI) by fusing high-resolution multispectral images (HR-MSI) and low-resolution hyperspectral images (LR-HSI). However, existing fusion methods encounter challenges such as unknown degradation parameters and incomplete exploitation of the correlation between high-dimensional structures and deep image features. To overcome these issues, in this article, an unsupervised blind fusion method for hyperspectral and multispectral images based on Tucker decomposition and spatial-spectral manifold learning (DTDNML) is proposed. We design a novel deep Tucker decomposition network that maps LR-HSI and HR-MSI into a consistent feature space, achieving reconstruction through decoders with shared parameters. To better exploit and fuse spatial-spectral features in the data, we design a core tensor fusion network that incorporates a spatial-spectral attention mechanism for aligning and fusing features at different scales. Furthermore, to enhance the capacity to capture global information, a Laplacian-based spatial-spectral manifold constraint is introduced into the shared decoders. Extensive experiments validate that this method enhances the accuracy and efficiency of hyperspectral and multispectral fusion on different remote sensing datasets. The source code is available at https://github.com/Shawn-H-Wang/DTDNML.


[311] 2409.09673

SITSMamba for Crop Classification based on Satellite Image Time Series

Satellite image time series (SITS) data provides continuous observations over time, allowing for the tracking of vegetation changes and growth patterns throughout the seasons and years. Numerous deep learning (DL) approaches using SITS for crop classification have emerged recently, with the latest adopting Transformers for SITS classification. However, the quadratic complexity of self-attention in Transformers poses challenges for classifying long time series. While the cutting-edge Mamba architecture has demonstrated strength in various domains, including remote sensing image interpretation, its capacity to learn temporal representations in SITS data remains unexplored. Moreover, existing SITS classification methods often depend solely on crop labels as supervision signals, which fails to fully exploit the temporal information. In this paper, we propose a Satellite Image Time Series Mamba (SITSMamba) method for crop classification based on remote sensing time series data. The proposed SITSMamba contains a spatial encoder based on Convolutional Neural Networks (CNN) and a Mamba-based temporal encoder. To exploit richer temporal information from SITS, we design two decoder branches for different tasks. The first branch is a crop Classification Branch (CBranch), which includes a ConvBlock to decode the features into a crop map. The second branch is a SITS Reconstruction Branch (RBranch) that uses a Linear layer to transform the encoded features to predict the original input values. Furthermore, we design a Positional Weight (PW) applied to the RBranch to help the model learn rich latent knowledge from SITS. We also design two weighting factors to control the balance of the two branches during training. The code of SITSMamba is available at: https://github.com/XiaoleiQinn/SITSMamba.


[312] 2409.09674

Model Selection Through Model Sorting

We propose a novel approach to selecting the best model of the data. Based on the exclusive properties of nested models, we find the most parsimonious model containing the risk minimizer predictor. We prove the existence of probably approximately correct (PAC) bounds on the difference of the minimum empirical risk of two successive nested models, called the successive empirical excess risk (SEER). Based on these bounds, we propose a model order selection method called nested empirical risk (NER). The sorted NER (S-NER) method sorts the models intelligently so that the minimum risk decreases. We construct a test that predicts whether expanding the model decreases the minimum risk or not. With high probability, the NER and S-NER methods choose the true model order and the most parsimonious model containing the risk minimizer predictor, respectively. We use S-NER model selection in linear regression and show that, without any prior information, the S-NER method can outperform the accuracy of feature-sorting algorithms such as orthogonal matching pursuit (OMP) aided with prior knowledge of the true model order. Also, on the UCR datasets, the NER method dramatically reduces the complexity of classification, with a negligible loss of accuracy.
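
As a toy illustration of the underlying comparison (not the paper's PAC-calibrated test), the sketch below fits nested least-squares models and stops expanding when the successive risk drop becomes small; the stopping threshold is a hypothetical constant.

import numpy as np

def nested_risks(X, y, max_order):
    """Empirical least-squares risk of nested models using the first k features."""
    risks = []
    for k in range(1, max_order + 1):
        Xk = X[:, :k]
        beta, *_ = np.linalg.lstsq(Xk, y, rcond=None)
        risks.append(np.mean((y - Xk @ beta) ** 2))
    return np.array(risks)

def select_order(risks, threshold=0.05):
    # Stop when the successive empirical excess risk (the risk drop from
    # model k to model k+1) falls below the threshold.
    drops = risks[:-1] - risks[1:]
    below = np.where(drops < threshold)[0]
    return int(below[0]) + 1 if below.size else len(risks)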


[313] 2409.09676

Nebula: Efficient, Private and Accurate Histogram Estimation

We present Nebula, a system for differentially private histogram estimation of data distributed among clients. Nebula enables clients to locally subsample and encode their data such that an untrusted server learns only data values that meet an aggregation threshold to satisfy differential privacy guarantees. Compared with other private histogram estimation systems, Nebula uniquely achieves all of the following: \textit{i)} a strict upper bound on privacy leakage; \textit{ii)} client privacy under realistic trust assumptions; \textit{iii)} significantly better utility compared to standard local differential privacy systems; and \textit{iv)} avoiding trusted third-parties, multi-party computation, or trusted hardware. We provide both a formal evaluation of Nebula's privacy, utility and efficiency guarantees, along with an empirical evaluation on three real-world datasets. We demonstrate that clients can encode and upload their data efficiently (only 0.0058 seconds running time and 0.0027 MB data communication) and privately (with a strong differential privacy guarantee of $\varepsilon=1$). On the United States Census dataset, Nebula's untrusted aggregation server estimates histograms with above 88\% better utility than the existing local deployment of differential privacy. Additionally, we describe a variant that allows clients to submit multi-dimensional data, with similar privacy, utility, and performance. Finally, we provide an open source implementation of Nebula.
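
A toy sketch of the core client/server flow, assuming simple Bernoulli subsampling and Gaussian-noised threshold aggregation; Nebula's actual encoding and the formal calibration of these parameters to (epsilon, delta) guarantees are omitted.

import random

def client_report(value, p=0.5):
    """Each client independently reports its value with probability p."""
    return value if random.random() < p else None

def server_histogram(reports, tau=20, noise_scale=2.0):
    counts = {}
    for r in reports:
        if r is not None:
            counts[r] = counts.get(r, 0) + 1
    # Release only values whose noised count clears the aggregation threshold.
    return {v: c for v, c in counts.items()
            if c + random.gauss(0, noise_scale) >= tau}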


[314] 2409.09677

Mitigating Dimensionality in 2D Rectangle Packing Problem under Reinforcement Learning Schema

This paper explores the application of Reinforcement Learning (RL) to the two-dimensional rectangular packing problem. We propose a reduced representation of the state and action spaces that allows for high granularity. Leveraging a UNet architecture and Proximal Policy Optimization (PPO), we trained a model that is comparable to the MaxRect heuristic. However, our approach has great potential to be generalized to non-rectangular packing problems and complex constraints.


[315] 2409.09678

A Comprehensive Methodological Survey of Human Activity Recognition Across Diverse Data Modalities

Human Activity Recognition (HAR) systems aim to understand human behaviour and assign a label to each action, attracting significant attention in computer vision due to their wide range of applications. HAR can leverage various data modalities, such as RGB images and video, skeleton, depth, infrared, point cloud, event stream, audio, acceleration, and radar signals. Each modality provides unique and complementary information suited to different application scenarios. Consequently, numerous studies have investigated diverse approaches for HAR using these modalities. This paper presents a comprehensive survey of the latest advancements in HAR from 2014 to 2024, focusing on machine learning (ML) and deep learning (DL) approaches categorized by input data modalities. We review both single-modality and multi-modality techniques, highlighting fusion-based and co-learning frameworks. Additionally, we cover advancements in hand-crafted action features, methods for recognizing human-object interactions, and activity detection. Our survey includes a detailed dataset description for each modality and a summary of the latest HAR systems, offering comparative results on benchmark datasets. Finally, we provide insightful observations and propose effective future research directions in HAR.


[316] 2409.09681

E-Commerce Inpainting with Mask Guidance in Controlnet for Reducing Overcompletion

E-commerce image generation has always been one of the core demands in the e-commerce field. The goal is to restore the missing background that matches the given main product. In the post-AIGC era, diffusion models are primarily used to generate product images, achieving impressive results. This paper systematically analyzes and addresses a core pain point in diffusion model generation: overcompletion, which refers to the difficulty in maintaining product features. We propose two solutions: 1) using an inpainting model fine-tuned with instance masks to mitigate this phenomenon; and 2) adopting a training-free mask guidance approach, which incorporates refined product masks as constraints when combining ControlNet and UNet to generate the main product, thereby avoiding overcompletion of the product. Our method has achieved promising results in practical applications, and we hope it can serve as an inspiring technical report in this field.


[317] 2409.09682

A Robust Probability-based Joint Registration Method of Multiple Point Clouds Considering Local Consistency

In robotic inspection, joint registration of multiple point clouds is an essential technique for estimating the transformation relationships between measured parts, such as multiple blades in a propeller. However, the presence of noise and outliers in the data can significantly impair the registration performance by affecting the correctness of correspondences. To address this issue, we incorporate a local consistency property into the probability-based joint registration method. Specifically, each measured point set is treated as a sample from an unknown Gaussian Mixture Model (GMM), and the registration problem is framed as estimating the probability model. By incorporating local consistency into the optimization process, we enhance the robustness and accuracy of the posterior distributions, which represent the one-to-all correspondences that directly determine the registration results. Effective closed-form solutions for the transformation and probability parameters are derived with the Expectation-Maximization (EM) algorithm. Extensive experiments demonstrate that our method outperforms existing methods, achieving high accuracy and robustness in the presence of noise and outliers. The code will be available at https://github.com/sulingjie/JPRLC_registration.
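
For intuition, a compact sketch of the E-step that yields such one-to-all correspondences: posterior responsibilities of GMM components (one per model point) with a uniform outlier term. The local-consistency weighting and the paper's closed-form M-step are omitted, and shapes are illustrative.

import numpy as np

def responsibilities(points, centers, sigma2, w_outlier=0.1):
    """points: (N, 3) measured; centers: (M, 3) transformed model points."""
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (N, M)
    lik = np.exp(-d2 / (2 * sigma2)) / (2 * np.pi * sigma2) ** 1.5
    # A uniform outlier component absorbs noise so it cannot skew the fit.
    denom = lik.sum(1, keepdims=True) + w_outlier
    return lik / denom                                              # (N, M)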


[318] 2409.09687

Training Safe Neural Networks with Global SDP Bounds

This paper presents a novel approach to training neural networks with formal safety guarantees using semidefinite programming (SDP) for verification. Our method focuses on verifying safety over large, high-dimensional input regions, addressing limitations of existing techniques that focus on adversarial robustness bounds. We introduce an ADMM-based training scheme for an accurate neural network classifier on the Adversarial Spheres dataset, achieving provably perfect recall with input dimensions up to $d=40$. This work advances the development of reliable neural network verification methods for high-dimensional systems, with potential applications in safe RL policies.


[319] 2409.09689

CAT: Customized Transformer Accelerator Framework on Versal ACAP

Transformers were initially designed for GPUs, but GPUs permit only limited hardware customization. Although FPGAs offer strong customization ability, their design solution space is huge and the design difficulty is high. Versal ACAP is a heterogeneous computing architecture with the AI Engine at its core. It is far more flexible than a GPU in hardware customization, and has a better and smaller design solution space than a traditional FPGA. Therefore, this paper proposes the Customized Transformer Accelerator Framework (CAT). Through the CAT framework, a family of customized Transformer accelerators can be derived on Versal ACAP. The framework is built around an abstract accelerator architecture that deconstructs the Transformer and efficiently maps it onto hardware with a variety of customizable properties. Through the customization and optimization strategy of the CAT framework, the underlying hardware and the upper model jointly constrain and decide on these customizable properties, finally forming a customized accelerator. We use a 7 nm AMD Versal ACAP VCK5000 development board to implement accelerators for different Transformer models based on the CAT framework. Experiments show that we achieve throughput gains of up to 2.41x, 49.50x, and 1.32x compared to the 8 nm Nvidia GPU A10G, the 16 nm AMD FPGA ZCU102, and the 7 nm AMD Versal ACAP VC190 (SOTA), respectively. The corresponding energy efficiency gains are up to 7.80x, 6.19x, and 1.15x.


[320] 2409.09692

Predicting building types and functions at transnational scale

Building-specific knowledge such as building type and function information is important for numerous energy applications. However, comprehensive datasets containing this information for individual households are missing in many regions of Europe. For the first time, we investigate whether it is feasible to predict building types and functional classes at a European scale based on only open GIS datasets available across countries. We train a graph neural network (GNN) classifier on a large-scale graph dataset consisting of OpenStreetMap (OSM) buildings across the EU, Norway, Switzerland, and the UK. To efficiently perform training using the large-scale graph, we utilize localized subgraphs. A graph transformer model achieves a high Cohen's kappa coefficient of 0.754 when classifying buildings into 9 classes, and a very high Cohen's kappa coefficient of 0.844 when classifying buildings into the residential and non-residential classes. The experimental results imply three core novel contributions to the literature. Firstly, we show that building classification across multiple countries is possible using a multi-source dataset consisting of information about 2D building shape, land use, degree of urbanization, and countries as input, and OSM tags as ground truth. Secondly, our results indicate that GNN models that consider contextual information about building neighborhoods improve predictive performance compared to models that only consider individual buildings and ignore the neighborhood. Thirdly, we show that training GNNs on localized subgraphs instead of the full graph improves performance for the task of building classification.
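
As a plain-Python illustration of the localized-subgraph idea (the real pipeline's graph construction and sampling strategy are more involved), a k-hop neighbourhood extractor over an adjacency-list graph:

from collections import deque

def k_hop_subgraph(adj, seed, k):
    """adj: dict node -> list of neighbours; returns nodes within k hops."""
    seen, frontier = {seed}, deque([(seed, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue
        for nb in adj.get(node, []):
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, depth + 1))
    return seen

# Usage: train on the subgraph induced by k_hop_subgraph(adj, building_id, 2)
# rather than message-passing over the full continent-scale graph.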


[321] 2409.09696

AutoJournaling: A Context-Aware Journaling System Leveraging MLLMs on Smartphone Screenshots

Journaling offers significant benefits, including fostering self-reflection, enhancing writing skills, and aiding in mood monitoring. However, many people abandon the practice because traditional journaling is time-consuming, and detailed life events may be overlooked if not recorded promptly. Given that smartphones are the most widely used devices for entertainment, work, and socialization, they present an ideal platform for innovative approaches to journaling. Despite their ubiquity, the potential of using digital phenotyping, a method of unobtrusively collecting data from digital devices to gain insights into psychological and behavioral patterns, for automated journal generation has been largely underexplored. In this study, we propose AutoJournaling, the first-of-its-kind system that automatically generates journals by collecting and analyzing screenshots from smartphones. This system captures life events and corresponding emotions, offering a novel approach to digital phenotyping. We evaluated AutoJournaling by collecting screenshots every 3 seconds from three students over five days, demonstrating its feasibility and accuracy. AutoJournaling is the first framework to utilize seamlessly collected screenshots for journal generation, providing new insights into psychological states through digital phenotyping.


[322] 2409.09702

GFlowNet Pretraining with Inexpensive Rewards

Generative Flow Networks (GFlowNets), a class of generative models, have recently emerged as a suitable framework for generating diverse and high-quality molecular structures by learning from unnormalized reward distributions. Previous works in this direction often restrict exploration by using predefined molecular fragments as building blocks, limiting the chemical space that can be accessed. In this work, we introduce Atomic GFlowNets (A-GFNs), a foundational generative model leveraging individual atoms as building blocks to explore drug-like chemical space more comprehensively. We propose an unsupervised pre-training approach using offline drug-like molecule datasets, which conditions A-GFNs on inexpensive yet informative molecular descriptors such as drug-likeness, topological polar surface area, and synthetic accessibility scores. These properties serve as proxy rewards, guiding A-GFNs towards regions of chemical space that exhibit desirable pharmacological properties. We further extend our method by implementing a goal-conditioned fine-tuning process, which adapts A-GFNs to optimize for specific target properties. In this work, we pretrain A-GFN on the ZINC15 offline dataset and employ robust evaluation metrics to show the effectiveness of our approach when compared to other relevant baseline methods in drug design.
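
To make the proxy-reward idea concrete, a hedged sketch using cheap RDKit descriptors scored against target ranges; the descriptor set, range, and combination rule are illustrative, not A-GFN's actual conditioning or reward shaping.

from rdkit import Chem
from rdkit.Chem import QED, Descriptors

def proxy_reward(smiles, tpsa_range=(40.0, 120.0)):
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return 0.0                      # invalid molecule gets zero reward
    qed = QED.qed(mol)                  # drug-likeness score in [0, 1]
    tpsa = Descriptors.TPSA(mol)        # topological polar surface area
    in_range = float(tpsa_range[0] <= tpsa <= tpsa_range[1])
    return qed * in_range               # reward desirable, in-range molecules

print(proxy_reward("CC(=O)Oc1ccccc1C(=O)O"))   # aspirin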


[323] 2409.09704

AlpaPICO: Extraction of PICO Frames from Clinical Trial Documents Using LLMs

In recent years, there has been a surge in the publication of clinical trial reports, making it challenging to conduct systematic reviews. Automatically extracting Population, Intervention, Comparator, and Outcome (PICO) from clinical trial studies can alleviate the traditionally time-consuming process of manually scrutinizing systematic reviews. Existing approaches to PICO frame extraction involve supervised learning that relies on manually annotated data points in the form of BIO label tagging. Recent approaches, such as In-Context Learning (ICL), which has been shown to be effective for a number of downstream NLP tasks, require the use of labeled examples. In this work, we adopt an ICL strategy by employing the pretrained knowledge of Large Language Models (LLMs), gathered during the pretraining phase of an LLM, to automatically extract the PICO-related terminologies from clinical trial documents in an unsupervised setup, bypassing the need for a large number of annotated data instances. Additionally, to showcase the highest effectiveness of LLMs in the oracle scenario where a large number of annotated samples is available, we adopt an instruction tuning strategy, employing Low-Rank Adaptation (LoRA) to train a gigantic model in a low-resource environment for the PICO frame extraction task. Our empirical results show that our proposed ICL-based framework produces comparable results on all versions of the EBM-NLP dataset, and the proposed instruction-tuned version of our framework produces state-of-the-art results on all the different EBM-NLP datasets. Our project is available at \url{https://github.com/shrimonmuke0202/AlpaPICO.git}.
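
For readers unfamiliar with the setup, a minimal sketch of LoRA-based instruction tuning with the peft library; the model identifier, target modules, and hyperparameters are placeholders rather than the paper's configuration.

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("some-open-llm")  # hypothetical id
config = LoraConfig(
    r=8,                                   # low-rank update dimension
    lora_alpha=16,                         # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()         # only the adapters are trainable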


[324] 2409.09706

Exploring Utility in a Real-World Warehouse Optimization Problem: Formulation Based on Quantum Annealers and Preliminary Results

In the current NISQ era, one of the major challenges faced by researchers and practitioners lies in figuring out how to combine quantum and classical computing in the most efficient and innovative way. In this paper, we present a mechanism coined Quantum Initialization for Warehouse Optimization Problem that resorts to D-Wave's Quantum Annealer. The module has been specifically designed to be embedded into already existing classical software dedicated to the optimization of a real-world industrial problem. We preliminarily tested the implemented mechanism through a two-phase experiment against the classical version of the software.


[325] 2409.09707

Synergistic Spotting and Recognition of Micro-Expression via Temporal State Transition

Micro-expressions are involuntary facial movements that cannot be consciously controlled, conveying subtle cues with substantial real-world applications. The analysis of micro-expressions generally involves two main tasks: spotting micro-expression intervals in long videos and recognizing the emotions associated with these intervals. Previous deep learning methods have primarily relied on classification networks utilizing sliding windows. However, fixed window sizes and window-level hard classification introduce numerous constraints. Additionally, these methods have not fully exploited the potential of complementary pathways for spotting and recognition. In this paper, we present a novel temporal state transition architecture grounded in the state space model, which replaces conventional window-level classification with video-level regression. Furthermore, by leveraging the inherent connections between spotting and recognition tasks, we propose a synergistic strategy that enhances overall analysis performance. Extensive experiments demonstrate that our method achieves state-of-the-art performance. The codes and pre-trained models are available at https://github.com/zizheng-guo/ME-TST.


[326] 2409.09708

ELSA: Exploiting Layer-wise N:M Sparsity for Vision Transformer Acceleration

$N{:}M$ sparsity is an emerging model compression method supported by more and more accelerators to speed up sparse matrix multiplication in deep neural networks. Most existing $N{:}M$ sparsity methods compress neural networks with a uniform setting for all layers in a network or heuristically determine the layer-wise configuration by considering the number of parameters in each layer. However, very few methods have been designed for obtaining a layer-wise customized $N{:}M$ sparse configuration for vision transformers (ViTs), which usually consist of transformer blocks involving the same number of parameters. In this work, to address the challenge of selecting suitable sparse configuration for ViTs on $N{:}M$ sparsity-supporting accelerators, we propose ELSA, Exploiting Layer-wise $N{:}M$ Sparsity for ViTs. Considering not only all $N{:}M$ sparsity levels supported by a given accelerator but also the expected throughput improvement, our methodology can reap the benefits of accelerators supporting mixed sparsity by trading off negligible accuracy loss with both memory usage and inference time reduction for ViT models. For instance, our approach achieves a noteworthy 2.9$\times$ reduction in FLOPs for both Swin-B and DeiT-B with only a marginal degradation of accuracy on ImageNet. Our code will be released upon paper acceptance.
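
To fix ideas, a small sketch of applying a 2:4 pattern to a weight tensor: within every group of M=4 consecutive weights, only the N=2 largest-magnitude entries are kept. The per-layer (N, M) choice is what ELSA searches over; this helper is illustrative.

import torch

def nm_sparsify(weight: torch.Tensor, n: int = 2, m: int = 4):
    """Zero all but the n largest-magnitude weights in each group of m."""
    w = weight.reshape(-1, m)                     # groups of m weights
    idx = w.abs().topk(n, dim=1).indices          # keep n largest per group
    mask = torch.zeros_like(w).scatter_(1, idx, 1.0)
    return (w * mask).reshape(weight.shape)

w = torch.randn(8, 8)
print(nm_sparsify(w))   # each group of 4 now has exactly 2 non-zeros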


[327] 2409.09713

Active RIS-Aided Terahertz Communications with Phase Error and Beam Misalignment

Terahertz (THz) communications will be pivotal in sixth-generation (6G) wireless networks, offering significantly wider bandwidths and higher data rates. However, the unique propagation characteristics of the THz frequency band, such as high path loss and sensitivity to blockages, pose substantial challenges. Reconfigurable intelligent surfaces (RISs) present a promising solution for enhancing THz communications by dynamically shaping the propagation environment to address these issues. Active RISs, in particular, can amplify reflected signals, effectively mitigating the multiplicative fading effects in RIS-aided links. Given the highly directional nature of THz signals, beam misalignment is a significant concern, while discrete phase shifting is more practical for real-world RIS deployment compared to continuous adjustments. This paper investigates the performance of active-RIS-aided THz communication systems, focusing on discrete phase shifts and beam misalignment. An expression for the ergodic capacity is derived, incorporating critical system parameters to assess performance. Numerical results offer insights into optimizing active-RIS-aided THz communication systems.


[328] 2409.09714

Pre-Training for 3D Hand Pose Estimation with Contrastive Learning on Large-Scale Hand Images in the Wild

We present a contrastive learning framework based on in-the-wild hand images tailored for pre-training 3D hand pose estimators, dubbed HandCLR. Pre-training on large-scale images achieves promising results in various tasks, but prior 3D hand pose pre-training methods have not fully utilized the potential of diverse hand images accessible from in-the-wild videos. To facilitate scalable pre-training, we first prepare an extensive pool of hand images from in-the-wild videos and design our method with contrastive learning. Specifically, we collected over 2.0M hand images from recent human-centric videos, such as 100DOH and Ego4D. To extract discriminative information from these images, we focus on the similarity of hands: pairs of similar hand poses originating from different samples, and propose a novel contrastive learning method that embeds similar hand pairs closer in the latent space. Our experiments demonstrate that our method outperforms conventional contrastive learning approaches that produce positive pairs solely from a single image with data augmentation. We achieve significant improvements over the state-of-the-art method on various datasets, with gains of 15% on FreiHand, 10% on DexYCB, and 4% on AssemblyHands.


[329] 2409.09715

Generative Semantic Communication via Textual Prompts: Latency Performance Tradeoffs

This paper develops an edge-device collaborative Generative Semantic Communications (Gen SemCom) framework leveraging pre-trained Multi-modal/Vision Language Models (M/VLMs) for ultra-low-rate semantic communication via textual prompts. The proposed framework optimizes the use of M/VLMs on the wireless edge/device to generate high-fidelity textual prompts through visual captioning/question answering, which are then transmitted over a wireless channel for SemCom. Specifically, we develop a multi-user Gen SemCom framework using pre-trained M/VLMs, and formulate a joint optimization problem of prompt generation offloading, communication and computation resource allocation to minimize the latency and maximize the resulting semantic quality. Due to the nonconvex nature of the problem with highly coupled discrete and continuous variables, we decompose it as a two-level problem and propose a low-complexity swap/leaving/joining (SLJ)-based matching algorithm. Simulation results demonstrate significant performance improvements over the conventional semantic-unaware/non-collaborative offloading benchmarks.


[330] 2409.09716

Disentangling Visual Priors: Unsupervised Learning of Scene Interpretations with Compositional Autoencoder

Contemporary deep learning architectures lack principled means for capturing and handling fundamental visual concepts, like objects, shapes, geometric transforms, and other higher-level structures. We propose a neurosymbolic architecture that uses a domain-specific language to capture selected priors of image formation, including object shape, appearance, categorization, and geometric transforms. We express template programs in that language and learn their parameterization with features extracted from the scene by a convolutional neural network. When executed, the parameterized program produces geometric primitives which are rendered and assessed for correspondence with the scene content, and trained via auto-association using gradients. We confront our approach with a baseline method on a synthetic benchmark and demonstrate its capacity to disentangle selected aspects of the image formation process, learn from small data, perform correct inference in the presence of noise, and generalize out of sample.


[331] 2409.09717

Automatic Control With Human-Like Reasoning: Exploring Language Model Embodied Air Traffic Agents

Recent developments in language models have created new opportunities in air traffic control studies. The current focus is primarily on text and language-based use cases. However, these language models may offer a higher potential impact in the air traffic control domain, thanks to their ability to interact with air traffic environments in an embodied agent form. They also provide a language-like reasoning capability to explain their decisions, the lack of which has been a significant roadblock for the implementation of automatic air traffic control. This paper investigates the application of a language model-based agent with function-calling and learning capabilities to resolve air traffic conflicts without human intervention. The main components of this research are foundational large language models, tools that allow the agent to interact with the simulator, and a new concept, the experience library. An innovative part of this research, the experience library is a vector database that stores synthesized knowledge that agents have learned from interactions with the simulations and language models. To evaluate the performance of our language model-based agent, both open-source and closed-source models were tested. The results of our study reveal significant differences in performance across various configurations of the language model-based agents. The best-performing configuration was able to solve all but one of the 120 imminent conflict scenarios, including scenarios with up to four aircraft at the same time. Most importantly, the agents are able to provide human-level text explanations of traffic situations and conflict resolution strategies.
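
As a toy illustration of the experience-library retrieval step (the embedding function and stored texts are placeholders), embed the current situation and fetch the most similar stored experiences for the agent's prompt:

import numpy as np

class ExperienceLibrary:
    def __init__(self):
        self.vectors, self.texts = [], []

    def add(self, vec, text):
        self.vectors.append(vec / np.linalg.norm(vec))
        self.texts.append(text)

    def retrieve(self, query, k=3):
        if not self.vectors:
            return []
        sims = np.stack(self.vectors) @ (query / np.linalg.norm(query))
        return [self.texts[i] for i in np.argsort(-sims)[:k]]

# Usage: lib.add(embed(scenario), "climb resolved the crossing conflict"), then
# prepend lib.retrieve(embed(current_scenario)) to the LLM prompt.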


[332] 2409.09719

Optimal Operation of Active RIS-Aided Wireless Powered Communications in IoT Networks

Wireless-powered communications (WPCs) are increasingly crucial for extending the lifespan of low-power Internet of Things (IoT) devices. Furthermore, reconfigurable intelligent surfaces (RISs) can create favorable electromagnetic environments by providing alternative signal paths to counteract blockages. The strategic integration of WPC and RIS technologies can significantly enhance energy transfer and data transmission efficiency. However, passive RISs suffer from double-fading attenuation over RIS-aided cascaded links. In this article, we propose the application of an active RIS within WPC-enabled IoT networks. The enhanced flexibility of the active RIS in terms of energy transfer and information transmission is investigated using adjustable parameters. We derive novel closed-form expressions for the ergodic rate and outage probability by incorporating key parameters, including signal amplification, active noise, power consumption, and phase quantization errors. Additionally, we explore the optimization of WPC scenarios, focusing on the time-switching factor and power consumption of the active RIS. The results validate our analysis, demonstrating that an active RIS significantly enhances WPC performance compared to a passive RIS.


[333] 2409.09721

Finetuning CLIP to Reason about Pairwise Differences

Vision-language models (VLMs) such as CLIP are trained via contrastive learning between text and image pairs, resulting in aligned image and text embeddings that are useful for many downstream tasks. A notable drawback of CLIP, however, is that the resulting embedding space seems to lack some of the structure of its purely text-based alternatives. For instance, while text embeddings have been long noted to satisfy \emph{analogies} in embedding space using vector arithmetic, CLIP has no such property. In this paper, we propose an approach to natively train CLIP in a contrastive manner to reason about differences in embedding space. We finetune CLIP so that the differences in image embedding space correspond to \emph{text descriptions of the image differences}, which we synthetically generate with large language models on image-caption paired datasets. We first demonstrate that our approach yields significantly improved capabilities in ranking images by a certain attribute (e.g., elephants are larger than cats), which is useful in retrieval or constructing attribute-based classifiers, and improved zero-shot classification performance on many downstream image classification tasks. In addition, our approach enables a new mechanism for inference that we refer to as comparative prompting, where we leverage prior knowledge of text descriptions of differences between classes of interest, achieving even larger performance gains in classification. Finally, we illustrate that the resulting embeddings obey a larger degree of geometric properties in embedding space, such as in text-to-image generation.
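
For intuition, a rough sketch of a difference-alignment objective of the kind described: the difference of two image embeddings is contrasted against the embedding of a text describing their difference. Encoders and batching are placeholders, not the paper's training recipe.

import torch
import torch.nn.functional as F

def difference_loss(img_a, img_b, diff_text, tau=0.07):
    """img_a, img_b, diff_text: (N, d) CLIP embeddings for paired samples."""
    diff = F.normalize(img_a - img_b, dim=-1)      # image-space difference
    text = F.normalize(diff_text, dim=-1)          # text of that difference
    logits = diff @ text.t() / tau                 # contrast within the batch
    targets = torch.arange(diff.size(0))           # matched pairs on diagonal
    return F.cross_entropy(logits, targets)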


[334] 2409.09722

Measuring Recency Bias In Sequential Recommendation Systems

Recency bias in a sequential recommendation system refers to the overly high emphasis placed on recent items within a user session. This bias can diminish the serendipity of recommendations and hinder the system's ability to capture users' long-term interests, leading to user disengagement. We propose a simple yet effective novel metric specifically designed to quantify recency bias. Our findings demonstrate that high recency bias, as measured by our proposed metric, also adversely impacts recommendation performance, and that mitigating it improves recommendation performance across all models evaluated in our experiments, highlighting the importance of measuring recency bias.
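
Since the paper's metric is not reproduced here, the following is only one illustrative way to quantify recency bias: compare the model's attention mass on the last items of a session against a uniform baseline.

import numpy as np

def recency_bias(attention, last_k=1):
    """attention: (sessions, length) rows summing to 1 over session items."""
    observed = attention[:, -last_k:].sum(axis=1).mean()
    uniform = last_k / attention.shape[1]
    return observed / uniform      # > 1 means attention skews to recent items

attn = np.array([[0.1, 0.2, 0.7],
                 [0.2, 0.2, 0.6]])
print(recency_bias(attn))          # ~1.95: strong skew toward the last item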


[335] 2409.09724

MFCLIP: Multi-modal Fine-grained CLIP for Generalizable Diffusion Face Forgery Detection

The rapid development of photo-realistic face generation methods has raised significant concerns in society and academia, highlighting the urgent need for robust and generalizable face forgery detection (FFD) techniques. Although existing approaches mainly capture face forgery patterns using the image modality, other modalities such as fine-grained noise and text are not fully explored, which limits the generalization capability of the model. In addition, most FFD methods tend to identify facial images generated by GANs, but struggle to detect unseen diffusion-synthesized ones. To address the limitations, we aim to leverage the cutting-edge foundation model, contrastive language-image pre-training (CLIP), to achieve generalizable diffusion face forgery detection (DFFD). In this paper, we propose a novel multi-modal fine-grained CLIP (MFCLIP) model, which mines comprehensive and fine-grained forgery traces across image-noise modalities via language-guided face forgery representation learning, to facilitate the advancement of DFFD. Specifically, we devise a fine-grained language encoder (FLE) that extracts fine global language features from hierarchical text prompts. We design a multi-modal vision encoder (MVE) to capture global image forgery embeddings as well as fine-grained noise forgery patterns extracted from the richest patch, and integrate them to mine general visual forgery traces. Moreover, we build an innovative plug-and-play sample pair attention (SPA) method to emphasize relevant negative pairs and suppress irrelevant ones, allowing cross-modality sample pairs to conduct more flexible alignment. Extensive experiments and visualizations show that our model outperforms the state of the art in different settings such as cross-generator, cross-forgery, and cross-dataset evaluations.


[336] 2409.09725

Precise Pick-and-Place using Score-Based Diffusion Networks

In this paper, we propose a novel coarse-to-fine continuous pose diffusion method to enhance the precision of pick-and-place operations within robotic manipulation tasks. Leveraging the capabilities of diffusion networks, we facilitate the accurate perception of object poses. This accurate perception enhances both pick-and-place success rates and overall manipulation precision. Our methodology utilizes a top-down RGB image projected from an RGB-D camera and adopts a coarse-to-fine architecture. This architecture enables efficient learning of coarse and fine models. A distinguishing feature of our approach is its focus on continuous pose estimation, which enables more precise object manipulation, particularly concerning rotational angles. In addition, we employ pose and color augmentation techniques to enable effective training with limited data. Through extensive experiments in simulated and real-world scenarios, as well as an ablation study, we comprehensively evaluate our proposed methodology. Taken together, the findings validate its effectiveness in achieving high-precision pick-and-place tasks.


[337] 2409.09726

High Definition Map Mapping and Update: A General Overview and Future Directions

Along with the rapid growth of autonomous vehicles (AVs), more and more demands are placed on environment perception technology. Among these, HD mapping plays one of the more prominent roles in helping the vehicle realize essential tasks such as localization and path planning. While increasing research efforts have been directed toward HD map development, a comprehensive overview of the overall HD map mapping and update framework is still lacking. This article introduces the development and current state of the algorithms involved in creating HD maps and maintaining them. As part of this study, the primary data preprocessing approaches that turn raw data into information ready for mapping and update purposes, together with semantic segmentation and localization, are also briefly reviewed. Moreover, the map taxonomy, ontology, and quality assessment are extensively discussed, the general representation methods for map data are presented, and mapping algorithms ranging from SLAM to transformer-based learning approaches are also discussed. The development of HD map update algorithms, from change detection to update methods, is also presented. Finally, the authors discuss possible future developments and the remaining challenges in HD map mapping and update technology. This paper simultaneously serves as a position paper and tutorial for those new to the HD map mapping and update domains.


[338] 2409.09727

From Challenges and Pitfalls to Recommendations and Opportunities: Implementing Federated Learning in Healthcare

Federated learning holds great potential for enabling large-scale healthcare research and collaboration across multiple centres while ensuring data privacy and security are not compromised. Although numerous recent studies suggest or utilize federated learning based methods in healthcare, it remains unclear which ones have potential clinical utility. This review paper considers and analyzes the most recent studies up to May 2024 that describe federated learning based methods in healthcare. After a thorough review, we find that the vast majority are not appropriate for clinical use due to their methodological flaws and/or underlying biases which include but are not limited to privacy concerns, generalization issues, and communication costs. As a result, the effectiveness of federated learning in healthcare is significantly compromised. To overcome these challenges, we provide recommendations and promising opportunities that might be implemented to resolve these problems and improve the quality of model development in federated learning with healthcare.


[339] 2409.09732

Ten Years of Research Advances in Full-Duplex Massive MIMO

We present an overview of ongoing research endeavors focused on in-band full-duplex (IBFD) massive multiple-input multiple-output (MIMO) systems and their applications. In response to the unprecedented demands for mobile traffic in concurrent and upcoming wireless networks, a paradigm shift from conventional cellular networks to distributed communication systems becomes imperative. Cell-free massive MIMO (CF-mMIMO) emerges as a practical and scalable implementation of distributed/network MIMO systems, serving as a crucial physical layer technology for the advancement of next-generation wireless networks. This architecture inherits benefits from co-located massive MIMO and distributed systems and provides the flexibility for integration with the IBFD technology. We delineate the evolutionary trajectory of cellular networks, transitioning from conventional half-duplex multi-user MIMO networks to IBFD CF-mMIMO. The discussion extends further to the emerging paradigm of network-assisted IBFD CF-mMIMO (NAFD CF-mMIMO), serving as an energy-efficient prototype for asymmetric uplink and downlink communication services. This novel approach finds applications in dual-functionality scenarios, including simultaneous wireless power and information transmission, wireless surveillance, and integrated sensing and communications. We highlight various current use case applications, discuss open challenges, and outline future research directions aimed at fully realizing the potential of NAFD CF-mMIMO systems to meet the evolving demands of future wireless networks.


[340] 2409.09734

Complexity and algorithms for Swap median and relation to other consensus problems

Genome rearrangements are events in which large blocks of DNA exchange pieces during evolution. The analysis of such events is a tool for understanding evolutionary genomics, based on finding the minimum number of rearrangements to transform one genome into another. In a general scenario, more than two genomes are considered, and we face new challenges. The {\sc Median} problem consists in finding, given three permutations and a distance metric, a permutation $s$ that minimizes the sum of the distances between $s$ and each input. We study the {\sc Median} problem over \emph{swap} distances in permutations, for which the computational complexity has been open for almost 20 years (Eriksen, \emph{Theor. Comput. Sci.}, 2007). We approach this problem through several branches. We associate median solutions with interval convex sets, where the concept of graph convexity inspires the following investigation: does a median permutation belong to every shortest path between one of the pairs of input permutations? We are able to partially answer this question, and as a by-product we solve a long-open problem by proving that the {\sc Swap Median} problem is NP-hard. Furthermore, using a similar approach, we show that the {\sc Closest} problem, which seeks to minimize the maximum distance between the solution and the input permutations, is NP-hard even with three input permutations. This gives a sharp P vs. NP-hard dichotomy, since with two input permutations the problem is easily solvable, while with an arbitrary number of input permutations it has been known to be NP-hard since 2007 (Popov, \emph{Theor. Comput. Sci.}, 2007). In addition, we show that {\sc Swap Median} and {\sc Swap Closest} are APX-hard problems.
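
As a worked aside, the swap (Cayley) distance underlying these problems can be computed from cycle structure: the minimum number of transpositions turning one permutation into another is n minus the number of cycles of their composition.

def swap_distance(p, q):
    """p, q: permutations of 0..n-1 as lists; returns min #swaps from p to q."""
    n = len(p)
    pos_in_q = {v: i for i, v in enumerate(q)}
    target = [pos_in_q[v] for v in p]       # relabel so q becomes the identity
    seen, cycles = [False] * n, 0
    for i in range(n):
        if not seen[i]:
            cycles += 1
            j = i
            while not seen[j]:              # walk one cycle to completion
                seen[j] = True
                j = target[j]
    return n - cycles

print(swap_distance([2, 0, 1, 3], [0, 1, 2, 3]))   # 2: a 3-cycle needs 2 swaps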


[341] 2409.09739

PersonaMark: Personalized LLM watermarking for model protection and user attribution

The rapid development of LLMs brings both convenience and potential threats. As customized and private LLMs are widely applied, model copyright protection has become important. Text watermarking is emerging as a promising solution to AI-generated text detection and model protection issues. However, current text watermarks have largely ignored the critical need to inject different watermarks for different users, which would help attribute a watermark to a specific individual. In this paper, we explore a personalized text watermarking scheme for LLM copyright protection and other scenarios, ensuring accountability and traceability in content generation. Specifically, we propose PersonaMark, a novel text watermarking method that utilizes sentence structure as the hidden medium for the watermark information and optimizes the sentence-level generation algorithm to minimize disruption to the model's natural generation process. By employing a personalized hashing function to inject unique watermark signals for different users, personalized watermarked text can be obtained. Since our approach operates at the sentence level instead of on token probabilities, text quality is well preserved. The injection process is time-efficient even for a large number of users, thanks to the designed multi-user hashing function. To the best of our knowledge, this is the first work to achieve personalized text watermarking. We conduct an extensive evaluation of four different LLMs in terms of perplexity, sentiment polarity, alignment, readability, etc. The results demonstrate that our method maintains performance with minimal perturbation to the model's behavior, allows for unbiased insertion of watermark information, and exhibits strong watermark recognition capabilities.
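
The abstract does not specify the hashing construction, so the following is a purely hypothetical sketch of the general idea: derive a user- and position-specific watermark bit from a keyed hash, and keep the candidate sentence whose (toy) structural feature encodes that bit. The function names and the word-count-parity feature are invented for illustration.

```python
import hashlib

def target_bit(user_key: str, position: int) -> int:
    """Hypothetical personalized hash: a user- and position-specific
    watermark bit derived from a keyed SHA-256 digest."""
    digest = hashlib.sha256(f"{user_key}:{position}".encode()).digest()
    return digest[0] & 1

def structure_bit(sentence: str) -> int:
    """Toy stand-in for a sentence-structure feature; a real system would
    hash parse-tree properties rather than word-count parity."""
    return len(sentence.split()) & 1

def select_watermarked(user_key: str, position: int, candidates: list[str]) -> str:
    """Keep the first candidate whose structural bit matches the user's
    target bit, so detection can later recompute and compare the bits."""
    want = target_bit(user_key, position)
    for cand in candidates:
        if structure_bit(cand) == want:
            return cand
    return candidates[0]  # fall back when no candidate encodes the bit

print(select_watermarked("alice-key", 0,
                         ["The cat sat on the mat.", "A small cat sat there."]))
```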


[342] 2409.09740

VGG-Tex: A Vivid Geometry-Guided Facial Texture Estimation Model for High Fidelity Monocular 3D Face Reconstruction

3D face reconstruction from monocular images has promoted the development of various applications such as augmented reality. Though existing methods have made remarkable progress, most of them emphasize geometric reconstruction, while overlooking the importance of texture prediction. To address this issue, we propose VGG-Tex, a novel Vivid Geometry-Guided Facial Texture Estimation model designed for High Fidelity Monocular 3D Face Reconstruction. The core of this approach is leveraging 3D parametric priors to enhance the outcomes of 2D UV texture estimation. Specifically, VGG-Tex includes a Facial Attributes Encoding Module, a Geometry-Guided Texture Generator, and a Visibility-Enhanced Texture Completion Module. These components are responsible for extracting parametric priors, generating initial textures, and refining texture details, respectively. Based on the geometry-texture complementarity principle, VGG-Tex also introduces a Texture-guided Geometry Refinement Module to further balance the overall fidelity of the reconstructed 3D faces, along with corresponding losses. Comprehensive experiments demonstrate that our method significantly improves texture reconstruction performance compared to existing state-of-the-art methods.


[343] 2409.09741

Benchmarking LLMs in Political Content Text-Annotation: Proof-of-Concept with Toxicity and Incivility Data

This article benchmarked the ability of OpenAI's GPTs and a number of open-source LLMs to perform annotation tasks on political content. We used a novel protest event dataset comprising more than three million digital interactions and created a gold standard that includes ground-truth labels annotated by human coders about toxicity and incivility on social media. We included in our benchmark Google's Perspective algorithm, which, along with the GPTs, was employed through their respective APIs, while the open-source LLMs were deployed locally. The findings show that Perspective API using a laxer threshold, GPT-4o, and Nous Hermes 2 Mixtral outperform the other LLMs' zero-shot classification annotations. In addition, Nous Hermes 2 and Mistral OpenOrca, with a smaller number of parameters, are able to perform the task with high performance, making them attractive options that could offer good trade-offs between performance, implementation costs, and computing time. Ancillary findings from experiments setting different temperature levels show that although GPTs tend to show not only excellent computing time but also overall good levels of reliability, only open-source LLMs ensure full reproducibility in the annotation.


[344] 2409.09742

OML-AD: Online Machine Learning for Anomaly Detection in Time Series Data

Time series are ubiquitous and occur naturally in a variety of applications -- from data recorded by sensors in manufacturing processes to financial data streams and climate data. Different tasks arise, such as regression, classification, or segmentation of the time series. However, to reliably solve these challenges, it is important to filter out abnormal observations that deviate from the usual behavior of the time series. While many anomaly detection methods exist for independent data and stationary time series, these methods are not applicable to non-stationary time series. To allow for non-stationarity in the data while simultaneously detecting anomalies, we propose OML-AD, a novel approach for anomaly detection (AD) based on online machine learning (OML). We provide an implementation of OML-AD within the Python library River and show that it outperforms state-of-the-art baseline methods in terms of accuracy and computational efficiency.
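
OML-AD's own interface is not shown in the abstract; the sketch below only illustrates the standard River idiom for online anomaly detection that such an implementation would plug into, here with the library's HalfSpaceTrees detector, a synthetic signal, and an arbitrary alerting threshold.

```python
import math
from river import anomaly

detector = anomaly.HalfSpaceTrees(n_trees=10, height=8, window_size=50, seed=42)

for t in range(1000):
    # HalfSpaceTrees expects features in [0, 1], so generate them that way.
    value = 0.5 + 0.3 * math.sin(t / 20)   # ordinary seasonal behaviour
    if t == 700:
        value = 0.02                       # injected anomaly
    x = {"value": value}
    score = detector.score_one(x)          # score first (prequential), then learn
    detector.learn_one(x)
    if t > 100 and score > 0.9:            # 0.9 is an arbitrary demo threshold
        print(f"t={t}: anomaly score {score:.2f}")
```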


[345] 2409.09744

Taming the Ransomware Threats: Leveraging Prospect Theory for Rational Payment Decisions

The frequency of ransomware attacks on organizations is surging. High-profile incidents involving major entities like the Las Vegas giants MGM Resorts, Caesars Entertainment, and Boeing underscore the profound impact, posing substantial barriers to business. When a sudden cyberattack occurs, organizations often find themselves at a loss, with a looming countdown to pay the ransom, leading to a cascade of impromptu and unfavourable decisions. This paper adopts a novel approach, leveraging Prospect Theory, to elucidate the tactics employed by cyber attackers to entice organizations into paying the ransom. Furthermore, it introduces an algorithm, Ransomware Risk Analysis and Decision Support (RADS), based on Prospect Theory and an attack recovery plan, enabling organizations to make informed decisions on whether to consent to the ransom demands or to resist. RADS uses Prospect Theory to re-instantiate the reference point that attackers shift to manipulate perceived gains, and adjusts for the framing effect created by time urgency. Additionally, by leveraging application criticality and incorporating Prospect Theory's insights into the under- and over-weighting of probabilities, RADS facilitates informed decision-making that transcends a simplistic "consent" or "resist" framing, enabling organizations to reach optimal decisions.
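
For readers unfamiliar with Prospect Theory, the sketch below implements the classic Tversky-Kahneman (1992) value and probability-weighting functions, not RADS itself; the ransom figures and probabilities are invented, and the weighting is applied in a simple, non-cumulative form.

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Tversky-Kahneman value function: concave for gains, convex and
    steeper (loss aversion, lambda > 1) for losses, measured relative
    to a reference point of 0."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

def weight(p, gamma=0.61):
    """Inverse-S probability weighting: small probabilities are
    over-weighted, large probabilities under-weighted."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def prospect_utility(outcomes):
    """Subjective utility of a list of (probability, payoff) pairs."""
    return sum(weight(p) * value(x) for p, x in outcomes)

# An attacker frames paying a $1M ransom as avoiding a likely $5M loss;
# shifting the reference point changes which option 'feels' better.
pay    = [(1.0, -1_000_000)]
resist = [(0.6, -5_000_000), (0.4, 0)]
print(prospect_utility(pay), prospect_utility(resist))
```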


[346] 2409.09745

The Optimality of (Accelerated) SGD for High-Dimensional Quadratic Optimization

Stochastic gradient descent (SGD) is a widely used algorithm in machine learning, particularly for neural network training. Recent studies on SGD for canonical quadratic optimization or linear regression show that it attains good generalization under suitable high-dimensional settings. However, a fundamental question -- for what kinds of high-dimensional learning problems SGD and its accelerated variants can achieve optimality -- has yet to be well studied. This paper investigates SGD with two essential components used in practice: an exponentially decaying step size schedule and momentum. We establish a convergence upper bound for momentum-accelerated SGD (ASGD) and propose concrete classes of learning problems under which SGD or ASGD achieves minimax-optimal convergence rates. The characterization of the target function is based on standard power-law decays in (functional) linear regression. Our results unveil new insights for understanding the learning bias of SGD: (i) SGD is efficient in learning ``dense'' features where the corresponding weights are subject to an infinity-norm constraint; (ii) SGD is efficient for easy problems without suffering from the saturation effect; and (iii) momentum can accelerate the convergence rate by an order when the learning problem is relatively hard. To our knowledge, this is the first work to clearly identify the optimal boundary between SGD and ASGD for this problem under mild settings.
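
As a concrete (and much simplified) instance of the two components studied, here is a NumPy sketch of single-sample SGD with heavy-ball momentum and an exponentially decaying step size on a least-squares problem with power-law features; all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 1000
X = rng.normal(size=(n, d)) * (np.arange(1, d + 1) ** -0.5)  # power-law features
w_star = rng.normal(size=d)
y = X @ w_star + 0.1 * rng.normal(size=n)

def asgd(eta0=0.1, decay=0.999, momentum=0.9, steps=5000):
    """Momentum SGD with an exponentially decaying step size on least
    squares: a toy version of the two components the paper analyzes."""
    w = np.zeros(d)
    v = np.zeros(d)
    eta = eta0
    for _ in range(steps):
        i = rng.integers(n)
        grad = (X[i] @ w - y[i]) * X[i]   # single-sample gradient
        v = momentum * v + grad           # heavy-ball momentum
        w -= eta * v
        eta *= decay                      # exponential step-size decay
    return w

w = asgd()
print("excess risk:", np.mean((X @ w - X @ w_star) ** 2))
```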


[347] 2409.09748

Explore the Hallucination on Low-level Perception for MLLMs

The rapid development of Multi-modality Large Language Models (MLLMs) has significantly influenced various aspects of industry and daily life, showcasing impressive capabilities in visual perception and understanding. However, these models also exhibit hallucinations, which limit their reliability as AI systems, especially in tasks involving low-level visual perception and understanding. We believe that hallucinations stem from a lack of explicit self-awareness in these models, which directly impacts their overall performance. In this paper, we aim to define and evaluate the self-awareness of MLLMs in low-level visual perception and understanding tasks. To this end, we present QL-Bench, a benchmark setting that simulates human responses to low-level vision, investigating self-awareness in low-level visual perception through visual question answering related to low-level attributes such as clarity and lighting. Specifically, we construct the LLSAVisionQA dataset, comprising 2,990 single images and 1,999 image pairs, each accompanied by an open-ended question about its low-level features. Through the evaluation of 15 MLLMs, we demonstrate that while some models exhibit robust low-level visual capabilities, their self-awareness remains relatively underdeveloped. Notably, for the same model, simpler questions are often answered more accurately than complex ones. However, self-awareness appears to improve when addressing more challenging questions. We hope that our benchmark will motivate further research, particularly focused on enhancing the self-awareness of MLLMs in tasks involving low-level visual perception and understanding.


[348] 2409.09753

DARDA: Domain-Aware Real-Time Dynamic Neural Network Adaptation

Test Time Adaptation (TTA) has emerged as a practical solution to mitigate the performance degradation of Deep Neural Networks (DNNs) in the presence of corruption/noise affecting inputs. Existing approaches in TTA continuously adapt the DNN, leading to excessive resource consumption and performance degradation due to the accumulation of error stemming from the lack of supervision. In this work, we propose Domain-Aware Real-Time Dynamic Adaptation (DARDA) to address such issues. Our key approach is to proactively learn latent representations of some corruption types, each one associated with a sub-network state tailored to correctly classify inputs affected by that corruption. After deployment, DARDA adapts the DNN to previously unseen corruptions in an unsupervised fashion by (i) estimating the latent representation of the ongoing corruption; (ii) selecting the sub-network whose associated corruption is the closest in the latent space to the ongoing corruption; and (iii) adapting the DNN state so that its representation matches the ongoing corruption. This way, DARDA is more resource efficient and can swiftly adapt to new distributions caused by different corruptions without requiring a large variety of input data. Through experiments with two popular mobile edge devices - Raspberry Pi and NVIDIA Jetson Nano - we show that DARDA reduces energy consumption and average cache memory footprint by 1.74x and 2.64x, respectively, with respect to the state of the art, while increasing performance by 10.4%, 5.7%, and 4.4% on CIFAR-10, CIFAR-100, and TinyImagenet.
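
Step (ii), selecting the sub-network whose corruption is closest in latent space, reduces to a nearest-centroid lookup. The following schematic sketch assumes embeddings are already available; the class and variable names are invented for illustration.

```python
import numpy as np

def l2_normalize(z):
    return z / (np.linalg.norm(z) + 1e-8)

class CorruptionBank:
    """Schematic version of the selection step: each known corruption
    type stores a latent centroid and an associated sub-network state
    (here just a label)."""
    def __init__(self):
        self.centroids, self.states = [], []

    def register(self, centroid, state):
        self.centroids.append(l2_normalize(centroid))
        self.states.append(state)

    def select(self, batch_embedding):
        """Return the sub-network state whose corruption centroid has
        maximum cosine similarity to the current input embedding."""
        z = l2_normalize(batch_embedding)
        sims = [float(z @ c) for c in self.centroids]
        return self.states[int(np.argmax(sims))]

rng = np.random.default_rng(1)
bank = CorruptionBank()
for name in ["gaussian_noise", "motion_blur", "fog"]:
    bank.register(rng.normal(size=64), name)
print(bank.select(rng.normal(size=64)))  # e.g. 'fog'
```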


[349] 2409.09754

Towards Single-Lens Controllable Depth-of-Field Imaging via All-in-Focus Aberration Correction and Monocular Depth Estimation

Controllable Depth-of-Field (DoF) imaging commonly produces amazing visual effects based on heavy and expensive high-end lenses. However, confronted with the increasing demand for mobile scenarios, it is desirable to achieve a lightweight solution with Minimalist Optical Systems (MOS). This work centers around two major limitations of MOS, i.e., the severe optical aberrations and uncontrollable DoF, for achieving single-lens controllable DoF imaging via computational methods. A Depth-aware Controllable DoF Imaging (DCDI) framework is proposed equipped with All-in-Focus (AiF) aberration correction and monocular depth estimation, where the recovered image and corresponding depth map are utilized to produce imaging results under diverse DoFs of any high-end lens via patch-wise convolution. To address the depth-varying optical degradation, we introduce a Depth-aware Degradation-adaptive Training (DA2T) scheme. At the dataset level, a Depth-aware Aberration MOS (DAMOS) dataset is established based on the simulation of Point Spread Functions (PSFs) under different object distances. Additionally, we design two plug-and-play depth-aware mechanisms to embed depth information into the aberration image recovery for better tackling depth-aware degradation. Furthermore, we propose a storage-efficient Omni-Lens-Field model to represent the 4D PSF library of various lenses. With the predicted depth map, recovered image, and depth-aware PSF map inferred by Omni-Lens-Field, single-lens controllable DoF imaging is achieved. Comprehensive experimental results demonstrate that the proposed framework enhances the recovery performance, and attains impressive single-lens controllable DoF imaging results, providing a seminal baseline for this field. The source code and the established dataset will be publicly available at https://github.com/XiaolongQian/DCDI.
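
The full pipeline relies on measured, depth-indexed PSFs; as a toy stand-in, the sketch below renders depth-of-field by blurring an all-in-focus image with a Gaussian whose width grows with defocus and compositing per depth bin.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def render_dof(aif, depth, focus_depth, max_sigma=5.0, n_bins=8):
    """Toy depth-of-field rendering: blur the all-in-focus (AiF) image
    with a Gaussian whose width grows with |depth - focus_depth|, then
    composite per pixel by depth bin. Real pipelines convolve patches
    with measured, depth-indexed lens PSFs instead of Gaussians."""
    defocus = np.abs(depth - focus_depth)
    sigma = max_sigma * defocus / (defocus.max() + 1e-8)
    edges = np.linspace(0.0, sigma.max() + 1e-8, n_bins + 1)
    out = np.zeros_like(aif)
    for k in range(n_bins):
        mask = (sigma >= edges[k]) & (sigma < edges[k + 1])
        if not mask.any():
            continue
        s = 0.5 * (edges[k] + edges[k + 1])   # representative blur level
        blurred = gaussian_filter(aif, sigma=s) if s > 0 else aif
        out[mask] = blurred[mask]
    return out

aif = np.random.rand(128, 128)
depth = np.tile(np.linspace(1.0, 10.0, 128), (128, 1))  # depth ramp
img = render_dof(aif, depth, focus_depth=3.0)
```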


[350] 2409.09755

Analysis of Centrifugal Clutches in Two-Speed Automatic Transmissions with Deep Learning-Based Engagement Prediction

This paper presents a comprehensive numerical analysis of centrifugal clutch systems integrated with a two-speed automatic transmission, a key component in automotive torque transfer. Centrifugal clutches enable torque transmission based on rotational speed without external controls. The study systematically examines the effects of various clutch configurations on transmission dynamics, focusing on torque transfer, upshifting, and downshifting behaviors under different conditions. A Deep Neural Network (DNN) model predicts clutch engagement using parameters such as spring preload and shoe mass, offering an efficient alternative to complex simulations. The integration of deep learning and numerical modeling provides critical insights for optimizing clutch designs, enhancing transmission performance and efficiency.


[351] 2409.09756

MesonGS: Post-training Compression of 3D Gaussians via Efficient Attribute Transformation

3D Gaussian Splatting demonstrates excellent quality and speed in novel view synthesis. Nevertheless, the huge file size of the 3D Gaussians presents challenges for transmission and storage. Current works design compact models to replace the substantial volume and attributes of 3D Gaussians, along with intensive training to distill information. These endeavors demand considerable training time, presenting formidable hurdles for practical deployment. To this end, we propose MesonGS, a codec for post-training compression of 3D Gaussians. Initially, we introduce a measurement criterion that considers both view-dependent and view-independent factors to assess the impact of each Gaussian point on the rendering output, enabling the removal of insignificant points. Subsequently, we decrease the entropy of attributes through two transformations that complement subsequent entropy coding techniques to enhance the file compression rate. More specifically, we first replace rotation quaternions with Euler angles; then, we apply region adaptive hierarchical transform to key attributes to reduce entropy. Lastly, we adopt finer-grained quantization to avoid excessive information loss. Moreover, a well-crafted finetune scheme is devised to restore quality. Extensive experiments demonstrate that MesonGS significantly reduces the size of 3D Gaussians while preserving competitive quality.
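
The first entropy-reducing transform, replacing quaternions with Euler angles, is a standard conversion; a sketch follows, assuming the ZYX convention (the paper's convention may differ).

```python
import numpy as np

def quat_to_euler(q):
    """Convert a unit quaternion (w, x, y, z) to ZYX Euler angles
    (roll, pitch, yaw). Storing three angles instead of four quaternion
    components is one reason this transform can lower attribute entropy."""
    w, x, y, z = q / np.linalg.norm(q)
    roll = np.arctan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    pitch = np.arcsin(np.clip(2 * (w * y - z * x), -1.0, 1.0))
    yaw = np.arctan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return np.array([roll, pitch, yaw])

print(quat_to_euler(np.array([0.92388, 0.0, 0.38268, 0.0])))  # ~45 deg pitch
```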


[352] 2409.09760

ELMI: Interactive and Intelligent Sign Language Translation of Lyrics for Song Signing

d/Deaf and hearing song-signers have become prevalent on video-sharing platforms, but translating songs into sign language remains cumbersome and inaccessible. Our formative study revealed the challenges song-signers face, including semantic, syntactic, expressive, and rhythmic considerations in translations. We present ELMI, an accessible song-signing tool that assists in translating lyrics into sign language. ELMI enables users to edit glosses line-by-line, with real-time synced lyric highlighting and music video snippets. Users can also chat with a large language model-driven AI to discuss meaning, glossing, emoting, and timing. Through an exploratory study with 13 song-signers, we examined how ELMI facilitates their workflows and how song-signers leverage and receive an LLM-driven chat for translation. Participants successfully adopted ELMI for song-signing, with active discussions on the fly. They also reported improved confidence and independence in their translations, finding ELMI encouraging, constructive, and informative. We discuss design implications for leveraging LLMs in culturally sensitive song-signing translations.


[353] 2409.09763

Range-SLAM: Ultra-Wideband-Based Smoke-Resistant Real-Time Localization and Mapping

This paper presents Range-SLAM, a real-time, lightweight SLAM system designed to address the challenges of localization and mapping in environments with smoke and other harsh conditions using Ultra-Wideband (UWB) signals. While optical sensors like LiDAR and cameras struggle in low-visibility environments, UWB signals provide a robust alternative for real-time positioning. The proposed system uses general-purpose UWB devices to achieve accurate mapping and localization without relying on expensive LiDAR or other dedicated hardware. By utilizing only the distance and Received Signal Strength Indicator (RSSI) provided by UWB sensors in relation to anchors, we combine the motion of the tag-carrying agent with a raycasting algorithm to construct a 2D occupancy grid map in real time. To enhance localization in challenging conditions, a Weighted Least Squares (WLS) method is employed. Extensive real-world experiments, including smoke-filled environments and simulated
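
The WLS step admits a standard linearized form: subtracting one anchor's range equation removes the quadratic term, leaving a weighted linear system. The sketch below is the textbook version, with uniform weights standing in for RSSI-derived confidences, not necessarily the paper's exact formulation.

```python
import numpy as np

def wls_position(anchors, ranges, weights):
    """Linearized weighted least squares for multilateration: subtracting
    the first anchor's range equation removes ||p||^2, leaving A p = b,
    solved with per-measurement weights (e.g. RSSI-derived)."""
    a0, r0 = anchors[0], ranges[0]
    A = 2 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    W = np.sqrt(weights[1:])
    sol, *_ = np.linalg.lstsq(A * W[:, None], b * W, rcond=None)
    return sol

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
p_true = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - p_true, axis=1) + 0.05 * np.random.randn(4)
weights = np.ones(4)  # stand-in for RSSI-based confidence
print(wls_position(anchors, ranges, weights))  # ~[3, 4]
```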


[354] 2409.09766

Automated Lesion Segmentation in Whole-Body PET/CT in a multitracer setting

This study explores a workflow for the automated segmentation of lesions in FDG and PSMA PET/CT images. Due to the substantial differences in image characteristics between FDG and PSMA, specialized preprocessing steps are required. Utilizing YOLOv8 for data classification, the FDG and PSMA images are preprocessed separately before being fed into the segmentation models, aiming to improve lesion segmentation accuracy. The study focuses on evaluating the performance of the automated segmentation workflow for multitracer PET images. The findings are expected to provide critical insights for enhancing diagnostic workflows and patient-specific treatment plans. Our code will be open-sourced and available at https://github.com/jiayiliu-pku/AP2024.


[355] 2409.09769

Risk-Aware Autonomous Driving for Linear Temporal Logic Specifications

Decision-making for autonomous driving that incorporates different types of risks is a challenging topic. This paper proposes a novel risk metric to facilitate driving tasks specified by linear temporal logic (LTL) by balancing the risks posed by different uncertain events. Such a balance is achieved by discounting the costs of these uncertain events according to their timing and severity, thereby reflecting a human-like awareness of risk. We establish a connection between this risk metric and the occupation measure, a fundamental concept in stochastic reachability problems, so that a risk-aware control synthesis problem under LTL specifications can be formulated for autonomous vehicles using occupation measures. As a result, the synthesized policy achieves balanced decisions across different types of risks with associated costs, showcasing advantageous versatility and generalizability. The effectiveness and scalability of the proposed approach are validated in three typical traffic scenarios in the CARLA simulator.


[356] 2409.09770

Towards Multi-view Graph Anomaly Detection with Similarity-Guided Contrastive Clustering

Anomaly detection on graphs plays an important role in many real-world applications. Usually, these data are composed of multiple types (e.g., user information and transaction records for financial data), thus exhibiting view heterogeneity. Therefore, it can be challenging to leverage such multi-view information and learn the graph's contextual information to identify rare anomalies. To tackle this problem, many deep learning-based methods utilize a contrastive learning loss as a regularization term to learn good representations. However, many existing contrastive-based methods show that traditional contrastive learning losses fail to consider semantic information (e.g., class membership information). In addition, we theoretically show that clustering-based contrastive learning also easily leads to a sub-optimal solution. To address these issues, in this paper we propose an autoencoder-based clustering framework regularized by a similarity-guided contrastive loss to detect anomalous nodes. Specifically, we build a similarity map to help the model learn robust representations without imposing a hard margin constraint between the positive and negative pairs. Theoretically, we show that the proposed similarity-guided loss is a variant of the contrastive learning loss, and we show how it alleviates the issue of unreliable pseudo-labels through a connection to graph spectral clustering. Experimental results on several datasets demonstrate the effectiveness and efficiency of our proposed framework.


[357] 2409.09774

Generalizing Alignment Paradigm of Text-to-Image Generation with Preferences through $f$-divergence Minimization

Direct Preference Optimization (DPO) has recently expanded its successful application from aligning large language models (LLMs) to aligning text-to-image models with human preferences, which has generated considerable interest within the community. However, we observe that these approaches rely solely on minimizing the reverse Kullback-Leibler divergence between the fine-tuned model and the reference model during the alignment process, neglecting other divergence constraints. In this study, we focus on extending the reverse Kullback-Leibler divergence in the alignment paradigm of text-to-image models to general $f$-divergences, aiming for better alignment performance as well as good generation diversity. We provide the generalized formula of the alignment paradigm under the $f$-divergence condition and thoroughly analyze the impact of different divergence constraints on the alignment process from the perspective of gradient fields. We conduct a comprehensive evaluation of image-text alignment, human value alignment, and generation diversity under different divergence constraints, and the results indicate that alignment based on the Jensen-Shannon divergence achieves the best trade-off among them. The choice of divergence employed for aligning text-to-image models significantly impacts the trade-off between alignment performance (especially human value alignment) and generation diversity, which highlights the necessity of selecting an appropriate divergence for practical applications.
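
For reference, the $f$-divergence family behind this generalization is $D_f(P\|Q) = \sum_x q(x)\, f(p(x)/q(x))$ for a convex $f$ with $f(1)=0$; the sketch below evaluates it on discrete distributions for the standard generators of forward KL, reverse KL, and (twice the) Jensen-Shannon divergence.

```python
import numpy as np

def f_divergence(p, q, f):
    """D_f(P||Q) = sum_x q(x) f(p(x)/q(x)) for discrete distributions
    with full support."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    t = p / q
    return float(np.sum(q * f(t)))

# Standard generators (all satisfy f(1) = 0):
kl     = lambda t: t * np.log(t)                                   # forward KL(P||Q)
rev_kl = lambda t: -np.log(t)                                      # reverse KL(Q||P)
js2    = lambda t: t * np.log(t) - (t + 1) * np.log((t + 1) / 2)   # equals 2 * JSD

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.4, 0.4, 0.2])
for name, f in [("KL", kl), ("reverse KL", rev_kl), ("2*JS", js2)]:
    print(name, round(f_divergence(p, q, f), 4))
```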


[358] 2409.09777

DiFSD: Ego-Centric Fully Sparse Paradigm with Uncertainty Denoising and Iterative Refinement for Efficient End-to-End Autonomous Driving

Current end-to-end autonomous driving methods resort to unifying modular designs for various tasks (e.g., perception, prediction, and planning). Although optimized in a planning-oriented spirit with a fully differentiable framework, existing end-to-end driving systems without ego-centric designs still suffer from unsatisfactory performance and inferior efficiency, owing to rasterized scene-representation learning and redundant information transmission. In this paper, we revisit human driving behavior and propose an ego-centric fully sparse paradigm, named DiFSD, for end-to-end self-driving. Specifically, DiFSD mainly consists of sparse perception, hierarchical interaction, and an iterative motion planner. The sparse perception module performs detection, tracking, and online mapping based on a sparse representation of the driving scene. The hierarchical interaction module aims to select the Closest In-Path Vehicle / Stationary (CIPV / CIPS) from coarse to fine, benefiting from an additional geometric prior. As for the iterative motion planner, both the selected interactive agents and the ego-vehicle are considered for joint motion prediction, where the output multi-modal ego-trajectories are optimized in an iterative fashion. Besides, both position-level motion diffusion and trajectory-level planning denoising are introduced for uncertainty modeling, thus facilitating the training stability and convergence of the whole framework. Extensive experiments conducted on the nuScenes dataset demonstrate the superior planning performance and great efficiency of DiFSD, which significantly reduces the average L2 error by \textbf{66\%} and the collision rate by \textbf{77\%} compared with UniAD, while running \textbf{8.2$\times$} faster.


[359] 2409.09778

Rewind-to-Delete: Certified Machine Unlearning for Nonconvex Functions

Machine unlearning algorithms aim to efficiently remove data from a model without retraining it from scratch, in order to enforce data privacy, remove corrupted or outdated data, or respect a user's ``right to be forgotten." Certified machine unlearning is a strong theoretical guarantee that quantifies the extent to which data is erased from the model weights. Most prior works in certified unlearning focus on models trained on convex or strongly convex loss functions, which benefit from convenient convergence guarantees and the existence of global minima. For nonconvex objectives, existing algorithms rely on limiting assumptions and expensive computations that hinder practical implementations. In this work, we propose a simple first-order algorithm for unlearning on general nonconvex loss functions which unlearns by ``rewinding" to an earlier step during the learning process and then performs gradient descent on the loss function of the retained data points. Our algorithm is black-box, in that it can be directly applied to models pretrained with vanilla gradient descent with no prior consideration of unlearning. We prove $(\epsilon, \delta)$ certified unlearning and performance guarantees that establish the privacy-utility-complexity tradeoff of our algorithm, with special consideration for nonconvex functions that satisfy the Polyak-Lojasiewicz inequality.
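
A schematic of the rewind idea, heavily simplified: checkpoint the weights mid-training, and to unlearn, restore the checkpoint and run gradient descent on the retained data only. The certified $(\epsilon, \delta)$ guarantee involves analysis (and noise calibration) not shown here.

```python
import numpy as np

def grad(w, X, y):
    return X.T @ (X @ w - y) / len(y)       # least-squares gradient

def train(X, y, steps=200, lr=0.1, checkpoint_at=100):
    w, ckpt = np.zeros(X.shape[1]), None
    for t in range(steps):
        if t == checkpoint_at:
            ckpt = w.copy()                  # saved 'rewind' point
        w -= lr * grad(w, X, y)
    return w, ckpt

def unlearn(ckpt, X_retain, y_retain, steps=100, lr=0.1):
    """Rewind to the checkpoint, then descend on the retained data only;
    the deleted points no longer influence the post-rewind trajectory."""
    w = ckpt.copy()
    for _ in range(steps):
        w -= lr * grad(w, X_retain, y_retain)
    return w

rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 5)), rng.normal(size=100)
w_full, ckpt = train(X, y)
w_unlearned = unlearn(ckpt, X[:90], y[:90])  # forget the last 10 points
```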


[360] 2409.09779

Underwater Image Enhancement via Dehazing and Color Restoration

With the rapid development of marine engineering projects such as marine resource extraction and oceanic surveys, underwater visual imaging and analysis has become a critical technology. Unfortunately, due to the inevitable non-linear attenuation of light in underwater environments, underwater images and videos often suffer from low contrast, blurriness, and color degradation, which significantly complicate the subsequent research. Existing underwater image enhancement methods often treat the haze and color cast as a unified degradation process and disregard their independence and interdependence, which limits the performance improvement. Here, we propose a Vision Transformer (ViT)-based network (referred to as WaterFormer) to improve the underwater image quality. WaterFormer contains three major components: a dehazing block (DehazeFormer Block) to capture the self-correlated haze features and extract deep-level features, a Color Restoration Block (CRB) to capture self-correlated color cast features, and a Channel Fusion Block (CFB) to capture fusion features within the network. To ensure authenticity, a soft reconstruction layer based on the underwater imaging physics model is included. To improve the quality of the enhanced images, we introduce the Chromatic Consistency Loss and Sobel Color Loss to train the network. Comprehensive experimental results demonstrate that WaterFormer outperforms other state-of-the-art methods in enhancing underwater images.


[361] 2409.09780

Power Allocation for Finite-Blocklength IR-HARQ

This letter concerns the power allocation across the multiple transmission rounds under the Incremental Redundancy Hybrid Automatic Repeat reQuest (IR-HARQ) policy, in pursuit of an energy-efficient way of fulfilling the outage probability target in the finite-blocklength regime. We start by showing that the optimization objective and the constraints of the above power allocation problem all depend upon the outage probability. The main challenge then lies in the fact that the outage probability cannot be written analytically in terms of the power variables. To sidestep this difficulty, we propose a novel upper bound on the outage probability in the finite-blocklength regime, which is much tighter than the existing ones from the literature. Most importantly, by using this upper bound to approximate the outage probability, we can recast the original intractable power allocation problem into a geometric programming (GP) form, which can be efficiently solved by standard methods.
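
To illustrate what solving a GP looks like in practice, here is a toy cvxpy sketch in disciplined geometric programming mode; the posynomial "outage surrogate" and all constants are invented stand-ins, not the paper's bound.

```python
import cvxpy as cp

# Powers for two HARQ rounds; minimize an energy-like objective.
p1 = cp.Variable(pos=True)
p2 = cp.Variable(pos=True)

# Toy posynomial surrogate for an outage-probability upper bound:
# outage decreases with each round's power; keep it below a target.
outage_surrogate = 0.05 * p1**-1 + 0.02 * (p1 * p2)**-1
constraints = [outage_surrogate <= 0.01, p1 <= 10, p2 <= 10]

problem = cp.Problem(cp.Minimize(p1 + 0.5 * p2), constraints)
problem.solve(gp=True)   # disciplined geometric programming mode
print(p1.value, p2.value, problem.value)
```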


[362] 2409.09783

Learning Rate Optimization for Deep Neural Networks Using Lipschitz Bandits

The learning rate is a crucial parameter in the training of neural networks. A properly tuned learning rate leads to faster training and higher test accuracy. In this paper, we propose a Lipschitz bandit-driven approach for tuning the learning rate of neural networks. The proposed approach is compared with the popular HyperOpt technique used extensively for hyperparameter optimization and with the recently developed bandit-based algorithm BLiE. The results for multiple neural network architectures indicate that our method finds a better learning rate using (a) fewer evaluations and (b) fewer epochs per evaluation, compared to both HyperOpt and BLiE. Thus, the proposed approach enables more efficient training of neural networks, leading to lower training time and lower computational cost.


[363] 2409.09784

Enhancing Lesion Segmentation in PET/CT Imaging with Deep Learning and Advanced Data Preprocessing Techniques

The escalating global cancer burden underscores the critical need for precise diagnostic tools in oncology. This research employs deep learning to enhance lesion segmentation in PET/CT imaging, utilizing a dataset of 900 whole-body FDG-PET/CT and 600 PSMA-PET/CT studies from the AutoPET challenge III. Our methodical approach includes robust preprocessing and data augmentation techniques to ensure model robustness and generalizability. We investigate the influence of non-zero normalization and modifications to the data augmentation pipeline, such as the introduction of RandGaussianSharpen and adjustments to the Gamma transform parameter. This study aims to contribute to the standardization of preprocessing and augmentation strategies in PET/CT imaging, potentially improving the diagnostic accuracy and the personalized management of cancer patients. Our code will be open-sourced and available at https://github.com/jiayiliu-pku/DC2024.
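
The named augmentations correspond to standard MONAI dictionary transforms; a sketch of such a pipeline follows, with all parameter values illustrative rather than the paper's tuned settings (RandAdjustContrast is MONAI's gamma transform).

```python
from monai.transforms import (
    Compose, NormalizeIntensityd, RandAdjustContrastd, RandGaussianSharpend,
)

# Values here are illustrative, not the study's tuned settings.
train_transforms = Compose([
    # 'Non-zero normalization': statistics over non-zero voxels only.
    NormalizeIntensityd(keys="image", nonzero=True, channel_wise=True),
    # Random sharpening, as with the RandGaussianSharpen augmentation.
    RandGaussianSharpend(keys="image", prob=0.2),
    # Gamma transform: raise intensities to a randomly sampled gamma.
    RandAdjustContrastd(keys="image", prob=0.3, gamma=(0.7, 1.5)),
])

# sample = train_transforms({"image": image_array})
```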


[364] 2409.09785

Large Language Model Based Generative Error Correction: A Challenge and Baselines for Speech Recognition, Speaker Tagging, and Emotion Recognition

Given recent advances in generative AI technology, a key question is how large language models (LLMs) can enhance acoustic modeling tasks using text decoding results from a frozen, pretrained automatic speech recognition (ASR) model. To explore new capabilities in language modeling for speech processing, we introduce the generative speech transcription error correction (GenSEC) challenge. This challenge comprises three post-ASR language modeling tasks: (i) post-ASR transcription correction, (ii) speaker tagging, and (iii) emotion recognition. These tasks aim to emulate future LLM-based agents handling voice-based interfaces while remaining accessible to a broad audience by utilizing open pretrained language models or agent-based APIs. We also discuss insights from baseline evaluations, as well as lessons learned for designing future evaluations.


[365] 2409.09787

BEnDEM: A Boltzmann Sampler Based on Bootstrapped Denoising Energy Matching

Developing an efficient sampler capable of generating independent and identically distributed (IID) samples from a Boltzmann distribution is a crucial challenge in scientific research, e.g., molecular dynamics. In this work, we aim to learn neural samplers given energy functions instead of data sampled from the Boltzmann distribution. By learning the energies of noised data, we propose a diffusion-based sampler, Energy-based Denoising Energy Matching (EnDEM), which theoretically has lower variance, at the cost of more complexity, compared to related works. Furthermore, a novel bootstrapping technique is applied to EnDEM to balance bias and variance, yielding BEnDEM. We evaluate EnDEM and BEnDEM on a two-dimensional 40-component Gaussian Mixture Model (GMM) and a 4-particle double-well potential (DW-4). The experimental results demonstrate that BEnDEM can achieve state-of-the-art performance while being more robust.


[366] 2409.09788

Reasoning Paths with Reference Objects Elicit Quantitative Spatial Reasoning in Large Vision-Language Models

Despite recent advances demonstrating vision-language models' (VLMs) abilities to describe complex relationships in images using natural language, their capability to quantitatively reason about object sizes and distances remains underexplored. In this work, we introduce a manually annotated benchmark, Q-Spatial Bench, with 271 questions across five categories designed for quantitative spatial reasoning, and systematically investigate the performance of state-of-the-art VLMs on this task. Our analysis reveals that reasoning about distances between objects is particularly challenging for SoTA VLMs; however, some VLMs significantly outperform others, with an over 40-point gap between the two best-performing models. We also make the surprising observation that the success rate of the top-performing VLM increases by 19 points when a reasoning path using a reference object emerges naturally in the response. Inspired by this observation, we develop a zero-shot prompting technique, SpatialPrompt, that encourages VLMs to answer quantitative spatial questions using reference objects as visual cues. By instructing VLMs to use reference objects in their reasoning paths via SpatialPrompt, Gemini 1.5 Pro, Gemini 1.5 Flash, and GPT-4V improve their success rates by over 40, 20, and 30 points, respectively. We emphasize that these significant improvements are obtained without needing more data, model architectural modifications, or fine-tuning.


[367] 2409.09790

Multiple Rotation Averaging with Constrained Reweighting Deep Matrix Factorization

Multiple rotation averaging plays a crucial role in the computer vision and robotics domains. Conventional optimization-based methods optimize a nonlinear cost function based on certain noise assumptions, while most previous learning-based methods require ground-truth labels in a supervised training process. Recognizing that a handcrafted noise assumption may not be reasonable in all real-world scenarios, this paper proposes an effective rotation averaging method that mines data patterns in a learning manner while avoiding the requirement of labels. Specifically, we apply deep matrix factorization to directly solve the multiple rotation averaging problem in unconstrained linear space. For the deep matrix factorization, we design a neural network model that is explicitly low-rank and symmetric to better suit the background of multiple rotation averaging. Meanwhile, we utilize spanning tree-based edge filtering to suppress the influence of rotation outliers. Moreover, we adopt a reweighting scheme and a dynamic depth selection strategy to further improve robustness. Our method synthesizes the merits of both optimization-based and learning-based methods. Experimental results on various datasets validate the effectiveness of our proposed method.


[368] 2409.09792

Enhancing Data Quality through Self-learning on Imbalanced Financial Risk Data

In the financial risk domain, particularly in credit default prediction and fraud detection, accurate identification of high-risk class instances is paramount, as their occurrence can have significant economic implications. Although machine learning models have gained widespread adoption for risk prediction, their performance is often hindered by the scarcity and diversity of high-quality data. This limitation stems from factors in datasets such as small risk sample sizes, high labeling costs, and severe class imbalance, which impede the models' ability to learn effectively and accurately forecast critical events. This study investigates data pre-processing techniques to enhance existing financial risk datasets by introducing TriEnhance, a straightforward technique that entails: (1) generating synthetic samples specifically tailored to the minority class, (2) filtering using binary feedback to refine samples, and (3) self-learning with pseudo-labels. Our experiments across six benchmark datasets reveal the efficacy of TriEnhance, with a notable focus on improving minority class calibration, a key factor for developing more robust financial risk prediction systems.
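
TriEnhance's filtering step is paper-specific, but steps (1) and (3) map onto familiar tools; the sketch below combines imblearn's SMOTE with a simple confidence-thresholded pseudo-labeling pass on synthetic data, purely for illustration.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.linear_model import LogisticRegression

def tri_enhance_sketch(X_lab, y_lab, X_unlab, conf=0.95):
    """Steps (1) and (3) only: oversample the minority class with SMOTE,
    then self-train by adopting high-confidence pseudo-labels. The
    binary-feedback filtering step (2) is paper-specific and omitted."""
    X_res, y_res = SMOTE(random_state=0).fit_resample(X_lab, y_lab)
    clf = LogisticRegression(max_iter=1000).fit(X_res, y_res)
    proba = clf.predict_proba(X_unlab)
    keep = proba.max(axis=1) >= conf             # confident predictions only
    X_aug = np.vstack([X_res, X_unlab[keep]])
    y_aug = np.concatenate([y_res, proba[keep].argmax(axis=1)])
    return LogisticRegression(max_iter=1000).fit(X_aug, y_aug)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
y = (rng.random(500) < 0.1).astype(int)          # imbalanced labels
model = tri_enhance_sketch(X[:300], y[:300], X[300:])
```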


[369] 2409.09794

Federated Learning in Adversarial Environments: Testbed Design and Poisoning Resilience in Cybersecurity

This paper presents the design and implementation of a Federated Learning (FL) testbed, focusing on its application in cybersecurity and evaluating its resilience against poisoning attacks. Federated Learning allows multiple clients to collaboratively train a global model while keeping their data decentralized, addressing critical needs for data privacy and security, particularly in sensitive fields like cybersecurity. Our testbed, built using the Flower framework, facilitates experimentation with various FL frameworks, assessing their performance, scalability, and ease of integration. Through a case study on federated intrusion detection systems, we demonstrate the testbed's capabilities in detecting anomalies and securing critical infrastructure without exposing sensitive network data. Comprehensive poisoning tests, targeting both model and data integrity, evaluate the system's robustness under adversarial conditions. Our results show that while federated learning enhances data privacy and distributed learning, it remains vulnerable to poisoning attacks, which must be mitigated to ensure its reliability in real-world applications.
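
A minimal Flower client, assuming the classic flwr 1.x NumPyClient API (newer releases favor a different entry point); the "model" and local update are toy stand-ins, and the connection call is left commented out.

```python
import numpy as np
import flwr as fl

class IntrusionClient(fl.client.NumPyClient):
    """Minimal client holding local (here: synthetic) traffic features;
    only model weights, never raw data, leave the client."""
    def __init__(self):
        self.w = np.zeros(10)                     # toy linear model

    def get_parameters(self, config):
        return [self.w]

    def fit(self, parameters, config):
        self.w = parameters[0] + 0.1              # stand-in local update
        return [self.w], 100, {}                  # weights, n_examples, metrics

    def evaluate(self, parameters, config):
        loss = float(np.sum(parameters[0] ** 2))  # stand-in local loss
        return loss, 100, {}

# fl.client.start_numpy_client(server_address="127.0.0.1:8080",
#                              client=IntrusionClient())
```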


[370] 2409.09795

CROSS-JEM: Accurate and Efficient Cross-encoders for Short-text Ranking Tasks

Ranking a set of items based on their relevance to a given query is a core problem in search and recommendation. Transformer-based ranking models are the state-of-the-art approaches for such tasks, but they score each query-item independently, ignoring the joint context of other relevant items. This leads to sub-optimal ranking accuracy and high computational costs. In response, we propose Cross-encoders with Joint Efficient Modeling (CROSS-JEM), a novel ranking approach that enables transformer-based models to jointly score multiple items for a query, maximizing parameter utilization. CROSS-JEM leverages (a) redundancies and token overlaps to jointly score multiple items, that are typically short-text phrases arising in search and recommendations, and (b) a novel training objective that models ranking probabilities. CROSS-JEM achieves state-of-the-art accuracy and over 4x lower ranking latency over standard cross-encoders. Our contributions are threefold: (i) we highlight the gap between the ranking application's need for scoring thousands of items per query and the limited capabilities of current cross-encoders; (ii) we introduce CROSS-JEM for joint efficient scoring of multiple items per query; and (iii) we demonstrate state-of-the-art accuracy on standard public datasets and a proprietary dataset. CROSS-JEM opens up new directions for designing tailored early-attention-based ranking models that incorporate strict production constraints such as item multiplicity and latency.


[371] 2409.09798

WASD - Water Saving Devise

In response to escalating global drinking water scarcity, we propose an innovative, automatic system for reusing clean sink water to flush toilets. Existing solutions for water recycling in houses involve purifiers and complex treatments, leading to high costs and constant maintenance. WASD, utilizing sensors and a solenoid valve, rapidly detects and separates clean water, directing it to the toilet tank while sending non-reusable water to the drain. This cost-effective and user-friendly approach aims to establish sustainable water practices in domestic settings, contributing to alleviating the shortage of drinking water.


[372] 2409.09804

Abnormal Event Detection In Videos Using Deep Embedding

Abnormal event detection, or anomaly detection, in surveillance videos is currently a challenge because of the diversity of possible events. Due to the lack of anomalous events at training time, anomaly detection requires the design of learning methods without supervision. In this work, we propose an unsupervised approach for video anomaly detection with the aim of jointly optimizing the objectives of the deep neural network and the anomaly detection task using a hybrid architecture. Initially, a convolutional autoencoder is pre-trained in an unsupervised manner with a fusion of depth, motion, and appearance features. In the second step, we utilize the encoder part of the pre-trained autoencoder and extract the embeddings of the fused input. We then jointly train/fine-tune the encoder to map the embeddings to a hypercenter. Thus, embeddings of normal data fall near the hypercenter, whereas embeddings of anomalous data fall far away from it.
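
The hypercenter objective resembles Deep SVDD's one-class loss; a minimal PyTorch sketch follows, with random tensors standing in for the fused depth/motion/appearance embeddings.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 32))

# Fix the hypercenter, e.g. the mean embedding of normal training data.
with torch.no_grad():
    init_batch = torch.randn(128, 256)            # stand-in fused features
    center = encoder(init_batch).mean(dim=0)

opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
for step in range(100):
    x = torch.randn(64, 256)                      # normal samples only
    z = encoder(x)
    loss = ((z - center) ** 2).sum(dim=1).mean()  # pull embeddings to center
    opt.zero_grad(); loss.backward(); opt.step()

# At test time, the squared distance to the center is the anomaly score.
score = ((encoder(torch.randn(1, 256)) - center) ** 2).sum()
```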


[373] 2409.09808

Famba-V: Fast Vision Mamba with Cross-Layer Token Fusion

Mamba and Vision Mamba (Vim) models have shown their potential as an alternative to methods based on the Transformer architecture. This work introduces Fast Mamba for Vision (Famba-V), a cross-layer token fusion technique to enhance the training efficiency of Vim models. The key idea of Famba-V is to identify and fuse similar tokens across different Vim layers based on a suite of cross-layer strategies, instead of simply applying token fusion uniformly across all the layers as existing works propose. We evaluate the performance of Famba-V on CIFAR-100. Our results show that Famba-V is able to enhance the training efficiency of Vim models by reducing both training time and peak memory usage during training. Moreover, the proposed cross-layer strategies allow Famba-V to deliver superior accuracy-efficiency trade-offs. Altogether, these results demonstrate Famba-V as a promising efficiency enhancement technique for Vim models.


[374] 2409.09810

Local MALA-within-Gibbs for Bayesian image deblurring with total variation prior

We consider Bayesian inference for image deblurring with a total variation (TV) prior. Since the posterior is analytically intractable, we resort to Markov chain Monte Carlo (MCMC) methods. However, since most MCMC methods significantly deteriorate in high dimensions, they are not suitable for high-resolution imaging problems. In this paper, we show how low-dimensional sampling can still be facilitated by exploiting the sparse conditional structure of the posterior. To this end, we make use of the local structures of the blurring operator and the TV prior by partitioning the image into rectangular blocks and employing a blocked Gibbs sampler with proposals stemming from the Metropolis-adjusted Langevin algorithm (MALA). We prove that this MALA-within-Gibbs (MLwG) sampling algorithm has dimension-independent block acceptance rates and a dimension-independent convergence rate. In order to apply the MALA proposals, we approximate the TV by a smoothed version, and show that the introduced approximation error is evenly distributed and dimension-independent. Since the posterior is a Gibbs density, we can use the Hammersley-Clifford Theorem to identify the posterior conditionals, which depend only locally on the neighboring blocks. We outline computational strategies to evaluate the conditionals, which are the target densities in the Gibbs updates, locally and in parallel. In two numerical experiments, we validate the dimension-independent properties of the MLwG algorithm and demonstrate its superior performance over MALA.
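
For reference, a single generic MALA update (the paper applies such proposals blockwise within Gibbs, which is not reproduced here): a Langevin drift proposal followed by the Metropolis-Hastings correction.

```python
import numpy as np

def mala_step(x, log_pi, grad_log_pi, h, rng):
    """One Metropolis-adjusted Langevin step with step size h: propose
    from the Langevin drift, then accept/reject with the MH ratio for
    the asymmetric Gaussian proposal."""
    mean_fwd = x + 0.5 * h * grad_log_pi(x)
    y = mean_fwd + np.sqrt(h) * rng.normal(size=x.shape)
    mean_bwd = y + 0.5 * h * grad_log_pi(y)
    # log q(x|y) - log q(y|x) for Gaussian proposals with covariance h*I
    log_q_ratio = (np.sum((y - mean_fwd) ** 2)
                   - np.sum((x - mean_bwd) ** 2)) / (2 * h)
    log_alpha = log_pi(y) - log_pi(x) + log_q_ratio
    if np.log(rng.random()) < log_alpha:
        return y, True
    return x, False

# Example: standard Gaussian target.
rng = np.random.default_rng(0)
x = np.zeros(2)
samples = []
for _ in range(5000):
    x, _ = mala_step(x, lambda v: -0.5 * v @ v, lambda v: -v, 0.5, rng)
    samples.append(x)
print(np.mean(samples, axis=0), np.var(samples, axis=0))  # ~0, ~1
```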


[375] 2409.09811

PROSE-FD: A Multimodal PDE Foundation Model for Learning Multiple Operators for Forecasting Fluid Dynamics

We propose PROSE-FD, a zero-shot multimodal PDE foundational model for simultaneous prediction of heterogeneous two-dimensional physical systems related to distinct fluid dynamics settings. These systems include shallow water equations and the Navier-Stokes equations with incompressible and compressible flow, regular and complex geometries, and different buoyancy settings. This work presents a new transformer-based multi-operator learning approach that fuses symbolic information to perform operator-based data prediction, i.e. non-autoregressive. By incorporating multiple modalities in the inputs, the PDE foundation model builds in a pathway for including mathematical descriptions of the physical behavior. We pre-train our foundation model on 6 parametric families of equations collected from 13 datasets, including over 60K trajectories. Our model outperforms popular operator learning, computer vision, and multi-physics models, in benchmark forward prediction tasks. We test our architecture choices with ablation studies.


[376] 2409.09812

Hierarchical Event-Triggered Systems: Safe Learning of Quasi-Optimal Deadline Policies

We present a hierarchical architecture to improve the efficiency of event-triggered control (ETC) in reducing resource consumption. This paper considers event-triggered systems generally as impulsive control systems in which the objective is to minimize the number of impulses. Our architecture recognizes that traditional ETC is a greedy strategy for optimizing average inter-event times and introduces the idea of a deadline policy for the optimization of long-term discounted inter-event times. A lower layer is designed employing event-triggered control to guarantee the satisfaction of control objectives, while a higher layer implements a deadline policy designed with reinforcement learning to improve the discounted inter-event time. We apply this scheme to the control of an orbiting spacecraft, showing superior performance in terms of actuation-frequency reduction with respect to a standard (one-layer) ETC while maintaining safety guarantees.


[377] 2409.09816

Fast Shortest Path Polyline Smoothing With G1 Continuity and Bounded Curvature

In this work, we propose a novel and efficient method for smoothing polylines in motion planning tasks. The algorithm applies to motion planning for vehicles with bounded curvature. We show that the generated path: 1) has minimal length, 2) is $G^1$ continuous, and 3) is collision-free by construction, provided the hypotheses are respected. We compare our solution with the state-of-the-art and show its advantages both in terms of computation time and in the length of the computed path.


[378] 2409.09819

A Simpler Alternative to Variational Regularized Counterfactual Risk Minimization

Variance regularized counterfactual risk minimization (VRCRM) has been proposed as an alternative off-policy learning (OPL) method. The VRCRM method uses a lower bound on the $f$-divergence between the logging policy and the target policy as regularization during learning, and was shown to improve performance over existing OPL alternatives on multi-label classification tasks. In this work, we revisit the original experimental setting of VRCRM and propose to minimize the $f$-divergence directly, instead of optimizing for the lower bound using an $f$-GAN approach. Surprisingly, we were unable to reproduce the results reported in the original setting. In response, we propose a simpler alternative that minimizes a direct approximation of the $f$-divergence instead of an $f$-GAN-based lower bound. Experiments show that minimizing the divergence using $f$-GANs did not work as expected, whereas our proposed simpler alternative works better empirically.


[379] 2409.09821

Forward Propagation of Low Discrepancy Through McKean-Vlasov Dynamics: From QMC to MLQMC

This work develops a particle system addressing the approximation of McKean-Vlasov stochastic differential equations (SDEs). The novelty of the approach lies in involving low discrepancy sequences nontrivially in the construction of a particle system with coupled noise and initial conditions. Weak convergence for SDEs with additive noise is proven. A numerical study demonstrates that the novel approach presented here doubles the respective convergence rates for weak and strong approximation of the mean-field limit, compared with the standard particle system. These rates are proven in the simplified setting of a mean-field ordinary differential equation in terms of appropriate bounds involving the star discrepancy for low discrepancy sequences with a group structure, such as Rank-1 lattice points. This construction nontrivially provides an antithetic multilevel quasi-Monte Carlo estimator. An asymptotic error analysis reveals that the proposed approach outperforms methods based on the classic particle system with independent initial conditions and noise.


[380] 2409.09822

Causal Inference with Large Language Model: A Survey

Causal inference has been a pivotal challenge across diverse domains such as medicine and economics, demanding a complicated integration of human knowledge, mathematical reasoning, and data mining capabilities. Recent advancements in natural language processing (NLP), particularly with the advent of large language models (LLMs), have introduced promising opportunities for traditional causal inference tasks. This paper reviews recent progress in applying LLMs to causal inference, encompassing various tasks spanning different levels of causation. We summarize the main causal problems and approaches, and present a comparison of their evaluation results in different causal scenarios. Furthermore, we discuss key findings and outline directions for future research, underscoring the potential implications of integrating LLMs in advancing causal inference methodologies.


[381] 2409.09823

Efficient Video to Audio Mapper with Visual Scene Detection

Video-to-audio (V2A) generation aims to produce corresponding audio given silent video inputs. This task is particularly challenging due to the cross-modality and sequential nature of the audio-visual features involved. Recent works have made significant progress in bridging the domain gap between video and audio, generating audio that is semantically aligned with the video content. However, a critical limitation of these approaches is their inability to effectively recognize and handle multiple scenes within a video, often leading to suboptimal audio generation in such cases. In this paper, we first reimplement a state-of-the-art V2A model with a slightly modified lightweight architecture, achieving results that outperform the baseline. We then propose an improved V2A model that incorporates a scene detector to address the challenge of switching between multiple visual scenes. Results on VGGSound show that our model can recognize and handle multiple scenes within a video and achieves superior performance over the baseline in both fidelity and relevance.


[382] 2409.09825

GP-GPT: Large Language Model for Gene-Phenotype Mapping

Pre-trained large language models (LLMs) have attracted increasing attention in biomedical domains due to their success in natural language processing. However, the complex traits and heterogeneity of multi-source genomics data pose significant challenges when adapting these models to the bioinformatics and biomedical field. To address these challenges, we present GP-GPT, the first specialized large language model for genetic-phenotype knowledge representation and genomics relation analysis. Our model is fine-tuned in two stages on a comprehensive corpus composed of over 3,000,000 terms in genomics, proteomics, and medical genetics, derived from multiple large-scale validated datasets and scientific publications. GP-GPT demonstrates proficiency in accurately retrieving medical genetics information and performing common genomics analysis tasks, such as genomics information retrieval and relationship determination. Comparative experiments across domain-specific tasks reveal that GP-GPT outperforms state-of-the-art LLMs, including Llama2, Llama3, and GPT-4. These results highlight GP-GPT's potential to enhance genetic disease relation research and facilitate accurate and efficient analysis in the fields of genomics and medical genetics. Our investigation also demonstrates subtle changes in the representations of bio-factor entities within GP-GPT, suggesting opportunities for applying LLMs to advance gene-phenotype research.


[383] 2409.09827

On the Effect of Robot Errors on Human Teaching Dynamics

Human-in-the-loop learning is gaining popularity, particularly in the field of robotics, because it leverages human knowledge about real-world tasks to facilitate agent learning. When people instruct robots, they naturally adapt their teaching behavior in response to changes in robot performance. While current research predominantly focuses on integrating human teaching dynamics from an algorithmic perspective, understanding these dynamics from a human-centered standpoint is an under-explored, yet fundamental problem. Addressing this issue will enhance both robot learning and user experience. Therefore, this paper explores one potential factor contributing to the dynamic nature of human teaching: robot errors. We conducted a user study to investigate how the presence and severity of robot errors affect three dimensions of human teaching dynamics: feedback granularity, feedback richness, and teaching time, in both forced-choice and open-ended teaching contexts. The results show that people tend to spend more time teaching robots with errors, provide more detailed feedback over specific segments of a robot's trajectory, and that robot error can influence a teacher's choice of feedback modality. Our findings offer valuable insights for designing effective interfaces for interactive learning and optimizing algorithms to better understand human intentions.


[384] 2409.09828

Latent Diffusion Models for Controllable RNA Sequence Generation

This paper presents RNAdiffusion, a latent diffusion model for generating and optimizing discrete RNA sequences. RNA is a particularly dynamic and versatile molecule in biological processes. RNA sequences exhibit high variability and diversity, characterized by their variable lengths, flexible three-dimensional structures, and diverse functions. We utilize pretrained BERT-type models to encode raw RNAs into token-level, biologically meaningful representations. A Q-Former is employed to compress these representations into a fixed-length set of latent vectors, with an autoregressive decoder trained to reconstruct RNA sequences from these latent variables. We then develop a continuous diffusion model within this latent space. To enable optimization, we train reward networks to estimate functional properties of RNA from the latent variables. We employ gradient-based guidance during the backward diffusion process, aiming to generate RNA sequences that are optimized for higher rewards. Empirical experiments confirm that RNAdiffusion generates non-coding RNAs that align with natural distributions across various biological indicators. We further fine-tune the diffusion model on untranslated regions (UTRs) of mRNA and optimize sampled sequences for protein translation efficiency. Our guided diffusion model effectively generates diverse UTR sequences with high Mean Ribosome Loading (MRL) and Translation Efficiency (TE), surpassing baselines. These results hold promise for studies on RNA sequence-function relationships, protein synthesis, and enhancing therapeutic RNA design.


[385] 2409.09829

NARF24: Estimating Articulated Object Structure for Implicit Rendering

Articulated objects and their representations pose a difficult problem for robots. These objects require not only representations of geometry and texture, but also of the various connections and joint parameters that make up each articulation. We propose a method that learns a common Neural Radiance Field (NeRF) representation across a small number of collected scenes. This representation is combined with a parts-based image segmentation to produce an implicit space part localization, from which the connectivity and joint parameters of the articulated object can be estimated, thus enabling configuration-conditioned rendering.


[386] 2409.09831

Generating Synthetic Free-text Medical Records with Low Re-identification Risk using Masked Language Modeling

In this paper, we present a system that generates synthetic free-text medical records, such as discharge summaries, admission notes and doctor correspondences, using Masked Language Modeling (MLM). Our system is designed to preserve the critical information of the records while introducing significant diversity and minimizing re-identification risk. The system incorporates a de-identification component that uses Philter to mask Protected Health Information (PHI), followed by a medical named entity recognition (NER) model to retain key medical information. We explore various masking ratios and mask-filling techniques to balance the trade-off between diversity and fidelity in the synthetic outputs without affecting overall readability. Our results demonstrate that the system can produce high-quality synthetic data with significant diversity while achieving a HIPAA-compliant PHI recall rate of 0.96 and a low re-identification risk of 0.035. Furthermore, downstream evaluations using an NER task reveal that the synthetic data can be effectively used to train models with performance comparable to those trained on real data. The flexibility of the system allows it to be adapted for specific use cases, making it a valuable tool for privacy-preserving data generation in medical research and healthcare applications.
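
The mask-and-refill core of such a system is easy to illustrate with an off-the-shelf masked language model. A minimal sketch, assuming PHI has already been masked by the de-identification step (the model choice and example sentence are illustrative, not from the paper):

    from transformers import pipeline

    # Refill a masked token with MLM predictions to diversify the text.
    fill = pipeline("fill-mask", model="bert-base-uncased")

    note = "Patient was admitted with [MASK] pain and discharged after two days."
    for candidate in fill(note, top_k=3):
        print(candidate["token_str"], round(candidate["score"], 3))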


[387] 2409.09832

Template-based Multi-Domain Face Recognition

Despite the remarkable performance of deep neural networks for face detection and recognition tasks in the visible spectrum, their performance on more challenging non-visible domains still lags by comparison. While significant research has been done in the fields of domain adaptation and domain generalization, in this paper we tackle scenarios in which these methods have limited applicability owing to the lack of training data from target domains. We focus on the single-source (visible), multi-target (SWIR, long-range/remote, surveillance, and body-worn) face recognition task. We show through experiments that a good template generation algorithm becomes crucial as the complexity of the target domain increases. In this context, we introduce a template generation algorithm called Norm Pooling (and a variant known as Sparse Pooling) and show that it outperforms average pooling across different domains and networks, on the IARPA JANUS Benchmark Multi-domain Face (IJB-MDF) dataset.
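
The abstract does not define Norm Pooling precisely; one plausible reading, sketched below against the average-pooling baseline, is to weight each per-image embedding of a template by its feature norm (a common quality proxy). The weighting scheme here is an assumption for illustration, not the paper's definition:

    import numpy as np

    def average_pool(feats):
        # feats: (n_images, d) per-image embeddings of one template
        return feats.mean(axis=0)

    def norm_pool(feats):
        # Hypothetical norm-weighted pooling: weight each embedding by its
        # L2 norm, pool, then renormalize the template embedding.
        w = np.linalg.norm(feats, axis=1, keepdims=True)
        pooled = (w * feats).sum(axis=0) / w.sum()
        return pooled / np.linalg.norm(pooled)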


[388] 2409.09837

Finite element analysis of a nematic liquid crystal Landau-de Gennes model with quartic elastic terms

In arXiv:1906.09232v2, Golovaty et al. present a $Q$-tensor model for liquid crystal dynamics which reduces to the well-known Oseen-Frank director field model in uniaxial states. We study a closely related model and present an energy stable scheme for the corresponding gradient flow. We prove the convergence of this scheme via fixed-point iteration and rigorously show the $\Gamma$-convergence of discrete minimizers as the mesh size approaches zero. In the numerical experiments, we successfully simulate isotropic-to-nematic phase transitions as expected.


[389] 2409.09841

Tracking Virtual Meetings in the Wild: Re-identification in Multi-Participant Virtual Meetings

In recent years, workplaces and educational institutes have widely adopted virtual meeting platforms. This has led to a growing interest in analyzing and extracting insights from these meetings, which requires effective detection and tracking of unique individuals. In practice, there is no standardization in how video meeting recordings are laid out or captured across the different platforms and services. This, in turn, creates a challenge in acquiring this data stream and analyzing it in a uniform fashion. Our approach provides a solution to the most general form of video recording, usually a grid of participants from a single video source with no metadata on participant locations, while imposing the fewest constraints and assumptions on how the data was acquired. Conventional approaches often use YOLO models coupled with tracking algorithms, assuming linear motion trajectories akin to those observed in CCTV footage. However, such assumptions fall short in virtual meetings, where a participant's video feed window can abruptly change location across the grid. In an organic video meeting setting, participants frequently join and leave, leading to sudden, non-linear movements on the video grid. This disrupts optical flow-based tracking methods that depend on linear motion. Consequently, standard object detection and tracking methods might mistakenly assign multiple participants to the same tracker. In this paper, we introduce a novel approach to track and re-identify participants in remote video meetings by utilizing the spatio-temporal priors arising from the data in our domain. This, in turn, increases tracking capabilities compared to general object tracking. Our approach reduces the error rate by 95% on average compared to YOLO-based tracking methods as a baseline.


[390] 2409.09844

A Benchmark Dataset with Larger Context for Non-Factoid Question Answering over Islamic Text

Accessing and comprehending religious texts, particularly the Quran (the sacred scripture of Islam) and Ahadith (the corpus of the sayings or traditions of the Prophet Muhammad), in today's digital era necessitates efficient and accurate Question-Answering (QA) systems. Yet, the scarcity of QA systems tailored specifically to the detailed nature of inquiries about the Quranic Tafsir (explanation, interpretation, context of Quran for clarity) and Ahadith poses significant challenges. To address this gap, we introduce a comprehensive dataset meticulously crafted for QA purposes within the domain of Quranic Tafsir and Ahadith. This dataset comprises a robust collection of over 73,000 question-answer pairs, standing as the largest reported dataset in this specialized domain. Importantly, both questions and answers within the dataset are meticulously enriched with contextual information, serving as invaluable resources for training and evaluating tailored QA systems. However, while this paper highlights the dataset's contributions and establishes a benchmark for evaluating QA performance in the Quran and Ahadith domains, our subsequent human evaluation uncovered critical insights regarding the limitations of existing automatic evaluation techniques. The discrepancy between automatic evaluation metrics, such as ROUGE scores, and human assessments became apparent. The human evaluation indicated significant disparities: the model's verdict consistency with expert scholars ranged from 11% to 20%, while its contextual understanding spanned a broader spectrum of 50% to 90%. These findings underscore the necessity for evaluation techniques that capture the nuances and complexities inherent in understanding religious texts, surpassing the limitations of traditional automatic metrics.


[391] 2409.09845

FSL-LVLM: Friction-Aware Safety Locomotion using Large Vision Language Model in Wheeled Robots

Wheeled-legged robots offer significant mobility and versatility but face substantial challenges when operating on slippery terrains. Traditional model-based controllers for these robots assume no slipping. While reinforcement learning (RL) helps quadruped robots adapt to different surfaces, recovering from slips remains challenging, especially for systems with few contact points. Estimating the ground friction coefficient is another open challenge. In this paper, we propose a novel friction-aware safety locomotion framework that integrates Large Vision Language Models (LVLMs) with an RL policy. Our approach explicitly incorporates the estimated friction coefficient into the RL policy, enabling the robot to adapt its behavior in advance based on the surface type before reaching it. We introduce a Friction-From-Vision (FFV) module, which leverages LVLMs to estimate ground friction coefficients, eliminating the need for large datasets and extensive training. The framework was validated on a customized wheeled inverted pendulum, and experimental results demonstrate that our framework increases the success rate in completing driving tasks by adjusting speed according to terrain type, while achieving better tracking performance compared to baseline methods. Our framework can be readily integrated with other RL policies.


[392] 2409.09846

A Global Perspective on the Past, Present, and Future of Video Streaming over Starlink

This study presents the first global analysis of on-demand video streaming over Low Earth Orbit (LEO) satellite networks, using data from over one million households across 85 countries. We highlight Starlink's role as a major LEO provider, enhancing connectivity in underserved regions. Our findings reveal that while overall video quality on Starlink matches that of traditional networks, the inherent variability in LEO conditions -- such as throughput fluctuations and packet loss -- leads to an increase in bitrate switches and rebuffers. To further improve the quality of experience for the LEO community, we modify existing congestion control and adaptive bitrate streaming algorithms, evaluating them in simulation and in real A/B tests deployed to over one million households. Our results underscore the need for video streaming and congestion control algorithms to adapt to rapidly evolving network landscapes, ensuring high-quality service across diverse and dynamic network types.


[393] 2409.09848

A Comprehensive Survey of PID and Pure Pursuit Control Algorithms for Autonomous Vehicle Navigation

The autonomous driving industry is experiencing unprecedented growth, driven by rapid advancements in technology and increasing demand for safer, more efficient transportation. At the heart of this revolution are two critical factors: lateral and longitudinal control, which together enable vehicles to navigate complex environments with high accuracy and minimal error. This paper provides a detailed overview of two of the field's most commonly used and stable control algorithms: proportional-integral-derivative (PID) and pure pursuit. These algorithms have proved useful in solving the issues of lateral (steering) and longitudinal (speed and distance) control in autonomous vehicles. This survey aims to provide researchers, engineers, and industry professionals with an in-depth understanding of these fundamental control algorithms, their current applications, and their potential to shape the future of autonomous driving technology.
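
For concreteness, minimal textbook versions of both controllers are sketched below; the gains, lookahead distance, and wheelbase are illustrative values, not from the survey:

    import math

    class PID:
        # Textbook PID, typically used for longitudinal (speed) control.
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral, self.prev_error = 0.0, 0.0

        def update(self, error):
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    def pure_pursuit_steering(alpha, lookahead, wheelbase):
        # Textbook pure pursuit for lateral control: alpha is the angle from
        # the vehicle heading to the lookahead point on the reference path.
        return math.atan2(2.0 * wheelbase * math.sin(alpha), lookahead)

    throttle = PID(kp=1.0, ki=0.1, kd=0.05, dt=0.02).update(error=2.0)
    steer = pure_pursuit_steering(alpha=0.1, lookahead=5.0, wheelbase=2.7)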


[394] 2409.09849

Dynamic Layer Detection of a Thin Silk Cloth using DenseTact Optical Tactile Sensors

Cloth manipulation is an important aspect of many everyday tasks and remains a significant challenge for robots. While existing research has made strides in tasks like cloth smoothing and folding, many studies struggle with common failure modes (crumpled corners/edges, incorrect grasp configurations) that a preliminary step of cloth layer detection can solve. We present a novel method for classifying the number of grasped cloth layers using a custom gripper equipped with DenseTact 2.0 optical tactile sensors. After grasping a cloth, the gripper performs an anthropomorphic rubbing motion while collecting optical flow, 6-axis wrench, and joint state data. Using this data in a transformer-based network achieves a test accuracy of 98.21% in correctly classifying the number of grasped layers, showing the effectiveness of our dynamic rubbing method. Evaluating different inputs and model architectures highlights the usefulness of using tactile sensor information and a transformer model for this task. A comprehensive dataset of 368 labeled trials was collected and made open-source along with this paper. Our project page is available at https://armlabstanford.github.io/dynamic-cloth-detection.


[395] 2409.09850

Physically-Consistent Parameter Identification of Robots in Contact

Accurate inertial parameter identification is crucial for the simulation and control of robots encountering intermittent contact with the environment. Classically, robots' inertial parameters are obtained from CAD models that are not precise (and sometimes not available, e.g., Spot from Boston Dynamics), hence requiring identification. To do that, existing methods require access to contact force measurement, a modality not present in modern quadruped and humanoid robots. This paper presents an alternative technique that utilizes joint current/torque measurements -- a standard sensing modality in modern robots -- to identify inertial parameters without requiring direct contact force measurements. By projecting the whole-body dynamics into the null space of contact constraints, we eliminate the dependency on contact forces and reformulate the identification problem as a linear matrix inequality that can handle physical and geometrical constraints. We compare our proposed method against a common black-box identification method using a deep neural network and show that incorporating physical consistency significantly improves the sample efficiency and generalizability of the model. Finally, we validate our method on the Spot quadruped robot across various locomotion tasks, showcasing its accuracy and generalizability in real-world scenarios over different gaits.
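
The projection idea admits a compact sketch in standard notation (generic, and not necessarily the paper's exact formulation). Writing the whole-body dynamics as linear in the inertial parameters $\varphi$ via a regressor $Y$,

    M(q)\,\ddot{q} + h(q,\dot{q}) = Y(q,\dot{q},\ddot{q})\,\varphi = S^\top \tau + J_c^\top \lambda,

pre-multiplying by a projector $P$ with $P J_c^\top = 0$ (for example $P = I - J_c^{+} J_c$, which maps onto the null space of the contact Jacobian $J_c$) eliminates the unmeasured contact forces $\lambda$:

    P\,Y(q,\dot{q},\ddot{q})\,\varphi = P\,S^\top \tau,

which remains linear in $\varphi$ and involves only joint torque/current measurements $\tau$, so physical-consistency constraints on $\varphi$ can then be imposed in the linear-matrix-inequality formulation.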


[396] 2409.09852

A Complete Algorithm for a Moving Target Traveling Salesman Problem with Obstacles

The moving target traveling salesman problem with obstacles (MT-TSP-O) is a generalization of the traveling salesman problem (TSP) where, as its name suggests, the targets are moving. A solution to the MT-TSP-O is a trajectory that visits each moving target during one of its time windows while avoiding stationary obstacles. We assume each target moves at a constant velocity during each of its time windows. The agent has a speed limit, and this speed limit is no smaller than any target's speed. This paper presents the first complete algorithm for finding feasible solutions to the MT-TSP-O. Our algorithm builds a tree where the nodes are agent trajectories intercepting a unique sequence of targets within a unique sequence of time windows. We generate each of a parent node's children by extending the parent's trajectory to intercept one additional target, each child corresponding to a different choice of target and time window. This extension consists of planning a trajectory from the parent trajectory's final point in space-time to a moving target. To solve this point-to-moving-target subproblem, we define a novel generalization of a visibility graph called a moving target visibility graph (MTVG). Our overall algorithm is called MTVG-TSP. To validate MTVG-TSP, we test it on 570 instances with up to 30 targets. We implement a baseline method that samples trajectories of targets into points, based on prior work on special cases of the MT-TSP-O. MTVG-TSP finds feasible solutions in all cases where the baseline does, and when the sum of the targets' time window lengths enters a critical range, MTVG-TSP finds a feasible solution with up to 38 times less computation time.


[397] 2409.09858

A Survey of Out-of-distribution Generalization for Graph Machine Learning from a Causal View

Graph machine learning (GML) has been successfully applied across a wide range of tasks. Nonetheless, GML faces significant challenges in generalizing over out-of-distribution (OOD) data, which raises concerns about its wider applicability. Recent advancements have underscored the crucial role of causality-driven approaches in overcoming these generalization challenges. Distinct from traditional GML methods that primarily rely on statistical dependencies, causality-focused strategies delve into the underlying causal mechanisms of data generation and model prediction, thus significantly improving the generalization of GML across different environments. This paper offers a thorough review of recent progress in causality-involved GML generalization. We elucidate the fundamental concepts of employing causality to enhance graph model generalization and categorize the various approaches, providing detailed descriptions of their methodologies and the connections among them. Furthermore, we explore the incorporation of causality in other related important areas of trustworthy GML, such as explanation, fairness, and robustness. Concluding with a discussion on potential future research directions, this review seeks to articulate the continuing development and future potential of causality in enhancing the trustworthiness of graph machine learning.


[398] 2409.09860

Revisiting Physical-World Adversarial Attack on Traffic Sign Recognition: A Commercial Systems Perspective

Traffic Sign Recognition (TSR) is crucial for safe and correct driving automation. Recent works revealed a general vulnerability of TSR models to physical-world adversarial attacks, which can be low-cost, highly deployable, and capable of causing severe attack effects such as hiding a critical traffic sign or spoofing a fake one. However, existing works have generally only evaluated attack effects on academic TSR models, leaving the impacts of such attacks on real-world commercial TSR systems largely unclear. In this paper, we conduct the first large-scale measurement of physical-world adversarial attacks against commercial TSR systems. Our testing results reveal that it is possible for existing attack works from academia to achieve highly reliable (100\%) attack success against certain commercial TSR system functionality, but such attack capabilities are not generalizable, leading to much lower-than-expected attack success rates overall. We find that one potential major factor is a spatial memorization design that commonly exists in today's commercial TSR systems. We design new attack success metrics that can mathematically model the impacts of such design on the TSR system-level attack success, and use them to revisit existing attacks. Through these efforts, we uncover 7 novel observations, some of which directly challenge the observations or claims in prior works due to the introduction of the new metrics.


[399] 2409.09866

Constructing a Singing Style Caption Dataset

Singing voice synthesis and conversion have emerged as significant subdomains of voice generation, creating strong demand for prompt-conditioned generation. Unlike common voice data, generating a singing voice requires an understanding of various associated vocal and musical characteristics, such as the vocal tone of the singer or emotional expressions. However, existing open-source audio-text datasets for voice generation tend to capture only a very limited range of attributes, often missing the musical characteristics of the audio. To fill this gap, we introduce S2Cap, an audio-text pair dataset with a diverse set of attributes. S2Cap consists of pairs of textual prompts and music audio samples with a wide range of vocal and musical attributes, including pitch, volume, tempo, mood, the singer's gender and age, and musical genre and emotional expression. Utilizing S2Cap, we propose an effective novel baseline algorithm for singing style captioning, a task we introduce here as the counterpart of voice generation: generating text descriptions of vocal characteristics. First, to mitigate the misalignment between the audio encoder and the text decoder, we present a novel mechanism called CRESCENDO, which uses positive-pair similarity learning to synchronize the embedding space of a pretrained audio encoder with that of a text encoder. We additionally supervise the model using the singer's voice, demixed from the accompaniment. This supervision allows the model to more accurately capture vocal characteristics, leading to improved singing style captions that better reflect the style of the singer. The dataset and code are available at \url{https://github.com/HJ-Ok/S2cap}.
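
Positive-pair similarity learning of the kind CRESCENDO describes can be sketched with a symmetric CLIP-style InfoNCE objective; the exact formulation in the paper may differ, and the temperature below is an assumed hyperparameter:

    import torch
    import torch.nn.functional as F

    def positive_pair_alignment_loss(audio_emb, text_emb, temperature=0.07):
        # Pull matched audio/text embeddings together and push mismatched
        # pairs apart, aligning the audio encoder's space with the text one.
        a = F.normalize(audio_emb, dim=-1)
        t = F.normalize(text_emb, dim=-1)
        logits = a @ t.T / temperature        # (batch, batch) similarities
        labels = torch.arange(a.size(0))      # matched pairs on the diagonal
        return (F.cross_entropy(logits, labels) +
                F.cross_entropy(logits.T, labels)) / 2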


[400] 2409.09867

Towards Kinetic Manipulation of the Latent Space

The latent spaces of many generative models are rich in unexplored valleys and mountains. The majority of tools used for exploring them are so far limited to Graphical User Interfaces (GUIs). While specialized hardware can be used for this task, we show that simple feature extraction from pre-trained Convolutional Neural Networks (CNNs), applied to a live RGB camera feed, does a very good job of manipulating the latent space through simple changes in the scene, with vast room for improvement. We name this new paradigm Visual-reactive Interpolation, and the full code can be found at https://github.com/PDillis/stylegan3-fun.


[401] 2409.09868

SAFER-Splat: A Control Barrier Function for Safe Navigation with Online Gaussian Splatting Maps

SAFER-Splat (Simultaneous Action Filtering and Environment Reconstruction) is a real-time, scalable, and minimally invasive action filter, based on control barrier functions, for safe robotic navigation in a detailed map constructed at runtime using Gaussian Splatting (GSplat). We propose a novel Control Barrier Function (CBF) that not only induces safety with respect to all Gaussian primitives in the scene, but, when synthesized into a controller, is capable of processing hundreds of thousands of Gaussians while maintaining a minimal memory footprint and operating at 15 Hz during online Splat training. Only a small fraction of the total compute time consumes GPU resources, enabling uninterrupted training. The safety layer is minimally invasive, correcting robot actions only when they are unsafe. To showcase the safety filter, we also introduce SplatBridge, an open-source software package built with ROS for real-time GSplat mapping for robots. We demonstrate the safety and robustness of our pipeline first in simulation, where our method is 20-50x faster, safer, and less conservative than competing methods based on neural radiance fields. Further, we demonstrate simultaneous GSplat mapping and safety filtering on a drone hardware platform using only on-board perception. We verify that under teleoperation a human pilot cannot invoke a collision. Our videos and codebase can be found at https://chengine.github.io/safer-splat.
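
The minimally invasive character of a CBF filter is easiest to see in the single-constraint case, where the safe control is a closed-form projection. The sketch below is a one-constraint illustration with generic control-affine dynamics x_dot = f(x) + g(x)u, not the paper's multi-Gaussian formulation:

    import numpy as np

    def cbf_filter(u_des, grad_h, f, g, h, alpha=1.0):
        # Enforce h_dot = grad_h . (f + g u) >= -alpha * h. If u_des already
        # satisfies it, pass it through unchanged; otherwise project onto
        # the boundary of the safe half-space in control space.
        a = g.T @ grad_h              # constraint normal in control space
        b = -alpha * h - grad_h @ f   # required lower bound on a . u
        if a @ u_des >= b:
            return u_des              # safe action: leave it untouched
        return u_des + a * (b - a @ u_des) / (a @ a)   # closest safe control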


[402] 2409.09869

Critic as Lyapunov function (CALF): a model-free, stability-ensuring agent

This work presents and showcases a novel reinforcement learning agent called Critic As Lyapunov Function (CALF) which is model-free and ensures online stabilization of the environment, i.e., of the dynamical system: in each learning episode, the environment is stabilized. This, as demonstrated in a case study with a mobile robot simulator, greatly improves the overall learning performance. The base actor-critic scheme of CALF is analogous to SARSA. SARSA itself did not show any success in reaching the target in our studies, although a modified version, called SARSA-m here, did succeed in some learning scenarios. Still, CALF greatly outperformed that approach. CALF was also demonstrated to improve a nominal stabilizer provided to it. In summary, the presented agent may be considered a viable approach to fusing classical control with reinforcement learning. Competing approaches are mostly either offline or model-based, such as those that fuse model-predictive control into the agent.


[403] 2409.09870

TransForce: Transferable Force Prediction for Vision-based Tactile Sensors with Sequential Image Translation

Vision-based tactile sensors (VBTSs) provide high-resolution tactile images crucial for robot in-hand manipulation. However, force sensing in VBTSs is underutilized due to the costly and time-intensive process of acquiring paired tactile images and force labels. In this study, we introduce a transferable force prediction model, TransForce, designed to leverage collected image-force paired data for new sensors under varying illumination colors and marker patterns while improving the accuracy of predicted forces, especially in the shear direction. Our model effectively achieves translation of tactile images from the source domain to the target domain, ensuring that the generated tactile images reflect the illumination colors and marker patterns of the new sensors while accurately aligning with the elastomer deformation observed in existing sensors, which benefits force prediction for new sensors. Building on this, a recurrent force prediction model trained with generated sequential tactile images and existing force labels is employed to estimate higher-accuracy forces for new sensors, with average errors as low as 0.69 N (5.8\% of the full working range) along the $x$-axis, 0.70 N (5.8\%) along the $y$-axis, and 1.11 N (6.9\%) along the $z$-axis, compared with models trained on single images. The experimental results also reveal that the pure marker modality is more helpful than the RGB modality in improving the accuracy of force in the shear direction, while the RGB modality shows better performance in the normal direction.


[404] 2409.09871

Marginalizing and Conditioning Gaussians onto Linear Approximations of Smooth Manifolds with Applications in Robotics

We present closed-form expressions for marginalizing and conditioning Gaussians onto linear manifolds, and demonstrate how to apply these expressions to smooth nonlinear manifolds through linearization. Although marginalization and conditioning onto axis-aligned manifolds are well-established procedures, doing so onto non-axis-aligned manifolds is not as well understood. We demonstrate the utility of our expressions through three applications: 1) approximation of the projected normal distribution, where the quality of our linearized approximation increases as problem nonlinearity decreases; 2) covariance extraction in Koopman SLAM, where our covariances are shown to be consistent on a real-world dataset; and 3) covariance extraction in constrained GTSAM, where our covariances are shown to be consistent in simulation.
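
Both operations have textbook closed forms onto a linear manifold $Ax = a$; a minimal numpy sketch of the standard Gaussian identities (nonlinear manifolds are handled by linearizing first, as in the paper):

    import numpy as np

    def marginalize_onto_linear_manifold(mu, Sigma, A):
        # y = A x is Gaussian with mean A mu and covariance A Sigma A^T.
        return A @ mu, A @ Sigma @ A.T

    def condition_on_linear_manifold(mu, Sigma, A, a):
        # Condition x ~ N(mu, Sigma) on the linear manifold A x = a.
        S = A @ Sigma @ A.T
        K = Sigma @ A.T @ np.linalg.inv(S)    # gain
        mu_c = mu + K @ (a - A @ mu)
        Sigma_c = Sigma - K @ A @ Sigma
        return mu_c, Sigma_c

    mu, Sigma = np.zeros(3), np.eye(3)
    A = np.array([[1.0, 1.0, 0.0]])           # non-axis-aligned: x0 + x1 = 1
    mu_c, Sigma_c = condition_on_linear_manifold(mu, Sigma, A, np.array([1.0]))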


[405] 2409.09874

The Landscape of GPU-Centric Communication

In recent years, GPUs have become the preferred accelerators for HPC and ML applications due to their parallelism and fast memory bandwidth. While GPUs boost computation, inter-GPU communication can create scalability bottlenecks, especially as the number of GPUs per node and cluster grows. Traditionally, the CPU managed multi-GPU communication, but advancements in GPU-centric communication now challenge this CPU dominance by reducing its involvement, granting GPUs more autonomy in communication tasks, and addressing mismatches in multi-GPU communication and computation. This paper provides a landscape of GPU-centric communication, focusing on vendor mechanisms and user-level library supports. It aims to clarify the complexities and diverse options in this field, define the terminology, and categorize existing approaches within and across nodes. The paper discusses vendor-provided mechanisms for communication and memory management in multi-GPU execution and reviews major communication libraries, their benefits, challenges, and performance insights. Then, it explores key research paradigms, future outlooks, and open research questions. By extensively describing GPU-centric communication techniques across the software and hardware stacks, we provide researchers, programmers, engineers, and library designers insights on how best to exploit multi-GPU systems.


[406] 2409.09875

Scaling Continuous Kernels with Sparse Fourier Domain Learning

We address three key challenges in learning continuous kernel representations: computational efficiency, parameter efficiency, and spectral bias. Continuous kernels have shown significant potential, but their practical adoption is often limited by high computational and memory demands. Additionally, these methods are prone to spectral bias, which impedes their ability to capture high-frequency details. To overcome these limitations, we propose a novel approach that leverages sparse learning in the Fourier domain. Our method enables the efficient scaling of continuous kernels, drastically reduces computational and memory requirements, and mitigates spectral bias by exploiting the Gibbs phenomenon.


[407] 2409.09876

A Carryover Storage Quantification Framework for Mid-Term Cascaded Hydropower Planning: A Portland General Electric System Study

Mid-term planning of cascaded hydropower systems (CHSs) determines appropriate carryover storage levels in reservoirs to optimize the usage of available water resources, i.e., maximizing the hydropower generated in the current period (i.e., immediate benefit) plus the potential hydropower generation in the future period (i.e., future value). Thus, in the mid-term CHS planning, properly quantifying the future value deposited in carryover storage is essential to achieve a good balance between immediate benefit and future value. To this end, this paper presents a framework to quantify the future value of carryover storage, which consists of three major steps: i) constructing a module to calculate the maximum possible hydropower generation that a given level of carryover storage can deliver in the future period; ii) extracting the implicit locational marginal water value (LMWV) of carryover storage for each reservoir by applying a partition-then-extract algorithm to the constructed module; and iii) developing a set of analytical rules based on the extracted LMWV to effectively calculate the future value. These rules can be seamlessly integrated into mid-term CHS planning models as tractable mixed-integer linear constraints to quantify the future value properly, and can be easily visualized to offer valuable insights for CHS operators. Finally, numerical results on a CHS of Portland General Electric demonstrate the effectiveness of the presented framework in determining proper carryover storage values to facilitate mid-term CHS planning.


[408] 2409.09877

REG: Refined Generalized Focal Loss for Road Asset Detection on Thai Highways Using Vision-Based Detection and Segmentation Models

This paper introduces a novel framework for detecting and segmenting critical road assets on Thai highways using an advanced Refined Generalized Focal Loss (REG) formulation. Integrated into state-of-the-art vision-based detection and segmentation models, the proposed method effectively addresses class imbalance and the challenges of localizing small, underrepresented road elements, including pavilions, pedestrian bridges, information signs, single-arm poles, bus stops, warning signs, and concrete guardrails. To improve both detection and segmentation accuracy, a multi-task learning strategy is adopted, optimizing REG across multiple tasks. REG is further enhanced by incorporating a spatial-contextual adjustment term, which accounts for the spatial distribution of road assets, and a probabilistic refinement that captures prediction uncertainty in complex environments, such as varying lighting conditions and cluttered backgrounds. Our rigorous mathematical formulation demonstrates that REG minimizes localization and classification errors by applying adaptive weighting to hard-to-detect instances while down-weighting easier examples. Experimental results show a substantial performance improvement, achieving a mAP50 of 80.34 and an F1-score of 77.87, significantly outperforming conventional methods. This research underscores the capability of advanced loss function refinements to enhance the robustness and accuracy of road asset detection and segmentation, thereby contributing to improved road safety and infrastructure management. For an in-depth discussion of the mathematical background and related methods, please refer to previous work available at \url{https://github.com/kaopanboonyuen/REG}.


[409] 2409.09881

Proximal Ranking Policy Optimization for Practical Safety in Counterfactual Learning to Rank

Counterfactual learning to rank (CLTR) can be risky and, in various circumstances, can produce sub-optimal models that hurt performance when deployed. Safe CLTR was introduced to mitigate these risks when using inverse propensity scoring to correct for position bias. However, the existing safety measure for CLTR is not applicable to state-of-the-art CLTR methods, cannot handle trust bias, and relies on specific assumptions about user behavior. We propose a novel approach, proximal ranking policy optimization (PRPO), that provides safety in deployment without assumptions about user behavior. PRPO removes incentives for learning ranking behavior that is too dissimilar to a safe ranking model. Thereby, PRPO imposes a limit on how much learned models can degrade performance metrics, without relying on any specific user assumptions. Our experiments show that PRPO provides higher performance than the existing safe inverse propensity scoring approach. PRPO always maintains safety, even in maximally adversarial situations. By avoiding assumptions, PRPO is the first method with unconditional safety in deployment that translates to robust safety for real-world applications.
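
The "proximal" idea, by analogy with PPO's clipped surrogate, can be sketched as clipping the ratio between the learned and safe ranking policies so that gradients vanish once the learned model drifts too far. This is a generic analogy under assumed interfaces, not the paper's exact objective:

    import torch

    def prpo_like_objective(log_pi_new, log_pi_safe, utility, epsilon=0.2):
        # Per-document propensity ratio between learned and safe policies.
        ratio = torch.exp(log_pi_new - log_pi_safe)
        clipped = torch.clamp(ratio, 1.0 - epsilon, 1.0 + epsilon)
        # Taking the elementwise minimum removes any incentive to push the
        # ratio outside [1-eps, 1+eps] in the direction of higher utility.
        return torch.minimum(ratio * utility, clipped * utility).sum()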


[410] 2409.09882

Safe Control of Quadruped in Varying Dynamics via Safety Index Adaptation

Varying dynamics pose a fundamental difficulty when deploying safe control laws in the real world. Safety Index Synthesis (SIS) deeply relies on the system dynamics and once the dynamics change, the previously synthesized safety index becomes invalid. In this work, we show the real-time efficacy of Safety Index Adaptation (SIA) in varying dynamics. SIA enables real-time adaptation to the changing dynamics so that the adapted safe control law can still guarantee 1) forward invariance within a safe region and 2) finite time convergence to that safe region. This work employs SIA on a package-carrying quadruped robot, where the payload weight changes in real-time. SIA updates the safety index when the dynamics change, e.g., a change in payload weight, so that the quadruped can avoid obstacles while achieving its performance objectives. Numerical study provides theoretical guarantees for SIA and a series of hardware experiments demonstrate the effectiveness of SIA in real-world deployment in avoiding obstacles under varying dynamics.


[411] 2409.09883

Robots that Suggest Safe Alternatives

Goal-conditioned policies, such as those learned via imitation learning, provide an easy way for humans to influence what tasks robots accomplish. However, these robot policies are not guaranteed to execute safely or to succeed when faced with out-of-distribution requests. In this work, we enable robots to know when they can confidently execute a user's desired goal, and automatically suggest safe alternatives when they cannot. Our approach is inspired by control-theoretic safety filtering, wherein a safety filter minimally adjusts a robot's candidate action to be safe. Our key idea is to pose alternative suggestion as a safe control problem in goal space, rather than in action space. Offline, we use reachability analysis to compute a goal-parameterized reach-avoid value network which quantifies the safety and liveness of the robot's pre-trained policy. Online, our robot uses the reach-avoid value network as a safety filter, monitoring the human's given goal and actively suggesting alternatives that are similar but meet the safety specification. We demonstrate our Safe ALTernatives (SALT) framework in simulation experiments with indoor navigation and Franka Panda tabletop manipulation, and with both discrete and continuous goal representations. We find that SALT is able to learn to predict successful and failed closed-loop executions, is a less pessimistic monitor than open-loop uncertainty quantification, and proposes alternatives that consistently align with those people find acceptable.


[412] 2409.09887

Leiden-Fusion Partitioning Method for Effective Distributed Training of Graph Embeddings

In the area of large-scale training of graph embeddings, effective training frameworks and partitioning methods are critical for handling large networks. However, they face two major challenges: 1) existing synchronized distributed frameworks require continuous communication to access information from other machines, and 2) current partitioning methods fail to ensure that subgraphs remain connected components without isolated nodes, which is essential for effective training of GNNs since training relies on information aggregation from neighboring nodes. To address these issues, we introduce a novel partitioning method, named Leiden-Fusion, designed for large-scale training of graphs with minimal communication. Our method extends the Leiden community detection algorithm with a greedy algorithm that merges the smallest communities with highly connected neighboring communities. Our method guarantees that, for an initially connected graph, each partition is a densely connected subgraph with no isolated nodes. After obtaining the partitions, we train a GNN for each partition independently, and finally integrate all embeddings for node classification tasks, which significantly reduces the need for network communication and enhances the efficiency of distributed graph training. We demonstrate the effectiveness of our method through extensive evaluations on several benchmark datasets, achieving high efficiency while preserving the quality of the graph embeddings for node classification tasks.
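
The greedy fusion step can be sketched with networkx. This is an interpretation of the merge rule described above (merge the smallest community into the neighbor it shares the most edges with), with a hypothetical min_size threshold; the paper starts from Leiden partitions rather than singletons:

    import networkx as nx

    def greedy_fuse(G, communities, min_size):
        # communities: list of node sets partitioning a connected graph G.
        communities = [set(c) for c in communities]
        while len(communities) > 1 and min(len(c) for c in communities) < min_size:
            small = min(communities, key=len)
            communities.remove(small)
            # Merge into the community sharing the most cross-edges with it.
            def cross_edges(c):
                return sum(1 for u in small for v in G.neighbors(u) if v in c)
            target = max(communities, key=cross_edges)
            target |= small
        return communities

    G = nx.karate_club_graph()
    parts = greedy_fuse(G, [{n} for n in G.nodes], min_size=5)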


[413] 2409.09888

Flexible Diffusion Scopes with Parameterized Laplacian for Heterophilic Graph Learning

The ability of Graph Neural Networks (GNNs) to capture long-range and global topology information is limited by the scope of conventional graph Laplacian, leading to unsatisfactory performance on some datasets, particularly on heterophilic graphs. To address this limitation, we propose a new class of parameterized Laplacian matrices, which provably offers more flexibility in controlling the diffusion distance between nodes than the conventional graph Laplacian, allowing long-range information to be adaptively captured through diffusion on graph. Specifically, we first prove that the diffusion distance and spectral distance on graph have an order-preserving relationship. With this result, we demonstrate that the parameterized Laplacian can accelerate the diffusion of long-range information, and the parameters in the Laplacian enable flexibility of the diffusion scopes. Based on the theoretical results, we propose topology-guided rewiring mechanism to capture helpful long-range neighborhood information for heterophilic graphs. With this mechanism and the new Laplacian, we propose two GNNs with flexible diffusion scopes: namely the Parameterized Diffusion based Graph Convolutional Networks (PD-GCN) and Graph Attention Networks (PD-GAT). Synthetic experiments reveal the high correlations between the parameters of the new Laplacian and the performance of parameterized GNNs under various graph homophily levels, which verifies that our new proposed GNNs indeed have the ability to adjust the parameters to adaptively capture the global information for different levels of heterophilic graphs. They also outperform the state-of-the-art (SOTA) models on 6 out of 7 real-world benchmark datasets, which further confirms their superiority.


[414] 2409.09889

Well-Behaved (Co)algebraic Semantics of Regular Expressions in Dafny

Regular expressions are commonly understood in terms of their denotational semantics, that is, through formal languages -- the regular languages. This view is inductive in nature: two primitives are equivalent if they are constructed in the same way. Alternatively, regular expressions can be understood in terms of their operational semantics, that is, through deterministic finite automata. This view is coinductive in nature: two primitives are equivalent if they are deconstructed in the same way. It is implied by Kleene's famous theorem that both views are equivalent: regular languages are precisely the formal languages accepted by deterministic finite automata. In this paper, we use Dafny, a verification-aware programming language, to formally verify, for the first time, what has been previously established only through proofs-by-hand: the two semantics of regular expressions are well-behaved, in the sense that they are in fact one and the same, up to pointwise bisimilarity. At each step of our formalisation, we propose an interpretation in the language of Coalgebra. We found that Dafny is particularly well suited for the task due to its inductive and coinductive features and hope our approach serves as a blueprint for future generalisations to other theories.
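
The operational, coinductive view the paper verifies can be mirrored in a short Python sketch (the formalisation itself is in Dafny): Brzozowski derivatives deconstruct an expression one symbol at a time, and running them over a word decides membership in the denoted language:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Re: pass
    @dataclass(frozen=True)
    class Empty(Re): pass          # denotes the empty language
    @dataclass(frozen=True)
    class Eps(Re): pass            # accepts only the empty word
    @dataclass(frozen=True)
    class Sym(Re): c: str
    @dataclass(frozen=True)
    class Alt(Re): l: Re; r: Re
    @dataclass(frozen=True)
    class Seq(Re): l: Re; r: Re
    @dataclass(frozen=True)
    class Star(Re): e: Re

    def nullable(e):               # does e accept the empty word?
        match e:
            case Eps() | Star(_): return True
            case Alt(l, r): return nullable(l) or nullable(r)
            case Seq(l, r): return nullable(l) and nullable(r)
            case _: return False

    def deriv(e, c):               # deconstruct e by one input symbol c
        match e:
            case Sym(d): return Eps() if c == d else Empty()
            case Alt(l, r): return Alt(deriv(l, c), deriv(r, c))
            case Seq(l, r):
                head = Seq(deriv(l, c), r)
                return Alt(head, deriv(r, c)) if nullable(l) else head
            case Star(f): return Seq(deriv(f, c), e)
            case _: return Empty()

    def accepts(e, w):             # the operational semantics in action
        for c in w:
            e = deriv(e, c)
        return nullable(e)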


[415] 2409.09891

Acquiring Pronunciation Knowledge from Transcribed Speech Audio via Multi-task Learning

Recent work has shown the feasibility and benefit of bootstrapping an integrated sequence-to-sequence (Seq2Seq) linguistic frontend from a traditional pipeline-based frontend for text-to-speech (TTS). To overcome the fixed lexical coverage of bootstrapping training data, previous work has proposed to leverage easily accessible transcribed speech audio as an additional training source for acquiring novel pronunciation knowledge for uncovered words, which relies on an auxiliary ASR model as part of a cumbersome implementation flow. In this work, we propose an alternative method to leverage transcribed speech audio as an additional training source, based on multi-task learning (MTL). Experiments show that, compared to a baseline Seq2Seq frontend, the proposed MTL-based method reduces PER from 2.5% to 1.6% for those word types covered exclusively in transcribed speech audio, achieving a similar performance to the previous method but with a much simpler implementation flow.


[416] 2409.09892

Dynamic Fraud Detection: Integrating Reinforcement Learning into Graph Neural Networks

Financial fraud refers to the act of obtaining financial benefits through dishonest means. Such behavior not only disrupts the order of the financial market but also harms economic and social development and breeds other illegal and criminal activities. With the popularization of the internet and online payment methods, many fraudulent activities and money laundering behaviors have shifted from offline to online, posing a great challenge to regulatory authorities. How to efficiently detect these financial fraud activities has become an urgent issue to resolve. Graph neural networks are a type of deep learning model that can exploit the interactive relationships within graph structures, and they have been widely applied in the field of fraud detection. However, several issues remain. First, fraudulent activities account for only a very small fraction of transactions, leading to an unavoidable label-imbalance problem in fraud detection. At the same time, fraudsters often disguise their behavior, which can negatively affect final prediction results. In addition, existing research has overlooked the importance of balancing neighbor information and central node information: when the central node has too many neighbors, the features of the central node itself are often neglected. Finally, fraud activities and patterns change constantly over time, so modeling the dynamic evolution of graph edge relationships is also very important.


[417] 2409.09893

Resolving Inconsistent Semantics in Multi-Dataset Image Segmentation

Leveraging multiple training datasets to scale up image segmentation models is beneficial for increasing robustness and semantic understanding. Individual datasets have well-defined ground truth with non-overlapping mask layouts and mutually exclusive semantics. However, merging them for multi-dataset training disrupts this harmony and leads to semantic inconsistencies; for example, the class "person" in one dataset and class "face" in another will require multilabel handling for certain pixels. Existing methods struggle with this setting, particularly when evaluated on label spaces mixed from the individual training sets. To overcome these issues, we introduce a simple yet effective multi-dataset training approach by integrating language-based embeddings of class names and label space-specific query embeddings. Our method maintains high performance regardless of the underlying inconsistencies between training datasets. Notably, on four benchmark datasets with label space inconsistencies during inference, we outperform previous methods by 1.6% mIoU for semantic segmentation, 9.1% PQ for panoptic segmentation, 12.1% AP for instance segmentation, and 3.0% in the newly proposed PIQ metric.


[418] 2409.09894

Estimating Wage Disparities Using Foundation Models

One thread of empirical work in social science focuses on decomposing group differences in outcomes into unexplained components and components explained by observable factors. In this paper, we study gender wage decompositions, which require estimating the portion of the gender wage gap explained by career histories of workers. Classical methods for decomposing the wage gap employ simple predictive models of wages which condition on a small set of simple summaries of labor history. The problem is that these predictive models cannot take advantage of the full complexity of a worker's history, and the resulting decompositions thus suffer from omitted variable bias (OVB), where covariates that are correlated with both gender and wages are not included in the model. Here we explore an alternative methodology for wage gap decomposition that employs powerful foundation models, such as large language models, as the predictive engine. Foundation models excel at making accurate predictions from complex, high-dimensional inputs. We use a custom-built foundation model, designed to predict wages from full labor histories, to decompose the gender wage gap. We prove that the way such models are usually trained might still lead to OVB, but develop fine-tuning algorithms that empirically mitigate this issue. Our model captures a richer representation of career history than simple models and predicts wages more accurately. In detail, we first provide a novel set of conditions under which an estimator of the wage gap based on a fine-tuned foundation model is $\sqrt{n}$-consistent. Building on the theory, we then propose methods for fine-tuning foundation models that minimize OVB. Using data from the Panel Study of Income Dynamics, we find that history explains more of the gender wage gap than standard econometric models can measure, and we identify elements of history that are important for reducing OVB.


[419] 2409.09895

Materials Matter: Investigating Functional Advantages of Bio-Inspired Materials via Simulated Robotic Hopping

In contrast with the diversity of materials found in nature, most robots are designed with some combination of aluminum, stainless steel, and 3D-printed filament. Additionally, robotic systems are typically assumed to follow basic rigid-body dynamics. However, several examples in nature illustrate how changes in physical material properties yield functional advantages. In this paper, we explore how physical materials (non-rigid bodies) affect the functional performance of a hopping robot. In doing so, we address the practical question of how to model and simulate material properties. Through these simulations we demonstrate that material gradients in the limb system of a single-limb hopper provide functional advantages compared to homogeneous designs. For example, in incline ramp hopping, a material gradient with increasing density provides a 35\% reduction in tracking error and a 23\% reduction in power consumption compared to isotropic stainless steel. By bringing bio-inspiration to the rigid limbs of a robotic system, we seek to show that future robot fabrication should leverage the material anisotropies of moduli and density found in nature. This would reduce vibrations in the system, offset joint torques and vibrations, and protect structural integrity against fatigue and wear. This simulation system could inspire future intelligent material gradients for custom-fabricated robotic locomotive devices.


[420] 2409.09896

GRIN: Zero-Shot Metric Depth with Pixel-Level Diffusion

3D reconstruction from a single image is a long-standing problem in computer vision. Learning-based methods address its inherent scale ambiguity by leveraging increasingly large labeled and unlabeled datasets, to produce geometric priors capable of generating accurate predictions across domains. As a result, state-of-the-art approaches show impressive performance in zero-shot relative and metric depth estimation. Recently, diffusion models have exhibited remarkable scalability and generalizable properties in their learned representations. However, because these models repurpose tools originally designed for image generation, they can only operate on dense ground-truth, which is not available for most depth labels, especially in real-world settings. In this paper we present GRIN, an efficient diffusion model designed to ingest sparse unstructured training data. We use image features with 3D geometric positional encodings to condition the diffusion process both globally and locally, generating depth predictions at a pixel-level. With comprehensive experiments across eight indoor and outdoor datasets, we show that GRIN establishes a new state of the art in zero-shot metric monocular depth estimation even when trained from scratch.


[421] 2409.09899

Semantic2D: A Semantic Dataset for 2D Lidar Semantic Segmentation

This paper presents a 2D lidar semantic segmentation dataset to enhance the semantic scene understanding for mobile robots in different indoor robotics applications. While most existing lidar semantic datasets focus on 3D lidar sensors and autonomous driving scenarios, the proposed 2D lidar semantic dataset is the first public dataset for 2D lidar sensors and mobile robots. It contains data collected in six different indoor environments and has nine categories of typical objects in indoor environments. A novel semi-automatic semantic labeling framework is proposed to provide point-wise annotation for the dataset with minimal human effort. Based on this 2D lidar dataset, a hardware-friendly stochastic semantic segmentation benchmark is proposed to enable 2D lidar sensors to have semantic scene understanding capabilities. A series of segmentation tests are performed to demonstrate that the proposed learning-based segmentation benchmark can achieve more accurate and richer segmentation for each lidar point compared to traditional geometry-based extraction algorithms.


[422] 2409.09904

Enhancing Visual Inertial SLAM with Magnetic Measurements

This paper presents an extension to visual inertial odometry (VIO) by introducing tightly-coupled fusion of magnetometer measurements. A sliding window of keyframes is optimized by minimizing re-projection errors, relative inertial errors, and relative magnetometer orientation errors. The results of IMU orientation propagation are used to efficiently transform magnetometer measurements between frames producing relative orientation constraints between consecutive frames. The soft and hard iron effects are calibrated using an ellipsoid fitting algorithm. The introduction of magnetometer data results in significant reductions in the orientation error and also in recovery of the true yaw orientation with respect to the magnetic north. The proposed framework operates in all environments with slow-varying magnetic fields, mainly outdoors and underwater. We have focused our work on the underwater domain, especially in underwater caves, as the narrow passage and turbulent flow make it difficult to perform loop closures and reset the localization drift. The underwater caves present challenges to VIO due to the absence of ambient light and the confined nature of the environment, while also being a crucial source of fresh water and providing valuable historical records. Experimental results from underwater caves demonstrate the improvements in accuracy and robustness introduced by the proposed VIO extension.
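
Ellipsoid fitting for hard/soft-iron calibration reduces, in its simplest axis-aligned form, to a linear least-squares problem. A simplified sketch (the full method fits a general quadric; the axis-aligned model here is an assumption for brevity):

    import numpy as np

    def calibrate_magnetometer(m):
        # Fit a*x^2 + b*y^2 + c*z^2 + d*x + e*y + f*z = 1 to raw readings
        # m (N x 3), then recover the hard-iron offset (ellipsoid center)
        # and per-axis soft-iron scale (semi-axis lengths).
        x, y, z = m[:, 0], m[:, 1], m[:, 2]
        D = np.column_stack([x * x, y * y, z * z, x, y, z])
        coeffs, *_ = np.linalg.lstsq(D, np.ones(len(m)), rcond=None)
        a, b, c, d, e, f = coeffs
        offset = np.array([-d / (2 * a), -e / (2 * b), -f / (2 * c)])
        gain = 1 + a * offset[0]**2 + b * offset[1]**2 + c * offset[2]**2
        scale = np.sqrt(gain / np.array([a, b, c]))   # semi-axis lengths
        return offset, scale

    # Calibrated reading: (m_raw - offset) / scale, applied per axis.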


[423] 2409.09905

Rediscovering the Latent Dimensions of Personality with Large Language Models as Trait Descriptors

Assessing personality traits using large language models (LLMs) has emerged as an interesting and challenging area of research. While previous methods employ explicit questionnaires, often derived from the Big Five model of personality, we hypothesize that LLMs implicitly encode notions of personality when modeling next-token responses. To demonstrate this, we introduce a novel approach that uncovers latent personality dimensions in LLMs by applying singular value decomposition (SVD) to the log-probabilities of trait-descriptive adjectives. Our experiments show that LLMs "rediscover" core personality traits such as extraversion, agreeableness, conscientiousness, neuroticism, and openness without relying on direct questionnaire inputs, with the top-5 factors corresponding to Big Five traits explaining 74.3% of the variance in the latent space. Moreover, we can use the derived principal components to assess personality along the Big Five dimensions, and achieve improvements in average personality prediction accuracy of up to 5% over fine-tuned models, and up to 21% over direct LLM-based scoring techniques.
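
The recipe lends itself to a compact sketch: build a matrix of log-probabilities of trait-descriptive adjectives, center it, and inspect the leading SVD factors. The matrix below is a random stand-in for real model outputs, used only to show the mechanics:

    import numpy as np

    # Rows: prompts/personas, columns: trait-descriptive adjectives.
    logprobs = np.random.randn(200, 100)           # hypothetical data
    X = logprobs - logprobs.mean(axis=0)           # center each adjective
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    explained = S**2 / (S**2).sum()
    print("variance explained by top-5 factors:", explained[:5].sum())
    scores = X @ Vt[:5].T                          # project onto top-5 factors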


[424] 2409.09907

Rapid Adaptation of Earth Observation Foundation Models for Segmentation

This study investigates the efficacy of Low-Rank Adaptation (LoRA) in fine-tuning Earth Observation (EO) foundation models for flood segmentation. We hypothesize that LoRA, a parameter-efficient technique, can significantly accelerate the adaptation of large-scale EO models to this critical task while maintaining high performance. We apply LoRA to fine-tune a state-of-the-art EO foundation model pre-trained on diverse satellite imagery, using a curated dataset of flood events. Our results demonstrate that LoRA-based fine-tuning (r-256) improves F1 score by 6.66 points and IoU by 0.11 compared to a frozen encoder baseline, while significantly reducing computational costs. Notably, LoRA outperforms full fine-tuning, which proves computationally infeasible on our hardware. We further assess generalization through out-of-distribution (OOD) testing on a geographically distinct flood event, where LoRA configurations also show improved OOD performance over the baseline. This work contributes to research on efficient adaptation of foundation models for specialized EO tasks, with implications for rapid response systems in disaster management. Our findings demonstrate LoRA's potential for enabling faster deployment of accurate flood segmentation models in resource-constrained, time-critical scenarios.
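
The mechanism behind LoRA's parameter efficiency is a frozen base weight plus a trainable low-rank update. A minimal sketch (the rank and scaling mirror the r-256 configuration mentioned above, but the module itself is generic, not the paper's code):

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        # Frozen base layer W plus trainable low-rank update (alpha/r) * B A.
        def __init__(self, base: nn.Linear, r: int = 256, alpha: float = 256.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False        # freeze pre-trained weights
            self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, r))  # starts at 0
            self.scaling = alpha / r

        def forward(self, x):
            return self.base(x) + self.scaling * (x @ self.A.T) @ self.B.T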


[425] 2409.09912

Discovery and Characterization of Cross-Area and Intra-Area SSOs Sensitive to Delay in Droop Control of Grid-Forming Converters

Subsynchronous oscillations (SSOs) involving grid-forming converters (GFCs) are in a less familiar territory of power system dynamics. This letter reports a new phenomenon namely cross-area SSOs in grids with 100% droop-controlled GFC-based renewable penetration, which was discovered during our study on evaluating the adequacy of quasistationary phasor calculus (QPC) and space phasor calculus (SPC)-based models in capturing SSOs. We present frequency-domain characterization of such oscillatory modes in addition to intra-area SSOs in grids involving GFCs and study the impact of a delay in power-frequency droop feedback loop in regards to their stability. Electromagnetic transient (EMT) simulations validate our findings.


[426] 2409.09913

Practical and Asymptotically Optimal Quantization of High-Dimensional Vectors in Euclidean Space for Approximate Nearest Neighbor Search

Approximate nearest neighbor (ANN) query in high-dimensional Euclidean space is a key operator in database systems. For this query, quantization is a popular family of methods developed for compressing vectors and reducing memory consumption. Recently, a method called RaBitQ achieved state-of-the-art performance among these methods: it produces better empirical performance in both accuracy and efficiency at the same compression rate and provides rigorous theoretical guarantees. However, the method is designed only for compressing vectors at a high compression rate (32x) and lacks support for achieving higher accuracy by using more space. In this paper, we introduce a new quantization method that addresses this limitation by extending RaBitQ. The new method inherits the theoretical guarantees of RaBitQ and achieves asymptotic optimality in terms of the trade-off between space and error bounds, as proven in this study. Additionally, we present efficient implementations of the method, enabling its application to ANN queries to reduce both space and time consumption. Extensive experiments on real-world datasets confirm that our method consistently outperforms the state-of-the-art baselines in both accuracy and efficiency when using the same amount of memory.
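
As background, the sketch below illustrates the generic idea of quantizing unit vectors at a 32x compression rate (one sign bit per float32 dimension) and estimating inner products from the codes; RaBitQ's randomized rotation and error-bound machinery, and this paper's extension to more bits per dimension, are not reproduced here.

# Generic 1-bit vector quantization as an illustration of 32x compression.
import numpy as np

rng = np.random.default_rng(0)
d = 128
data = rng.normal(size=(1000, d))
data /= np.linalg.norm(data, axis=1, keepdims=True)   # unit vectors

codes = data > 0                       # 1 bit per float32 dimension: 32x smaller
reconstructed = np.where(codes, 1.0, -1.0) / np.sqrt(d)  # unit-norm codewords

query = rng.normal(size=d)
query /= np.linalg.norm(query)
true_ip = data @ query                 # exact inner products
est_ip = reconstructed @ query         # estimates from the quantized codes
print("mean abs inner-product error:", np.abs(true_ip - est_ip).mean())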


[427] 2409.09915

Forearm Ultrasound based Gesture Recognition on Edge

Ultrasound imaging of the forearm has demonstrated significant potential for accurate hand gesture classification. Despite this progress, there has been limited focus on developing a stand-alone, end-to-end gesture recognition system that is mobile, real-time, and user-friendly. To bridge this gap, this paper explores the deployment of deep neural networks for forearm ultrasound-based hand gesture recognition on edge devices. Utilizing quantization techniques, we achieve substantial reductions in model size while maintaining high accuracy and low latency. Our best model, with Float16 quantization, achieves a test accuracy of 92% and an inference time of 0.31 seconds on a Raspberry Pi. These results demonstrate the feasibility of efficient, real-time gesture recognition on resource-limited edge devices, paving the way for wearable ultrasound-based systems.
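
For reference, the Float16 post-training quantization path reported in the abstract can be realized with TensorFlow Lite roughly as follows; the network architecture, input size, and gesture count below are assumptions.

# Float16 post-training quantization with TensorFlow Lite.
import tensorflow as tf

# Stand-in for the trained gesture network (shapes are assumed).
model = tf.keras.Sequential([
    tf.keras.layers.Input((128, 128, 1)),            # assumed ultrasound frame size
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"), # assumed gesture count
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]  # store weights as fp16
tflite_model = converter.convert()

with open("gesture_fp16.tflite", "wb") as f:
    f.write(tflite_model)   # deployable on a Raspberry Pi via the TFLite runtime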


[428] 2409.09916

SFR-RAG: Towards Contextually Faithful LLMs

Retrieval Augmented Generation (RAG), a paradigm that integrates external contextual information with large language models (LLMs) to enhance factual accuracy and relevance, has emerged as a pivotal area in generative AI. The LLMs used in RAG applications are required to faithfully and completely comprehend the provided context and users' questions, avoid hallucination, handle unanswerable, counterfactual, or otherwise low-quality and irrelevant contexts, perform complex multi-hop reasoning, and produce reliable citations. In this paper, we introduce SFR-RAG, a small LLM that is instruction-tuned with an emphasis on context-grounded generation and hallucination minimization. We also present ContextualBench, a new evaluation framework compiling multiple popular and diverse RAG benchmarks, such as HotpotQA and TriviaQA, with consistent RAG settings to ensure reproducibility and consistency in model assessments. Experimental results demonstrate that our SFR-RAG-9B model outperforms leading baselines such as Command-R+ (104B) and GPT-4o, achieving state-of-the-art results in 3 out of 7 benchmarks in ContextualBench with significantly fewer parameters. The model is also shown to be resilient to alterations in the contextual information and to behave appropriately when relevant context is removed. Additionally, the SFR-RAG model maintains competitive performance in general instruction-following tasks and function-calling capabilities.


[429] 2409.09918

Hardware-Accelerated Ray Tracing for Discrete and Continuous Collision Detection on GPUs

This paper presents a set of simple and intuitive robot collision detection algorithms that show substantial scaling improvements for high geometric complexity and large numbers of collision queries by leveraging hardware-accelerated ray tracing on GPUs. It is the first work to leverage hardware-accelerated ray tracing for direct volume mesh-to-mesh discrete collision detection and to apply it to continuous collision detection. We introduce two methods: Ray-Traced Discrete-Pose Collision Detection for exact robot mesh to obstacle mesh collision detection, and Ray-Traced Continuous Collision Detection for robot sphere representation to obstacle mesh swept collision detection, using piecewise-linear or quadratic B-splines. For robot link meshes totaling 24k triangles and obstacle meshes of over 190k triangles, our methods were up to 3 times faster in batched discrete-pose queries than a state-of-the-art GPU-based method using a sphere robot representation. For the same obstacle mesh scene, our sphere-robot continuous collision detection was up to 9 times faster depending on trajectory batch size. We also performed a detailed measurement of the volume coverage accuracy of various sphere/mesh pose/path representations to provide insight into the tradeoffs between speed and accuracy of different robot collision detection methods.


[430] 2409.09920

Multi-Step Embed to Control: A Novel Deep Learning-based Approach for Surrogate Modelling in Reservoir Simulation

Reduced-order models, also known as proxy or surrogate models, are approximate models that are less computationally expensive than fully descriptive models. With the integration of machine learning, these models have garnered increasing research interest recently. However, many existing reduced-order modeling methods, such as embed to control (E2C) and embed to control and observe (E2CO), fall short in long-term predictions due to the accumulation of prediction errors over time. This issue arises partly from the one-step prediction framework inherent in E2C and E2CO architectures. This paper introduces a deep learning-based surrogate model, referred to as the multi-step embed-to-control model, for the construction of proxy models with improved long-term prediction performance. Unlike E2C and E2CO, the proposed network considers multiple forward transitions in the latent space at a time using a Koopman operator, allowing the model to incorporate a sequence of state snapshots during the training phase. Additionally, the loss function of this approach has been redesigned to accommodate these multiple transitions and to respect the underlying physical principles. To validate the efficacy of the proposed method, the developed framework was implemented within a two-phase (oil and water) reservoir model under a waterflooding scheme. Comparative analysis demonstrates that the proposed model significantly outperforms the conventional E2C model in long-term simulation scenarios. Notably, there was a substantial reduction in temporal errors in the prediction of saturation profiles and a decent improvement in pressure forecasting accuracy.
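
The key architectural difference from one-step E2C can be sketched as a multi-step latent rollout loss, shown below with assumed shapes and plain MLP encoder/decoder stand-ins rather than the paper's actual networks.

# Illustrative multi-step latent rollout: a linear (Koopman-style) latent
# operator is applied repeatedly and the loss sums over all predicted steps.
import torch
import torch.nn as nn

latent, state_dim, horizon = 32, 64, 5

encoder = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(), nn.Linear(128, latent))
decoder = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(), nn.Linear(128, state_dim))
koopman = nn.Linear(latent, latent, bias=False)   # linear latent transition

def multi_step_loss(snapshots):
    # snapshots: (batch, horizon+1, state_dim) sequence of state snapshots
    z = encoder(snapshots[:, 0])
    loss = 0.0
    for k in range(1, snapshots.shape[1]):
        z = koopman(z)                            # advance one more step in latent space
        x_hat = decoder(z)
        loss = loss + ((x_hat - snapshots[:, k]) ** 2).mean()
    return loss / horizon

batch = torch.randn(8, horizon + 1, state_dim)    # placeholder data
print(multi_step_loss(batch).item())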


[431] 2409.09921

Towards Real-Time Generation of Delay-Compensated Video Feeds for Outdoor Mobile Robot Teleoperation

Teleoperation is an important technology for enabling supervisors to control agricultural robots remotely. However, environmental factors in dense crop rows and limitations in network infrastructure hinder the reliability of data streamed to teleoperators. These issues result in delayed and variable frame rate video feeds that often deviate significantly from the robot's actual viewpoint. We propose a modular learning-based vision pipeline to generate delay-compensated images in real-time for supervisors. Our extensive offline evaluations demonstrate that our method generates more accurate images compared to state-of-the-art approaches in our setting. Additionally, ours is one of the few works to evaluate a delay-compensation method in outdoor field environments with complex terrain, on data from a real robot, in real-time. Additional videos are provided at https://sites.google.com/illinois.edu/comp-teleop.


[432] 2409.09923

Understanding Code Change with Micro-Changes

A crucial activity in software maintenance and evolution is the comprehension of the changes performed by developers when they submit a pull request and/or perform a commit on the repository. Typically, code changes are represented in the form of code diffs, textual representations highlighting the differences between two file versions and depicting the added, removed, and changed lines. This simplistic representation must be interpreted by developers and mentally lifted to a higher abstraction level that more closely resembles natural language descriptions and eases the creation of a mental model of the changes. However, the textual diff-based representation is cumbersome, and the lifting requires considerable domain knowledge and programming skills. We present an approach, based on the concept of micro-change, to overcome these difficulties, translating code diffs into a series of pre-defined change operations which can be described in natural language. We present a catalog of micro-changes, together with an automated micro-change detector. To evaluate our approach, we performed an empirical study on a large set of open-source repositories, focusing on a subset of our micro-change catalog, namely those related to changes affecting the conditional logic. We found that our detector is capable of explaining more than 67% of the changes taking place in the systems under study.


[433] 2409.09927

Towards Data Contamination Detection for Modern Large Language Models: Limitations, Inconsistencies, and Oracle Challenges

As large language models achieve increasingly impressive results, questions arise about whether such performance stems from generalizability or mere data memorization. Thus, numerous data contamination detection methods have been proposed. However, these approaches are often validated with traditional benchmarks and early-stage LLMs, leaving uncertainty about their effectiveness when evaluating state-of-the-art LLMs on the contamination of more challenging benchmarks. To address this gap and provide a dual investigation of SOTA LLM contamination status and detection method robustness, we evaluate five contamination detection approaches with four state-of-the-art LLMs across eight challenging datasets often used in modern LLM evaluation. Our analysis reveals that (1) current methods have non-trivial limitations in their assumptions and practical applications; (2) notable difficulties exist in detecting contamination introduced during instruction fine-tuning with answer augmentation; and (3) there is limited consistency among SOTA contamination detection techniques. These findings highlight the complexity of contamination detection in advanced LLMs and the urgent need for further research on robust and generalizable contamination evaluation. Our code is available at https://github.com/vsamuel2003/data-contamination.


[434] 2409.09928

High-Security Hardware Module with PUF and Hybrid Cryptography for Data Security

This research highlights the rapid development of technology in industry, particularly Industry 4.0, supported by fundamental technologies such as the Internet of Things (IoT), cloud computing, big data, and data analysis. Despite providing efficiency, these developments also bring negative impacts, such as increased cyber-attacks, especially in manufacturing. One standard attack in industry is the man-in-the-middle (MITM) attack, which can have severe consequences for physical data transfer, particularly for the integrity of sensor and actuator data in industrial machines. This research proposes a solution by developing a hardware security module (HSM) using a field-programmable gate array (FPGA) with physical unclonable function (PUF) authentication and a hybrid encryption data security system. Experimental results show that this research improves several industrial cybersecurity criteria, securing critical data in industrial machines against cyber-attacks.


[435] 2409.09930

Mining of Switching Sparse Networks for Missing Value Imputation in Multivariate Time Series

Multivariate time series data suffer from the problem of missing values, which hinders the application of many analytical methods. To achieve accurate imputation of these missing values, exploiting inter-correlation by employing the relationships between sequences (i.e., a network) is as important as the use of temporal dependency, since a sequence normally correlates with other sequences. Moreover, exploiting an adequate network depending on time is also necessary, since the network varies over time. However, in real-world scenarios, we normally know neither the network structure nor when the network changes beforehand. Here, we propose a missing value imputation method for multivariate time series, namely MissNet, that is designed to exploit temporal dependency with a state-space model and inter-correlation by switching sparse networks. The network encodes conditional independence between features, which helps us understand the important relationships for imputation visually. Our algorithm, which scales linearly with the length of the data, alternately infers networks and fills in missing values using the networks while discovering the switching of the networks. Extensive experiments demonstrate that MissNet outperforms the state-of-the-art algorithms for multivariate time series imputation and provides interpretable results.


[436] 2409.09931

Generalizability of Graph Neural Network Force Fields for Predicting Solid-State Properties

Machine-learned force fields (MLFFs) promise to offer a computationally efficient alternative to ab initio simulations for complex molecular systems. However, ensuring their generalizability beyond training data is crucial for their wide application in studying solid materials. This work investigates the ability of a graph neural network (GNN)-based MLFF, trained on Lennard-Jones Argon, to describe solid-state phenomena not explicitly included during training. We assess the MLFF's performance in predicting phonon density of states (PDOS) for a perfect face-centered cubic (FCC) crystal structure at both zero and finite temperatures. Additionally, we evaluate vacancy migration rates and energy barriers in an imperfect crystal using direct molecular dynamics (MD) simulations and the string method. Notably, vacancy configurations were absent from the training data. Our results demonstrate the MLFF's capability to capture essential solid-state properties in good agreement with reference data, even for unseen configurations. We further discuss data engineering strategies to enhance the generalizability of MLFFs. The proposed set of benchmark tests and workflow for evaluating MLFF performance in describing perfect and imperfect crystals pave the way for reliable application of MLFFs in studying complex solid-state materials.


[437] 2409.09933

Arbitrary high order ADER-DG method with local DG predictor for solutions of initial value problems for systems of first-order ordinary differential equations

An adaptation of the arbitrary high order ADER-DG numerical method with a local DG predictor for solving the IVP for first-order non-linear ODE systems is proposed. The proposed numerical method is a completely one-step ODE solver with uniform steps that is simple in its algorithmic and software implementations. We show that the proposed version of the ADER-DG numerical method is A-stable and L-stable. The method demonstrates superconvergence with convergence order 2N+1 for the solution at grid nodes, while the local solution obtained using the local DG predictor has convergence order N+1. An important applied feature of this implementation is the possibility of using the local solution as a solution with subgrid resolution, which makes it possible to obtain a detailed solution even on very coarse coordinate grids. When computed with standard single- or double-precision floating-point representations and large degrees N, the error of the local solution is practically indistinguishable from the error of the solution at the grid nodes. The capabilities of the ADER-DG method for solving stiff ODE systems characterized by extreme stiffness are demonstrated, and estimates of the computational costs of the numerical method are obtained.


[438] 2409.09934

Coordination-free Collaborative Replication based on Operational Transformation

We introduce Coordination-free Collaborative Replication (CCR), a new method for maintaining consistency across replicas in distributed systems without requiring explicit coordination messages. CCR automates conflict resolution, contrasting with traditional data-sharing systems that typically involve centralized update management or predefined consistency rules. Operational Transformation (OT), commonly used in collaborative editing, ensures consistency by transforming operations while maintaining document integrity across replicas. However, OT assumes server-based coordination, which is unsuitable for modern, decentralized Peer-to-Peer (P2P) systems. Conflict-free Replicated Data Types (CRDTs), such as Two-Phase Sets (2P-Sets), guarantee eventual consistency by allowing commutative and associative operations, but often result in counterintuitive behaviors, such as failing to re-add an item to a shopping cart once removed. In contrast, CCR employs a more intuitive approach to replication. It allows for straightforward updates and conflict resolution based on the current data state, enhancing clarity and usability compared to CRDTs. Furthermore, CCR addresses inefficiencies in messaging by developing a versatile protocol based on data stream confluence, thus providing a more efficient and practical solution for collaborative data sharing in distributed systems.
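
The 2P-Set behavior the abstract cites is easy to reproduce; the minimal implementation below shows why a removed item can never be re-added (the tombstone set only grows).

# A minimal Two-Phase Set (2P-Set) CRDT.
class TwoPhaseSet:
    def __init__(self):
        self.added = set()
        self.removed = set()   # tombstones: removals are permanent

    def add(self, x):
        self.added.add(x)

    def remove(self, x):
        if x in self.added:
            self.removed.add(x)

    def __contains__(self, x):
        return x in self.added and x not in self.removed

    def merge(self, other):
        # Set union is commutative and associative, giving eventual consistency.
        self.added |= other.added
        self.removed |= other.removed

cart = TwoPhaseSet()
cart.add("book")
cart.remove("book")
cart.add("book")          # re-add has no observable effect
print("book" in cart)     # False: the tombstone wins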


[439] 2409.09939

Real-time Coupled Centroidal Motion and Footstep Planning for Biped Robots

This paper presents an algorithm that finds a centroidal motion and footstep plan for a Spring-Loaded Inverted Pendulum (SLIP)-like bipedal robot model substantially faster than real-time. This is achieved with a novel representation of the dynamic footstep planning problem, where each point in the environment is considered a potential foothold that can apply a force to the center of mass to keep it on a desired trajectory. For a biped, up to two such footholds per time step must be selected, and we approximate this cardinality constraint with an iteratively reweighted $l_1$-norm minimization. Along with a linearizing approximation of an angular momentum constraint, this results in a quadratic program that can be solved for a contact schedule and center of mass trajectory with automatic gait discovery. A 2 s planning horizon with 13 time steps and 20 surfaces available at each time is solved in 142 ms, roughly ten times faster than comparable existing methods in the literature. We demonstrate the versatility of this program in a variety of simulated environments.
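
The cardinality-constraint surrogate can be illustrated in isolation: the toy sketch below runs iteratively reweighted $l_1$ minimization with proximal gradient steps on a generic least-squares objective, omitting the paper's dynamics and contact constraints.

# Toy iteratively reweighted l1-norm minimization (sparsity surrogate).
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 60))              # e.g., foothold-force-to-CoM map (toy)
x_true = np.zeros(60)
x_true[[5, 17]] = [1.0, -0.5]              # two "active footholds"
b = A @ x_true

lam, eps = 0.1, 1e-3
L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
x = np.zeros(60)
for outer in range(5):                     # reweighting rounds
    w = 1.0 / (np.abs(x) + eps)            # small entries get large penalties
    for _ in range(200):                   # ISTA on the weighted problem
        g = A.T @ (A @ x - b)
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)

print("recovered support:", np.nonzero(np.abs(x) > 1e-4)[0])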


[440] 2409.09940

Robots with Attitude: Singularity-Free Quaternion-Based Model-Predictive Control for Agile Legged Robots

We present a model-predictive control (MPC) framework for legged robots that avoids the singularities associated with common three-parameter attitude representations like Euler angles during large-angle rotations. Our method parameterizes the robot's attitude with singularity-free unit quaternions and makes modifications to the iterative linear-quadratic regulator (iLQR) algorithm to deal with the resulting geometry. The derivation of our algorithm requires only elementary calculus and linear algebra, deliberately avoiding the abstraction and notation of Lie groups. We demonstrate the performance and computational efficiency of quaternion MPC in several experiments on quadruped and humanoid robots.
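
As background for the attitude parameterization, the sketch below shows the elementary quaternion operations such a controller builds on (the Hamilton product and a three-parameter attitude error); the paper's iLQR modifications themselves are not reproduced.

# Elementary quaternion utilities (scalar-first convention [w, x, y, z]).
import numpy as np

def quat_mul(q, p):
    # Hamilton product of two unit quaternions.
    w1, v1 = q[0], q[1:]
    w2, v2 = p[0], p[1:]
    return np.concatenate(([w1 * w2 - v1 @ v2],
                           w1 * v2 + w2 * v1 + np.cross(v1, v2)))

def quat_conj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def attitude_error(q, q_ref):
    # Relative rotation q_err = q_ref^{-1} * q, mapped to an axis-angle 3-vector.
    q_err = quat_mul(quat_conj(q_ref), q)
    w, v = np.clip(q_err[0], -1.0, 1.0), q_err[1:]
    n = np.linalg.norm(v)
    if n < 1e-9:
        return np.zeros(3)
    return 2.0 * np.arctan2(n, w) * v / n   # well-defined for any rotation size

q = np.array([np.cos(0.3), np.sin(0.3), 0.0, 0.0])   # 0.6 rad about x
print(attitude_error(q, np.array([1.0, 0.0, 0.0, 0.0])))   # ~[0.6, 0, 0]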


[441] 2409.09941

ROS2WASM: Bringing the Robot Operating System to the Web

The Robot Operating System (ROS) has become the de facto standard middleware in robotics, widely adopted across domains ranging from education to industrial applications. The RoboStack distribution has extended ROS's accessibility by facilitating installation across all major operating systems and architectures, integrating seamlessly with scientific tools such as PyTorch and Open3D. This paper presents ROS2WASM, a novel integration of RoboStack with WebAssembly, enabling the execution of ROS 2 and its associated software directly within web browsers, without requiring local installations. This approach significantly enhances reproducibility and shareability of research, lowers barriers to robotics education, and leverages WebAssembly's robust security framework to protect against malicious code. We detail our methodology for cross-compiling ROS 2 packages into WebAssembly, the development of a specialized middleware for ROS 2 communication within browsers, and the implementation of a web platform available at www.ros2wasm.dev that allows users to interact with ROS 2 environments. Additionally, we extend support to the Robotics Toolbox for Python and adapt its Swift simulator for browser compatibility. Our work paves the way for unprecedented accessibility in robotics, offering scalable, secure, and reproducible environments that have the potential to transform educational and research paradigms.


[442] 2409.09944

Fault Analysis And Predictive Maintenance Of Induction Motor Using Machine Learning

Induction motors are among the most crucial electrical equipment and are extensively used in industries in a wide range of applications. This paper presents a machine learning model for the detection and classification of induction motor faults using three-phase voltages and currents as inputs. The aim of this work is to protect vital electrical components and to prevent abnormal event progression through early detection and diagnosis. This work presents a feed-forward artificial neural network model to detect some of the commonly occurring electrical faults, such as overvoltage, undervoltage, single phasing, unbalanced voltage, overload, and ground fault. A separate model-free monitoring system, wherein the motor itself acts as a sensor, is presented; the only monitored signals are the inputs given to the motor. Limits for current and voltage values under faulty and healthy conditions are set by a classifier. Real-time data from a 0.33 HP induction motor is used to train and test the neural network. The model analyzes the voltage and current values given at a particular instant and classifies the data as either no fault or a specific fault. The model is then interfaced with a real motor to accurately detect and classify the faults so that further necessary action can be taken.
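
A hedged sketch of such a feed-forward classifier over three-phase voltages and currents is shown below; the synthetic features and labels are placeholders for the paper's measurements from the 0.33 HP motor, so the reported accuracy here is meaningless by construction.

# Feed-forward fault classifier over three-phase electrical measurements.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

FAULTS = ["no_fault", "overvoltage", "undervoltage", "single_phasing",
          "unbalanced_voltage", "overload", "ground_fault"]

rng = np.random.default_rng(0)
X = rng.normal(loc=230.0, scale=20.0, size=(2000, 6))   # Va,Vb,Vc,Ia,Ib,Ic (toy)
y = rng.integers(0, len(FAULTS), size=2000)             # placeholder labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy (random labels, so ~chance):", clf.score(X_te, y_te))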


[443] 2409.09945

Tracking the spatial dynamics of the synthetic opioid crisis in the USA, 2013-2020 using human mobility-based graph neural network

Synthetic opioids are the most common drugs involved in drug-overdose mortalities in the U.S. The Centers for Disease Control and Prevention reported that in 2018, about 70% of all drug overdose deaths involved opioids, and 67% of all opioid-involved deaths were accounted for by synthetic opioids. In this study, we investigated the spread of synthetic opioids between 2013 and 2020 in the U.S., analyzed the relationship between the spatiotemporal pattern of synthetic opioid-involved deaths and another key opioid, heroin, and compared patterns of deaths involving these two types of drugs during this time period. Spatial connections between counties were incorporated into a graph convolutional neural network model to represent and analyze the spread of synthetic opioid-involved deaths in the context of heroin-involved deaths.


[444] 2409.09947

Gaps or Hallucinations? Gazing into Machine-Generated Legal Analysis for Fine-grained Text Evaluations

Large Language Models (LLMs) show promise as a writing aid for professionals performing legal analyses. However, LLMs can often hallucinate in this setting, in ways difficult to recognize by non-professionals and existing text evaluation metrics. In this work, we pose the question: when can machine-generated legal analysis be evaluated as acceptable? We introduce the neutral notion of gaps, as opposed to hallucinations in a strict erroneous sense, to refer to the difference between human-written and machine-generated legal analysis. Gaps do not always equate to invalid generation. Working with legal experts, we consider the CLERC generation task proposed in Hou et al. (2024b), leading to a taxonomy, a fine-grained detector for predicting gap categories, and an annotated dataset for automatic evaluation. Our best detector achieves 67% F1 score and 80% precision on the test set. Employing this detector as an automated metric on legal analysis generated by SOTA LLMs, we find around 80% contain hallucinations of different kinds.


[445] 2409.09948

Enhancing Industrial Cybersecurity: SoftHSM Implementation on SBCs for Mitigating MITM Attacks

The rapid growth of industrial technology, driven by automation, IoT, and cloud computing, has also increased the risk of cyberattacks, such as Man-in-the-Middle (MITM) attacks. A standard solution for protecting data is a Hardware Security Module (HSM), but its high implementation cost has led to the development of a more affordable alternative: SoftHSM. This software-based module manages encryption and decryption keys using cryptographic algorithms. This study simulates the use of SoftHSM on a single-board computer (SBC) to enhance industrial system security and cost-effectively mitigate MITM attacks. The security system integrates AES and RSA cryptographic algorithms, with SoftHSM handling RSA key storage. The results show that SoftHSM protects RSA private keys from extraction attempts, ensuring data security. In terms of performance, the system achieved an average encryption time of 3.29 seconds, a slot access time of 0.018 seconds, and a decryption time of 2.558 seconds. It also demonstrated efficient memory usage, with 37.24% for encryption and 24.24% for decryption, while drawing 0.72 A at 5.20 V during processing.


[446] 2409.09951

Optimal ablation for interpretability

Interpretability studies often involve tracing the flow of information through machine learning models to identify specific model components that perform relevant computations for tasks of interest. Prior work quantifies the importance of a model component on a particular task by measuring the impact of performing ablation on that component, or simulating model inference with the component disabled. We propose a new method, optimal ablation (OA), and show that OA-based component importance has theoretical and empirical advantages over measuring importance via other ablation methods. We also show that OA-based component importance can benefit several downstream interpretability tasks, including circuit discovery, localization of factual recall, and latent prediction.
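
To make the baseline notion concrete, the sketch below implements zero and mean ablation with a PyTorch forward hook; the paper's optimal ablation instead optimizes the replacement value, which is not reproduced here.

# Ablating a component by overwriting its output during the forward pass.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
x, y = torch.randn(64, 10), torch.randint(0, 2, (64,))
loss_fn = nn.CrossEntropyLoss()

def ablate(module, value):
    # Registers a hook whose return value replaces the module's output.
    def hook(_mod, _inp, _out):
        return value.expand_as(_out)
    return module.register_forward_hook(hook)

base = loss_fn(model(x), y).item()

h = ablate(model[0], torch.zeros(1, 32))            # zero ablation
zero = loss_fn(model(x), y).item(); h.remove()

with torch.no_grad():
    mean_act = model[0](x).mean(0, keepdim=True)    # dataset-mean activation
h = ablate(model[0], mean_act)                      # mean ablation
mean = loss_fn(model(x), y).item(); h.remove()

# Larger loss increases indicate the component matters more for the task.
print(f"base={base:.3f} zero-ablated={zero:.3f} mean-ablated={mean:.3f}")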


[447] 2409.09953

Uncertainty-Guided Appearance-Motion Association Network for Out-of-Distribution Action Detection

Out-of-distribution (OOD) detection aims to detect and reject test samples with semantic shifts, to prevent models trained on an in-distribution (ID) dataset from producing unreliable predictions. Existing works only extract appearance features from image datasets and cannot handle dynamic multimedia scenarios with rich motion information. Therefore, we target a more realistic and challenging OOD detection task: OOD action detection (ODAD). Given an untrimmed video, ODAD first classifies the ID actions and recognizes the OOD actions, and then localizes the ID and OOD actions. To this end, in this paper, we propose a novel Uncertainty-Guided Appearance-Motion Association Network (UAAN), which explores both appearance features and motion contexts to reason about spatial-temporal inter-object interaction for ODAD. First, we design separate appearance and motion branches to extract corresponding appearance-oriented and motion-aspect object representations. In each branch, we construct a spatial-temporal graph to reason about appearance-guided and motion-driven inter-object interaction. Then, we design an appearance-motion attention module to fuse the appearance and motion features for final action detection. Experimental results on two challenging datasets show that UAAN beats state-of-the-art methods by a significant margin, illustrating its effectiveness.


[448] 2409.09956

Context-aware Advertisement Modeling and Applications in Rapid Transit Systems

In today's businesses, marketing is central to growth, and marketing quality is as important as product quality and its associated metrics. The quality of marketing depends on targeting the right person. Technology adoption has been slow in many fields but has made an impact on several aspects of human life; in marketing, for instance, recent developments have driven a significant shift toward data-driven approaches. In this paper, we present an advertisement model using behavioral and tracking analysis. We extract users' behavioral data while upholding privacy principles and perform data manipulation and pattern mining for effective analysis. We present a model using the agent-based modeling (ABM) technique, targeting rapid transit system users, to reach the right person in advertisement applications. We also outline the Overview, Design concepts, and Details (ODD) protocol of the ABM.


[449] 2409.09957

Deep Graph Anomaly Detection: A Survey and New Perspectives

Graph anomaly detection (GAD), which aims to identify unusual graph instances (nodes, edges, subgraphs, or graphs), has attracted increasing attention in recent years due to its significance in a wide range of applications. Deep learning approaches, graph neural networks (GNNs) in particular, have been emerging as a promising paradigm for GAD, owing to their strong capability in capturing complex structure and/or node attributes in graph data. Considering the large number of methods proposed for GNN-based GAD, it is of paramount importance to summarize the methodologies and findings in the existing GAD studies, so that we can pinpoint effective model designs for tackling open GAD problems. To this end, in this work we aim to present a comprehensive review of deep learning approaches for GAD. Existing GAD surveys are focused on task-specific discussions, making it difficult to understand the technical insights of existing methods and their limitations in addressing some unique challenges in GAD. To fill this gap, we first discuss the problem complexities and their resulting challenges in GAD, and then provide a systematic review of current deep GAD methods from three novel methodological perspectives: GNN backbone design, proxy task design for GAD, and graph anomaly measures. To deepen the discussion, we further propose a taxonomy of 13 fine-grained method categories under these three perspectives to provide more in-depth insights into the model designs and their capabilities. To facilitate experimentation and validation, we also summarize a collection of widely-used GAD datasets and an empirical comparison. We further discuss multiple open problems to inspire more future high-quality research. A continuously updated repository for datasets, links to the codes of algorithms, and empirical comparison is available at https://github.com/mala-lab/Awesome-Deep-Graph-Anomaly-Detection.


[450] 2409.09958

An Offline Adaptation Framework for Constrained Multi-Objective Reinforcement Learning

In recent years, significant progress has been made in multi-objective reinforcement learning (RL) research, which aims to balance multiple objectives by incorporating preferences for each objective. In most existing studies, specific preferences must be provided during deployment to indicate the desired policies explicitly. However, designing these preferences depends heavily on human prior knowledge, which is typically obtained through extensive observation of high-performing demonstrations with expected behaviors. In this work, we propose a simple yet effective offline adaptation framework for multi-objective RL problems without assuming handcrafted target preferences, but only given several demonstrations to implicitly indicate the preferences of expected policies. Additionally, we demonstrate that our framework can naturally be extended to meet constraints on safety-critical objectives by utilizing safe demonstrations, even when the safety thresholds are unknown. Empirical results on offline multi-objective and safe tasks demonstrate the capability of our framework to infer policies that align with real preferences while meeting the constraints implied by the provided demonstrations.


[451] 2409.09959

Mission Planning on Autonomous Avoidance for Spacecraft Confronting Orbital Debris

This paper investigates the mission planning problem for spacecraft confronting orbital debris to achieve autonomous avoidance. First, combined with the avoidance requirements, a closed-loop framework for autonomous avoidance of orbital debris is proposed. Under the established mission planning model, a two-stage planning approach is proposed to coordinate the conflict between routine tasks and debris avoidance. During plan expansion, the temporal constraints of durative actions are handled through ordering choices. Meanwhile, dynamic resource variables satisfying instantaneous numerical change and continuous linear change are reasoned about during the execution of actions. Linear Programming (LP) can solve the bounds of variables in each state, which is used to check the consistency of the interacting constraints on duration and resources. Then, a temporal relaxed planning graph (TRPG) heuristic is developed to guide the plan towards the goal. Finally, simulation demonstrates that the proposed mission planning strategy effectively achieves autonomous debris avoidance for the spacecraft.


[452] 2409.09967

Hybrid Aerial-Ground Vehicle Autonomy in GPS-denied Environments

The DARPA Subterranean Challenge is leading the development of robots capable of mapping underground mines and tunnels up to 8 km in length and identifying objects and people. Developing these autonomous abilities paves the way for future planetary cave and surface exploration missions. The Co-STAR team, competing in this challenge, is developing a hybrid aerial-ground vehicle known as the Rollocopter. The current design of this vehicle is a drone with wheels attached. This allows the vehicle to roll, actuated by the propellers, and fly only when necessary, hence benefiting from the reduced power consumption of the ground mode and the enhanced mobility of the aerial mode. This thesis focuses on the development and increased robustness of the local planning architecture for the Rollocopter. The first development of this thesis is a local planner capable of collision avoidance. The local planning node provides the basic functionality required for the vehicle to navigate autonomously. The next stage was augmenting this with the ability to plan more reliably without localisation. This was then integrated with a hybrid mobility mode capable of rolling and flying to exploit the power and mobility benefits of the respective configurations. A traversability analysis algorithm for determining which terrain the vehicle is able to traverse is in the late stages of development for informing the decisions of the hybrid planner. A simulator was developed to test the planning algorithms and improve the robustness of the vehicle to different environments. The results presented in this thesis relate to the mobility of the Rollocopter and the range of environments that the vehicle is capable of traversing. Videos are included in which the vehicle successfully navigates through dust-ridden tunnels, horizontal mazes, and areas with rough terrain.


[453] 2409.09968

Artificial Intelligence-Based Opportunistic Coronary Calcium Screening in the Veterans Affairs National Healthcare System

Coronary artery calcium (CAC) is highly predictive of cardiovascular events. While millions of chest CT scans are performed annually in the United States, CAC is not routinely quantified from scans done for non-cardiac purposes. A deep learning algorithm was developed using 446 expert segmentations to automatically quantify CAC on non-contrast, non-gated CT scans (AI-CAC). Our study differs from prior works as we leverage imaging data across the Veterans Affairs national healthcare system, from 98 medical centers, capturing extensive heterogeneity in imaging protocols, scanners, and patients. AI-CAC performance on non-gated scans was compared against clinical standard ECG-gated CAC scoring. Non-gated AI-CAC differentiated zero vs. non-zero and less than 100 vs. 100 or greater Agatston scores with accuracies of 89.4% (F1 0.93) and 87.3% (F1 0.89), respectively, in 795 patients with paired gated scans within a year of a non-gated CT scan. Non-gated AI-CAC was predictive of 10-year all-cause mortality (CAC 0 vs. >400 group: 25.4% vs. 60.2%, Cox HR 3.49, p < 0.005), and composite first-time stroke, MI, or death (CAC 0 vs. >400 group: 33.5% vs. 63.8%, Cox HR 3.00, p < 0.005). In a screening dataset of 8,052 patients with low-dose lung cancer-screening CTs (LDCT), 3,091/8,052 (38.4%) individuals had AI-CAC >400. Four cardiologists qualitatively reviewed LDCT images from a random sample of >400 AI-CAC patients and verified that 527/531 (99.2%) would benefit from lipid-lowering therapy. To the best of our knowledge, this is the first non-gated CT CAC algorithm developed across a national healthcare system, on multiple imaging protocols, without filtering intra-cardiac hardware, and compared against a strong gated CT reference. We report superior performance relative to previous CAC algorithms evaluated against paired gated scans that included patients with intra-cardiac hardware.


[454] 2409.09969

2S-ODIS: Two-Stage Omni-Directional Image Synthesis by Geometric Distortion Correction

Omni-directional images have been increasingly used in various applications, including virtual reality and SNS (Social Networking Services). However, their availability is comparatively limited in contrast to normal field of view (NFoV) images, since specialized cameras are required to take omni-directional images. Consequently, several methods have been proposed based on generative adversarial networks (GANs) to synthesize omni-directional images, but these approaches have shown difficulties in training the models due to instability and/or significant time consumption. To address these problems, this paper proposes a novel omni-directional image synthesis method, 2S-ODIS (Two-Stage Omni-Directional Image Synthesis), which generates high-quality omni-directional images while drastically reducing the training time. This is realized by utilizing a VQGAN (Vector Quantized GAN) model pre-trained on a large-scale NFoV image database such as ImageNet, without fine-tuning. Since this pre-trained model does not represent the distortions of omni-directional images in the equi-rectangular projection (ERP), it cannot be applied directly to omni-directional image synthesis in ERP. Therefore, a two-stage structure is adopted: first, a global coarse image is created in ERP, and then the image is refined by integrating multiple local NFoV images at higher resolution to compensate for the distortions in ERP, both stages building on the pre-trained VQGAN model. As a result, the proposed method, 2S-ODIS, reduces the training time from 14 days in OmniDreamer to four days while achieving higher image quality.


[455] 2409.09970

A Non-Linear Model Predictive Task-Space Controller Satisfying Shape Constraints for Tendon-Driven Continuum Robots

Tendon-Driven Continuum Robots (TDCRs) have the potential to be used in minimally invasive surgery and industrial inspection, where the robot must enter narrow and confined spaces. We propose a Model Predictive Control (MPC) approach to leverage the non-linear kinematics and redundancy of TDCRs for whole-body collision avoidance, with real-time capability for handling inputs at 30 Hz. Key to our method's effectiveness is the integration of a nominal Piecewise Constant Curvature (PCC) model for efficient computation of feasible trajectories with a local feedback controller to handle modeling uncertainty and disturbances. Our experiments in simulation show that our MPC outperforms a conventional Jacobian-based controller in position tracking, particularly under disturbances and user-defined shape constraints, while also allowing the incorporation of control limits. We further validate our method on a hardware prototype, showcasing its potential for enhancing the safety of teleoperation tasks.


[456] 2409.09971

A Preliminary Add-on Differential Drive System for MRI-Compatible Prostate Robotic System

MRI-targeted biopsy has shown significant advantages over conventional random sextant biopsy, detecting more clinically significant cancers and improving risk stratification. However, needle targeting accuracy, especially in transperineal MRI-guided biopsies, presents a challenge due to needle deflection. This can negatively impact patient outcomes, leading to repeated sampling and inaccurate diagnoses if cancerous tissue isn't properly collected. To address this, we developed a novel differential drive prototype designed to improve needle control and targeting precision. This system, featuring a 2-degree-of-freedom (2-DOF) MRI-compatible cooperative needle driver, distances the robot from the MRI imaging area, minimizing image artifacts and distortions. By using two motors for simultaneous needle insertion and rotation without relative movement, the design reduces MRI interference. In this work, we introduced two mechanical differential drive designs: the ball screw/spline and lead screw/bushing types, and explored both hollow-type and side-pulley differentials. Validation through low-resolution rapid-prototyping demonstrated the feasibility of differential drives in prostate biopsies, with the custom hollow-type hybrid ultrasonic motor (USM) achieving a rotary speed of 75 rpm. The side-pulley differential further increased the speed to 168 rpm, ideal for needle rotation applications. Accuracy assessments showed minimal errors in both insertion and rotation motions, indicating that this proof-of-concept design holds great promise for further development. Ultimately, the differential drive offers a promising solution to the critical issue of needle targeting accuracy in MRI-guided prostate biopsies.


[457] 2409.09972

Securing the Future: Exploring Privacy Risks and Security Questions in Robotic Systems

The integration of artificial intelligence, especially large language models in robotics, has led to rapid advancements in the field. We are now observing an unprecedented surge in the use of robots in our daily lives. The development and continual improvements of robots are moving at an astonishing pace. Although these remarkable improvements facilitate and enhance our lives, several security and privacy concerns have not been resolved yet. Therefore, it has become crucial to address the privacy and security threats of robotic systems while improving our experiences. In this paper, we aim to present existing applications and threats of robotics, anticipated future evolution, and the security and privacy issues they may imply. We present a series of open questions for researchers and practitioners to explore further.


[458] 2409.09975

Constrained Bandwidth Observation Sharing for Multi-Robot Navigation in Dynamic Environments via Intelligent Knapsack

Multi-robot navigation is increasingly crucial in various domains, including disaster response, autonomous vehicles, and warehouse and manufacturing automation. Robot teams often must operate in highly dynamic environments and under strict bandwidth constraints imposed by communication infrastructure, rendering effective observation sharing within the system a challenging problem. This paper presents a novel optimal communication scheme, Intelligent Knapsack (iKnap), for multi-robot navigation in dynamic environments under bandwidth constraints. We model multi-robot communication as belief propagation in a graph of inferential agents. We then formulate the combinatorial optimization for observation sharing as a 0/1 knapsack problem, where each potential pairwise communication between robots is assigned a decision-making utility to be weighed against its bandwidth cost, and the system has some cumulative bandwidth limit. Compared to state-of-the-art broadcast-based optimal communication schemes, iKnap yields significant improvements in navigation performance with respect to scenario complexity while maintaining a similar runtime. Furthermore, iKnap utilizes allocated bandwidth and observational resources more efficiently than existing approaches, especially in very low-resource and high-uncertainty settings. Based on these results, we claim that the proposed method enables more robust collaboration for multi-robot teams in real-world navigation problems.
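
The knapsack core of the formulation can be illustrated with a standard 0/1 dynamic program; the utilities and bandwidth costs below are made up, whereas the paper derives them from belief propagation over the graph of inferential agents.

# 0/1 knapsack over candidate pairwise communications under a bandwidth budget.
def knapsack(utilities, costs, budget):
    n = len(utilities)
    dp = [0.0] * (budget + 1)                       # dp[b] = best utility at budget b
    keep = [[False] * (budget + 1) for _ in range(n)]
    for i in range(n):
        for b in range(budget, costs[i] - 1, -1):   # reverse: each item used once
            if dp[b - costs[i]] + utilities[i] > dp[b]:
                dp[b] = dp[b - costs[i]] + utilities[i]
                keep[i][b] = True
    chosen, b = [], budget                          # backtrack the selection
    for i in range(n - 1, -1, -1):
        if keep[i][b]:
            chosen.append(i)
            b -= costs[i]
    return dp[budget], chosen[::-1]

utility = [4.0, 2.5, 6.0, 1.0]   # decision-making value of each pair's message (toy)
cost = [3, 2, 4, 1]              # bandwidth units per message (toy)
print(knapsack(utility, cost, budget=6))   # -> (8.5, [1, 2])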


[459] 2409.09978

Context-Conditioned Spatio-Temporal Predictive Learning for Reliable V2V Channel Prediction

Achieving reliable multidimensional Vehicle-to-Vehicle (V2V) channel state information (CSI) prediction is both challenging and crucial for optimizing downstream tasks that depend on instantaneous CSI. This work extends traditional prediction approaches by focusing on four-dimensional (4D) CSI, which includes predictions over time, bandwidth, and antenna (TX and RX) space. Such a comprehensive framework is essential for addressing the dynamic nature of mobility environments within intelligent transportation systems, necessitating the capture of both temporal and spatial dependencies across diverse domains. To address this complexity, we propose a novel context-conditioned spatiotemporal predictive learning method. This method leverages causal convolutional long short-term memory (CA-ConvLSTM) to effectively capture dependencies within 4D CSI data, and incorporates context-conditioned attention mechanisms to enhance the efficiency of spatiotemporal memory updates. Additionally, we introduce an adaptive meta-learning scheme tailored for recurrent networks to mitigate the issue of accumulative prediction errors. We validate the proposed method through empirical studies conducted across three different geometric configurations and mobility scenarios. Our results demonstrate that the proposed approach outperforms existing state-of-the-art predictive models, achieving superior performance across various geometries. Moreover, we show that the meta-learning framework significantly enhances the performance of recurrent-based predictive models in highly challenging cross-geometry settings, thus highlighting its robustness and adaptability.


[460] 2409.09979

Optimality Gap of Decentralized Submodular Maximization under Probabilistic Communication

This paper considers the problem of decentralized submodular maximization subject to partition matroid constraint using a sequential greedy algorithm with probabilistic inter-agent message-passing. We propose a communication-aware framework where the probability of successful communication between connected devices is considered. Our analysis introduces the notion of the probabilistic optimality gap, highlighting its potential influence on determining the message-passing sequence based on the agent's broadcast reliability and strategic decisions regarding agents that can broadcast their messages multiple times in a resource-limited environment. This work not only contributes theoretical insights but also has practical implications for designing and analyzing decentralized systems in uncertain communication environments. A numerical example demonstrates the impact of our results.
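
A toy version of sequential greedy with unreliable message passing is sketched below; the coverage objective and the per-message success probability are illustrative, not the paper's model.

# Sequential greedy for a partition matroid (each agent picks one action)
# with probabilistic message passing: a predecessor's choice is visible
# only if its message arrives.
import random

random.seed(0)
ACTIONS = {                       # agent i must choose exactly one of its sets
    0: [{"a", "b"}, {"c"}],
    1: [{"b", "c"}, {"d"}],
    2: [{"c", "d"}, {"a", "e"}],
}

def coverage(sets):
    return len(set().union(*sets)) if sets else 0

def sequential_greedy(p_success):
    chosen = []
    for agent in sorted(ACTIONS):
        visible = [s for s in chosen if random.random() < p_success]
        best = max(ACTIONS[agent],
                   key=lambda s: coverage(visible + [s]) - coverage(visible))
        chosen.append(best)
    return coverage(chosen)

for p in (1.0, 0.5, 0.1):         # lower reliability degrades the achieved value
    avg = sum(sequential_greedy(p) for _ in range(1000)) / 1000
    print(f"p={p}: average coverage {avg:.2f}")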


[461] 2409.09980

From Bytes to Bites: Using Country Specific Machine Learning Models to Predict Famine

Hunger crises are critical global issues affecting millions, particularly in low-income and developing countries. This research investigates how machine learning can be utilized to predict and inform decisions regarding famine and hunger crises. By leveraging a diverse set of variables (natural, economic, and conflict-related), three machine learning models (Linear Regression, XGBoost, and RandomForestRegressor) were employed to predict food consumption scores, a key indicator of household nutrition. The RandomForestRegressor emerged as the most accurate model, with an average prediction error of 10.6%, though accuracy varied significantly across countries, ranging from 2% to over 30%. Notably, economic indicators were consistently the most significant predictors of average household nutrition, while no single feature dominated across all regions, underscoring the necessity for comprehensive data collection and tailored, country-specific models. These findings highlight the potential of machine learning, particularly Random Forests, to enhance famine prediction, suggesting that continued research and improved data gathering are essential for more effective global hunger forecasting.
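
The per-country modeling setup can be sketched with scikit-learn as follows; the feature names and synthetic data are illustrative, not the study's dataset.

# One random forest per country, predicting a food consumption score.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "rainfall_anomaly": rng.normal(size=500),      # natural indicator (assumed)
    "food_price_index": rng.normal(size=500),      # economic indicator (assumed)
    "conflict_events": rng.poisson(2, size=500),   # conflict indicator (assumed)
    "country": rng.integers(0, 5, size=500),
})
df["food_consumption_score"] = (
    50 - 8 * df["food_price_index"] - 3 * df["conflict_events"]
    + rng.normal(scale=5, size=500)
)

for country, grp in df.groupby("country"):
    X = grp.drop(columns=["food_consumption_score", "country"])
    y = grp["food_consumption_score"]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    err = mean_absolute_percentage_error(y_te, rf.predict(X_te))
    print(f"country {country}: MAPE {err:.1%}, top feature "
          f"{X.columns[np.argmax(rf.feature_importances_)]}")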


[462] 2409.09982

Atomic Norm Minimization-based DoA Estimation for IRS-assisted Sensing Systems

Intelligent reflecting surface (IRS) is expected to play a pivotal role in future wireless sensing networks owing to its potential for high-resolution and high-accuracy sensing. In this work, we investigate a multi-target direction-of-arrival (DoA) estimation problem in a semi-passive IRS-assisted sensing system, where IRS reflecting elements (REs) reflect signals from the base station to targets, and IRS sensing elements (SEs) estimate the DoA based on echo signals reflected by the targets. First, instead of solely relying on IRS SEs for DoA estimation as done in the existing literature, this work fully exploits the DoA information embedded in both IRS RE and SE matrices via the atomic norm minimization (ANM) scheme. Subsequently, the Cramér-Rao bound for DoA estimation is derived, revealing an inverse proportionality to $MN^3+NM^3$ in the case of an identity covariance matrix of the IRS measurement matrix and a single target, where $M$ and $N$ are the numbers of IRS SEs and REs, respectively. Finally, extensive numerical results substantiate the superior accuracy and resolution performance of the proposed ANM-based DoA estimation method over representative baselines.


[463] 2409.09984

Convergence of Sharpness-Aware Minimization Algorithms using Increasing Batch Size and Decaying Learning Rate

The sharpness-aware minimization (SAM) algorithm and its variants, including gap-guided SAM (GSAM), have been successful at improving the generalization capability of deep neural network models by finding flat local minima of the empirical loss in training. Meanwhile, it has been shown theoretically and practically that increasing the batch size or decaying the learning rate avoids sharp local minima of the empirical loss. In this paper, we consider the GSAM algorithm with increasing batch sizes or decaying learning rate schedules, such as cosine annealing or linear decay, and theoretically show its convergence. Moreover, we numerically compare SAM (GSAM) with and without an increasing batch size and conclude that using an increasing batch size or decaying learning rate finds flatter local minima than using a constant batch size and learning rate.
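
For concreteness, one perturb-then-update SAM step is sketched below; GSAM adds a gradient-decomposition term omitted here, and an increasing-batch-size schedule would simply grow the sampler's batch size across epochs.

# One SAM step: ascend to a worst-case nearby weight, take the gradient there.
import torch

def sam_step(model, loss_fn, x, y, opt, rho=0.05):
    # 1) gradient at the current weights
    loss = loss_fn(model(x), y)
    loss.backward()
    grads = [p.grad.clone() for p in model.parameters()]
    norm = torch.sqrt(sum((g ** 2).sum() for g in grads))

    # 2) perturb to w + rho * g / ||g|| (the sharpness probe)
    eps = []
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            e = rho * g / (norm + 1e-12)
            p.add_(e)
            eps.append(e)

    # 3) gradient at the perturbed point, then restore weights and update
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            p.sub_(e)
    opt.step()
    opt.zero_grad()

model = torch.nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
sam_step(model, torch.nn.CrossEntropyLoss(),
         torch.randn(32, 10), torch.randint(0, 2, (32,)), opt)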


[464] 2409.09989

Comprehensive Study on Sentiment Analysis: From Rule-based to modern LLM based system

This paper provides a comprehensive survey of sentiment analysis within the context of artificial intelligence (AI) and large language models (LLMs). Sentiment analysis, a critical aspect of natural language processing (NLP), has evolved significantly from traditional rule-based methods to advanced deep learning techniques. This study examines the historical development of sentiment analysis, highlighting the transition from lexicon-based and pattern-based approaches to more sophisticated machine learning and deep learning models. Key challenges are discussed, including handling bilingual texts, detecting sarcasm, and addressing biases. The paper reviews state-of-the-art approaches, identifies emerging trends, and outlines future research directions to advance the field. By synthesizing current methodologies and exploring future opportunities, this survey aims to understand sentiment analysis in the AI and LLM context thoroughly.


[465] 2409.09990

SHIRE: Enhancing Sample Efficiency using Human Intuition in REinforcement Learning

The ability of neural networks to perform robotic perception and control tasks such as depth and optical flow estimation, simultaneous localization and mapping (SLAM), and automatic control has led to their widespread adoption in recent years. Deep Reinforcement Learning has been used extensively in these settings, as it does not have the unsustainable training costs associated with supervised learning. However, Deep RL suffers from poor sample efficiency, i.e., it requires a large number of environmental interactions to converge to an acceptable solution. Modern RL algorithms such as Deep Q-Learning and Soft Actor-Critic attempt to remedy this shortcoming but cannot provide the explainability required in applications such as autonomous robotics. Humans intuitively understand the long-time-horizon sequential tasks common in robotics. Properly using such intuition can make RL policies more explainable while enhancing their sample efficiency. In this work, we propose SHIRE, a novel framework for encoding human intuition using Probabilistic Graphical Models (PGMs) and using it in the Deep RL training pipeline to enhance sample efficiency. Our framework achieves 25-78% sample efficiency gains across the environments we evaluate, at negligible overhead cost. Additionally, by teaching RL agents the encoded elementary behavior, SHIRE enhances policy explainability. A real-world demonstration further highlights the efficacy of policies trained using our framework.


[466] 2409.09996

FreeMark: A Non-Invasive White-Box Watermarking for Deep Neural Networks

Deep neural networks (DNNs) have achieved significant success in real-world applications. However, safeguarding their intellectual property (IP) remains extremely challenging. Existing DNN watermarking methods for IP protection often require modifying the DNN model, which reduces model performance and limits practicality. This paper introduces FreeMark, a novel DNN watermarking framework that leverages cryptographic principles without altering the original host DNN model, thereby avoiding any reduction in model performance. Unlike traditional DNN watermarking methods, FreeMark innovatively generates secret keys from a pre-generated watermark vector and the host model using gradient descent. These secret keys, used to extract the watermark from the model's activation values, are securely stored with a trusted third party, enabling reliable watermark extraction from suspect models. Extensive experiments demonstrate that FreeMark effectively resists various watermark removal attacks while maintaining high watermark capacity.


[467] 2409.09997

ViewActive: Active viewpoint optimization from a single image

When observing objects, humans benefit from their spatial visualization and mental rotation abilities to envision potential optimal viewpoints based on the current observation. This capability is crucial for enabling robots to achieve efficient and robust scene perception during operation, as optimal viewpoints provide essential and informative features for accurately representing scenes in 2D images, thereby enhancing downstream tasks. To endow robots with this human-like active viewpoint optimization capability, we propose ViewActive, a modernized machine learning approach drawing inspiration from aspect graphs, which provides viewpoint optimization guidance based solely on the current 2D image input. Specifically, we introduce the 3D Viewpoint Quality Field (VQF), a compact and consistent representation of viewpoint quality distribution similar to an aspect graph, composed of three general-purpose viewpoint quality metrics: self-occlusion ratio, occupancy-aware surface normal entropy, and visual entropy. We utilize pre-trained image encoders to extract robust visual and semantic features, which are then decoded into the 3D VQF, allowing our model to generalize effectively across diverse objects, including unseen categories. The lightweight ViewActive network (72 FPS on a single GPU) significantly enhances the performance of state-of-the-art object recognition pipelines and can be integrated into real-time motion planning for robotic applications. Our code and dataset are available here: https://github.com/jiayi-wu-umd/ViewActive


[468] 2409.10000

Development and Testing of a Vine Robot for Urban Search and Rescue in Confined Rubble Environments

The demand for fast response and safe operation after natural and man-made disasters in urban environments has spurred the development of robotic systems designed to assist in search and rescue operations within complex rubble sites. Traditional Unmanned Aerial Vehicles (UAVs) and Unmanned Ground Vehicles (UGVs) face significant limitations in such confined and obstructed environments. This paper introduces a novel vine robot designed to navigate dense rubble, drawing inspiration from natural growth mechanisms found in plants. Unlike conventional robots, vine robots are soft robots that can grow by everting their material, allowing them to navigate through narrow spaces and obstacles. The prototype presented in this study incorporates pneumatic muscles for steering and oscillation, together with an equation-based length control system with feedback pressure regulation for extending and retracting the robot body. We conducted a series of controlled experiments in an artificial rubble testbed to assess the robot's performance under varying environmental conditions and robot parameters, including volume ratio, environmental weight, oscillation, and steering. The results show that the vine robot can achieve significant penetration depths in cluttered environments with mixed obstacle sizes and weights, and can maintain repeated trajectories, demonstrating potential for mapping and navigating complex underground paths. Our findings highlight the suitability of the vine robot for urban search and rescue missions, with further research planned to enhance its robustness and deployability in real-world scenarios.


[469] 2409.10007

SelECT-SQL: Self-correcting ensemble Chain-of-Thought for Text-to-SQL

In recent years, Text-to-SQL, the problem of automatically converting questions posed in natural language to formal SQL queries, has emerged as an important problem at the intersection of natural language processing and data management research. Large language models (LLMs) have delivered impressive performance when used off-the-shelf, but still fall significantly short of expected expert-level performance. Errors are especially probable when proper Text-to-SQL conversion requires a nuanced understanding of database schemas, questions, and SQL clauses. We introduce SelECT-SQL, a novel in-context learning solution that uses an algorithmic combination of chain-of-thought (CoT) prompting, self-correction, and ensemble methods to yield a new state-of-the-art result on challenging Text-to-SQL benchmarks. Specifically, when configured using GPT-3.5-Turbo as the base LLM, SelECT-SQL achieves 84.2% execution accuracy on the Spider leaderboard's development set, exceeding both the best results of other baseline GPT-3.5-Turbo-based solutions (81.1%) and the peak performance (83.5%) of the GPT-4 result reported on the leaderboard.
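
The general recipe of combining CoT sampling, self-correction, and execution-based ensembling can be sketched as below; `llm` is a hypothetical completion function and the prompts are placeholders, not SelECT-SQL's actual pipeline.

```python
import sqlite3
from collections import Counter

def run_sql(db_path, sql):
    try:
        with sqlite3.connect(db_path) as con:
            return tuple(con.execute(sql).fetchall()), None
    except Exception as e:
        return None, str(e)

def text_to_sql(question, schema, db_path, llm, n_samples=8):
    candidates = []
    for _ in range(n_samples):
        # 1) chain-of-thought draft
        sql = llm(f"Schema:\n{schema}\nQuestion: {question}\n"
                  "Reason step by step, then output one SQL query.")
        # 2) self-correction: feed any execution error back once
        result, err = run_sql(db_path, sql)
        if err:
            sql = llm(f"The query\n{sql}\nfailed with: {err}\nFix it.")
            result, err = run_sql(db_path, sql)
        if not err:
            candidates.append((result, sql))
    # 3) ensemble: majority vote over execution results
    if not candidates:
        raise RuntimeError("no executable candidate generated")
    top = Counter(result for result, _ in candidates).most_common(1)[0][0]
    return next(sql for result, sql in candidates if result == top)
```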


[470] 2409.10009

GA-TEB: Goal-Adaptive Framework for Efficient Navigation Based on Goal Lines

In crowd navigation, the local goal plays a crucial role in trajectory initialization, optimization, and evaluation. Recognizing that, when the global goal is distant, the robot's primary objective is avoiding collisions rather than passing through an exact local goal point, this work introduces the concept of goal lines, which extends the traditional local goal from a single point to multiple candidate lines. Coupled with a topological map construction strategy that groups obstacles to be as convex as possible, a goal-adaptive navigation framework is proposed to efficiently plan multiple candidate trajectories. Simulations and experiments demonstrate that the proposed GA-TEB framework effectively prevents deadlock situations, in which the robot becomes frozen due to a lack of feasible trajectories in crowded environments. Additionally, the framework greatly increases planning frequency in scenarios with numerous non-convex obstacles, enhancing both robustness and safety.


[471] 2409.10011

HALO: Hallucination Analysis and Learning Optimization to Empower LLMs with Retrieval-Augmented Context for Guided Clinical Decision Making

Large language models (LLMs) have significantly advanced natural language processing tasks, yet they are susceptible to generating inaccurate or unreliable responses, a phenomenon known as hallucination. In critical domains such as health and medicine, these hallucinations can pose serious risks. This paper introduces HALO, a novel framework designed to enhance the accuracy and reliability of medical question-answering (QA) systems by focusing on the detection and mitigation of hallucinations. Our approach generates multiple variations of a given query using LLMs and retrieves relevant information from external open knowledge bases to enrich the context. We utilize maximum marginal relevance scoring to prioritize the retrieved context, which is then provided to LLMs for answer generation, thereby reducing the risk of hallucinations. The integration of LangChain further streamlines this process, resulting in a notable and robust increase in the accuracy of both open-source and commercial LLMs, such as Llama-3.1 (from 44% to 65%) and ChatGPT (from 56% to 70%). This framework underscores the critical importance of addressing hallucinations in medical QA systems, ultimately improving clinical decision-making and patient care. The open-source HALO is available at: https://github.com/ResponsibleAILab/HALO.
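
Maximum marginal relevance (MMR) itself is a standard re-ranking rule; a minimal sketch (with cosine similarity as a placeholder scorer, independent of HALO's implementation) looks like this:

```python
import numpy as np

def mmr(query_vec, doc_vecs, k=5, lam=0.7):
    """Select k passages balancing query relevance against mutual redundancy."""
    sim = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    selected, remaining = [], list(range(len(doc_vecs)))
    while remaining and len(selected) < k:
        def score(i):
            relevance = sim(query_vec, doc_vecs[i])
            redundancy = max((sim(doc_vecs[i], doc_vecs[j]) for j in selected),
                             default=0.0)
            return lam * relevance - (1 - lam) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected  # indices of the context passages handed to the LLM
```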


[472] 2409.10015

RPC: A Modular Framework for Robot Planning, Control, and Deployment

This paper presents an open-source, lightweight, yet comprehensive software framework, named RPC, which integrates physics-based simulators, planning and control libraries, debugging tools, and a user-friendly operator interface. RPC enables users to thoroughly evaluate and develop control algorithms for robotic systems. While existing software frameworks provide some of these capabilities, integrating them into a cohesive system can be challenging and cumbersome. To overcome this challenge, we have modularized each component in RPC to ensure easy and seamless integration or replacement with new modules. Additionally, our framework currently supports a variety of model-based planning and control algorithms for robotic manipulators and legged robots, alongside essential debugging tools, making it easier for users to design and execute complex robotics tasks. The code and usage instructions of RPC are available at https://github.com/shbang91/rpc.


[473] 2409.10016

AceParse: A Comprehensive Dataset with Diverse Structured Texts for Academic Literature Parsing

With the development of data-centric AI, the focus has shifted from model-driven approaches to improving data quality. Academic literature, as one of the crucial types, is predominantly stored in PDF formats and needs to be parsed into texts before further processing. However, parsing diverse structured texts in academic literature remains challenging due to the lack of datasets that cover various text structures. In this paper, we introduce AceParse, the first comprehensive dataset designed to support the parsing of a wide range of structured texts, including formulas, tables, lists, algorithms, and sentences with embedded mathematical expressions. Based on AceParse, we fine-tuned a multimodal model, named AceParser, which accurately parses various structured texts within academic literature. This model outperforms the previous state-of-the-art by 4.1% in terms of F1 score and by 5% in Jaccard Similarity, demonstrating the potential of multimodal models in academic literature parsing. Our dataset is available at https://github.com/JHW5981/AceParse.


[474] 2409.10018

Compositional Design of Safety Controllers for Large-scale Stochastic Hybrid Systems

In this work, we propose a compositional scheme based on small-gain reasoning for the safety controller synthesis of interconnected stochastic hybrid systems with both continuous evolutions and instantaneous jumps. In our proposed setting, we first offer an augmented scheme to represent each stochastic hybrid subsystem with continuous and discrete evolutions in a unified framework, ensuring that the state trajectories match those of the original hybrid systems. We then introduce the concept of augmented control sub-barrier certificates (A-CSBC) for each subsystem, which allows the construction of augmented control barrier certificates (A-CBC) for interconnected systems and their safety controllers under small-gain compositional conditions. We eventually leverage the constructed A-CBC and quantify a guaranteed probabilistic bound on the safety of the interconnected system. While the computational complexity of designing a barrier certificate and its safety controller grows polynomially with the network dimension when using a sum-of-squares (SOS) optimization program, our compositional approach significantly reduces it to a linear scale with respect to the number of subsystems. We verify the efficacy of our proposed approach on an interconnected stochastic hybrid system composed of $1000$ nonlinear subsystems.


[475] 2409.10019

Learning Agile Swimming: An End-to-End Approach without CPGs

The pursuit of agile and efficient underwater robots, especially bio-mimetic robotic fish, has been impeded by challenges in creating motion controllers that are able to fully exploit their hydrodynamic capabilities. This paper addresses these challenges by introducing a novel, model-free, end-to-end control framework that leverages Deep Reinforcement Learning (DRL) to enable agile and energy-efficient swimming of robotic fish. Unlike existing methods that rely on predefined trigonometric swimming patterns such as Central Pattern Generators (CPGs), our approach directly outputs low-level actuator commands without strong constraints, enabling the robotic fish to learn agile swimming behaviors. In addition, by integrating a high-performance Computational Fluid Dynamics (CFD) simulator with innovative sim-to-real strategies, such as normalized density matching and servo response matching, the proposed framework significantly mitigates the sim-to-real gap, facilitating direct transfer of control policies to real-world environments without fine-tuning. Comparative experiments demonstrate that our method achieves faster swimming speeds, smaller turning radii, and reduced energy consumption compared to conventional CPG-PID-based controllers. Furthermore, the proposed framework shows promise in addressing complex tasks in diverse scenarios, paving the way for more effective deployment of robotic fish in real aquatic environments.


[476] 2409.10020

Li-MSD: A lightweight mitigation solution for DAO insider attack in RPL-based IoT

Many IoT applications run on a wireless infrastructure supported by resource-constrained nodes, popularly known as Low-Power and Lossy Networks (LLNs). Currently, LLNs play a vital role in the digital transformation of industries. The resource limitations of LLNs restrict the usage of traditional routing protocols and therefore require an energy-efficient routing solution. IETF's Routing Protocol for Low-power and Lossy Networks (RPL, pronounced 'ripple'), specified in RFC 6550, is one of the most popular energy-efficient protocols for LLNs. In RPL, the Destination Advertisement Object (DAO) control message is transmitted by a child node to pass on its reachability information to its immediate parent or root node. An attacker may exploit the insecure DAO sending mechanism of RPL to perform a 'DAO insider attack' by transmitting DAOs multiple times. This paper shows that an aggressive DAO insider attacker can drastically degrade network performance. We propose a lightweight mitigation solution for the DAO insider attack, termed 'Li-MSD'. Li-MSD uses a blacklisting strategy to mitigate the attack and significantly restore RPL performance. Using simulations, it is shown that Li-MSD outperforms the existing solution in the literature.
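
A blacklisting strategy of this kind can be pictured with a simple rate threshold; the sketch below is our illustration of the general mechanism, with window and limit values that are placeholders rather than Li-MSD's tuned parameters.

```python
import time
from collections import defaultdict

DAO_RATE_LIMIT = 5    # DAOs allowed per window before a child looks malicious
WINDOW_SECONDS = 60

dao_times = defaultdict(list)
blacklist = set()

def on_dao_received(child_id):
    """Return True if the DAO should be processed, False if dropped."""
    now = time.time()
    ts = dao_times[child_id] = [t for t in dao_times[child_id]
                                if now - t < WINDOW_SECONDS]
    ts.append(now)
    if len(ts) > DAO_RATE_LIMIT:
        blacklist.add(child_id)   # stop processing/forwarding this child's DAOs
    return child_id not in blacklist
```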


[477] 2409.10021

LithoHoD: A Litho Simulator-Powered Framework for IC Layout Hotspot Detection

Recent advances in VLSI fabrication technology have led to die shrinkage and increased layout density, creating an urgent demand for advanced hotspot detection techniques. However, by taking an object detection network as the backbone, recent learning-based hotspot detectors learn to recognize only the problematic layout patterns in the training data. This makes it difficult for such hotspot detectors to generalize to real-world scenarios. We propose a novel lithography simulator-powered hotspot detection framework to overcome this difficulty. Our framework integrates a lithography simulator with an object detection backbone, merging the extracted latent features from both the simulator and the object detector via well-designed cross-attention blocks. Consequently, the proposed framework can be used to detect potential hotspot regions based on (i) the variation of possible circuit shape deformation estimated by the lithography simulator, and (ii) the problematic layout patterns already known. To this end, we utilize RetinaNet with a feature pyramid network as the object detection backbone and leverage LithoNet as the lithography simulator. Extensive experiments demonstrate that our proposed simulator-guided hotspot detection framework outperforms previous state-of-the-art methods on real-world data.
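
A generic cross-attention fusion block of the kind described, where detector features query simulator features, can be sketched as follows (our simplified stand-in, not the paper's exact architecture):

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, det_feat, litho_feat):
        # det_feat, litho_feat: (B, H*W, C) flattened feature maps from the
        # object detector and the lithography simulator, respectively.
        fused, _ = self.attn(query=det_feat, key=litho_feat, value=litho_feat)
        return self.norm(det_feat + fused)  # residual merge of both cues
```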


[478] 2409.10022

Entrywise Approximate Laplacian Solving

We study the escape probability problem in random walks over graphs. Given vertices $s$, $t$, and $p$, the problem asks for the probability that a random walk starting at $s$ will hit $t$ before hitting $p$. Such probabilities can be exponentially small even for unweighted undirected graphs with polynomial mixing time. Therefore, current approaches, which are mostly based on fixed-point arithmetic, require $n$ bits of precision in the worst case. We present algorithms and analyses for weighted directed graphs under floating-point arithmetic and improve the previous best running times in terms of the number of bit operations. We believe our techniques and analysis could have a broader impact on the computation of random walks on graphs, both in theory and in practice.
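
For orientation, the escape probability admits a standard harmonic characterization (classical background, not this paper's contribution): $q$ is the function satisfying

$$q(t) = 1, \qquad q(p) = 0, \qquad q(v) = \sum_{w} P(v, w)\, q(w) \quad \text{for } v \notin \{t, p\},$$

where $P(v, w)$ is the walk's transition probability. Equivalently, $q$ solves a Laplacian linear system, which is why entrywise-accurate Laplacian solving yields accurate escape probabilities even when $q(s)$ is exponentially small.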


[479] 2409.10024

Highly dynamic physical interaction for robotics: design and control of an active remote center of compliance

Robot interaction control is often limited to low dynamics or low flexibility, depending on whether an active or passive approach is chosen. In this work, we introduce a hybrid control scheme that combines the advantages of active and passive interaction control. To accomplish this, we propose the design of a novel Active Remote Center of Compliance (ARCC), which is based on a passive and an active element that can be used to directly control the interaction forces. We introduce surrogate models for a dynamic comparison against purely robot-based interaction schemes. In a comparative validation, ARCC drastically improves the interaction dynamics, leading to an increase in the motion bandwidth of up to 31 times. We further introduce our control approach as well as its integration into the robot controller. Finally, we analyze ARCC on different industrial benchmarks, such as peg-in-hole, top-hat rail assembly, and contour following problems, and compare it against the state of the art to highlight its dynamics and flexibility. The proposed system is especially suited to applications that require a low cycle time combined with sensitive manipulation.


[480] 2409.10025

DiffATR: Diffusion-based Generative Modeling for Audio-Text Retrieval

Existing audio-text retrieval (ATR) methods are essentially discriminative models that aim to maximize the conditional likelihood, represented as p(candidates|query). Nevertheless, this methodology fails to consider the intrinsic data distribution p(query), leading to difficulties in discerning out-of-distribution data. In this work, we attempt to tackle this constraint from a generative perspective and model the relationship between audio and text as their joint probability p(candidates,query). To this end, we present a diffusion-based ATR framework (DiffATR), which models ATR as an iterative procedure that progressively generates the joint distribution from noise. Throughout its training phase, DiffATR is optimized from both generative and discriminative viewpoints: the generator is refined through a generation loss, while the feature extractor benefits from a contrastive loss, thus combining the merits of both methodologies. Experiments on the AudioCaps and Clotho datasets show superior performance, verifying the effectiveness of our approach. Notably, without any alterations, DiffATR consistently exhibits strong performance in out-of-domain retrieval settings.


[481] 2409.10026

From a Single Trajectory to Safety Controller Synthesis of Discrete-Time Nonlinear Polynomial Systems

This work is concerned with developing a data-driven approach for learning control barrier certificates (CBCs) and associated safety controllers for discrete-time nonlinear polynomial systems with unknown mathematical models, guaranteeing system safety over an infinite time horizon. The proposed approach leverages measured data acquired through an input-output observation, referred to as a single trajectory, collected over a specified time horizon. By fulfilling a certain rank condition, which ensures the unknown system is persistently excited by the collected data, we design a CBC and its corresponding safety controller directly from the finite-length observed data, without explicitly identifying the unknown dynamical system. This is achieved through proposing a data-based sum-of-squares optimization (SOS) program to systematically design CBCs and their safety controllers. We validate our data-driven approach over two physical case studies including a jet engine and a Lorenz system, demonstrating the efficacy of our proposed method.


[482] 2409.10027

E2Map: Experience-and-Emotion Map for Self-Reflective Robot Navigation with Language Models

Large language models (LLMs) have shown significant potential in guiding embodied agents to execute language instructions across a range of tasks, including robotic manipulation and navigation. However, existing methods are primarily designed for static environments and do not leverage the agent's own experiences to refine its initial plans. Given that real-world environments are inherently stochastic, initial plans based solely on LLMs' general knowledge may fail to achieve their objectives, unlike in static scenarios. To address this limitation, this study introduces the Experience-and-Emotion Map (E2Map), which integrates not only LLM knowledge but also the agent's real-world experiences, drawing inspiration from human emotional responses. The proposed methodology enables one-shot behavior adjustments by updating the E2Map based on the agent's experiences. Our evaluation in stochastic navigation environments, including both simulations and real-world scenarios, demonstrates that the proposed method significantly enhances performance in stochastic environments compared to existing LLM-based approaches. Code and supplementary materials are available at https://e2map.github.io/.


[483] 2409.10028

AttnMod: Attention-Based New Art Styles

Imagine a human artist looking at the generated photo of a diffusion model and hoping to create a painting out of it. There could be some feature of the object in the photo that the artist wants to emphasize, some color to disperse, some silhouette to twist, or some part of the scene to be materialized. These intentions can be viewed as modifications of the cross attention from the text prompt onto the UNet during the denoising diffusion. This work presents AttnMod, which modifies attention to create new, unpromptable art styles out of existing diffusion models. The style-creating behavior is studied across different setups.
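
The underlying operation, rescaling cross-attention weights for chosen prompt tokens during denoising, can be sketched generically (a hypothetical hook; real diffusion libraries expose attention through their own processor interfaces, and this is not AttnMod's exact modulation schedule):

```python
import torch

def modulate_attention(attn_probs, token_ids, factor):
    """Scale how strongly selected prompt tokens steer the UNet features.

    attn_probs: (batch*heads, pixels, tokens) cross-attention weights
    token_ids:  prompt positions to emphasize (factor > 1) or mute (< 1)
    """
    attn_probs = attn_probs.clone()
    attn_probs[..., token_ids] *= factor
    # renormalize so each pixel's attention still sums to one
    return attn_probs / attn_probs.sum(dim=-1, keepdim=True)
```

Applied at every denoising step, sweeping `factor` yields a family of stylistic variations that no text prompt expresses directly.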


[484] 2409.10031

Assessing the Impact of Sanctions in the Crypto Ecosystem: Effective Measures or Ineffective Deterrents?

Regulatory authorities aim to tackle illegal activities by targeting the economic incentives that drive such behaviour. This is typically achieved through the implementation of financial sanctions against the entities involved in the crimes. However, the rise of cryptocurrencies has presented new challenges, allowing entities to evade these sanctions and continue criminal operations. Consequently, enforcement measures have been expanded to include the crypto asset information of sanctioned entities. Yet, due to the nature of the crypto ecosystem, blocking or freezing these digital assets is harder and, in some cases, such as with Bitcoin, infeasible. Therefore, sanctions serve merely as deterrents. For this reason, in this study, we aim to assess the impact of these sanctions on entities' crypto activities, particularly those related to the Bitcoin ecosystem. Our objective is to shed light on the validity and effectiveness (or lack thereof) of such countermeasures. Specifically, we analyse the transactions and the amount of USD moved by punished entities that possess crypto addresses after being sanctioned by the relevant authority. Results indicate that while sanctions have been effective for half of the examined entities, the others continue to move funds through sanctioned addresses. Furthermore, punished entities demonstrate a preference for utilising rapid exchange services to convert their funds, rather than employing dedicated money laundering services. To the best of our knowledge, this study offers valuable insights into how entities use crypto assets to circumvent sanctions.


[485] 2409.10032

Embodiment-Agnostic Action Planning via Object-Part Scene Flow

Observing that the key to robotic action planning is to understand the target-object motion when its associated part is manipulated by the end effector, we propose to generate the 3D object-part scene flow and extract its transformations to solve the action trajectories for diverse embodiments. The advantage of our approach is that it derives the robot action explicitly from object motion prediction, yielding a more robust policy through an understanding of object motions. Moreover, beyond policies trained on embodiment-centric data, our method is embodiment-agnostic, generalizes across diverse embodiments, and is able to learn from human demonstrations. Our method comprises three components: an object-part predictor to locate the part for the end effector to manipulate, an RGBD video generator to predict future RGBD videos, and a trajectory planner to extract embodiment-agnostic transformation sequences and solve the trajectory for diverse embodiments. Trained on videos even without trajectory data, our method still outperforms existing works significantly, by 27.7% and 26.2% on the prevailing virtual environments MetaWorld and Franka-Kitchen, respectively. Furthermore, we conducted real-world experiments showing that our policy, trained only with human demonstrations, can be deployed to various embodiments.
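
One standard way to turn predicted part-level scene flow into an embodiment-agnostic rigid transform is Kabsch/SVD alignment; the sketch below shows that classical step under our own framing (the paper's actual solver may differ):

```python
import numpy as np

def rigid_from_flow(points, flow):
    """points: (N, 3) part points; flow: (N, 3) predicted displacements."""
    src, dst = points, points + flow
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t  # per-step SE(3) target that any embodiment's IK can track
```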


[486] 2409.10033

Can GPT-O1 Kill All Bugs?

ChatGPT has long been proven to be effective in automatic program repair (APR). With the continuous iterations and upgrades of the ChatGPT version, its performance in terms of fixes has already reached state-of-the-art levels. However, there are few works comparing the effectiveness and variations of different versions of ChatGPT on APR. In this work, we evaluate the performance of the latest versions of ChatGPT (O1-preview and O1-mini), ChatGPT-4o, and historical versions of ChatGPT on APR. We study the improvements of the O1 models over traditional ChatGPT in terms of APR from multiple perspectives (repair success rate, repair cost, behavior patterns), and find that O1's repair capability exceeds that of traditional ChatGPT, successfully fixing all 40 bugs in the benchmark. Our work can serve as a reference for further in-depth exploration of the applications of ChatGPT in APR.


[487] 2409.10038

On the Diagram of Thought

We introduce Diagram of Thought (DoT), a framework that models iterative reasoning in large language models (LLMs) as the construction of a directed acyclic graph (DAG) within a single model. Unlike traditional approaches that represent reasoning as linear chains or trees, DoT organizes propositions, critiques, refinements, and verifications into a cohesive DAG structure, allowing the model to explore complex reasoning pathways while maintaining logical consistency. Each node in the diagram corresponds to a proposition that has been proposed, critiqued, refined, or verified, enabling the LLM to iteratively improve its reasoning through natural language feedback. By leveraging auto-regressive next-token prediction with role-specific tokens, DoT facilitates seamless transitions between proposing ideas and critically evaluating them, providing richer feedback than binary signals. Furthermore, we formalize the DoT framework using Topos Theory, providing a mathematical foundation that ensures logical consistency and soundness in the reasoning process. This approach enhances both the training and inference processes within a single LLM, eliminating the need for multiple models or external control mechanisms. DoT offers a conceptual framework for designing next-generation reasoning-specialized models, emphasizing training efficiency, robust reasoning capabilities, and theoretical grounding. The code is available at https://github.com/diagram-of-thought/diagram-of-thought.
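
The bookkeeping behind such a reasoning DAG can be pictured with a few lines of code; this toy structure is our illustration of the node roles only, not the paper's implementation or its role-specific token scheme.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    text: str
    role: str         # "propose" | "critique" | "refine" | "verify"
    parents: list = field(default_factory=list)

class DiagramOfThought:
    def __init__(self):
        self.nodes = []

    def add(self, text, role, parents=()):
        node = Node(text, role, list(parents))  # edges only point backward: DAG
        self.nodes.append(node)
        return node

dot = DiagramOfThought()
p = dot.add("Claim: the algorithm runs in O(n log n)", "propose")
c = dot.add("No argument given for the worst case", "critique", [p])
r = dot.add("Restrict the claim to balanced inputs", "refine", [p, c])
v = dot.add("Proof checks out for balanced inputs", "verify", [r])
```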


[488] 2409.10041

DENSER: 3D Gaussians Splatting for Scene Reconstruction of Dynamic Urban Environments

This paper presents DENSER, an efficient and effective approach leveraging 3D Gaussian splatting (3DGS) for the reconstruction of dynamic urban environments. While several methods for photorealistic scene representations, both implicitly using neural radiance fields (NeRF) and explicitly using 3DGS have shown promising results in scene reconstruction of relatively complex dynamic scenes, modeling the dynamic appearance of foreground objects tend to be challenging, limiting the applicability of these methods to capture subtleties and details of the scenes, especially far dynamic objects. To this end, we propose DENSER, a framework that significantly enhances the representation of dynamic objects and accurately models the appearance of dynamic objects in the driving scene. Instead of directly using Spherical Harmonics (SH) to model the appearance of dynamic objects, we introduce and integrate a new method aiming at dynamically estimating SH bases using wavelets, resulting in better representation of dynamic objects appearance in both space and time. Besides object appearance, DENSER enhances object shape representation through densification of its point cloud across multiple scene frames, resulting in faster convergence of model training. Extensive evaluations on KITTI dataset show that the proposed approach significantly outperforms state-of-the-art methods by a wide margin. Source codes and models will be uploaded to this repository https://github.com/sntubix/denser


[489] 2409.10042

Cross: A Delay Based Congestion Control Method for RTP Media

After more than a decade of development, real-time communication (RTC) for video telephony has made significant progress. However, emerging high-quality RTC applications with high definition and high frame rates require sufficient bandwidth. The default congestion control mechanism, specifically tuned for video telephony, leaves plenty of room for optimization under high-rate scenarios. It is necessary to develop new rate control solutions to utilize bandwidth efficiently and to provide a better experience for such services. We propose a delay-based congestion control method called Cross, which regulates the rate based on queue load in a multiplicative-increase, multiplicative-decrease fashion. A simulation module is developed to validate the effectiveness of such congestion control algorithms for RTC services. The module is released in the hope of providing convenience to the RTC research community. Simulation results demonstrate that Cross can achieve low queuing delay and maintain high channel utilization under random loss environments. Online deployment shows that Cross can reduce the video freezing ratio by up to 58.45\% on average when compared with a benchmark algorithm.
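
A delay-based multiplicative-increase/multiplicative-decrease update keyed on queue load can be sketched as below; the gains and thresholds are illustrative placeholders, not Cross's deployed constants.

```python
def update_rate(rate, rtt, base_rtt,
                queue_target=0.1, up=1.05, down=0.85,
                min_rate=100e3, max_rate=20e6):
    """rate in bit/s; rtt and base_rtt in seconds."""
    queuing_delay = max(rtt - base_rtt, 0.0)   # queue load inferred from delay
    if queuing_delay < queue_target * base_rtt:
        rate *= up       # queue nearly empty: probe for more bandwidth
    else:
        rate *= down     # queue building up: back off multiplicatively
    return min(max(rate, min_rate), max_rate)
```

Because such a controller reacts to queuing delay rather than loss, random (non-congestive) losses do not trigger back-off, which is consistent with the behavior reported under random-loss environments.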


[490] 2409.10044

Benchmarking Large Language Model Uncertainty for Prompt Optimization

Prompt optimization algorithms for Large Language Models (LLMs) excel in multi-step reasoning but still lack effective uncertainty estimation. This paper introduces a benchmark dataset to evaluate uncertainty metrics, focusing on Answer, Correctness, Aleatoric, and Epistemic Uncertainty. Through analysis of models like GPT-3.5-Turbo and Meta-Llama-3.1-8B-Instruct, we show that current metrics align more with Answer Uncertainty, which reflects output confidence and diversity, rather than Correctness Uncertainty, highlighting the need for improved metrics that are optimization-objective-aware to better guide prompt optimization. Our code and dataset are available at https://github.com/0Frett/PO-Uncertainty-Benchmarking.


[491] 2409.10045

Learning Latent Wireless Dynamics from Channel State Information

In this work, we propose a novel data-driven machine learning (ML) technique to model and predict the dynamics of the wireless propagation environment in latent space. Leveraging the idea of channel charting, which learns compressed representations of high-dimensional channel state information (CSI), we incorporate a predictive component to capture the dynamics of the wireless system. Hence, we jointly learn a channel encoder that maps the estimated CSI to an appropriate latent space, and a predictor that models the relationships between such representations. Accordingly, our problem boils down to training a joint-embedding predictive architecture (JEPA) that simulates the latent dynamics of a wireless network from CSI. We present numerical evaluations on measured data and show that the proposed JEPA displays a two-fold increase in accuracy over benchmarks for longer look-ahead prediction tasks.


[492] 2409.10046

Global Lightning-Ignited Wildfires Prediction and Climate Change Projections based on Explainable Machine Learning Models

Wildfires pose a significant natural disaster risk to populations and contribute to accelerated climate change. As wildfires are also affected by climate change, extreme wildfires are becoming increasingly frequent. Although they occur less frequently globally than those sparked by human activities, lightning-ignited wildfires play a substantial role in carbon emissions and account for the majority of burned areas in certain regions. While existing computational models, especially those based on machine learning, aim to predict lightning-ignited wildfires, they are typically tailored to specific regions with unique characteristics, limiting their global applicability. In this study, we present machine learning models designed to characterize and predict lightning-ignited wildfires on a global scale. Our approach involves classifying lightning-ignited versus anthropogenic wildfires, and estimating with high accuracy the probability that lightning ignites a fire based on a wide spectrum of factors such as meteorological conditions and vegetation. Utilizing these models, we analyze seasonal and spatial trends in lightning-ignited wildfires, shedding light on the impact of climate change on this phenomenon. We analyze the influence of various features on the models using eXplainable Artificial Intelligence (XAI) frameworks. Our findings highlight significant global differences between anthropogenic and lightning-ignited wildfires. Moreover, we demonstrate that, even over a short time span of less than a decade, climate change has steadily increased the global risk of lightning-ignited wildfires. This distinction underscores the imperative need for dedicated predictive models and fire weather indices tailored specifically to each type of wildfire.


[493] 2409.10047

Bearing-Distance Based Flocking with Zone-Based Interactions

This paper presents a novel zone-based flocking control approach suitable for dynamic multi-agent systems (MAS). Inspired by Reynolds' behavioral rules for $boids$, flocking behavioral rules with zones of repulsion, conflict, attraction, and surveillance are introduced. For each agent, using only bearing and distance measurements, behavioral deviation vectors quantify the deviations from local separation, local and global flock velocity alignment, local cohesion, obstacle avoidance and boundary conditions, and strategic separation for avoiding alien agents. The control strategy uses the local perception-based behavioral deviation vectors to guide each agent's motion. Additionally, the control strategy incorporates a directionally-aware obstacle avoidance mechanism that prioritizes obstacles in the agent's forward path. Simulation results validate the effectiveness of this approach in creating flexible, adaptable, and scalable flocking behavior.


[494] 2409.10048

Audio-Driven Reinforcement Learning for Head-Orientation in Naturalistic Environments

Although deep reinforcement learning (DRL) approaches in audio signal processing have seen substantial progress in recent years, audio-driven DRL for tasks such as navigation, gaze control and head-orientation control in the context of human-robot interaction have received little attention. Here, we propose an audio-driven DRL framework in which we utilise deep Q-learning to develop an autonomous agent that orients towards a talker in the acoustic environment based on stereo speech recordings. Our results show that the agent learned to perform the task at a near perfect level when trained on speech segments in anechoic environments (that is, without reverberation). The presence of reverberation in naturalistic acoustic environments affected the agent's performance, although the agent still substantially outperformed a baseline, randomly acting agent. Finally, we quantified the degree of generalization of the proposed DRL approach across naturalistic acoustic environments. Our experiments revealed that policies learned by agents trained on medium or high reverb environments generalized to low reverb environments, but policies learned by agents trained on anechoic or low reverb environments did not generalize to medium or high reverb environments. Taken together, this study demonstrates the potential of audio-driven DRL for tasks such as head-orientation control and highlights the need for training strategies that enable robust generalization across environments for real-world audio-driven DRL applications.


[495] 2409.10049

A Social Force Model for Multi-Agent Systems With Application to Robots Traversal in Cluttered Environments

This letter presents a model to address the collaborative effects in multi-agent systems from the perspective of microscopic mechanisms. The model utilizes distributed control for robot swarms in traversal applications. Inspired by pedestrian planning dynamics, the model employs three types of forces to regulate the behavior of agents: intrinsic propulsion, interaction among agents, and repulsion from obstacles. These forces are able to balance the convergence, divergence, and avoidance effects among agents. Additionally, we present a planning and decision method based on resultant forces to enable real-world deployment of the model. Experimental results demonstrate its effectiveness for system path optimization in unknown cluttered environments. Sensor data is swiftly digitally filtered, and the transmitted data is significantly compressed. Consequently, the model has low computation costs and minimal communication loads, thereby promoting environmental adaptability and system scalability.
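
The three-force decomposition can be written compactly; the gains and interaction ranges below are placeholders of ours, not the letter's calibrated values.

```python
import numpy as np

def resultant_force(pos, goal, neighbors, obstacles,
                    k_goal=1.0, k_int=0.6, k_obs=2.0, r_int=2.0, r_obs=1.5):
    # 1) intrinsic propulsion toward the goal
    to_goal = goal - pos
    f = k_goal * to_goal / (np.linalg.norm(to_goal) + 1e-9)
    # 2) pairwise interaction: repulsive when close, weakly attractive when far
    for q in neighbors:
        d = pos - q
        dist = np.linalg.norm(d) + 1e-9
        f += k_int * (1.0 / dist - 1.0 / r_int) * d / dist
    # 3) pure repulsion from nearby obstacles
    for o in obstacles:
        d = pos - o
        dist = np.linalg.norm(d) + 1e-9
        if dist < r_obs:
            f += k_obs * (1.0 / dist - 1.0 / r_obs) * d / dist
    return f  # integrated by each robot's local planning/decision layer
```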


[496] 2409.10053

Householder Pseudo-Rotation: A Novel Approach to Activation Editing in LLMs with Direction-Magnitude Perspective

Activation Editing, which involves directly editing the internal representations of large language models (LLMs) to alter their behaviors and achieve desired properties, has emerged as a promising area of research. Existing works primarily treat LLMs' activations as points in space and modify them by adding steering vectors. However, this approach is limited in its ability to achieve greater performance improvements while maintaining the necessary consistency of activation magnitudes. To overcome these issues, we propose a novel editing method that views activations in terms of their directions and magnitudes. Our method, named Householder Pseudo-Rotation (HPR), mimics the rotation transformation, thus preserving activation norms and resulting in improved performance on various safety benchmarks.
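
The norm-preserving core of the direction-magnitude view can be illustrated with a single Householder reflection that moves an activation onto a steering direction without changing its magnitude (a simplified reading of HPR, not the authors' exact edit rule):

```python
import numpy as np

def householder_steer(activation, target_dir):
    """Map `activation` onto the direction of `target_dir`, keeping its norm."""
    norm = np.linalg.norm(activation)
    u = activation / norm
    t = target_dir / np.linalg.norm(target_dir)
    v = u - t
    v /= np.linalg.norm(v) + 1e-12
    steered = u - 2.0 * (v @ u) * v      # Householder reflection sends u to t
    return norm * steered                # same magnitude, new direction

a = np.array([3.0, 4.0])
print(householder_steer(a, np.array([1.0, 0.0])))  # -> [5. 0.]; norm 5 preserved
```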


[497] 2409.10057

A Response to: A Note on "Privacy Preserving n-Party Scalar Product Protocol"

We reply to the comments on our proposed privacy-preserving n-party scalar product protocol made by Liu. In their comment, Liu raised concerns regarding the security and scalability of the $n$-party scalar product protocol. In this reply, we show that their concerns are unfounded and that the $n$-party scalar product protocol is safe for its intended purposes. Their concerns regarding the security are based on a misunderstanding of the protocol. Additionally, while the scalability of the protocol puts limitations on its use, the protocol still has numerous practical applications when applied in the correct scenarios. Specifically, within vertically partitioned scenarios, which often involve few parties, the protocol remains practical. In this reply, we clarify Liu's misunderstanding and explain why the protocol's scaling is not a practical problem in its intended application.


[498] 2409.10062

Do Test and Environmental Complexity Increase Flakiness? An Empirical Study of SAP HANA

Background: Test flakiness is a major problem in the software industry. Flaky tests fail seemingly at random without changes to the code and thus impede continuous integration (CI). Some researchers argue that all tests can be considered flaky and that tests only differ in their frequency of flaky failures. Aims: With the goal of developing mitigation strategies to reduce the negative impact of test flakiness, we study characteristics of tests and the test environment that potentially impact test flakiness. Method: We construct two datasets based on SAP HANA's test results over a 12-week period: one based on production data, the other based on targeted test executions from a dedicated flakiness experiment. We conduct correlation analysis for test and test environment characteristics with respect to their influence on the frequency of flaky test failures. Results: In our study, the average test execution time had the strongest positive correlation with the test flakiness rate (r = 0.79), which confirms previous studies. Potential reasons for higher flakiness include the larger test scope of long-running tests or test executions on a slower test infrastructure. Interestingly, the load on the testing infrastructure was not correlated with test flakiness. The relationship between test flakiness and required resources for test execution is inconclusive. Conclusions: Based on our findings, we conclude that splitting long-running tests can be an important measure for practitioners to cope with test flakiness, as it enables parallelization of test executions and also reduces the cost of re-executions. This effectively decreases the negative effects of test flakiness in complex testing environments. However, when splitting long-running tests, practitioners need to consider the potential test setup overhead of test splits.


[499] 2409.10063

GlobalMapNet: An Online Framework for Vectorized Global HD Map Construction

High-definition (HD) maps are essential for autonomous driving systems. Traditionally, an expensive and labor-intensive pipeline is implemented to construct HD maps, which limits scalability. In recent years, crowdsourcing and online mapping have emerged as two alternative methods, but each has its own limitations. In this paper, we provide a novel methodology, namely global map construction, to perform direct generation of vectorized global maps, combining the benefits of crowdsourcing and online mapping. We introduce GlobalMapNet, the first online framework for vectorized global HD map construction, which updates and utilizes a global map on the ego vehicle. To generate the global map from scratch, we propose GlobalMapBuilder to match and merge local maps continuously. We design a new algorithm, Map NMS, to remove duplicate map elements and produce a clean map. We also propose GlobalMapFusion to aggregate historical map information, improving the consistency of prediction. We examine GlobalMapNet on two widely recognized datasets, Argoverse2 and nuScenes, showing that our framework is capable of generating globally consistent results.
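
A duplicate-suppression pass over vectorized elements can be pictured as non-maximum suppression with a polyline distance; the Chamfer metric and threshold here are our illustrative choices, not necessarily those of Map NMS.

```python
import numpy as np

def chamfer(a, b):
    """Symmetric Chamfer distance between (N, 2) and (M, 2) polylines."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def map_nms(elements, scores, dist_thresh=1.0):
    """Keep only the highest-scoring element among near-duplicates."""
    kept = []
    for i in np.argsort(scores)[::-1]:           # confidence-descending order
        if all(chamfer(elements[i], elements[j]) > dist_thresh for j in kept):
            kept.append(i)
    return kept  # indices of de-duplicated map elements
```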


[500] 2409.10064

MindGuard: Towards Accessible and Stigma-free Mental Health First Aid via Edge LLM

Mental health disorders are among the most prevalent diseases worldwide, affecting nearly one in four people. Despite their widespread impact, the intervention rate remains below 25%, largely due to the significant cooperation required from patients for both diagnosis and intervention. The core issue behind this low treatment rate is stigma, which discourages over half of those affected from seeking help. This paper presents MindGuard, an accessible, stigma-free, and professional mobile mental healthcare system designed to provide mental health first aid. The heart of MindGuard is an innovative edge LLM, equipped with professional mental health knowledge, that seamlessly integrates objective mobile sensor data with subjective Ecological Momentary Assessment records to deliver personalized screening and intervention conversations. We conduct a broad evaluation of MindGuard using open datasets spanning four years and real-world deployment across various mobile devices involving 20 subjects for two weeks. Remarkably, MindGuard achieves results comparable to GPT-4 and outperforms its counterpart with more than 10 times the model size. We believe that MindGuard paves the way for mobile LLM applications, potentially revolutionizing mental healthcare practices by substituting self-reporting and intervention conversations with passive, integrated monitoring within daily life, thus ensuring accessible and stigma-free mental health support.


[501] 2409.10066

LeGEND: A Top-Down Approach to Scenario Generation of Autonomous Driving Systems Assisted by Large Language Models

Autonomous driving systems (ADS) are safety-critical and require comprehensive testing before their deployment on public roads. While existing testing approaches primarily aim at the criticality of scenarios, they often overlook the diversity of the generated scenarios, which is also important for reflecting system defects in different aspects. To bridge the gap, we propose LeGEND, which features a top-down fashion of scenario generation: it starts with abstract functional scenarios and then steps down to logical and concrete scenarios, such that scenario diversity can be controlled at the functional level. However, unlike logical scenarios that can be formally described, functional scenarios are often documented in natural language (e.g., accident reports) and thus cannot be precisely parsed and processed by computers. To tackle this issue, LeGEND leverages recent advances in large language models (LLMs) to transform textual functional scenarios into formal logical scenarios. To mitigate the distraction of useless information in functional scenario descriptions, we devise a two-phase transformation that features the use of an intermediate language; consequently, we adopt two LLMs in LeGEND, one for extracting information from functional scenarios and the other for converting the extracted information into formal logical scenarios. We experimentally evaluate LeGEND on Apollo, an industry-grade ADS from Baidu. Evaluation results show that LeGEND can effectively identify critical scenarios, and compared to baseline approaches, LeGEND exhibits evident superiority in the diversity of generated scenarios. Moreover, we also demonstrate the advantages of our two-phase transformation framework and the accuracy of the adopted LLMs.


[502] 2409.10068

Spatiotemporal Covariance Neural Networks

Modeling spatiotemporal interactions in multivariate time series is key to their effective processing, but challenging because of their irregular and often unknown structure. Statistical properties of the data provide useful biases to model interdependencies and are leveraged by correlation- and covariance-based networks as well as by processing pipelines relying on principal component analysis (PCA). However, PCA and its temporal extensions suffer instabilities in the covariance eigenvectors when the corresponding eigenvalues are close to each other, making their application to dynamic and streaming data settings challenging. To address these issues, we exploit the analogy between PCA and graph convolutional filters to introduce the SpatioTemporal coVariance Neural Network (STVNN), a relational learning model that operates on the sample covariance matrix of the time series and leverages joint spatiotemporal convolutions to model the data. To account for the streaming and non-stationary setting, we consider an online update of the parameters and the sample covariance matrix. We prove that STVNN is stable to the uncertainties introduced by these online estimations, thus improving over temporal PCA-based methods. Experimental results corroborate our theoretical findings and show that STVNN is competitive for multivariate time series processing, adapts to changes in the data distribution, and is orders of magnitude more stable than online temporal PCA.
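
The two ingredients named above, a streaming covariance estimate and a covariance-based graph filter, can be sketched compactly (scalar filter taps for readability; the full STVNN uses learned spatiotemporal filter banks):

```python
import numpy as np

def online_covariance(C, mean, x, t):
    """Rank-one update after having already seen t samples; returns (C, mean)."""
    delta = x - mean
    mean = mean + delta / (t + 1)
    C = (t * C + np.outer(delta, x - mean)) / (t + 1)
    return C, mean

def covariance_filter(C, x, taps):
    """y = sum_k h_k C^k x: a graph filter with C as the shift operator."""
    y, Ck_x = np.zeros_like(x), x.copy()
    for h_k in taps:
        y += h_k * Ck_x
        Ck_x = C @ Ck_x
    return y
```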


[503] 2409.10069

Enhancing Anomaly Detection via Generating Diversified and Hard-to-distinguish Synthetic Anomalies

Unsupervised anomaly detection is a daunting task, as it relies solely on normality patterns from the training data to identify unseen anomalies during testing. Recent approaches have focused on leveraging domain-specific transformations or perturbations to generate synthetic anomalies from normal samples. The objective here is to acquire insights into normality patterns by learning to differentiate between normal samples and these crafted anomalies. However, these approaches often encounter limitations when domain-specific transformations are not well-specified, as in tabular data, or when it becomes trivial to distinguish between the two. To address these issues, we introduce a novel domain-agnostic method that employs a set of conditional perturbators and a discriminator. The perturbators are trained to generate input-dependent perturbations, which are subsequently utilized to construct synthetic anomalies, and the discriminator is trained to distinguish normal samples from them. We ensure that the generated anomalies are both diverse and hard to distinguish through two key strategies: i) directing perturbations to be orthogonal to each other and ii) constraining perturbations to remain in proximity to normal samples. Through experiments on real-world datasets, we demonstrate the superiority of our method over state-of-the-art benchmarks, which is evident not only in image data but also in tabular data, where domain-specific transformation is not readily accessible. Additionally, we empirically confirm the adaptability of our method to semi-supervised settings, demonstrating its capacity to incorporate supervised signals to enhance anomaly detection performance even further.


[504] 2409.10070

Increasing faithfulness in human-human dialog summarization with Spoken Language Understanding tasks

Dialogue summarization aims to provide a concise and coherent summary of conversations between multiple speakers. While recent advancements in language models have enhanced this process, summarizing dialogues accurately and faithfully remains challenging due to the need to understand speaker interactions and capture relevant information. Indeed, abstractive models used for dialog summarization may generate summaries that contain inconsistencies. We suggest using the semantic information employed for Spoken Language Understanding (SLU) in human-machine dialogue systems to obtain summaries of goal-oriented human-human dialogues that are more semantically faithful to the task. This study introduces three key contributions: first, we propose an exploration of how incorporating task-related information can enhance the summarization process, leading to more semantically accurate summaries. Then, we introduce a new evaluation criterion based on task semantics. Finally, we propose a new dataset version with increased annotated data, standardized for research on task-oriented dialogue summarization. The study evaluates these methods using the DECODA corpus, a collection of French spoken dialogues from a call center. Results show that integrating models with task-related information improves summary accuracy, even with varying word error rates.


[505] 2409.10071

Towards Physically-Realizable Adversarial Attacks in Embodied Vision Navigation

The deployment of embodied navigation agents in safety-critical environments raises concerns about their vulnerability to adversarial attacks on deep neural networks. However, current attack methods often lack practicality due to challenges in transitioning from the digital to the physical world, while existing physical attacks for object detection fail to achieve both multi-view effectiveness and naturalness. To address this, we propose a practical attack method for embodied navigation by attaching adversarial patches with learnable textures and opacity to objects. Specifically, to ensure effectiveness across varying viewpoints, we employ a multi-view optimization strategy based on object-aware sampling, which uses feedback from the navigation model to optimize the patch's texture. To make the patch inconspicuous to human observers, we introduce a two-stage opacity optimization mechanism, where opacity is refined after texture optimization. Experimental results show our adversarial patches reduce navigation success rates by about 40%, outperforming previous methods in practicality, effectiveness, and naturalness. Code is available at: [https://github.com/chen37058/Physical-Attacks-in-Embodied-Navigation].


[506] 2409.10072

Speaker Contrastive Learning for Source Speaker Tracing

As a form of biometric authentication technology, the security of speaker verification (SV) systems is of utmost importance. However, SV systems are inherently vulnerable to various types of attacks that can compromise their accuracy and reliability. One such attack is voice conversion, which modifies a person's speech to sound like another person by altering various vocal characteristics. This poses a significant threat to SV systems. To address this challenge, the Source Speaker Tracing Challenge (SSTC) at IEEE SLT 2024 aims to identify the source speaker information in manipulated speech signals. Specifically, SSTC focuses on source speaker verification against voice conversion to determine whether two converted speech samples originate from the same source speaker. In this study, we propose a speaker contrastive learning-based approach for source speaker tracing to learn the latent source speaker information in converted speech. To learn a more source-speaker-related representation, we employ a speaker contrastive loss during the training of the embedding extractor. This loss helps identify the true source speaker embedding among several distractor speaker embeddings, enabling the embedding extractor to learn the source speaker information potentially present in the converted speech. Experiments demonstrate that our proposed speaker contrastive learning system achieves the lowest EER of 16.788% on the challenge test set, securing first place in the challenge.
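
A speaker contrastive loss of the described form can be written as an InfoNCE-style objective, pulling the converted-speech embedding toward its true source speaker and away from distractors (a generic stand-in, not the system's exact loss):

```python
import torch
import torch.nn.functional as F

def speaker_contrastive_loss(converted_emb, speaker_embs, true_idx, tau=0.07):
    """converted_emb: (D,); speaker_embs: (K, D), one true source + distractors."""
    sims = F.cosine_similarity(converted_emb.unsqueeze(0), speaker_embs) / tau
    return F.cross_entropy(sims.unsqueeze(0), torch.tensor([true_idx]))
```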


[507] 2409.10075

Steinmetz Neural Networks for Complex-Valued Data

In this work, we introduce a new approach to processing complex-valued data using DNNs consisting of parallel real-valued subnetworks with coupled outputs. Our proposed class of architectures, referred to as Steinmetz Neural Networks, leverages multi-view learning to construct more interpretable representations within the latent space. Subsequently, we present the Analytic Neural Network, which implements a consistency penalty that encourages analytic signal representations in the Steinmetz neural network's latent space. This penalty enforces a deterministic and orthogonal relationship between the real and imaginary components. Utilizing an information-theoretic construction, we demonstrate that the upper bound on the generalization error posited by the analytic neural network is lower than that of the general class of Steinmetz neural networks. Our numerical experiments demonstrate the improved performance and robustness to additive noise, afforded by our proposed networks on benchmark datasets and synthetic examples.


[508] 2409.10076

Optimizing Dysarthria Wake-Up Word Spotting: An End-to-End Approach for SLT 2024 LRDWWS Challenge

Speech has emerged as a widely embraced user interface across diverse applications. However, for individuals with dysarthria, the inherent variability in their speech poses significant challenges. This paper presents an end-to-end Pretrain-based Dual-filter Dysarthria Wake-up word Spotting (PD-DWS) system for the SLT 2024 Low-Resource Dysarthria Wake-Up Word Spotting Challenge. Specifically, our system improves performance from two key perspectives: audio modeling and dual-filter strategy. For audio modeling, we propose an innovative 2branch-d2v2 model based on the pre-trained data2vec2 (d2v2), which can simultaneously model automatic speech recognition (ASR) and wake-up word spotting (WWS) tasks through a unified multi-task finetuning paradigm. Additionally, a dual-filter strategy is introduced to reduce the false accept rate (FAR) while maintaining the same false reject rate (FRR). Experimental results demonstrate that our PD-DWS system achieves an FAR of 0.00321 and an FRR of 0.005, with a total score of 0.00821 on the test-B eval set, securing first place in the challenge.


[509] 2409.10077

LLM-DER:A Named Entity Recognition Method Based on Large Language Models for Chinese Coal Chemical Domain

Domain-specific Named Entity Recognition (NER), whose goal is to recognize domain-specific entities and their categories, provides important support for constructing domain knowledge graphs. Deep learning-based methods are currently widely used and effective in NER tasks, but they rely on large-scale labeled data; the scarcity of labeled data in a specific domain therefore limits their application. Consequently, many studies have introduced few-shot methods and achieved some results. However, entity structures in specific domains are often complex, and current few-shot methods struggle to adapt to NER tasks with such complex features. Taking the Chinese coal chemical industry domain as an example, there exist complex structures in which multiple entities share a single entity, as well as multiple relationships for the same pair of entities, which affects the NER task under low-sample conditions. In this paper, we propose LLM-DER, a Large Language Model (LLM)-based entity recognition framework for the domain-specific entity recognition problem in Chinese. LLM-DER enriches entity information by using LLMs to generate a list of relationships containing entity types, and designs a plausibility and consistency evaluation method to remove misrecognized entities, which can effectively solve the complex-structure entity recognition problem in a specific domain. Experimental results on the Resume dataset and the self-constructed coal chemical dataset Coal show that LLM-DER performs outstandingly in domain-specific entity recognition, not only outperforming the existing GPT-3.5-turbo baseline but also exceeding the fully-supervised baseline, verifying its effectiveness.


[510] 2409.10078

IRIS: Interactive Responsive Intelligent Segmentation for 3D Affordance Analysis

Recent advancements in large language and vision-language models have significantly enhanced multimodal understanding, yet translating high-level linguistic instructions into precise robotic actions in 3D space remains challenging. This paper introduces IRIS (Interactive Responsive Intelligent Segmentation), a novel training-free multimodal system for 3D affordance segmentation, alongside a benchmark for evaluating interactive language-guided affordance in everyday environments. IRIS integrates a large multimodal model with a specialized 3D vision network, enabling seamless fusion of 2D and 3D visual understanding with language comprehension. To facilitate evaluation, we present a dataset of 10 typical indoor environments, each with 50 images annotated for object actions and 3D affordance segmentation. Extensive experiments demonstrate IRIS's capability in handling interactive 3D affordance segmentation tasks across diverse settings, showcasing competitive performance across various metrics. Our results highlight IRIS's potential for enhancing human-robot interaction based on affordance understanding in complex indoor environments, advancing the development of more intuitive and efficient robotic systems for real-world applications.


[511] 2409.10079

Protocol for identifying shared articulatory features of gestures and LSF: application to epistemic gesture

This article focuses on the articulatory characteristics of epistemic gestures (i.e., gestures used to express certainty or uncertainty) in co-speech gestures (CSG) in French and in French Sign Language (LSF). It presents a new methodology for analysis, which relies on the complementary use of manual annotation (using Typannot) and semi-automatic annotation (using AlphaPose) to highlight the kinesiological characteristics of these epistemic gestures. The presented methodology makes it possible to analyze the flexion/extension movements of the head in epistemic contexts. The results of this analysis show that in CSG and LSF: (1) head nods passing through the neutral position (i.e., head straight with no flexion/extension) and high movement speed are markers of certainty; and (2) holding the head position away from the neutral position and low movement speed indicate uncertainty. This study is conducted within the framework of the ANR LexiKHuM project, which develops kinesthetic communication solutions for human-machine interaction.


[512] 2409.10080

DAE-Fuse: An Adaptive Discriminative Autoencoder for Multi-Modality Image Fusion

Multi-modality image fusion aims to integrate complementary data information from different imaging modalities into a single image. Existing methods often generate either blurry fused images that lose fine-grained semantic information or unnatural fused images that appear perceptually cropped from the inputs. In this work, we propose a novel two-phase discriminative autoencoder framework, termed DAE-Fuse, that generates sharp and natural fused images. In the adversarial feature extraction phase, we introduce two discriminative blocks into the encoder-decoder architecture, providing an additional adversarial loss to better guide feature extraction by reconstructing the source images. The two discriminative blocks are then adapted in the attention-guided cross-modality fusion phase to distinguish the structural differences between the fused output and the source inputs, injecting more naturalness into the results. Extensive experiments on public infrared-visible, medical image fusion, and downstream object detection datasets demonstrate our method's superiority and generalizability in both quantitative and qualitative evaluations.


[513] 2409.10081

Messy Code Makes Managing ML Pipelines Difficult? Just Let LLMs Rewrite the Code!

Machine learning (ML) applications that learn from data are increasingly used to automate impactful decisions. Unfortunately, these applications often fall short of adequately managing critical data and complying with upcoming regulations. A technical reason for the persistence of these issues is that the data pipelines in common ML libraries and cloud services lack fundamental declarative, data-centric abstractions. Recent research has shown how such abstractions enable techniques like provenance tracking and automatic inspection to help manage ML pipelines. Unfortunately, these approaches lack adoption in the real world because they require clean ML pipeline code written with declarative APIs, instead of the messy imperative Python code that data scientists typically write for data preparation. We argue that it is unrealistic to expect data scientists to change their established development practices. Instead, we propose to circumvent this "code abstraction gap" by leveraging the code generation capabilities of large language models (LLMs). Our idea is to rewrite messy data science code to a custom-tailored declarative pipeline abstraction, which we implement as a proof-of-concept in our prototype Lester. We detail its application for a challenging compliance management example involving "incremental view maintenance" of deployed ML pipelines. The code rewrites for our running example show the potential of LLMs to make messy data science code declarative, e.g., by identifying hand-coded joins in Python and turning them into joins on dataframes, or by generating declarative feature encoders from NumPy code.
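
As a hypothetical illustration of the kind of rewrite the abstract describes (the function and column names below are ours, not from the Lester prototype), an LLM could turn a hand-coded Python join into a declarative dataframe join:

    import pandas as pd

    # Messy imperative style: a hand-coded join via a dictionary lookup.
    def join_messy(orders, customers):
        name_by_id = {}
        for _, row in customers.iterrows():
            name_by_id[row["customer_id"]] = row["name"]
        names = []
        for _, row in orders.iterrows():
            names.append(name_by_id.get(row["customer_id"]))
        orders = orders.copy()
        orders["name"] = names
        return orders

    # Declarative rewrite: the same logic as a dataframe join, which makes
    # provenance tracking and automatic inspection straightforward.
    def join_declarative(orders, customers):
        return orders.merge(customers[["customer_id", "name"]],
                            on="customer_id", how="left")

    orders = pd.DataFrame({"customer_id": [1, 2, 3], "amount": [10.0, 5.5, 8.0]})
    customers = pd.DataFrame({"customer_id": [1, 2], "name": ["Ada", "Linus"]})
    print(join_declarative(orders, customers))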


[514] 2409.10085

A Riemannian Approach to Ground Metric Learning for Optimal Transport

Optimal transport (OT) theory has attracted much attention in machine learning and signal processing applications. OT defines a notion of distance between probability distributions of source and target data points. A crucial factor that influences OT-based distances is the ground metric of the embedding space in which the source and target data points lie. In this work, we propose to learn a suitable latent ground metric parameterized by a symmetric positive definite matrix. We use the rich Riemannian geometry of symmetric positive definite matrices to jointly learn the OT distance along with the ground metric. Empirical results illustrate the efficacy of the learned metric in OT-based domain adaptation.
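
A minimal sketch of the setup (our illustration: a squared Mahalanobis ground cost parameterized by an SPD matrix M, plugged into generic entropic-regularized OT; the paper's Riemannian learning of M is not shown):

    import numpy as np

    def spd_ground_cost(X, Y, M):
        # Squared Mahalanobis ground metric c(x, y) = (x - y)^T M (x - y),
        # with M symmetric positive definite.
        diff = X[:, None, :] - Y[None, :, :]          # (n, m, d)
        return np.einsum("nmd,de,nme->nm", diff, M, diff)

    def sinkhorn(C, a, b, eps=1.0, n_iter=500):
        # Entropic-regularized OT (Sinkhorn iterations) for cost matrix C.
        K = np.exp(-C / eps)
        u = np.ones_like(a)
        for _ in range(n_iter):
            v = b / (K.T @ u)
            u = a / (K @ v)
        P = u[:, None] * K * v[None, :]               # transport plan
        return np.sum(P * C)                          # OT cost under M

    rng = np.random.default_rng(0)
    X, Y = rng.normal(size=(5, 3)), rng.normal(size=(6, 3))
    A = rng.normal(size=(3, 3))
    M = A @ A.T / 3 + np.eye(3)                       # a generic SPD matrix
    a, b = np.full(5, 1 / 5), np.full(6, 1 / 6)
    print(sinkhorn(spd_ground_cost(X, Y, M), a, b))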


[515] 2409.10090

MotionCom: Automatic and Motion-Aware Image Composition with LLM and Video Diffusion Prior

This work presents MotionCom, a training-free, motion-aware diffusion-based image composition method that enables automatic and seamless integration of target objects into new scenes with dynamically coherent results, without finetuning or optimization. Traditional approaches in this area suffer from two significant limitations: they require manual planning for object placement and often generate static compositions lacking motion realism. MotionCom addresses these issues by utilizing a Large Vision Language Model (LVLM) for intelligent planning and a video diffusion prior for motion-infused image synthesis, streamlining the composition process. Our multi-modal Chain-of-Thought (CoT) prompting with the LVLM automates the strategic placement planning of foreground objects, considering their potential motion and interaction within the scenes. Complementing this, we propose a novel method, MotionPaint, to distill motion-aware information from pretrained video diffusion models in the generation phase, ensuring that these objects are not only seamlessly integrated but also endowed with realistic motion. Extensive quantitative and qualitative results highlight MotionCom's superiority, showcasing its efficiency in streamlining the planning process and its capability to produce compositions that authentically depict motion and interaction.


[516] 2409.10094

DDoS: Diffusion Distribution Similarity for Out-of-Distribution Detection

Out-of-Distribution (OoD) detection determines whether the given samples are from the training distribution of the classifier-under-protection, i.e., the In-Distribution (InD), or from a different OoD. Recent research introduces diffusion models pre-trained on InD data to aid OoD detection by transferring an OoD image into a generated one that is close to InD, so that one can capture the distribution disparities between original and generated images to detect OoD data. Existing diffusion-based detectors adopt perceptual metrics on the two images to measure such disparities, but ignore a fundamental fact: perceptual metrics are devised essentially for human-perceived similarities of low-level image patterns, e.g., textures and colors, and are not suitable for evaluating distribution disparities, since images with different low-level patterns can still come from the same distribution. To address this issue, we formulate a diffusion-based detection framework that considers the distribution similarity between a tested image and its generated counterpart via a novel proper similarity metric in the informative feature space and probability space learned by the classifier-under-protection. An anomaly-removal strategy is further presented to enlarge such distribution disparities by removing abnormal OoD information in the feature space to facilitate the detection. Extensive empirical results unveil the insufficiency of perceptual metrics and the effectiveness of our distribution similarity framework with new state-of-the-art detection performance.
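
As a rough illustration of a distribution-space (rather than perceptual) disparity score (our simplification, not the paper's exact metric), one could compare the classifier's predictive distributions on the original image and its diffusion-generated counterpart:

    import numpy as np

    def softmax(z):
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()

    def kl(p, q, eps=1e-12):
        # KL(p || q), a simple probability-space disparity score.
        p, q = p + eps, q + eps
        return float(np.sum(p * np.log(p / q)))

    # logits_orig / logits_gen stand for the protected classifier's outputs
    # on the test image and on its generated counterpart (hypothetical values).
    logits_orig = np.array([2.0, 0.5, -1.0])
    logits_gen = np.array([0.1, 1.8, -0.5])
    score = kl(softmax(logits_orig), softmax(logits_gen))
    print("OoD score (higher = larger distribution disparity):", score)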


[517] 2409.10095

Human Insights Driven Latent Space for Different Driving Perspectives: A Unified Encoder for Efficient Multi-Task Inference

Autonomous driving holds great potential to transform road safety and traffic efficiency by minimizing human error and reducing congestion. A key challenge in realizing this potential is the accurate estimation of steering angles, which is essential for effective vehicle navigation and control. Recent breakthroughs in deep learning have made it possible to estimate steering angles directly from raw camera inputs. However, the limited available navigation data can hinder optimal feature learning, impacting the system's performance in complex driving scenarios. In this paper, we propose a shared encoder trained on multiple computer vision tasks critical for urban navigation, such as depth, pose, and 3D scene flow estimation, as well as semantic, instance, panoptic, and motion segmentation. By incorporating diverse visual information used by humans during navigation, this unified encoder might enhance steering angle estimation. To achieve effective multi-task learning within a single encoder, we introduce a multi-scale feature network for pose estimation to improve depth learning. Additionally, we employ knowledge distillation from a multi-backbone model pretrained on these navigation tasks to stabilize training and boost performance. Our findings demonstrate that a shared backbone trained on diverse visual tasks is capable of providing overall perception capabilities. While our performance in steering angle estimation is comparable to existing methods, the integration of human-like perception through multi-task learning holds significant potential for advancing autonomous driving systems. More details and the pretrained model are available at https://hi-computervision.github.io/uni-encoder/.


[518] 2409.10096

Robust Reinforcement Learning with Dynamic Distortion Risk Measures

In a reinforcement learning (RL) setting, the agent's optimal strategy heavily depends on her risk preferences and the underlying model dynamics of the training environment. These two aspects influence the agent's ability to make well-informed and time-consistent decisions when facing testing environments. In this work, we devise a framework to solve robust risk-aware RL problems where we simultaneously account for environmental uncertainty and risk with a class of dynamic robust distortion risk measures. Robustness is introduced by considering all models within a Wasserstein ball around a reference model. We estimate such dynamic robust risk measures using neural networks by making use of strictly consistent scoring functions, derive policy gradient formulae using the quantile representation of distortion risk measures, and construct an actor-critic algorithm to solve this class of robust risk-aware RL problems. We demonstrate the performance of our algorithm on a portfolio allocation example.
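
For orientation, in one common convention a (static) distortion risk measure with distortion function $g$ admits the quantile representation

    \rho_g(X) = \int_0^1 F_X^{-1}(u)\, \mathrm{d}\tilde{g}(u), \qquad \tilde{g}(u) = 1 - g(1-u),

where $F_X^{-1}$ is the quantile function of $X$; choosing $g(u) = \min\{u/(1-\alpha),\, 1\}$ recovers $\mathrm{CVaR}_\alpha$. The policy gradient formulae in the paper build on this type of quantile representation, extended to the dynamic and robust setting.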


[519] 2409.10098

An integrated design of robust decentralized observer and controller for load frequency control

This paper focuses on designing completely decentralized load frequency control (LFC) for multi-area power systems to achieve globally optimized performance. To this end, a new concept of integrated design is introduced for designing the decentralized LFC observers and controllers simultaneously off-line, taking into account the interactions between areas and the bidirectional effects between the local observer and controller in each area. The integrated design in this paper is realized via $H_\infty$ optimization with a single-step linear matrix inequality (LMI) formulation. The LMI regional eigenvalue assignment technique is further incorporated with $H_\infty$ optimization to improve the closed-loop system transient performance. A three-area power system is simulated to validate the superiority of the proposed integrated design over conventional decentralized designs.


[520] 2409.10101

Adaptive Segmentation-Based Initialization for Steered Mixture of Experts Image Regression

Kernel image regression methods have been shown to provide excellent efficiency in many image processing tasks, such as image and light-field compression, Gaussian Splatting, denoising, and super-resolution. Parameter estimation for these methods frequently employs gradient descent iterative optimization, which poses a significant computational burden for many applications. In this paper, we introduce a novel adaptive segmentation-based initialization method targeted at optimizing Steered-Mixture-of-Experts (SMoE) gating networks and Radial-Basis-Function (RBF) networks with steering kernels. The novel initialization method allocates kernels into pre-calculated image segments. The optimal number of kernels, kernel positions, and steering parameters are derived per segment in an iterative optimization and kernel sparsification procedure. The kernel information from "local" segments is then transferred into a "global" initialization, ready for use in iterative optimization of SMoE, RBF, and related kernel image regression methods. Results show that drastic objective and subjective quality improvements are achievable compared to widely used regular grid initialization, state-of-the-art K-Means initialization, and previously introduced segmentation-based initialization methods, while also drastically improving the sparsity of the regression models. For the same quality, the novel initialization results in models with around 50% fewer kernels. In addition, a significant reduction of convergence time is achieved, with overall run-time savings of up to 50%. The segmentation-based initialization strategy itself admits heavy parallel computation; in theory, it may be divided into as many tasks as there are segments in the images. Using only four parallel GPUs, run-time savings of 50% are already achievable for initialization.
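
A generic sketch of the "local" step (our simplification: one kernel per segment, with position at the segment centroid and steering from the coordinate covariance of the segment's pixels; the paper additionally optimizes the number of kernels per segment and sparsifies them):

    import numpy as np

    def init_kernels_from_segments(labels):
        # labels: (H, W) integer segment map from any image segmentation.
        # Returns, per segment, a kernel position (centroid) and a steering
        # matrix (2x2 coordinate covariance) as a simple local initialization.
        ys, xs = np.indices(labels.shape)
        kernels = {}
        for seg in np.unique(labels):
            mask = labels == seg
            pts = np.stack([ys[mask], xs[mask]], axis=1).astype(float)
            mu = pts.mean(axis=0)
            cov = np.cov(pts.T) + 1e-6 * np.eye(2)   # regularized steering
            kernels[int(seg)] = (mu, cov)
        return kernels

    # Toy segment map: left and right halves of an 8x8 image.
    labels = np.zeros((8, 8), dtype=int)
    labels[:, 4:] = 1
    for seg, (mu, cov) in init_kernels_from_segments(labels).items():
        print(seg, mu, cov.diagonal())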


[521] 2409.10102

Trustworthiness in Retrieval-Augmented Generation Systems: A Survey

Retrieval-Augmented Generation (RAG) has quickly grown into a pivotal paradigm in the development of Large Language Models (LLMs). While much of the current research in this field focuses on performance optimization, particularly in terms of accuracy and efficiency, the trustworthiness of RAG systems remains an area still under exploration. From a positive perspective, RAG systems promise to enhance LLMs by providing them with useful and up-to-date knowledge from vast external databases, thereby mitigating the long-standing problem of hallucination. From a negative perspective, RAG systems risk generating undesirable content if the retrieved information is either inappropriate or poorly utilized. To address these concerns, we propose a unified framework that assesses the trustworthiness of RAG systems across six key dimensions: factuality, robustness, fairness, transparency, accountability, and privacy. Within this framework, we thoroughly review the existing literature on each dimension. Additionally, we construct an evaluation benchmark covering the six dimensions and conduct comprehensive evaluations of a variety of proprietary and open-source models. Finally, we identify the potential challenges for future research based on our investigation results. Through this work, we aim to lay a structured foundation for future investigations and provide practical insights for enhancing the trustworthiness of RAG systems in real-world applications.


[522] 2409.10103

Self-Supervised Syllable Discovery Based on Speaker-Disentangled HuBERT

Self-supervised speech representation learning has become essential for extracting meaningful features from untranscribed audio. Recent advances highlight the potential of deriving discrete symbols from the features correlated with linguistic units, which enables text-less training across diverse tasks. In particular, sentence-level Self-Distillation of the pretrained HuBERT (SD-HuBERT) induces syllabic structures within latent speech frame representations extracted from an intermediate Transformer layer. In SD-HuBERT, sentence-level representation is accumulated from speech frame features through self-attention layers using a special CLS token. However, we observe that the information aggregated in the CLS token correlates more with speaker identity than with linguistic content. To address this, we propose a speech-only self-supervised fine-tuning approach that separates syllabic units from speaker information. Our method introduces speaker perturbation as data augmentation and adopts a frame-level training objective to prevent the CLS token from aggregating paralinguistic information. Experimental results show that our approach surpasses the current state-of-the-art method in most syllable segmentation and syllabic unit quality metrics on Librispeech, underscoring its effectiveness in promoting syllabic organization within speech-only models.


[523] 2409.10104

A Comparative Study of Open Source Computer Vision Models for Application on Small Data: The Case of CFRP Tape Laying

In the realm of industrial manufacturing, Artificial Intelligence (AI) is playing an increasing role, from automating existing processes to aiding in the development of new materials and techniques. However, a significant challenge arises in smaller, experimental processes characterized by limited training data availability, raising the question of whether AI models can be trained in such small data contexts. In this work, we explore the potential of Transfer Learning to address this challenge, specifically investigating the minimum amount of data required to develop a functional AI model. For this purpose, we consider the use case of quality control of Carbon Fiber Reinforced Polymer (CFRP) tape laying in aerospace manufacturing using optical sensors. We investigate the behavior of different open-source computer vision models under a continuous reduction of the training data. Our results show that the amount of data required to successfully train an AI model can be drastically reduced, and that the use of smaller models does not necessarily lead to a loss of performance.


[524] 2409.10106

Industry 6.0: New Generation of Industry driven by Generative AI and Swarm of Heterogeneous Robots

This paper presents the concept of Industry 6.0, introducing the world's first fully automated production system that autonomously handles the entire product design and manufacturing process based on user-provided natural language descriptions. By leveraging generative AI, the system automates critical aspects of production, including product blueprint design, component manufacturing, logistics, and assembly. A heterogeneous swarm of robots, each equipped with individual AI through integration with Large Language Models (LLMs), orchestrates the production process. The robotic system includes manipulator arms, delivery drones, and 3D printers capable of generating assembly blueprints. The system was evaluated using commercial and open-source LLMs, functioning through APIs and local deployment. A user study demonstrated that the system reduces the average production time to 119.10 minutes, significantly outperforming a team of expert human developers, who averaged 528.64 minutes (an improvement factor of 4.4). Furthermore, in the product blueprinting stage, the system surpassed human CAD operators by an unprecedented factor of 47, completing the task in 0.5 minutes compared to 23.5 minutes. This breakthrough represents a major leap towards fully autonomous manufacturing.


[525] 2409.10109

Analysing Attacks on Blockchain Systems in a Layer-based Approach

Blockchain is a growing decentralized system built for transparency and immutability. There have been several major attacks on blockchain-based systems, leaving a gap in the trustability of this system. This article presents a comprehensive study of 23 attacks on blockchain systems and categorizes them using a layer-based approach. This approach provides an in-depth analysis of the feasibility and motivation of these attacks. In addition, a framework is proposed that enables a systematic analysis of the impact and interconnection of these attacks, thereby providing a means of identifying potential attack vectors and designing appropriate countermeasures to strengthen any blockchain system.


[526] 2409.10111

Evaluating the Efficacy of Instance Incremental vs. Batch Learning in Delayed Label Environments: An Empirical Study on Tabular Data Streaming for Fraud Detection

Real-world tabular learning production scenarios typically involve evolving data streams, where data arrives continuously and its distribution may change over time. In such a setting, most studies in the literature on supervised learning favor the use of instance incremental algorithms due to their ability to adapt to changes in the data distribution. Another significant reason for choosing these algorithms is that they avoid storing observations in memory, as is commonly done in batch incremental settings. However, the design of instance incremental algorithms often assumes immediate availability of labels, which is an optimistic assumption. In many real-world scenarios, such as fraud detection or credit scoring, labels may be delayed. Consequently, batch incremental algorithms are widely used in many real-world tasks. This raises an important question: "In delayed settings, is instance incremental learning the best option regarding predictive performance and computational efficiency?" Unfortunately, this question has not been studied in depth, probably due to the scarcity of real datasets containing delayed information. In this study, we conduct a comprehensive empirical evaluation and analysis of this question using a real-world fraud detection problem and commonly used generated datasets. Our findings indicate that instance incremental learning is not the superior option, considering on one side state-of-the-art instance incremental models such as Adaptive Random Forest (ARF) and on the other side batch learning models such as XGBoost. Additionally, when considering the interpretability of the learning systems, batch incremental solutions tend to be favored. Code: https://github.com/anselmeamekoe/DelayedLabelStream


[527] 2409.10112

On the Bit Error Probability of DMA-Based Systems

Dynamic metasurface antennas (DMAs) are an alternative application of metasurfaces as active reconfigurable antennas with advanced analog signal processing and beamforming capabilities, which have been proposed to replace conventional antenna arrays for next generation transceivers. Motivated by this, we investigate bit error probability (BEP) optimization in a DMA-based system, propose an iterative optimization algorithm that adjusts the transmit precoder and the weights of the DMA elements, prove its convergence, and derive its complexity.


[528] 2409.10117

Multi-Agent Obstacle Avoidance using Velocity Obstacles and Control Barrier Functions

Velocity Obstacles (VO) methods form a paradigm for collision avoidance strategies among moving obstacles and agents. While VO methods perform well in simple multi-agent environments, they do not guarantee safety and can show overly conservative behavior in common situations. In this paper, we propose to combine a VO strategy for guidance with a Control Barrier Function (CBF) approach for safety, which overcomes the overly conservative behavior of VOs and formally guarantees safety. We validate our method in a baseline comparison study, using 2nd order integrator and car-like dynamics. Results support that our method outperforms the baselines w.r.t. path smoothness, collision avoidance, and success rates.
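
A minimal single-integrator sketch of the combination (our illustration, not the paper's formulation): a nominal velocity, e.g. produced by a VO method, is passed through a CBF safety filter; with a single distance-based barrier constraint the filtering QP has a closed-form projection.

    import numpy as np

    def cbf_filter(u_nom, x, x_obs, radius, alpha=1.0):
        # Single-integrator CBF h(x) = ||x - x_obs||^2 - radius^2.
        # Safety requires dh/dt = 2 (x - x_obs)^T u >= -alpha * h(x).
        # min ||u - u_nom||^2 s.t. a^T u >= b has a closed-form projection.
        h = np.dot(x - x_obs, x - x_obs) - radius**2
        a = 2.0 * (x - x_obs)
        b = -alpha * h
        if a @ u_nom >= b:          # nominal command is already safe
            return u_nom
        return u_nom + (b - a @ u_nom) / (a @ a) * a

    x = np.array([0.0, 0.0])
    x_obs = np.array([1.0, 0.0])
    u_vo = np.array([1.0, 0.0])     # nominal velocity heading at the obstacle
    print(cbf_filter(u_vo, x, x_obs, radius=0.5))   # slowed-down safe command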


[529] 2409.10118

Approximating the signature of Brownian motion for high order SDE simulation

The signature is a collection of iterated integrals describing the "shape" of a path. It appears naturally in the Taylor expansions of controlled differential equations and, as a consequence, is arguably the central object within rough path theory. In this paper, we will consider the signature of Brownian motion with time, and present both new and recently developed approximations for some of its integrals. Since these integrals (or equivalent L\'{e}vy areas) are nonlinear functions of the Brownian path, they are not Gaussian and known to be challenging to simulate. To conclude the paper, we will present some applications of these approximations to the high order numerical simulation of stochastic differential equations (SDEs).
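
For context, a standard second-level signature term of a two-dimensional Brownian motion $W = (W^1, W^2)$ is the L\'{e}vy area

    A_{s,t} = \frac{1}{2}\int_s^t \bigl(W^1_r - W^1_s\bigr)\,\mathrm{d}W^2_r - \frac{1}{2}\int_s^t \bigl(W^2_r - W^2_s\bigr)\,\mathrm{d}W^1_r,

a nonlinear, non-Gaussian functional of the path; approximating such terms accurately is what enables the high order SDE schemes discussed in the paper.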


[530] 2409.10124

Ants on the highway

We perform intensive computations of Generalised Langton's Ants, discovering rules with a large number of highways. We depict the structure of some of them, formally proving that the number of highways which are possible for a given rule need not be bounded; moreover, it can be infinite. The frequency with which these highways appear is very unequal within a given generalised ant rule; in some cases these frequencies were found in a ratio of $1/10^7$ in simulations, suggesting that those highways that appear as the only possible asymptotic behaviour of some rules might be accompanied by a large family of very infrequent ones.


[531] 2409.10126

Data-free Non-intrusive Model Reduction for Nonlinear Finite Element Models via Spectral Submanifolds

The theory of spectral submanifolds (SSMs) has emerged as a powerful tool for constructing rigorous, low-dimensional reduced-order models (ROMs) of high-dimensional nonlinear mechanical systems. A direct computation of SSMs requires explicit knowledge of nonlinear coefficients in the equations of motion, which limits their applicability to generic finite-element (FE) solvers. Here, we propose a non-intrusive algorithm for the computation of the SSMs and the associated ROMs up to arbitrary polynomial orders. This non-intrusive algorithm only requires system nonlinearity as a black box and hence, enables SSM-based model reduction via generic finite-element software. Our expressions and algorithms are valid for systems with up to cubic-order nonlinearities, including velocity-dependent nonlinear terms, asymmetric damping, and stiffness matrices, and hence work for a large class of mechanics problems. We demonstrate the effectiveness of the proposed non-intrusive approach over a variety of FE examples of increasing complexity, including a micro-resonator FE model containing more than a million degrees of freedom.


[532] 2409.10127

Joint Beamforming and Illumination Pattern Design for Beam-Hopping LEO Satellite Communications

Since hybrid beamforming (HBF) can approach the performance of fully-digital beamforming (FDBF) with much lower hardware complexity, we investigate the HBF design for beam-hopping (BH) low earth orbit (LEO) satellite communications (SatComs). Aiming at maximizing the sum-rate of totally illuminated beam positions during the whole BH period, we consider joint beamforming and illumination pattern design subject to the HBF constraints and sum-rate requirements. To address the non-convexity of the HBF constraints, we temporarily replace the HBF constraints with the FDBF constraints. Then we propose an FDBF and illumination pattern random search (FDBF-IPRS) scheme to optimize illumination patterns and fully-digital beamformers using constrained random search and fractional programming methods. To further reduce the computational complexity, we propose an FDBF and illumination pattern alternating optimization (FDBF-IPAO) scheme, where we relax the integer illumination pattern to continuous variables and after finishing all the iterations we quantize the continuous variables into integer ones. Based on the fully-digital beamformers designed by the FDBF-IPRS or FDBF-IPAO scheme, we propose an HBF alternating minimization algorithm to design the hybrid beamformers. Simulation results show that the proposed schemes can achieve satisfactory sum-rate performance for BH LEO SatComs.


[533] 2409.10132

StruEdit: Structured Outputs Enable the Fast and Accurate Knowledge Editing for Large Language Models

As the modern tool of choice for question answering, large language models (LLMs) are expected to deliver answers with up-to-date knowledge. To achieve such ideal question-answering systems, locating and then editing outdated knowledge in the natural language outputs is a general target of popular knowledge editing methods. However, this target is challenging, as both identifying which tokens to edit in the reasoning steps and ensuring the coherence of the revised reasoning chain are difficult tasks. We argue that these challenges stem from the unstructured nature of natural language outputs. To address the above challenges, we propose $\textbf{Stru}$ctural $\textbf{Edit}$ing ($\textbf{StruEdit}$), an improved baseline for knowledge editing. We first prompt LLMs to produce structured outputs consisting of reasoning triplets. Then, StruEdit removes any potentially outdated knowledge and efficiently refills the structured outputs with up-to-date information in a single step. Experimental results show that StruEdit consistently delivers the highest accuracy with lowest latency compared with other knowledge editing methods.
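
A schematic of the structured-editing idea (our illustration; the exact triplet format used by StruEdit is not specified here): reasoning is held as (subject, relation, object) triplets, potentially outdated triplets are identified by their subject-relation pair, and the slots are refilled from the updated knowledge in a single pass.

    # Reasoning expressed as (subject, relation, object) triplets.
    reasoning = [
        ("United Kingdom", "prime_minister", "Boris Johnson"),
        ("Boris Johnson", "birthplace", "New York City"),
    ]

    # Updated knowledge, keyed by (subject, relation).
    updates = {("United Kingdom", "prime_minister"): "Rishi Sunak"}

    def struct_edit(triplets, updates):
        # Remove potentially outdated objects and refill them in one step.
        edited = []
        for s, r, o in triplets:
            o = updates.get((s, r), o)
            edited.append((s, r, o))
        return edited

    print(struct_edit(reasoning, updates))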


[534] 2409.10134

Advancing Towards a Marine Digital Twin Platform: Modeling the Mar Menor Coastal Lagoon Ecosystem in the South Western Mediterranean

Coastal marine ecosystems face mounting pressures from anthropogenic activities and climate change, necessitating advanced monitoring and modeling approaches for effective management. This paper pioneers the development of a Marine Digital Twin Platform aimed at modeling the Mar Menor Coastal Lagoon Ecosystem in the Region of Murcia. The platform leverages Artificial Intelligence to emulate complex hydrological and ecological models, facilitating the simulation of what-if scenarios to predict ecosystem responses to various stressors. We integrate diverse datasets from public sources to construct a comprehensive digital representation of the lagoon's dynamics. The platform's modular design enables real-time stakeholder engagement and informed decision-making in marine management. Our work contributes to the ongoing discourse on advancing marine science through innovative digital twin technologies.


[535] 2409.10135

A hierarchical framework for collision avoidance in robot-assisted minimally invasive surgery

Minimally invasive surgery (MIS) procedures benefit significantly from robotic systems due to their improved precision and dexterity. However, ensuring safety in these dynamic and cluttered environments is an ongoing challenge. This paper proposes a novel hierarchical framework for collision avoidance in MIS. This framework integrates multiple tasks, including maintaining the Remote Center of Motion (RCM) constraint, tracking desired tool poses, avoiding collisions, optimizing manipulability, and adhering to joint limits. The proposed approach utilizes Hierarchical Quadratic Programming (HQP) to seamlessly manage these constraints while enabling smooth transitions between task priorities for collision avoidance. Experimental validation through simulated scenarios demonstrates the framework's robustness and effectiveness in handling diverse scenarios involving static and dynamic obstacles, as well as inter-tool collisions.


[536] 2409.10136

Count2Multiply: Reliable In-memory High-Radix Counting

Big data processing has exposed the limits of compute-centric hardware acceleration due to the memory-to-processor bandwidth bottleneck. Consequently, there has been a shift towards memory-centric architectures, leveraging substantial compute parallelism by processing using the memory elements directly. Computing-in-memory (CIM) proposals for both conventional and emerging memory technologies often target massively parallel operations. However, current CIM solutions face significant challenges. For emerging data-intensive applications, such as advanced machine learning techniques and bioinformatics, where matrix multiplication is a key primitive, memristor crossbars suffer from limited write endurance and expensive write operations. In contrast, while DRAM-based solutions have successfully demonstrated multiplication using additions, they remain prohibitively slow. This paper introduces Count2Multiply, a technology-agnostic digital-CIM method for performing integer-binary and integer-integer matrix multiplications using high-radix, massively parallel counting implemented with bitwise logic operations. In addition, Count2Multiply is designed with fault tolerance in mind and leverages traditional scalable row-wise error correction codes, such as Hamming and BCH codes, to protect against the high error rates of existing CIM designs. We demonstrate Count2Multiply with a detailed application to CIM in conventional DRAM due to its ubiquity and high endurance. We also explore the acceleration potential of racetrack memories due to their shifting properties, which are natural for Count2Multiply, and their high endurance. Compared to the state-of-the-art in-DRAM method, Count2Multiply achieves up to 10x speedup, 3.8x higher GOPS/Watt, and 1.4x higher GOPS/area, while the RTM counterpart offers gains of 10x, 57x, and 3.8x.
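
The underlying principle of multiplication-as-counting can be illustrated in plain Python (our simplification of the general idea, not the paper's in-memory design): for an integer-binary product, each output entry is assembled from per-bit-plane counts of positions where a bit of A and an entry of B are both one.

    import numpy as np

    def int_binary_matmul_by_counting(A, B, bits=8):
        # A: integer matrix, B: binary (0/1) matrix. Each output entry is
        # assembled from per-bit-plane counts of positions where both the
        # bit of A and the entry of B are one -- counting replaces multiplies.
        n, k = A.shape
        k2, m = B.shape
        assert k == k2
        C = np.zeros((n, m), dtype=np.int64)
        for b in range(bits):
            plane = (A >> b) & 1                 # bit-plane of A (0/1 matrix)
            count = plane @ B                    # pure counting: AND, then tally
            C += count.astype(np.int64) << b     # weight the counts by 2^b
        return C

    rng = np.random.default_rng(0)
    A = rng.integers(0, 200, size=(3, 5))
    B = rng.integers(0, 2, size=(5, 4))
    assert np.array_equal(int_binary_matmul_by_counting(A, B), A @ B)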


[537] 2409.10139

Towards Explainable Automated Data Quality Enhancement without Domain Knowledge

In the era of big data, ensuring the quality of datasets has become increasingly crucial across various domains. We propose a comprehensive framework designed to automatically assess and rectify data quality issues in any given dataset, regardless of its specific content, focusing on both textual and numerical data. Our primary objective is to address three fundamental types of defects: absence, redundancy, and incoherence. At the heart of our approach lies a rigorous demand for both explainability and interpretability, ensuring that the rationale behind the identification and correction of data anomalies is transparent and understandable. To achieve this, we adopt a hybrid approach that integrates statistical methods with machine learning algorithms. Indeed, by leveraging statistical techniques alongside machine learning, we strike a balance between accuracy and explainability, enabling users to trust and comprehend the assessment process. Acknowledging the challenges associated with automating the data quality assessment process, particularly in terms of time efficiency and accuracy, we adopt a pragmatic strategy, employing resource-intensive algorithms only when necessary, while favoring simpler, more efficient solutions whenever possible. Through a practical analysis conducted on a publicly provided dataset, we illustrate the challenges that arise when trying to enhance data quality while keeping explainability. We demonstrate the effectiveness of our approach in detecting and rectifying missing values, duplicates and typographical errors as well as the challenges remaining to be addressed to achieve similar accuracy on statistical outliers and logic errors under the constraints set in our work.


[538] 2409.10141

PSHuman: Photorealistic Single-view Human Reconstruction using Cross-Scale Diffusion

Detailed and photorealistic 3D human modeling is essential for various applications and has seen tremendous progress. However, full-body reconstruction from a monocular RGB image remains challenging due to the ill-posed nature of the problem and sophisticated clothing topology with self-occlusions. In this paper, we propose PSHuman, a novel framework that explicitly reconstructs human meshes utilizing priors from the multiview diffusion model. It is found that directly applying multiview diffusion on single-view human images leads to severe geometric distortions, especially on generated faces. To address it, we propose a cross-scale diffusion that models the joint probability distribution of global full-body shape and local facial characteristics, enabling detailed and identity-preserved novel-view generation without any geometric distortion. Moreover, to enhance cross-view body shape consistency of varied human poses, we condition the generative model on parametric models like SMPL-X, which provide body priors and prevent unnatural views inconsistent with human anatomy. Leveraging the generated multi-view normal and color images, we present SMPLX-initialized explicit human carving to recover realistic textured human meshes efficiently. Extensive experimental results and quantitative evaluations on CAPE and THuman2.1 datasets demonstrate PSHuman's superiority in geometry details, texture fidelity, and generalization capability.


[539] 2409.10142

AALF: Almost Always Linear Forecasting

Recent works for time-series forecasting increasingly leverage the high predictive power of Deep Learning models. With this increase in model complexity, however, comes a lack of understanding of the underlying model decision process, which is problematic for high-stakes decision making. At the same time, simple, interpretable forecasting methods such as Linear Models can still perform very well, sometimes on par with Deep Learning approaches. We argue that simple models are good enough most of the time, and that forecasting performance can be improved by choosing a Deep Learning method only for certain predictions, increasing the overall interpretability of the forecasting process. In this context, we propose a novel online model selection framework which uses meta-learning to identify these predictions and only rarely uses a non-interpretable, large model. An extensive empirical study on various real-world datasets shows that our selection methodology outperforms state-of-the-art online model selection methods in most cases. We find that almost always choosing a simple Linear Model for forecasting results in competitive performance, suggesting that the need for opaque black-box models in time-series forecasting is smaller than recent works would suggest.


[540] 2409.10143

P2U-SLAM: A Monocular Wide-FoV SLAM System Based on Point Uncertainty and Pose Uncertainty

This paper presents P2U-SLAM, a visual Simultaneous Localization And Mapping (SLAM) system with a wide Field of View (FoV) camera, which utilizes pose uncertainty and point uncertainty. While the wide FoV enables considerable repetitive observations of historical map points for matching cross-view features, the data properties of the historical map points and the poses of historical keyframes change during the optimization process. Neglecting these data property changes omits part of the information matrix from the optimization and leads to the risk of long-term positioning performance degradation. The purpose of our research is to reduce this risk that wide-FoV visual input poses to the SLAM system. Based on a conditional probability model, this work reveals the definite impact of the above data property changes on the optimization process, concretizes it as point uncertainty and pose uncertainty, and gives a specific mathematical form. P2U-SLAM embeds point uncertainty and pose uncertainty into the tracking module and local mapping, respectively, and updates these uncertainties after each optimization operation, including local mapping, map merging, and loop closing. We present an exhaustive evaluation on 27 sequences from two popular public datasets with wide-FoV visual input. P2U-SLAM shows excellent performance compared with other state-of-the-art methods. The source code will be made publicly available at https://github.com/BambValley/P2U-SLAM.


[541] 2409.10144

Fixed-Parameter Tractability of the (1+1) Evolutionary Algorithm on Random Planted Vertex Covers

We present the first parameterized analysis of a standard (1+1) Evolutionary Algorithm on a distribution of vertex cover problems. We show that if the planted cover is at most logarithmic, restarting the (1+1) EA every $O(n \log n)$ steps will find a cover at least as small as the planted cover in polynomial time for sufficiently dense random graphs $p > 0.71$. For superlogarithmic planted covers, we prove that the (1+1) EA finds a solution in fixed-parameter tractable time in expectation. We complement these theoretical investigations with a number of computational experiments that highlight the interplay between planted cover size, graph density and runtime.


[542] 2409.10146

LLMs4OL 2024 Overview: The 1st Large Language Models for Ontology Learning Challenge

This paper outlines the LLMs4OL 2024, the first edition of the Large Language Models for Ontology Learning Challenge. LLMs4OL is a community development initiative collocated with the 23rd International Semantic Web Conference (ISWC) to explore the potential of Large Language Models (LLMs) in Ontology Learning (OL), a vital process for enhancing the web with structured knowledge to improve interoperability. By leveraging LLMs, the challenge aims to advance understanding and innovation in OL, aligning with the goals of the Semantic Web to create a more intelligent and user-friendly web. In this paper, we give an overview of the 2024 edition of the LLMs4OL challenge and summarize the contributions.


[543] 2409.10151

AutoPET Challenge III: Testing the Robustness of Generalized Dice Focal Loss trained 3D Residual UNet for FDG and PSMA Lesion Segmentation from Whole-Body PET/CT Images

Automated segmentation of cancerous lesions in PET/CT scans is a crucial first step in quantitative image analysis. However, training deep learning models for segmentation with high accuracy is particularly challenging due to the variations in lesion size, shape, and radiotracer uptake. These lesions can appear in different parts of the body, often near healthy organs that also exhibit considerable uptake, making the task even more complex. As a result, creating an effective segmentation model for routine PET/CT image analysis is challenging. In this study, we utilized a 3D Residual UNet model and employed the Generalized Dice Focal Loss function to train the model on the AutoPET Challenge 2024 dataset. We conducted a 5-fold cross-validation and used an average ensembling technique using the models from the five folds. In the preliminary test phase for Task-1, the average ensemble achieved a mean Dice Similarity Coefficient (DSC) of 0.6687, mean false negative volume (FNV) of 10.9522 ml and mean false positive volume (FPV) 2.9684 ml. More details about the algorithm can be found on our GitHub repository: https://github.com/ahxmeds/autosegnet2024.git. The training code has been shared via the repository: https://github.com/ahxmeds/autopet2024.git.


[544] 2409.10155

Efficient approximation schemes for scheduling on a stochastic number of machines

We study three two-stage optimization problems with a similar structure and different objectives. In the first stage of each problem, the goal is to assign input jobs of positive sizes to unsplittable bags. After this assignment is decided, the realization of the number of identical machines that will be available is revealed. Then, in the second stage, the bags are assigned to machines. The probability vector of the number of machines in the second stage is known to the algorithm as part of the input before making the decisions of the first stage. Thus, the vector of machine completion times is a random variable. The goal of the first problem is to minimize the expected value of the makespan of the second stage schedule, while the goal of the second problem is to maximize the expected value of the minimum completion time of the machines in the second stage solution. The goal of the third problem is to minimize the $\ell_p$ norm for a fixed $p>1$, where the norm is applied on machines' completion times vectors. Each one of the first two problems admits a PTAS, as Buchem et al. showed recently. Here we significantly improve all their results by designing an EPTAS for each one of these problems. We also design an EPTAS for $\ell_p$ norm minimization for any $p>1$.


[545] 2409.10156

Contrastive Learning for Character Detection in Ancient Greek Papyri

This thesis investigates the effectiveness of SimCLR, a contrastive learning technique, in Greek letter recognition, focusing on the impact of various augmentation techniques. We pretrain the SimCLR backbone using the Alpub dataset (pretraining dataset) and fine-tune it on a smaller ICDAR dataset (finetuning dataset) to compare SimCLR's performance against traditional baseline models, which use cross-entropy and triplet loss functions. Additionally, we explore the role of different data augmentation strategies, essential for the SimCLR training process. Methodologically, we examine three primary approaches: (1) a baseline model using cross-entropy loss, (2) a triplet embedding model with a classification layer, and (3) a SimCLR pretrained model with a classification layer. Initially, we train the baseline, triplet, and SimCLR models using 93 augmentations on ResNet-18 and ResNet-50 networks with the ICDAR dataset. From these, the top four augmentations are selected using a statistical t-test. Pretraining of SimCLR is conducted on the Alpub dataset, followed by fine-tuning on the ICDAR dataset. The triplet loss model undergoes a similar process, being pretrained on the top four augmentations before fine-tuning on ICDAR. Our experiments show that SimCLR does not outperform the baselines in letter recognition tasks. The baseline model with cross-entropy loss demonstrates better performance than both SimCLR and the triplet loss model. This study provides a detailed evaluation of contrastive learning for letter recognition, highlighting SimCLR's limitations while emphasizing the strengths of traditional supervised learning models in this task. We believe SimCLR's cropping strategies may cause a semantic shift in the input image, reducing training effectiveness despite the large pretraining dataset. Our code is available at https://github.com/DIVA-DIA/MT_augmentation_and_contrastive_learning/.
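
For reference, the contrastive objective at the core of SimCLR is the NT-Xent loss; a compact NumPy version (our generic sketch, not the thesis code) is:

    import numpy as np

    def nt_xent(z1, z2, temperature=0.5):
        # NT-Xent (normalized temperature-scaled cross-entropy) loss for a
        # batch of N positive pairs (z1[i], z2[i]) of embedding vectors.
        z = np.concatenate([z1, z2], axis=0)
        z = z / np.linalg.norm(z, axis=1, keepdims=True)
        n = len(z1)
        sim = z @ z.T / temperature
        np.fill_diagonal(sim, -np.inf)           # a sample is not its own pair
        # The positive for index i is its augmented twin at i + n (mod 2n).
        pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
        logprob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
        return -np.mean(logprob[np.arange(2 * n), pos])

    rng = np.random.default_rng(0)
    z1 = rng.normal(size=(4, 16))
    z2 = z1 + 0.05 * rng.normal(size=(4, 16))    # two "augmented views"
    print(nt_xent(z1, z2))                       # aligned pairs -> low loss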


[546] 2409.10160

Efficient Network Embedding by Approximate Equitable Partitions

Structural network embedding is a crucial step in enabling effective downstream tasks for complex systems; it aims to project a network into a lower-dimensional space while preserving similarities among nodes. We introduce a simple and efficient embedding technique based on approximate variants of equitable partitions. The approximation consists in introducing a user-tunable tolerance parameter that relaxes the otherwise strict condition for exact equitable partitions, which can hardly be found in real-world networks. We exploit a relationship between equitable partitions and equivalence relations for Markov chains and ordinary differential equations to develop a partition refinement algorithm for computing an approximate equitable partition in polynomial time. We compare our method against state-of-the-art embedding techniques on benchmark networks. We report comparable -- when not superior -- performance for visualization, classification, and regression tasks at a cost between one and three orders of magnitude smaller using a prototype implementation, enabling the embedding of large-scale networks which could not be efficiently handled by most of the competing techniques.
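
A compact sketch of the refinement idea (our generic illustration; the paper's algorithm and its tolerance semantics may differ, here tolerance is approximated by bucketing neighbour counts):

    from collections import defaultdict

    def approx_equitable_partition(adj, tol=0):
        # adj: dict node -> set of neighbours. Nodes start in one block and
        # are repeatedly split by their (bucketed) neighbour counts per block.
        # tol = 0 yields the coarsest exact equitable partition; tol > 0
        # merges nearby counts, relaxing the strict condition.
        block = {v: 0 for v in adj}
        while True:
            sig = {}
            for v in adj:
                counts = defaultdict(int)
                for w in adj[v]:
                    counts[block[w]] += 1
                # Bucket counts so that counts within ~tol fall together.
                sig[v] = (block[v], tuple(sorted(
                    (blk, c // (tol + 1)) for blk, c in counts.items())))
            ids = {s: i for i, s in enumerate(sorted(set(sig.values())))}
            new_block = {v: ids[sig[v]] for v in adj}
            if new_block == block:
                return block
            block = new_block

    # Path graph 0-1-2-3: endpoints and inner nodes separate when tol = 0.
    adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
    print(approx_equitable_partition(adj))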


[547] 2409.10161

SplatSim: Zero-Shot Sim2Real Transfer of RGB Manipulation Policies Using Gaussian Splatting

Sim2Real transfer, particularly for manipulation policies relying on RGB images, remains a critical challenge in robotics due to the significant domain shift between synthetic and real-world visual data. In this paper, we propose SplatSim, a novel framework that leverages Gaussian Splatting as the primary rendering primitive to reduce the Sim2Real gap for RGB-based manipulation policies. By replacing traditional mesh representations with Gaussian Splats in simulators, SplatSim produces highly photorealistic synthetic data while maintaining the scalability and cost-efficiency of simulation. We demonstrate the effectiveness of our framework by training manipulation policies within SplatSim and deploying them in the real world in a zero-shot manner, achieving an average success rate of 86.25%, compared to 97.5% for policies trained on real-world data.


[548] 2409.10164

Quantile Regression for Distributional Reward Models in RLHF

Reinforcement learning from human feedback (RLHF) has become a key method for aligning large language models (LLMs) with human preferences through the use of reward models. However, traditional reward models typically generate point estimates, which oversimplify the diversity and complexity of human values and preferences. In this paper, we introduce Quantile Reward Models (QRMs), a novel approach to reward modeling that learns a distribution over rewards instead of a single scalar value. Our method uses quantile regression to estimate a full, potentially multimodal distribution over preferences, providing a more powerful and nuanced representation of preferences. This distributional approach can better capture the diversity of human values, addresses label noise, and accommodates conflicting preferences by modeling them as distinct modes in the distribution. Our experimental results show that QRM outperforms comparable traditional point-estimate models on RewardBench. Furthermore, we demonstrate that the additional information provided by the distributional estimates can be utilized in downstream applications, such as risk-aware reinforcement learning, resulting in LLM policies that generate fewer extremely negative responses. Our code and model are released at https://github.com/Nicolinho/QRM.
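
At the core of quantile regression is the pinball loss; a minimal sketch (generic formulation, not the paper's training code) shows that minimizing it for a target quantile level tau recovers the empirical tau-quantile of the reward distribution:

    import numpy as np

    def pinball_loss(y, pred, tau):
        # Quantile (pinball) loss: minimizing it drives pred toward the
        # tau-quantile of the reward distribution given the input.
        diff = y - pred
        return np.mean(np.maximum(tau * diff, (tau - 1) * diff))

    # Toy check: for samples from a fixed distribution, the loss is
    # minimized near the empirical tau-quantile.
    rng = np.random.default_rng(0)
    rewards = rng.normal(size=10_000)
    for tau in (0.1, 0.5, 0.9):
        grid = np.linspace(-3, 3, 601)
        best = grid[np.argmin([pinball_loss(rewards, g, tau) for g in grid])]
        print(tau, round(best, 2), round(np.quantile(rewards, tau), 2))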


[549] 2409.10165

Maneuver Decision-Making with Trajectory Streams Prediction for Autonomous Vehicles

Decision-making, motion planning, and trajectory prediction are crucial in autonomous driving systems. By accurately forecasting the movements of other road users, the decision-making capabilities of the autonomous system can be enhanced, making it more effective in responding to dynamic and unpredictable environments and more adaptive to diverse road scenarios. This paper presents the FFStreams++ approach for decision-making and motion planning of different maneuvers, including unprotected left turn, overtaking, and keep-lane. FFStreams++ is a combination of sampling-based and search-based approaches, where iteratively new sampled trajectories for different maneuvers are generated and optimized, and afterward, a heuristic search planner is called, searching for an optimal plan. We model the autonomous driving system in the Planning Domain Definition Language (PDDL) and search for the optimal plan using a heuristic Fast-Forward planner. In this approach, the initial state of the problem is modified iteratively through streams, which will generate maneuver-specific trajectory candidates, increasing the iterating level until an optimal plan is found. FFStreams++ integrates a query-connected network model for predicting possible future trajectories for each surrounding obstacle along with their probabilities. The proposed approach was tested on the CommonRoad simulation framework. We use a collection of randomly generated driving scenarios for overtaking and unprotected left turns at intersections to evaluate the FFStreams++ planner. The test results confirmed that the proposed approach can effectively execute various maneuvers to ensure safety and reduce the risk of collisions with nearby traffic agents.


[550] 2409.10168

Algorithmic Behaviors Across Regions: A Geolocation Audit of YouTube Search for COVID-19 Misinformation between the United States and South Africa

Despite being an integral tool for finding health-related information online, YouTube has faced criticism for disseminating COVID-19 misinformation globally to its users. Yet, prior audit studies have predominantly investigated YouTube within the Global North contexts, often overlooking the Global South. To address this gap, we conducted a comprehensive 10-day geolocation-based audit on YouTube to compare the prevalence of COVID-19 misinformation in search results between the United States (US) and South Africa (SA), the countries heavily affected by the pandemic in the Global North and the Global South, respectively. For each country, we selected 3 geolocations and placed sock-puppets, or bots emulating "real" users, that collected search results for 48 search queries sorted by 4 search filters for 10 days, yielding a dataset of 915K results. We found that 31.55% of the top-10 search results contained COVID-19 misinformation. Among the top-10 search results, bots in SA faced significantly more misinformative search results than their US counterparts. Overall, our study highlights the contrasting algorithmic behaviors of YouTube search between two countries, underscoring the need for the platform to regulate algorithmic behavior consistently across different regions of the Globe.


[551] 2409.10170

Minimal Model Counting via Knowledge Compilation

Counting the number of models of a Boolean formula is a fundamental problem in artificial intelligence and reasoning. Minimal models of a Boolean formula are critical in various reasoning systems, making the counting of minimal models essential for detailed inference tasks. Existing research primarily focused on decision problems related to minimal models. In this work, we extend beyond decision problems to address the challenge of counting minimal models. Specifically, we propose a novel knowledge compilation form that facilitates the efficient counting of minimal models. Our approach leverages the idea of justification and incorporates theories from answer set counting.


[552] 2409.10171

Safe and Stable Closed-Loop Learning for Neural-Network-Supported Model Predictive Control

Safe learning of control policies remains challenging, both in optimal control and reinforcement learning. In this article, we consider safe learning of parametrized predictive controllers that operate with incomplete information about the underlying process. To this end, we employ Bayesian optimization for learning the best parameters from closed-loop data. Our method focuses on the system's overall long-term performance in closed-loop while keeping it safe and stable. Specifically, we parametrize the stage cost function of an MPC using a feedforward neural network. This allows for a high degree of flexibility, enabling the system to achieve a better closed-loop performance with respect to a superordinate measure. However, this flexibility also necessitates safety measures, especially with respect to closed-loop stability. To this end, we explicitly incorporate stability information in the Bayesian-optimization-based learning procedure, thereby achieving rigorous probabilistic safety guarantees. The proposed approach is illustrated using a numerical example.


[553] 2409.10172

LiLoc: Lifelong Localization using Adaptive Submap Joining and Egocentric Factor Graph

This paper proposes a versatile graph-based lifelong localization framework, LiLoc, which enhances its timeliness by maintaining a single central session while improving accuracy through multi-modal factors between the central and subsidiary sessions. First, an adaptive submap joining strategy is employed to generate prior submaps (keyframes and poses) for the central session, and to provide priors for subsidiaries when constraints are needed for robust localization. Next, a coarse-to-fine pose initialization for subsidiary sessions is performed using vertical recognition and ICP refinement in the global coordinate frame. To elevate the accuracy of subsequent localization, we propose an egocentric factor graph (EFG) module that integrates IMU preintegration, LiDAR odometry, and scan match factors in a joint optimization manner. Specifically, the scan match factors are constructed by a novel propagation model that efficiently distributes the prior constraints as edges to the relevant prior pose nodes, weighted by noises based on keyframe registration errors. Additionally, the framework supports flexible switching between two modes: relocalization (RLM) and incremental localization (ILM), based on the proposed overlap-based mechanism to select or update the prior submaps from the central session. The proposed LiLoc is tested on public and custom datasets, demonstrating accurate localization performance against state-of-the-art methods. Our codes will be publicly available at https://github.com/Yixin-F/LiLoc.


[554] 2409.10173

jina-embeddings-v3: Multilingual Embeddings With Task LoRA

We introduce jina-embeddings-v3, a novel text embedding model with 570 million parameters that achieves state-of-the-art performance on multilingual data and long-context retrieval tasks, supporting context lengths of up to 8192 tokens. The model includes a set of task-specific Low-Rank Adaptation (LoRA) adapters to generate high-quality embeddings for query-document retrieval, clustering, classification, and text matching. Additionally, Matryoshka Representation Learning is integrated into the training process, allowing flexible truncation of embedding dimensions without compromising performance. Evaluation on the MTEB benchmark shows that jina-embeddings-v3 outperforms the latest proprietary embeddings from OpenAI and Cohere on English tasks, while achieving superior performance compared to multilingual-e5-large-instruct across all multilingual tasks.
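
As a small usage sketch of Matryoshka-style truncation (our illustration; the array shapes and names are hypothetical stand-ins for model outputs), the first k dimensions of an embedding can be kept and re-normalized before computing cosine similarities:

    import numpy as np

    def truncate_and_normalize(emb, k):
        # Keep the first k Matryoshka dimensions and re-normalize so that
        # cosine similarity remains a plain dot product.
        e = emb[..., :k]
        return e / np.linalg.norm(e, axis=-1, keepdims=True)

    rng = np.random.default_rng(0)
    full = rng.normal(size=(2, 1024))              # stand-in for embeddings
    a64, b64 = truncate_and_normalize(full, 64)    # 64-dim "small" embeddings
    print(float(a64 @ b64))                        # cosine similarity at k = 64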


[555] 2409.10175

VideoRun2D: Cost-Effective Markerless Motion Capture for Sprint Biomechanics

Sprinting is a decisive ability, especially in team sports. The kinematics of the sprint have been studied in the past using different methods specially developed considering human biomechanics; among those methods, markerless systems stand out as very cost-effective. On the other hand, there are now multiple general methods for pixel and body tracking based on recent machine learning breakthroughs with excellent performance in body tracking, but these excellent trackers do not generally consider realistic human biomechanics. This investigation first adapts two of these general trackers (MoveNet and CoTracker) for realistic biomechanical analysis and then evaluates them in comparison to manual tracking (with key points manually marked using the software Kinovea). Our best resulting markerless body tracker, particularly adapted for sprint biomechanics, is termed VideoRun2D. The experimental development and assessment of VideoRun2D is reported on forty sprints recorded with a video camera from 5 different subjects, focusing our analysis on 3 key angles in sprint biomechanics: inclination of the trunk, and flexion-extension of the hip and the knee. The CoTracker method showed huge differences compared to the manual labeling approach. However, the angle curves were correctly estimated by the MoveNet method, with errors between 3.2° and 5.5°. In conclusion, our proposed VideoRun2D based on the MoveNet core seems to be a helpful tool for evaluating sprint kinematics in some scenarios. On the other hand, the observed precision of this first version of VideoRun2D as a markerless sprint analysis system may not yet be sufficient for highly demanding applications. Future research lines towards that purpose are also discussed at the end: better tracking post-processing and user- and time-dependent adaptation.


[556] 2409.10176

TCDformer-based Momentum Transfer Model for Long-term Sports Prediction

Accurate sports prediction is a crucial skill for professional coaches, as it can assist in developing effective training strategies and scientific competition tactics. Traditional methods often use complex mathematical statistical techniques to boost predictability, but they are often limited by dataset scale and have difficulty handling long-term predictions with variable distributions, notably underperforming when predicting point-set-game multi-level matches. To deal with this challenge, this paper proposes TM2, a TCDformer-based Momentum Transfer Model for long-term sports prediction, which encompasses a momentum encoding module and a prediction module based on momentum transfer. TM2 initially encodes momentum in large-scale unstructured time series using the local linear scaling approximation (LLSA) module. It then decomposes the reconstructed time series with momentum transfer into trend and seasonal components. The final prediction results are derived from the additive combination of a multilayer perceptron (MLP) for predicting trend components and wavelet attention mechanisms for seasonal components. Comprehensive experimental results show that, on the 2023 Wimbledon men's tournament datasets, TM2 significantly surpasses existing sports prediction models.