Multi-Label Bayesian Active Learning with Inter-Label Relationships

Yuanyuan Qi, Jueqing Lu, Xiaohao Yang, Joanne Enticott, Lan Du1
Monash University,
{yuanyuan.qi, jueqing.lu, xiaohao.yang, joanne.enticott, lan.du}@monash.edu


Abstract

The primary challenge of multi-label active learning, distinguishing it from multi-class active learning, lies in assessing the informativeness of an indefinite number of labels while also accounting for inherent label correlations. Existing studies either require substantial computational resources to leverage correlations or fail to fully explore label dependencies. Additionally, real-world scenarios often require addressing intrinsic biases stemming from imbalanced data distributions. In this paper, we propose a new multi-label active learning strategy to address both challenges. Our method incorporates progressively updated positive and negative correlation matrices to capture co-occurrence and disjoint relationships within the label space of annotated samples, enabling a holistic assessment of uncertainty rather than treating labels as isolated elements. Furthermore, alongside diversity, our model employs ensemble pseudo labeling and Beta scoring rules to address data imbalance. Extensive experiments on four realistic datasets demonstrate that our strategy consistently achieves more reliable and superior performance, compared to several established methods.

1 Introduction

In recent years, a wide range of machine learning models and algorithms have been developed to deal with the exponential growth of real-world data. However, the significant mismatch between the rapid increase in data and the slow pace of manual annotation underscores the imperative of active learning (AL) [1], [2]. Multi-label active learning (MLAL), which considers the co-occurrence of labels and is more aligned with real-world applications, has been explored in different domains, including text classification [3], [4], medical imaging [5], [6], remote sensing [7], [8], and so on. Owing to the complexity of label-wise correlations and imbalanced data distributions, the multi-label task remains challenging, necessitating the development of effective query strategies [9], [10].

To deal with the multi-label issue in active learning, earlier approaches usually transform it into multiple binary classification tasks, known as binary relevance (BR), which sums the informativeness evaluated for each individual label to obtain the final acquisition score [11], [12]. However, these approaches overlook the potential correlation of labels, such as their co-occurrence, which should be factored into the overall information assessment [13], [14]. Consequently, the information inherent in the label correlations of the queried samples may not be fully explored.

Some recent works have employed co-occurrence and label correlation matrices to model these inherent label relationships [15]. However, while positive correlations, indicating strong co-occurrence between labels, have been included, few studies have explored negative correlations, where labels are mutually exclusive and do not appear together. Moreover, asymmetric label-wise correlations, where one label frequently appears with another label without a reciprocal relationship, remain under-explored. This also includes the hierarchical structure of the label set, where node labels inherently belong to and serve as subsets of their corresponding root labels. Due to this hierarchical nature, the selection of overlapping labels affects the diversity of the strategy and, consequently, its overall outcome [16]. Furthermore, given the high imbalance ratios of real-world datasets, addressing data imbalance to maintain consistent performance across different datasets is of critical importance for MLAL [17], [18].

Considering label co-occurrence and data imbalance, we propose a new MLAL framework, named multi-label CoRrelation-Aware active learning with Beta scoring rules (CRAB) in this paper. By incorporating the Beta scoring rules to deal with data imbalance and the expected loss reduction framework to select the most informative data instance, we introduce dynamic positive and negative correlation matrices to handle the distinct and asymmetric label correlation within a Bayesian framework. This approach demonstrates robust and outstanding performance on four benchmark datasets for multi-label active learning.

2 Related Work

Active learning involves selecting the most informative data from the unlabeled pool for annotation, thereby reducing the required training data while maintaining comparable performance. Two mainstream AL query strategies are uncertainty-based and diversity-based approaches [19]; the former concentrates on informative measurement at the sample-level [20], while the latter emphasizes data distribution [21]. To quantify the sample uncertainty, methods such as proper scoring rules [22], Dirichlet distribution [23], and Gaussian Process [24] can be used to estimate the sample informativeness. By aligning the prior and posterior distributions with model output observations, these models can effectively capture the uncertainty of each prediction.

However, focusing exclusively on uncertainty can introduce bias in sampling (i.e., selecting near-identical instances, thus wasting the annotation budget), which may lead to sub-optimal performance [25]. Incorporating diversity into the sampling process offers an alternative approach to enhancing generalization [26]. [27] utilized label cardinality inconsistency to exploit uncertainty and integrated it with diversity-based sampling during data acquisition. PLVI-CE leverages average posterior probability discrepancy to measure data diversity and prediction inconsistency to assess uncertainty, thus enhancing model generalization with limited annotated instances [28]. Recently, [22] proposed BESRA, which uses strictly proper scoring rules. Its acquisition function combines Beta scoring rules and k-means clustering to enhance diversity, while the Beta scoring rules also address the data imbalance common in multi-label datasets. Built on top of BESRA, our framework further takes label correlation into account in the acquisition function.

Research in active learning has gradually paid attention to label correlation in MLAL. In light of the specific characteristics of graph data, DAMAL incorporates class-label interactions using a graph-based ranking approach, where edge weights are defined as the cosine similarity between latent features, thus quantifying the graph’s informativeness in relation to label correlation [29], [30]. To address the uncertainty in feature correlations within standard data, [24] integrated a Gaussian process with a Bernoulli mixture model to capture correlation through the covariance matrix. Correlation matrix-based weighted uncertainty, typically derived through co-occurrence or label similarity analysis, is commonly used to query the most informative label pairs by capturing the inter-label influence during label selection [15], [31]. [3] proposed a two-stage sample acquisition strategy, called ALMuLa-mix, which uses inconsistency to capture label correlations with novel features in the first stage and employs class frequency in the second stage to ensure inter-class diversity.

Although an increasing number of studies recognize the importance of correlation during data acquisition, existing approaches are often resource-intensive, requiring additional training for interrelation modeling, or struggle to maintain performance under data imbalance. Our approach effectively samples the representative data in a correlation-aware manner while maintaining consistent performance, even with highly imbalanced datasets.

3 Correlation-aware Active Learning

Figure 1: Overview of the CRAB framework. The model is trained with an ensemble method that generates pseudo labels from predictions and computes the Beta proper score. First, the positive and negative correlation matrices are updated with newly sampled instances. Then, using the pseudo labels, the model samples data from three categories: label-wise, negatively correlated, and hard-to-learn samples. Finally, it computes the correlation-aware proper score and clusters and labels the selected data based on this score.

Without loss of generality, suppose \(L=\{X,Y\}\) and \(U=\{X\}\) denote the initial training set and the pool of unlabeled samples, with \(|U| \gg |L|\), and \(y_i \in \{-1, +1\}^K\) represents the label vector of the \(i\)-th example in the \(K\)-dimensional label space. First, our model generates the two correlation matrices, covering both positive and negative correlations, based on the iteratively updated labeled dataset \(L\). Then, given a model parameterized by \(\theta\in\Theta\), the probability of label \(y\) for a data instance \(x\) is \(P(y|\theta, x)\). We derive the pseudo label \(y^*\) from \(\int_\theta P(y|\theta,x)P(\theta)d\theta\), where the integration is approximated by Monte Carlo sampling over an ensemble. Considering the model’s varying ability to learn different categories of data, our model refines the sampling pool into a more representative subset based on the pseudo labels. To account for the influence of correlation when quantifying the informativeness of a sample, we use a variation of the Beta scoring rule of [22]; the key concepts are introduced in Section 3.1. Finally, a clustering step ensures diversity in sampling. Fig. 1 illustrates the overall flowchart of our proposed framework.

3.1 Preliminary: Beta Scoring Rules

Monte Carlo estimation of error reduction [32] uses the Monte Carlo approach to estimate the expected reduction in error resulting from labeling a query. Inspired by this, BEMPS [33] improves these formulations by employing proper scoring rules [34]. Rather than estimating the error, the scoring rule provides a summary measure of predictive probability, which computes the positively oriented rewards (i.e., utilities) that a classifier maximizes [34].

\[\begin{align} Q(L) & = \mathbb{E}_{P(x)}\mathbb{E}_{P(\theta|L)}\big [\mathbb{E}_{P(y|\theta,x)} \notag\\ & \quad\quad [S(P(x,\theta),y)-S(P(x),y)]\big ] \tag{1} \\ \Delta Q(x|L) & = Q_L - \mathbb{E}_{P(y|L,x)}[Q_{L+\{x, y\}}] \notag\\ & = \mathbb{E}_{P(y|L,x)} \big[ \mathbb{E}_{P(x')P(y'|L,(x,y),x')} \notag\\ & \quad\quad [S(P(\cdot|L,(x,y),x'),y')-S(P(\cdot|L,x'),y')] \big] \tag{2} \end{align}\]

Eqs. 1 and 2 show the core concept of the expected increase in score when querying [33]. \(S\) denotes the scoring function that evaluates the predictive probability distribution on an event, i.e., predicting \(y'\), given \(x'\). \(Q(L)\) represents the mean proper scoring rule of the predictive probabilities obtained using Bayesian estimation based on the current labeled dataset \(L\). The term \(\Delta Q(x|L)\) denotes the increment in the score resulting from acquiring the label of a sample \(x\) drawn from the unlabeled data pool \(U\), while \(x'\) denotes an unlabeled anchor point selected for assessment. The value of \(\Delta Q(x|L)\) is then used to select the sample leading to a large increment in the score or reward. Since the labels of the unlabeled points \(x\) and \(x'\) are unknown, we derive \(P(y'|L,(x,y),x')\) by calculating the posterior distribution of the ensemble models using Eq. 3.

\[\begin{align} P(y'|L,(x,y),x') &= \mathop{\sum}\limits_{\theta\in\Theta^E}P(y'|\theta,x')P(\theta|L,(x,y)) \tag{3}\\ P(\theta|L,(x,y)) &\approx \frac{P(\theta|L)P(y|\theta,x)}{\sum_{\theta\in\Theta^E}P(\theta|L)P(y|\theta,x)} \tag{4} \end{align}\]
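For concreteness, this ensemble re-weighting reduces to a few array operations; the sketch below is a minimal NumPy rendering of Eqs. 3 and 4 (the function names are ours, not the authors' API):

```python
import numpy as np

def posterior_reweight(prior, likelihood):
    """Eq. 4: re-weight ensemble members. prior: (E,) weights P(theta|L);
    likelihood: (E,) values P(y|theta, x) for a hypothetical label y of candidate x."""
    w = prior * likelihood
    return w / w.sum()                          # P(theta | L, (x, y))

def predictive(prob_anchor, weights):
    """Eq. 3: P(y'|L,(x,y),x') = sum_theta P(y'|theta, x') * P(theta|L,(x,y)).
    prob_anchor: (E, K) per-member probabilities at an anchor point x'."""
    return (weights[:, None] * prob_anchor).sum(axis=0)
```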

To address the issue of imbalanced label distribution in text datasets, BESRA [22] leverages the Beta family [35], which generalizes the logarithmic score, the Brier score, and other desired cost-weighted scoring rules. The equations below give the proper scoring rules \(\mathcal{L}\) of a predictive distribution \(p\) given the expected value \(y^k\), where \(y^k\) represents the label for class \(k\). \(\mathcal{L}(1|p)\) and \(\mathcal{L}(0|p)\) represent the partial losses when \(p\) is classified as 1 and 0, respectively.

\[\begin{align} S^k_{BR}(p,y^k) \quad &= \quad \mathcal{L}(y^k|p) \notag\\ &= \quad y^k \mathcal{L}(1|p) + (1-y^k)\mathcal{L}(0|p) \end{align}\] \[\begin{align} \mathcal{L}(1|p) &= \mathcal{L}_1(1-p) = \textstyle \int^1_p t^{\alpha-1}(1-t)^{\beta}dt \\ \mathcal{L}(0|p) &= \mathcal{L}_0(p) = \textstyle \int^p_0 t^{\alpha}(1-t)^{\beta-1}dt \end{align}\]

By leveraging the incomplete beta function \(I_x(\alpha,\beta)\), a closed form of the Beta score is obtained for \(\alpha, \beta > 0\). When \(\alpha=\beta=0\), the scoring rule becomes the log loss, and when \(\alpha=\beta=1\), it reduces to the squared error loss. By adjusting the values of \(\alpha\) and \(\beta\), our model can effectively handle scenarios with diverse data distributions. In our research, we adopt the parameters found by BESRA's greedy search, \(\alpha=0.1, \beta=3\) [22].
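The partial losses above admit a closed form through the regularized incomplete beta function. The snippet below is a minimal sketch using SciPy with the BESRA parameters \(\alpha=0.1, \beta=3\); the function names are illustrative:

```python
from scipy.special import beta, betainc   # betainc(a, b, x) is the regularized I_x(a, b)

ALPHA, BETA = 0.1, 3.0                    # BESRA's greedy-search parameters

def loss_one(p, a=ALPHA, b=BETA):
    # L(1|p) = int_p^1 t^{a-1}(1-t)^b dt = B(a, b+1) * (1 - I_p(a, b+1))
    return beta(a, b + 1.0) * (1.0 - betainc(a, b + 1.0, p))

def loss_zero(p, a=ALPHA, b=BETA):
    # L(0|p) = int_0^p t^{a}(1-t)^{b-1} dt = B(a+1, b) * I_p(a+1, b)
    return beta(a + 1.0, b) * betainc(a + 1.0, b, p)

def beta_score(p, y):
    # S^k_BR(p, y^k) = y^k * L(1|p) + (1 - y^k) * L(0|p), with y in {0, 1}
    return y * loss_one(p) + (1.0 - y) * loss_zero(p)
```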

3.2 Correlation Matrix Construction

In the multi-label scenario, a single sample often has more than one label, and labels with relatively small semantic distances frequently appear simultaneously. To leverage the inherent relationships within the label space to assist the acquisition process, our framework maintains two dynamic correlation matrices: a co-occurrence matrix representing positive correlations, and a non-co-occurrence matrix representing negative correlations between labels. Both matrices are updated after each acquisition iteration with the newly annotated instances.

By discovering the patterns of occurrence between labels, we aim to quantify informativeness under the influence of correlation and make the query process diverse while taking the imbalanced data distribution into account.

3.2.1 Positive correlation matrix

The positive correlation matrix is derived from the label-wise dependence matrix \(A(m,n)\), which represents the dependence of one label’s presence on another and is computed via Eq. 5. Specifically, \(\sum_{i=1}^L N(y_i^m=+1, y_i^n=+1)\) is the count of labeled instances where labels \(m\) and \(n\) occur simultaneously in the label set. \(P(y^{m}|y^{n})\) gives the likelihood of label \(m\) occurring when label \(n\) is present. When \(m=n\), \(A(m,n)\) equals 1. The value of \(A(m,n)\) thus reflects the probability of one label’s presence given another’s.

\[\begin{align} A(m,n) \quad &= \quad P(y^{m}|y^{n}), \quad \text{where } m\neq n \notag \\ &= \quad \frac{\sum_{i=1}^L N(y_i^m=+1, y_i^n=+1)}{\sum_{i=1}^L N(y_i^n=+1)} \label{eq8} \end{align}\tag{5}\]

For example, if label \(m\) is a subset of label \(n\), the presence of label \(m\) definitely implies the presence of label \(n\), while the converse does not hold. In this case, \(A(n,m)\) equals 1, while \(A(m,n)\) is generally smaller than \(A(n,m)\). Thereby, the positive correlation matrix describes the pattern of co-occurrence between labels and reflects their asymmetric correlations, including hierarchical relationships.

3.2.2 Negative correlation matrix

Beyond positive correlations, negative correlations between labels have rarely been addressed in previous research. However, in real-world scenarios, negative correlations are instrumental in enabling the model to differentiate between mutually exclusive classes and contribute to more accurate decisions by clarifying the model’s decision boundaries [36].

To effectively model these negative correlations, we construct an iteratively updated non-co-occurrence matrix, \(NegA(m,n)\) with \(m\neq n\), as defined in Eq. 6. This matrix quantifies the confidence in the absence of label \(m\) given that label \(n\) is present. Specifically, \(\sum_{i=1}^L N(y_i^m=-1, y_i^n=+1)\) is the count of instances where label \(n\) is present but label \(m\) is not. If labels \(m\) and \(n\) never occur simultaneously, both \(NegA(m,n)\) and \(NegA(n,m)\) equal 1. In most cases, however, \(NegA(m,n)\) does not equal \(NegA(n,m)\), as each depends on the occurrence frequency of the conditioning label. This asymmetry allows the matrix to capture nuanced conditional negative relationships and provides a new perspective on label dependencies.

\[\begin{align} NegA(m,n) \quad &= \quad P(\overline{y^m}|y^n), \quad \text{where } m\neq n \notag\\ &= \quad \frac{\sum_{i=1}^L N(y_i^m=-1, y_i^n=+1)}{\sum_{i=1}^L N(y_i^n=+1)} \label{eq9} \end{align}\tag{6}\]
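Since both matrices are simple conditional frequencies, they can be rebuilt from the labeled set after every acquisition round; below is a minimal NumPy sketch (our own naming, with a guard for labels not yet observed):

```python
import numpy as np

def correlation_matrices(Y):
    """Eqs. 5-6. Y: (N, K) binary label matrix of the labeled set,
    Y[i, k] = 1 iff label k is present for instance i."""
    Y = Y.astype(float)
    n_pos = np.maximum(Y.sum(axis=0), 1.0)      # counts of y^n = +1, guarded against 0
    A = (Y.T @ Y) / n_pos[None, :]              # A(m, n)    = P(y^m     | y^n)
    NegA = ((1.0 - Y).T @ Y) / n_pos[None, :]   # NegA(m, n) = P(not y^m | y^n)
    np.fill_diagonal(A, 1.0)                    # A(m, m) = 1 by definition
    return A, NegA
```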

3.3 Correlation-based Sampling

Refining the unlabeled pool so that selected instances concentrate on specific representative criteria is a common strategy in active learning [4]. However, current research is predominantly based on informativeness analysis, neglecting the critical role of label correlation in MLAL.

To address this limitation and provide more representative and evenly distributed samples for the subsequent process, our model refines the unlabeled pool from three perspectives based on correlation properties. The pseudo label \(y^*\), obtained by averaging the predictions of the ensemble models via Eqs. 7 and 8, is used for the following correlation-based sampling.

\[\begin{align} y_i^* = \mathbb{I}[P(y_i \mid x_i, L)>0.5] \label{eq10} \end{align}\tag{7}\] \[\begin{align} P(y_i \mid x_i, L) &= \textstyle \int_{\theta} P(y_i|x_i,\theta) p(\theta \mid L) \notag \\ &\approx \textstyle \sum^E_{e=1} P(y_i|x_i,\theta_e)/E\label{eq} \end{align}\tag{8}\]
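In code, this pseudo-labeling step is just a thresholded ensemble average; a minimal sketch assuming the per-model sigmoid outputs are stacked into one array:

```python
import numpy as np

def pseudo_labels(ensemble_probs):
    """Eqs. 7-8. ensemble_probs: (E, N, K) per-model probabilities P(y_i|x_i, theta_e)."""
    p_mean = ensemble_probs.mean(axis=0)        # Eq. 8: Monte Carlo average over E models
    return (p_mean > 0.5).astype(int)           # Eq. 7: y* = 1[P(y_i | x_i, L) > 0.5]
```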

3.3.1 Label-wise sampling

In multi-label scenarios, labels often exhibit asymmetric correlations, where the presence of one label, \(m\), is highly correlated with another label, \(n\), but not vice versa. Hierarchical structures within labels are a common example of this kind of relationship. To illustrate the impact on performance, consider hierarchical data: when asymmetric correlations exist, the selection of root-node labels often overlaps with that of the corresponding leaf-node labels, which reduces the representativeness of the selected root labels [16].

To address this, our strategy introduces a mechanism to modify label-wise selection. If the correlation between two labels exceeds a threshold \(\sigma\), defined here as the standard deviation of a two-tailed normal distribution, we consider the label pair to be asymmetrically correlated. In such cases, only the primary label in the correlation chain, \(m\), is selected for label-wise sampling, as sketched below. The model then allocates the per-label query size for each label using pseudo labels to refine the sampling pool. This approach not only improves correlation-aware sampling but also mitigates imbalance by ensuring relatively even representation across selected samples.
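A possible realization of this filter is shown below; which side of an asymmetric pair counts as "primary" follows our reading of this section and is an assumption, not a detail spelled out in the text:

```python
import numpy as np

def primary_label_mask(A, sigma):
    """Keep only the primary label of each asymmetrically correlated pair for
    label-wise sampling. A pair (m, n) counts as asymmetric when A(m, n) > sigma
    while A(n, m) <= sigma (an assumption about the exact rule)."""
    K = A.shape[0]
    keep = np.ones(K, dtype=bool)
    for m in range(K):
        for n in range(K):
            if m != n and A[m, n] > sigma >= A[n, m]:
                keep[n] = False                 # n's samples would overlap with m's
    return keep
```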

3.3.2 Negative-correlated label sampling

In multi-label learning, ensuring the model respects the exclusivity of certain labels is crucial for achieving accurate predictions [37]. When mutually exclusive labels, such as those that should not logically co-occur, are predicted together, it is often an indication of model bias or misguided learning [38], [39]. This misalignment can reduce the model’s effectiveness. Therefore, as the second subset for concentrated sampling, we select instances with negatively correlated labels that are not expected to co-occur in the label space.

To formalize this, we consider a pair of labels mutually exclusive when the negative correlation coefficient \(NegA(m,n)\) exceeds a threshold of \(2\sigma\), where \(\sigma\) represents the standard deviation. Based on the pseudo-label predictions, our model selects a query-sized batch of such instances to refine the unlabeled pool, guiding the model with a specific focus on avoiding negative correlations.
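A minimal sketch of this selection step, assuming binary pseudo labels and the \(2\sigma\) cut-off described above (the function name is ours):

```python
import numpy as np

def negative_pair_candidates(pseudo, NegA, threshold):
    """Indices of instances whose pseudo labels contain a mutually exclusive pair,
    i.e. labels m, n predicted together although NegA(m, n) > threshold (= 2 sigma).
    pseudo: (N, K) binary pseudo labels from Eq. 7."""
    exclusive = NegA > threshold
    hits = []
    for i, y in enumerate(pseudo):
        on = np.flatnonzero(y)                  # labels predicted positive
        if any(exclusive[m, n] for m in on for n in on if m != n):
            hits.append(i)                      # conflicting co-prediction found
    return np.asarray(hits, dtype=int)
```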

3.3.3 Hard-to-learn label sampling

The third subset our model uses to refine the unlabeled pool focuses on hard-to-learn samples. These samples are typically characterized by low confidence and low variability, indicating instances where the model has difficulty making accurate predictions. Such samples often contain ambiguous or noisy features or lie near decision boundaries [40], [41]. In this study, we set a confidence threshold of 0.5 to identify hard-to-learn samples: if the ensemble predictive probability (Eq. 8) falls below this threshold for every class of an instance, i.e., no pseudo label is assigned via Eq. 7, our model classifies the sample as hard to learn. To improve performance on these challenging instances while maintaining diversity in the sampling process, our model dynamically adjusts the sample size using a polynomial decay function, enabling more focused learning on difficult cases over time.
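A simple form of such a schedule is sketched below; the decay exponent is an assumption, as the paper does not report the exact value:

```python
def hard_to_learn_size(initial_size, iteration, total_iterations, power=2.0):
    """Polynomial decay of the hard-to-learn query size over AL iterations.
    power controls how quickly the budget shrinks (assumed, not reported)."""
    remaining = 1.0 - iteration / max(total_iterations, 1)
    return int(initial_size * remaining ** power)
```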

3.4 Correlation-aware Querying

\[\begin{align} S_{AB}(f_L(x),y^n) &= \sum^K_{m=1} \hat{A}(m,n)\cdot S_{BR}(f_L(x),y^m) \tag{9} \\ \hat{A}(m,n) &= \text{norm}(A(m,n)), \quad \text{where } m \neq n \notag \\ &= \frac{A(m,n)}{\alpha \cdot \max(\sum A(\cdot,n))} \tag{10} \end{align}\]

With the correlation-based sampling strategy described in Section 3.3, our model obtains a refined unlabeled data pool. It then calculates the correlation-aware Beta score for the selected samples and uses it to cluster the final samples for annotation. This process employs a computation similar to the attention mechanism introduced in transformer models [42]. Using Eq. 9, we score each prediction, where \(S_{AB}\) incorporates the influence of other labels’ scores through the attention coefficient \(\hat{A}(m,n)\), thus accounting for correlation in the final score. Additionally, we introduce \(\alpha\), a normalization parameter set to 2, to prevent over-estimating correlated uncertainty while preserving the original significance of each label’s score. This approach allows our model to consider the impact of neighboring labels on informativeness, and the refined unlabeled pool enhances computational efficiency while deepening the analysis of label correlations. Algorithm 2 details the procedure for one iteration of our framework.
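The scoring step then amounts to a single matrix product; the sketch below follows Eqs. 9 and 10, keeping the diagonal of \(\hat{A}\) at 1 so that each label's own score retains full weight (an assumption consistent with the normalization rationale above):

```python
import numpy as np

def correlation_aware_scores(S_br, A, alpha=2.0):
    """S_br: (N, K) per-label Beta scores S_BR; A: (K, K) positive correlation matrix.
    Returns S_AB of Eq. 9 using the attention coefficients of Eq. 10."""
    A_hat = A / (alpha * A.sum(axis=0).max())   # Eq. 10: scale by alpha * max column sum
    np.fill_diagonal(A_hat, 1.0)                # preserve each label's own score
    return S_br @ A_hat                         # Eq. 9: S_AB(., n) = sum_m A_hat(m, n) * S_BR(., m)
```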

Figure 2: CRAB Update Strategy for MLAL

4 Experiments

We collected four benchmark multi-label text datasets to analyze the performance and robustness of our framework [43]: RCV1 [44], a news article dataset from Reuters; UKLEX [45], a collection of legal documents sourced from various categories of UK law; EURLEX [46], a set of descriptors from the European legal information thesaurus extracted from the European Union’s legal database; and MIMIC3 [47], a set of de-identified health records for medical diagnosis. Since the data in UKLEX, EURLEX, and MIMIC3 have two levels of labels, for uniformity our study used the coarse level. Following the method of [48], we used the mean imbalance ratio (MeanIR) to create synthetic datasets with varying imbalance ratios from a modified RCV1 dataset, reducing its label set to the ten most frequently occurring labels (\(K=10\)), enabling an evaluation of the model’s performance across different degrees of imbalance. Table 1 and Table 2 offer a detailed summary of these datasets. We also introduce a new metric, termed CorrAvg, defined as \(\sum_{m=1}^K\sum_{n=1}^K A(m,n)/(K\times K), m\neq n\), to quantify the degree of inter-correlation within the label set. We compared the performance of different query methods across datasets with different correlation levels.
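For reference, CorrAvg reduces to the sum of the off-diagonal entries of \(A\) divided by \(K \times K\); a minimal sketch:

```python
import numpy as np

def corr_avg(A):
    """CorrAvg = sum_{m != n} A(m, n) / (K * K): mean inter-label correlation."""
    K = A.shape[0]
    return (A.sum() - np.trace(A)) / (K * K)
```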

Table 1: Statistics of the four benchmark datasets.

Dataset | Train/Test | #Vocab/#Label | MeanIR (Train/Test) | CorrAvg (Train/Test)
RCV1 | 24891/6223 | 104619/102 | 402/197 | 0.137/0.137
UKLEX | 20000/8500 | 63157/18 | 7/6 | 0.026/0.024
EURLEX | 55000/5000 | 160211/21 | 16/15 | 0.131/0.147
MIMIC | 29999/10000 | 137678/19 | 127/101 | 0.321/0.320
Table 2: Statistics of the synthetic RCV1 datasets with varying imbalance ratios.

Dataset | Train/Test | #Vocab/#Label | MeanIR (Train/Test) | CorrAvg (Train/Test)
RCV1-T10-5 | 1200/600 | 25254/10 | 5/10 | 0.133/0.138
RCV1-T10-10 | 1200/600 | 25289/10 | 10/10 | 0.135/0.138
RCV1-T10-20 | 1200/600 | 24170/10 | 20/10 | 0.137/0.138
RCV1-T10-50 | 1200/600 | 25280/10 | 50/10 | 0.142/0.138

4.1 Implementation

We used NeuralClassifier [49], implemented in PyTorch [50], as the code base. In our study, we employed three mainstream models, TextCNN [51], TextRNN [52], and DistilBERT [53], as the backbone classifiers. To enhance efficiency and performance, we applied the cold-start strategy [54] with random initialization at the beginning of each active learning iteration, a method known for its applicability to real-world scenarios [55]. All experiments were conducted on a single RTX 3090 GPU. Following the setting of [22], the maximum sequence length for the text data was set to 256, with each training iteration consisting of 80 epochs. We implemented an early stopping criterion with a patience of 20 epochs to prevent the model from falling into local optima or overfitting [56], [57]. AdamW was used as the optimizer [58], with the learning rate tailored for each model: 5e-2 for TextCNN and TextRNN, and 5e-5 for DistilBERT. The hard-to-learn query size was set to 300, while the per-label query size was set to 50 for RCV1 and 100 for the other datasets, due to differences in label space size.

4.2 Baselines

To conduct a comparative performance analysis, we adopted five state-of-the-art MLAL methods, plus random sampling, as baselines. Each baseline uses the same query parameters and backbone classifier to maintain consistency across experiments. Specifically, MMC [59] applies maximal confidence to select data that induces the largest reduction in expected model loss. AUDI [27] explores uncertainty and diversity in both the instance and label spaces through label ranking and threshold learning. ADAPTIVE [60] integrates max-margin prediction uncertainty with label cardinality inconsistency to assess the unified informativeness of multi-label instances. BESRA [22] utilizes the Beta scoring rules within an expected loss reduction framework to evaluate informativeness and employs vector representations to maintain diversity. CMAL [61] leverages the global label correlation matrix and label space sparsity together with uncertainty to query the most informative example-label pairs.

4.3 Results

Figures 3 and 4 present the quantitative performance of our proposed framework. Figure 3 compares CRAB and the baseline methods across four datasets using BERT, with additional results for other models provided in Appendix 7. Following [22], we obtained the predictive distribution by training five ensemble models independently, each initialized with the same parameters for every AL iteration. The micro-F1 results show that our proposed model, CRAB, consistently outperforms other AL methods across different text domains and network structures. Additionally, CRAB demonstrates robust performance on datasets with varying degrees of correlation, particularly compared with BESRA, suggesting that our strategy effectively models correlation during data selection. To examine the robustness of the model on imbalanced datasets, we conducted comparative experiments on synthetic datasets, with the results shown in Figure 4. CRAB maintains superior performance across synthetic datasets with different MeanIR values, demonstrating its capability to handle imbalanced datasets.

To further investigate the effectiveness of our model, Figures 5 and 6 present a qualitative analysis of CRAB. Figure 5 shows the MeanIR of the selected samples across AL iterations. Since MeanIR indicates imbalance, with lower values reflecting a more even data distribution, we observe that CRAB achieves a more balanced sample selection than the baseline models, underscoring its capacity to address data imbalance effectively. Figure 6 illustrates the trend of two categories of data within the unlabeled pool: hard-to-learn data and negatively correlated data. Unlike random selection, CRAB strategically selects data that enhances model learning, thereby reducing misclassification of negatively correlated data. Additionally, CRAB improves performance on hard-to-learn data, helping the model become more robust and accurate. This targeted data selection contributes to a more balanced and adaptive learning process, ultimately leading to improved generalization across diverse data types.

Figure 3: Micro-F1 score using BERT, averaged over 5 random seeds.

Figure 4: Micro-F1 score on the synthetic datasets using TextCNN, averaged over 5 random seeds.

Figure 5: Averaged MeanIR of the selected samples, averaged over 5 random seeds.

Figure 6: Amount of hard-to-learn and negatively correlated data, with the red bars on the left axis and the blue bars on the right axis, averaged over 5 random seeds.

Figure 7: (a) Ablation study of the asymmetrically correlated labels. (b) Ablation study of considering negatively correlated label pairs.

Figure 8: (a) Ablation study of the correlation attention. (b) Performance of random sampling versus cluster-based sampling for refined sampling pool selection.

4.4 Ablation study

We conducted four experiments to examine whether the structure of CRAB improves MLAL performance by considering correlation. Taking the asymmetrically correlated label relationships into consideration, CRAB selects only the initial label in the correlation chain for per-label selection, thus avoiding duplicate selection of correlated labels. Figure 7 (a) compares performance with and without considering asymmetric correlations on the MIMIC dataset. The results show that CRAB performs better and is more effective at querying indicative samples than when treating all labels equally. Figure 7 (b) shows the benefit of sampling conflicting labels, with performance improvements becoming more pronounced in the later training stages.

Figure 8 (a) illustrates how the correlation attention impacts MLAL accuracy. To assess performance without correlation in the score evaluation, we removed the correlation attention from Eq. 9, with the results shown by the blue line. Evidently, when positive label correlations are incorporated, CRAB performs more consistently throughout the experiment, indicating that our strategy effectively models inter-label relationships to make more informative queries. Additionally, we compared the micro-F1 score and computation time of random sampling and clustering-based sampling during the refined unlabeled pool selection. As shown in Figure 8 (b), random sampling performs almost identically to clustering-based sampling, suggesting it can serve as a replacement during the refined unlabeled pool selection. Moreover, the querying time with random sampling decreases by 40%. These findings demonstrate that our method is effective, efficient, and robust.

5 Conclusion

In this paper, we proposed an innovative MLAL query strategy, CRAB, which takes inherent label relationships into account within a Bayesian framework. By updating the correlation matrices with the annotated data, our model is able to query more representative samples in the initial stage and achieves a more accurate score for evaluating the informativeness of instances. Additionally, through the Beta scoring rules, our model maintains consistently robust performance on imbalanced datasets. Leveraging pseudo labels and correlation-aware sampling, our strategy eliminates the need for additional training modules, and our model demonstrates significant performance improvements in MLAL on four benchmark datasets.

6 Limitation

Our work has two main limitations. First, although our model utilizes subsets of the unlabeled pool to select samples based on correlation properties, the framework could be improved by dynamically and proportionally adjusting the subset sample size for the refined unlabeled pool. Since learning performance varies across datasets, dynamically adapting the sample selection process in response to the model’s evolving learning capability and the inter-relationships within each dataset could further enhance performance, allowing the model to balance diverse and informative sample selection more effectively.

Second, our current approach focuses primarily on label-wise correlations, considering both the co-occurrence and non-co-occurrence relationships between labels. Future work will extend this to explore correlations in both instance and label spaces, examining whether the alignment between instances and labels can further improve performance. Additionally, we plan to investigate more complex relationships, such as spurious correlations, that may exist between data features and label distributions. This includes studying the impact of invalid or noisy information from instances on model performance and identifying methods to mitigate such effects.

7 Appendix

7.1 Supplementary results

Figure 9: Micro-F1 score on TextCNN, averaged over 5 random seeds.

Figure 10: Micro-F1 score on TextRNN, averaged over 5 random seeds.

Figure 9 and Figure 10 present supplementary performance comparisons based on TextCNN and TextRNN. Our proposed strategy, CRAB, consistently demonstrates robust and superior performance across the different benchmarks. Among the baseline methods, BESRA achieves strong results and is relatively robust across datasets. However, its performance on the highly correlated MIMIC dataset is less stable, likely due to the absence of correlation modeling. AUDI, which incorporates uncertainty and diversity at both the instance and label levels, performs notably well on three of the datasets, RCV1, UKLEX, and EURLEX, but struggles on MIMIC. ADAPTIVE and MMC yield similar results across the four datasets, as both use the max-margin criterion for selection. Although CMAL considers global label correlation, it only performs optimally on the highly correlated MIMIC dataset and does not maintain stable performance across all datasets.

7.2 Parameter sensitivity analysis

To validate the effectiveness and generalizability of CRAB, we conducted two experiments to analyze its performance under different parameters. Figure 11 (a) shows the performance with varying sizes of the hard-to-learn subset for refined unlabeled pool selection. With an acquisition size of 100 per iteration, optimal performance is achieved when the size of the hard-to-learn subset matches the acquisition size. If the subset size is set too large, such as 200, the model initially shows relatively better performance due to the higher proportion of hard-to-learn samples in the early stages. However, as annotated data increases, performance declines because the proportion of hard-to-learn samples becomes less significant, necessitating a reduction in their selection. This parameter is adjustable for different datasets and model structures to ensure compatibility with the learning capabilities across varying scenarios. To cope with the shrinking number of hard-to-learn samples as annotated data grows, CRAB adopts a decay function for the hard-to-learn subset size to adapt to the training process. Figure 11 (b) presents the performance of three decay approaches: linear decay, cosine decay, and polynomial decay. Among these, polynomial decay achieves the best micro-F1 score, as it produces an accelerated decrease in the sampling size, better aligning with the trend in the number of hard-to-learn samples.

Figure 11: (a) Performance for different sizes of hard-to-learn samples. (b) Performance for different decay functions of the hard-to-learn samples.

References

[1]
Zhuoming Liu, Hao Ding, Huaping Zhong, Weijia Li, Jifeng Dai, and Conghui He. 2021. Influence selection for active learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9274–9283.
[2]
Binhui Xie, Longhui Yuan, Shuang Li, Chi Harold Liu, and Xinjing Cheng. 2022. Towards fewer annotations: Active learning via region impurity and prediction uncertainty for domain adaptive semantic segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 8068–8078.
[3]
Xue Han, Qing Wang, Yitong Wang, Jiahui Wang, Chao Deng, and Junlan Feng. 2024. Feature mixing-based active learning for multi-label text classification. In ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 10551–10555. IEEE.
[4]
Xin Kang, Xuefeng Shi, Yunong Wu, and Fuji Ren. 2020. Active learning with complementary sampling for instructing class-biased multi-label text emotion classification. IEEE Transactions on Affective Computing, 14(1):523–536.
[5]
Jiayu Huang, Nazbanoo Farpour, Bingjian J Yang, Muralidhar Mupparapu, Fleming Lure, Jing Li, Hao Yan, and Frank C Setzer. 2024. Uncertainty-based active learning by bayesian u-net for multi-label cone-beam ct segmentation. Journal of Endodontics, 50(2):220–228.
[6]
Raquel Simao, Marı́lia Barandas, David Belo, and Hugo Gamboa. 2023. Study of uncertainty quantification using multi-label ecg in deep learning models. In BIOSIGNALS, pages 252–259.
[7]
Lars Möllenbrok, Gencer Sumbul, and Begüm Demir. 2023. Deep active learning for multi-label classification of remote sensing images. IEEE Geoscience and Remote Sensing Letters.
[8]
Lars Möllenbrok and Begüm Demir. 2023. Active learning guided fine-tuning for enhancing self-supervised based multi-label classification of remote sensing images. In IGARSS 2023-2023 IEEE International Geoscience and Remote Sensing Symposium, pages 4986–4989. IEEE.
[9]
Oriane Siméoni, Mateusz Budnik, Yannis Avrithis, and Guillaume Gravier. 2020. Rethinking deep active learning: Using unlabeled data at model training. In 2020 25th International conference on pattern recognition (ICPR), pages 1220–1227. IEEE.
[10]
Liat Ein Dor, Alon Halfon, Ariel Gera, Eyal Shnarch, Lena Dankin, Leshem Choshen, Marina Danilevsky, Ranit Aharonov, Yoav Katz, and Noam Slonim. 2020. Active learning for bert: an empirical study. In Proceedings of the 2020 conference on empirical methods in natural language processing (EMNLP), pages 7949–7962.
[11]
Rui Zheng, Shulin Zhang, Lei Liu, Yuhao Luo, and Mingzhai Sun. 2021. Uncertainty in bayesian deep label distribution learning. Applied Soft Computing, 101:107046.
[12]
Min Wang, Tingting Feng, Zhaohui Shan, and Fan Min. 2022. Attribute and label distribution driven multi-label active learning. Applied Intelligence, 52(10):11131–11146.
[13]
Yuanjian Zhang, Tianna Zhao, Duoqian Miao, and Witold Pedrycz. 2021. Granular multilabel batch active learning with pairwise label correlation. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 52(5):3079–3091.
[14]
Xue-Yang Min, Kun Qian, Ben-Wen Zhang, Guojie Song, and Fan Min. 2022. Multi-label active learning through serial–parallel neural networks. Knowledge-Based Systems, 251:109226.
[15]
Guoliang Su, Zhangquan Wu, Yujia Ye, Maoxing Chen, and Jun Zhou. 2023. Cost-efficient multi-instance multi-label active learning via correlation of features. In 2023 IEEE International Conference on Image Processing (ICIP), pages 410–414. IEEE.
[16]
Felipe Kenji Nakano, Ricardo Cerri, and Celine Vens. 2020. Active learning for hierarchical multi-label classification. Data Mining and Knowledge Discovery, 34(5):1496–1530.
[17]
Shuyue Chen, Ran Wang, Jian Lu, and Xizhao Wang. 2022. Stable matching-based two-way selection in multi-label active learning with imbalanced data. Information Sciences, 610:281–299.
[18]
Maxime Arens, Lucile Callebert, Mohand Boughanem, and José G Moreno. 2024. Rebalancing label distribution while eliminating inherent waiting time in multi label active learning applied to transformers. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 13621–13632.
[19]
Pengzhen Ren, Yun Xiao, Xiaojun Chang, Po-Yao Huang, Zhihui Li, Brij B Gupta, Xiaojiang Chen, and Xin Wang. 2021. A survey of deep active learning. ACM computing surveys (CSUR), 54(9):1–40.
[20]
Moloud Abdar, Farhad Pourpanah, Sadiq Hussain, Dana Rezazadegan, Li Liu, Mohammad Ghavamzadeh, Paul Fieguth, Xiaochun Cao, Abbas Khosravi, and U Rajendra Acharya. 2021. A review of uncertainty quantification in deep learning: Techniques, applications and challenges. Information fusion, 76:243–297.
[21]
SangMook Kim, Sangmin Bae, Hwanjun Song, and Se-Young Yun. 2023. Re-thinking federated active learning based on inter-class diversity. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3944–3953.
[22]
Wei Tan, Ngoc Dang Nguyen, Lan Du, and Wray Buntine. 2024. Harnessing the power of beta scoring in deep active learning for multi-label text classification. Proceedings of the AAAI Conference on Artificial Intelligence, 38(14):15240–15248. https://doi.org/10.1609/aaai.v38i14.29447
[23]
Patrick Hemmer, Niklas Kühl, and Jakob Schöffer. 2022. Deal: Deep evidential active learning for image classification. Deep Learning Applications, Volume 3, pages 171–192.
[24]
Weishi Shi, Dayou Yu, and Qi Yu. 2021. A gaussian process-bayesian bernoulli mixture model for multi-label active learning. Advances in Neural Information Processing Systems, 34:27542–27554.
[25]
Ameya Prabhu, Charles Dognin, and Maneesh Singh. 2019. Sampling bias in deep active classification: An empirical study. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4058–4068, Hong Kong, China. Association for Computational Linguistics. https://doi.org/10.18653/v1/D19-1417
[26]
Felix Buchert, Nassir Navab, and Seong Tae Kim. 2023. Toward label-efficient neural network training: Diversity-based sampling in semi-supervised active learning. IEEE Access, 11:5193–5205.
[27]
Sheng-Jun Huang and Zhi-Hua Zhou. 2013. Active query driven by uncertainty and diversity for incremental multi-label learning. In 2013 IEEE 13th international conference on data mining, pages 1079–1084. IEEE.
[28]
Yan Gu, Jicong Duan, Hualong Yu, Xibei Yang, and Shang Gao. 2023. Plvi-ce: a multi-label active learning algorithm with simultaneously considering uncertainty and diversity. Applied Intelligence, 53(22):27844–27864.
[29]
Dwarikanath Mahapatra, Behzad Bozorgtabar, Zongyuan Ge, Mauricio Reyes, and Jean-Philippe Thiran. 2024. Combining graph transformers based multi-label active learning and informative data augmentation for chest xray classification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 21378–21386.
[30]
Jueqing Lu, Lan Du, Ming Liu, and Joanna Dipnall. 2020. Multi-label few/zero-shot learning with knowledge aggregated from multiple label graphs. arXiv preprint arXiv:2010.07459.
[31]
Kailun Gong and Tingting Zhai. 2021. An online active multi-label classification algorithm based on a hybrid label query strategy. In 2021 3rd International Conference on Machine Learning, Big Data and Business Intelligence (MLBDBI), pages 463–468. IEEE.
[32]
Nicholas Roy and Andrew McCallum. 2001. Toward optimal active learning through monte carlo estimation of error reduction. In Proceedings of the Eighteenth International Conference on Machine Learning (ICML), pages 441–448, Williamstown.
[33]
Wei Tan, Lan Du, and Wray Buntine. 2023. Bayesian estimate of mean proper scores for diversity-enhanced active learning. IEEE Transactions on Pattern Analysis and Machine Intelligence.
[34]
Tilmann Gneiting and Adrian E Raftery. 2007. Strictly proper scoring rules, prediction, and estimation. Journal of the American statistical Association, 102(477):359–378.
[35]
Andreas Buja, Werner Stuetzle, and Yi Shen. 2005. Loss functions for binary class probability estimation and classification: Structure and applications. Working draft, November 2005.
[36]
Yang Yang, Yuxuan Zhang, Xin Song, and Yi Xu. 2024. Not all out-of-distribution data are harmful to open-set active learning. Advances in Neural Information Processing Systems, 36.
[37]
Jun Huang, Guorong Li, Shuhui Wang, Zhe Xue, and Qingming Huang. 2017. Multi-label classification by exploiting local positive and negative pairwise label correlation. Neurocomputing, 257:164–174.
[38]
Rui Huang and Liuyue Kang. 2021. Local positive and negative label correlation analysis with label awareness for multi-label classification. International Journal of Machine Learning and Cybernetics, 12(9):2659–2672.
[39]
Carlos Perales-González, Mariano Carbonero-Ruz, Javier Perez-Rodriguez, David Becerra-Alonso, and Francisco Fernández-Navarro. 2020. Negative correlation learning in the extreme learning machine framework. Neural Computing and Applications, 32:13805–13823.
[40]
Haw-Shiuan Chang, Erik Learned-Miller, and Andrew McCallum. 2017. Active bias: Training more accurate neural networks by emphasizing high variance samples. Advances in Neural Information Processing Systems, 30.
[41]
Yuzhe Yang and Zhi Xu. 2020. Rethinking the value of labels for improving class-imbalanced learning. Advances in neural information processing systems, 33:19290–19301.
[42]
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. 2017. Attention is all you need. In 31st Annual Conference on Neural Information Processing Systems (NIPS), volume 30 of Advances in Neural Information Processing Systems.
[43]
Yova Kementchedjhieva and Ilias Chalkidis. 2023. An exploration of encoder-decoder approaches to multi-label classification for legal and biomedical text. In Findings of the Association for Computational Linguistics: ACL 2023, pages 5828–5843, Toronto, Canada. Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.findings-acl.360
[44]
David D Lewis, Yiming Yang, Tony Russell-Rose, and Fan Li. 2004. Rcv1: A new benchmark collection for text categorization research. Journal of machine learning research, 5(Apr):361–397.
[45]
Ilias Chalkidis and Anders Søgaard. 2022. Improved multi-label classification under temporal concept drift: Rethinking group-robust algorithms in a label-wise setting. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2441–2454.
[46]
Ilias Chalkidis, Manos Fergadiotis, and Ion Androutsopoulos. 2021. Multieurlex-a multi-lingual and multi-label legal document classification dataset for zero-shot cross-lingual transfer. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6974–6996.
[47]
Alistair EW Johnson, Tom J Pollard, Lu Shen, Li-wei H Lehman, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. 2016. Mimic-iii, a freely accessible critical care database. Scientific data, 3(1):1–9.
[48]
Francisco Charte, Antonio J Rivera, Marı́a J del Jesus, and Francisco Herrera. 2015. Addressing imbalance in multilabel classification: Measures and random resampling algorithms. Neurocomputing, 163:3–16.
[49]
Liqun Liu, Funan Mu, Pengyu Li, Xin Mu, Jing Tang, Xingsheng Ai, Ran Fu, Lifeng Wang, and Xing Zhou. 2019. Neuralclassifier: an open-source neural hierarchical multi-label text classification toolkit. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 87–92.
[50]
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32.
[51]
Ye Zhang and Byron Wallace. 2015. A sensitivity analysis of (and practitioners’ guide to) convolutional neural networks for sentence classification. arXiv preprint arXiv:1510.03820.
[52]
Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2016. Recurrent neural network for text classification with multi-task learning. arXiv preprint arXiv:1605.05101.
[53]
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
[54]
Yu Zhu, Jinghao Lin, Shibi He, Beidou Wang, Ziyu Guan, Haifeng Liu, and Deng Cai. 2019. Addressing the item cold-start problem by attribute-driven active learning. IEEE Transactions on Knowledge and Data Engineering, 32(4):631–644.
[55]
Jonathan Frankle and Michael Carbin. 2018. The lottery ticket hypothesis: Finding sparse, trainable neural networks. arXiv preprint arXiv:1803.03635.
[56]
Simon Du, Jason Lee, Haochuan Li, Liwei Wang, and Xiyu Zhai. 2019. Gradient descent finds global minima of deep neural networks. In International conference on machine learning, pages 1675–1685. PMLR.
[57]
Xue Ying. 2019. An overview of overfitting and its solutions. In Journal of physics: Conference series, volume 1168, page 022022. IOP Publishing.
[58]
Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101.
[59]
Bishan Yang, Jian-Tao Sun, Tengjiao Wang, and Zheng Chen. 2009. Effective multi-label active learning for text classification. In Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 917–926.
[60]
Xin Li and Yuhong Guo. 2013. Active learning with multi-label svm classification. In IjCAI, volume 13, pages 1479–1485. Citeseer.
[61]
Guoxian Yu, Xia Chen, Carlotta Domeniconi, Jun Wang, Zhao Li, Zili Zhang, and Xiangliang Zhang. 2020. Cmal: Cost-effective multi-label active learning by querying subexamples. IEEE Transactions on Knowledge and Data Engineering, 34(5):2091–2105.

1. Corresponding author.