November 26, 2024
The primary challenge of multi-label active learning, distinguishing it from multi-class active learning, lies in assessing the informativeness of an indefinite number of labels while also accounting for inherent label correlations. Existing studies either require substantial computational resources to leverage correlations or fail to fully explore label dependencies. Additionally, real-world scenarios often require addressing intrinsic biases stemming from imbalanced data distributions. In this paper, we propose a new multi-label active learning strategy to address both challenges. Our method incorporates progressively updated positive and negative correlation matrices to capture co-occurrence and disjoint relationships within the label space of annotated samples, enabling a holistic assessment of uncertainty rather than treating labels as isolated elements. Furthermore, alongside diversity, our model employs ensemble pseudo labeling and Beta scoring rules to address data imbalance. Extensive experiments on four realistic datasets demonstrate that our strategy consistently achieves more reliable and superior performance compared to several established methods.
In recent years, a wide range of machine learning models and algorithms have been developed to deal with the exponential growth of real-world data. However, the significant mismatch between the rapid increase in data and the slow pace of manual annotation underscores the importance of active learning (AL) [1], [2]. Multi-label active learning (MLAL), which considers the co-occurrence of labels and is more aligned with real-world applications, has been explored in different domains, including text classification [3], [4], medical imaging [5], [6], remote sensing [7], [8], and so on. Due to the complexity of label-wise correlations and imbalanced data distributions, the multi-label setting remains challenging, necessitating the development of effective query strategies [9], [10].
To deal with the multi-label issue in active learning, earlier approaches usually transform it into multiple binary classification tasks, known as binary relevance (BR), which sums the informativeness evaluated for each individual label to obtain the final acquisition score [11], [12]. However, these approaches overlook the potential correlation of labels, such as their co-occurrence, which should be factored into the overall information assessment [13], [14]. Consequently, the information inherent in the label correlations of the queried samples may not be fully explored.
Some recent works have employed co-occurrence and label correlation matrices to model these inherent label relationships [15]. However, while positive correlations, indicating strong co-occurrence between labels, have been included, few studies have explored negative correlations, where labels are mutually exclusive and do not appear together. Moreover, asymmetric label-wise correlations, where one label frequently appears with another label without a reciprocal relationship, remain under-explored. This also includes the hierarchical structure of the label set, where node labels inherently belong to and serve as subsets of their corresponding root labels. The selection of overlapping labels, due to this hierarchical nature, affects the diversity of the strategy and, consequently, its overall outcome [16]. Furthermore, given the high imbalance ratios in real-world datasets, addressing data imbalance to maintain consistent performance across different datasets is of critical importance for MLAL tasks [17], [18].
Considering label co-occurrence and data imbalance, we propose a new MLAL framework in this paper, named multi-label CoRrelation-Aware active learning with Beta scoring rules (CRAB). By incorporating the Beta scoring rules to deal with data imbalance and the expected loss reduction framework to select the most informative data instances, we introduce dynamic positive and negative correlation matrices to handle distinct and asymmetric label correlations within a Bayesian framework. This approach demonstrates robust and outstanding performance on four benchmark datasets for multi-label active learning.
Active learning involves selecting the most informative data from the unlabeled pool for annotation, thereby reducing the required training data while maintaining comparable performance. Two mainstream AL query strategies are uncertainty-based and diversity-based approaches [19]; the former concentrates on informativeness measurement at the sample level [20], while the latter emphasizes data distribution [21]. To quantify sample uncertainty, methods such as proper scoring rules [22], the Dirichlet distribution [23], and Gaussian processes [24] can be used to estimate sample informativeness. By aligning the prior and posterior distributions with model output observations, these models can effectively capture the uncertainty of each prediction.
However, focusing exclusively on uncertainty can introduce bias in sampling (i.e., selecting near-identical instances, thus wasting the annotation budget), which may lead to sub-optimal performance [25]. Incorporating diversity into the sampling process offers an alternative approach to enhancing generalization [26]. [27] utilized label cardinality inconsistency to exploit uncertainty and integrated it with diversity-based sampling during data acquisition. PLVI-CE leverages average posterior probability discrepancy to measure data diversity and prediction inconsistency to assess uncertainty, thus enhancing model generalization with limited annotated instances [28]. Recently, [22] proposed BESRA, which uses strictly proper scoring rules. Its acquisition function combines Beta scoring rules and k-means clustering to enhance diversity, while the Beta scoring rules also address the data imbalance common in multi-label datasets. Built on top of BESRA, our framework further takes label correlation into account in the acquisition function.
Research in active learning has gradually paid attention to label correlation in MLAL. In light of the specific characteristics of graph data, DAMAL incorporates class-label interactions using a graph-based ranking approach, where edge weights are defined as the cosine similarity between latent features, thus quantifying the graph's informativeness in relation to label correlation [29], [30]. To address the uncertainty in feature correlations within standard data, [24] integrated a Gaussian process with a Bernoulli mixture model to capture correlation through the covariance matrix. Correlation matrix-based weighted uncertainty, typically derived through co-occurrence or label similarity analysis, is commonly used to query the most informative label pairs by capturing inter-label influence during label selection [15], [31]. [3] proposed a two-stage sample acquisition strategy, called ALMuLa-mix, utilizing inconsistency to capture label correlations with novel features in the first stage and employing class frequency in the second stage to ensure inter-class diversity.
Although an increasing number of studies recognize the importance of correlation during data acquisition, existing approaches are often resource-intensive, requiring additional training for interrelation modeling, or struggle to maintain performance under data imbalance. Our approach effectively samples the representative data in a correlation-aware manner while maintaining consistent performance, even with highly imbalanced datasets.
Without loss of generality, suppose \(L=\{X,Y\}\) and \(U=\{X\}\) represent the initial training set and the collection of unlabeled data samples, with \(|U| \gg |L|\); and \(y_i \in \{-1, +1\}^k\) represents the label vector of the \(i\)-th example in the \(k\)-dimensional label space. First, our model generates the two-dimensional correlation matrices, capturing both positive and negative correlations, based on the iteratively updated labeled dataset \(L\). Then, given a model parameterized by \(\theta\in\Theta\), the probability of label \(y\) for a data instance \(x\) is \(P(y|\theta, x)\). We derive the pseudo label \(y^*\) based on \(\int_\theta P(y|\theta,x)P(\theta)d\theta\), where the integration can be approximated by Monte Carlo via an ensemble. Considering the model's learning capability on different categories of data, our model refines the sampling pool into a more representative subset based on the pseudo labels. To account for the influence of correlation when quantifying the informativeness of a sample, we use a variation of the Beta scoring rule from [22], whose key concepts we introduce in Section 3.1. Finally, a clustering approach ensures diversity in sampling. Fig. [fig:flowchart] illustrates the overall flowchart of our proposed framework.
Monte Carlo estimation of error reduction [32] uses the Monte Carlo approach to estimate the expected reduction in error resulting from the labeling of a query. Inspired by this, BEMPS [33] improves these formulations by employing proper scoring rules [34]. Rather than estimating the error, the scoring rule provides a summary measure of predictive probability, which computes the positively oriented rewards (i.e., utilities) that a classifier maximizes [34].
\[\begin{align} Q(L) & = \mathbb{E}_{P(x)}\mathbb{E}_{P(\theta|L)}\big [\mathbb{E}_{P(y|\theta,x)} \notag\\ & \quad\quad [S(P(x,\theta),y)-S(P(x),y)]\big ] \tag{1} \\ \Delta Q(x|L) & = \mathbb{E}_{P(y|L,x)}[Q(L+\{x, y\})] - Q(L) \notag\\ & = \mathbb{E}_{P(y|L,x)} \big[ \mathbb{E}_{P(x')P(y'|L,(x,y),x')} \tag{2}\\ & \quad\quad [S(P(\cdot|L,(x,y),x'),y')-S(P(\cdot|L,x'),y')] \big] \nonumber \end{align}\]
Eqs. 1 and 2 show the core concept of the expected increase in score when querying [33]. \(S\) denotes the scoring function that evaluates the predictive probability distribution on an event, i.e., predicting \(y'\) given \(x'\). \(Q(L)\) represents the mean proper scoring rule of the predictive probabilities obtained using Bayesian estimation based on the current labeled dataset \(L\). The term \(\Delta Q(x|L)\) denotes the increment in the score resulting from acquiring the label of a sample \(x\) drawn from the unlabeled data pool \(U\), and \(x'\) denotes an unlabeled anchor point selected for assessment. The value of \(\Delta Q(x|L)\) is then used to select the sample leading to the largest increment in score or reward. Since the labels of the unlabeled points \(x\) and \(x'\) are unknown, we derive \(P(y'|L,(x,y),x')\) by calculating the posterior distribution over the ensemble models using Eq. 3.
\[\begin{align} P(y'|L,(x,y),x') &= \mathop{\sum}\limits_{\theta\in\Theta^E}P(y'|\theta,x')P(\theta|L,(x,y)) \tag{3}\\ P(\theta|L,(x,y)) &\approx \frac{P(\theta|L)P(y|\theta,x)}{\sum_{\theta\in\Theta^E}P(\theta|L)P(y|\theta,x)} \tag{4} \end{align}\]
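For concreteness, the following is a minimal NumPy sketch of how Eqs. 3 and 4 can be computed over an ensemble; the function and argument names are hypothetical, and a uniform prior \(P(\theta|L)=1/E\) is assumed, as is standard for deep ensembles.

```python
import numpy as np

def posterior_reweight(likelihood, prior):
    """Ensemble approximation of P(theta | L, (x, y)) from Eq. 4.

    likelihood: (E,) array, P(y | theta_e, x) for the hypothesized label
                assignment y of candidate x, one entry per ensemble member.
    prior:      (E,) array, P(theta_e | L); typically uniform, 1 / E.
    """
    unnorm = prior * likelihood           # P(theta | L) * P(y | theta, x)
    return unnorm / unnorm.sum()          # normalize over the ensemble

def updated_predictive(preds_at_anchor, weights):
    """P(y' | L, (x, y), x') as a reweighted ensemble mixture (Eq. 3).

    preds_at_anchor: (E, C) per-member predictive distributions at anchor x'.
    weights:         (E,) output of posterior_reweight.
    """
    return weights @ preds_at_anchor      # (C,) mixture prediction
```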
To address the issue of imbalanced label distribution in text datasets, BESRA [22] leverages the Beta family [35], which generalizes the logarithmic score, the Brier score, and other desired cost-weighted scoring rules. The equations below illustrate the proper scoring rule \(\mathcal{L}\) of a predictive distribution \(p\) given the expected value \(y^k\), where \(y^k\) represents the label for class \(k\). \(\mathcal{L}(1|p)\) and \(\mathcal{L}(0|p)\) represent the partial losses when \(p\) is classified as 1 and 0, respectively.
\[\begin{align} S^k_{BR}(p,y^k) \quad &= \quad \mathcal{L}(y^k|p) \notag\\ &= \quad y^k \mathcal{L}(1|p) + (1-y^k)\mathcal{L}(0|p) \end{align}\] \[\begin{align} \mathcal{L}(1|p) &= \mathcal{L}_1(1-p) = \textstyle \int^1_p c^{\alpha-1}(1-c)^{\beta}dc \\ \mathcal{L}(0|p) &= \mathcal{L}_0(p) = \textstyle \int^p_0 c^{\alpha}(1-c)^{\beta-1}dc \end{align}\]
By leveraging the incomplete beta function \(I_x(\alpha,\beta)\), a closed form of these losses is obtained for \(\alpha, \beta > 0\). When \(\alpha=\beta=0\), the scoring rule becomes the log loss, and when \(\alpha=\beta=1\), it becomes the squared error loss. By adjusting the values of \(\alpha\) and \(\beta\), our model can effectively handle scenarios with diverse data distributions. In our research, we employ the greedy search result of BESRA as the scoring parameters, with \(\alpha=0.1\) and \(\beta=3\) [22].
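Since the partial losses above are incomplete beta integrals, they can be evaluated in closed form with standard special functions. Below is a small sketch using SciPy's regularized incomplete beta function; the helper names are ours, and \(\alpha, \beta > 0\) is assumed so that the beta-function factors are finite.

```python
import numpy as np
from scipy.special import beta as beta_fn, betainc  # betainc is regularized

ALPHA, BETA = 0.1, 3.0  # BESRA's greedy-search values [22]

def partial_losses(p):
    """Beta-family partial losses via the incomplete beta function.

    L(1|p) = int_p^1 c^(a-1) (1-c)^b  dc = B(a, b+1) * (1 - I_p(a, b+1))
    L(0|p) = int_0^p c^a (1-c)^(b-1) dc = B(a+1, b) * I_p(a+1, b)
    where I_x(a, b) is the regularized incomplete beta function.
    """
    l1 = beta_fn(ALPHA, BETA + 1.0) * (1.0 - betainc(ALPHA, BETA + 1.0, p))
    l0 = beta_fn(ALPHA + 1.0, BETA) * betainc(ALPHA + 1.0, BETA, p)
    return l1, l0

def beta_score(p, y):
    """Binary-relevance score S_BR(p, y^k) for one label, with y in {0, 1}."""
    l1, l0 = partial_losses(p)
    return y * l1 + (1 - y) * l0
```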
In the multi-label scenario, a single sample often has more than one label, and labels with relatively small semantic distances frequently appear simultaneously. To leverage inherent relationships within the label space to assist the acquisition process, our framework maintains two dynamic correlation matrices: a co-occurrence matrix representing positive correlations between labels, and a non-co-occurrence matrix representing negative correlations. Both matrices are updated after each acquisition iteration with the newly annotated instances.
By discovering patterns of occurrence between labels, we aim to quantify informativeness under the influence of correlation and to make the query process diverse while taking the imbalanced data distribution into account.
The positive correlation matrix is derived from the label-wise dependence matrix \(A(m,n)\), which represents the dependence of one label's presence on another and is implemented via Eq. 5. Specifically, \(\sum_{i=1}^L N(y_i^m=+1, y_i^n=+1)\) is the count of labeled instances where labels \(m\) and \(n\) occur simultaneously in the label set, and \(P(y^{m}|y^{n})\) gives the likelihood of label \(m\) occurring when label \(n\) is present. When \(m=n\), \(A(m,n)\) equals 1. The value of \(A(m,n)\) thus reflects the probability of one label's presence given another's.
\[\begin{align} A(m,n) \quad &= \quad P(y^{m}|y^{n}), \quad \text{where } m\neq n \notag \\ &= \quad \frac{\sum_{i=1}^L N(y_i^m=+1, y_i^n=+1)}{\sum_{i=1}^L N(y_i^n=+1)} \label{eq8} \end{align}\tag{5}\]
For example, if label \(m\) is a subset of label \(n\), the presence of label \(m\) definitely implies the presence of label \(n\), while the converse does not hold. In this case, \(A(n,m) = P(y^n|y^m)\) equals 1, while \(A(m,n)\) is generally smaller. Thereby, the positive correlation matrix is able to describe the pattern of co-occurrence between labels and reflects asymmetric correlations, including hierarchical relationships.
Unlike positive correlations, negative correlations between labels have rarely been addressed in previous research. However, in real-world scenarios, negative correlations are instrumental in enabling the model to differentiate between mutually exclusive classes and contribute to more accurate decisions by clarifying the model's decision boundaries [36].
To effectively model these negative correlations, we construct an iteratively updated non-co-occurrence matrix \(NegA(m,n)\), where \(m\neq n\), as defined in Eq. 6. This matrix quantifies the confidence in the absence of label \(m\) given that label \(n\) is present. Specifically, \(\sum_{i=1}^L N(y_i^m=-1, y_i^n=+1)\) counts the instances where label \(n\) is present but label \(m\) is not. If labels \(m\) and \(n\) never occur simultaneously, then both \(NegA(m,n)\) and \(NegA(n,m)\) equal 1. In most cases, however, \(NegA(m,n)\) does not equal \(NegA(n,m)\), as each depends on the occurrence frequency of the conditioning label. This asymmetry allows the matrix to capture nuanced conditional negative relationships and provides a new perspective on label dependencies.
\[\begin{align} NegA(m,n) \quad &= \quad P(\overline{y^m}|y^n), \quad \text{where } m\neq n \notag\\ &= \quad \frac{\sum_{i=1}^L N(y_i^m=-1, y_i^n=+1)}{\sum_{i=1}^L N(y_i^n=+1)} \label{eq9} \end{align}\tag{6}\]
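Both matrices reduce to simple counting over the labeled pool. The sketch below shows one way to compute Eqs. 5 and 6 from a binary label matrix; the variable names are ours, and labels are encoded as {0, 1} rather than {−1, +1}.

```python
import numpy as np

def correlation_matrices(Y):
    """Positive (Eq. 5) and negative (Eq. 6) correlation matrices.

    Y: (N, K) binary label matrix of the labeled set L, entries in {0, 1}
       (1 corresponds to +1 and 0 to -1 in the paper's notation).
    """
    Y = Y.astype(float)
    n_pos = np.maximum(Y.sum(axis=0), 1.0)   # N(y^n = +1), guarded against 0
    A = (Y.T @ Y) / n_pos                    # A[m, n]    = P(y^m | y^n)
    NegA = ((1.0 - Y).T @ Y) / n_pos         # NegA[m, n] = P(not y^m | y^n)
    np.fill_diagonal(A, 1.0)                 # A(m, m) = 1 by definition
    np.fill_diagonal(NegA, 0.0)              # a label never excludes itself
    return A, NegA
```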
Refining the unlabeled pool to ensure that selected instances concentrate on specific representative criteria is a common strategy in active learning [4]. However, current research is predominantly based on informativeness analysis, neglecting the critical role of label correlation in MLAL.
To address this limitation and provide more representative and evenly distributed samples for the subsequent process, our model refines the unlabeled pool from three perspectives based on correlation properties. The pseudo label \(y^*\), obtained by averaging the predictions of the ensemble models through Eq. 7, is used for the following correlation-based sampling.
\[\begin{align} y_i^* = \mathbb{I}[P(y_i \mid x_i, L)>0.5] \label{eq10} \end{align}\tag{7}\] \[\begin{align} P(y_i \mid x_i, L) &= \textstyle \int_{\theta} P(y_i|x_i,\theta) p(\theta \mid L) \notag \\ &\approx \textstyle \sum^E_{e=1} P(y_i|x_i,\theta_e)/E\label{eq} \end{align}\tag{8}\]
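A minimal sketch of the ensemble pseudo-labeling in Eqs. 7 and 8, assuming the per-member sigmoid outputs are stacked in a single array (names are hypothetical):

```python
import numpy as np

def pseudo_labels(member_probs):
    """Ensemble-averaged pseudo labels (Eqs. 7-8).

    member_probs: (E, N, K) sigmoid outputs P(y_i | x_i, theta_e) for E
                  ensemble members, N instances, and K labels.
    Returns the (N, K) binary pseudo labels y* and the averaged posterior.
    """
    posterior = member_probs.mean(axis=0)    # Monte Carlo average (Eq. 8)
    y_star = (posterior > 0.5).astype(int)   # indicator threshold (Eq. 7)
    return y_star, posterior
```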
In multi-label scenarios, labels often exhibit asymmetric correlations, where the presence of one label, \(m\), is highly correlated with another label, \(n\), but not vice versa. Hierarchical label structures are a common example of this kind of relationship. To illustrate the impact on performance, consider hierarchical data: when asymmetric correlations exist, the selection of root-node labels often overlaps with that of the corresponding leaf-node labels, which reduces the representativeness of the selected root labels [16].
To address this, our strategy introduces a mechanism to modify label-wise selection. If the correlation between two labels exceeds a threshold \(\sigma\), defined here as the standard deviation under a two-tailed normal distribution, we consider the label pair to be asymmetrically correlated. In such cases, only the primary label in the correlation chain, \(m\), is selected for label-wise sampling, as sketched below. The model then allocates the per-label query size for each label using pseudo labels to refine the sampling pool. This approach not only improves correlation-aware sampling but also mitigates imbalance by ensuring relatively even representation across the selected samples.
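As one plausible reading of this rule (the text does not pin down the reference distribution beyond the \(\sigma\) threshold), the sketch below flags pairs whose \(A\) entry lies more than one standard deviation above the mean of the off-diagonal entries and keeps only the primary label of each asymmetric chain; the threshold form is an assumption.

```python
import numpy as np

def primary_label_set(A, sigma_mult=1.0):
    """Keep only the primary labels of asymmetrically correlated chains.

    Assumption: a pair (m, n) counts as asymmetrically correlated when
    A(n, m) exceeds the off-diagonal mean by sigma_mult standard deviations
    while A(m, n) does not; m (whose presence implies n's) is kept, and n
    is dropped from label-wise sampling.
    """
    K = A.shape[0]
    off = A[~np.eye(K, dtype=bool)]
    thresh = off.mean() + sigma_mult * off.std()
    keep = np.ones(K, dtype=bool)
    for m in range(K):
        for n in range(K):
            if m != n and A[n, m] > thresh and A[m, n] <= thresh:
                keep[n] = False              # n is implied by m; skip it
    return np.flatnonzero(keep)
```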
In multi-label learning, ensuring the model respects the exclusivity of certain labels is crucial for achieving accurate predictions [37]. When mutually exclusive labels, such as those that should not logically co-occur, are predicted together, it is often an indication of model bias or misguided learning [38], [39]. This misalignment can reduce the model’s effectiveness. Therefore, as the second subset for concentrated sampling, we select instances with negatively correlated labels that are not expected to co-occur in the label space.
To formalize this, we consider a pair of labels mutually exclusive when the negative correlation coefficient \(NegA(m,n)\) exceeds a threshold of \(2\sigma\), where \(\sigma\) represents the standard deviation. Based on pseudo-label predictions, our model selects a query-sized batch of such instances to refine the unlabeled pool, guiding the model with a specific focus on avoiding negative correlations.
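The exclusivity check then reduces to thresholding \(NegA\); a sketch under the same reading of \(\sigma\) as above (the standard deviation of the off-diagonal entries, which is our assumption) follows:

```python
import numpy as np

def exclusive_pairs(NegA, sigma_mult=2.0):
    """Label pairs treated as mutually exclusive: NegA beyond 2 sigma."""
    K = NegA.shape[0]
    off = NegA[~np.eye(K, dtype=bool)]               # off-diagonal entries
    thresh = off.mean() + sigma_mult * off.std()     # assumed 2-sigma cutoff
    return [(m, n) for m in range(K) for n in range(K)
            if m != n and NegA[m, n] > thresh]
```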
The third subset our model uses to refine the unlabeled pool focuses on hard-to-learn samples. These samples are typically characterized by low confidence and low variability, indicating instances where the model has difficulty making accurate predictions. Such samples often contain ambiguous or noisy features or lie near decision boundaries [40], [41]. In this study, we set a confidence threshold of 0.5 to identify hard-to-learn samples: if the ensemble posterior in Eq. 8 falls below this threshold for every class of an instance, our model classifies the sample as hard to learn. To improve performance on these challenging instances while maintaining diversity in the sampling process, our model dynamically adjusts the sample size using a polynomial decay function, enabling more focused learning on difficult cases over time.
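The two ingredients of this subset, the hard-to-learn test and the decaying subset size, can be sketched as follows; the exact decay schedule is not specified in the text, so the polynomial form and exponent below are assumptions.

```python
import numpy as np

def hard_to_learn_mask(posterior, tau=0.5):
    """Instances whose ensemble posterior (Eq. 8) stays below tau for all classes."""
    return (posterior < tau).all(axis=1)     # (N,) boolean mask

def hard_sample_size(initial_size, step, total_steps, power=2.0):
    """Polynomially decayed hard-to-learn subset size at a given AL step.

    Shrinks from initial_size (e.g., 300) toward zero as annotation grows;
    the exponent `power` is a hypothetical choice.
    """
    frac = 1.0 - step / max(total_steps, 1)
    return max(1, int(initial_size * frac ** power))
```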
\[\begin{align} S_{AB}(f_L(x),y^n) &= \sum^K_{m=1} \hat{A}(m,n)\cdot S_{BR}(f_L(x),y^m) \tag{9} \\ \hat{A}(m,n) &= \text{norm}(A(m,n)), \quad \text{where } m \neq n \notag \\ &= \frac{A(m,n)}{\alpha \cdot \max_n\big(\textstyle\sum_m A(m,n)\big)} \tag{10} \end{align}\]
With the correlation-based sampling strategy described in Section 3.3, our model obtains a refined unlabeled data pool. It then calculates correlation-aware Beta scores for the selected samples and uses them to cluster the final samples for annotation. This process employs a computation similar to the attention mechanism introduced in transformer models [42]. Using Eq. 9, we score each prediction, where \(S_{AB}\) incorporates the influence of other labels' scores through the attention coefficient \(\hat{A}(m,n)\), accounting for correlation in the final score. Additionally, we introduce a normalization parameter \(\alpha\), set to 2, to prevent over-estimating correlated uncertainty while preserving the original significance of each label's score. This approach allows our model to consider the impact of neighboring labels on informativeness, and the refined unlabeled pool enhances computational efficiency while deepening the analysis of label correlations. Algorithm 2 details the procedure for one iteration of our framework.
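In matrix form, Eqs. 9 and 10 amount to one normalization and one matrix product. A sketch follows; keeping each label's own score at full weight on the diagonal is our assumption, given the \(m \neq n\) condition in Eq. 10 and the stated aim of preserving each label's original score.

```python
import numpy as np

def correlation_aware_scores(S_br, A, alpha=2.0):
    """Correlation-aware Beta scores S_AB (Eqs. 9-10).

    S_br:  (N, K) per-label Beta scores S_BR for each candidate instance.
    A:     (K, K) positive correlation matrix.
    alpha: normalization parameter, set to 2 in the paper.
    """
    A_hat = A / (alpha * A.sum(axis=0).max())  # Eq. 10: damp correlated terms
    np.fill_diagonal(A_hat, 1.0)               # assumption: own score kept whole
    return S_br @ A_hat                        # S_AB[i, n] = sum_m A_hat[m, n] * S_br[i, m]
```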
We collected four benchmark multi-label text datasets to analyze the performance and robustness of our framework [43]. These datasets include: RCV1 [44]: a news articles dataset from Reuters; UKLEX [45]: a collection of legal documents sourced from various categories within UK law; EURLEX [46]: a set of descriptors from the European legal information thesaurus, extracted from the European Union's legal database; and MIMIC3 [47]: a set of de-identified health records for medical diagnosis. Since the data in UKLEX, EURLEX, and MIMIC3 have two levels of labels, for uniformity our study used the coarse level of labels. Following the method of [48], we used the mean imbalance ratio (MeanIR) to create synthetic datasets with varying imbalance ratios based on a modified RCV1 dataset, reducing the label set of RCV1 to the ten most frequently occurring labels (\(K=10\)), enabling an evaluation of the model's performance across different degrees of imbalance. Table [tab:data] and Table [tab:data_ir] offer a detailed summary of these four datasets. We also introduced a new metric, termed CorrAvg, defined as \(\sum_{m=1}^K\sum_{n=1, n\neq m}^K A(m,n)/(K\times K)\), to quantify the degree of inter-correlation within the label set. We compared the performance of different query methods across datasets with different correlation levels.
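CorrAvg itself is a one-line statistic of the positive correlation matrix; a sketch following the definition above (note that the paper normalizes by \(K \times K\), not by the number of off-diagonal entries):

```python
import numpy as np

def corr_avg(A):
    """CorrAvg: average off-diagonal entry of the K x K correlation matrix A."""
    K = A.shape[0]
    return (A.sum() - np.trace(A)) / (K * K)   # excludes the m == n terms
```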
| Dataset | #Sample (Train/Test) | #Vocab/#Label | MeanIR (Train/Test) | CorrAvg (Train/Test) |
|---|---|---|---|---|
| RCV1 | 24891/6223 | 104619/102 | 402/197 | 0.137/0.137 |
| UKLEX | 20000/8500 | 63157/18 | 7/6 | 0.026/0.024 |
| EURLEX | 55000/5000 | 160211/21 | 16/15 | 0.131/0.147 |
| MIMIC | 29999/10000 | 137678/19 | 127/101 | 0.321/0.320 |
| Dataset | #Sample (Train/Test) | #Vocab/#Label | MeanIR (Train/Test) | CorrAvg (Train/Test) |
|---|---|---|---|---|
| RCV1-T10-5 | 1200/600 | 25254/10 | 5/10 | 0.133/0.138 |
| RCV1-T10-10 | 1200/600 | 25289/10 | 10/10 | 0.135/0.138 |
| RCV1-T10-20 | 1200/600 | 24170/10 | 20/10 | 0.137/0.138 |
| RCV1-T10-50 | 1200/600 | 25280/10 | 50/10 | 0.142/0.138 |
We used Neural-Classifier [49], implemented in PyTorch [50], as the code base. In our study, we employed three mainstream models, TextCNN [51], TextRNN [52], and DistilBERT [53], as the backbone classifiers. To enhance efficiency and performance, we applied the cold-start strategy [54] with random initialization at the beginning of each active learning iteration, a method known for its applicability to real-world scenarios [55]. All experiments were conducted on a single RTX 3090 GPU. Following the setting of [22], the maximum sequence length for the text data was set to 256, with each training iteration consisting of 80 epochs. We implemented an early stopping criterion with a patience of 20 epochs to prevent the model from falling into local optima or overfitting [56], [57]. AdamW was used as the optimizer [58], with the learning rate tailored for each model: 5e-2 for TextCNN and TextRNN, and 5e-5 for DistilBERT. The hard-to-learn query size was set to 300, while the per-label query size was set to 50 for RCV1 and 100 for the other datasets, due to differences in label space size.
To conduct a comparative performance analysis, we adopted five state-of-the-art MLAL methods, along with random sampling, as baselines. Each baseline uses the same query parameters and backbone classifier to maintain consistency across experiments. Specifically, MMC [59] applies maximal confidence to select data that induces the largest reduction in expected model loss. AUDI [27] explores uncertainty and diversity in both the instance and label spaces through label ranking and threshold learning. ADAPTIVE [60] integrates max-margin prediction uncertainty with label cardinality inconsistency to assess the unified informativeness of multi-label instances. BESRA [22] utilizes the Beta scoring rules within an expected loss reduction framework to evaluate informativeness and employs vector representations to maintain diversity. CMAL [61] leverages the global label correlation matrix and label space sparsity together with uncertainty to query the most informative example-label pairs.
Figures [fig:bert] and [fig:ir] present the quantitative performance of our proposed framework. Figure [fig:bert] compares the performance of CRAB and the baseline methods across four datasets using BERT, with additional results for other models provided in Appendix 7. Following [22], we obtained the predictive distribution by training five ensemble models independently, each initialized with the same parameters for every AL iteration. The micro-F1 results show that our proposed model, CRAB, consistently outperforms the other AL methods across different text domains and network structures. Additionally, CRAB demonstrates robust performance on datasets with varying degrees of correlation, particularly compared with BESRA, suggesting that our strategy effectively models correlation during data selection. To examine the robustness of the model on imbalanced datasets, we conducted comparative experiments on synthetic datasets, with the results shown in Figure [fig:ir]. CRAB maintains superior performance across synthetic datasets with different MeanIR values, demonstrating its capability to handle imbalanced datasets.
To further investigate the effectiveness of our model, Figures [fig:meanir] and [fig:trend] present a qualitative analysis of CRAB. Figure [fig:meanir] shows the MeanIR of the selected samples across AL iterations. Since MeanIR indicates imbalance, with lower values reflecting a more even data distribution, we observe that CRAB demonstrates a more balanced sample selection than the other baseline models, underscoring its capacity to address data imbalance effectively. Figure [fig:trend] illustrates the trend of two categories of data within the unlabeled data pool: hard-to-learn data and negatively correlated data. Unlike random selection, CRAB strategically selects data that enhances model learning, thereby reducing misclassification of negatively correlated data. Additionally, CRAB improves performance on hard-to-learn data, helping the model become more robust and accurate. This targeted data selection contributes to a more balanced and adaptive learning process, ultimately leading to improved generalization across diverse data types.
We conducted four experiments to examine whether the structure of CRAB improves MLAL performance by considering correlation. Taking asymmetrically correlated label relationships into consideration, CRAB selects only the initial label in the correlation chain for per-label selection, thus avoiding duplicate selection of correlated labels. Figure 7 (a) compares performance with and without considering asymmetric correlations on the MIMIC dataset. The results illustrate that CRAB achieves superior performance and is more effective in querying indicative samples than when treating all labels equally. Figure 7 (b) shows the benefits of sampling conflicting labels, with performance improvements becoming more pronounced in the later training stages.
Figure 8 (a) illustrates how the correlation attention impacts MLAL accuracy. To assess performance without correlation in score evaluation, we removed the correlation attention in Eq. 9, with the results shown by the blue line. Evidently, when positive label correlations are incorporated, CRAB performs more consistently throughout the experiment, indicating that our strategy effectively models inter-label relationships to make more informative queries. Additionally, we compared the micro-F1 score and computation time of random sampling and clustering-based sampling during refined unlabeled pool sampling. As shown in Figure 8 (b), random sampling performs almost identically to clustering-based sampling, suggesting it can serve as a replacement during refined unlabeled pool selection. Moreover, the querying time with random sampling decreases by 40%. These findings demonstrate that our method is effective, efficient, and robust.
In this paper, we proposed an innovative MLAL query strategy, CRAB, which takes inherent label relationships into account within a Bayesian framework. By updating the correlation matrices with the annotated data, our model is able to query more representative samples in the initial stage and achieves a more accurate score for evaluating the informativeness of instances. Additionally, by utilizing the Beta scoring rules, our model maintains consistently robust performance on imbalanced datasets. Leveraging pseudo labels and correlation-aware sampling, our strategy eliminates the need for additional training modules, and our model demonstrates significant performance improvements in MLAL on four benchmark datasets.
Our work has two main limitations. First, although our model utilizes subsets of the unlabeled pool to select samples based on correlation properties, the framework could be improved by dynamically and proportionally adjusting the subset sample size for the refined unlabeled pool. Since learning performance varies across datasets, dynamically adapting the sample selection process in response to the model’s evolving learning capability and the inter-relationships within each dataset could further enhance performance, allowing the model to balance diverse and informative sample selection more effectively.
Second, our current approach focuses primarily on label-wise correlations, considering both the co-occurrence and non-co-occurrence relationships between labels. Future work will extend this to explore correlations in both instance and label spaces, examining whether the alignment between instances and labels can further improve performance. Additionally, we plan to investigate more complex relationships, such as spurious correlations, that may exist between data features and label distributions. This includes studying the impact of invalid or noisy information from instances on model performance and identifying methods to mitigate such effects.
Figure [fig:textcnn] and Figure 10 present supplementary performance comparisons based on TextCNN and TextRNN. Our proposed strategy, CRAB, consistently demonstrates robust and superior performance across the different benchmarks. Among the baseline methods, BESRA achieves strong results and shows relative robustness across datasets. However, its performance on the highly correlated MIMIC dataset is less stable, likely due to the absence of correlation consideration. AUDI, which incorporates both uncertainty and diversity at the instance and label levels, performs notably on three of the datasets, RCV1, UKLEX, and EURLEX, but struggles on MIMIC. ADAPTIVE and MMC yield similar results across the four datasets, as both utilize max-margin as the selection criterion. Although CMAL considers global label correlation, it performs optimally only on the highly correlated MIMIC dataset and does not maintain stable performance across all datasets.
To validate the effectiveness and generalizability of CRAB, we conducted two experiments to analyze its performance under different parameters. Figure 11 (a) shows the performance with varying sizes of hard-to-learn samples for refined unlabeled pool selection. With an acquisition size of 100 per iteration, the optimal performance is achieved when the size of the hard-to-learn subset matches the acquisition size. If the subset size is set too large, such as 200, the model initially shows relatively better performance due to the higher proportion of hard-to-learn samples in the early stages. However, as annotated data increases, performance declines because the proportion of hard-to-learn samples becomes less significant, necessitating a reduction in their selection. This parameter is adjustable for different datasets and model structures to ensure compatibility with the learning capabilities of varying scenarios. To deal with the shrinking pool of hard-to-learn samples as annotated data grows, CRAB adopts a decay function for the hard-to-learn subset size to adapt to the training process. Figure 11 (b) presents the performance of three decay approaches: linear decay, cosine decay, and polynomial decay. Among these, polynomial decay achieves superior performance in terms of the micro-F1 score, as it produces an accelerated decrease in the sampling size, better aligning with the trend in the number of hard-to-learn samples.