Object-conditioned Bag of Instances for
Few-Shot Personalized Instance Recognition


Abstract

Nowadays, users demand increased personalization of vision systems to localize and identify personal instances of objects (e.g., my dog rather than dog) from a few-shot dataset only. Despite the outstanding results of deep networks on classical label-abundant benchmarks (e.g., those of the latest YOLOv8 model for standard object detection), they struggle to maintain the within-class variability needed to represent different instances rather than object categories only. We construct an Object-conditioned Bag of Instances (OBoI) based on multi-order statistics of extracted features, where generic object detection models are extended to search for and identify personal instances in the OBoI’s metric space, without the need for backpropagation. By relying on multi-order statistics, OBoI achieves consistently superior accuracy in distinguishing different instances. In the results, we achieve \(77.1\%\) personal object recognition accuracy in the case of \(18\) personal instances, showing about a \(12\%\) relative gain over the state of the art.

Personalization, Object Recognition, Instances, Few-Shot Learning

1 Introduction↩︎

Smart devices are becoming ubiquitous in everyday life [1] and their users are demanding instance-level personalized detection from the vision systems mounted on such devices [2], [3]. For example, vacuum cleaners can now monitor the behavior of users’ specific pets, and stay away from those that are most scared by the robot’s noise [4]. Nonetheless, users do not provide many labeled examples, since labeling is a time-consuming operation. Therefore, we introduce a new task of few-shot instance-level personalization of object detection models to detect and recognize personal instances of objects (e.g., dog\(_1\) and dog\(_2\) rather than just dog). The limited availability of the data distinguishes our task from previous instance-level personalization attempts [5], [6]. To the best of our knowledge, previous works assume large availability of labelled data and fine-tune (FT) the models through computationally expensive updates. However, FT-based methods inevitably fail when only few-shot samples are provided [7]–[10].

In our work, we utilize the latest efficient detection model, YOLOv8 [11], and we enable personalized instance recognition via backpropagation-free Prototype-based Few-Shot Learners (PFSLs), such as [12], [13]. In short, PFSLs learn a metric space in which classification is performed by computing distances to prototypical representations of each class.

In this context, we extend PFSLs to support object-class conditioned search, and we call these approaches Object-conditioned Bags of Instances (OBoIs), since they contain instance-level prototypes. Our approach enriches any OBoI method by augmenting localized encoder embeddings (EEs) of the input object via multi-order statistics to construct a richer metric space where instance-specific patterns are separable. We compute augmented EEs (AEEs) via a reduction module, similar to recent pooling schemes [14]–[17], to characterize the distribution of the specific instances from the few-shot labelled data. A concurrent work [14] applies ensemble learning on multi-order features learned separately; however, its focus is neither personalized instance recognition nor object detection, and it requires gradient-based training. A backpropagation-free approach, instead, could be especially useful where dynamic compilers are not available for the target hardware. Our OBoIs with AEEs significantly increase model personalization, alleviating neural collapse [18], [19], i.e., a state in which the within-class variability of hidden layer outputs is completely lost due to the object-level optimization objective. Our main novelties are:

  1. We propose a novel task of few-shot personalization of object detectors to recognize instances of objects;

  2. We extend PFSLs via object-level conditioning (OBoIs);

  3. We further design a multi-order feature space where personal instances can be separated via a backpropagation-free metric learning on few-shot labelled user data only;

  4. OBoIs provide superior results on both same-domain and other-domain data (11–22% and 7–18% relative gains, respectively).

Figure 1: For each object (e.g., ball), multiple instances (e.g., ball\(_1\), ball\(_2\)) are acquired across different sequences.

2 Few-Shot Personalized Instance Recognition↩︎

In our setup, we aim to personalize generic object detection models to recognize objects via a set of instance-level labels.

We are given a generic object detection model \(M_o\) (e.g., YOLOv8 [11]) which has been trained on a labelled object-level dataset \(\mathcal{T}_o=\{x_{o,k}, y_{o,k}\}_{k=1}^{N_o}\), whose labels \(y_{o,k}\in \mathcal{C}_o\) belong to an object-level class set \(\mathcal{C}_o\) (e.g., \(\mathcal{C}_o = \{\mathrm{ball}, \mathrm{bottle}\}\)). We target personalization of the classification ability of \(M_o\) to recognize a set of personal classes given few-shot labelled samples \(\mathcal{T}_i=\{x_{i,k}, y_{i,k}\}_{k=1}^{N_i}\), where \({N_o \gg N_i}\) and labels \(y_{i,k} \in \mathcal{C}_i\) belong to an instance-level class set \({\mathcal{C}_i}\) (e.g., \({\mathcal{C}_i = \{\mathrm{ball}_{\mathrm{1}}, \mathrm{ball}_{\mathrm{2}}, \mathrm{bottle}_{\mathrm{1}} \}}\)). We remark that, in this work, we focus on the detector’s classification part and do not update the localization part. In other words, we assume that there exists an instance-to-object function \(f(\cdot)\) mapping each class \(c \in \mathcal{C}_i\) to a label in \(\mathcal{C}_o\), i.e., \(f: \mathcal{C}_i \to \mathcal{C}_o\) (e.g., \(f(\mathrm{ball}_{\mathrm{1}})=\mathrm{ball}\)); therefore, there is no need to update the detector’s regression part.
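As a concrete illustration, the instance-to-object function \(f(\cdot)\) can be stored as a plain lookup table; the sketch below is minimal, and the class names are the hypothetical examples used above.

```python
# Hypothetical instance-level class set C_i mapped to object-level classes C_o.
INSTANCE_TO_OBJECT = {
    "ball_1": "ball",
    "ball_2": "ball",
    "bottle_1": "bottle",
}

def f(instance_label: str) -> str:
    """Instance-to-object mapping f: C_i -> C_o, e.g., f("ball_1") = "ball"."""
    return INSTANCE_TO_OBJECT[instance_label]
```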

In the analyses, we implement the model \(M_o\) with YOLOv8, pre-training it on MSCOCO [20] and then on \(\mathcal{T}_o\), which contains samples from Open-Images-V7 (OIV7) [21] of the same object-level classes as in the personal dataset \(\mathcal{T}_i\), i.e., such that \(f(\cdot)\) exists. We used the default learning parameters [11] for pre-training. We design several setups whereby we assign a few samples to the training set and split the remaining ones into testing (\(80\%\)) and validation (\(20\%\)) sets.

Datasets. We use CORe50 [6] or iCubWorld Transformations (iCWT) [22] as the personalized recognition datasets. CORe50: we consider a subset of 45 personal instances (i.e., \(|\mathcal{C}_i|=45\)) belonging to 9 object-level classes (i.e., \(|\mathcal{C}_o|=9\)), acquired over 11 variable-background sequences, i.e., different domains (see Fig. 1). iCWT: we consider a subset of 9 object-level classes with 10 personal instances each, acquired over 5 sequences with diverse affine transformations of the items. On both datasets, we restrict the personalization stage to the frames correctly labelled by YOLOv8n, maintaining a balanced number of samples per instance and per sequence.

Metrics. We compute the instance recognition accuracy averaged within each object-level class (\(\mathrm{Acc}_o\), %) and averaged over all instances (\(\mathrm{Acc}_i\), %) on the test set. We define the relative gain between two methods obtaining \(\mathrm{Acc}_{i,1}\) and \(\mathrm{Acc}_{i,2}\) (\(\mathrm{Acc}_{i,2}>\mathrm{Acc}_{i,1}\)) as: \(\Delta \triangleq 100 \cdot (\mathrm{Acc}_{i,2}-\mathrm{Acc}_{i,1}) / (\mathrm{Acc}_{i,1})\).
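As a quick illustration, the relative gain can be computed as below; this is a minimal helper, and the example values are taken from Table 1.

```python
def relative_gain(acc_1: float, acc_2: float) -> float:
    """Relative gain: Delta = 100 * (acc_2 - acc_1) / acc_1, with acc_2 > acc_1."""
    assert acc_2 > acc_1, "by definition, acc_2 must exceed acc_1"
    return 100.0 * (acc_2 - acc_1) / acc_1

# Example (Table 1, 2 instances: SimpleShot -> SimpleShot + AEE):
print(round(relative_gain(60.62, 65.46), 1))  # 8.0
```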

Figure: overview of the proposed OBoI personalization pipeline (referred to as Fig. [fig:method] in the text).

3 Object-conditioned Bag of Instances↩︎

We propose a lightweight module that can be integrated into any object detection network. Our solution is based on three key components: (i) an object detection network, with (ii) a multi-order statistical augmentation of embeddings for (iii) instance-level recognition via an OBoI. Next, we outline how we construct our OBoI to personalize object detectors pre-trained on \(\mathcal{T}_o\) on the server side (see Fig. [fig:method]).

In our case, \(\mathcal{T}_o\) is made of subsets of the generic OIV7 dataset. The model \(M_o\) is then adapted to recognize personal user-specific object instances. Without loss of generality, we assume that \(M_o=D_o\circ E_o\) can be decomposed into an encoder \(E_o\) and a detection head \(D_o\). Given a sample \((x_{i,k}, y_{i,k})\in\mathcal{T}_i\), where \({x_{i,k}\in\mathbb{R}^{H\times W \times 3}}\) is an RGB input image of size \(H\times W\) and \(y_{i,k}\) is the associated personal instance label, we pass \(x_{i,k}\) through the model, obtaining the object-level predicted label \(\hat{y}_{o,k}\) and bounding box. We rescale the predicted coordinates by \(H/H'=W/W'\) to match the low-resolution spatial dimensions \(H'\times W'\) of the \(D\)-dimensional features \({e_{i,k} \triangleq E_o(x_{i,k}) \in \mathbb{R}^{H'\times W' \times D}}\) (for the sake of simplicity, we assume that only one object is present in each image, but the same rationale applies seamlessly in the presence of multiple objects). We then build a binary mask \(S_{i,k}\) to discard regions outside the bounding box, and we apply it via the Hadamard product, obtaining \({e'_{i,k}=S_{i,k} \odot e_{i,k}}\), which is then passed through a reduction operation. To characterize the instance-level distribution from the few input samples, we extract and concatenate the first \(R\) statistical moments to form \(v_{i,k} \triangleq \mathrm{concat}(m_1, \dots, m_R)\), where \(\mathrm{concat}(\cdot)\) is the concatenation operation and \(m_n\) is the \(n\)-th order central statistical moment [23] computed over the features in \(e'_{i,k}\) corresponding to non-zero entries in \(S_{i,k}\).
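A minimal NumPy sketch of this reduction step is given below; it assumes the encoder features and the box mask are already available, and it takes \(m_1\) as the mean (the first central moment is identically zero). Function and variable names are illustrative, not the exact implementation.

```python
import numpy as np

def augmented_embedding(e: np.ndarray, S: np.ndarray, R: int = 4) -> np.ndarray:
    """Build the AEE v = concat(m_1, ..., m_R) from masked encoder features.

    e: encoder features of shape (H', W', D).
    S: binary box mask of shape (H', W'); entries outside the bounding box are 0.
    R: number of statistical moments to concatenate.
    """
    feats = e[S.astype(bool)]            # (N, D): features at non-zero mask entries
    m1 = feats.mean(axis=0)              # mean over the in-box spatial locations
    centered = feats - m1
    moments = [m1]
    for n in range(2, R + 1):            # n-th order central moments, n = 2..R
        moments.append((centered ** n).mean(axis=0))
    return np.concatenate(moments)       # v of shape (R * D,)
```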

Finally, we use the vector \(v_{i,k}\) as the input to a PFSL. In our case, we use PFSLs to identify instance-level classes in a metric space spanned by multi-order statistics; we refer to [12], [13] for more details. Additionally, we condition the search for representations of personal objects only within the instances whose object category matches \(\hat{y}_{o,k}\), thus simplifying the search for the correct nearest instance-level prototype.
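A sketch of this object-conditioned prototype search, using ProtoNet-style mean prototypes with Euclidean distance [12]; class and method names are illustrative assumptions, not the exact implementation.

```python
import numpy as np

class OBoI:
    """Object-conditioned Bag of Instances: one prototype per personal instance."""

    def __init__(self):
        self.prototypes = {}          # instance label -> prototype vector
        self.instance_to_object = {}  # instance label -> object-level label f(c)

    def add_instance(self, instance_label, object_label, aees):
        """Register an instance from its few-shot AEEs via a mean prototype."""
        self.prototypes[instance_label] = np.mean(aees, axis=0)
        self.instance_to_object[instance_label] = object_label

    def classify(self, v, predicted_object):
        """Nearest prototype among the instances whose object category matches
        the detector's object-level prediction, restricting the search space."""
        candidates = [c for c, o in self.instance_to_object.items()
                      if o == predicted_object]
        return min(candidates, key=lambda c: np.linalg.norm(v - self.prototypes[c]))
```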

The overall pipeline can be thought of as an Object-conditioned Bag of Instances (OBoI), since the generic category-level output is converted to specific personal-level output via conditional nearest prototype selection. Our setup and method are fully compatible with the key requirement of continually learning new instances over time [5], [6], [16], [24]–[26]; whenever a user presents a new instance to be recognized, we can include new instance-level prototypes in the OBoI at any time, with no accuracy degradation with respect to the case where all instances are available from the beginning of the adaptation process. A usage example follows below.
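Continuing the sketch above, registering a later instance amounts to inserting a single prototype, with no retraining; variables such as ball1_aees are placeholders for the few-shot AEEs.

```python
oboi = OBoI()
oboi.add_instance("ball_1", "ball", ball1_aees)
oboi.add_instance("ball_2", "ball", ball2_aees)
pred = oboi.classify(v, predicted_object="ball")       # conditioned on detector output
oboi.add_instance("bottle_1", "bottle", bottle1_aees)  # added later, at any time
```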

4 Experimental Results↩︎

Most of the evaluation focuses on personal instance-level accuracy, since our modules do not influence the object-level detection accuracy or bounding-box regression in any way. For the sake of clarity, we consider that each input sample contains a single instance. Nonetheless, our method can handle multiple instances in input samples by running the instance-level prototype search for each detected object independently. Provided that the general object detection results are accurate, our results would not change.

Unless otherwise stated, we report all results on YOLOv8n, as it is the most suitable for deployed applications, and in the case of 2 instances per object-level category.

Same domain. The first scenario we design considers 1-Shot from All Sequences (1SAS); therefore, the same domains are seen during few-shot training (one sample per sequence) and testing (all remaining samples from all sequences). Table [tab:1shot_allseq_variable_instances_same] reports \(\mathrm{Acc}_i\) in multiple setups having a variable number of instances per object-level class. First, we observe that gradient-based fine-tuning methods (e.g., FT) are not effective and obtain results comparable to a random classifier (lower bound). OBoIs via PFSL methods such as SimpleShot [13] and ProtoNet [12] show large gains compared to FT by learning a metric space from the extracted features. In both cases, the augmentation of embeddings via our multi-order statistics boosts the recognition accuracy significantly, especially in the presence of multiple instances per object. Remarkably, we can personalize YOLOv8n to achieve \(77.08\%\) \(\mathrm{Acc}_i\) when detecting \(18\) personal instances via just a few samples and a backpropagation-free approach, assuming that a correct object-level classification and bounding box regression are output by the detection head. Fig. 2 (a) reports \(\mathrm{Acc}_i\) and \(\mathrm{Acc}_o\) of OBoIs via ProtoNet in the case of 2 instances per object. We consider three configurations for ProtoNet: at the logits level (i.e., the output of the last layer of the detector’s head), at the encoder embedding level (i.e., the output of the detector’s encoder), or via our multi-statistics augmented encoder embeddings (i.e., with features augmented via multi-order statistics). We observe that our proposed solution consistently improves or achieves comparable results on every object class.


Table 1: \(\mathrm{Acc}_i\) of OBoIs on 1S1S (other domain).

                      # Instances per object class
                        2       3       4       5
  Random              50.00   33.33   25.00   20.00
  FT                  48.12   35.98   21.74   22.53
  SimpleShot          60.62   43.95   34.31   29.84
  + AEE (ours)        65.46   49.62   40.52   35.33
  \(\Delta\)          +8.0    +12.9   +18.1   +18.4
  ProtoNet            60.33   46.52   37.13   32.13
  + AEE (ours)        64.37   51.39   40.70   36.77
  \(\Delta\)          +6.7    +10.5   +9.6    +14.4

Figure 2: \(\mathrm{Acc}_i\) and per-object \(\mathrm{Acc}_o\) for OBoIs via ProtoNet. (a) 1SAS (same domain). (b) 1S1S (other domain). EE: encoder embeddings. AEE: augmented EE.

Other domain. We design a more realistic, yet challenging, setup considering 1-Shot from 1 Sequence (1S1S) during training and all remaining samples during testing. The model experiences different domains at training time (one sample from the first sequence only) and at testing time (all remaining samples from all sequences). Table 1 summarizes the main results. Reducing the training samples further decreases the accuracy of FT compared to the 1SAS setup. SimpleShot and ProtoNet also show lower accuracy, due to the fewer training samples and the domain gap of the 1S1S setup; nonetheless, they exhibit large gains over FT. Our method significantly improves performance in every case, even in the presence of a domain shift, and especially in the case of multiple instances per object category. We argue that the gain attained by our approach is lower than in the 1SAS setup due to the difficulty of reliably matching multi-order statistics between a single input sample from a single domain and several target samples from several domains. Fig. 2 (b) reports \(\mathrm{Acc}_o\); similarly to the previous case, we confirm that our solution obtains robust results across most of the classes.

Variable training shots are studied in Fig. 3, where we observe that OBoIs with our AEE improve personal recognition accuracy regardless of the number of available training samples (i.e., shots) for both ProtoNet and SimpleShot.

Figure 3: \(\mathrm{Acc}_i\) at variable shots. Samples are drawn randomly from each sequence. PN: ProtoNet, SS: SimpleShot.

Table 2: Ablation on YOLOv8 models. General object detection results are computed on the subset from OIV7. Personal instance recognition is evaluated on the subset from CORe50; we report a single score, \(\mathrm{mean}_p(\mathrm{Acc}_{i,p})\), where \(p\) indicates the setup with \(p\) instances per object (\(p\in\{2,3,4,5\}\)).

  YOLOv8:              n       s       m       l       x
  Size [MB]            5.9     21.4    77.0    83.6    130.4
  Precision            72.0    73.2    70.4    72.4    77.2
  Recall               72.5    70.3    74.4    75.5    77.3
  mAP50                77.7    77.6    79.3    78.2    78.7
  mAP50-95             60.9    61.7    62.3    62.7    63.1
  ProtoNet (1S1S)      44.03   43.56   48.44   55.38   58.77
  + AEE (ours)         48.31   48.47   56.30   56.88   60.84
  \(\Delta\)           +9.72   +11.27  +16.23  +2.72   +3.52
  ProtoNet (1SAS)      53.92   53.86   61.66   64.16   66.60
  + AEE (ours)         62.71   66.67   75.29   75.21   74.65
  \(\Delta\)           +16.31  +23.79  +22.10  +17.23  +12.09

Other YOLOv8 sizes are evaluated in Table 2 on both the general object detection and personalized instance recognition tasks. Larger YOLOv8 models can improve detection performance, and this correlates with personal instance recognition accuracy. However, the improvement of larger YOLOv8 models comes at the cost of a significantly larger model size and slower inference: YOLOv8x improves personal recognition by about \(25\%\) compared to YOLOv8n, while having an about 22\(\times\) larger size and \(3.6\times\) slower inference. The final choice depends on the hardware specifications of target devices.

Table 3: \(\mathrm{Acc}_i\) on iCWT. PN: ProtoNet.

                 # Instances per object class
                   2     3     4     5     6     7     8     9     10
  PN (1SAS)       82.3  66.5  62.6  58.8  46.9  46.0  43.6  41.5  41.5
  + AEE           85.8  75.8  71.9  67.2  57.7  55.8  52.4  49.5  47.8
  \(\Delta\)      +4.4  +13.9 +14.9 +14.3 +22.9 +21.5 +20.2 +19.1 +15.2
  PN (1S1S)       71.7  52.9  51.7  47.2  39.5  38.7  36.1  33.4  31.1
  + AEE           79.6  60.1  56.5  49.8  41.1  40.6  37.5  34.2  33.0
  \(\Delta\)      +11.1 +13.5 +9.4  +5.5  +4.1  +5.0  +3.8  +2.4  +6.1


Computational inference time is barely affected: our AEE increases the inference time of the OBoI with ProtoNet by as little as \(0.8\%\), making our method lightweight with a nearly negligible impact.

Additional ablation studies evaluating our design choices are reported here on ProtoNet. Removing the object-level conditioning lowers \(\mathrm{Acc}_i\) in the 1SAS setup from the \(77.1\%\) of our approach (ProtoNet + AEE) to \(70.9\%\), still showing a relative gain of about \(3\%\) compared to the baseline (\(68.8\%\)). This is due to the larger search space in metric learning. Removing the mask \(S_{i,k}\) lets more background noise flow into the prototype computation, and it decreases accuracy by \(2.6\%\) in the 1SAS setup and by even more (\(5.1\%\)) in the 1S1S setup, since the background varies across different sequences.

Another personal dataset (iCWT) is evaluated in Table 3 against the strongest baseline, ProtoNet. Our approach exhibits robust gains across all setups, ranging from 18 to 90 instances.

5 Conclusion↩︎

In this paper, we introduced few-shot instance-level personalization of object detectors. We proposed a new method (OBoI) to personalize detection models to recognize user-specific instances of object categories. OBoI is a backpropagation-free metric learning approach on a multi-order statistics feature space. We believe that this setup and our method could pave the way to personal instance-level detection and stimulate future research and applications.

References↩︎

[1]
Ovidiu Vermesan and Joël Bacquet, Internet of Things–The Call of the Edge: Everything Intelligent Everywhere, CRC Press, 2022.
[2]
Junfeng He, Khoi Pham, Nachiappan Valliappan, Pingmei Xu, Chase Roberts, Dmitry Lagun, and Vidhya Navalpakkam, “On-device few-shot personalization for real-time gaze estimation,” in ICCVW, 2019.
[3]
Nidhi Arora, Daniel Ensslen, Lars Fiedler, Wei Wei Liu, Kelsey Robinson, Eli Stein, and Gustavo Schüler, “The value of getting personalization right - or wrong - is multiplying,” McKinsey & Company, pp. 1–12, 2021.
[4]
“An Intelligent At-Home Helper – How the Bespoke Jet Bot™ AI+ Takes Care of Your Pet When You’re Away,” https://tinyurl.com/srjs6zcn.
[5]
Raffaello Camoriano, Giulia Pasquale, Carlo Ciliberto, Lorenzo Natale, Lorenzo Rosasco, and Giorgio Metta, “Incremental robot learning of new objects with fixed update time,” in ICRA. IEEE, 2017, pp. 3207–3214.
[6]
Vincenzo Lomonaco and Davide Maltoni, “CORe50: a new dataset and benchmark for continuous object recognition,” in CoRL, 2017, pp. 17–26.
[7]
Yu-Xiong Wang, Liangke Gui, and Martial Hebert, “Few-shot hash learning for image retrieval,” in ICCVW, 2017, pp. 1228–1237.
[8]
Deunsol Jung, Dahyun Kang, Suha Kwak, and Minsu Cho, “Few-shot metric learning: Online adaptation of embedding for retrieval,” in ACCV, 2022.
[9]
Jiancai Zhu, Jiabao Zhao, Jiayi Zhou, Liang He, Jing Yang, and Zhi Zhang, “Uncertainty-aware few-shot class-incremental learning,” in ICASSP, 2023, pp. 1–5.
[10]
Aymane Abdali, Vincent Gripon, Lucas Drumetz, and Bartosz Boguslawski, “Active learning for efficient few-shot classification,” in ICASSP, 2023, pp. 1–5.
[11]
“YOLOv8 by Ultralytics,” https://github.com/ultralytics/.
[12]
Jake Snell, Kevin Swersky, and Richard Zemel, “Prototypical networks for few-shot learning,” NeurIPS, 2017.
[13]
Yan Wang, Wei-Lun Chao, Kilian Q Weinberger, and Laurens Van Der Maaten, “Simpleshot: Revisiting nearest-neighbor classification for few-shot learning,” arXiv:1911.04623, 2019.
[14]
Sai Yang, Fan Liu, Delong Chen, and Jun Zhou, “Few-shot classification via ensemble learning with multi-order statistics,” IJCAI, 2023.
[15]
Umberto Michieli, Pablo Peso Parada, and Mete Ozay, “Online continual learning in keyword spotting for low-resource devices via pooling high-order temporal statistics,” INTERSPEECH, 2023.
[16]
Umberto Michieli and Mete Ozay, “Online continual learning for robust indoor object recognition,” IROS, 2023.
[17]
Umberto Michieli and Mete Ozay, “HOP to the Next Tasks and Domains for Continual Learning in NLP,” in AAAI, 2024.
[18]
Vignesh Kothapalli, Ebrahim Rasromani, and Vasudev Awatramani, “Neural collapse: A review on modelling principles and generalization,” TMLR, 2023.
[19]
Vardan Papyan, XY Han, and David L Donoho, “Prevalence of neural collapse during the terminal phase of deep learning training,” PNAS, vol. 117, no. 40, pp. 24652–24663, 2020.
[20]
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick, “Microsoft coco: Common objects in context,” in ECCV. Springer, 2014, pp. 740–755.
[21]
Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper Uijlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Kamali, Stefan Popov, Matteo Malloci, Alexander Kolesnikov, et al., “The open images dataset v4: Unified image classification, object detection, and visual relationship detection at scale,” IJCV, vol. 128, no. 7, pp. 1956–1981, 2020.
[22]
Giulia Pasquale, Carlo Ciliberto, Francesca Odone, Lorenzo Rosasco, and Lorenzo Natale, “Are we done with object recognition? the icub robot’s perspective,” Robotics and Autonomous Systems, vol. 112, pp. 260–281, 2019.
[23]
Athanasios Papoulis and S Unnikrishna Pillai, Probability, random variables and stochastic processes, 2002.
[24]
Umberto Michieli and Pietro Zanuttigh, “Continual semantic segmentation via repulsion-attraction of sparse and disentangled latent representations,” in CVPR, 2021, pp. 1114–1124.
[25]
Umberto Michieli and Pietro Zanuttigh, “Incremental learning techniques for semantic segmentation,” in ICCVW, 2019.
[26]
Timothée Lesort, Vincenzo Lomonaco, Andrei Stoian, Davide Maltoni, David Filliat, and Natalia Dı́az-Rodrı́guez, “Continual learning for robotics: Definition, framework, learning strategies, opportunities and challenges,” Information Fusion, 2020.