MosquitoFusion: A Multiclass Dataset for Real-Time Detection of Mosquitoes, Swarms, and Breeding Sites Using Deep Learning

Md. Faiyaz Abdullah Sayeedi, Fahim Hafiz & Md Ashiqur Rahman
Department of Computer Science and Engineering, United International University
msayeedi212049@bscse.uiu.ac.bd,{fahimhafiz,ashiqurrahman}@cse.uiu.ac.bd


Abstract

In this paper, we present an integrated approach to real-time mosquito detection using our multiclass dataset (MosquitoFusion) containing 1204 diverse images and leverage cutting-edge technologies, specifically computer vision, to automate the identification of Mosquitoes, Swarms, and Breeding Sites. The pre-trained YOLOv8 model, trained on this dataset, achieved a mean Average Precision (mAP@50) of 57.1%, with precision at 73.4% and recall at 50.5%. The integration of Geographic Information Systems (GIS) further enriches the depth of our analysis, providing valuable insights into spatial patterns. The dataset and code are available at https://github.com/faiyazabdullah/MosquitoFusion.

1 Introduction↩︎

Mosquito-borne diseases stand as a major global health threat due to the adaptability and resilience of mosquitoes. Roughly 700 million people are infected with mosquito-borne diseases every year, and an estimated 1 million people die from them annually [1]. Combating these diseases requires a reevaluation of existing strategies, and understanding mosquito breeding grounds and behaviors is crucial for effective prevention. This research tackles the broader challenge of preventing mosquito-borne diseases by emphasizing the swift detection of mosquitoes. In this paper, we contribute to this effort by leveraging our extensive MosquitoFusion dataset. To demonstrate the usability of our dataset, we evaluate the pre-trained YOLOv8 object detection model on it.

2 Existing Works↩︎

Recent research has witnessed a surge in multidisciplinary approaches to combat mosquito-borne diseases. [2] pioneered an automated detection system using unmanned aerial vehicles (UAVs) for identifying potential breeding sites, emphasizing the efficacy of aerial surveillance. [3] leveraged Convolutional Neural Networks (CNNs) and geospatial analysis, highlighting the synergy between advanced algorithms and geographic insights. [4] integrated Geographic Information Systems (GIS) for refined risk assessments, emphasizing the correlation between breeding sites and environmental factors. Recent studies by [5] and [6] underscore the significance of image annotation precision and preprocessing techniques for improved model accuracy. However, existing works focused on creating datasets in laboratory environments [7], which rarely capture mosquitoes under real-life conditions, and often lack multi-class diversity. Many datasets also concentrate on a single aspect, such as mosquito [8] or breeding site [9] detection. Moreover, a majority of these datasets are not publicly accessible.

Figure 1: Fully tagged and labeled images. (A), (B), (C), and (D) show the original images. In (A’) and (C’), the purple borders mark the breeding sites; in (B’) and (D’), the yellow borders mark the mosquitoes; in (D’), the red borders mark the swarms.

3 Methodology↩︎

In Fig. 1 we present an overview of the annotated dataset. In Sections 3.1 and 3.2 we present our dataset and its technical validation, respectively.

3.1 Dataset↩︎

The dataset comprises 1204 meticulously curated images, strategically divided into training (87%), validation (8%), and test (5%) sets, totaling 1053, 100, and 51 images, respectively. Rigorous preprocessing measures (Section 5.2) ensure high-quality data. Augmentations such as flips, rotations, crops, and grayscale conversion enhance dataset diversity. This carefully prepared dataset serves as a valuable resource for training, validating, and testing mosquito detection models. More details of the dataset are discussed in Section 5. In Figure 2 we present the overall methodology.

3.2 Technical Validation↩︎

We use the pre-trained ‘YOLOv8s’ object detection model [10], which is built on a CNN architecture, to evaluate our dataset. This configuration achieved a mean Average Precision (mAP@50) of 57.1%, with a precision of 73.4% and a recall of 50.5%, aligning with the objective of efficient and accurate mosquito identification. We then integrate Geographic Information Systems (GIS) to further enrich the depth of our analysis, providing valuable insights into spatial patterns. A summary of the evaluation metrics is shown in Table 2, and further technical validation is presented in Section 5.4.
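The GIS integration can be sketched as a simple spatial aggregation: geotagged detections are binned into grid cells so that clusters of breeding sites or swarms stand out. The coordinates, cell size, and helper names below are hypothetical illustrations, not values from our study.

```python
from collections import Counter

# Hypothetical geotagged detections: (latitude, longitude, predicted class).
detections = [
    (23.8103, 90.4125, "Breeding Place"),
    (23.8107, 90.4129, "Breeding Place"),
    (23.8150, 90.4200, "Mosquito Swarm"),
    (23.8104, 90.4127, "Mosquito"),
]

CELL = 0.005  # grid cell size in degrees (assumed)

def grid_cell(lat, lon, cell=CELL):
    """Snap a coordinate to the lower-left corner of its grid cell."""
    return (round(lat // cell * cell, 4), round(lon // cell * cell, 4))

# Count detections per (cell, class) pair to highlight spatial clusters.
density = Counter((grid_cell(lat, lon), cls) for lat, lon, cls in detections)

hotspot, count = density.most_common(1)[0]
print(hotspot, count)  # the (cell, class) pair with the most detections
```

In practice a GIS layer would render these per-cell counts as a heatmap over a base map; the aggregation logic is the same.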

4 Result Analysis and Future Work↩︎

The model trained on the MosquitoFusion dataset exhibits promising performance, showcasing its efficacy in real-time mosquito detection. The dataset’s careful curation and diverse augmentations contribute to the model’s robustness. The split into training, validation, and test sets ensures reliable evaluation, emphasizing the dataset’s value for training effective mosquito detection models. Beyond its utility in research, the dataset holds great potential for applications in public health, environmental monitoring, and disease control strategies. Our future work includes creating a custom model exclusively designed for detecting mosquitoes, swarms, and breeding sites to further advance our capabilities in this domain. Additionally, our future work will address a limitation of the pre-trained YOLOv8 model: it may struggle to differentiate swarms formed by mosquitoes from those formed by other insects.

URM Statement↩︎

The authors acknowledge that at least one key author of this work meets the URM criteria of ICLR 2024 Tiny Papers Track.

5 Appendix↩︎

In Sections 5.1, 5.2, 5.3, and 5.4, we present our data collection, data preprocessing, distribution analysis and folder structure, and model setup and evaluation, respectively.

5.1 Data Collection↩︎

In the initial phase of our project, data collection for the MosquitoFusion dataset involved meticulous fieldwork, employing professional cameras to capture 1204 detailed images of mosquitoes, swarms, and breeding sites. We captured the images in daylight under sunny conditions. The dataset’s reliability is underscored by careful annotation using the Roboflow tool [11]. This hands-on approach ensures the acquisition of authentic and representative images for effective real-time mosquito detection.

5.2 Data Preprocessing↩︎

Data preprocessing is a crucial step in ensuring the quality and effectiveness of a dataset for training machine learning models. For the MosquitoFusion dataset, our preprocessing pipeline includes several key steps, starting with data cleaning and curation. We then applied auto-orientation and resized all images to a consistent 640×640 pixels, filtering out images lacking annotations to preserve integrity. Finally, we applied augmentations, including flips, rotations, crops, and grayscale conversion, to introduce variability. With a total of 1204 images strategically divided into training, validation, and test sets, the preprocessing emphasizes creating a standardized yet diverse dataset.
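As a minimal illustration of the augmentations above, the sketch below applies a horizontal flip, a 90-degree rotation, and a grayscale conversion to a tiny image represented as a nested list of RGB pixels. In practice these transforms were applied through Roboflow’s pipeline; the helper names here are our own.

```python
def hflip(img):
    """Horizontal flip: reverse each pixel row."""
    return [row[::-1] for row in img]

def rotate90(img):
    """Rotate the image 90 degrees clockwise."""
    return [list(col) for col in zip(*img[::-1])]

def grayscale(img):
    """Luminance conversion using the standard Rec. 601 weights."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in row]
            for row in img]

# A 2x2 "image": red, green / blue, white pixels.
img = [[(255, 0, 0), (0, 255, 0)],
       [(0, 0, 255), (255, 255, 255)]]

print(hflip(img)[0])      # [(0, 255, 0), (255, 0, 0)] -- row reversed
print(rotate90(img)[0])   # [(0, 0, 255), (255, 0, 0)] -- blue, red on top
print(grayscale(img)[0])  # [76, 150] -- luminance of red and green
```

Random crops and small-angle rotations follow the same idea but require interpolation, which image libraries handle for us.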

5.3 Distribution Analysis and Folder Structure↩︎

The dataset encompasses instances of three distinct classes: Breeding Place, Mosquito, and Mosquito Swarm. Specifically, the Breeding Place class is represented by 1031 instances, the Mosquito class includes 133 instances, and the Mosquito Swarm class comprises 40 instances. In Table 1, we present the class distribution of our dataset.

The dataset is imbalanced because capturing images of mosquitoes and swarms is quite challenging: unlike many other objects, mosquitoes are small, swift, and often found in dynamic swarms, making it harder to obtain clear images. To tackle this imbalance, we employ oversampling, increasing the number of instances of the minority classes through data augmentation. This exposes the model to a more balanced representation of all classes, enhancing its ability to recognize instances from each category effectively.

Within each directory (Train, Valid, and Test), two folders, "image" and "label", organize the dataset. This dual-folder structure streamlines data management, with the "image" folder housing the visual representations and the "label" folder containing the corresponding annotations. This organization enhances the dataset’s usability. In Figure 3 we present the folder structure of our dataset.
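A dataset organized this way is typically described to YOLOv8 through a small YAML file. The sketch below follows the usual Ultralytics layout, where image folders are conventionally named "images" with sibling "labels" folders; the paths and class ordering are assumed rather than taken from our released files.

```yaml
# data.yaml (assumed layout for the MosquitoFusion splits)
path: MosquitoFusion        # dataset root
train: train/images
val: valid/images
test: test/images

names:
  0: Breeding Place
  1: Mosquito
  2: Mosquito Swarm
```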

Table 1: Distribution of Classes in the Dataset
Class Instances
Breeding Place 1031
Mosquito 133
Mosquito Swarm 40
Total 1204

5.4 Model Setup and Evaluation↩︎

All the images in the dataset were manually reviewed to ensure that no individually identifiable information was included or embedded in the dataset. To verify that the dataset is appropriate for training deep learning models, we trained the localization model using the pre-trained YOLOv8s model. The images were randomly split into 87% (1053) training, 8% (100) validation, and 5% (51) test images for training and testing the localization model.
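The random split described above can be reproduced with a few lines of standard-library Python. The filenames are placeholders, the seed is an assumption for reproducibility, and the counts (1053/100/51) follow the split stated in the paper.

```python
import random

random.seed(42)  # fixed seed for a reproducible split (assumed)

# Placeholder filenames standing in for the 1204 dataset images.
images = [f"img_{i:04d}.jpg" for i in range(1204)]
random.shuffle(images)

# 87% / 8% / 5% split -> 1053 / 100 / 51 images.
train, valid, test = images[:1053], images[1053:1153], images[1153:]

print(len(train), len(valid), len(test))  # 1053 100 51
```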

The training process took place on a machine running Windows 11 (Version 23H2), equipped with an Nvidia RTX 3070Ti GPU with 8GB of video memory and an AMD Ryzen 5800X processor. The model was pre-trained on the COCO dataset and fine-tuned for a total of 25 epochs. The input size was set to 640 pixels, and standard hyperparameters were employed throughout the training sessions.

Object Detection Performance: For the localization task, the Mosquitoes, Swarms, and Breeding Sites were detected with a box precision of 73.4%, a recall of 50.5%, and a mean Average Precision (mAP@50) of 57.1% at an IoU threshold of 0.5 on the validation set.
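These metrics can be made concrete with a small sketch: a predicted box counts as a true positive when its IoU with a ground-truth box is at least 0.5, and precision and recall follow from the TP/FP/FN counts. The boxes below are invented for illustration, and the matching is deliberately simplified (no one-to-one assignment).

```python
def iou(a, b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Invented ground-truth and predicted boxes for one image.
gts = [(0, 0, 10, 10), (20, 20, 30, 30)]
preds = [(1, 1, 11, 11), (50, 50, 60, 60)]

# A prediction is a TP if it overlaps some ground truth with IoU >= 0.5.
tp = sum(any(iou(p, g) >= 0.5 for g in gts) for p in preds)
fp = len(preds) - tp  # predictions with no matching ground truth
fn = len(gts) - tp    # ground truths missed by every prediction

precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(precision, recall)  # 0.5 0.5
```

mAP@50 extends this by sweeping the confidence threshold, averaging precision over recall levels, and then averaging over classes, which detection toolkits compute for us.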

Table 2: Evaluation Metrics
Model Type YOLOv8s
Architecture CNN
mAP@50 57.1%
Precision 73.4%
Recall 50.5%

Figure 2: Framework of Methodology

Figure 3: The folder structure of the MosquitoFusion dataset

References↩︎

[1]
Adnan I. Qureshi. Chapter 2 - mosquito-borne diseases. In Adnan I. Qureshi (ed.), Zika Virus Disease, pp. 27–45. Academic Press, 2018. ISBN 978-0-12-812365-2. URL https://www.sciencedirect.com/science/article/pii/B9780128123652000032.
[2]
Daniel Trevisan Bravo, Gustavo Araujo Lima, Wonder Alexandre Luz Alves, Vitor Pessoa Colombo, Luc Djogbenou, Sergio Vicente Denser Pamboukian, Cristiano Capellani Quaresma, and Sidnei Alves de Araujo. Automatic detection of potential mosquito breeding sites from aerial images acquired by unmanned aerial vehicles. Computers, Environment and Urban Systems, 90: 101692, 2021.
[3]
Jared Schenkel, Paul Taele, Daniel Goldberg, Jennifer Horney, and Tracy Hammond. Identifying potential mosquito breeding grounds: Assessing the efficiency of uav technology in public health. Robotics, 9 (4): 91, 2020.
[4]
Wesley L Passos, Gabriel M Araujo, Amaro A de Lima, Sergio L Netto, and Eduardo AB da Silva. Automatic detection of aedes aegypti breeding grounds based on deep networks with spatio-temporal consistency. Computers, Environment and Urban Systems, 93: 101754, 2022.
[5]
Wei-Liang Liu, Yuhling Wang, Yu-Xuan Chen, Bo-Yu Chen, Arvin Yi-Chu Lin, Sheng-Tong Dai, Chun-Hong Chen, and Lun-De Liao. An iot-based smart mosquito trap system embedded with real-time mosquito image processing by neural networks for mosquito surveillance. Frontiers in Bioengineering and Biotechnology, 11: 1100968, 2023.
[6]
Veerayuth Kittichai, Morakot Kaewthamasorn, Yudthana Samung, Rangsan Jomtarak, Kaung Myat Naing, Teerawat Tongloy, Santhad Chuwongin, and Siridech Boonsang. Automatic identification of medically important mosquitoes using embedded learning approach-based image-retrieval system. Scientific Reports, 13 (1): 10609, 2023.
[7]
Song-Quan Ong and Hamdan Ahmad. An annotated image dataset for training mosquito species recognition system on human skin. Scientific Data, 9 (1): 413, 2022.
[8]
P Chumchu, K Patil, M Aungmaneeporn, and R Pise. Image dataset of aedes and culex mosquito species. IEEE DataPort, 2020.
[9]
Varalakshmi Perumal, R Sasana, P Rakshitha, et al. Mosquito breeding grounds detection using deep learning techniques. In 2023 International Conference on Advances in Computing, Communication and Applied Informatics (ACCAI), pp. 1–8. IEEE, 2023.
[10]
Glenn Jocher, Ayush Chaurasia, and Jing Qiu. Ultralytics yolov8. 2023. URL https://github.com/ultralytics/ultralytics.
[11]
B Dwyer, J Nelson, J Solawetz, et al. Roboflow (version 1.0)[software], 2022.