Article title

Applying Knowledge Distillation to Improve Weed Mapping With Drones

Languages of publication
EN
Abstracts
EN
Non-invasive remote sensing with UAVs allows crops to be observed in both visible and non-visible spectra for precision agriculture. This paper investigates the effectiveness of state-of-the-art knowledge distillation techniques for mapping weeds with drones, an essential component of precision agriculture that employs remote sensing to monitor crops and weeds. The study introduces a lightweight Vision Transformer-based model that achieves accurate weed mapping while keeping computation time minimal. Using the WeedMap dataset, the student model learns effectively from the teacher model and achieves results suitable for mobile platforms such as drones, requiring only 0.5 GMACs compared to the teacher model's 42.5 GMACs. The trained models obtained F1 scores of 0.863 and 0.631 on two data subsets, improving on the undistilled model by 2 and 7 points, respectively. These results suggest that deploying efficient computer vision algorithms on drones can significantly improve agricultural management practices, leading to greater profitability and environmental sustainability.
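The record reproduces only the abstract, with no implementation details. Purely as a hedged illustration of the teacher-student scheme the abstract describes, the following PyTorch sketch shows a standard response-based distillation loss for per-pixel segmentation logits; the function name, the temperature, and the weighting factor alpha are illustrative assumptions, not values reported in the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets,
                      temperature=4.0, alpha=0.5):
    """Combine a soft teacher-matching term with the usual hard-label term.

    student_logits, teacher_logits: (N, C, H, W) per-pixel class scores.
    targets: (N, H, W) integer ground-truth labels.
    """
    # Soft targets: KL divergence between temperature-scaled class
    # distributions, computed over the class dimension at every pixel;
    # the T^2 factor restores the usual gradient magnitude.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)

    # Hard targets: standard per-pixel cross-entropy against the labels.
    hard = F.cross_entropy(student_logits, targets)

    return alpha * soft + (1.0 - alpha) * hard

# Toy usage with random tensors (3 classes, e.g. background/crop/weed):
student = torch.randn(2, 3, 64, 64)
teacher = torch.randn(2, 3, 64, 64)
labels = torch.randint(0, 3, (2, 64, 64))
loss = distillation_loss(student, teacher, labels)
```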
Pages
393–400
Physical description
Bibliography: 31 items, formulas, tables, illustrations.
Contributors
  • Department of Computer Science, University of Bari Aldo Moro, Bari, Italy
  • Department of Computer Science, University of Bari Aldo Moro, Bari, Italy
  • Department of Computer Science, University of Bari Aldo Moro, Bari, Italy
Bibliography
  • 1. FAO, “How to Feed the World in 2050. Insights from an Expert Meeting,” FAO, 2009.
  • 2. S. G. Vougioukas, “Agricultural Robotics,” Annual Review of Control, Robotics, and Autonomous Systems, vol. 2, pp. 365–392, 2019.
  • 3. K. Danilchenko and M. Segal, “An efficient connected swarm deployment via deep learning,” in Proceedings of the 16th Conference on Computer Science and Intelligence Systems, ser. Annals of Computer Science and Information Systems, M. Ganzha, L. Maciaszek, M. Paprzycki, and D. Ślęzak, Eds., vol. 25. IEEE, 2021, p. 1–7. [Online]. Available: http://dx.doi.org/10.15439/2021F001
  • 4. A. dos Santos Ferreira, D. M. Freitas, G. G. da Silva, H. Pistori, and M. T. Folhes, “Weed Detection in Soybean Crops Using ConvNets,” Computers and Electronics in Agriculture, vol. 143, pp. 314–324, 2017.
  • 5. I. Sa, Z. Chen, M. Popović, R. Khanna, F. Liebisch, J. Nieto, and R. Siegwart, “WeedNet: Dense Semantic Weed Classification Using Multispectral Images and MAV for Smart Farming,” IEEE Robotics and Automation Letters, vol. 3, no. 1, pp. 588–595, 2017.
  • 6. B. Hobba, S. Akıncı, and A. H. Göktogan, “Efficient Herbicide Spray Pattern Generation for Site-Specific Weed Management Practices Using Semantic Segmentation on UAV Imagery,” in Australasian Conference on Robotics and Automation (ACRA-2021), 2021, pp. 1–10.
  • 7. J. Gou, B. Yu, S. J. Maybank, and D. Tao, “Knowledge Distillation: A Survey,” International Journal of Computer Vision, vol. 129, no. 6, pp. 1789–1819, Jun. 2021.
  • 8. I. Sa, M. Popović, R. Khanna, Z. Chen, P. Lottes, F. Liebisch, J. Nieto, C. Stachniss, A. Walter, and R. Siegwart, “WeedMap: A Large-Scale Semantic Weed Mapping Framework Using Aerial Multispectral Imaging and Deep Neural Network for Precision Farming,” Remote Sensing, vol. 10, no. 9, p. 1423, 2018.
  • 9. P. Lottes, J. Behley, N. Chebrolu, A. Milioto, and C. Stachniss, “Joint Stem Detection and Crop-Weed Classification for Plant-Specific Treatment in Precision Farming,” in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018, pp. 8233–8238.
  • 10. M. Á. Chicchón Apaza, H. M. B. Monzón, and R. Alcarria, “Semantic Segmentation of Weeds and Crops in Multispectral Images by Using a Convolutional Neural Networks Based on U-Net,” in International Conference on Applied Technologies. Springer, 2019, pp. 473–485.
  • 11. W. Ramirez, P. Achanccaray, L. F. Mendoza, and M. A. C. Pacheco, “Deep Convolutional Neural Networks for Weed Detection in Agricultural Crops Using Optical Aerial Images,” in 2020 IEEE Latin American GRSS & ISPRS Remote Sensing Conference (LAGIRS). IEEE, 2020, pp. 133–137.
  • 12. S. I. Moazzam, U. S. Khan, W. S. Qureshi, M. I. Tiwana, N. Rashid, W. S. Alasmary, J. Iqbal, and A. Hamza, “A Patch-Image Based Classification Approach for Detection of Weeds in Sugar Beet Crop,” IEEE Access, vol. 9, pp. 121698–121715, 2021.
  • 13. S. I. Mirzadeh, M. Farajtabar, A. Li, N. Levine, A. Matsukawa, and H. Ghasemzadeh, “Improved Knowledge Distillation via Teacher Assistant,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 04, pp. 5191–5198, Apr. 2020.
  • 14. A. Jafari, M. Rezagholizadeh, P. Sharma, and A. Ghodsi, “Annealing Knowledge Distillation,” in Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. Online: Association for Computational Linguistics, Apr. 2021, pp. 2493–2504.
  • 15. A. Jafari, I. Kobyzev, M. Rezagholizadeh, P. Poupart, and A. Ghodsi, “Continuation KD: Improved Knowledge Distillation through the Lens of Continuation Optimization,” Dec. 2022.
  • 16. J. Li, K. Fu, S. Zhao, and S. Ge, “Spatiotemporal Knowledge Distillation for Efficient Estimation of Aerial Video Saliency,” IEEE Transactions on Image Processing, vol. 29, pp. 1902–1914, 2020.
  • 17. B.-Y. Liu, H.-X. Chen, Z. Huang, X. Liu, and Y.-Z. Yang, “ZoomInNet: A Novel Small Object Detector in Drone Images with Cross-Scale Knowledge Distillation,” Remote Sensing, vol. 13, no. 6, p. 1198, Jan. 2021.
  • 18. G. Yu, “Data-Free Knowledge Distillation for Privacy-Preserving Efficient UAV Networks,” in 2022 6th International Conference on Robotics and Automation Sciences (ICRAS), Jun. 2022, pp. 52–56.
  • 19. M. Ding, N. Li, Z. Song, R. Zhang, X. Zhang, and H. Zhou, “A Lightweight Action Recognition Method for Unmanned-Aerial-Vehicle Video,” in 2020 IEEE 3rd International Conference on Electronics and Communication Engineering (ICECE), Dec. 2020, pp. 181–185.
  • 20. “KeepEdge: A Knowledge Distillation Empowered Edge Intelligence Framework for Visual Assisted Positioning in UAV Delivery.” [Online]. Available: https://ieeexplore.ieee.org/abstract/document/9732222/
  • 21. M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele, “The Cityscapes Dataset for Semantic Urban Scene Understanding,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 3213–3223.
  • 22. A. Tao, K. Sapra, and B. Catanzaro, “Hierarchical Multi-Scale Attention for Semantic Segmentation,” May 2020.
  • 23. H. Liu, F. Liu, X. Fan, and D. Huang, “Polarized Self-Attention: Towards High-quality Pixel-wise Regression,” Jul. 2021.
  • 24. H. Yan, C. Zhang, and M. Wu, “Lawin Transformer: Improving Semantic Segmentation Transformer with Multi-Scale Representations via Large Window Attention,” arXiv preprint arXiv:2201.01615, 2022.
  • 25. A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly et al., “An Image Is Worth 16x16 Words: Transformers for Image Recognition at Scale,” arXiv preprint arXiv:2010.11929, 2020.
  • 26. K. Sun, Y. Zhao, B. Jiang, T. Cheng, B. Xiao, D. Liu, Y. Mu, X. Wang, W. Liu, and J. Wang, “High-Resolution Representations for Labeling Pixels and Regions,” Apr. 2019.
  • 27. Y. Yuan, X. Chen, and J. Wang, “Object-Contextual Representations for Semantic Segmentation,” in Computer Vision – ECCV 2020, A. Vedaldi, H. Bischof, T. Brox, and J.-M. Frahm, Eds. Cham: Springer International Publishing, 2020, vol. 12351, pp. 173–190.
  • 28. E. Xie, W. Wang, Z. Yu, A. Anandkumar, J. M. Alvarez, and P. Luo, “SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers,” Advances in Neural Information Processing Systems, vol. 34, 2021.
  • 29. L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 4, pp. 834–848, 2017.
  • 30. S. Zhao, Y. Wang, Z. Yang, and D. Cai, “Region Mutual Information Loss for Semantic Segmentation,” in Advances in Neural Information Processing Systems, vol. 32. Curran Associates, Inc., 2019.
  • 31. G. Castellano, E. Cotardo, C. Mencar, and G. Vessio, “Density-Based Clustering with Fully-Convolutional Networks for Crowd Flow Detection from Drones,” Neurocomputing, 2023.
Notes
1. The research of Pasquale De Marinis is funded by a Ph.D. fellowship within the framework of the Italian “D.M. n. 352, April 9, 2022” - under the National Recovery and Resilience Plan, Mission 4, Component 2, Investment 3.3 - Ph.D. Project “Computer Vision techniques for sustainable AI applications using drones”, co-supported by “Exprivia S.p.A.” (CUP H91I22000410007).
2. Thematic Tracks Regular Papers
3. Record developed with funds from the Ministry of Education and Science (MEiN), agreement no. SONP/SP/546092/2022, under the programme “Social Responsibility of Science” – module: Popularisation of science and promotion of sport (2024).
YADDA identifier
bwmeta1.element.baztech-dc6f1015-feee-453c-a839-8a983d7239e6