Article title

AI-based Maize and Weeds detection on the edge with CornWeed Dataset

Identifiers
Title variants
Publication languages
EN
Abstracts
EN
AI methods are used increasingly in agricultural applications, and the lack of wireless connectivity in the field makes cloud services unavailable, so the AI models have to run directly on the edge. In this paper, we evaluate state-of-the-art detection algorithms for their use in agriculture, in particular plant detection. The paper also presents the CornWeed dataset, recorded from agricultural machinery, which provides labelled maize crops and weeds for plant detection. We report accuracies for the state-of-the-art detection algorithms on the CornWeed dataset, as well as frames-per-second (FPS) metrics for these networks on multiple edge devices. For the FPS analysis, the detection algorithms are converted to ONNX and TensorRT engine files, as these could serve as future standards for model exchange.
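As a rough illustration of the export-and-benchmark workflow mentioned in the abstract, the sketch below exports a stand-in detector to ONNX and estimates FPS with ONNX Runtime. This is not the paper's code: the choice of Faster R-CNN, the 640x640 input, the file name detector.onnx and the run count are illustrative assumptions, and the TensorRT engine-building step used on the edge devices is omitted.

```python
# Minimal sketch (not the paper's pipeline): export a stand-in detector to
# ONNX, then estimate FPS by timing repeated single-image inferences with
# ONNX Runtime. Model choice, input size and run count are assumptions.
import time

import numpy as np
import torch
import torchvision
import onnxruntime as ort

# Stand-in detector (requires torchvision >= 0.13 for the `weights` keyword);
# the paper evaluates several other detection architectures as well.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None)
model.eval()

# torchvision detection models take a list of 3D image tensors; the export
# call follows torchvision's documented ONNX export example.
dummy = [torch.rand(3, 640, 640)]  # assumed input resolution
torch.onnx.export(model, dummy, "detector.onnx", opset_version=11)

# FPS estimate: average latency over repeated single-image inferences.
session = ort.InferenceSession("detector.onnx")
input_name = session.get_inputs()[0].name
frame = np.random.rand(3, 640, 640).astype(np.float32)

n_runs = 50
start = time.perf_counter()
for _ in range(n_runs):
    session.run(None, {input_name: frame})
fps = n_runs / (time.perf_counter() - start)
print(f"Approximate FPS: {fps:.1f}")
```

On an actual edge device, the same timing loop would typically be run against a TensorRT engine built from the exported ONNX file rather than against ONNX Runtime on the CPU.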
Year
Volume
Pages
577–584
Physical description
Bibliography: 35 items, illustrations, tables
Authors
author
  • DFKI Plan-based Robot Control, Osnabrueck, Germany
  • DFKI Marine Perception, Oldenburg, Germany
  • Faculty of Engineering and Computer Science, University of Applied Sciences Osnabrueck, Osnabrueck, Germany
  • Faculty of Engineering and Computer Science, University of Applied Sciences Osnabrueck, Osnabrueck, Germany
  • Faculty of Engineering and Computer Science, University of Applied Sciences Osnabrueck, Osnabrueck, Germany
  • Faculty of Engineering and Computer Science, University of Applied Sciences Osnabrueck, Osnabrueck, Germany
Bibliography
  • 1. L. Benos, A. C. Tagarakis, G. Dolias, R. Berruto, D. Kateris, and D. Bochtis, “Machine Learning in Agriculture: A Comprehensive Updated Review,” Sensors, vol. 21, no. 11, p. 3758, Jan. 2021, http://dx.doi.org/10.3390/s21113758.
  • 2. L. Jiao, F. Zhang, F. Liu, S. Yang, L. Li, Z. Feng, and R. Qu, “A Survey of Deep Learning-based Object Detection,” IEEE Access, vol. 7, pp. 128837–128868, 2019.
  • 3. L. Liu, W. Ouyang, X. Wang, P. Fieguth, J. Chen, X. Liu, and M. Pietikäinen, “Deep Learning for Generic Object Detection: A Survey,” International Journal of Computer Vision, vol. 128, no. 2, pp. 261–318, Feb. 2020.
  • 4. Z. Tian, C. Shen, H. Chen, and T. He, “FCOS: Fully convolutional one-stage object detection,” in 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 9626–9635.
  • 5. M. Caron, H. Touvron, I. Misra, H. Jégou, J. Mairal, P. Bojanowski, and A. Joulin, “Emerging properties in self-supervised vision transformers,” in Proceedings of the International Conference on Computer Vision (ICCV), 2021.
  • 6. X. Zhang, Z. Cao, and W. Dong, “Overview of Edge Computing in the Agricultural Internet of Things: Key Technologies, Applications, Challenges,” IEEE Access, vol. 8, pp. 141748–141761, 2020, http://dx.doi.org/10.1109/ACCESS.2020.3013005.
  • 7. M. Weiss, F. Jacob, and G. Duveiller, “Remote sensing for agricultural applications: A meta-review,” Remote Sensing of Environment, vol. 236, p. 111402, 2020, http://dx.doi.org/10.1016/j.rse.2019.111402. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0034425719304213
  • 8. E. Cai, S. Baireddy, C. Yang, M. Crawford, and E. J. Delp, “Deep transfer learning for plant center localization,” in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2020, pp. 277–284, http://dx.doi.org/10.1109/CVPRW50498.2020.00039.
  • 9. F. López-Granados, “Weed detection for site-specific weed management: mapping and real-time approaches,” Weed Research, vol. 51, no. 1, pp. 1–11, 2011, http://dx.doi.org/10.1111/j.1365-3180.2010.00829.x.
  • 10. Z. Zhou, X. Chen, E. Li, L. Zeng, K. Luo, and J. Zhang, “Edge Intelligence: Paving the Last Mile of Artificial Intelligence With Edge Computing,” Proceedings of the IEEE, vol. 107, no. 8, pp. 1738–1762, Aug. 2019, http://dx.doi.org/10.1109/JPROC.2019.2918951.
  • 11. J. Weyler, A. Milioto, T. Falck, J. Behley, and C. Stachniss, “Joint Plant Instance Detection and Leaf Count Estimation for In-Field Plant Phenotyping,” IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 3599–3606, Apr. 2021, http://dx.doi.org/10.1109/LRA.2021.3060712.
  • 12. N. Chebrolu, P. Lottes, A. Schaefer, W. Winterhalter, W. Burgard, and C. Stachniss, “Agricultural robot dataset for plant classification, localization and mapping on sugar beet fields,” The International Journal of Robotics Research, vol. 36, no. 10, pp. 1045–1052, Sep. 2017.
  • 13. X. Wu, S. Aravecchia, P. Lottes, C. Stachniss, and C. Pradalier, “Robotic weed control using automated weed and crop classification,” Journal of Field Robotics, vol. 37, no. 2, pp. 322–340, 2020, http://dx.doi.org/10.1002/rob.21938.
  • 14. D. König, M. Igelbrink, C. Scholz, A. Linz, and A. Ruckelshausen, “Entwicklung einer flexiblen Sensorapplikation zur Erzeugung von validen Daten für KI-Algorithmen in landwirtschaftlichen Feldversuchen,” in 42. GIL-Jahrestagung, Künstliche Intelligenz in der Agrar- und Ernährungswirtschaft. Bonn: Gesellschaft für Informatik in der Land-, Forst- und Ernährungswirtschaft e.V., 2022, pp. 165–170.
  • 15. W. Bangert, A. Kielhorn, F. Rahe, A. Albert, P. Biber, S. Grzonka, S. Haug, A. Michaels, D. Mentrup, M. Hänsel et al., “Field-robot-based agriculture: ‘RemoteFarming.1’ and ‘BoniRob-Apps’,” in 71st conference LAND.TECHNIK-AgEng 2013, 2013, pp. 439–446.
  • 16. B. Sekachev, N. Manovich, M. Zhiltsov, A. Zhavoronkov, D. Kalinin, B. Hoff, TOsmanov, D. Kruchinin, A. Zankevich, DmitriySidnev, M. Markelov, Johannes222, M. Chenuet, a andre, telenachos, A. Melnikov, J. Kim, L. Ilouz, N. Glazov, Priya4607, R. Tehrani, S. Jeong, V. Skubriev, S. Yonekura, vugia truong, zliang7, lizhming, and T. Truong, “opencv/cvat: v1.1.0,” Aug. 2020. [Online]. Available: https://doi.org/10.5281/zenodo.4009388
  • 17. N. Iqbal, J. Bracke, A. Elmiger, H. Hameed, and K. von Szadkowski, “Evaluating synthetic vs. real data generation for ai-based selective weeding,” in 43. GIL-Jahrestagung, Resiliente Agri-Food-Systeme, C. Hoffmann, A. Stein, A. Ruckelshausen, H. Müller, T. Steckel, and H. Floto, Eds. Bonn: Gesellschaft für Informatik e.V., 2023, pp. 125–135.
  • 18. S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks,” arXiv:1506.01497 [cs], Jan. 2016, http://dx.doi.org/10.48550/arXiv.1506.01497.
  • 19. M. Carranza-García, J. Torres-Mateo, P. Lara-Benítez, and J. García-Gutiérrez, “On the Performance of One-Stage and Two-Stage Object Detectors in Autonomous Vehicles Using Camera Data,” Remote Sensing, vol. 13, no. 1, p. 89, Dec. 2020, http://dx.doi.org/10.3390/rs13010089.
  • 20. J. Redmon and A. Farhadi, “YOLOv3: An Incremental Improvement,” arXiv:1804.02767 [cs], Apr. 2018, http://dx.doi.org/10.48550/arXiv.1804.02767.
  • 21. T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, “Focal loss for dense object detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 42, no. 2, pp. 318–327, 2020, http://dx.doi.org/10.1109/TPAMI.2018.2858826.
  • 22. Z. Ge, S. Liu, F. Wang, Z. Li, and J. Sun, “YOLOX: Exceeding YOLO Series in 2021,” Aug. 2021, http://dx.doi.org/10.48550/arXiv.2107.08430.
  • 23. R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in 2014 IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 580–587, http://dx.doi.org/10.1109/CVPR.2014.81.
  • 24. W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, “SSD: Single Shot MultiBox Detector,” in Computer Vision – ECCV 2016, ser. Lecture Notes in Computer Science, B. Leibe, J. Matas, N. Sebe, and M. Welling, Eds. Cham: Springer International Publishing, 2016, pp. 21–37, http://dx.doi.org/10.1007/978-3-319-46448-0_2.
  • 25. K. Oksuz, B. C. Cam, S. Kalkan, and E. Akbas, “Imbalance Problems in Object Detection: A Review,” Transactions on Pattern Analysis and Machine Intelligence (TPAMI), pp. 1–1, 2020, http://dx.doi.org/10.1109/TPAMI.2020.2981890.
  • 26. J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 779–788, http://dx.doi.org/10.1109/CVPR.2016.91.
  • 27. G. Jocher, A. Chaurasia, A. Stoken, J. Borovec, NanoCode012, Y. Kwon, TaoXie, J. Fang, imyhxy, K. Michael, Lorna, A. V, D. Montes, J. Nadar, Laughing, tkianai, yxNONG, P. Skalski, Z. Wang, A. Hogan, C. Fati, L. Mammana, AlexWang1900, D. Patel, D. Yiwei, F. You, J. Hajek, L. Diaconu, and M. T. Minh, “ultralytics/yolov5: v6.1 - TensorRT, TensorFlow Edge TPU and OpenVINO Export and Inference,” Feb. 2022. [Online]. Available: https://doi.org/10.5281/zenodo.6222936
  • 28. A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, “YOLOv4: Optimal Speed and Accuracy of Object Detection,” arXiv:2004.10934 [cs, eess], Apr. 2020.
  • 29. N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, and S. Zagoruyko, “End-to-end object detection with transformers,” in Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part I 16. Springer, 2020, pp. 213–229, http://dx.doi.org/10.48550/arXiv.2005.12872.
  • 30. H. Zhang, F. Li, S. Liu, L. Zhang, H. Su, J. Zhu, L. M. Ni, and H.-Y. Shum, “DINO: DETR with improved denoising anchor boxes for end-to-end object detection,” arXiv preprint arXiv:2203.03605, 2022, http://dx.doi.org/10.48550/arXiv.2203.03605.
  • 31. Y. Wu, A. Kirillov, F. Massa, W.-Y. Lo, and R. Girshick, “Detectron2,” https://github.com/facebookresearch/detectron2, 2019.
  • 32. detrex contributors, “detrex: A research platform for transformer-based object detection algorithms,” https://github.com/IDEA-Research/detrex, 2022.
  • 33. S. Markidis, S. Chien, E. Laure, I. Peng, and J. S. Vetter, “Nvidia tensor core programmability, performance & precision,” in 2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). Los Alamitos, CA, USA: IEEE Computer Society, May 2018, pp. 522–531. [Online]. Available: https://doi.ieeecomputersociety.org/10.1109/IPDPSW.2018.00091
  • 34. M. Ahmad, M. Abdullah, and D. Han, “Small object detection in aerial imagery using retinanet with anchor optimization,” in 2020 International Conference on Electronics, Information, and Communication (ICEIC), 2020, pp. 1–3, http://dx.doi.org/10.1109/ICEIC49074.2020.9051269.
  • 35. M. Wolf, K. van den Berg, S. P. Garaba, N. Gnann, K. Sattler, F. Stahl, and O. Zielinski, “Machine learning for aquatic plastic litter detection, classification and quantification (APLASTIC-Q),” Environmental Research Letters, vol. 15, no. 11, p. 114042, Nov. 2020, http://dx.doi.org/10.1088/1748-9326/abbd01.
Notes
1. Thematic Tracks Regular Papers
2. Record compiled with funding from the Ministry of Education and Science (MEiN), agreement no. SONP/SP/546092/2022, under the programme "Społeczna odpowiedzialność nauki" (Social Responsibility of Science), module: Popularisation of science and promotion of sport (2024).
Document type
YADDA identifier
bwmeta1.element.baztech-cda34ede-1422-45df-b2e4-6983adaa575c