Article title

Microscopic Studies of Activated Sludge Supported by Automatic Image Analysis Based on Deep Learning Neural Networks

Publication languages
EN
Abstracts
EN
The paper presents microscopic studies of activated sludge supported by automatic image analysis based on deep learning neural networks. Organisms classified as Arcella vulgaris were chosen for the research. They frequently occur in waters containing organic substances, as well as in wastewater treatment plants (WWTPs) employing the activated sludge method. They can usually be clearly seen and counted under a standard optical microscope, owing to their distinctive appearance, numerous populations, and passive behavior; these organisms therefore constitute a viable object for the detection task. The paper compares the performance of two deep learning networks, YOLOv4 and YOLOv8, in the automatic image analysis of the aforementioned organisms. YOLO (You Only Look Once) is a one-stage object detection model that looks at the analyzed image only once, allowing real-time detection without a marked loss of accuracy. The applied YOLO models were trained on sample microscopic images of activated sludge. The training data set was created by manually labeling digital images of the organisms; various metrics, including recall, precision, and accuracy, were then calculated and compared. The architecture of the networks built for the detection task was general, meaning that the structure of the layers and filters was not tailored to the purpose for which the models were used. Given this universal construction, the accuracy and quality of the classification can be considered very good. This means that the general YOLO architecture can also be used for specific tasks such as the identification of testate (shelled) amoebae in activated sludge.
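As an illustration of the workflow described in the abstract, the following is a minimal sketch of training and running a YOLOv8 detector with the Ultralytics Python package. The dataset file arcella.yaml, the image path, and the hyperparameter values are hypothetical placeholders, not the authors' actual configuration.

```python
# Minimal YOLOv8 training/inference sketch (Ultralytics API).
# The dataset config "arcella.yaml" and image path are hypothetical.
from ultralytics import YOLO

# Start from a general-purpose pretrained model, consistent with the
# paper's note that the architecture was not tailored to the task.
model = YOLO("yolov8n.pt")

# Train on manually labeled microscopic images (YOLO-format labels).
model.train(data="arcella.yaml", epochs=100, imgsz=640)

# Run single-pass (one-stage) detection on a new microscope image.
results = model.predict("sludge_sample.jpg", conf=0.25)
for box in results[0].boxes:
    print(box.cls, box.conf, box.xyxy)  # class id, confidence, bounding box
```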
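The evaluation metrics mentioned in the abstract can be computed from standard confusion-matrix counts; the sketch below shows the usual definitions, with hypothetical count values rather than the paper's actual results.

```python
# Standard detection metrics from confusion-matrix counts.
def detection_metrics(tp: int, fp: int, fn: int, tn: int = 0):
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # of predicted boxes, how many are correct
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # of true objects, how many were found
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total if total else 0.0
    return precision, recall, accuracy

# Hypothetical example counts, for illustration only:
print(detection_metrics(tp=90, fp=10, fn=5))  # (0.9, ~0.947, ~0.857)
```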
Pages
360–369
Physical description
Bibliography: 30 items, figures, tables
Authors
  • Department of Applied Mathematics, Faculty of Mathematics and Information Technology, Lublin University of Technology, ul. Nadbystrzycka 38, 20-618 Lublin, Poland
  • Department of Water Supply and Wastewater Disposal, Faculty of Environmental Engineering, Lublin University of Technology, ul. Nadbystrzycka 40B, 20-618 Lublin, Poland
  • Department of Water Supply and Wastewater Disposal, Faculty of Environmental Engineering, Lublin University of Technology, ul. Nadbystrzycka 40B, 20-618 Lublin, Poland
Bibliography
  • 1. Amaral A.L., Mesquita D.P., Ferreira E.C. 2013. Automatic identification of activated sludge disturbances and assessment of operational parameters. Chemosphere, 91(5), 705–710.
  • 2. Babko R., Kuzmina T., Łagód G., Jaromin-Gleń K. 2014. Changes in the structure of activated sludge protozoa community at the different oxygen condition. Chemistry-Didactics-Ecology-Metrology, 19(1–2), 87–95. (in Polish)
  • 3. Babko R., Łagód G., Kuzmina T., Danko Y. 2023. Arcella vulgaris (testacea) jako obiekt testowy osadu czynnego [online]. Available from: https://pub.pollub.pl/publication/15051/ [Accessed 22 Dec 2023]. (in Polish)
  • 4. Bay H., Ess A., Tuytelaars T., Van Gool L. 2008. Speeded-Up Robust Features (SURF). Computer Vision and Image Understanding, 110(3), 346–359.
  • 5. Bochkovskiy A., Wang C.-Y., Liao H.-Y.M. 2020. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv preprint arXiv:2004.10934.
  • 6. Dutta A., Zisserman A. 2019. The VIA Annotation Software for Images, Audio and Video. In: Proceedings of the 27th ACM International Conference on Multimedia, Nice, France: ACM, 2276–2279.
  • 7. Fiałkowska E., Fyda J., Pajdak-Stós A., Wiąckowski K. 2005. Osad czynny: biologia i analiza mikroskopowa. Kraków: Oficyna Wydawnicza ‘Impuls’. (in Polish)
  • 8. Girshick R. 2015. Fast R-CNN. In: 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile: IEEE, 1440–1448.
  • 9. Girshick R., Donahue J., Darrell T., Malik J. 2014. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. In: 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA: IEEE, 580–587.
  • 10. He K., Zhang X., Ren S., Sun J. 2015. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(9), 1904–1916.
  • 11. Jaromin-Gleń K., Babko R., Łagód G., Sobczuk H. 2013. Community composition and abundance of protozoa under different concentration of nitrogen compounds at “Hajdow” wastewater treatment plant. Ecological Chemistry and Engineering S, 20(1), 127–139. (in Polish)
  • 12. Krizhevsky A., Sutskever I., Hinton G.E. 2017. ImageNet classification with deep convolutional neural networks. Communications of the ACM, 60(6), 84–90.
  • 13. Lecun Y., Bottou L., Bengio Y., Haffner P. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278–2324.
  • 14. Lin T.Y., Maire M., Belongie S., Hays J., Perona P., Ramanan D., Dollár P., Zitnick C.L. 2014. Microsoft COCO: Common Objects in Context. In: D. Fleet, T. Pajdla, B. Schiele, T. Tuytelaars, eds. Computer Vision – ECCV 2014. Cham: Springer International Publishing, 740–755.
  • 15. Liu S., Qi L., Qin H., Shi J., Jia J. 2018. Path Aggregation Network for Instance Segmentation. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT: IEEE, 8759–8768.
  • 16. Liu W., Anguelov D., Erhan D., Szegedy C., Reed S., Fu C.Y., Berg A.C. 2016. SSD: Single Shot MultiBox Detector. In: Computer Vision – ECCV 2016. Cham: Springer International Publishing, 21–37.
  • 17. Lowe D.G. 2004. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2), 91–110.
  • 18. Ogden G.G., Hedley R.H. 1980. An atlas of freshwater testate amoebae. Soil Science, 130(3), 176.
  • 19. Papert S. 2004. The Summer Vision Project.
  • 20. Pérez-Uz B., Arregui L., Calvo P., Salvadó H., Fernández N., Rodríguez E., Zornoza A., Serrano S. 2010. Assessment of plausible bioindicators for plant performance in advanced wastewater treatment systems. Water Research, 44(17), 5059–5069.
  • 21. Redmon J. 2023. Darknet: Open Source Neural Networks in C [online]. Available from: https://pjreddie.com/darknet/ [Accessed 22 Dec 2023].
  • 22. Redmon J., Divvala S., Girshick R., Farhadi A. 2016. You Only Look Once: Unified, Real-Time Object Detection. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA: IEEE, 779–788.
  • 23. Redmon J., Farhadi A. 2018. YOLOv3: An Incremental Improvement. arXiv preprint arXiv:1804.02767.
  • 24. Ren S., He K., Girshick R., Sun J. 2017. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(6), 1137–1149.
  • 25. Song B., Wang Y., Lou L.P. 2023. SSD-based carton packaging quality defect detection system for the logistics supply chain. Ecological Chemistry and Engineering S, 30(1).
  • 26. Stawarczyk M., Stawarczyk K. 2015. Use of the ImageJ program to assess the damage of plants by snails. Chemistry-Didactics-Ecology-Metrology, 20(1-2).
  • 27. Titano J.J., Badgeley M., Schefflein J., Pain M., Su A., Cai M., Swinburne N., Zech J., Kim J., Bederson J., Mocco J., Drayer B., Lehar J., Cho S., Costa A., Oermann E.K. 2018. Automated deep-neural-network surveillance of cranial images for acute neurologic events. Nature Medicine, 24(9), 1337–1341.
  • 28. Viola P., Jones M. 2001. Rapid object detection using a boosted cascade of simple features. In: Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Kauai, HI, USA: IEEE Comput. Soc, I-511–I-518.
  • 29. Wang C.-Y., Liao H.-Y.M., Wu Y.-H., Chen P.-Y., Hsieh J.-W., Yeh I.-H. 2020. CSPNet: A New Backbone that can Enhance Learning Capability of CNN. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA: IEEE, 1571–1580.
  • 30. Wang L.Y., He Y.P. 2023. Environmental landscape art design based on visual neural network model in rural construction. Ecological Chemistry and Engineering S, 30(2).
YADDA identifier
bwmeta1.element.baztech-ef327452-c066-4172-9697-9894e865f8d5