Article title

A Comparative Analysis of Image Segmentation Using Classical and Deep Learning Approach

Publication languages
EN
Abstracts
EN
Segmentation is one of the image processing techniques widely used in computer vision to extract various types of information represented as objects or areas of interest. The development of neural networks has influenced image processing techniques, including the creation of new ways of image segmentation. The aim of this study is to compare classical algorithms and deep learning methods in RGB image segmentation tasks. Two hypotheses were put forward: 1) "The quality of segmentation of RGB images is higher with deep learning methods than with classical methods", and 2) "Increasing the RGB image resolution has a positive impact on segmentation quality". Two traditional segmentation algorithms (Thresholding and K-means) were compared with three deep learning approaches (U-Net, SegNet and FCN-8) to verify RGB segmentation quality. Two image resolutions were taken into consideration: 160×240 and 320×480 pixels. Segmentation quality for each algorithm was estimated based on four parameters: Accuracy, Precision, Recall and the Sørensen-Dice ratio (Dice score). The study used the Carvana dataset, containing 5,088 high-resolution images of cars. The initial set was divided into training, validation and test subsets of 60%, 20% and 20%, respectively. As a result, the best Accuracy, Dice score and Recall for images with a resolution of 160×240 were obtained for U-Net, achieving 99.37%, 98.56% and 98.93%, respectively. For the same resolution, the highest Precision, 98.19%, was obtained for the FCN-8 architecture. For the higher resolution, 320×480, the best mean Accuracy, Dice score and Precision were obtained for the FCN-8 network, reaching 99.55%, 99.95% and 98.85%, respectively. The highest results for the classical methods were obtained for the Thresholding algorithm, reaching 80.41% Accuracy, 58.49% Dice score, 67.32% Recall and 52.62% Precision. The results confirm both hypotheses.
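The abstract evaluates each method with Accuracy, Precision, Recall and the Dice score computed between predicted and ground-truth masks. The snippet below is a minimal sketch, assuming binary foreground/background masks stored as NumPy arrays, of how these four metrics can be computed, together with a simple global-threshold baseline analogous to the classical Thresholding algorithm compared in the study; the function names and the fixed threshold value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np


def segmentation_metrics(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> dict:
    """Accuracy, Precision, Recall and Dice score for binary masks.

    Both inputs are arrays of the same shape; nonzero/True pixels mark the
    foreground (e.g. the car in the Carvana images).
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)

    tp = np.logical_and(pred, truth).sum()    # foreground correctly predicted
    fp = np.logical_and(pred, ~truth).sum()   # background predicted as foreground
    fn = np.logical_and(~pred, truth).sum()   # foreground missed
    tn = np.logical_and(~pred, ~truth).sum()  # background correctly predicted

    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn + eps),
        "precision": tp / (tp + fp + eps),
        "recall": tp / (tp + fn + eps),
        "dice": 2 * tp / (2 * tp + fp + fn + eps),
    }


def threshold_segment(gray: np.ndarray, level: float = 128.0) -> np.ndarray:
    """Global-threshold baseline: pixels brighter than `level` become foreground."""
    return gray > level
```

Applied to a predicted mask and its ground-truth mask, `segmentation_metrics` yields the same four per-image scores that the study averages over the test subset.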
Authors
  • Faculty of Electrical Engineering and Computer Science, Department of Computer Science, Lublin University of Technology, ul. Nadbystrzycka 38D, 20-618 Lublin, Poland
  • Faculty of Electrical Engineering and Computer Science, Department of Computer Science, Lublin University of Technology, ul. Nadbystrzycka 38D, 20-618 Lublin, Poland
  • Faculty of Electrical Engineering and Computer Science, Department of Computer Science, Lublin University of Technology, ul. Nadbystrzycka 38D, 20-618 Lublin, Poland
References
  • 1. Garcia-Garcia A., Orts-Escolano S., Oprea S., Villena-Martinez V., Martinez-Gonzalez P., Garcia-Rodriguez J. A survey on deep learning techniques for image and video semantic segmentation. Applied Soft Computing 2018; 70: 41-65.
  • 2. Zhang Y.-J. Advances in image and video segmentation. IGI Global, 2006.
  • 3. Stockman G., Shapiro L. G. Computer vision. Prentice Hall PTR, 2001.
  • 4. Li B., Shi Y., Qi Z., Chen Z. A survey on semantic segmentation. In: Proc. 2018 IEEE International Conference on Data Mining Workshops, Singapore 2018, 1233-1240.
  • 5. Hafiz A. M., Bhat G. M. A survey on instance segmentation: state of the art. International Journal of Multimedia Information Retrieval 2020; 9(3): 171-189.
  • 6. Kirillov A., He K., Girshick R., Rother C., Dollár P. Panoptic segmentation. In: Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, USA 2019, 9404-9413.
  • 7. Tilton J. C. Image Segmentation Analysis for NASA Earth Science Applications. Capital Science 2010.
  • 8. Aljabri M., AlGhamdi M. A review on the use of deep learning for medical images segmentation. Neurocomputing 2022.
  • 9. Mousavirad S. J., Ebrahimpour-Komleh H. Image segmentation as an important step in image-based digital technologies in smart cities: a new nature-based approach. Information Innovation Technology in Smart Cities 2018; 75-89.
  • 10. Gonzalez R. C., Woods R. E., Eddins S. L. Digital Image Processing Using Matlab. 2004.
  • 11. Yu H., Yang Z., Tan L., Wang Y., Sun W., Sun M., Tang Y. Methods and datasets on semantic segmentation: A review. Neurocomputing 2018; 304: 82-103.
  • 12. Liu D., Soran B., Petrie G., Shapiro L. A review of computer vision segmentation algorithms. Lecture notes 2012, 53.
  • 13. Smołka J. Fast watershed-based dilation. Advances in Science and Technology Research Journal 2014; 8(23): 41-44.
  • 14. Badrinarayanan V., Kendall A., Cipolla R. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence 2017; 39(12): 2481-2495.
  • 15. Lyu H., Fu H., Hu X., Liu L. ESNet: Edge-based segmentation network for real-time semantic segmentation in traffic scenes. In: Proc. 2019 IEEE International Conference on Image Processing, Taipei, Taiwan 2019, 1855-1859.
  • 16. Karabağ C., Verhoeven J., Miller N. R., Reyes-Aldasoro C. C. Texture segmentation: An objective comparison between five traditional algorithms and a deep-learning U-Net architecture. Applied Sciences 2019; 9(18): 3900.
  • 17. Ozturk O., Saritürk B., Seker D. Z. Comparison of fully convolutional networks (FCN) and U-Net for road segmentation from high resolution imageries. International Journal of Environment and Geoinformatics 2020; 7(3): 272-279.
  • 18. Ahmed I., Ahmad M., Khan F. A., Asif M. Comparison of deep-learning-based segmentation models: Using top view person images. IEEE Access 2020; 8: 136361-136373.
  • 19. King A., Bhandarkar S. M., Hopkinson B. M. A comparison of deep learning methods for semantic segmentation of coral reef survey images. In: Proc. of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA 2018, 1394-1402.
  • 20. Kromp F., Fischer L., Bozsaky E., Ambros I. M., Dörr W., Beiske K., Ambros P. F., Hanbury A., Taschner-Mandl S. Evaluation of deep learning architectures for complex immunofluorescence nuclear image segmentation. IEEE Transactions on Medical Imaging 2021; 40(7): 1934-1949.
  • 21. Erdem F., Avdan U. Comparison of different U-net models for building extraction from high-resolution aerial imagery. International Journal of Environment and Geoinformatics 2020; 7(3): 221-227.
  • 22. Mulindwa D. B., Du S. An n-Sigmoid Activation Function to Improve the Squeeze-and-Excitation for 2D and 3D Deep Networks. Electronics 2023; 12(4): 911.
  • 23. Fang X. Research on the Application of Unet with Convolutional Block Attention Module to Semantic Segmentation Task. In: Proc. of the 2022 5th International Conference on Sensors, Signal and Image Processing, Nanjing, China 2022, 13-16.
  • 24. Shaler B., Gill D., Mark M., McDonald P., Cukierski W. Carvana Image Masking Challenge. Kaggle. https://kaggle.com/competitions/carvana-image-masking-challenge [access September 2023].
  • 25. Xu J., Guo H., Kageza A., Wu S., AlQarni S. Removing Background with Semantic Segmentation Based on Ensemble Learning. In: 2018 11th EAI International Conference on Mobile Multimedia Communications, 187-197.
  • 26. Carvana Image Masking Challenge. https://www.kaggle.com/c/carvana-image-masking-challenge [access 26.05.2023].
  • 27. Carvana Upgrades Its Own Industry-Changing Virtual Vehicle Tool Experience with New Automotive Imaging Technology. https://investors.carvana.com/news-releases/2020/08-19-2020-140123662 [access September 2023].
  • 28. Ronneberger O., Fischer P., Brox T. U-net: Convolutional networks for biomedical image segmentation. In: Proc. 18th International Conference on Medical Image Computing and Computer-Assisted Intervention - MICCAI, Munich, Germany 2015, 234-241.
  • 29. Long J., Shelhamer E., Darrell T. Fully convolutional networks for semantic segmentation. In: Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, Massachusetts 2015, 3431-3440.
  • 30. Simonyan K., Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
  • 31. Badrinarayanan V., Kendall A., Cipolla R. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence 2017; 39(12): 2481-2495.
  • 32. Niu Z., Li H. Research and analysis of threshold segmentation algorithms in image processing. In: Proc. of Journal of Physics: Conference Series, Ningbo, China 2019, 1237(2), 022122.
  • 33. Shan P. Image segmentation method based on K-mean algorithm. EURASIP Journal on Image and Video Processing 2018; (1): 1-9.
  • 34. Qin R., Lv H., Zhang Y., Huangfu L., Huang S. ASDFL: An adaptive super-pixel discriminative feature-selective learning for vehicle matching. Expert Systems 2023; 40(2): e13144.
  • 35. Teng S., Zhang S., Huang Q., Sebe N. Multi-view spatial attention embedding for vehicle re-identification. IEEE Transactions on Circuits and Systems for Video Technology 2020; 31(2): 816-827.
  • 36. Wang Q., Min W., He D., Zou S., Huang T., Zhang Y., Liu R. Discriminative fine-grained network for vehicle re-identification using two-stage re-ranking. Science China Information Sciences 2020; 63(11): 1-12.
  • 37. Wu M., Zhang Y., Zhang T., Zhang W. Background segmentation for vehicle re-identification. In: Proc. of the International Conference on Multimedia Modeling, Daejeon, South Korea 2020, 88-99.
  • 38. Zhu Y., Zha Z.-J., Zhang T., Liu J., Luo J. A structured graph attention network for vehicle reidentification. In: Proc. of the 28th ACM International Conference on Multimedia, Seattle, WA, USA 2020, 646-654.
Notes
Record developed with funds from the Ministry of Education and Science (MNiSW), agreement no. SONP/SP/546092/2022, under the programme "Społeczna odpowiedzialność nauki" (Social Responsibility of Science), module: popularisation of science and promotion of sport (2024).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-d108c24b-763a-4faf-ab6c-de6e5c088eb0