Article title

Context-based segmentation of the longissimus muscle in beef with a deep neural network

Publication languages
EN
Abstracts
EN
The problem of segmenting the cross-section through the longissimus muscle in beef carcasses with computer vision methods was investigated. The available data were 111 images of cross-sections coming from 28 cows (typically four images per cow). The training data were the pixels of the muscles, marked manually. The AlexNet deep convolutional neural network was used as the classifier, and single pixels were the classified objects. Each pixel was presented to the network together with its small circular neighbourhood and with its context, represented by the further neighbourhood darkened by halving the image intensity. The average classification accuracy was 96%. The accuracy without darkening the context was found to be smaller, with a small but statistically significant difference. The segmentation of the longissimus muscle is the introductory stage for the subsequent steps of assessing the quality of beef for alimentary purposes.
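The abstract describes the input construction for the classifier: the pixel's small circular neighbourhood keeps its original intensity, while the further neighbourhood that represents the context is darkened by halving the intensity. The sketch below illustrates one way such a patch could be built; it is a minimal illustration only, and the radii, padding mode, and patch size are assumptions, since the abstract gives no numerical values.

```python
import numpy as np

def make_context_patch(image, row, col, inner_radius=15, outer_radius=45):
    """Build an input patch for classifying a single pixel.

    The pixel's circular neighbourhood (radius `inner_radius`) keeps its
    original intensity; the surrounding context up to `outer_radius` is
    darkened by halving the intensity, as described in the abstract.
    The radii and the reflective padding are illustrative assumptions,
    not values taken from the paper.
    """
    r = outer_radius
    # Pad so that patches centred near the image border are full-sized.
    padded = np.pad(image, ((r, r), (r, r), (0, 0)), mode="reflect")
    patch = padded[row:row + 2 * r + 1, col:col + 2 * r + 1].astype(np.float32)

    # Distance of every patch pixel from the centre pixel.
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    dist = np.sqrt(yy ** 2 + xx ** 2)

    # Halve the intensity of the context, i.e. everything outside the inner circle.
    patch[dist > inner_radius] *= 0.5
    return patch

# Example on a random stand-in image; in training, such patches would be
# resized to the network's expected input size (227x227 for AlexNet).
img = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
patch = make_context_patch(img, 120, 200)
print(patch.shape)  # (91, 91, 3) for outer_radius=45
```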
Authors
  • Institute of Human Nutrition Sciences, Warsaw University of Life Sciences – SGGW, Poland
  • Institute of Information Technology, Warsaw University of Life Sciences – SGGW, Poland
  • Institute of Information Technology, Warsaw University of Life Sciences – SGGW, Poland
  • Institute of Information Technology, Warsaw University of Life Sciences – SGGW, Poland
  • Institute of Human Nutrition Sciences, Warsaw University of Life Sciences – SGGW, Poland
  • Institute of Information Technology, Warsaw University of Life Sciences – SGGW, Poland
  • Institute of Information Technology, Warsaw University of Life Sciences – SGGW, Poland
  • Institute of Information Technology, Warsaw University of Life Sciences – SGGW, Poland
  • Institute of Information Technology, Warsaw University of Life Sciences – SGGW, Poland
  • Institute of Information Technology, Warsaw University of Life Sciences – SGGW, Poland
Remarks
The record was prepared with funds of the Ministry of Science and Higher Education (MNiSW), agreement No. 461252, under the programme "Social Responsibility of Science" – module: Popularisation of science and promotion of sport (2020).
YADDA identifier
bwmeta1.element.baztech-682a3520-750b-4297-930b-ebe96371a1cd