Article title

Deep learning for damaged tissue detection and segmentation in Ki-67 brain tumor specimens based on the U-net model

Content
Identifiers
Title variants
Languages of publication
EN
Abstracts
EN
Pathologists follow a systematic, partly manual process to obtain histological tissue sections from the biological tissue extracted from patients. This process is far from perfect and can degrade the quality of the tissue sections, introducing distortions, deformations, folds and tissue breaks. In this paper, we propose a deep learning (DL) method for the detection and segmentation of these damaged regions in whole slide images (WSIs). The proposed technique is based on convolutional neural networks (CNNs) and uses the U-net model to achieve pixel-wise segmentation of these unwanted regions. The results show that the technique performs satisfactorily and can be applied as a pre-processing step in automatic WSI analysis, preventing damaged areas from entering the evaluation process.
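The pre-processing idea described in the abstract — masking out damaged regions so that downstream quantitative analysis uses only intact tissue — can be sketched as follows. This is a minimal illustration using NumPy; the toy probability map, the 0.5 threshold, and the marker array are assumptions for demonstration, not values taken from the paper.

```python
import numpy as np

# Hypothetical per-pixel probabilities of "damaged tissue", as a
# U-net-style segmentation model might produce (values in [0, 1]).
prob_map = np.array([
    [0.10, 0.20, 0.90],
    [0.10, 0.80, 0.95],
    [0.05, 0.10, 0.20],
])

# Threshold the probability map into a binary mask of damaged pixels.
damaged_mask = prob_map > 0.5

# Toy per-pixel marker map (e.g. Ki-67-positive nuclei flags).
ki67_positive = np.array([
    [1, 0, 1],
    [0, 1, 1],
    [0, 1, 0],
])

# Restrict the evaluation to undamaged pixels only.
valid = ~damaged_mask
positives_in_valid = int(ki67_positive[valid].sum())
fraction_positive = positives_in_valid / int(valid.sum())
```

Here three of the nine pixels are flagged as damaged and excluded, so the positive fraction is computed over the remaining six pixels only; without the mask, the two positives inside the damaged region would bias the estimate upward.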
Year
Pages
849‒856
Physical description
Bibliography: 25 items, figures, tables.
Authors
  • Warsaw University of Technology, Faculty of Electrical Engineering, Warsaw, Poland
  • Warsaw University of Technology, Faculty of Electrical Engineering, Warsaw, Poland
  • Military Institute of Medicine, Department of Pathomorphology, Warsaw, Poland
author
  • VISILAB Group, Universidad de Castilla-La Mancha, E.T.S.I.I, Ciudad Real, Spain
author
  • VISILAB Group, Universidad de Castilla-La Mancha, E.T.S.I.I, Ciudad Real, Spain
author
  • Military Institute of Medicine, Department of Pathomorphology, Warsaw, Poland
  • VISILAB Group, Universidad de Castilla-La Mancha, E.T.S.I.I, Ciudad Real, Spain
author
  • Military Institute of Medicine, Department of Pathomorphology, Warsaw, Poland
Bibliography
  • [1] M. Bojarski, P. Yeres, A. Choromanska, K. Choromanski, B. Firner, L. Jackel, and U. Muller, “Explaining How a Deep Neural Network Trained with End-to-End Learning Steers a Car”, arXiv preprint arXiv:1704.07911, (2017).
  • [2] X. Liu, W. Xue, L. Xiao, and B. Zhang, “PBODL: Parallel Bayesian Online Deep Learning for Click-Through Rate Prediction in Tencent Advertising System”, arXiv preprint arXiv:1707.00802, (2017).
  • [3] A. Madabhushi and G. Lee, “Image analysis and machine learning in digital pathology: Challenges and opportunities”, Medical Image Analysis, 33, pp. 170‒175, (2016).
  • [4] G. Litjens, C.I. Sanchez, N. Timofeeva, M. Hermsen, I. Nagtegaal, I. Kovacs, C. Hulsbergen-van de Kaa, P. Bult, B. van Ginneken, and J. van der Laak, “Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis”, Scientific Reports, 6, (2016).
  • [5] G. Bueno, M. Fernandez-Carrobles, O. Deniz, and M. Garcia Rojo, “New trends of emerging technologies in digital pathology”, Pathobiology, (2016).
  • [6] G. Litjens, T. Kooi, B.E. Bejnordi, A.A.A. Setio, F. Ciompi, M. Ghafoorian, and C.I. Sanchez, “A survey on deep learning in medical image analysis”, arXiv preprint arXiv:1702.05747, (2017).
  • [7] J. Isaksson, I. Arvidsson, K. Aastrom, and A. Heyden, “Semantic segmentation of microscopic images of H&E stained prostatic tissue using CNN”, Neural Networks (IJCNN), 1252‒1256, (2017).
  • [8] L. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. Yuille, “Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs”, arXiv preprint arXiv:1606.00915, (2016).
  • [9] H. Dong, G. Yang, F. Liu, Y. Mo, and Y. Guo, “Automatic Brain Tumor Detection and Segmentation Using U-Net Based Fully Convolutional Networks”, arXiv preprint arXiv:1705.03820, (2017).
  • [10] D.X. Xue, R. Zhang, Y.Y. Zhao, J.M. Xu, and Y.L. Wang, “Fully convolutional networks with double-label for esophageal cancer image segmentation by self-transfer learning”, Ninth International Conference on Digital Image Processing (ICDIP 2017), (2017).
  • [11] P. Naylor, M. Lae, F. Reyal, and T. Walter, “Nuclei segmentation in histopathology images using deep neural networks”, Biomedical Imaging (ISBI 2017), 933‒936, (2017).
  • [12] A. BenTaieb, J. Kawahara, and G. Hamarneh, “Multi-loss convolutional networks for gland analysis in microscopy”, Biomedical Imaging (ISBI), 642‒645, (2016).
  • [13] J. Gallego, A. Pedraza, S. Lopez, G. Steiner, L. Gonzalez, A. Laurinavicius, and G. Bueno, “Glomerulus Classification and Detection Based on Convolutional Neural Networks”, Journal of Imaging, 4(1), 20, (2018).
  • [14] A. Krizhevsky, I. Sutskever, and G. Hinton, “Imagenet classification with deep convolutional neural networks”, Proceedings of the Advances in Neural Information Processing Systems, pp. 1097–1105, (2012).
  • [15] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, and A. Rabinovich, “Going deeper with convolutions”, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9, (2015).
  • [16] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation”, In International Conference on Medical Image Computing and Computer- Assisted Intervention, 234‒241, (2015).
  • [17] N. Smith and C. Womack, “A matrix approach to guide IHCbased tissue biomarker development in oncology drug discovery”, J. Pathol., 232(2), pp. 190–198, (2014).
  • [18] S. Kothari, J.H. Phan, T.H. Stokes, and M.D. Wang, “Pathology imaging informatics for quantitative analysis of whole-slide images”, JAMIA (J. Am. Med. Inf. Assoc.), 20(6), pp. 1099–1108, (2013).
  • [19] F. Chollet and others, “Keras”, GitHub, https://github.com/keras-team/keras, (2015).
  • [20] M. Abadi et al., “TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems”, Software available from tensorflow.org, (2015).
  • [21] Z. Swiderska-Chadaj, T. Markiewicz, B. Grala, M. Lorent, and A. Gertych, “A Deep Learning Pipeline to Delineate Proliferative Areas of Intracranial Tumors in Digital Slides”, Annual Conference on Medical Image Understanding and Analysis (pp. 448‒458). Springer, Cham, 2017.
  • [22] N. Otsu, “A threshold selection method from gray-level histograms”, IEEE Systems, Man, and Cybernetics Society, 9(1), pp. 62–66, (1979).
  • [23] S. Kothari, J.H Phan, and M.D. Wang, “Eliminating tissue-fold artifacts in histopathological whole-slide images for improved image-based prediction of cancer grade”, Journal of pathology informatics, 4, (2013).
  • [24] J.P. Johnson, E.A. Krupinski, M. Yan, H. Roehrig, A.R. Graham, and R.S. Weinstein, “Using a visual discrimination model for the detection of compression artifacts in virtual pathology images”, IEEE Transactions on Medical Imaging, 30(2), 306‒314, (2011).
  • [25] H. Wu, J.H. Phan, A.K. Bhatia, C.A. Cundiff, B.M. Shehata, and M.D. Wang, “Detection of blur artifacts in histopathological whole-slide images of endomyocardial biopsies”, 37th Annual International Conference of the IEEE, pp. 727–730, (2015).
Notes
Record developed under agreement 509/P-DUN/2018 from funds of the Ministry of Science and Higher Education (MNiSW) allocated to science dissemination activities (2019).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-269c3aad-4f97-4363-8f10-3797af9c7c8c