Publication languages
EN
Abstracts
EN
For convolutional neural networks (CNNs) applied to the intelligent diagnosis of gastric cancer, existing methods mostly focus on individual characteristics or network frameworks and lack a policy for depicting integral information. Notably, the conditional random field (CRF), an efficient and stable algorithm for analyzing images with complicated content, can characterize spatial relations in images. In this paper, a novel hierarchical conditional random field (HCRF) based gastric histopathology image segmentation (GHIS) method is proposed, which can automatically localize abnormal (cancerous) regions in gastric histopathology images obtained with an optical microscope, assisting histopathologists in their medical work. The HCRF model is built with higher-order potentials, including pixel-level and patch-level potentials, and graph-based post-processing is applied to further improve its segmentation performance. In particular, one CNN is trained to build the pixel-level potentials, and another three CNNs are fine-tuned to build the patch-level potentials, providing sufficient spatial segmentation information. In the experiment, a hematoxylin and eosin (H&E) stained gastric histopathological dataset containing 560 abnormal images is divided into training, validation, and test sets at a ratio of 1 : 1 : 2. Finally, a segmentation accuracy, recall, and specificity of 78.91%, 65.59%, and 81.33%, respectively, are achieved on the test set. The HCRF model demonstrates high segmentation performance, showing its effectiveness and future potential in the GHIS field.
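The accuracy, recall, and specificity reported above are standard pixel-level measures for binary segmentation. A minimal sketch of how such metrics can be computed from predicted and ground-truth masks is shown below, assuming NumPy boolean arrays; the function name `segmentation_metrics` is illustrative and not taken from the paper.

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Pixel-level accuracy, recall (sensitivity), and specificity
    for binary masks, where True marks abnormal (cancerous) pixels."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    tp = np.logical_and(pred, gt).sum()       # abnormal pixels found
    tn = np.logical_and(~pred, ~gt).sum()     # normal pixels kept
    fp = np.logical_and(pred, ~gt).sum()      # false alarms
    fn = np.logical_and(~pred, gt).sum()      # missed abnormal pixels
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, recall, specificity
```

Recall measures how much of the abnormal region is recovered, while specificity measures how well normal tissue is left unmarked; reporting both guards against a segmenter that trivially labels everything abnormal.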
Authors
author
  • Microscopic Image and Medical Image Analysis Group, MBIE College, Northeastern University, Shenyang 110819, PR China; Engineering Research Center of Medical Imaging and Intelligent Analysis, Ministry of Education, Northeastern University, Shenyang 110819, PR China
author
  • Microscopic Image and Medical Image Analysis Group, MBIE College, Northeastern University, Shenyang 110819, PR China; Engineering Research Center of Medical Imaging and Intelligent Analysis, Ministry of Education, Northeastern University, Shenyang 110819, PR China
  • Microscopic Image and Medical Image Analysis Group, MBIE College, Northeastern University, Shenyang 110819, PR China; Engineering Research Center of Medical Imaging and Intelligent Analysis, Ministry of Education, Northeastern University, Shenyang 110819, PR China
  • Microscopic Image and Medical Image Analysis Group, MBIE College, Northeastern University, Shenyang 110819, PR China; Engineering Research Center of Medical Imaging and Intelligent Analysis, Ministry of Education, Northeastern University, Shenyang 110819, PR China
author
  • Microscopic Image and Medical Image Analysis Group, MBIE College, Northeastern University, Shenyang 110819, PR China; Engineering Research Center of Medical Imaging and Intelligent Analysis, Ministry of Education, Northeastern University, Shenyang 110819, PR China
author
  • Microscopic Image and Medical Image Analysis Group, MBIE College, Northeastern University, Shenyang 110819, PR China; Engineering Research Center of Medical Imaging and Intelligent Analysis, Ministry of Education, Northeastern University, Shenyang 110819, PR China; School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, PR China
author
  • Microscopic Image and Medical Image Analysis Group, MBIE College, Northeastern University, Shenyang 110819, PR China; Engineering Research Center of Medical Imaging and Intelligent Analysis, Ministry of Education, Northeastern University, Shenyang 110819, PR China
author
  • Microscopic Image and Medical Image Analysis Group, MBIE College, Northeastern University, Shenyang 110819, PR China; Engineering Research Center of Medical Imaging and Intelligent Analysis, Ministry of Education, Northeastern University, Shenyang 110819, PR China
author
  • Department of Pathology, Cancer Hospital, China Medical University, Liaoning Cancer Hospital and Institute, Shenyang 110042, PR China
author
  • Control Engineering College, Chengdu University of Information Technology, Chengdu 610103, PR China
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-a224c978-1a9d-4314-a321-a1ffa43195b7