Article title

Mediastinal lymph node malignancy detection in computed tomography images using fully convolutional network

Identifiers
Title variants
Publication languages
EN
Abstracts
EN
Differential diagnosis of malignant and benign mediastinal lymph nodes (LNs) through invasive pathological tests is a complex and painful procedure because of the intricate anatomical locations of LNs in the chest. Image-based automatic machine learning techniques have been attempted in the past for malignancy detection, but these conventional methods suffer from the complexity of selecting hand-crafted features and the resulting trade-offs between performance parameters. Deep learning approaches now outperform conventional machine learning techniques and can overcome these issues. However, existing convolutional neural network (CNN) based models are also prone to overfitting because of their fully connected (FC) layers. Therefore, in this paper the authors propose a fully convolutional network (FCN) based deep learning model for lymph node malignancy detection in computed tomography (CT) images. The proposed FCN is customized with batch normalization, to accelerate training, and the Leaky ReLU activation function, to overcome the dying-ReLU problem. The performance of the proposed FCN on a small dataset is further improved using data augmentation methods. The generalization of the proposed model is tested by varying the network parameters, and its reliability is assessed by comparison with related state-of-the-art deep learning networks. The proposed FCN model achieved an average accuracy, sensitivity, specificity, and area under the curve of 90.28%, 90.63%, 89.95%, and 0.90, respectively. Our results also confirm the usefulness of augmentation methods when applying deep learning approaches to smaller datasets.
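As an illustrative sketch of the approach described in the abstract, the snippet below builds a small fully convolutional classifier from Conv-BatchNorm-LeakyReLU blocks, with a 1x1 convolution and global average pooling in place of fully connected layers, and adds simple data augmentation for small datasets. It uses the Keras/TensorFlow stack cited in the bibliography ([43], [44]); the patch size, filter counts, augmentation ranges, and training settings are assumptions for illustration, not the authors' exact configuration.

import tensorflow as tf
from tensorflow.keras import layers, models

def conv_block(x, filters):
    # Convolution without bias (batch norm supplies the shift), then BN and Leaky ReLU
    x = layers.Conv2D(filters, 3, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)            # accelerates training
    x = layers.LeakyReLU(alpha=0.1)(x)            # small negative slope avoids "dying ReLU"
    return layers.MaxPooling2D(2)(x)

inputs = tf.keras.Input(shape=(64, 64, 1))        # single-channel CT patch (assumed size)
x = conv_block(inputs, 32)
x = conv_block(x, 64)
x = conv_block(x, 128)
x = layers.Conv2D(2, 1)(x)                        # 1x1 convolution replaces the FC layer
x = layers.GlobalAveragePooling2D()(x)            # spatial average -> per-class scores
outputs = layers.Softmax()(x)                     # benign vs. malignant probabilities
model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Augmentation for small datasets: random rotations, shifts, and flips of the patches
augmenter = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=15, width_shift_range=0.1,
    height_shift_range=0.1, horizontal_flip=True)
# model.fit(augmenter.flow(x_train, y_train, batch_size=32), epochs=50)
# (x_train, y_train are hypothetical user-provided patches and one-hot labels)

Replacing the dense head with a 1x1 convolution plus global average pooling keeps the parameter count low, which is the main way an FCN of this kind reduces the overfitting risk associated with FC layers.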
Authors
  • Department of Electronics & Telecommunication, National Institute of Technology Raipur, India
author
  • Department of Electronics & Telecommunication, National Institute of Technology Raipur, India
  • Department of Electrical Engineering, NIT Raipur, Chhattisgarh 492010, India
Bibliography
  • [1] Karaman S, Detmar M. Mechanisms of lymphatic metastasis. J Clin Invest 2014;124:922–8. http://dx.doi.org/10.1172/JCI71606.922.
  • [2] Sleeman JP. The lymph node as a bridgehead in the metastatic dissemination of tumors. In: Schlag PM, Veronesi U, editors. Lymphatic metastasis and sentinel lymphonodectomy. Recent results in cancer research. Berlin, Heidelberg: Springer; 2000. p. 55–81. http://dx.doi.org/10.1007/978-3-642-57151-0_6.
  • [3] Singh Malik P, Raina V. Lung cancer: prevalent trends & emerging concepts. Int Agency Res Cancer Indian Counc Med Res 2012;2009–11. http://dx.doi.org/10.4103/0971-5916.154479.
  • [4] Toloza EM, Harpole L, Detterbeck F, McCrory DC. Invasive staging of non-small cell lung cancer: a review of the current evidence. Chest 2003;123:157S–66S.
  • [5] Pak K, Kim K, Kim M, Eom JS, Lee MK, Cho JS, et al. A decision tree model for predicting mediastinal lymph node metastasis in non-small cell lung cancer with F-18 FDG PET/CT; 2018;1–8. http://dx.doi.org/10.1371/journal.pone.0193403.
  • [6] Liu J, Zhao J, Hoffman J, Yao J, Lu L, Turkbey EB, et al. Detection and station mapping of mediastinal lymph nodes on thoracic computed tomography using spatial prior from multi-atlas label fusion. IEEE 11th Int. Symp. Biomed. Imaging. 2014. pp. 1107–10. http://dx.doi.org/10.1109/ISBI.2014.6868068.
  • [7] Christensen JD, Tong BC. Computed tomography screening for lung cancer: where are we now? N C Med J 2013;74:406–10.
  • [8] Bayanati H, Thornhill RE, Souza CA, Sethi-Virmani V, Gupta A, Maziak D, et al. Quantitative CT texture and shape analysis: can it differentiate benign and malignant mediastinal lymph nodes in patients with primary lung cancer? Eur Radiol 2014;25:480–7. http://dx.doi.org/10.1007/s00330-014-3420-6.
  • [9] Andersen MB, Harders SW, Ganeshan B, Thygesen J, Madsen HHT, Rasmussen F. CT texture analysis can help differentiate between malignant and benign lymph nodes in the mediastinum in patients suspected for lung cancer. Acta Radiol 2016;57:669–76. http://dx.doi.org/10.1177/0284185115598808.
  • [10] Pham TD, Watanabe Y, Higuchi M, Suzuki H. Texture analysis and synthesis of malignant and benign mediastinal lymph nodes in patients with lung cancer on computed tomography. Sci Rep 2017;7:1–10. http://dx.doi.org/10.1038/srep43209.
  • [11] Feulner J, Kevin Zhou S, Hammon M, Hornegger J, Comaniciu D. Lymph node detection and segmentation in chest CT data using discriminative learning and a spatial prior. Med Image Anal 2013;17:254–70. http://dx.doi.org/10.1016/j.media.2012.11.001.
  • [12] Wang H, Zhou Z, Li Y, Chen Z, Lu P, Wang W, et al. Comparison of machine learning methods for classifying mediastinal lymph node metastasis of non-small cell lung cancer from 18F-FDG PET/CT images. EJNMMI Res 2017;7. http://dx.doi.org/10.1186/s13550-017-0260-9.
  • [13] Pham TD. Complementary features for radiomic analysis of malignant and benign mediastinal lymph nodes. IEEE Int. Conf. Image Process.; 2017. pp. 3849–53.
  • [14] Tekchandani H, Verma S, Londhe ND. Severity assessment of lymph nodes in CT images using deep learning paradigm. IEEE Int. Conf. Comput. Methodol. Commun.; 2018.
  • [15] Greenspan H, van Ginneken B, Summers RM. Guest editorial deep learning in medical imaging: overview and future promise of an exciting new technique. IEEE Trans Med Imaging 2016;35:1153–9. http://dx.doi.org/10.1109/TMI.2016.2553401.
  • [16] LeCun Y, Haffner P, Bottou L, Bengio Y. Object recognition with gradient-based learning. Featur. Group; 1999. p. 319–20. http://dx.doi.org/10.1007/3-540-46805-6_19.
  • [17] Lin M, Chen Q, Yan S. Network in network; 2013;1–10. http://dx.doi.org/10.1109/ASRU.2015.7404828.
  • [18] Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation. Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit.. 2015. pp. 3431–40. http://dx.doi.org/10.1109/CVPR.2015.7298965.
  • [19] CS231n: Convolutional neural networks for visual recognition (n.d.). http://cs231n.github.io/convolutional-networks/#conv.
  • [20] Zhou B, Khosla A, Lapedriza A, Oliva A, Torralba A. Learning deep features for discriminative localization. IEEE Conf. Comput. Vis. Pattern Recognit.. 2015. pp. 2921–9. http://dx.doi.org/10.1109/CVPR.2016.319.
  • [21] Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst.. 2012. pp. 1–9. http://dx.doi.org/10.1016/j.protcy.2014.09.007.
  • [22] Dumoulin V, Visin F. A guide to convolution arithmetic for deep learning; 2016, http://arxiv.org/abs/1603.07285.
  • [23] Glorot X, Bengio Y. Understanding the difficulty of training deep feedforward neural networks. PMLR 2010;9:249–56. doi:10.1.1.207.2059.
  • [24] He K, Zhang X, Ren S, Sun J. Delving deep into rectifiers: surpassing human-level performance on ImageNet classification; 2015, http://arxiv.org/abs/1502.01852.
  • [25] LeCun YA, Bottou L, Orr GB, Müller KR. Efficient backprop. Lect. Notes Comput. Sci. (Including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics). 7700 LECTU. 2012. pp. 9–48. http://dx.doi.org/10.1007/978-3-642-35289-8-3.
  • [26] Ioffe S, Szegedy C. Batch normalization: accelerating deep network training by reducing internal covariate shift. Comput Sci 2015;37. http://dx.doi.org/10.1007/s13398-014-0173-7.2.
  • [27] Nair V, Hinton GE. Rectified linear units improve restricted Boltzmann machines. Proc. 27th Int. Conf. Mach. Learn.. 2010. pp. 807–14. doi:10.1.1.165.6419.
  • [28] Xu B, Wang N, Chen T, Li M. Empirical evaluation of rectified activations in convolution network. ICML Deep Learn. Work.. 2015. pp. 1–5.
  • [29] Maas AL, Hannun AY, Ng AY. Rectifier nonlinearities improve neural network acoustic models. Proc 30th Int Conf Mach Learn 2013;28:6. https://web.stanford.edu/~awni/papers/relu_hybrid_icml2013_final.pdf.
  • [30] Scherer D, Müller A, Behnke S. Evaluation of pooling operations in convolutional architectures for object recognition. Lect. Notes Comput. Sci. (Including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics). 6354 LNCS. 2010. pp. 92–101. http://dx.doi.org/10.1007/978-3-642-15825-4_10.
  • [31] Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R. Dropout: a simple way to prevent neural networks from overfitting. J Mach Learn Res 2014;15:1929– 58. http://dx.doi.org/10.1214/12-AOS1000.
  • [32] Bengio Y. Practical recommendations for gradient-based training of deep architectures. Lect. Notes Comput. Sci. (Including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics). 7700 LECTU. 2012. pp. 437–78. http://dx.doi.org/10.1007/978-3-642-35289-8-26.
  • [33] Hinton GE, Srivastava N, Swersky K. Lecture 6a – overview of mini-batch gradient descent. COURSERA Neural Networks Mach. Learn.. 2012. p. 31. http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf.
  • [34] Kingma DP, Ba J. Adam: a method for stochastic optimization; 2014;1–15. http://dx.doi.org/10.1145/1830483.1830503.
  • [35] Sutskever I, Martens J, Dahl G, Hinton G. On the importance of initialization and momentum in deep learning. ICASSP, IEEE Int. Conf. Acoust. Speech Signal Process. – Proc. 2013. pp. 8609–13. http://dx.doi.org/10.1109/ICASSP.2013.6639346.
  • [36] Goodfellow I, Bengio Y, Courville A. Deep learning; 2016;164–220.
  • [37] Johnson RW, Shore JE. Comments on and correction to "Axiomatic derivation of the principle of maximum entropy and the principle of minimum cross-entropy". IEEE Trans Inf Theory 1983;29:942–3. http://dx.doi.org/10.1109/TIT.1983.1056747.
  • [38] Cho J, Lee K, Shin E, Choy G, Do S. How much data is needed to train a medical image deep learning system to achieve necessary high accuracy?; 2015, http://arxiv.org/abs/1511.06348.
  • [39] Tanner MA, Wong WH. The calculation of posterior distributions by data augmentation. J Am Stat Assoc 1987;82:528–40. http://dx.doi.org/10.2307/2289457.
  • [40] Agu E. Computer Science Dept. Worcester Polytechnic Institute (WPI); 2014, https://web.cs.wpi.edu/~emmanuel/courses/cs545/S14/slides/lecture11.pdf.
  • [41] Kanpur IIT. Image sharpening. NPTEL; 2009, https://nptel.ac.in/courses/117104069/chapter_8/8_32.html.
  • [42] Hussain Z, Gimenez F, Yi D, Rubin D. Differential data augmentation techniques for medical imaging classification tasks. AMIA Annu. Symp. Proc.; 2017. p. 979–84. http://www.ncbi.nlm.nih.gov/pubmed/29854165.
  • [43] Keras: The Python Deep Learning Library (n.d.). https://keras.io/.
  • [44] Abadi M, Barham P, Chen J, Chen Z, Davis A, Dean J, et al. TensorFlow: a system for large-scale machine learning. 12th USENIX Symp. Oper. Syst. Des. Implement. (OSDI 16). 2016. pp. 265–83. https://www.usenix.org/system/files/conference/osdi16/osdi16-abadi.pdf.
  • [45] Setio AAA, Ciompi F, Litjens G, Gerke P, Jacobs C, van Riel SJ, et al. Pulmonary nodule detection in CT images: false positive reduction using multi-view convolutional networks. IEEE Trans Med Imaging 2016;35:1160–9. http://dx.doi.org/10.1109/TMI.2016.2536809.
  • [46] Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition; 2014, http://arxiv.org/abs/1409.1556.
Notes
PL
Record prepared with funds of the Ministry of Science and Higher Education (MNiSW), agreement No. 461252, under the programme "Społeczna odpowiedzialność nauki" (Social Responsibility of Science) - module: Popularisation of science and promotion of sport (2020).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-3069e0a4-1473-4f60-81f0-3e2da972ef63