Article title

Mixup (sample pairing) can improve the performance of deep segmentation networks

Content
Identifiers
Title variants
Publication languages
EN
Abstracts
EN
Researchers address the generalization problem of deep image processing networks mainly through extensive use of data augmentation techniques such as random flips, rotations, and deformations. A data augmentation technique called mixup, which constructs virtual training samples from convex combinations of inputs, was recently proposed for deep classification networks. The algorithm contributed to increased classification performance on a variety of datasets, but so far has not been evaluated for image segmentation tasks. In this paper, we tested whether the mixup algorithm can improve the generalization performance of deep segmentation networks for medical image data. We trained a standard U-net architecture to segment the prostate in 100 T2-weighted 3D magnetic resonance images from prostate cancer patients, and compared the results with and without mixup in terms of the Dice similarity coefficient and mean surface distance from a reference segmentation made by an experienced radiologist. Our results suggest that mixup offers a statistically significant boost in performance compared to non-mixup training, leading to up to a 1.9% increase in Dice and a 10.9% decrease in surface distance. The mixup algorithm may thus offer an important aid for medical image segmentation applications, which are typically limited by severe data scarcity.
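The core operations described in the abstract — blending two image/mask pairs by a Beta-sampled convex weight (mixup) and scoring a segmentation against a reference with the Dice similarity coefficient — can be sketched as follows. This is a minimal NumPy illustration; the function names, toy masks, and the choice of α = 0.2 are assumptions for demonstration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup_pair(x1, y1, x2, y2, alpha=0.2):
    """Blend two image/label pairs with a single Beta(alpha, alpha) weight.

    For segmentation, the mixed label y becomes a soft mask in [0, 1]
    rather than a binary one, so the loss must accept soft targets.
    """
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y

def dice_coefficient(pred, ref, eps=1e-8):
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum() + eps)

# Toy 2D slices standing in for a T2-weighted image and its prostate mask.
img_a = rng.random((64, 64))
mask_a = np.zeros((64, 64)); mask_a[16:48, 16:48] = 1.0
img_b = rng.random((64, 64))
mask_b = np.zeros((64, 64)); mask_b[8:40, 8:40] = 1.0

x_mix, y_mix = mixup_pair(img_a, mask_a, img_b, mask_b)
print(x_mix.shape, y_mix.min(), y_mix.max())
print(dice_coefficient(mask_a, mask_b))
```

Each mixed sample uses one λ for both the image and the mask, so spatial correspondence between intensities and labels is preserved; only the label confidence is softened.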
Year
Pages
29–39
Physical description
Bibliography: 28 items, figures.
Authors
  • Division of Radiotherapy, European Institute of Oncology IRCCS, via Ripamonti 435, Milan, Italy
author
  • Division of Radiology, European Institute of Oncology IRCCS, via Ripamonti 435, Milan, Italy
  • Department of Experimental Oncology, European Institute of Oncology IRCCS, via Ripamonti 435, Milan, Italy
author
  • Department of Experimental Oncology, European Institute of Oncology IRCCS, via Ripamonti 435, Milan, Italy
  • Department of Computer Science, University of Warwick, Coventry CV4 7AL, Warwick, United Kingdom
  • Division of Radiotherapy, European Institute of Oncology IRCCS, via Ripamonti 435, Milan, Italy
  • Department of Oncology and Hemato-oncology, University of Milan, via Festa del Perdono 7, Milan, Italy
  • Division of Radiology, European Institute of Oncology IRCCS, via Ripamonti 435, Milan, Italy
  • Department of Oncology and Hemato-oncology, University of Milan, via Festa del Perdono 7, Milan, Italy
author
  • Division of Radiotherapy, European Institute of Oncology IRCCS, via Ripamonti 435, Milan, Italy
  • Division of Radiotherapy, European Institute of Oncology IRCCS, via Ripamonti 435, Milan, Italy
  • Department of Oncology and Hemato-oncology, University of Milan, via Festa del Perdono 7, Milan, Italy
Bibliography
  • [1] O. Ronneberger, P. Fischer, and T. Brox, U-net: Convolutional networks for biomedical image segmentation, in International Conference on Medical image computing and computer-assisted intervention. Springer, 2015, pp. 234–241.
  • [2] G. Litjens, R. Toth, W. van de Ven, C. Hoeks, S. Kerkstra, B. van Ginneken, G. Vincent, G. Guillard, N. Birbeck, J. Zhang et al., Evaluation of prostate segmentation algorithms for MRI: the PROMISE12 challenge, Medical Image Analysis, vol. 18, no. 2, pp. 359–373, 2014.
  • [3] MICCAI challenges, http://www.miccai.org/events/challenges/, 2020, accessed: 2020-08-03.
  • [4] grand-challenge.org challenges, https://grand-challenge.org/challenges/, 2020, accessed: 2020-08-03.
  • [5] R. Cuocolo, A. Comelli, A. Stefano, V. Benfante, N. Dahiya, A. Stanzione, A. Castaldo, D. R. De Lucia, A. Yezzi, and M. Imbriaco, Deep learning whole-gland and zonal prostate segmentation on a public MRI dataset, Journal of Magnetic Resonance Imaging, 2021.
  • [6] A. Comelli, N. Dahiya, A. Stefano, F. Vernuccio, M. Portoghese, G. Cutaia, A. Bruno, G. Salvaggio, and A. Yezzi, Deep learning-based methods for prostate segmentation in magnetic resonance imaging, Applied Sciences, vol. 11, no. 2, p. 782, 2021.
  • [7] M. Penso, S. Moccia, S. Scafuri, G. Muscogiuri, G. Pontone, M. Pepi, and E. G. Caiani, Automated left and right ventricular chamber segmentation in cardiac magnetic resonance images using dense fully convolutional neural network, Computer Methods and Programs in Biomedicine, vol. 204, p. 106059, 2021.
  • [8] Y. Xie, J. Zhang, C. Shen, and Y. Xia, CoTr: Efficiently bridging CNN and Transformer for 3D medical image segmentation, arXiv preprint arXiv:2103.03024, 2021.
  • [9] J. Chen, Y. Lu, Q. Yu, X. Luo, E. Adeli, Y. Wang, L. Lu, A. L. Yuille, and Y. Zhou, Transunet: Transformers make strong encoders for medical image segmentation, arXiv preprint arXiv:2102.04306, 2021.
  • [10] Y. Shu, J. Zhang, B. Xiao, and W. Li, Medical image segmentation based on active fusion-transduction of multi-stream features, Knowledge-Based Systems, vol. 220, p. 106950, 2021.
  • [11] B. Wang, S. Qiu, and H. Huang, Dual encoding U-net for retinal vessel segmentation, Medical Image Computing and Computer Assisted Intervention, vol. 11764, pp. 84–92, 2019.
  • [12] R. Azad, M. Asadi-Aghbolaghi, M. Fathy, and S. Escalera, Bi-directional ConvLSTM U-net with densely connected convolutions, in Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops. IEEE, 2019, pp. 406–415.
  • [13] H. Zhang, M. Cisse, Y. N. Dauphin, and D. Lopez-Paz, mixup: Beyond empirical risk minimization, arXiv preprint arXiv:1710.09412, 2017.
  • [14] Y. Tokozume, Y. Ushiku, and T. Harada, Between-class learning for image classification, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 5486–5494.
  • [15] Y. Tokozume, Y. Ushiku, and T. Harada, Learning from between-class examples for deep sound recognition, arXiv preprint arXiv:1711.10282, 2017.
  • [16] H. Inoue, Data augmentation by pairing samples for images classification, arXiv preprint arXiv:1801.02929, 2018.
  • [17] L. Perez and J. Wang, The effectiveness of data augmentation in image classification using deep learning, arXiv preprint arXiv:1712.04621, 2017.
  • [18] D. Liang, F. Yang, T. Zhang, and P. Yang, Understanding mixup training methods, IEEE Access, vol. 6, pp. 58774–58783, 2018.
  • [19] C. Summers and M. J. Dinneen, Improved mixed-example data augmentation, in 2019 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 2019, pp. 1262–1270.
  • [20] PROMISE12 online challenge leaderboard, https://promise12.grand-challenge.org/evaluation/leaderboard/, 2020, accessed: 2020-08-04.
  • [21] K. He, X. Zhang, S. Ren, and J. Sun, Delving deep into rectifiers: Surpassing human-level performance on imagenet classification, in Proceedings of the IEEE international conference on computer vision, 2015, pp. 1026–1034.
  • [22] D. P. Kingma and J. Ba, Adam: A method for stochastic optimization, arXiv preprint arXiv:1412.6980, 2014.
  • [23] M. Zhang, J. Lucas, J. Ba, and G. E. Hinton, Lookahead optimizer: k steps forward, 1 step back, in Advances in Neural Information Processing Systems, 2019, pp. 9597–9608.
  • [24] Z. Wu, C. Shen, and A. v. d. Hengel, Bridging category-level and instance-level semantic image segmentation, arXiv preprint arXiv:1605.06885, 2016.
  • [25] O. Oktay, J. Schlemper, L. L. Folgoc, M. Lee, M. Heinrich, K. Misawa, K. Mori, S. McDonagh, N. Y. Hammerla, B. Kainz et al., Attention u-net: Learning where to look for the pancreas, arXiv preprint arXiv:1804.03999, 2018.
  • [26] M. Z. Alom, M. Hasan, C. Yakopcic, T. M. Taha, and V. K. Asari, Recurrent residual convolutional neural network based on u-net (r2u-net) for medical image segmentation, arXiv preprint arXiv:1802.06955, 2018.
  • [27] R. R. Shamir, Y. Duchin, J. Kim, G. Sapiro, and N. Harel, Continuous dice coefficient: a method for evaluating probabilistic segmentations, arXiv preprint arXiv:1906.11031, 2019.
  • [28] S. Thulasidasan, G. Chennupati, J. A. Bilmes, T. Bhattacharya, and S. Michalak, On mixup training: Improved calibration and predictive uncertainty for deep neural networks, in Advances in Neural Information Processing Systems, 2019, pp. 13888–13899.
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-35a2ab69-944b-416a-82d7-bbb4f0ed1a5b