2022 | Vol. 42, no. 1 | 204--214
Article title

SVIA dataset: A new dataset of microscopic videos and images for computer-aided sperm analysis

Title variants
Publication languages
EN
Abstracts
EN
Computer-Aided Sperm Analysis (CASA) is a widely studied topic in the diagnosis and treatment of male reproductive health. Although CASA has been evolving, publicly available large-scale image datasets for CASA are still lacking. To fill this gap, we provide the Sperm Videos and Images Analysis (SVIA) dataset, comprising three subsets, subset-A, subset-B and subset-C, for testing and evaluating different computer vision techniques in CASA. For subset-A, to test and evaluate the effectiveness of the SVIA dataset for object detection, we use five representative object detection models and four commonly used evaluation metrics. For subset-B, to test and evaluate its effectiveness for image segmentation, we use eight representative methods and three standard evaluation metrics. Moreover, to test and evaluate its effectiveness for object tracking, we employ the traditional kNN with progressive sperm (PR) as an evaluation metric and two deep learning models with three standard evaluation metrics. For subset-C, to demonstrate its effectiveness for image denoising, nine denoising filters are used to remove thirteen kinds of noise, and the mean structural similarity is calculated for evaluation. At the same time, to test and evaluate its effectiveness for image classification, we evaluate the results of twelve convolutional neural network models and six visual transformer models using four commonly used evaluation metrics. Through a series of experimental analyses and comparisons, it can be concluded that the proposed dataset can evaluate not only object detection, image segmentation, object tracking, image denoising, and image classification, but also the robustness of object detection and image classification models. Therefore, the SVIA dataset can fill the lack of large-scale public datasets in CASA and promote the development of CASA.
The dataset is available at: https://github.com/Demozsj/Detection-Sperm
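The abstract states that mean structural similarity is the evaluation metric for the denoising experiments on subset-C. As an illustration only, not the authors' code, a single-window (global) variant of the SSIM index can be sketched in Python with NumPy; the function name `global_ssim` and the whole-image-as-one-window simplification are assumptions here (the standard SSIM averages the index over local sliding windows).

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Single-window (global) SSIM between two grayscale images.

    Simplified illustration: standard SSIM averages the index over
    local sliding windows; here the whole image is one window.
    """
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    c1 = (0.01 * data_range) ** 2  # stabilizes the luminance term
    c2 = (0.03 * data_range) ** 2  # stabilizes the contrast term
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# An image scores exactly 1.0 against itself; added noise lowers the score.
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = np.clip(clean + rng.normal(0, 25, size=clean.shape), 0, 255)
print(global_ssim(clean, clean))          # 1.0
print(global_ssim(clean, noisy) < 1.0)    # True
```

In practice one would use a windowed implementation such as `skimage.metrics.structural_similarity` and average it over the test set to obtain the mean structural similarity reported per filter and noise type.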
Publisher

Year
Pages
204--214
Physical description
Bibliography: 52 items; figures, tables.
Authors
author
  • Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
author
  • Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China, lichen@bmie.neu.edu.cn
author
  • Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
author
  • Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ, USA
  • Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
author
  • Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
author
  • Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
author
  • Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
author
  • Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
  • Institute of Medical Informatics, University of Luebeck, Luebeck, Germany
Document type
Bibliography
Identifiers
YADDA identifier
bwmeta1.element.baztech-d313e4b9-df2d-4e1a-ae5f-c50e8349769a