Article title
Identifiers
Title variants
Publication languages
Abstracts
Urine sediment examination (USE) is an important topic in kidney disease analysis and is often a prerequisite for subsequent diagnostic procedures. We propose DFPN (a Feature Pyramid Network with DenseNet) to overcome the class-confusion problem in USE images, which is hard to solve with the baseline model, the state-of-the-art object detector FPN with RoIAlign pooling. We explored the importance of two parts of the baseline model for USE cell detection. First, we added an attention module to the network head; the class-specific attention module improved mAP by 0.7 points with an ImageNet-pretrained model and by 1.4 points with a COCO-pretrained model. Next, we introduced DenseNet into the baseline model (DFPN), so that the input to the network head carries multiple levels of semantic information, whereas the baseline model provides only high-level semantic information. After balancing the classification loss against the bounding-box regression loss, DFPN achieves the top result with a mAP of 86.9% on the USE test set, an improvement of 5.6 points over the baseline model; in particular, the erythrocyte AP improves greatly from 65.4% to 93.8%, indicating that the class confusion has been largely resolved. We also explore the impact of the training schedule and the pretrained model. Our method is promising for the development of automated USE.
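The abstract attributes part of DFPN's gain to balancing the classification loss against the bounding-box regression loss. A minimal sketch of such a weighted two-term detection loss is shown below; the exact loss form and the `box_weight` hyperparameter are illustrative assumptions, not values reported by the paper.

```python
# Hypothetical sketch of a balanced two-task detection loss:
# total = L_cls + box_weight * L_box. The smooth-L1 box loss and the
# box_weight value are assumptions for illustration, not the paper's
# exact formulation.
import math

def smooth_l1(pred, target, beta=1.0):
    """Smooth L1 (Huber-style) loss commonly used for box regression."""
    d = abs(pred - target)
    return 0.5 * d * d / beta if d < beta else d - 0.5 * beta

def cross_entropy(probs, label):
    """Negative log-likelihood of the true class."""
    return -math.log(probs[label])

def detection_loss(cls_probs, label, box_pred, box_target, box_weight=1.0):
    """Balanced total loss: L = L_cls + box_weight * L_box."""
    l_cls = cross_entropy(cls_probs, label)
    l_box = sum(smooth_l1(p, t) for p, t in zip(box_pred, box_target))
    return l_cls + box_weight * l_box
```

Tuning `box_weight` shifts optimization pressure between recognizing the cell class (the source of the erythrocyte confusion) and localizing it precisely.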
Publisher
Journal
Year
Volume
Pages
661–670
Physical description
Bibliography: 22 items; figures, tables, charts.
Authors
author
- School of Information Science and Engineering, Central South University, Changsha, Hunan, China
author
- School of Information Science and Engineering, Central South University, Changsha, Hunan, China
author
- School of Information Science and Engineering, Central South University, Changsha, Hunan, China
author
- School of Information Science and Engineering, Central South University, Changsha, Hunan, China
Bibliography
- [1] Lamchiagdhase P, Preechaborisutkul K, Lomsomboon P, Srisuchart P, Tantiniti P, Khanura N, Preechaborisutkul B. Urine sediment examination: a comparison between the manual method and the iq200 automated urine microscopy analyzer. Clin Chim Acta 2005;358(1–2):167.
- [2] Verdesca S, Brambilla C, Garigali G, Croci MD, Messa P, Fogazzi GB. How a skilful and motivated urinary sediment examination can save the kidneys. Nephrol Dial Transplant 2007;22(6):1778–81.
- [3] Winkel P, Statland BE, Jorgensen K. Urine microscopy, an ill-defined method, examined by a multifactorial technique. Clin Chem 1974;20(4):436–9.
- [4] Gadeholt H. Quantitative estimation of urinary sediment, with special regard to sources of error. Br Med J 1964;1(5397):1547–9.
- [5] Li C, Xu C, Gui C, Fox MD. Distance regularized level set evolution and its application to image segmentation. IEEE Trans Image Process 2010;19(12):3243.
- [6] Xu C, Prince JL. Snakes, shapes, and gradient vector flow. IEEE Trans Image Process 1998;7(3):359.
- [7] Mahmood NH, Mansor MA. Red blood cells estimation using Hough transform technique. Signal Image Process 2012;3(2):53.
- [8] Ren S, He K, Girshick R, Sun J. Faster r-cnn: towards real-time object detection with region proposal networks. International Conference on Neural Information Processing Systems. 2015. pp. 91–9.
- [9] Lin TY, Dollar P, Girshick R, He K, Hariharan B, Belongie S. Feature pyramid networks for object detection. IEEE Conference on Computer Vision and Pattern Recognition. 2017. pp. 936–44.
- [10] Girshick R. Fast r-cnn. IEEE International Conference on Computer Vision. 2015. pp. 1440–8.
- [11] He K, Gkioxari G, Dollár P, Girshick R. Mask r-cnn; 2017.
- [12] He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. IEEE Conference on Computer Vision and Pattern Recognition. 2016. pp. 770–8.
- [13] Huang G, Liu Z, Weinberger KQ. Densely connected convolutional networks. IEEE Conference on Computer Vision and Pattern Recognition; 2017.
- [14] Liu F, Yang L. A novel cell detection method using deep convolutional neural network and maximum-weight independent set. Springer International Publishing; 2015.
- [15] Girshick R, Donahue J, Darrell T, Malik J. Rich feature hierarchies for accurate object detection and semantic segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2014. p. 580–7.
- [16] Uijlings JRR, Sande KEAVD, Gevers T, Smeulders AWM. Selective search for object recognition. Int J Comput Vis 2013;104(2):154–71.
- [17] Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser L, Polosukhin I. Attention is all you need; 2017.
- [18] Xu K, Ba J, Kiros R, Cho K, Courville A, Salakhutdinov R, Zemel R, Bengio Y. Show, attend and tell: neural image caption generation with visual attention. Comput Sci 2015;2048–57.
- [19] Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, Huang Z, Karpathy A, Khosla A, Bernstein M. Imagenet large scale visual recognition challenge. Int J Comput Vis 2015;115(3):211–52.
- [20] Douze M, Sandhawalia H, Amsaleg L, Schmid C. Evaluation of gist descriptors for web-scale image search. ACM International Conference on Image and Video Retrieval. 2009. pp. 1–8.
- [21] Lockwood W. The complete urinalysis and urine tests, RN. ORG; 2015.
- [22] Lin TY, Maire M, Belongie S, Hays J, Perona P, Ramanan D, Dollár P, Zitnick CL. Microsoft coco: Common objects in context. European Conference on Computer Vision, vol. 8693. 2014. pp. 740–55.
Notes
PL
Record compiled under agreement 509/P-DUN/2018 from funds of the Ministry of Science and Higher Education (MNiSW) designated for activities popularizing science (2018).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-3a57fbef-adf5-4171-9f0d-b605e6b4325d