Article title

An ensemble of Deep Convolutional Neural Networks for Marking Hair Follicles on Microscopic Images

Identifiers
Title variants
Conference
Federated Conference on Computer Science and Information Systems (09-12.09.2018 ; Poznań, Poland)
Publication languages
EN
Abstracts
EN
This paper presents an application of a Convolutional Neural Network as a solution for the task associated with the ESENSEI Challenge: Marking Hair Follicles on Microscopic Images. As we show in this paper, the quality of classification results can be improved not only by changing the network architecture but also by ensembling networks. We present two solutions for the task: the first based on a benchmark convolutional neural network, and the second an ensemble of VGG-16 networks. The presented models took first and third place on the final competition leaderboard.
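The following minimal sketch (not the authors' code) illustrates the ensembling idea mentioned in the abstract: several VGG-16 based classifiers are built independently and their softmax outputs are averaged at prediction time. The patch size, number of classes, ensemble size, and classification head are assumptions for illustration only, written in Python with TensorFlow/Keras.

# Sketch of an ensemble of VGG-16 classifiers with averaged predictions.
# All sizes and the classification head are assumptions, not the paper's setup.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 2            # assumed: follicle centre vs. background
INPUT_SHAPE = (64, 64, 3)  # assumed image patch size

def build_vgg16_classifier() -> tf.keras.Model:
    """One ensemble member: VGG-16 convolutional base plus a small dense head."""
    base = tf.keras.applications.VGG16(
        include_top=False, weights=None, input_shape=INPUT_SHAPE)
    x = layers.GlobalAveragePooling2D()(base.output)
    x = layers.Dense(256, activation="relu")(x)
    out = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    return models.Model(base.input, out)

def ensemble_predict(members, patches: np.ndarray) -> np.ndarray:
    """Average the softmax outputs of all ensemble members."""
    probs = [m.predict(patches, verbose=0) for m in members]
    return np.mean(probs, axis=0)

if __name__ == "__main__":
    ensemble = [build_vgg16_classifier() for _ in range(3)]  # assumed ensemble size
    dummy = np.random.rand(4, *INPUT_SHAPE).astype("float32")
    print(ensemble_predict(ensemble, dummy).shape)  # (4, NUM_CLASSES)

In practice each member would be trained (possibly with different initializations or data augmentation) before its predictions are averaged; averaging softmax outputs is one common way to combine an ensemble of CNN classifiers.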
Year
Volume
Pages
23--28
Physical description
Bibliography: 14 items, charts, tables, figures.
Authors
  • National Information Processing Institute, Natural Language Processing Laboratory, al. Niepodległości 188b, 00-608 Warsaw, Poland
  • National Information Processing Institute, Natural Language Processing Laboratory, al. Niepodległości 188b, 00-608 Warsaw, Poland
  • National Information Processing Institute, Natural Language Processing Laboratory, al. Niepodległości 188b, 00-608 Warsaw, Poland
Notes
1. Track: Preface
2. Technical Session: 3rd International Workshop on Artificial Intelligence in Machine Vision and Graphics
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-0a55e2e5-1a8a-4dfe-add4-4665d6352bf5