Article title

Interpolation merge as augmentation technique in the problem of ship classification

Conference
Federated Conference on Computer Science and Information Systems (15 ; 06-09.09.2020 ; Sofia, Bulgaria)
Publication language
EN
Abstract
A common problem when training a classifier is the small number of samples in the training database, which can significantly affect the results. To enlarge such a database, data augmentation can be used: new samples are generated from existing ones, most often by simple transformations. In this paper, we propose a new approach that generates such samples using image processing techniques and a discrete interpolation method. The described technique creates a new image sample from at least two other samples of the same class. To verify the proposed approach, we performed tests with different convolutional neural network architectures on the ship classification problem.
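The idea sketched in the abstract, creating a new sample by interpolating at least two images of the same class, could look roughly like the following. This is a hypothetical reconstruction assuming simple pixel-wise linear blending with NumPy; the paper's actual discrete interpolation method may differ, and the names `interpolation_merge` and `augment_class` are illustrative, not taken from the paper.

```python
import numpy as np

def interpolation_merge(img_a, img_b, alpha=0.5):
    """Blend two same-class images pixel-wise into one new sample.

    Assumption: a plain linear interpolation stands in for the paper's
    discrete interpolation method. Both images must share one shape.
    """
    if img_a.shape != img_b.shape:
        raise ValueError("images must share the same shape")
    merged = (1.0 - alpha) * img_a.astype(np.float32) + alpha * img_b.astype(np.float32)
    return np.clip(merged, 0, 255).astype(np.uint8)

def augment_class(images, n_new, rng=None):
    """Generate n_new extra samples by merging random same-class pairs."""
    rng = rng or np.random.default_rng(0)
    new_samples = []
    for _ in range(n_new):
        # Pick two distinct parent images from the same class.
        i, j = rng.choice(len(images), size=2, replace=False)
        # Mid-range weights avoid producing near-copies of either parent.
        alpha = rng.uniform(0.3, 0.7)
        new_samples.append(interpolation_merge(images[i], images[j], alpha))
    return new_samples
```

Used per class, this grows a small training set without collecting new data, at the cost of generated samples lying "between" existing ones.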
Pages
443--446
Physical description
Bibliography: 14 items, formulas, illustrations.
Authors
  • Marine Technology Ltd., ul. Roszczynialskiego 4/6, 81-521 Gdynia, Poland
  • Maritime University of Szczecin, Waly Chrobrego 1-2, 70-500 Szczecin, Poland
Notes
1. This work was supported by the National Centre for Research and Development (NCBR) of Poland under grant no. LIDER/17/0098/L-8/16/NCBR/2017.
2. Track 2: Computer Science & Systems
3. Technical Session: Advances in Computer Science & Systems
4. Record prepared with funds of MNiSW, agreement No. 461252, under the programme "Społeczna odpowiedzialność nauki" (Social Responsibility of Science) - module: popularisation of science and promotion of sport (2021).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-7af686d8-af47-41ef-b30a-f09fac192e21