Article title

Improved efficient capsule network for Kuzushiji-MNIST benchmark dataset classification

Content
Identifiers
Title variants
Publication languages
EN
Abstracts
EN
In this paper, we present an improved efficient capsule network (CN) model for the classification of the Kuzushiji-MNIST and Kuzushiji-49 benchmark datasets. CNs are a promising approach in the field of deep learning, offering advantages such as robustness, better generalization, and a simpler network structure compared to traditional convolutional neural networks (CNNs). The proposed model, based on the Efficient-CapsNet architecture, incorporates the self-attention routing mechanism, resulting in improved efficiency and a reduced parameter count. The experiments conducted on the Kuzushiji-MNIST and Kuzushiji-49 datasets demonstrate that the model achieves competitive performance, ranking within the top ten solutions for both benchmarks. Despite using significantly fewer parameters than higher-rated competitors, the presented model achieves comparable accuracy, with overall differences of only 0.91% and 1.97% for the Kuzushiji-MNIST and Kuzushiji-49 datasets, respectively. Furthermore, the training time required to achieve these results is substantially reduced, enabling training on non-specialized workstations. The proposed novelties of the capsule architecture, including the integration of the self-attention mechanism and the efficient network structure, contribute to the improved efficiency and performance of the presented model. These findings highlight the potential of CNs as a more efficient and effective approach for character classification tasks, with broader applications in various domains.
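The self-attention routing mentioned above replaces the iterative routing-by-agreement of the original capsule network [6] with a single attention pass over the capsule votes. The following is a minimal PyTorch sketch of that idea, loosely based on Efficient-CapsNet [12]; the tensor shapes, names, and exact scoring below are illustrative assumptions, not the authors' implementation (see [15] for the reference code).

import torch
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    # Capsule non-linearity [6]: preserves vector orientation and
    # maps the vector length into [0, 1).
    norm2 = (s * s).sum(dim=dim, keepdim=True)
    return (norm2 / (1.0 + norm2)) * s / torch.sqrt(norm2 + eps)

def self_attention_routing(votes):
    # votes: (batch, n_in, n_out, d) -- prediction ("vote") vectors from
    # each lower-level capsule to each higher-level capsule.
    b, n_in, n_out, d = votes.shape
    v = votes.permute(0, 2, 1, 3)  # (batch, n_out, n_in, d)
    # Pairwise agreement between votes for the same output capsule,
    # scaled as in dot-product attention [13].
    attn = torch.matmul(v, v.transpose(-1, -2)) / d ** 0.5
    # Coupling coefficients in a single pass: no routing iterations.
    c = F.softmax(attn.sum(dim=-1), dim=-1)  # (batch, n_out, n_in)
    s = (c.unsqueeze(-1) * v).sum(dim=2)     # (batch, n_out, d)
    return squash(s)

# Example: 16 primary capsules voting for 10 class capsules of dimension 8.
out = self_attention_routing(torch.randn(2, 16, 10, 8))
print(out.shape)  # torch.Size([2, 10, 8])

Because the coupling coefficients come from one softmax over vote agreements rather than from several routing iterations, the layer keeps no per-iteration state, which is consistent with the reduced parameter count and training time reported above.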
Year
Pages
art. no. e147338
Physical description
Bibliography: 41 items, figures, tables
Authors
  • Department of Artificial Intelligence, Institute of Information Technology, Warsaw University of Life Sciences, Nowoursynowska 159, Warsaw, 02-776, Poland
  • Department of Artificial Intelligence, Institute of Information Technology, Warsaw University of Life Sciences, Nowoursynowska 159, Warsaw, 02-776, Poland
  • Department of Artificial Intelligence, Institute of Information Technology, Warsaw University of Life Sciences, Nowoursynowska 159, Warsaw, 02-776, Poland
Bibliography
  • [1] J. Kurek, B. Swiderski, A. Jegorowa, M. Kruk, and S. Osowski, “Deep learning in assessment of drill condition on the basis of images of drilled holes,” in Eighth International Conference on Graphic and Image Processing (ICGIP 2016), Y. Wang, T. D. Pham, V. Vozenilek, D. Zhang, and Y. Xie, Eds., vol. 10225, International Society for Optics and Photonics. SPIE, 2017, p. 102251V, doi: 10.1117/12.2266254.
  • [2] A. Jegorowa, J. Kurek, I. Antoniuk, W. Dołowa, M. Bukowski, and P. Czarniak, “Deep learning methods for drill wear classification based on images of holes drilled in melamine faced chipboard,” Wood Sci. Technol., vol. 55, no. 1, pp. 271–293, Jan. 2021, doi: 10.1007/s00226-020-01245-7.
  • [3] J. Kurek et al., “Classifiers ensemble of transfer learning for improved drill wear classification using convolutional neural network,” Mach. Graph. Vis., vol. 28, no. 1/4, pp. 13–23, Dec. 2019, doi: 10.22630/MGV.2019.28.1.2.
  • [4] G. Hinton, A. Krizhevsky, and S. Wang, “Transforming autoencoders,” in Artificial Neural Networks and Machine Learning – ICANN 2011, vol. 6791, Jun. 2011, pp. 44–51, doi: 10.1007/978-3-642-21735-7_6.
  • [5] A. Jegorowa, J. Górski, J. Kurek, and M. Kruk, “Use of nearest neighbors (k-NN) algorithm in tool condition identification in the case of drilling in melamine faced particleboard,” Maderas-Cienc. Tecnol., vol. 22, no. 2, pp. 189–196, 2020, doi: 10.4067/S0718-221X2020005000205.
  • [6] S. Sabour, N. Frosst, and G.E. Hinton, “Dynamic routing between capsules,” 2017, doi: 10.48550/ARXIV.1710.09829. [Online]. Available: https://arxiv.org/abs/1710.09829
  • [7] F.D.S. Ribeiro, G. Leontidis, and S.D. Kollias, “Capsule routing via variational Bayes,” CoRR, vol. abs/1905.11455, 2019. [Online]. Available: http://arxiv.org/abs/1905.11455
  • [8] F.A. Heinsen, “An algorithm for routing capsules in all domains,” arXiv preprint arXiv:1911.00792, 2019.
  • [9] A. Byerly, T. Kalganova, and I. Dear, “A branching and merging convolutional network with homogeneous filter capsules,” CoRR, vol. abs/2001.09136, 2020. [Online]. Available: https://arxiv.org/abs/2001.09136
  • [10] S.R. Venkatraman, A. Anand, S. Balasubramanian, and R.R. Sarma, “Learning compositional structures for deep learning: Why routing-by-agreement is necessary,” CoRR, vol. abs/2010.01488, 2020. [Online]. Available: https://arxiv.org/abs/2010.01488
  • [11] D. Wang and Q. Liu, “An optimization view on dynamic routing between capsules,” 2018. [Online]. Available: https://openreview.net/forum?id=HJjtFYJDf
  • [12] V. Mazzia, F. Salvetti, and M. Chiaberge, “Efficient-CapsNet: capsule network with self-attention routing,” Sci. Rep., vol. 11, no. 1, Jul. 2021, doi: 10.1038/s41598-021-93977-0.
  • [13] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A.N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is all you need,” 2017, doi: 10.48550/ARXIV.1706.03762. [Online]. Available: https://arxiv.org/abs/1706.03762
  • [14] T.B. Brown, B. Mann, N. Ryder, M. Subbiah et al., “Language models are few-shot learners,” 2020, doi: 10.48550/ARXIV.2005.14165. [Online]. Available: https://arxiv.org/abs/2005.14165
  • [15] V. Mazzia, F. Salvetti, and M. Chiaberge, “GitHub repository for Efficient-CapsNet,” 2021. [Online]. Available: https://github.com/EscVM/Efficient-CapsNet
  • [16] T. Clanuwat, M. Bober-Irizar, A. Kitamoto, A. Lamb, K. Yamamoto, and D. Ha, “Deep learning for classical Japanese literature,” arXiv preprint arXiv:1812.01718, 2018.
  • [17] H. Xiao, K. Rasul, and R. Vollgraf, “Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms,” arXiv preprint arXiv:1708.07747, 2017.
  • [18] R. LaLonde, Z. Xu, I. Irmakci, S. Jain, and U. Bagci, “Capsules for biomedical image segmentation,” Med. Image Anal., vol. 68, p. 101889, 2021.
  • [19] R. LaLonde and U. Bagci, “Capsules for object segmentation,” arXiv preprint arXiv:1804.04241, 2018.
  • [20] Y. He, W. Qin, Y. Wu, M. Zhang, Y. Yang, X. Liu, H. Zheng, D. Liang, and Z. Hu, “Automatic left ventricle segmentation from cardiac magnetic resonance images using a capsule network,” J. X-Ray Sci. Technol., vol. 28, no. 3, pp. 541–553, 2020.
  • [21] M. Elmezain, A. Mahmoud, D.T. Mosa, and W. Said, “Brain tumor segmentation using deep capsule network and latent-dynamic conditional random fields,” J. Imaging, vol. 8, no. 7, p. 190, 2022.
  • [22] X. Zhang and S.-G. Zhao, “Cervical image classification based on image segmentation preprocessing and a CapsNet network model,” Int. J. Imaging Syst. Technol., vol. 29, no. 1, pp. 19–28, 2019, doi: 10.1002/ima.22291.
  • [23] A. Kumar and N. Sachdeva, “Multimodal cyberbullying detection using capsule network with dynamic routing and deep convolutional neural network,” Multimedia Syst., vol. 28, pp. 2043–2052, 2022.
  • [24] B. Chen, Z. Xu, X. Wang, L. Xu, and W. Zhang, “Capsule network-based text sentiment classification,” IFAC-PapersOnLine, vol. 53, no. 5, pp. 698–703, 2020.
  • [25] J. Kim, S. Jang, E. Park, and S. Choi, “Text classification using capsules,” Neurocomputing, vol. 376, pp. 214–221, 2020.
  • [26] H. Ren and H. Lu, “Compositional coding capsule network with k-means routing for text classification,” Pattern Recognit. Lett., vol. 160, pp. 1–8, 2022.
  • [27] D.K. Jain, R. Jain, Y. Upadhyay, A. Kathuria, and X. Lan, “Deep refinement: Capsule network with attention mechanism-based system for text classification,” Neural Comput. Appl., vol. 32, pp. 1839–1856, 2020.
  • [28] J.S. Manoharan, “Capsule network algorithm for performance optimization of text classification,” J. Soft Comput. Paradigm (JSCP), vol. 3, no. 01, pp. 1–9, 2021.
  • [29] L. Xiao, H. Zhang, W. Chen, Y. Wang, and Y. Jin, “MCapsNet: Capsule network for text with multi-task learning,” in Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 2018, pp. 4565–4574.
  • [30] F. Beşer, M.A. Kizrak, B. Bolat, and T. Yildirim, “Recognition of sign language using capsule networks,” in 2018 26th Signal Processing and Communications Applications Conference (SIU). IEEE, 2018, pp. 1–4.
  • [31] A.D. Kumar, “Novel deep learning model for traffic sign detection using capsule networks,” arXiv preprint arXiv:1805.04424, 2018.
  • [32] B. Janakiramaiah, G. Kalyani, A. Karuna, L.N. Prasad, and M. Krishna, “Military object detection in defense using multi-level capsule networks,” Soft Comput., vol. 27, pp. 1045–1059, 2023, doi: 10.1007/s00500-021-05912-0.
  • [33] P.-A. Andersen, “Deep reinforcement learning using capsules in advanced game environments,” arXiv preprint arXiv:1801.09597, 2018.
  • [34] T. Molnar and E. Culurciello, “Capsule network performance with autonomous navigation,” arXiv preprint arXiv:2002.03181, 2020.
  • [35] V. Jayasundara, S. Jayasekara, H. Jayasekara, J. Rajasegaran, S. Seneviratne, and R. Rodrigo, “TextCaps: Handwritten character recognition with very small datasets,” in 2019 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, Jan. 2019, doi: 10.1109/wacv.2019.00033.
  • [36] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri, “Learning spatiotemporal features with 3D convolutional networks,” in 2015 IEEE International Conference on Computer Vision (ICCV). Los Alamitos, CA, USA: IEEE Computer Society, Dec. 2015, pp. 4489–4497, doi: 10.1109/ICCV.2015.510. [Online]. Available: https://doi.ieeecomputersociety.org/10.1109/ICCV.2015.510
  • [37] K. Duarte, Y.S. Rawat, and M. Shah, “VideoCapsuleNet: A simplified network for action detection,” 2018.
  • [38] D. Ma and X. Wu, “CapsuleRRT: Relationships-aware regression tracking via capsules,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2021, pp. 10948–10957.
  • [39] T. Vijayakumar, “Comparative study of capsule neural network in various applications,” J. Artif. Intell., vol. 1, no. 01, pp. 19–27, 2019.
  • [40] Y. LeCun and C. Cortes, “MNIST handwritten digit database,” 2010. [Online]. Available: http://yann.lecun.com/exdb/mnist/
  • [41] G. Cohen, S. Afshar, J. Tapson, and A. van Schaik, “EMNIST: an extension of MNIST to handwritten letters,” CoRR, vol. abs/1702.05373, 2017. [Online]. Available: http://arxiv.org/abs/1702.05373
Notes
Record created with funds from the Ministry of Education and Science (MNiSW), agreement no. SONP/SP/546092/2022, under the "Społeczna odpowiedzialność nauki" (Social Responsibility of Science) programme, module: popularisation of science and promotion of sport (2024).
Document type
YADDA identifier
bwmeta1.element.baztech-f8f0ee8b-47bc-4204-9a89-41aa3008df7c