Article title

Deep Learning transformer architecture for Named Entity Recognition on low resourced languages: state of the art results

Conference
Federated Conference on Computer Science and Information Systems (17th; 04–07.09.2022; Sofia, Bulgaria)
Publication languages
EN
Abstract
EN
This paper reports on the evaluation of Deep Learning (DL) transformer architecture models for Named-Entity Recognition (NER) on ten low-resourced South African (SA) languages. The DL transformer models were also compared to other neural network and Machine Learning (ML) NER models. The findings show that transformer models substantially improve performance when fine-tuning parameters are chosen separately for each language. Furthermore, the fine-tuned transformer models outperform the other neural network and machine learning models on NER for the low-resourced SA languages. For example, the transformer models obtained the highest F-scores for six of the ten SA languages and the highest average F-score, surpassing the Conditional Random Fields (CRF) ML model. Practical implications include developing high-performance NER capability at lower effort and resource cost, potentially improving downstream NLP tasks such as Machine Translation (MT). The application of DL transformer architecture models to NLP NER sequence-tagging tasks on low-resourced SA languages is therefore viable. Additional research could evaluate more recent transformer architecture models on other Natural Language Processing tasks and applications, such as phrase chunking, MT, and Part-of-Speech tagging.
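The abstract describes a general recipe: fine-tune a pretrained multilingual transformer per language for NER sequence tagging, then compare F-scores against CRF and neural baselines. The sketch below illustrates that recipe only in outline, assuming the Hugging Face Transformers library and an XLM-RoBERTa checkpoint (one of the architecture families discussed in the cited literature, e.g. refs. 2 and 13); the label set, hyperparameters, and the load_sa_ner_corpus helper are illustrative assumptions, not the paper's actual code or corpora.

```python
# A minimal sketch of per-language transformer fine-tuning for NER.
# The checkpoint, labels, hyperparameters, and corpus loader are assumptions.
from transformers import (AutoTokenizer, AutoModelForTokenClassification,
                          DataCollatorForTokenClassification,
                          Trainer, TrainingArguments)

# CoNLL-style IOB label set (assumed; the SA corpora may use a different set).
labels = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG",
          "B-LOC", "I-LOC", "B-MISC", "I-MISC"]
id2label = dict(enumerate(labels))
label2id = {name: i for i, name in id2label.items()}

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForTokenClassification.from_pretrained(
    "xlm-roberta-base", num_labels=len(labels),
    id2label=id2label, label2id=label2id)

def encode(example):
    # Tokenize pre-split words and align word-level NER tags to sub-words;
    # special tokens and sub-word continuations get -100 (ignored by the loss).
    enc = tokenizer(example["tokens"], is_split_into_words=True,
                    truncation=True)
    aligned, prev = [], None
    for w in enc.word_ids():
        aligned.append(example["ner_tags"][w]
                       if w is not None and w != prev else -100)
        prev = w
    enc["labels"] = aligned
    return enc

# Hypothetical loader for one language's corpus: a datasets.Dataset with
# "tokens" (word lists) and "ner_tags" (label-id lists) columns.
# train_ds, dev_ds = load_sa_ner_corpus(language="nso")
train_ds, dev_ds = train_ds.map(encode), dev_ds.map(encode)

# "Fine-tuning parameters per language": one run per language, with
# hyperparameters chosen separately (the values here are guesses).
args = TrainingArguments(output_dir="xlmr-ner-nso",
                         learning_rate=2e-5,
                         num_train_epochs=4,
                         per_device_train_batch_size=16,
                         evaluation_strategy="epoch")
trainer = Trainer(model=model, args=args,
                  train_dataset=train_ds, eval_dataset=dev_ds,
                  data_collator=DataCollatorForTokenClassification(tokenizer))
trainer.train()
```

Held-out test predictions can then be scored at the entity level (precision, recall, and F-score in the CoNLL-2003 sense of ref. 19), for example with the seqeval package, which makes the comparison against CRF and bi-LSTM baselines direct.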
Pages
53–60
Physical description
Bibliography: 21 items; tables, charts, formulas.
Authors
  • University of Pretoria, Gauteng, South Africa
Bibliography
  • 1. Loubser, M., & Puttkammer, M. J. (2020). Viability of neural networks for core technologies for resource-scarce languages. Information (Switzerland). https://doi.org/10.3390/info11010041
  • 2. Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., & Stoyanov, V. (2020). Unsupervised Cross-lingual Representation Learning at Scale. https://doi.org/10.18653/v1/2020.acl-main.747
  • 3. Plank, B., Søgaard, A., & Goldberg, Y. (2016). Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss. 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016 - Short Papers. https://doi.org/10.18653/v1/p16-2067
  • 4. Lample, G., Ballesteros, M., Subramanian, S., Kawakami, K., & Dyer, C. (2016). Neural architectures for named entity recognition. 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT 2016 - Proceedings of the Conference. https://doi.org/10.18653/v1/n16-1030
  • 5. Lafferty, J., McCallum, A., & Pereira, F. C. N. (2001). Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. Proceedings of the Eighteenth International Conference on Machine Learning (ICML).
  • 6. Kudo, T. CRF++: Yet another CRF toolkit [Electronic resource]. GitHub. https://github.com/taku910/crfpp.
  • 7. Liddy, E. D. (2001). Natural Language Processing. In Encyclopedia of Library and Information Science.
  • 8. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems.
  • 9. Hedderich, M. A., Adelani, D., Zhu, D., Alabi, J., Markus, U., & Klakow, D. (2020). Transfer Learning and Distant Supervision for Multilingual Transformer Models: A Study on African Languages. https://doi.org/10.18653/v1/2020.emnlp-main.204
  • 10. Eiselen, R. (2016). Government domain named entity recognition for South African languages. Proceedings of the 10th International Conference on Language Resources and Evaluation, LREC 2016.
  • 11. Gu, J., Wang, Z., Kuen, J., Ma, L., Shahroudy, A., Shuai, B., Liu, T., Wang, X., Wang, G., Cai, J., & Chen, T. (2018). Recent advances in convolutional neural networks. Pattern Recognition. https://doi.org/10.1016/j.patcog.2017.10.013
  • 12. Schuster, M., & Paliwal, K. K. (1997). Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing. https://doi.org/10.1109/78.650093
  • 13. Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., & Brew, J. (2019). Transformers: State-of-the-art natural language processing. arXiv. https://doi.org/10.18653/v1/2020.emnlp-demos.
  • 14. Pires, T., Schlinger, E., & Garrette, D. (2019). How multilingual is multilingual BERT? ACL 2019 - 57th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference. https://doi.org/10.18653/v1/p19-1493
  • 15. Conneau, A., & Lample, G. (2019). Cross-lingual language model pretraining. Advances in Neural Information Processing Systems.
  • 16. Sokolova, M., & Lapalme, G. (2009). A systematic analysis of performance measures for classification tasks. Information Processing and Management, 45(4). https://doi.org/10.1016/j.ipm.2009.03.002
  • 17. Kadari, R., Zhang, Y., Zhang, W., & Liu, T. (2018). CCG supertagging via Bidirectional LSTM-CRF neural architecture. Neurocomputing, 283. https://doi.org/10.1016/j.neucom.2017.12.050
  • 18. Hanslo, R. (2021). Evaluation of Neural Network Transformer Models for Named-Entity Recognition on Low-Resourced Languages. 16th Conference on Computer Science and Intelligence Systems, FedCSIS. https://doi.org/10.15439/2021F7
  • 19. Sang, E. T. K., & De Meulder, F. (2003). Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL.
  • 20. Chen, S., Pei, Y., Ke, Z., & Silamu, W. (2021). Low-resource named entity recognition via the pre-training model. Symmetry, 13(5), 786.
  • 21. Gao, S., Kotevska, O., Sorokine, A., & Christian, J. B. (2021). A pretraining and self-training approach for biomedical named entity recognition. PloS one, 16(2), e0246310.
Notes
1. Track 1: 17th International Symposium on Advanced Artificial Intelligence in Applications
2. Record created with funding from the Polish Ministry of Education and Science (MEiN), agreement no. SONP/SP/546092/2022, under the "Social Responsibility of Science" programme, module: Popularisation of Science and Promotion of Sport (2022-2023).
YADDA identifier
bwmeta1.element.baztech-48af7eee-4959-44bf-962e-31f0bc311e38