Article title
Content
Full texts:
Identifiers
Title variants
Publication languages
Abstracts
COVID-19 has spread across the world, and many different vaccines have been developed to counter its surge. To identify the sentiments associated with these vaccines in social media posts, we fine-tune various state-of-the-art pre-trained transformer models on tweets associated with COVID-19 vaccines. Specifically, we use the recently introduced state-of-the-art RoBERTa, XLNet, and BERT pre-trained transformer models, as well as the domain-specific CT-BERT and BERTweet transformer models that have been pre-trained on COVID-19 tweets. We further explore the option of text augmentation by oversampling using the language-model-based oversampling technique (LMOTE) to improve the accuracy of these models, specifically for small-sample datasets with an imbalanced class distribution among the positive, negative, and neutral sentiment classes. Our results summarize our findings on the suitability of text oversampling for imbalanced, small-sample datasets that are used to fine-tune state-of-the-art pre-trained transformer models, as well as on the utility of domain-specific transformer models for the classification task.
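The following sketches illustrate the two techniques named in the abstract. First, a minimal fine-tuning setup for one of the pre-trained transformers on the three-class sentiment task, using the Hugging Face transformers library; the checkpoint choice, toy data, and hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical fine-tuning sketch for 3-class vaccine-tweet sentiment.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# CT-BERT v2 on the Hugging Face Hub; "roberta-base", "xlnet-base-cased",
# "bert-base-uncased", or "vinai/bertweet-base" would slot in the same way.
checkpoint = "digitalepidemiologylab/covid-twitter-bert-v2"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=3)

# Toy stand-ins for the labeled tweet corpus: 0 = negative, 1 = neutral, 2 = positive.
tweets = ["got my second dose today, feeling great",
          "appointment is scheduled for next week",
          "worried about the side effects of this vaccine"]
labels = [2, 1, 0]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

train_set = Dataset.from_dict({"text": tweets, "label": labels}).map(tokenize, batched=True)

args = TrainingArguments(output_dir="vaccine-sentiment", num_train_epochs=3,
                         per_device_train_batch_size=16, learning_rate=2e-5)
# Passing the tokenizer enables dynamic padding via the default data collator.
Trainer(model=model, args=args, train_dataset=train_set, tokenizer=tokenizer).train()
```

Second, a rough sketch in the spirit of LMOTE (Leekha et al. [20]): minority-class samples are oversampled by masking a token and letting a masked language model fill it in, yielding a synthetic variant. The single-token masking scheme and the model choice here are simplifying assumptions for illustration.

```python
# Hypothetical LM-based text oversampling of a minority sentiment class.
import random
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def lm_oversample(text: str) -> str:
    tokens = text.split()
    i = random.randrange(len(tokens))
    tokens[i] = fill_mask.tokenizer.mask_token                   # "[MASK]" for BERT
    return fill_mask(" ".join(tokens), top_k=1)[0]["sequence"]   # best LM completion

minority_tweets = ["worried about the side effects of this vaccine"]
augmented = minority_tweets + [lm_oversample(t) for t in minority_tweets]
```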
Publisher
Journal
Year
Volume
Pages
163–182
Physical description
Bibliography: 46 items, figures, tables, charts
Contributors
author
- Delhi Technological University
author
- Delhi Technological University
author
- Delhi Technological University
author
- Delhi Technological University
Bibliography
- [1] Adeyemi I.O., Esan A.O.: Covid-19-Related Health Information Needs and Seeking Behavior among Lagos State Inhabitants of Nigeria, International Journal of Information Science and Management, vol. 20(1), pp. 171–185, 2022.
- [2] Adoma A.F., Henry N.M., Chen W.: Comparative Analyses of Bert, Roberta, Distilbert, and Xlnet for Text-Based Emotion Recognition. In: 2020 17th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP), pp. 117–121, IEEE, 2020. doi: 10.1109/ICCWAMTIP51612.2020.9317379.
- [3] Al-Hashedi A., Al-Fuhaidi B., Mohsen A.M., Ali Y., Gamal Al-Kaf H.A., Al-Sorori W., Maqtary N.: Ensemble classifiers for Arabic sentiment analysis of social network (Twitter data) towards COVID-19-related conspiracy theories, Applied Computational Intelligence and Soft Computing, vol. 2022, 2022.
- [4] Alenezi M.N., Alqenaei Z.M.: Machine learning in detecting COVID-19 misinformation on Twitter, Future Internet, vol. 13(10), 244, 2021.
- [5] Araci D.: FinBERT: Financial sentiment analysis with pre-trained language models, 2019. arXiv preprint, arXiv:1908.10063.
- [6] Bahdanau D., Cho K., Bengio Y.: Neural machine translation by jointly learning to align and translate. In: 3rd International Conference on Learning Representations (ICLR), 2015.
- [7] Bansal A., Susan S., Choudhry A., Sharma A.: Covid-19 Vaccine Sentiment Analysis During Second Wave in India by Transfer Learning Using XLNet. In: International Conference on Pattern Recognition and Artificial Intelligence, pp. 443–454, Springer, 2022.
- [8] Beltagy I., Lo K., Cohan A.: SciBERT: A Pretrained Language Model for Scientific Text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 3615–3620, 2019.
- [9] Chawla N.V., Bowyer K.W., Hall L.O., Kegelmeyer W.P.: SMOTE: synthetic minority over-sampling technique, Journal of Artificial Intelligence Research, vol. 16, pp. 321–357, 2002.
- [10] Dastgheib M.B., Koleini S., Rasti F.: The application of deep learning in Persian documents sentiment analysis, International Journal of Information Science and Management (IJISM), vol. 18(1), pp. 1–15, 2020.
- [11] Devlin J., Chang M.W., Lee K., Toutanova K.: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, vol. 1, pp. 4171–4186, 2019.
- [12] Glavas G., Somasundaran S.: Two-level Transformer and Auxiliary Coherence Modeling for Improved Text Segmentation, Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34(05), pp. 7797–7804, 2020.
- [13] Goel R., Susan S., Vashisht S., Dhanda A.: Emotion-Aware Transformer Encoder for Empathetic Dialogue Generation. In: 2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), pp. 1–6, IEEE, 2021.
- [14] Huang K., Altosaar J., Ranganath R.: ClinicalBERT: Modeling Clinical Notes and Predicting Hospital Readmission, 2019. arXiv preprint, arXiv:1904.05342.
- [15] Hutto C., Gilbert E.: VADER: A parsimonious rule-based model for sentiment analysis of social media text, Proceedings of the International AAAI Conference on Weblogs and Social Media, vol. 8(1), pp. 216–225, 2014.
- [16] Kou G., Yang P., Peng Y., Xiao F., Chen Y., Alsaadi F.E.: Evaluation of feature selection methods for text classification with small datasets using multiple criteria decision-making methods, Applied Soft Computing, vol. 86, 2020.
- [17] Lan Z., Chen M., Goodman S., Gimpel K., Sharma P., Soricut R.: ALBERT: A Lite BERT for Self-supervised Learning of Language Representations, 2019. arXiv preprint, arXiv:1909.11942.
- [18] Lee J.S., Hsiang J.: PatentBERT: Patent classification with fine-tuning a pre-trained BERT model, 2019. arXiv preprint, arXiv:1906.02124.
- [19] Lee J., Yoon W., Kim S., Kim D., Kim S., So C.H., Kang J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining, Bioinformatics, vol. 36(4), pp. 1234–1240, 2020.
- [20] Leekha M., Goswami M., Jain M.: A multi-task approach to open domain suggestion mining using language model for text over-sampling. In: European Conference on Information Retrieval, pp. 223–229, Springer, Cham, 2020.
- [21] Liew T.M., Lee C.S.: Examining the Utility of Social Media in COVID-19 Vaccination: Unsupervised Learning of 672,133 Twitter Posts, JMIR Public Health and Surveillance, vol. 7(11), 29789, 2021.
- [22] Liu J., Lu S., Lu C.: Exploring and Monitoring the Reasons for Hesitation with COVID-19 Vaccine Based on Social-Platform Text and Classification Algorithms, Healthcare, vol. 9, 1353, 2021.
- [23] Liu S., Liu J.: Public attitudes toward COVID-19 vaccines on English-language Twitter: A sentiment analysis, Vaccine, vol. 39(39), pp. 5499–5505, 2021.
- [24] Liu Y., Ott M., Goyal N., Du J., Joshi M., Chen D., Levy O. et al.: RoBERTa: A Robustly Optimized BERT Pretraining Approach, 2019. arXiv preprint, arXiv:1907.11692.
- [25] Lu J., Plataniotis K.N., Venetsanopoulos A.N.: Regularization studies of linear discriminant analysis in small sample size scenarios with application to face recognition, Pattern Recognition Letters, vol. 26(2), pp. 181–191, 2005.
- [26] Mallick R., Susan S., Agrawal V., Garg R., Rawal P.: Context- and sequence-aware convolutional recurrent encoder for neural machine translation. In: SAC’21: Proceedings of the 36th Annual ACM Symposium on Applied Computing, pp. 853–856, 2021. doi: 10.1145/3412841.3442099.
- [27] Manguri K.H., Ramadhan R.N., Amin P.R.M.: Twitter Sentiment Analysis on Worldwide COVID-19 Outbreaks, Kurdistan Journal of Applied Research, vol. 5(3), pp. 54–65, 2020. doi: 10.24017/covid.8.
- [28] Marcec R., Likic R.: Using Twitter for sentiment analysis towards AstraZeneca/Oxford, Pfizer/BioNTech and Moderna COVID-19 vaccines, Postgraduate Medical Journal, 2021.
- [29] Martin L., Muller B., Ortiz Suarez P.J., Dupont Y., Romary L., de la Clergerie É., Seddah D., Sagot B.: CamemBERT: a Tasty French Language Model. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 7203–7219, 2020. doi: 10.18653/v1/2020.acl-main.645.
- [30] Mohsen A., Ali Y., Al-Sorori W., Maqtary N.A., Al-Fuhaidi B., Altabeeb A.M.: A performance comparison of machine learning classifiers for Covid-19 Arabic Quarantine tweets sentiment analysis. In: 2021 1st International Conference on Emerging Smart Technologies and Applications (eSmarTA), pp. 1–8, IEEE, 2021.
- [31] Müller M., Salathé M., Kummervold P.E.: COVID-Twitter-BERT: A Natural Language Processing Model to Analyse COVID-19 Content on Twitter, 2020. arXiv preprint, arXiv:2005.07503.
- [32] Naseem U., Razzak I., Khushi M., Eklund P.W., Kim J.: COVIDSenti: A large-scale benchmark Twitter data set for COVID-19 sentiment analysis, IEEE Transactions on Computational Social Systems, vol. 8(4), pp. 1003–1015, 2021.
- [33] Naseem U., Razzak I., Musial K., Imran M.: Transformer based Deep Intelligent Contextual Embedding for Twitter sentiment analysis, Future Generation Computer Systems, vol. 113, pp. 58–69, 2020. doi: 10.1016/j.future.2020.06.050.
- [34] Nguyen D.Q., Vu T., Nguyen A.T.: BERTweet: A pre-trained language model for English Tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14, 2020.
- [35] Nowak S.A., Chen C., Parker A.M., Gidengil C.A., Matthews L.J.: Comparing covariation among vaccine hesitancy and broader beliefs within Twitter and survey data, PloS One, vol. 15(10), 2020. doi: 10.1371/journal.pone.0239826.
- [36] Olaleye T., Abayomi-Alli A., Adesemowo K., Arogundade O.T., Misra S., Kose U.: SCLAVOEM: hyper parameter optimization approach to predictive modelling of COVID-19 infodemic tweets using SMOTE and classifier vote ensemble, Soft Computing, vol. 27(6), pp. 3531–3550, 2022.
- [37] Pires T., Schlinger E., Garrette D.: How Multilingual is Multilingual BERT? In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 4996–5001, 2019.
- [38] Preda G.: COVID-19 All Vaccines Tweets. https://www.kaggle.com/gpreda/all-covid19-vaccines-tweets. Last accessed on 11th Feb 2022.
- [39] Saini M., Susan S.: Data augmentation of minority class with transfer learning for classification of imbalanced breast cancer dataset using Inception-V3. In: Iberian Conference on Pattern Recognition and Image Analysis, pp. 409–420, Springer, Cham, 2019.
- [40] Sattar N.S., Arifuzzaman S.: COVID-19 Vaccination awareness and aftermath: Public sentiment analysis on Twitter data and vaccinated population prediction in the USA, Applied Sciences, vol. 11(13), 6128, 2021.
- [41] Scheible R., Thomczyk F., Tippmann P., Jaravine V., Boeker M.: GottBERT: a pure German Language Model, 2020. arXiv preprint, arXiv:2012.02110.
- [42] Susan S., Kumar A.: The balancing trick: Optimized sampling of imbalanced datasets – A brief survey of the recent State of the Art, Engineering Reports, vol. 3(4), 12298, 2021.
- [43] Vashishtha S., Susan S.: Inferring sentiments from supervised classification of text and speech cues using fuzzy rules, Procedia Computer Science, vol. 167, pp. 1370–1379, 2020.
- [44] Vaswani A., Shazeer N., Parmar N., Uszkoreit J., Jones L., Gomez A.N., Kaiser L. et al.: Attention is All you Need. In: Advances in Neural Information Processing Systems (NIPS 2017), vol. 30, pp. 5998–6008, 2017.
- [45] Wang T., Lu K., Chow K.P., Zhu Q.: COVID-19 Sensing: Negative Sentiment Analysis on Social Media in China via BERT Model, IEEE Access, vol. 8, pp. 138162–138169, 2020. doi: 10.1109/ACCESS.2020.3012595.
- [46] Yang Z., Dai Z., Yang Y., Carbonell J., Salakhutdinov R.R., Le Q.V.: XLNet: generalized autoregressive pretraining for language understanding. In: NIPS’19: Proceedings of the 33rd International Conference on Neural Information Processing Systems, pp. 5753–5763, 2019.
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-97d7966a-0613-4e62-ba30-b33b5851a055