Article title

Implementation of language models within an infrastructure designed for Natural Language Processing

Publication languages
EN
Abstracts
EN
This paper explores cost-effective alternatives for running language models in resource-constrained environments, investigating methods such as quantization and CPU-based model implementations. The study addresses the computational efficiency of language models during inference and the development of infrastructure for text document processing. The paper discusses related technologies, the CLARIN-PL infrastructure architecture, and implementations of small and large language models, with emphasis on model formats, data precision, and runtime environments (GPU and CPU). It identifies optimal solutions through extensive experimentation. In addition, the paper advocates a more comprehensive approach to performance evaluation: instead of reporting only average token throughput, it suggests considering the shape of the throughput curve, which can vary from a constant to a monotonically increasing or decreasing function. Evaluating token throughput at various points on the curve, especially for different output token counts, provides a more informative perspective.
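The evaluation idea advocated in the abstract can be sketched in a few lines: given per-token generation times, compute the cumulative tokens-per-second curve and compare points on it against the single average-throughput number. This is a hypothetical illustration, not code from the paper; the linearly growing per-token latency below is an assumption (chosen to mimic generation slowing as context grows), which yields a monotonically decreasing curve.

```python
def throughput_curve(token_times):
    """Cumulative tokens-per-second after each generated token.

    token_times: per-token generation times in seconds
    (in practice these would be measured from the inference runtime;
    here they are simulated).
    """
    curve = []
    elapsed = 0.0
    for i, t in enumerate(token_times, start=1):
        elapsed += t
        curve.append(i / elapsed)
    return curve

# Simulated per-token latencies: a 10 ms base cost plus a small
# linearly growing term (an assumed latency model for illustration).
times = [0.010 + 0.0001 * i for i in range(500)]
curve = throughput_curve(times)

avg = len(times) / sum(times)  # the single "average throughput" number
print(f"average over 500 tokens: {avg:.1f} tok/s")
print(f"at  50 output tokens:    {curve[49]:.1f} tok/s")
print(f"at 500 output tokens:    {curve[499]:.1f} tok/s")
```

Under this latency model the curve starts near 100 tok/s and falls toward the 500-token average, so reporting only the average would overstate throughput for long outputs and understate it for short ones.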
Year
Pages
153-159
Physical description
Bibliography: 44 items, photographs, tables, charts
Authors
  • Faculty of Information and Communication Technology, Wroclaw University of Science and Technology, Wroclaw, Poland
  • Faculty of Information and Communication Technology, Wroclaw University of Science and Technology, Wroclaw, Poland
Bibliography
  • [1] T. B. Brown et al., “Language models are few-shot learners,” in Proceedings of the 34th International Conference on Neural Information Processing Systems, ser. NIPS’20. Red Hook, NY, USA: Curran Associates Inc., 2020. [Online]. Available: https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf
  • [2] H. Touvron et al., “Llama 2: Open foundation and fine-tuned chat models,” 2023. [Online]. Available: doi:10.48550/arXiv.2307.09288
  • [3] J. Devlin, M. Chang, K. Lee, and K. Toutanova, “BERT: pre-training of deep bidirectional transformers for language understanding,” in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), J. Burstein, C. Doran, and T. Solorio, Eds. Association for Computational Linguistics, 2019, pp. 4171-4186. [Online]. Available: doi:10.18653/v1/n19-1423
  • [4] C. Raffel et al., “Exploring the limits of transfer learning with a unified text-to-text transformer,” J. Mach. Learn. Res., vol. 21, p. 67, 2020, id/No 140. [Online]. Available: jmlr.csail.mit.edu/papers/v21/20-074.html
  • [5] T. Wolf et al., “Transformers: State-of-the-art natural language processing,” in Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. Association for Computational Linguistics, 2020, pp. 38-45. [Online]. Available: https://aclanthology.org/2020.emnlp-demos.6
  • [6] Y. Liu et al., “RoBERTa: A robustly optimized BERT pretraining approach,” 2019. [Online]. Available: http://arxiv.org/abs/1907.11692
  • [7] M. Lewis et al., “BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension,” in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Online: Association for Computational Linguistics, Jul. 2020, pp. 7871-7880. [Online]. Available: doi:10.18653/v1/2020.acl-main.703
  • [8] J. Bai et al., “ONNX: Open neural network exchange,” 2019. [Online]. Available: https://github.com/onnx/onnx
  • [9] G. Gerganov, “GGML - tensor library for machine learning,” 2023. [Online]. Available: https://github.com/ggerganov/ggml
  • [10] G. Gerganov, “Inference of LLaMA model in pure C/C++,” 2023. [Online]. Available: https://github.com/ggerganov/llama.cpp
  • [11] NVIDIA Corporation, “Triton inference server: An optimized cloud and edge inferencing,” 2019. [Online]. Available: https://github.com/triton-inference-server/server
  • [12] C. Olston et al., “Tensorflow-serving: Flexible, high-performance ml serving,” in Workshop on ML Systems at NIPS 2017, 2017. [Online]. Available: doi:10.48550/arXiv.1712.06139
  • [13] NVIDIA Corporation, “Fastertransformer,” 2019. [Online]. Available: https://github.com/NVIDIA/FasterTransformer
  • [14] D. Li, H. Wang, R. Shao, H. Guo, E. P. Xing, and H. Zhang, “MPCFORMER: fast, performant and private transformer inference with MPC,” in The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023. [Online]. Available: https://openreview.net/pdf?id=CWmvjOEhgH-
  • [15] G.-I. Yu, J. S. Jeong, G.-W. Kim, S. Kim, and B.-G. Chun, “Orca: A distributed serving system for Transformer-Based generative models,” in 16th USENIX Symposium on Operating Systems Design and Implementation (OSDI 22). Carlsbad, CA: USENIX Association, Jul. 2022, pp. 521-538. [Online]. Available: https://www.usenix.org/conference/osdi22/presentation/yu
  • [16] N. Yang et al., “Inference with reference: Lossless acceleration of large language models,” 2023. [Online]. Available: doi:10.48550/arXiv.2304.04487
  • [17] B. Wu, Y. Zhong, Z. Zhang, G. Huang, X. Liu, and X. Jin, “Fast distributed inference serving for large language models,” arXiv:2305.05920, 2023.
  • [18] T. Dettmers, M. Lewis, Y. Belkada, and L. Zettlemoyer, “LLM.int8(): 8-bit matrix multiplication for transformers at scale,” in NeurIPS, 2022. [Online]. Available: http://papers.nips.cc/paper_files/paper/2022/hash/c3ba4962c05c49636d4c6206a97e9c8a-Abstract-Conference.html
  • [19] Z. Li, E. Wallace, S. Shen, K. Lin, K. Keutzer, D. Klein, and J. Gonzalez, “Train big, then compress: Rethinking model size for efficient training and inference of transformers,” in Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, ser. Proceedings of Machine Learning Research, vol. 119. PMLR, 2020, pp. 5958-5968. [Online]. Available: http://proceedings.mlr.press/v119/li20m.html
  • [20] The Apache Software Foundation, “Apache Airflow,” 2023. [Online]. Available: https://airflow.apache.org
  • [21] Prefect Technologies, Inc., “Prefect,” 2023. [Online]. Available: https://www.prefect.io
  • [22] Explosion, “spaCy: Industrial-strength NLP,” 2023. [Online]. Available: https://github.com/explosion/spaCy
  • [23] P. Qi, Y. Zhang, Y. Zhang, J. Bolton, and C. D. Manning, “Stanza: A Python natural language processing toolkit for many human languages,” in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations. Online: Association for Computational Linguistics, Jul. 2020, pp. 101-108. [Online]. Available: doi:10.18653/v1/2020.acl-demos.14
  • [24] A. Branco et al., “The CLARIN infrastructure as an interoperable language technology platform for SSH and beyond,” Language Resources and Evaluation, Jun. 2023. [Online]. Available: doi:10.1007/s10579-023-09658-z
  • [25] M. Hinrichs, T. Zastrow, and E. W. Hinrichs, “Weblicht: Web-based LRT services in a distributed escience infrastructure,” in Proceedings of the International Conference on Language Resources and Evaluation, LREC 2010, 17-23 May 2010, Valletta, Malta, N. Calzolari et al., Eds. European Language Resources Association, 2010. [Online]. Available: http://www.lrec-conf.org/proceedings/lrec2010/summaries/270.html
  • [26] A. Lemmens and V. Vandeghinste, “A lightweight NLP workflow engine for CLARIN-BE,” in CLARIN Annual Conference Proceedings, 2022, pp. 29-34. [Online]. Available: https://www.clarin.eu/sites/default/files/CLARIN2022 P 2.1.4 LemmensVandeghinste.pdf
  • [27] M. Pol, T. Walkowiak, and M. Piasecki, “Towards CLARIN-PL LTC digital research platform for: Depositing, processing, analyzing and visualizing language data,” in Reliability and Statistics in Transportation and Communication, I. Kabashkin, I. Yatskiv, and O. Prentkovskis, Eds. Cham: Springer International Publishing, 2018, pp. 485-494. [Online]. Available: doi:10.1007/978-3-319-74454-4_47
  • [28] T. Walkowiak, “Web based engine for processing and clustering of Polish texts,” in Theory and Engineering of Complex Systems and Dependability, W. Zamojski et al., Eds. Cham: Springer International Publishing, 2015, pp. 515-522. [Online]. Available: doi:10.1007/978-3-319-19216-1_49
  • [29] S. Newman, Monolith to microservices: evolutionary patterns to transform your monolith. O’Reilly Media, 2019.
  • [30] VMware, “RabbitMQ,” 2023. [Online]. Available: https://www.rabbitmq.com
  • [31] OASIS, “Advanced Message Queuing Protocol,” 2023. [Online]. Available: https://www.amqp.org
  • [32] The Linux Foundation, “Kubernetes,” 2023. [Online]. Available: https://kubernetes.io
  • [33] The Linux Foundation, “Kubernetes Event-driven Autoscaling,” 2023. [Online]. Available: https://keda.sh
  • [34] The Linux Foundation, “HELM The package manager for Kubernetes,” 2023. [Online]. Available: https://helm.sh
  • [35] T. Walkowiak, “Language processing modelling notation - orchestration of NLP microservices,” in Advances in Dependability Engineering of Complex Systems, ser. Advances in Intelligent Systems and Computing, W. Zamojski et al., Eds., vol. 582. Springer, 2017, pp. 464-473. [Online]. Available: doi:10.1007/978-3-319-59415-6_44
  • [36] L. Wang, N. Yang, X. Huang, B. Jiao, L. Yang, D. Jiang, R. Majumder, and F. Wei, “Text embeddings by weakly-supervised contrastive pretraining,” 2022. [Online]. Available: doi:10.48550/arXiv.2212.03533
  • [37] Y. Belkada and T. Dettmers, “A gentle introduction to 8-bit matrix multiplication for transformers at scale using Hugging Face Transformers, Accelerate and bitsandbytes,” 2022. [Online]. Available: https://huggingface.co/blog/hf-bitsandbytes-integration
  • [38] NVIDIA Corporation, “NVIDIA Multi-Instance GPU user guide,” 2023. [Online]. Available: https://docs.nvidia.com/datacenter/tesla/pdf/NVIDIA_MIG_User_Guide.pdf
  • [39] NVIDIA Corporation, “Time-slicing GPUs in Kubernetes,” 2023. [Online]. Available: https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/gpu-sharing.html
  • [40] P. Pezik, A. Mikołajczyk, A. Wawrzyński, B. Nitoń, and M. Ogrodniczuk, “Keyword extraction from short texts with a text-to-text transfer transformer,” in Recent Challenges in Intelligent Information and Database Systems, E. Szczerbicki et al., Eds. Singapore: Springer Nature Singapore, 2022, pp. 530-542. [Online]. Available: doi:10.1007/978-981-19-8234-7_41
  • [41] C. Klamra et al., “Devulgarization of Polish texts using pre-trained language models,” in Computational Science - ICCS 2022. Cham: Springer International Publishing, 2022, pp. 49-55. [Online]. Available: doi:10.1007/978-3-031-08754-7_7
  • [42] N. Reimers and I. Gurevych, “Sentence-BERT: Sentence embeddings using Siamese BERT-networks,” in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Nov. 2019. [Online]. Available: http://arxiv.org/abs/1908.10084
  • [43] B. Peng et al., “RWKV: Reinventing RNNs for the transformer era,” 2023. [Online]. Available: doi:10.48550/arXiv.2305.13048
  • [44] L. de la Torre, J. Chacon, D. Chaos, S. Dormido, and J. Sánchez, “Using server-sent events for event-based control in networked control systems,” IFAC-PapersOnLine, vol. 52, no. 9, pp. 260-265, 2019, 12th IFAC Symposium on Advances in Control Education ACE 2019. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S2405896319305555
Notes
1. Record compiled with funds from the Ministry of Science and Higher Education (MNiSW), agreement no. SONP/SP/546092/2022, under the "Social Responsibility of Science" programme - module: Popularisation of science and promotion of sport (2024).
2. Financed by the European Regional Development Fund as part of the 2014-2020 Smart Growth Operational Programme, CLARIN - Common Language Resources and Technology Infrastructure, project no. POIR.04.02.00-00C002/19.
Document type
YADDA identifier
bwmeta1.element.baztech-76464271-8421-4563-bc28-6bf086abdea7