The article compares the most popular closed PLCs with the concept of open, layer-designed controllers, a category whose market presence is steadily growing. It highlights the strengths and weaknesses of controllers based on both microcomputers and microcontrollers, emphasizing that there is room in the market for both solutions. The core of the paper is the presentation of OpenCPLC, an open, layer-designed controller based on a microcontroller, with an emphasis on its innovative features, particularly the straightforward implementation of support through AI language models.
The rapid growth in the volume and complexity of documents across various domains calls for advanced automated methods that improve the efficiency and accuracy of information extraction and analysis. This paper evaluates the efficiency and repeatability of OpenAI's APIs and other Large Language Models (LLMs) in automating question-answering tasks across multiple documents, focusing on the analysis of Data Privacy Policy (DPP) documents of selected EdTech providers. We test how well these models perform on large-scale text-processing tasks using OpenAI's models (GPT-3.5 Turbo, GPT-4, GPT-4o) and APIs in several frameworks: direct API calls (i.e., one-shot learning), LangChain, and Retrieval Augmented Generation (RAG) systems. We also evaluate a local deployment of quantized LLMs (Llama-2-13B-chat-GPTQ) with FAISS. Through systematic evaluation against predefined use cases and a range of metrics, including response format, execution time, and cost, the study provides insights into optimal practices for document analysis. Our findings demonstrate that calling OpenAI's LLMs via the API is a practical way to accelerate document analysis when local GPU-powered infrastructure is not an option, particularly for long texts; on the other hand, local deployment remains valuable for keeping data within private infrastructure. The quantized models retain substantial relevance despite having fewer parameters than ChatGPT and impose no processing restrictions on the number of tokens. In addition to confirming the usefulness of LLMs for improving document-analysis procedures, the study offers insights on maximizing their use for better efficiency and data governance.
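To make the RAG workflow mentioned in the abstract concrete, the sketch below shows the retrieval-then-prompt step of such a pipeline. Note this is a self-contained illustration only: the study uses FAISS over embedding vectors and sends the assembled prompt to an OpenAI model via the API, whereas this sketch substitutes simple bag-of-words cosine similarity for the vector index, and the policy snippets and question are invented examples, not data from the paper.

```python
# Minimal RAG-style retrieval sketch: rank document chunks by similarity
# to a question, then assemble a prompt from the top matches.
# Assumption: bag-of-words cosine similarity stands in for the FAISS
# embedding index used in the actual study.
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Term-frequency vector of a lowercased, whitespace-split text."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(chunks: list[str], question: str, k: int = 2) -> list[str]:
    """Return the k chunks most similar to the question."""
    qv = vectorize(question)
    ranked = sorted(chunks, key=lambda c: cosine(vectorize(c), qv), reverse=True)
    return ranked[:k]

def build_prompt(chunks: list[str], question: str, k: int = 2) -> str:
    """Assemble a grounded QA prompt from the retrieved context."""
    context = "\n".join(retrieve(chunks, question, k))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# Invented example: fragments of a hypothetical data privacy policy.
chunks = [
    "Personal data is retained for two years after account closure.",
    "The service uses cookies to measure engagement.",
    "Data may be shared with subprocessors listed in Annex 1.",
]
prompt = build_prompt(chunks, "How long is personal data retained?", k=1)
print(prompt)
```

In the full pipeline, `prompt` would be sent to the chosen model (via the OpenAI API or a local quantized Llama deployment); restricting the context to retrieved chunks is what lets long documents fit within the model's token limits.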