Currently, we live in a culture of being overly busy, yet this busyness does not translate into efficiency or speed of execution. Enterprises constantly look for methods and tools to become more efficient. The most popular production management method is Lean Manufacturing; the Theory of Constraints is less known. This work continues research comparing these methods by means of computer simulation of the analysed production process in a selected enterprise, over a 24-hour period and over a week. An attempt was made to simplify the comparison of the methods, based on the obtained simulations, in terms of costs. In the analysed case, the more advantageous solution is the DBR method. For producing various orders that do not require 100% utilisation of the bottleneck workstation, Kanban is a frequent practice, as it provides greater flexibility in order execution.
The article presents a new concept for monitoring industrial tank reactors. The presented concept allows for faster and more reliable monitoring of industrial processes, which increases their reliability and reduces operating costs. The innovative method is based on electrical tomography; it is non-invasive and enables the imaging of phase changes inside tanks filled with liquid. In particular, the hybrid tomograph can detect gas bubbles and crystals formed during industrial processes. The main novelty of the described solution is the simultaneous use of two types of electrical tomography: impedance and capacitance. Another novelty is the use of an LSTM network to solve the tomographic inverse problem, made possible by treating the measurement vector as a data sequence. Research has shown that the proposed hybrid solution with the LSTM algorithm works better than separate systems based on impedance or capacitance tomography alone.
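As a rough illustration of the idea of treating a measurement vector as a data sequence, the sketch below runs a minimal NumPy LSTM forward pass over a hypothetical electrode-measurement sequence. The architecture, sizes, and weights are invented for illustration; the paper's actual network is not reproduced here.

```python
import numpy as np

def lstm_forward(x_seq, Wx, Wh, b):
    """Single-layer LSTM forward pass over a sequence.

    x_seq: (T, n_in) -- measurement vector treated as a T-step sequence
    Wx: (n_in, 4*n_h), Wh: (n_h, 4*n_h), b: (4*n_h,)
    Gate order in the stacked weights: input, forget, cell, output.
    """
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    n_h = Wh.shape[0]
    h = np.zeros(n_h)
    c = np.zeros(n_h)
    for x in x_seq:
        z = x @ Wx + h @ Wh + b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        g = np.tanh(g)
        c = f * c + i * g          # cell state carries sequence context
        h = o * np.tanh(c)
    return h  # final hidden state, to be mapped to image pixels downstream

# Hypothetical sizes: 96 electrode readings fed as 96 one-element steps.
rng = np.random.default_rng(0)
T, n_in, n_h = 96, 1, 32
x_seq = rng.standard_normal((T, n_in))
Wx = rng.standard_normal((n_in, 4 * n_h)) * 0.1
Wh = rng.standard_normal((n_h, 4 * n_h)) * 0.1
b = np.zeros(4 * n_h)
h_final = lstm_forward(x_seq, Wx, Wh, b)
```

The recurrence is what lets the network exploit the ordering of electrode measurements, which a plain dense network would ignore.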
This article demonstrates the application of a gas sensor array to monitor the effectiveness of the absorption process of air stream purification from odorous compounds (toluene vapors). A self-constructed matrix consisting of five commercially available gas sensors was used. Multiple linear regression (MLR) was selected as the statistical technique used to calibrate the matrix. Gas chromatography coupled with a flame ionization detector (GC-FID) was used as the reference analytical technique, which enabled reliable quantitative determinations of the toluene concentration in the samples. A commercially available absorption liquid dedicated to non-polar compounds was used as the absorbent. The process was carried out in two identical systems: in the first, pure toluene was absorbed, and in the second, toluene vapor contaminated with acetone. This approach allowed verifying the selectivity of the prepared MLR calibration model for process control in the presence of more or less expected pollutants in the treated gas. The results obtained with the gas sensor array were related to the reference technique and confirm the usefulness and advisability of using these devices to monitor absorption processes as a cheaper and more time-efficient alternative to chromatographic methods. The root mean square error (RMSE) of absorptivity determination between the analytical and sensor techniques was 0.019 when treating pure toluene vapors and 0.041 for toluene vapors contaminated with acetone. Compared to instrumental techniques, sensor matrices are technologically less complex, useful for laboratory purposes, and show application potential for field studies. However, it is necessary to develop more sensitive and selective chemical gas sensor arrays and to better master advanced data processing and identification techniques.
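A minimal sketch of the MLR calibration step described above: a five-sensor response matrix is regressed against a reference concentration, and the RMSE against the reference is computed. All sensitivities, concentrations, and noise levels are synthetic, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for the experiment: five sensor signals vs. a
# reference toluene concentration from GC-FID (values are hypothetical).
n_samples = 60
true_conc = rng.uniform(10, 200, n_samples)          # ppm, reference values
sensitivities = np.array([0.8, 1.5, 0.3, 1.1, 0.6])  # per-sensor response
sensors = np.outer(true_conc, sensitivities)
sensors += rng.normal(0, 2.0, sensors.shape)         # sensor noise

# Multiple linear regression: conc ~ b0 + b1*s1 + ... + b5*s5
X = np.column_stack([np.ones(n_samples), sensors])
coef, *_ = np.linalg.lstsq(X, true_conc, rcond=None)
pred = X @ coef

rmse = np.sqrt(np.mean((pred - true_conc) ** 2))
print(f"RMSE vs. reference: {rmse:.3f} ppm")
```

In practice the fitted coefficients would be validated on held-out samples before being used for process monitoring.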
The vision of implementing the next stage of the green transformation in the form of the negotiated “fit for 55” package raises concerns among representatives of the energy and heating industry in Poland. These concerns are mainly related to the scale of the investment challenges and the costs already incurred under the CO2 emission allowance trading system, i.e. the EU ETS. Russia’s invasion of Ukraine has revised the existing directions of the policy pursued as part of the transformation from fossil fuels to renewable energy sources, with gas prioritised as a transitional fuel. Efficiency has again become the most important factor of the transformation, significantly supported by the digitization of energy management at every level, from minimizing the use of primary energy at the generation stage, through transmission, to final consumption. Efficiency achieved through digital models for optimizing process decisions, which also aim at economic optimization, should reduce costs at every stage of heat supply. The article presents examples of the implementation of econometric models for programming network operation which, supplemented with elements of artificial intelligence drawing on an updated database, can be used in various spheres of process and business decision-making. Digital analysis, and an early warning system built on it, improves work efficiency and communication. Heating companies, largely responsible for the success of “fit for 55”, now have large amounts of data measured in the processes of energy production and supply, but they do not always fully use these resources. Meanwhile, effective digitization, aimed at reducing the consumption of primary energy carriers, means not only measurement and data archiving but also purposeful analysis of the collected data sets for modelling the heat supply process and formulating recommendations for sound decisions.
Analytical design of PID-type controllers for linear plants based on the magnitude optimum criterion usually results in very good control quality and can be applied directly to high-order linear models with dead time, without the need for any model reduction. This paper presents an analysis of the properties of this tuning method in the case of the PI controller, showing that it guarantees closed-loop stability and a large stability margin for stable linear plants without zeros, although there are limitations in the case of oscillating plants. Although the magnitude optimum criterion prescribes the closed-loop response only for low frequencies and the stability margin requirements are not explicitly included in the design objective, the analysis reveals that proper open-loop behavior in the middle and high frequency ranges, decisive for closed-loop stability and robustness, is ensured automatically for the considered class of linear systems if all damping ratios corresponding to the poles of the plant transfer function, excluding the dead-time term, are sufficiently high.
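The flat low-frequency closed-loop magnitude that the criterion prescribes can be checked numerically. The sketch below uses a textbook magnitude-optimum PI tuning for an assumed second-order lag plant (this illustrates the criterion, not the paper's more general high-order design).

```python
import numpy as np

# Assumed plant G(s) = K / ((1 + s*T1)(1 + s*T2)), T1 dominant.
K, T1, T2 = 2.0, 1.0, 0.1

# Textbook magnitude-optimum PI: cancel the dominant lag (Ti = T1)
# and set Kp = T1 / (2*K*T2).
Kp = T1 / (2 * K * T2)
Ti = T1

def closed_loop_mag(w):
    """|T(jw)| of the closed loop with the PI controller above."""
    s = 1j * w
    G = K / ((1 + s * T1) * (1 + s * T2))
    C = Kp * (1 + 1 / (Ti * s))
    L = G * C
    return abs(L / (1 + L))

# |T(jw)| stays close to 1 over a wide low-frequency band:
for w in (0.01, 0.1, 0.5):
    print(f"w = {w:5.2f}  |T(jw)| = {closed_loop_mag(w):.6f}")
```

For this tuning the closed loop reduces to 1/(2*T2^2*s^2 + 2*T2*s + 1), whose magnitude is maximally flat near w = 0, which is exactly the magnitude optimum property.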
Modern construction standards from the ACI, EN, ISO, and EC groups have introduced numerous statistical procedures for interpreting concrete compressive strength results obtained on an ongoing basis (in the course of structure construction), the values of which are subject to various influences, e.g. arising from climatic conditions, manufacturing variability and component property variability, which are also described by specific random variables. Such an approach is a consequence of introducing the limit-states method into the calculations of building structures, which takes into account a set of various factors influencing structural safety. The term “concrete family” was also introduced; however, neither the principle of partitioning the results nor, even more importantly, the statistically significant number of results within a family was specified. Deficiencies in these procedures were partially addressed by the authors of this article, who have published papers on partitioning strength test time series using the Pearson, Student's t, and Mann-Whitney U tests. However, those publications define neither the sizes of the obtained subsets and their distribution nor the probability of their occurrence. This study fills this gap by showing the size of a statistically determined concrete family, with a defined probability distribution of its isolation.
Modern construction standards from the EN, ISO, and EC groups have introduced numerous statistical procedures for interpreting compressive strength results obtained on an ongoing basis (during the construction of a structure), whose values are subject to various random influences, for example arising from climatic conditions, production variability and the variability of component properties, which are also described by specific random variables. This approach is a consequence of introducing the limit-states method into structural calculations, which takes into account a set of various factors influencing structural safety. For this reason, many procedures have been implemented in recent years to control and regulate the concrete producer's compliance with the limiting parameters of the mix, ranging from a statistical, global assessment of strength, through interval procedures (Shewhart control charts), to complex stochastic analyses containing fine-interval assessments of series of test results with unified, statistically significant values of the basic strength parameters. Particular attention is paid, both in construction practice and in theoretical considerations, to the producer guaranteeing the concrete strength with a 95% probability of occurrence. The European standard PN-EN 206-1 additionally introduced the term concrete family (Family of concrete concept), defined as “(...) a group of concretes with an established and documented relationship between the relevant properties”, without, however, giving any quantitative indications as to the size of this group or the stabilisation of its characteristics (for example compressive strength) within any time intervals. When large quantities of concrete mix are produced continuously, a correct estimation of the concrete family is justified from the point of view of the reliability of the structures later in service, as evidenced by the extensive literature cited in the article.
Assigning a concrete to a family is strictly related to the relationship between strength and technological conditions. Determining separate sets (concrete families) amounts to partitioning a series of compressive strength test results into groups with statistically stabilised strength parameters within specified time intervals of their production. The subject of the analyses presented in this paper is therefore a particularly large, specified number of compressive strength test results collected over one year during the concreting of several hydrotechnical structures of the same strength class C35/45 and a constant mix design with minor seasonal (summer, winter) modifications. The theoretical part of the paper gives the basis for verifying hypotheses on the separation of statistically compact strength time series forming so-called concrete families. The partitioning of the strength time-series results was carried out using the Pearson, Student's t, and Mann-Whitney tests. For a selected example, the numbers of obtained subsets and the probabilities of such counts occurring were determined. This is a significant enrichment of concrete quality theory with the size of a statistically separated concrete family, together with the probability distribution of its isolation.
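A minimal sketch of one of the partitioning tests named above, the Mann-Whitney U test, deciding whether two strength series belong to one concrete family. The strength values and seasonal shift are synthetic, not the paper's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical 28-day compressive strengths (MPa) for class C35/45 concrete:
# a "summer" batch series and a "winter" series with a seasonal shift.
summer = rng.normal(53.0, 3.0, 30)
winter = rng.normal(48.0, 3.0, 30)

# Mann-Whitney U test: do the two series come from the same distribution?
u_stat, p_value = stats.mannwhitneyu(summer, winter, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4g}")
if p_value < 0.05:
    print("Series differ significantly -> treat as separate concrete families.")
```

The non-parametric test is convenient here because strength series need not be normally distributed within a production interval.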
The purpose of this paper was to develop a methodology for diagnosing the causes of die-casting defects based on advanced modelling, in order to correctly identify the process parameters that have a significant impact on defect generation, optimize these parameters and raise product quality, thereby improving manufacturing efficiency. The industrial data used for modelling came from a foundry that is a leading manufacturer of high-pressure die-cast aluminum cylinder blocks for the world's leading automotive brands. The paper presents some aspects related to data analytics in the era of Industry 4.0 and the Smart Factory concept. The methodology includes computational tools for advanced data analysis and modelling, such as ANOVA (analysis of variance) and ANN (artificial neural networks), both applied on the Statistica platform, followed by gradient and evolutionary optimization methods applied in the Solver add-in of MS Excel. The main features of the presented methodology are explained, presented in tables and illustrated with appropriate graphs. The opportunities and risks of implementing data-driven modelling systems in high-pressure die-casting processes are also considered.
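A minimal sketch of the ANOVA step such a methodology relies on: a one-way analysis of variance testing whether a process parameter setting significantly affects a casting property. The parameter, property, and all numbers are hypothetical stand-ins for the foundry data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical casting hardness (HB) grouped by three piston-speed settings.
low    = rng.normal(95.0, 4.0, 25)
medium = rng.normal(100.0, 4.0, 25)
high   = rng.normal(104.0, 4.0, 25)

# One-way ANOVA: does the piston-speed setting significantly affect hardness?
f_stat, p_value = stats.f_oneway(low, medium, high)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")
```

Parameters flagged as significant by such a screening would then be candidates for the gradient or evolutionary optimization stage.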
This paper presents how the Q-learning algorithm can be applied as a general-purpose self-improving controller for use in industrial automation, as a substitute for a conventional PI controller implemented without proper tuning. The traditional Q-learning approach is redefined to better fit applications in practical control loops, including a new definition of the goal state via the closed-loop reference trajectory and discretization of the state space and accessible actions (manipulated variables). The properties of the Q-learning algorithm are investigated in terms of practical applicability, with special emphasis on initializing the Q-matrix based only on preliminary PI tunings to ensure bumpless switching between the existing controller and the replacing Q-learning algorithm. A general approach to the design of the Q-matrix and the learning policy is suggested, and the concept is systematically validated by simulation in application to two example processes, exhibiting first-order dynamics and oscillatory second-order dynamics. The results show that online learning through interaction with the controlled process is possible and ensures a significant improvement in control performance compared to an arbitrarily tuned PI controller.
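The structure of such a controller can be sketched as tabular Q-learning over a discretised control error, with the Q-matrix initialised so that the initial greedy policy mimics a PI increment (the bumpless-switching idea). The discretisation, plant, and all constants below are illustrative assumptions, not the paper's design.

```python
import numpy as np

# Discretised control-error states and incremental control actions.
error_bins = np.linspace(-1.0, 1.0, 11)               # state = nearest bin
actions = np.array([-0.1, -0.05, 0.0, 0.05, 0.1])     # candidate du values

def state_of(err):
    return int(np.argmin(np.abs(error_bins - np.clip(err, -1, 1))))

# Bumpless start: initialise Q so the greedy action mimics a PI increment,
# du ~ Ki * e for a slowly varying error (Kp term omitted for brevity).
Ki = 0.1
Q = np.zeros((len(error_bins), len(actions)))
for s, e in enumerate(error_bins):
    Q[s] = -np.abs(actions - Ki * e)    # best action = closest to PI's du

# First-order plant x+ = 0.9 x + 0.2 u, setpoint 1.0.
alpha, gamma, eps = 0.2, 0.9, 0.1
rng = np.random.default_rng(3)
for episode in range(200):
    x, u = 0.0, 0.0
    s = state_of(1.0 - x)
    for k in range(50):
        greedy = int(np.argmax(Q[s]))
        a = rng.integers(len(actions)) if rng.random() < eps else greedy
        u += actions[a]
        x = 0.9 * x + 0.2 * u
        e = 1.0 - x
        s2 = state_of(e)
        r = -abs(e)                     # reward: stay near the setpoint
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
        s = s2
```

Online interaction then refines the PI-seeded table, which is the self-improving behaviour the paper validates by simulation.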
The paper is concerned with the presentation and analysis of the Dynamic Matrix Control (DMC) model predictive control algorithm with the process input trajectories represented by parametrised sums of Laguerre functions. First, the formulation of the DMCL (DMC with Laguerre functions) algorithm is presented. The algorithm differs from the standard DMC algorithm in the formulation of the decision variables of the optimization problem: the variables are the coefficients of the Laguerre-function approximations instead of the control input values. The DMCL algorithm is then applied to two multivariable benchmark problems to investigate its properties and to provide a concise comparison with standard DMC. Problems with difficult dynamics are selected, which usually lead to longer prediction and control horizons. The benefits of using Laguerre functions are shown and are especially evident for smaller sampling intervals.
Classical model predictive control (MPC) algorithms need very long horizons when the controlled process has complex dynamics. In particular, the control horizon, which determines the number of decision variables optimised on-line at each sampling instant, is crucial since it significantly affects computational complexity. This work discusses a nonlinear MPC algorithm with on-line trajectory linearisation, which makes it possible to formulate a quadratic optimisation problem, as well as parameterisation using Laguerre functions, which reduces the number of decision variables. Simulation results of classical (not parameterised) MPC algorithms and some strategies with parameterisation are thoroughly compared. It is shown that for a benchmark system the MPC algorithm with on-line linearisation and parameterisation gives very good quality of control, comparable with that possible in classical MPC with long horizons and nonlinear optimisation.
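The parameterisation discussed in the two abstracts above can be sketched by generating a discrete orthonormal Laguerre basis: a whole control trajectory is then described by a handful of basis coefficients instead of one decision variable per sample. The construction below follows the standard state-space formulation of discrete Laguerre networks; the pole and sizes are illustrative.

```python
import numpy as np

def laguerre_basis(a, N, K):
    """Discrete orthonormal Laguerre functions l_1..l_N over k = 0..K-1.

    a -- Laguerre pole, 0 < a < 1
    N -- number of functions (decision variables per control input)
    K -- horizon length in samples
    """
    beta = 1.0 - a * a
    # State-space generation: L(k+1) = A @ L(k), with lower-triangular A.
    A = np.zeros((N, N))
    for i in range(N):
        A[i, i] = a
        for j in range(i):
            A[i, j] = beta * (-a) ** (i - j - 1)
    L = np.sqrt(beta) * np.array([(-a) ** i for i in range(N)])
    basis = np.zeros((N, K))
    for k in range(K):
        basis[:, k] = L
        L = A @ L
    return basis

# With N = 4 functions, a control trajectory over a long horizon is
# described by only 4 decision variables instead of one per sample.
B = laguerre_basis(a=0.5, N=4, K=2000)
gram = B @ B.T          # should be close to the identity (orthonormality)
print(np.round(gram, 3))
```

In an MPC optimisation the input increments would be expressed as du(k) = sum_i c_i * l_i(k), and only the coefficients c_i are optimised online, which is the source of the reduction in computational complexity.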
This article presents the results of numerical research on the damping of oscillatory phenomena occurring in the continuous bioethanol production process. Proportional and proportional-integral controllers were tested for this purpose. Numerical analysis showed that an appropriate selection of the gain Kc makes it possible to suppress the oscillations in the system, and that introducing the integral term improves the performance of the control system. Numerical calculations demonstrated that the PI controller is effective at damping the occurring oscillations, while the presence of the integral term allows the gain coefficient value to be reduced. After proper selection of the parameters, the PI controller effectively suppresses the oscillations present in the system.
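The effect described above can be sketched with a simple simulation: a P controller on an underdamped plant leaves a steady-state offset, while adding the integral term removes it. The second-order plant below is an illustrative stand-in for the bioethanol process model, and the gains are arbitrary.

```python
import numpy as np

# Underdamped plant y'' + 0.4 y' + y = u (illustrative parameters).
def simulate(Kc, Ki, t_end=150.0, dt=0.01, setpoint=1.0):
    y, dy, integral = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        e = setpoint - y
        integral += e * dt
        u = Kc * e + Ki * integral          # PI law (P-only when Ki = 0)
        ddy = u - 0.4 * dy - y
        dy += ddy * dt
        y += dy * dt
    return abs(setpoint - y)                # remaining control error

p_err = simulate(Kc=1.0, Ki=0.0)   # proportional only: offset remains
pi_err = simulate(Kc=1.0, Ki=0.3)  # PI: integral action removes the offset
print(f"P residual error:  {p_err:.3f}")
print(f"PI residual error: {pi_err:.3f}")
```

For this plant (unit DC gain) the P-only loop settles at an error of 1/(1+Kc) = 0.5, while the PI loop drives the error to zero, provided the gains respect the Routh stability bound Kc*0.4 + 0.4 > Ki.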
The Internet of Production (IoP) describes a vision in which a broad range of production data is available in real time. Based on these data, new control types can be implemented, for example, which improve individual manufacturing processes directly at the machine. A possible application scenario is tool deflection compensation. Although the problem of tool deflection is well known in industry, process-parallel compensation is not common in industrial applications. State-of-the-art solutions require time- and cost-consuming tests to determine the necessary cutting parameters. An NC-integrated compensation that adapts the tool path in real time makes these tests obsolete and furthermore enables higher chip removal rates. In this paper, a control-internal real-time compensation of tool deflection is described, based on a process-parallel measurement of the process forces. The compensation software is designed as an extension to the NC kernel and is thereby integrated into the position control loop of an in-series NC. The compensation movements are generated by manipulating the reference values of the feed axes. The approach is investigated in experiments with linear axis movements, during which a significant reduction of geometrical machining errors proved possible.
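The basic principle of manipulating axis reference values against a force-induced deflection can be sketched with a cantilever-beam approximation of the tool; this is a common first-order model, not the paper's actual compensation algorithm, and all numbers are assumptions.

```python
import numpy as np

# Cantilever-beam approximation of end-mill deflection.
F = 200.0        # measured process force normal to the tool, N (assumed)
L = 0.060        # tool overhang length, m
E = 600e9        # Young's modulus of carbide, Pa
d = 0.010        # tool diameter, m

I = np.pi * d**4 / 64.0          # second moment of area of the shank
delta = F * L**3 / (3 * E * I)   # tip deflection, m
print(f"Predicted tool deflection: {delta * 1e6:.1f} um")

# NC-side compensation: shift the commanded path against the deflection.
reference = np.array([10.000e-3, 25.000e-3])   # commanded XY position, m
force_dir = np.array([1.0, 0.0])               # unit vector of deflection
compensated = reference - delta * force_dir
```

In a process-parallel implementation F would be updated from the force measurement every position-control cycle, so the offset tracks the varying cutting load.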
This paper proposes a new method for the analysis of continuous and periodic event-based state-feedback plus static feedforward controllers that regulate linear time invariant systems with time delays. Measurable disturbances are used in both the control law and triggering condition to provide better disturbance attenuation. Asymptotic stability and L2-gain disturbance rejection problems are addressed by means of Lyapunov–Krasovskii functionals, leading to performance conditions that are expressed in terms of linear matrix inequalities. The proposed controller offers better disturbance rejection and a reduction in the number of transmissions with respect to other robust event-triggered controllers in the literature.
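The core event-triggering mechanism can be illustrated on a scalar example: the sensor transmits the state to the controller only when a triggering condition fires, trading transmissions for a small control error. All numbers are illustrative; the paper's LMI-based design for delayed systems with feedforward is far more general.

```python
# Scalar sketch of event-triggered state feedback.
a, b, k = 1.1, 1.0, 0.6          # open-loop unstable plant x+ = a x + b u
sigma, eps = 0.1, 0.01           # relative + absolute trigger thresholds

x, x_hat = 1.0, 1.0              # x_hat = last transmitted state
events, n_steps = 1, 50
history = []
for _ in range(n_steps):
    u = -k * x_hat               # controller only sees transmitted samples
    x = a * x + b * u
    if abs(x - x_hat) > sigma * abs(x) + eps:   # triggering condition
        x_hat = x                # transmit and update the controller copy
        events += 1
    history.append(x)

print(f"transmissions: {events} / {n_steps} steps, final |x| = {abs(x):.4f}")
```

The triggering condition bounds the mismatch between the true and transmitted state, so the loop stays practically stable while transmitting far less often than a periodic implementation.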
The article presents an innovative concept for improving the monitoring and optimization of industrial processes. The developed method is based on a system of many separately trained neural networks, in which each network generates a single point of the output image. Thanks to the elastic net method, the implemented algorithm removes correlated and irrelevant variables from the input measurement vector, making it more resistant to data noise. The advantage of the described solution over known non-invasive methods is the higher resolution of images of artifacts (crystals or gas bubbles) dynamically appearing inside the reactor, which contributes substantially to the early detection of hazards and problems associated with the operation of industrial systems, and thus increases the efficiency of chemical process control.
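The variable-reduction idea above can be sketched for a single output pixel: an elastic net model trained on a measurement vector with many irrelevant components drives their weights toward zero while keeping the informative ones. The data below are synthetic stand-ins, not tomographic measurements.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(5)

# Synthetic stand-in for one output pixel of the reconstructed image: the
# pixel value depends on a few informative measurements; the rest is noise.
n_samples, n_informative, n_noise = 300, 5, 45
X_info = rng.standard_normal((n_samples, n_informative))
X_noise = rng.standard_normal((n_samples, n_noise))      # irrelevant inputs
X = np.hstack([X_info, X_noise])
true_w = np.array([2.0, -1.5, 1.0, 0.8, -0.6])
y = X_info @ true_w + rng.normal(0, 0.1, n_samples)

# One separately trained model per image point; the elastic net penalty
# prunes correlated and irrelevant components of the measurement vector.
model = ElasticNet(alpha=0.05, l1_ratio=0.5).fit(X, y)
informative_mag = np.abs(model.coef_[:n_informative]).max()
noise_mag = np.abs(model.coef_[n_informative:]).max()
print(f"largest informative weight: {informative_mag:.3f}")
print(f"largest irrelevant weight:  {noise_mag:.3f}")
```

Repeating this per pixel yields the bank of small independent models the article describes, each insensitive to the noisy part of the input.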
The paper presents some aspects of a development project related to Industry 4.0 that was executed at Nemak, a leading manufacturer of aluminium castings for the automotive industry, in its high pressure die casting foundry in Poland. The developed data analytics system aims at predicting the casting quality based on the production data. The objective is to use these data to optimize process parameters so as to raise product quality and improve productivity. A characterization of the production data, including the recorded process parameters and the role of the mechanical properties of the castings as process outputs, is presented. The system incorporates advanced data analytics and computation tools based on the analysis of variance (ANOVA), implemented on an MS Excel platform. It enables the foundry engineers and operators to find the most efficient process variables to ensure high mechanical properties of the aluminium engine block castings. The main features of the system are explained and illustrated by appropriate graphs. Opportunities and threats connected with applications of data-driven modelling in die casting are discussed.
Many industrial machine vision problems, particularly the real-time control of manufacturing processes such as laser cladding, require robust and fast image processing. The inherent disturbances in images acquired during these processes make classical segmentation algorithms unreliable. Among the many convolutional neural networks introduced recently to solve such difficult problems, U-Net balances simplicity with segmentation accuracy. However, it is too computationally intensive for use in many real-time processing pipelines. In this work we present a method of identifying the most informative levels of detail in the U-Net. By processing the image only at the selected levels, we reduce the total computation time by 80% while still preserving adequate segmentation quality.
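A back-of-the-envelope computation shows why dropping U-Net levels pays off so directly: in the standard architecture the channel count doubles as the spatial resolution halves, so each encoder level costs roughly the same number of conv FLOPs, and removing levels cuts the total almost linearly. The counts below are an idealised sketch under that assumption, not the paper's measured figures.

```python
# Idealised conv FLOP count per U-Net encoder level.
def conv_flops(h, w, c_in, c_out, k=3):
    return 2 * h * w * c_in * c_out * k * k   # multiply-adds per k x k conv

def encoder_flops(levels, h=512, w=512, base=64):
    total = 0
    for i in range(levels):
        c = base * 2**i
        # two convolutions per level at resolution (h/2^i) x (w/2^i)
        total += 2 * conv_flops(h >> i, w >> i, c, c)
    return total

full = encoder_flops(5)
reduced = encoder_flops(1)         # keep only the most informative level
print(f"reduction: {1 - reduced / full:.0%}")
```

Under this constant-cost-per-level idealisation, keeping one of five levels yields an 80% reduction, which matches the order of the speed-up reported above (the real saving depends on which levels are selected and on the decoder path).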
An inherent element of the supervision of production processes is the study of their quality capability. Most analyzed processes have two-sided specification limits. A problem arises when the process restrictions are one-sided, since the standard procedure for assessing process capability cannot then be used. The paper presents a methodology for assessing the capability of processes with a one-sided specification limit and applies it to an actual production process.
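For a one-sided specification, only the corresponding one-sided capability index is meaningful, e.g. Cpk(upper) = (USL - mean) / (3 s) when only an upper limit exists. A minimal sketch with illustrative data (the paper's process data are not reproduced):

```python
import numpy as np

# Capability against a one-sided upper specification limit (USL).
measurements = np.array([9.8, 10.1, 10.0, 9.9, 10.2])   # illustrative data
usl = 11.0

mean = measurements.mean()
s = measurements.std(ddof=1)          # sample standard deviation
cpk_upper = (usl - mean) / (3 * s)
print(f"mean = {mean:.3f}, s = {s:.4f}, Cpk(upper) = {cpk_upper:.3f}")
```

The two-sided index Cp = (USL - LSL) / (6 s) is undefined here because there is no LSL, which is exactly why the standard assessment procedure breaks down for one-sided limits.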
In this article, the dynamic responses of heat exchanger networks to disturbances and setpoint changes were studied. Various control strategies, including proportional-integral (PI) control, model predictive control (MPC), the passivity approach, and passivity-based model predictive control, were used to regulate all outlet temperatures. The performance of the controllers was analyzed through two procedures: 1) inducing a ±5% step disturbance in the supply temperature, or 2) tracking a ±5°C change in the target temperature. The performance criteria used to evaluate these control modes were settling time and percentage overshoot. According to the results, the passivity-based model predictive controllers produced the best disturbance rejection, and model predictive control proved to be the best controller for setpoint tracking, whereas the performance of both the PI and passivity controllers was found to be merely acceptable.
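The two performance criteria used above can be computed directly from a sampled step response. The sketch below evaluates percentage overshoot and the 2% settling time on an analytic second-order response standing in for an outlet-temperature trajectory; the metric definitions are generic, the response itself is illustrative.

```python
import numpy as np

# Step response of a second-order lag (zeta = 0.5, wn = 1) as a stand-in
# for an outlet-temperature response.
zeta, wn = 0.5, 1.0
t = np.arange(0.0, 30.0, 0.001)
wd = wn * np.sqrt(1 - zeta**2)
phi = np.arccos(zeta)
y = 1 - np.exp(-zeta * wn * t) * np.sin(wd * t + phi) / np.sqrt(1 - zeta**2)

# Percentage overshoot relative to the final value of 1.
overshoot = (y.max() - 1.0) * 100.0

# Settling time: last instant the response lies outside the +/-2% band.
outside = np.abs(y - 1.0) > 0.02
settling_time = t[outside][-1] if outside.any() else 0.0

print(f"overshoot = {overshoot:.1f} %, settling time = {settling_time:.2f} s")
```

For zeta = 0.5 the theoretical overshoot is exp(-pi*zeta/sqrt(1-zeta^2)) ~ 16.3%, which the sampled computation reproduces; the same two functions applied to simulated closed-loop trajectories give the comparison table the article's evaluation is based on.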
Production-related preliminary damage and residual stresses have significant effects on the function and damage development of fiber composite components. For this reason it is important, especially for safety-relevant components, to check each item. This task becomes a challenge in the context of serial production, whose importance is growing in the field of lightweight components. The demand for continuously reinforced thermoplastic composites is increasing in various industrial areas. Accordingly, an innovative Continuous Orbital Winding (COW) process was developed within the framework of the Federal Cluster of Excellence EXC 1075 “MERGE Technologies for Multifunctional Lightweight Structures”. COW aims at mass-production-suited processing of special semi-finished fiber-reinforced thermoplastic materials. This resource-efficient and function-integrated manufacturing process combines thermoplastic tape winding with automated thermoplastic tape laying. The process has a modular concept, which allows other special applications and technologies to be implemented, e.g. the integration of different sensor types and high-speed automated quality inspection. The results show how to control quality and improve the stability of the COW process for large-scale production. This was realized by developing concepts for a fully integrated quality-testing unit for the automatic damage assessment of composite structures. For this purpose, components produced by the COW method were examined for imperfections, based on the results of non-destructive and destructive materials testing.
Screw presses are energy-restricted forming machines that use rotational energy stored in a flywheel for forming; this energy is converted into a linear movement by a threaded screw. Screw presses are widely used for forging steel, aluminum and brass. In a direct-driven electrical screw press, a reversible electric motor is mounted directly on the screw and on the press frame above the flywheel. With directly driven screw presses, the blow energy can be dosed exactly from one blow to the next. However, no prior work is known that uses the blow energy as a control input in a targeted manner to influence the properties of the forging. The purpose of the present work is to lay the foundations for property control through blow energy dosing during forging on screw presses. Process control is becoming increasingly interesting due to ever increasing customer demands and the need for resource-efficient production. A major challenge is the variation of process parameters, e.g. temperature variations in the furnace, during transport or due to inherent uncertainty in the heat transfer to the dies and the environment. If the process conditions change, deviations from the planned process trajectory may lead to insufficient die filling or undesired final properties. Forged parts require high precision regarding part geometry and material properties. During forming, two temperature-related mechanisms take place: heat conduction due to contact with the tools and heat dissipation due to plastic deformation. The heat transfer acts as a disturbance, while the impact energy can be used as a control input. In this work, investigations into process control by impact energy dosing are put forward using FE (finite element) simulations.