The article presents the problem of scheduling a multi-unit road construction project. During the execution of works in such projects, successive activities in the individual units may partially overlap. Because the time each unit is occupied by construction works should be as short as possible, continuity of the works within each unit is assumed. These assumptions lead to an optimization task of finding the order of executing the units that minimizes the project duration. In the article, this problem was successfully solved with a genetic search algorithm and illustrated with a practical example.
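Below is a minimal Python sketch of the kind of genetic search described above: the units are represented as a permutation, and the project duration is evaluated assuming that each crew moves from unit to unit in that order and that work in every unit proceeds without internal breaks. The durations, genetic operators and parameter values are illustrative assumptions, not the authors' exact formulation.

```python
import random

# Illustrative durations: durations[a][u] = time of activity a (crew a) in unit u.
# These numbers are assumptions made up for the sketch.
durations = [
    [4, 6, 3, 5, 7],
    [5, 4, 6, 3, 4],
    [3, 5, 4, 6, 5],
]
A = len(durations)       # number of successive activities (crews)
U = len(durations[0])    # number of units

def project_duration(order):
    """Project makespan for a given unit order, assuming work in each unit is
    continuous (no break between successive activities) and every crew handles
    one unit at a time in the given order."""
    start = 0.0
    for prev, u in zip(order, order[1:]):
        shift, done_prev, done_u = 0.0, 0.0, 0.0
        for a in range(A):
            # crew a must finish unit `prev` before starting unit `u`,
            # while unit `u` is processed without internal breaks
            shift = max(shift, done_prev + durations[a][prev] - done_u)
            done_prev += durations[a][prev]
            done_u += durations[a][u]
        start += shift
    return start + sum(durations[a][order[-1]] for a in range(A))

def order_crossover(p1, p2):
    """Classic OX crossover for permutations."""
    i, j = sorted(random.sample(range(U), 2))
    child = [None] * U
    child[i:j] = p1[i:j]
    fill = [g for g in p2 if g not in child]
    for k in range(U):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

def genetic_search(pop_size=30, generations=200, mut_prob=0.2):
    pop = [random.sample(range(U), U) for _ in range(pop_size)]
    best = min(pop, key=project_duration)
    for _ in range(generations):
        new_pop = [best[:]]                                  # elitism
        while len(new_pop) < pop_size:
            p1, p2 = (min(random.sample(pop, 3), key=project_duration) for _ in range(2))
            child = order_crossover(p1, p2)
            if random.random() < mut_prob:                   # swap mutation
                a, b = random.sample(range(U), 2)
                child[a], child[b] = child[b], child[a]
            new_pop.append(child)
        pop = new_pop
        best = min(pop, key=project_duration)
    return best, project_duration(best)

print(genetic_search())
```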
When integrated with mobile edge computing (MEC), software-defined networking (SDN) allows for efficient network management and resource allocation in modern computing environments. The primary challenge addressed in this paper is the optimization of task offloading and scheduling in SDN-MEC environments. The goal is to minimize the total cost of the system, which is a function of task completion lead time and energy consumption, while adhering to task deadline constraints. This multi-objective optimization problem requires balancing the trade-offs between local execution on mobile devices and offloading tasks to edge servers, considering factors such as computation requirements, data size, network conditions, and server capacities. This research focuses on evaluating the performance of particle swarm optimization (PSO) and Q-learning algorithms under full and partial offloading scenarios. Simulation-based comparisons of PSO and Q-learning show that for large data quantities, PSO is more cost-efficient than the other algorithms, with the cost increase equaling approximately 0.001% per kilobyte, as opposed to 0.002% in the case of Q-learning. As far as energy consumption is concerned, PSO performs 84% and 23% better than Q-learning in the case of full and partial offloading, respectively. The cost of PSO is also less sensitive to network latency conditions than that of the genetic algorithm (GA). Furthermore, the results demonstrate that Q-learning offers better scalability in terms of execution time as the number of tasks increases, and exceeds the outcomes achieved by PSO for task loads of more than 40. These observations indicate that PSO is better suited for large data transfers and energy-critical applications, whereas Q-learning is better suited for highly scalable environments and large numbers of tasks.
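To make the cost model concrete, the following sketch compares the cost of executing a task locally with offloading it to an edge server, using a weighted sum of completion time and energy under a deadline constraint. The device and server parameters, the energy model and the weights are illustrative assumptions, not the values or exact formulation used in the paper.

```python
from dataclasses import dataclass

@dataclass
class Task:
    cycles: float      # required CPU cycles
    data_bits: float   # input data to upload [bits]
    deadline: float    # seconds

# Illustrative device/edge parameters (assumptions, not values from the paper).
F_LOCAL = 1e9          # device CPU speed [cycles/s]
F_EDGE = 8e9           # edge server CPU speed [cycles/s]
K_LOCAL = 1e-27        # effective switched-capacitance coefficient
P_TX = 0.5             # transmit power [W]
RATE = 20e6            # uplink rate [bit/s]
W_TIME, W_ENERGY = 0.5, 0.5

def cost_local(t: Task):
    """Weighted time/energy cost of local execution; None if the deadline is missed."""
    delay = t.cycles / F_LOCAL
    energy = K_LOCAL * F_LOCAL ** 2 * t.cycles       # dynamic CPU energy model
    return None if delay > t.deadline else W_TIME * delay + W_ENERGY * energy

def cost_offload(t: Task):
    """Weighted cost of offloading: upload plus remote execution; None if infeasible."""
    delay = t.data_bits / RATE + t.cycles / F_EDGE
    energy = P_TX * (t.data_bits / RATE)             # device only pays for transmission
    return None if delay > t.deadline else W_TIME * delay + W_ENERGY * energy

task = Task(cycles=2e9, data_bits=4e6, deadline=1.5)
print(cost_local(task), cost_offload(task))          # local misses the deadline here
```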
The fourth industrial revolution has broadly transformed manufacturing systems. However, this transformation is somewhat lacking in traditional or manual production systems due to the absence of IT infrastructure. Such traditional industries could also benefit from real-time control and monitoring. This study develops economic assembly planning, scheduling, and control for a traditional assembly system. We used the concept of the configurable virtual workstation as the digitalization framework and employed decentralized scheduling to reduce the computational effort of scheduling a complex product. The implementation results show that the proposed planning and scheduling transform the traditional assembly process into intelligent scheduling and control with low digitalization effort.
This paper introduces a method that combines K-means clustering, a genetic algorithm (GA) and Lempel-Ziv-Welch (LZW) compression to enhance the efficiency of data aggregation in wireless sensor networks (WSNs). The main goals of this research are to reduce energy consumption, improve network scalability, and enhance data aggregation accuracy. The GA is employed to optimize the cluster formation process by selecting the cluster heads, while LZW compresses the aggregated data to reduce transmission overhead. To further optimize network traffic, scheduling mechanisms are introduced to manage the transmission of packets from sensors to cluster heads. The findings of this study contribute to advancing packet scheduling mechanisms for data aggregation in WSNs, reducing the number of packets sent from sensors to cluster heads. Simulation results confirm the system's effectiveness compared to other compression methods and the non-compression scenarios used in the LEACH, M-LEACH, multi-hop LEACH, and sLEACH approaches.
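The compression step can be illustrated with a textbook LZW encoder of the kind a cluster head could apply to aggregated readings before forwarding them; the byte-oriented dictionary handling below is a generic variant and not necessarily the paper's implementation.

```python
def lzw_compress(data: bytes):
    """Textbook LZW: returns a list of dictionary codes for the input byte string."""
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    w = b""
    codes = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc
        else:
            codes.append(dictionary[w])
            dictionary[wc] = next_code    # grow the dictionary with the new phrase
            next_code += 1
            w = bytes([byte])
    if w:
        codes.append(dictionary[w])
    return codes

# Aggregated sensor payloads are often highly repetitive, which is what LZW exploits.
payload = b"23.5;23.5;23.6;23.5;23.5;23.6;" * 10
codes = lzw_compress(payload)
print(len(payload), "bytes ->", len(codes), "codes")
```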
This paper aims to develop new, highly efficient PSC-algorithms (algorithms that contain a polynomial-time sub-algorithm with sufficient conditions for the optimality of the solutions obtained) for several interrelated problems involving identical parallel machine scheduling. These problems share a common theoretical basis and common principles of solution. Two main intractable scheduling problems are considered: “Minimization of the total tardiness of jobs on parallel machines with machine release times and a common due date” (TTPR) and “Minimising the total tardiness of parallel machines completion times with respect to the common due date with machine release times” (TTCR), together with an auxiliary one, “Minimising the difference between the maximal and the minimal completion times of the machines” (MDMM). The latter is used to efficiently solve the first two. For the TTPR problem and its generalisation to the case when some machine release times extend past the common due date (the TTPRE problem), new theoretical properties are given, obtained on the basis of the previously published ones. Based on the new theoretical results and computational experiments, the PSC-algorithm solving these two problems is modified (sub-algorithms A1, A2). Then the auxiliary problem MDMM is considered and Algorithm A0 is proposed for its solution. Based on the analysis of computational experiments, A0 is included in the PSC-algorithm for solving the TTPR and TTPRE problems as its polynomial component for constructing a schedule with zero tardiness of jobs if such a schedule exists (a new, third sufficient condition of optimality). Next, the second intractable combinatorial optimization problem, TTCR, is considered, its sufficient conditions of optimality are deduced, and it is shown that Algorithm A0 is also an efficient polynomial component of the PSC-algorithm solving the TTCR problem. Next, the case of a partially tardy schedule structure is analysed, in which the functionals of the TTPR and TTCR problems become identical. This makes it possible to use Algorithm A1, developed for the TTPR problem, in this case of the TTCR problem. For Algorithm A1, in addition to the possibility of obtaining a better solution, there exists a theoretically proven estimate of the deviation of the solution from the optimum. Thus, the second PSC-algorithm solving the TTCR problem finds an exact solution or an approximate solution with a strict upper bound on its deviation from the optimum. The practicability of solving the problems under consideration is substantiated.
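The auxiliary MDMM objective, keeping the machine completion times as close together as possible, can be pictured with a simple longest-processing-time greedy assignment; the sketch below is only a generic load-balancing illustration on made-up data, not the paper's Algorithm A0.

```python
import heapq

def lpt_balance(jobs, release_times):
    """Greedy sketch: assign jobs in LPT order to the machine that currently
    finishes earliest; returns per-machine completion times and their spread."""
    heap = [(r, m, []) for m, r in enumerate(release_times)]
    heapq.heapify(heap)
    for p in sorted(jobs, reverse=True):
        finish, m, assigned = heapq.heappop(heap)
        heapq.heappush(heap, (finish + p, m, assigned + [p]))
    completions = sorted((finish, m) for finish, m, _ in heap)
    spread = completions[-1][0] - completions[0][0]   # MDMM-style spread measure
    return completions, spread

jobs = [7, 5, 5, 4, 3, 3, 2]        # illustrative processing times
release_times = [0, 2, 4]           # machines become available at these times
print(lpt_balance(jobs, release_times))
```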
The article examines the principles of scheduling construction works, with particular reference to electrical power projects, focusing on optimizing resource allocation, managing technological processes and reducing the risk of delays. It also presents the principles of updating schedules in response to changing project conditions. An important element of the article is a list of the most common scheduling errors in construction works, drawn up on the basis of a literature review and the author's professional experience. Recommendations are also formulated for planners, among other things on implementing time and resource reserves (human, material and equipment), which are important elements of effective management of highly complex projects.
Offsite construction technologies are developed to reduce project cost and duration. To make the most of the potential offered by prefabrication, the planner should consider the whole supply chain. A failure to coordinate off-site production with on-site erection is a source of waste (waiting time of the construction crews or redundant handling activities on site). Most of the research to date has focused on optimizing the operations of a prefabrication plant assuming a deterministic schedule of demand for its products. The purpose of this paper is to develop a mathematical model for integrated scheduling of off-site and on-site operations. Its solution is a schedule that minimizes the downtime of both the prefabrication plant and the on-site erection crews. In accordance with the Just-in-Time concept, the prefabrication schedule is set so as to reduce the stocks of finished products, thus reducing the storage area and the cost of funds tied up in inventory. The schedule's robustness against disturbances in the production and erection workflows is assured by allocating time buffers. The advantage of the proposed method is the ease of collecting the input: instead of detailed cost records, estimates of the unit cost of lost time can be used.
Integrated supply chain management and synchronization of precast production with on-site erection can reduce the downtime of both the prefabrication plant and the work crews, lower the cost of storing precast elements and limit the funds tied up in inventory. The time and cost risk is strongly affected by the start date of prefabrication relative to the start of erection and by the production rate, which depends on the capacity of the prefabrication plant. The production rate, batch sizes and delivery schedule depend on the progress of erection: the primary and auxiliary production processes run concurrently and should be planned jointly. The aim of synchronization is to reduce costly losses of time (idle time of the plant and of the on-site works) as well as unnecessary stocks of elements. The erection dates of the elements are therefore conditioned by the course of other processes within the project. The paper proposes a mathematical model of the problem of synchronizing the primary production with the auxiliary production carried out in the prefabrication plant. Unlike methods presented earlier in the literature, the proposed approach assumes that the erection dates are not fixed but are determined by solving the developed optimization model. The production start dates of individual batches of precast elements are synchronized with the demand dates in order to reduce the idle time of the plant and of the crews carrying out the construction processes. The approach is based on the Just-in-Time concept but accounts for possible disruptions both in the plant and on site by including time buffers in the schedule. The application of the proposed model is illustrated with the example of a project consisting of the structural shells of two buildings of mixed construction. A sensitivity analysis of the obtained solution was carried out with respect to the model weights (the cost of one day of idle time of the erection crew, the daily cost of holding a stock of elements and the unit cost of plant downtime). The model was also analysed with regard to the possibility and consequences of eliminating unnecessary storage time of the precast elements and idle time of the plant. The example was solved using Lingo 14.0. The proposed approach makes it possible to plan the production dates of precast elements and to adjust the erection dates accordingly, so as to minimize the costs of downtime and excessive stocks. An advantage of the developed mathematical model is that it can rely solely on estimates of the relative unit costs of lost time, without access to detailed cost records. The proposed linear form of the model allows it to be solved with commonly available solvers.
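A much simplified linear sketch of such a synchronization model is given below, written against the open-source PuLP interface rather than Lingo; the batch durations, the fixed erection sequence, the single time buffer and the three unit penalty weights are illustrative assumptions, not the authors' data or full model.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpStatus

# Illustrative data: one production line, one erection crew, batches erected in index order.
prod = [3, 4, 2, 5]       # production duration of each batch [days]
erect = [2, 3, 2, 4]      # erection duration of each batch [days]
buffer = 1                # time buffer between production finish and erection start
w_crew, w_stock, w_plant = 10.0, 1.0, 4.0   # unit costs of crew idle, storage, plant idle
n = len(prod)

model = LpProblem("jit_sync", LpMinimize)
s = [LpVariable(f"prod_start_{b}", lowBound=0) for b in range(n)]   # production starts
e = [LpVariable(f"erect_start_{b}", lowBound=0) for b in range(n)]  # erection starts

for b in range(n):
    # a batch can be erected only after it has been produced (plus a safety buffer)
    model += e[b] >= s[b] + prod[b] + buffer
    if b + 1 < n:
        model += s[b + 1] >= s[b] + prod[b]      # the plant makes one batch at a time
        model += e[b + 1] >= e[b] + erect[b]     # the crew erects one batch at a time

storage = lpSum(e[b] - (s[b] + prod[b]) for b in range(n))
plant_idle = lpSum(s[b + 1] - (s[b] + prod[b]) for b in range(n - 1))
crew_idle = lpSum(e[b + 1] - (e[b] + erect[b]) for b in range(n - 1))
model += w_crew * crew_idle + w_stock * storage + w_plant * plant_idle   # objective

model.solve()
print(LpStatus[model.status],
      [(round(v.value(), 1), round(w.value(), 1)) for v, w in zip(s, e)])
```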
The article presents the problem of scheduling multi-unit construction projects taking into account the learning effect. This effect occurs when one type of activity is carried out in many building units, and it leads to a significant reduction in the project duration. In the presented project model, the problem is to find the order of executing the units that minimizes the project duration. In this article, the problem was successfully solved using a metaheuristic simulated annealing algorithm and illustrated with a case study.
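A minimal simulated-annealing sketch for this unit-ordering problem is given below, with a simple Wright-type learning model in which the time of an activity shrinks with each repetition; the learning rate, durations and the flow-line evaluation of the project duration are assumptions made for illustration only.

```python
import math, random

# Illustrative base durations: base[a][u] = time of activity a in unit u (no learning).
base = [
    [6, 5, 7, 6, 5, 6],
    [4, 5, 4, 6, 5, 4],
    [5, 4, 6, 5, 4, 5],
]
A, U = len(base), len(base[0])
LEARNING_RATE = 0.9                      # 90% learning curve (assumed)
ALPHA = math.log2(LEARNING_RATE)         # Wright model exponent

def makespan(order):
    """Flow-line evaluation: crew a processes units in the given order, and the
    time of its k-th repetition shrinks by the Wright learning factor (k+1)**ALPHA."""
    finish = [[0.0] * U for _ in range(A)]   # finish[a][k]: crew a done with k-th unit
    for k, u in enumerate(order):
        for a in range(A):
            dur = base[a][u] * (k + 1) ** ALPHA
            ready_crew = finish[a][k - 1] if k > 0 else 0.0
            ready_unit = finish[a - 1][k] if a > 0 else 0.0
            finish[a][k] = max(ready_crew, ready_unit) + dur
    return finish[A - 1][U - 1]

def simulated_annealing(t0=20.0, cooling=0.995, steps=5000):
    order = random.sample(range(U), U)
    best, best_val = order[:], makespan(order)
    cur_val, temp = best_val, t0
    for _ in range(steps):
        i, j = random.sample(range(U), 2)
        cand = order[:]
        cand[i], cand[j] = cand[j], cand[i]      # neighbour: swap two units
        val = makespan(cand)
        if val < cur_val or random.random() < math.exp((cur_val - val) / temp):
            order, cur_val = cand, val
            if val < best_val:
                best, best_val = cand[:], val
        temp *= cooling
    return best, best_val

print(simulated_annealing())
```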
The paper considers the problem of selecting methods of work acceleration, taking into account their costs and their effects in terms of reducing the duration of construction processes. These methods include working overtime, working on weekends, working in two shifts and employing more efficient work crews. A mathematical model was developed for repetitive construction processes that minimizes interruptions in the crews' work and reduces the duration of the entire project. To verify the correctness of the model, the developed approach was used to determine organizational variants (measures reducing process durations) for a sample construction project.
This work addresses the optimization of the job shop scheduling problem with a no-wait constraint. This constraint occurs when consecutive operations of a job must be processed without any waiting time either on or between machines. The no-wait job shop scheduling problem is a combinatorial optimization problem. The study presented here therefore focuses on solving this problem by proposing a strategy that makes the Jaya algorithm applicable to this type of problem and finds a processing sequence that minimizes the makespan (Cmax). Several benchmarks are used to analyse the performance of this algorithm compared to the best-known solutions.
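One common way to make a continuous method such as Jaya applicable to a sequencing problem is a random-key encoding, in which the sorted order of a continuous position vector defines the job sequence. The sketch below combines the standard Jaya update rule with such a decoding and a simple greedy no-wait evaluation on a made-up instance; it should be read as an illustration rather than the strategy proposed in the paper.

```python
import random

# Illustrative 3-job, 3-machine instance: routing[j] = [(machine, processing_time), ...]
routing = [
    [(0, 3), (1, 2), (2, 4)],
    [(1, 4), (0, 3), (2, 2)],
    [(2, 3), (1, 3), (0, 2)],
]
N = len(routing)

def decode(position):
    """Random-key decoding: jobs are appended in the sorted order of the position vector."""
    return sorted(range(N), key=lambda j: position[j])

def makespan(sequence):
    """Greedy no-wait evaluation: each job starts as early as possible so that all
    of its operations run back to back and no machine is double-booked."""
    machine_free, cmax = {}, 0.0
    for j in sequence:
        offset, start = 0.0, 0.0
        for m, p in routing[j]:
            start = max(start, machine_free.get(m, 0.0) - offset)
            offset += p
        offset = 0.0
        for m, p in routing[j]:
            machine_free[m] = start + offset + p
            offset += p
        cmax = max(cmax, start + offset)
    return cmax

def jaya(pop_size=20, iters=300):
    pop = [[random.random() for _ in range(N)] for _ in range(pop_size)]
    for _ in range(iters):
        scores = [makespan(decode(x)) for x in pop]
        best = pop[scores.index(min(scores))]
        worst = pop[scores.index(max(scores))]
        for i, x in enumerate(pop):
            # Jaya move: drift towards the best solution and away from the worst
            cand = [xk + random.random() * (bk - abs(xk)) - random.random() * (wk - abs(xk))
                    for xk, bk, wk in zip(x, best, worst)]
            if makespan(decode(cand)) <= scores[i]:
                pop[i] = cand
    best = min(pop, key=lambda x: makespan(decode(x)))
    return decode(best), makespan(decode(best))

print(jaya())
```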
The job shop scheduling problem is widely encountered in industry and has been studied by many researchers with the aim of optimizing work sequences. This case study provides an overview of genetic algorithms, which have great potential for solving this type of combinatorial problem. The method is applied manually in this study to illustrate the procedure and the steps of executing a program based on a genetic algorithm. The problem requires careful decision analysis throughout the process because of the numerous choices involved in allocating jobs to machines at specific times, in a specific order and for a given duration. This is done at the operational level, and an intelligent method is needed to identify the best combination. The article presents genetic algorithms in detail to explain how they are used and how an intelligent program based on a genetic algorithm is built. By the end of the article, the genetic algorithm will have demonstrated its performance in searching for the best job sequence.
In today's manufacturing systems, especially in Industry 4.0, highly autonomous production cells play an important role. To reach this goal of autonomy, different technologies such as industrial robots, machine tools, and automated guided vehicles (AGVs) are deployed simultaneously, which creates numerous challenges on various automation levels. One of these challenges concerns the scheduling of all deployed resources and their corresponding tasks. Combining data from a real production environment with Constraint Programming (CP-SAT), we provide a cascaded scheduling approach that plans production orders for machine tools to minimize makespan and tool changeover time while enabling the corresponding robot for robot-collaborated processes. Simultaneously, AGVs provide all production cells with the necessary material and tools. In doing so, magazine capacity for raw material and finished parts as well as tool service life are taken into account.
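A compact OR-Tools CP-SAT sketch of the machine-tool part of such a model is shown below: orders become interval variables, a no-overlap constraint keeps each machine to one order at a time, and the makespan is minimized. The data are made up, and the robot collaboration, AGV transport, tool changeovers and magazine capacities of the full cascaded approach are deliberately omitted.

```python
from ortools.sat.python import cp_model

# Illustrative orders as (machine, duration) pairs; purely assumed data.
orders = [(0, 4), (0, 3), (1, 5), (1, 2), (0, 6), (1, 4)]
horizon = sum(d for _, d in orders)

model = cp_model.CpModel()
ends, per_machine = [], {}
for i, (m, d) in enumerate(orders):
    start = model.NewIntVar(0, horizon, f"start_{i}")
    end = model.NewIntVar(0, horizon, f"end_{i}")
    interval = model.NewIntervalVar(start, d, end, f"op_{i}")
    per_machine.setdefault(m, []).append(interval)
    ends.append(end)

for intervals in per_machine.values():
    model.AddNoOverlap(intervals)            # one order at a time per machine tool

makespan = model.NewIntVar(0, horizon, "makespan")
model.AddMaxEquality(makespan, ends)
model.Minimize(makespan)

solver = cp_model.CpSolver()
status = solver.Solve(model)
if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print("makespan =", solver.Value(makespan))
```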
The paper brings forward an idea of multi-threaded computation synchronization based on a shared semaphored cache in multi-core CPUs. It is dedicated to the implementation of multi-core PLC control, embedded solutions, or parallel computation of models described using hardware description languages. The shared semaphored cache is implemented as guarded memory cells within a dedicated section of the cache memory that is shared by multiple cores. This enables the cores to speed up data exchange and seamlessly synchronize the computation. The idea has been verified by creating a multi-core system model using Verilog HDL. The simulation of task synchronization methods demonstrates the benefits of shared semaphored memory cells over standard synchronization methods. The proposed idea enhances the computation of algorithms that consist of relatively short tasks that can be processed in parallel and require fast synchronization mechanisms to avoid data race conditions.
In this paper we describe polynomial-time algorithms for minimizing a separable convex function of the resource usage over time of a set of jobs with individual release dates and deadlines and a common processing time.
Multipath TCP (MPTCP) has been widely used as an efficient way of communication in many applications. Data centers, smartphones, and network operators use MPTCP to balance traffic in a network efficiently. MPTCP is an extension of TCP (Transmission Control Protocol) that provides multiple paths, leading to higher throughput and lower latency. Although MPTCP has shown better performance than TCP in many applications, it has its own challenges. The network can become congested due to heavy traffic in the multiple paths (subflows) if the subflow rates are not determined correctly. Moreover, communication latency can occur if the packets are not scheduled correctly between the subflows. This paper reviews techniques to solve the above-mentioned problems based on two main approaches: non-data-driven (classical) and data-driven (machine learning) approaches. The paper compares these two approaches and highlights their strengths and weaknesses with a view to motivating future researchers in this exciting area of machine learning for communications. It also provides details on the simulation of MPTCP and its implementations in real environments.
This paper presents the problem of public transport planning in terms of the optimal use of the available fleet of vehicles and reductions in operational costs and environmental impact. The research takes into account the large fleet of vehicles of various types that is typically found in large cities, including the increasingly widely used electric buses, multiple depots, and the numerous limitations of urban public transport. The multi-criteria mathematical model formulated in this work considers many important criteria, including technical, economic, and environmental ones. The preliminary results of a Mixed Integer Linear Programming solver applied to the proposed model, on both theoretical data and real data from urban public transport, show the possibility of practically applying this solver to the transport problems of medium-sized cities with up to two depots, a heterogeneous fleet of vehicles, and up to about 1500 daily timetable trips. Further research directions have been formulated with regard to larger transport systems and new dedicated heuristic algorithms.
The main objective of this paper is to present an example of an IT system implementation with advanced mathematical optimisation for job scheduling. The proposed genetic procedure leads to the Pareto front, and the application of a multiple criteria decision aiding (MCDA) approach allows extraction of the final solution. The definition of key performance indicators (KPIs) reflecting relevant features of the solutions, together with the efficiency of the genetic procedure, provides a Pareto front comprising a representative set of feasible solutions. The application of the chosen MCDA method, namely élimination et choix traduisant la réalité (ELECTRE), allows for the elicitation of the decision maker's (DM) preferences and subsequently leads to the final solution. This solution fulfils all of the DM's expectations and constitutes the best trade-off between the considered KPIs. The proposed method is an efficient combination of genetic optimisation and an MCDA method.
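The step from a population of solutions to a Pareto front can be illustrated with a small non-dominance filter like the one below, which keeps only solutions not dominated on any KPI before a method such as ELECTRE ranks them; the two KPIs (makespan and total cost) and the candidate values are illustrative assumptions.

```python
def dominates(a, b):
    """True if KPI vector a is at least as good as b on every criterion
    (minimization) and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only non-dominated solutions; `solutions` maps a label to its KPI vector."""
    return {name: kpi for name, kpi in solutions.items()
            if not any(dominates(other, kpi) for other in solutions.values())}

# Illustrative schedules with (makespan, total cost) KPIs.
candidates = {
    "S1": (120, 9800), "S2": (130, 9100), "S3": (125, 9900),
    "S4": (140, 8700), "S5": (120, 9900),
}
print(pareto_front(candidates))   # S3 and S5 are dominated and drop out
```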
The article presents how to prepare a schedule in the form of a Gantt chart using task scheduling based on the time coupling method TCM 1, which ensures continuity of work of the crews. The traditional way of visualizing a TCM schedule is a cyclogram. In this article, MS Project was used to prepare Gantt charts with TCM 1 task scheduling, because this program can be applied in risk analysis and life-cycle cost analysis of a building, where a Gantt chart is needed for the calculations.
The aim of the work was to develop a prioritizing and scheduling method to be followed in small and medium-sized companies operating under conditions of non-rhythmic and non-repeatable production. A system in which make-to-stock, make-to-order and engineer-to-order (MTS, MTO and ETO) tasks are carried out concurrently, referred to as a non-homogeneous system, has been considered. Particular types of tasks have different priority indicators. The processes involved in carrying out these tasks are interdependent and compete for access to resources. The work is based on the assumption that the developed procedure should be a universal tool that can be easily used by planners. It should also eliminate intuitive prioritizing of tasks while providing a fast and easy-to-calculate way of obtaining an answer, i.e. a ready plan or schedule. As orders enter the system on an ongoing basis, the created plan and schedule should enable fast analysis of the result and make it possible to accommodate subsequent orders appearing in the system. The investigations were based on data from the non-homogeneous production system functioning at the Experimental Plant of the Łukasiewicz Research Network – Institute of Ceramics and Building Materials, Refractory Materials Division – ICIMB. The developed procedure includes the following steps: 1 – initial estimation of resource availability, 2 – MTS task planning, 3 – production system capacity analysis, 4 – ETO task planning, 5 – MTO order planning, 6 – evaluation of the obtained schedule. The scheduling procedure is supported by KbRS (Knowledge-based Rescheduling System), which has been functionally modified for the needs of this work.
One of the most popular heuristics used to solve the permutation flowshop scheduling problem (PFSP) is the NEH algorithm. The reasons for the NEH popularity are its simplicity, short calculation time, and good-quality approximations of the optimal solution for a wide range of PFSP instances. Since its development, many works have been published analysing various aspects of its performance and proposing improvements. The NEH algorithm includes, however, one unspecified and unexamined feature related to the order of jobs with equal values of total processing time in the initial sequence. We examined this aspect of NEH using all instances from Taillard's and the VRF benchmark sets. As presented in this paper, the sorting operation has a significant impact on the results obtained by the NEH algorithm. The reason for this is primarily the input sequence of jobs, but also the sorting algorithm itself. Following this observation, we have proposed two modifications of the original NEH algorithm dealing with the sequencing of jobs with equal total processing time. Unfortunately, the simple procedures used did not always give better results than the classical NEH algorithm, which means that the problem of sequencing jobs with equal total processing time needs a smart approach, and this is one of the promising directions for further research.
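For reference, a compact NEH implementation is sketched below, with the initial sort made explicitly stable so that jobs with equal total processing time keep their input order, which is exactly the tie-handling aspect examined in the paper; the instance data are made up.

```python
def makespan(seq, p):
    """Permutation flow shop makespan for job sequence `seq`, with p[j][m] = proc. time."""
    m_count = len(p[0])
    completion = [0.0] * m_count
    for j in seq:
        completion[0] += p[j][0]
        for m in range(1, m_count):
            completion[m] = max(completion[m], completion[m - 1]) + p[j][m]
    return completion[-1]

def neh(p):
    # Stable sort by decreasing total processing time: jobs with equal totals
    # keep their input order, which is the tie-handling aspect studied.
    order = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = [order[0]]
    for j in order[1:]:
        # try every insertion position and keep the one with the smallest makespan
        seq = min((seq[:k] + [j] + seq[k:] for k in range(len(seq) + 1)),
                  key=lambda s: makespan(s, p))
    return seq, makespan(seq, p)

# Illustrative 5-job, 3-machine instance (processing times are made up).
p = [[5, 3, 4], [2, 6, 3], [4, 4, 4], [3, 5, 2], [6, 2, 5]]
print(neh(p))
```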