Grid simulation tools provide frameworks for simulating application scheduling in various Grid infrastructures. However, while experimenting with many existing tools, we have encountered two main shortcomings: (i) there are no tools for generating workloads, resources and events; (ii) it is difficult and time-consuming to model different Grid levels, i.e. resource brokers and local-level scheduling systems. In this paper we present the Grid Scheduling Simulator (GSSIM), a framework that addresses these shortcomings and provides an easy-to-use Grid scheduling framework enabling simulations of a wide range of scheduling algorithms in multi-level, heterogeneous Grid infrastructures. In order to foster more collaboration in the community at large, GSSIM is complemented with a portal (http://www.gssim.org) that provides a repository of Grid scheduling algorithms, synthetic workloads and benchmarks for use with GSSIM.
In the paper we present two different models of Grid resource management problems: (i) Grid scheduling problems with no time characteristics available, and (ii) scheduling of jobs in the presence of time characteristics obtained using prediction techniques and resource reservation mechanisms. We focus on demonstrating how these two scenarios, which are important examples of Grid environments, can be modeled as multi-criteria decision support problems. We also discuss the advantages and disadvantages of these models as well as their practical applications.
Heterogeneous many-core computing resources are increasingly popular among users due to their improved performance over homogeneous systems. Many developers have realized that heterogeneous systems, e.g. a combination of a shared-memory multi-core CPU machine with massively parallel Graphics Processing Units (GPUs), can provide significant performance opportunities to a wide range of applications. However, the best overall performance can only be achieved if application tasks are efficiently assigned over time to the different types of processing units, taking into account their specific resource requirements. Additionally, one should note that the available heterogeneous resources have been designed as general-purpose units, but with many built-in features accelerating specific application operations. In other words, the same algorithm or application functionality can be implemented as a different task for the CPU or the GPU. Nevertheless, from the perspective of various evaluation criteria, e.g. the total execution time or energy consumption, we may observe completely different results. Therefore, as tasks can be scheduled and managed in many alternative ways on both many-core CPUs and GPUs, with a huge impact on the overall performance of the computing resources, there is a need for new and improved resource management techniques. In this paper we discuss results achieved during experimental performance studies of selected task scheduling methods in heterogeneous computing systems. Additionally, we present a new architecture for a resource allocation and task scheduling library which provides a generic application programming interface at the operating system level for improving scheduling policies, taking into account the diversity of tasks and the characteristics of heterogeneous computing resources.
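The abstract's central point — that the same functionality may run as a CPU task or a GPU task with very different execution times, so the scheduler must pick the processor type per task — can be illustrated with a minimal sketch. The code below is a hypothetical earliest-finish-time heuristic, not the paper's library or its actual API; the task model (a pair of estimated CPU and GPU times per task) and all names are assumptions introduced for illustration only.

```python
# Hypothetical sketch: greedily assign each task to the processor type
# (CPU or GPU) whose queue yields the earliest estimated finish time.
# Each task is given as (cpu_time, gpu_time) — two execution-time
# estimates for its CPU and GPU implementations; this cost model is an
# assumption for illustration, not taken from the paper.

def schedule(tasks):
    """tasks: list of (cpu_time, gpu_time) estimates.
    Returns (assignment, cpu_busy_until, gpu_busy_until)."""
    cpu_free, gpu_free = 0.0, 0.0
    assignment = []
    for cpu_t, gpu_t in tasks:
        # Earliest-finish-time rule across the two resource types.
        if cpu_free + cpu_t <= gpu_free + gpu_t:
            assignment.append("CPU")
            cpu_free += cpu_t
        else:
            assignment.append("GPU")
            gpu_free += gpu_t
    return assignment, cpu_free, gpu_free

# Three tasks with differing CPU/GPU affinities.
print(schedule([(4.0, 1.0), (2.0, 5.0), (3.0, 1.5)]))
# → (['GPU', 'CPU', 'GPU'], 2.0, 2.5)
```

Note that the greedy choice depends on the current load of each resource type, not only on which implementation is faster in isolation: here the third task goes to the GPU even though both variants are slower there than the first task's GPU variant, because the GPU queue finishes earlier.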
The problem of optimally scheduling n independent tasks on a parallel processor system is studied. The tasks are malleable, i.e. a task may be executed by several processors simultaneously and the processing speed of a task is a non-linear function of the number of processors allotted to it. The total number of processors is m, and it is an upper bound on the number of processors that can be used by all the tasks simultaneously. It is assumed that the tasks are preemptable and that the number of processors allotted to a task may change during its execution. The objective is to find a schedule for which the makespan is minimized. An O(n) algorithm is presented to solve this problem when all the processing speed functions are convex. If these functions are all concave, a rectangle packing algorithm (PACK) is presented which solves the problem in polynomial time.
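To make the convex case concrete: when every processing speed function is convex, concentrating all m processors on one task at a time is a natural schedule, and evaluating its makespan takes O(n) time. The sketch below computes the makespan of that sequential all-m-processors schedule; it is an illustration of the schedule shape under convex speeds, not a reproduction of the paper's algorithm, and the example speed functions are assumptions chosen for the demonstration.

```python
# Illustrative sketch (not the paper's algorithm verbatim): under convex
# processing-speed functions, run the tasks one after another, each on
# all m processors, and sum the resulting execution times.

def makespan_convex(work, speed, m):
    """work[j]:  processing requirement of task j.
    speed[j](k): (assumed convex) processing speed of task j on k processors.
    Returns the makespan of the sequential all-m-processors schedule."""
    return sum(w / s(m) for w, s in zip(work, speed))

# Example with quadratic (convex) speed functions f_j(k) = a_j * k**2.
tasks = [(10.0, 1.0), (6.0, 2.0)]                  # (work, coefficient a_j)
work  = [w for w, _ in tasks]
speed = [(lambda k, a=a: a * k * k) for _, a in tasks]
print(makespan_convex(work, speed, m=4))           # 10/16 + 6/32 = 0.8125
```

The concave case behaves very differently: there, spreading processors across tasks pays off, which is why the abstract resorts to a polynomial-time rectangle packing procedure (PACK) rather than a simple summation.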