Results found: 30

Search results
Searched for:
in keywords: reinforcement learning
EN
Reinforcement learning (RL) constitutes an effective method of controlling dynamic systems without prior knowledge. One of the most important and difficult problems in RL is the improvement of data efficiency. Probabilistic inference for learning control (PILCO) is a state-of-the-art data-efficient framework that uses a Gaussian process to model dynamic systems. However, it only focuses on optimizing cumulative rewards and does not consider the accuracy of the dynamic model, which is an important factor for controller learning. To further improve the data efficiency of PILCO, we propose its active exploration version (AEPILCO), which utilizes information entropy to describe samples. In the policy evaluation stage, we incorporate an information entropy criterion into long-term sample prediction. Through the informative policy evaluation function, our algorithm obtains informative policy parameters in the policy improvement stage. Using these policy parameters in the actual execution produces an informative sample set, which is helpful in learning an accurate dynamic model. Thus, the AEPILCO algorithm improves data efficiency by learning an accurate dynamic model through actively selecting informative samples based on the information entropy criterion. We demonstrate the validity and efficiency of the proposed algorithm on several challenging control problems involving a cart pole, a pendubot, a double pendulum, and a cart double pendulum. The AEPILCO algorithm can learn a controller using fewer trials than PILCO, which is verified through theoretical analysis and experimental results.
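A minimal sketch of the entropy-weighted policy evaluation idea described above: the differential entropy of a Gaussian predictive state distribution is added to the expected reward at each predicted step. The weighting factor, the reward function, and the toy trajectory are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy of a multivariate Gaussian: 0.5 * ln((2*pi*e)^k * det(cov))."""
    k = cov.shape[0]
    return 0.5 * np.log(((2 * np.pi * np.e) ** k) * np.linalg.det(cov))

def informative_policy_value(pred_means, pred_covs, reward_fn, entropy_weight=0.1):
    """Sum expected rewards over a predicted trajectory, plus an information-entropy bonus.

    pred_means/pred_covs: per-step Gaussian state predictions from a (GP) dynamics model.
    reward_fn: maps a predicted state mean to an expected reward (a simplification).
    entropy_weight: assumed trade-off between reward and informativeness.
    """
    value = 0.0
    for mu, cov in zip(pred_means, pred_covs):
        value += reward_fn(mu) + entropy_weight * gaussian_entropy(cov)
    return value

# Toy usage: two predicted steps of a 2-D state, reward = negative distance to the origin.
means = [np.array([1.0, 0.5]), np.array([0.5, 0.2])]
covs = [np.eye(2) * 0.2, np.eye(2) * 0.4]
print(informative_policy_value(means, covs, lambda mu: -np.linalg.norm(mu)))
```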
EN
Compared with robots, humans can learn to perform various contact tasks in unstructured environments by modulating arm impedance characteristics. In this article, we consider endowing industrial robots with this compliant ability so that they can effectively learn to perform repetitive force-sensitive tasks. Current impedance-learning control methods usually suffer from inefficiency. This paper establishes an efficient variable impedance control method. To improve the learning efficiency, we employ a probabilistic Gaussian process model as the transition dynamics of the system for internal simulation, permitting long-term inference and planning in a Bayesian manner. Then, the optimal impedance regulation strategy is searched for using a model-based reinforcement learning algorithm. The effectiveness and efficiency of the proposed method are verified through force control tasks using a 6-DoF Reinovo industrial manipulator.
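A small illustration of the model-based ingredient mentioned above: a Gaussian process fitted to observed (state, action) → next-state transitions and then queried for a prediction with uncertainty, as used for internal simulation. The toy one-dimensional dynamics and the kernel choice are assumptions; the paper's impedance-control specifics are not reproduced here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Toy 1-D system: next_state = 0.9*state + 0.5*action + noise (stand-in for robot dynamics).
states = rng.uniform(-1, 1, size=(50, 1))
actions = rng.uniform(-1, 1, size=(50, 1))
next_states = 0.9 * states + 0.5 * actions + rng.normal(0, 0.01, size=(50, 1))

X = np.hstack([states, actions])          # GP inputs: (state, action)
y = next_states.ravel()                   # GP targets: next state

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X, y)

# Predictive mean and standard deviation for a candidate (state, action) pair;
# such predictions can be chained for long-term rollouts and planning in a Bayesian manner.
mean, std = gp.predict(np.array([[0.2, -0.3]]), return_std=True)
print(mean, std)
```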
3
Content available remote Accidental exploration through value predictors
EN
Infinite length of trajectories is an almost universal assumption in the theoretical foundations of reinforcement learning. In practice, learning occurs on finite trajectories. In this paper we examine a specific result of this disparity, namely a strong bias of the time-bounded Every-visit Monte Carlo value estimator. This manifests as a vastly different learning dynamic for algorithms that use value predictors, including encouraging or discouraging exploration. We investigate these claims theoretically for a one-dimensional random walk, and empirically on a number of simple environments. We use GAE as an algorithm involving a value predictor and evolution strategies as a reference point.
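A compact illustration of the time-bounded every-visit Monte Carlo estimator on a symmetric one-dimensional random walk, the setting mentioned above. The horizon, rewards, and start state are assumptions chosen only to expose the kind of truncation bias discussed in the abstract.

```python
import random
from collections import defaultdict

def every_visit_mc(episodes=5000, horizon=20, gamma=1.0):
    """Estimate state values of a 1-D random walk on {-3..3} with time-bounded episodes.

    Reward +1 on reaching +3, 0 elsewhere; episodes are cut after `horizon` steps,
    which biases the every-visit estimates relative to the infinite-horizon values.
    """
    returns = defaultdict(list)
    for _ in range(episodes):
        s, trajectory = 0, []
        for _ in range(horizon):
            s_next = s + random.choice((-1, 1))
            r = 1.0 if s_next == 3 else 0.0
            trajectory.append((s, r))
            s = s_next
            if s in (-3, 3):
                break
        g = 0.0
        for state, r in reversed(trajectory):   # every-visit: record a return at each occurrence
            g = r + gamma * g
            returns[state].append(g)
    return {state: sum(v) / len(v) for state, v in returns.items()}

print(every_visit_mc())
```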
EN
Artificial intelligence has made big steps forward with reinforcement learning (RL) in the last century and, with the advent of deep learning (DL) in the 1990s, especially the breakthrough of convolutional networks in the computer vision field. The adoption of DL neural networks in RL in the first decade of the 21st century led to an end-to-end framework allowing a great advance in human-level agents and autonomous systems, called deep reinforcement learning (DRL). In this paper, we go through the development timeline of RL and DL technologies, describing the main improvements made in both fields. Then, we dive into DRL and give an overview of the state of the art of this new and promising field by browsing a set of algorithms (value optimization, policy optimization and actor-critic), giving an outline of current challenges and real-world applications, along with the hardware and frameworks used. In the end, we discuss some potential research directions in the field of deep RL, for which we have great expectations that they will lead to a real human level of intelligence.
PL
The article describes the use of a PostgreSQL database for storing data from the reinforcement learning process of an agent coordinating the actions of other agents. Since the coordinating agent should have access to the Cartesian product of the actions of all coordinated agents, the number of all joint actions grows exponentially. Therefore, the use of a database as a container for the learning-process data should be considered.
EN
The article describes the application of a PostgreSQL database for storing learning-process data. We consider reinforcement learning of an agent coordinating the other agents' learning process. As the coordinator should have access to the Cartesian product of the particular agents' actions, the size of the data grows exponentially. Thus, the application of a database as a container for the learning-process data is worth considering.
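A minimal sketch of the storage idea: the coordinator's joint-action space is the Cartesian product of the individual agents' action sets, so Q-values are keyed by the joint action and kept in a relational table rather than in memory. For a self-contained, runnable example the snippet uses the standard-library sqlite3 module; the paper itself uses PostgreSQL, and the table layout here is an assumption.

```python
import itertools
import sqlite3

agent_actions = [["left", "right", "stay"]] * 5          # 5 agents, 3 actions each
joint_actions = list(itertools.product(*agent_actions))  # 3**5 = 243 joint actions; grows exponentially

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE q_values (state TEXT, joint_action TEXT, q REAL, "
             "PRIMARY KEY (state, joint_action))")

# Store an initial Q-value for every joint action of one example state.
rows = [("s0", "|".join(a), 0.0) for a in joint_actions]
conn.executemany("INSERT OR REPLACE INTO q_values VALUES (?, ?, ?)", rows)

# A learning step reads and updates a single row instead of holding the whole table in RAM.
conn.execute("UPDATE q_values SET q = ? WHERE state = ? AND joint_action = ?",
             (0.5, "s0", "left|left|left|left|left"))
print(conn.execute("SELECT COUNT(*) FROM q_values").fetchone()[0])
```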
6
Content available remote Online Supervised Learning Approach for Machine Scheduling
EN
Due to the rapid growth of computational power and the demand for faster and more optimal solutions in today's manufacturing, machine learning has lately caught a lot of attention. Thanks to its ability to adapt to changing conditions in dynamic environments, it is a perfect choice for processes where rules cannot be given explicitly. This paper proposes an on-line supervised learning approach for optimal scheduling in manufacturing. Although supervised learning is generally not recommended for dynamic problems, we try to defeat this conviction and prove it is a viable option for this class of problems. The algorithm, implemented in a multi-agent system, is tested on a multi-stage, multi-product flow-shop problem. More specifically, we start by defining the considered problem. Next, we move on to the presentation of the proposed solution. Later on, we show results from the conducted experiments and compare our approach to centralized reinforcement learning to measure algorithm performance.
EN
The aim of the presented research was to prove the feasibility of employing fuzzy modeling in combination with reinforcement learning in the process of designing an artificial intelligence that effectively controls the behavior of agents in an RTS-type computer game. This was achieved by implementing a testing environment for “StarCraft”, a widely popular RTS game. The testing environment was focused on a single test scenario, which was used to explore the behavior of the fuzzy logic-based AI. The fuzzy model’s parameters were adjustable, and a Q-learning algorithm was applied to perform such adjustments in each learning cycle.
PL
The article presents research on the possibility of combining fuzzy modeling with reinforcement learning in the process of designing an intelligent algorithm that will effectively control the behavior of agents in an RTS game. To achieve this goal, a test environment was implemented in the popular RTS game “StarCraft”. In this environment, a single assumed game scenario was carried out, in which the behavior of the developed fuzzy algorithm was examined. The parameters of the fuzzy model were modified using the Q-learning method.
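A toy sketch of the Q-learning-for-fuzzy-tuning idea described in this entry: discrete choices of one fuzzy membership-function width play the role of actions, and the reward is the score returned by a stubbed scenario run. The scoring function, parameter grid, and learning constants are assumptions; the StarCraft environment and the actual fuzzy model are not reproduced.

```python
import random

widths = [0.5, 1.0, 1.5, 2.0]          # candidate fuzzy membership-function widths (actions)
Q = {w: 0.0 for w in widths}
ALPHA, EPSILON = 0.2, 0.1

def run_scenario(width):
    """Stub for one test-scenario run; returns a noisy score peaking at width 1.0."""
    return -abs(width - 1.0) + random.gauss(0, 0.1)

for _ in range(200):
    # epsilon-greedy choice of the fuzzy parameter for this learning cycle
    w = random.choice(widths) if random.random() < EPSILON else max(Q, key=Q.get)
    reward = run_scenario(w)
    Q[w] += ALPHA * (reward - Q[w])     # stateless (bandit-style) value update

print(max(Q, key=Q.get))                # best width found, expected to be near 1.0
```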
EN
In this paper we propose a strategy learning model for autonomous agents based on classification. In the literature, the most commonly used learning method in agent-based systems is reinforcement learning. In our opinion, classification can be considered a good alternative. This type of supervised learning can be used to generate a classifier that allows the agent to choose an appropriate action for execution. Experimental results show that this model can be successfully applied for strategy generation even if rewards are delayed. We compare the efficiency of the proposed model and reinforcement learning using the farmer–pest domain and configurations of various complexity. In complex environments, supervised learning can improve the performance of agents much faster than reinforcement learning. If an appropriate knowledge representation is used, the learned knowledge may be analyzed by humans, which allows tracking of the learning process.
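A brief sketch of the classification-based alternative described above: logged (state features → chosen action) pairs train an off-the-shelf classifier, which the agent then queries to pick an action. The synthetic data and the decision-tree choice are assumptions for illustration; the farmer–pest domain itself is not reproduced.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

# Synthetic experience: 2-D state features and the action that eventually led to a reward.
states = rng.uniform(0, 1, size=(200, 2))
actions = (states[:, 0] > states[:, 1]).astype(int)   # stand-in labels for a "good" policy

clf = DecisionTreeClassifier(max_depth=3).fit(states, actions)

def choose_action(state):
    """Agent's action selection: ask the trained classifier instead of a value function."""
    return int(clf.predict(np.asarray(state).reshape(1, -1))[0])

print(choose_action([0.8, 0.2]))   # -> 1
# A shallow tree is also human-readable, which supports the point about analyzable knowledge.
```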
EN
This work aims to improve and simplify the procedure used in the Control Adjoining Cell Mapping with Reinforcement Learning (CACM-RL) technique for tuning an optimal controller during the pre-learning stage (controller design), making the transition from a simulation environment to the real world easier. Common problems encountered when working with CACM-RL are the adjustment of the cell size and the long-term evolution error. In this sense, the main goal of the new approach to CACM-RL proposed in this work (CACM-RL*) is to give a response to both problems, helping engineers define the control solution with accuracy and stability criteria instead of cell sizes. The new approach improves the mathematical analysis techniques and reduces the engineering effort during the design phase. In order to demonstrate the behaviour of CACM-RL*, three examples are described to show its application to real problems. In all the examples, CACM-RL* improves with respect to the considered alternatives. In some cases, CACM-RL* improves the average controllability by up to 100%.
PL
The article presents the concept of autonomous generation of a reference trajectory in an electronic-navigation ship motion control system. The trajectory is determined on the basis of information about the target position of the ship, provided by the operator, and the navigational situation, determined by a set of electronic navigation devices. The operation of the system is based on reinforcement learning algorithms. The article presents the principles of operation of these algorithms both in the discrete version and in the continuous one, with state-space approximation. The determined trajectory can be executed by a ship autopilot equipped with a multidimensional, nonlinear course and position controller.
EN
The paper presents the concept of an autonomous reference trajectory generation unit for a vessel motion control system. The reference trajectory is determined based on information about the target position of the vessel, provided by the operator, and the navigational situation, determined by the navigational equipment fitted on the vessel. The key data processing concept of the system relies on reinforcement learning algorithms. The paper presents the principles of selected RL algorithms in both discrete and continuous domains. The trajectory determined in the proposed module can be realized in a marine autopilot equipped with a multidimensional, nonlinear controller of the course and position.
11
Content available Epoch-incremental reinforcement learning algorithms
EN
In this article, a new class of epoch-incremental reinforcement learning algorithms is proposed. In the incremental mode, the fundamental TD(0) or TD(λ) algorithm is performed and an environment model is created. In the epoch mode, on the basis of the environment model, the distances of past-active states to the terminal state are computed. These distances and the terminal state reinforcement signal are used to improve the agent's policy.
12
EN
The basic reinforcement learning algorithms, such as Q-learning or Sarsa, are characterized by a short, inexpensive single learning step; however, the number of epochs necessary to achieve the optimal policy is not acceptable. There are many methods that reduce the number of necessary epochs, like TD(λ > 0), Dyna or prioritized sweeping, but their computational time is considerable. This paper proposes a combination of the Q-learning algorithm performed in the incremental mode with a method of acceleration executed in the epoch mode. This acceleration is based on the distance to the terminal state. This approach ensures the maintenance of a short single-learning-step time and a high efficiency comparable with Dyna or prioritized sweeping. The proposed algorithm is compared with Q(λ)-learning, Dyna-Q and prioritized sweeping in experiments on three grid worlds. The learning time and the number of epochs necessary to reach the terminal state are used to evaluate the efficiency of the compared algorithms.
PL
The efficiency of the basic reinforcement learning algorithms Q-learning and Sarsa, measured by the number of trials necessary to obtain the optimal policy, is relatively low, which limits their practical applicability. The advantage of these basic algorithms is, however, their low computational complexity: the execution time of a single learning step is small enough that they work very well in online control systems. Methods of accelerating reinforcement learning, which reach the absorbing state after a considerably smaller number of trials than the basic algorithms, usually increase the computational complexity and lengthen the execution time of a single learning step. The most commonly used acceleration by temporal differences, TD(λ > 0), involves additional memory elements, the eligibility traces. The execution time of a single learning step in such an algorithm grows considerably because, unlike in the basic algorithm, where only the action-value function of the active state was updated, here the update is carried out for all states. More efficient acceleration methods, such as Dyna or prioritized sweeping, also belong to the class of memory-based algorithms, and their main idea is reinforcement learning based on an adaptive model of the environment. These methods reach the absorbing state in a much smaller number of trials; however, due to the increased computational complexity, the execution time of a single learning step becomes a significant factor limiting their use in systems with a large number of states. The essence of these algorithms is performing a fixed number of updates of the action-value function for states active in the past; in Dyna these are randomly selected states, whereas in prioritized sweeping they are states ordered by the magnitude of the update error. This article proposes an epoch-incremental reinforcement learning algorithm whose main idea is to combine the basic incremental Q-learning algorithm with an acceleration algorithm executed epoch-wise. The proposed epoch-learning method relies mainly on the actual value of the reinforcement signal observed upon transition to the absorbing state, which is then propagated backwards exponentially as a function of the estimated distance from the absorbing state. With this approach, a short single-step learning time is obtained in the incremental mode (Tab. 2), while maintaining an efficiency typical of Dyna or prioritized sweeping (Tab. 1 and Fig. 5).
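A condensed sketch of the epoch-mode acceleration described above: after a standard incremental Q-learning episode, breadth-first distances to the terminal state are computed on the learned transition model, and the terminal reward is propagated backwards with an exponential discount of that distance. The grid layout, parameters, and the exact update rule are assumptions rather than the authors' code.

```python
from collections import deque

GAMMA = 0.9

def epoch_update(Q, model, terminal, r_terminal):
    """Epoch mode: propagate the terminal reinforcement back along model-estimated distances."""
    # Reverse the learned deterministic model: next_state -> [(state, action), ...]
    predecessors = {}
    for (s, a), s_next in model.items():
        predecessors.setdefault(s_next, []).append((s, a))
    # Breadth-first search from the terminal state to estimate distances.
    dist = {terminal: 0}
    frontier = deque([terminal])
    while frontier:
        s_next = frontier.popleft()
        for s, a in predecessors.get(s_next, []):
            if s not in dist:
                dist[s] = dist[s_next] + 1
                frontier.append(s)
            # Exponentially discounted terminal reward, keyed to the estimated distance.
            target = r_terminal * GAMMA ** (dist[s_next] + 1)
            Q[(s, a)] = max(Q.get((s, a), 0.0), target)
    return Q

# Tiny usage example: a 3-state chain s0 -a-> s1 -a-> s2 (terminal, reward 1).
model = {("s0", "a"): "s1", ("s1", "a"): "s2"}
print(epoch_update({}, model, terminal="s2", r_terminal=1.0))
```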
EN
This paper presents the application of reinforcement learning algorithms to the task of autonomous determination of the ship trajectory during in-harbour and harbour-approaching manoeuvres. The authors used the Markov decision process formalism as the background for the algorithm presentation. Two versions of RL algorithms were tested in the simulations: a discrete one (Q-learning) and a continuous one (Least-Squares Policy Iteration). The results show that in both cases the ship trajectory can be found. However, the discrete Q-learning algorithm suffered from many limitations (mainly the curse of dimensionality) and is practically not applicable to the examined task. On the other hand, LSPI gave promising results. To be fully operational, the proposed solution should be extended by taking into account ship heading and velocity and by coupling it with an advanced multi-variable controller.
EN
We present a self-adaptive hyper-heuristic capable of solving static and dynamic instances of the capacitated vehicle routing problem. The hyper-heuristic manages a generic sequence of constructive and perturbative low-level heuristics, which are gradually applied to construct or improve partial routes. We present some design considerations to allow the collaboration among heuristics, and to find the most promising sequence. The search process is carried out by applying a set of operators which construct new sequences of heuristics, i.e., solving strategies. We have used a general, low-computational-cost parameter control strategy, based on simple reinforcement learning ideas, to assign non-arbitrary reward/penalty values and guide the selection of operators. Our approach has been tested using some standard state-of-the-art benchmarks, which present different topologies and dynamic properties, and we have compared it with previous hyper-heuristics and several well-known methods proposed in the literature. The experimental results have shown that our approach is able to attain quite stable and good quality solutions after solving various problems, and to adapt to dynamic scenarios more naturally than other methods. Particularly, in the dynamic case we have obtained high-quality solutions when compared with other algorithms in the literature. Thus, we conclude that our self-adaptive hyper-heuristic is an interesting approach for solving vehicle routing problems as it has been able (1) to guide the search for appropriate operators, and (2) to adapt itself to particular states of the problem by choosing a suitable combination of heuristics.
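A small sketch of the reward/penalty parameter-control idea used to guide operator selection: each operator keeps a weight that is reinforced or penalized according to whether its last application improved the incumbent solution, and selection is a weight-proportional roulette. The operator names, update constants, and the dummy improvement test are assumptions, not the authors' exact scheme.

```python
import random

class OperatorSelector:
    """Reinforcement-style operator selection for a hyper-heuristic."""

    def __init__(self, operators, reward=1.0, penalty=0.5, min_weight=0.1):
        self.weights = {op: 1.0 for op in operators}
        self.reward, self.penalty, self.min_weight = reward, penalty, min_weight

    def select(self):
        ops, w = zip(*self.weights.items())
        return random.choices(ops, weights=w, k=1)[0]

    def feedback(self, op, improved):
        """Reward the operator if it improved the incumbent solution, otherwise penalize it."""
        if improved:
            self.weights[op] += self.reward
        else:
            self.weights[op] = max(self.min_weight, self.weights[op] - self.penalty)

# Usage with dummy operators standing in for routing heuristics.
selector = OperatorSelector(["swap", "insert", "two_opt"])
cost = 100.0
for _ in range(50):
    op = selector.select()
    new_cost = cost - random.uniform(-1, 3)      # fake effect of applying the chosen heuristic
    selector.feedback(op, improved=new_cost < cost)
    cost = min(cost, new_cost)
print(selector.weights, round(cost, 2))
```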
EN
Hybridization of global and local search techniques has already produced promising results in the fields of optimization and machine learning. It is commonly presumed that approaches employing this idea, like memetic algorithms combining evolutionary algorithms and local search, benefit from the complementarity of the constituent methods and maintain the right balance between exploration and exploitation of the search space. While such extensions of evolutionary algorithms have been intensively studied, hybrids of local search with coevolutionary algorithms have not received much attention. In this paper we attempt to fill this gap by presenting Coevolutionary Temporal Difference Learning (CTDL), which works by interlacing global search provided by competitive coevolution with local search by means of temporal difference learning. We verify CTDL by applying it to the board game of Othello, where it learns board evaluation functions represented by a linear weighted piece counter architecture. The results of a computational experiment show the superiority of CTDL compared to the coevolutionary algorithm and temporal difference learning alone, both in terms of the performance of the elaborated strategies and the computational cost. To further exploit CTDL's potential, we extend it with an archive that keeps track of selected well-performing solutions found so far and uses them to improve search convergence. The overall conclusion is that the fusion of various forms of coevolution with a gradient-based local search can be highly beneficial and deserves further study.
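A minimal sketch of the temporal difference ingredient of CTDL: a TD(0) update of a linear evaluation function over board features (a weighted piece counter in the Othello case). The feature vectors and the game loop are stubbed out with random data, and the learning constants are assumptions; the coevolutionary part is not shown.

```python
import numpy as np

rng = np.random.default_rng(42)

N_FEATURES = 64            # one weight per board square in a weighted piece counter
weights = np.zeros(N_FEATURES)
ALPHA, GAMMA = 0.01, 1.0

def evaluate(features, w):
    """Linear board evaluation: dot product of square-occupancy features and weights."""
    return float(np.dot(w, features))

# One episode of TD(0) over a stubbed sequence of board feature vectors (+1/-1/0 per square).
boards = [rng.integers(-1, 2, size=N_FEATURES) for _ in range(30)]
final_reward = 1.0                                # e.g. a win for the learning player
for t in range(len(boards) - 1):
    v, v_next = evaluate(boards[t], weights), evaluate(boards[t + 1], weights)
    td_error = GAMMA * v_next - v                 # no intermediate rewards in Othello
    weights += ALPHA * td_error * boards[t]
# Terminal update towards the actual game outcome.
weights += ALPHA * (final_reward - evaluate(boards[-1], weights)) * boards[-1]
print(weights[:5])
```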
EN
The paper presents the application of reinforcement learning to motion learning of an autonomous mobile robot in an unknown, stationary environment. The robot's movement policy was represented by a probabilistic RBF neural network. As the learning process was very slow, or even infeasible for complicated environments, some improvements are presented which turned out to be very effective in most cases.
PL
The article presents the application of reinforcement learning to finding a motion strategy for an autonomous mobile robot in an unknown, stationary environment. The robot's task is to reach a given, known target point along the shortest possible path and without collisions with obstacles. The robot's state is defined by its position in a fixed coordinate system (associated with the environment), while the action is determined as the commanded direction of motion. The robot's policy is defined indirectly by a value function, represented by an RBF artificial neural network. Networks of this type are easy to train, and additionally their parameters allow a convenient interpretation of the realized mapping. Since in the general case robot learning is very difficult, and in complicated environments practically impossible, the article presents several proposals for improving it. Experiments are described: with negative reinforcements generated by obstacles, with heuristic ways of suggesting correct behaviors to the robot in "difficult" situations, and with gradual learning. The experiments showed that the best learning results were obtained by combining the last two techniques.
17
Content available remote Approximate dynamic programming in robust tracking control of wheeled mobile robot
EN
In this work, a novel approach to designing an on-line tracking controller for a nonholonomic wheeled mobile robot (WMR) is presented. The controller consists of a nonlinear neural feedback compensator, a PD control law and a supervisory element, which assure stability of the system. The neural network for feedback compensation is learned through approximate dynamic programming (ADP). To obtain stability in the learning phase and robustness in the face of disturbances, an additional control signal derived from the Lyapunov stability theorem, based on variable structure systems theory, is provided. Verification of the proposed control algorithm was carried out on a Pioneer-2DX wheeled mobile robot and confirmed the assumed behavior of the control system.
PL
The paper presents a new approach to the problem of tracking control of a two-wheeled mobile robot. The algorithm is based on the actor-critic reinforcement learning method and does not require preliminary training; it works on-line without knowledge of the robot model. The control-generating element (actor, ASE) and the element generating the internal reinforcement signal (critic, ACE) are implemented as an artificial neural network. The presented control algorithm was verified on a real object, the two-wheeled mobile robot Pioneer-2DX. The experiments confirmed the correctness of the adopted solution.
EN
A current and important question for the Internet is how to assure quality of service. Several protocols have been proposed to support different classes of network traffic. The open research problem is how to divide the available bandwidth among those traffic classes to support their quality-of-service requirements. A major challenge in this area is developing algorithms that can handle situations in which we do not know the traffic intensities in all traffic classes in advance, or those intensities change with time. In this paper we formulate the problem and then propose a reinforcement learning algorithm to solve it. The proposed reinforcement function is evaluated and compared with other methods.
19
EN
The paper presents an application of reinforcement learning to searching for an optimal policy in an exploration problem (also known as the Jeep problem). The continuous problem is unrealistic, so the main work concentrated on the discrete Jeep problem. The influence of the main learning parameters on the learning speed is examined and described, and some exemplary policies found for different problem conditions are presented.
20
Content available remote Concepts of learning in assembler encoding
EN
Assembler Encoding (AE) represents an Artificial Neural Network (ANN) in the form of a simple program called an Assembler Encoding Program (AEP). The task of the AEP is to create the so-called Network Definition Matrix (NDM), which maintains all the information necessary to construct the ANN. To generate AEPs, and in consequence ANNs, genetic algorithms are used. Using evolution is one of the methods to create optimal ANNs. Another method is learning. During learning, the parameters of the ANN, e.g. the weights of inter-neuron connections, adjust to the task performed by the ANN. Usually, combining both methods accelerates the generation of optimal ANNs. The paper addresses the problem of the simultaneous use of evolution and learning in AE.