Search results
Searched in keywords: behavior-based control
Results found: 3

Result 1 (EN)
This work addresses the development of a distributed switching control strategy that drives a group of mobile robots in both forward and backward motion in a tightly coupled geometric pattern, as a solution to the deadlock situations that arise while navigating an unknown environment. A generalized closed-loop tracking controller based on a leader-referenced model keeps the robots in formation while they navigate the environment. A tracking controller that uses a simple geometric approach and the Instantaneous Centre of Rotation (ICR) to drive the robots backward during deadlock situations is developed and presented. State-based modelling is used to model the behaviors/motion states of the proposed approach in the MATLAB/Stateflow environment. Simulation studies are carried out to test the performance and error dynamics of the proposed approach, combining formation keeping, navigation, and backward motion of the robots in all geometric patterns of formation, and the results are discussed.
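
The abstract does not give the controller equations, so the sketch below only illustrates the kind of leader-referenced kinematic tracking law such a formation scheme typically relies on. The unicycle model, the offset convention, the Kanayama-style gains k_x, k_y, k_th, and the function name follower_cmd are assumptions for illustration, not the paper's implementation.

import numpy as np

def follower_cmd(leader_pose, leader_cmd, follower_pose,
                 offset=(-0.5, 0.5), k_x=1.0, k_y=2.0, k_th=1.5):
    """One step of a leader-referenced formation tracking law for a
    unicycle follower. `offset` is the desired (x, y) position of the
    follower expressed in the leader frame; gains are illustrative."""
    xl, yl, thl = leader_pose          # leader pose in the world frame
    vl, wl = leader_cmd                # leader linear/angular velocity
    xf, yf, thf = follower_pose        # follower pose in the world frame

    # Desired follower position: the offset rotated into the world frame.
    xd = xl + offset[0] * np.cos(thl) - offset[1] * np.sin(thl)
    yd = yl + offset[0] * np.sin(thl) + offset[1] * np.cos(thl)

    # Tracking error expressed in the follower frame.
    ex = np.cos(thf) * (xd - xf) + np.sin(thf) * (yd - yf)
    ey = -np.sin(thf) * (xd - xf) + np.cos(thf) * (yd - yf)
    eth = np.arctan2(np.sin(thl - thf), np.cos(thl - thf))

    # Standard kinematic tracking law; in the paper's switching scheme a
    # separate ICR-based backward-motion controller would take over (with
    # negative linear velocity) when the state machine detects a deadlock.
    v = vl * np.cos(eth) + k_x * ex
    w = wl + vl * (k_y * ey + k_th * np.sin(eth))
    return v, w
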
Result 2 (EN)
The paper presents FraDIA, a framework facilitating the creation of vision systems that can operate as stand-alone applications as well as serve as vision subsystems for robotic controllers. The article describes the motivation behind the tool's creation, its structure, and a method of integration with the MRROC++ system, enabling the development of robot controllers with visual feedback. The usefulness of the framework is demonstrated on the example of a robot playing checkers. In this application, FraDIA was used to implement two different vision subsystems, and the control system exhibited two behaviors that used visual information in entirely different ways: a passive one, responsible for monitoring the state of the game, and an active one, in which vision was used during manipulator motion to localize the pawn to be grasped. Owing to the complexity of the system, a specification method based on agents and transition functions was used. The method, consisting of mathematical formulas supplemented by data flow diagrams, enables the reader to understand both the system structure and its behavior.
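
FraDIA's and MRROC++'s actual interfaces are not given in the abstract, so the snippet below only illustrates the general pattern it describes: a vision subsystem with a passive and an active behavior and an explicit transition function between them. The enum values, the flag names grasp_requested and pawn_localized, and the function transition are hypothetical, not FraDIA's API.

from enum import Enum, auto

class Behavior(Enum):
    PASSIVE = auto()   # monitor the state of the checkers board
    ACTIVE = auto()    # localize the pawn to grasp while the arm moves

def transition(behavior: Behavior, grasp_requested: bool,
               pawn_localized: bool) -> Behavior:
    # Illustrative transition function: switch to the active behavior when
    # the controller requests a grasp, and return to passive monitoring
    # once the target pawn has been localized.
    if behavior is Behavior.PASSIVE and grasp_requested:
        return Behavior.ACTIVE
    if behavior is Behavior.ACTIVE and pawn_localized:
        return Behavior.PASSIVE
    return behavior
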
Result 3 (EN)
Three-dimensional scene reconstruction is an important tool in many applications, ranging from computer graphics to mobile robot navigation. In this paper, we focus on the robotics application, where the goal is to estimate the 3D rigid motion of a mobile robot and to reconstruct a dense three-dimensional representation of the scene. The reconstruction problem can be subdivided into a number of subproblems. First, the egomotion has to be estimated: the camera (or robot) motion parameters are iteratively estimated by reconstruction of the epipolar geometry. Second, a dense depth map is calculated by fusing sparse depth information from point features with dense motion information from the optical flow in a variational framework. This depth map corresponds to a point cloud in 3D space, which can then be converted into a model from which information for the robot navigation algorithm is extracted. Here, we present an integrated approach to the structure and egomotion estimation problem.
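
The paper's iterative estimation and variational fusion are not reproduced here; the sketch below only shows a standard two-view version of the first step (feature matching, essential-matrix estimation, pose recovery, and sparse triangulation) using OpenCV. The ORB detector, the function name estimate_egomotion, and all parameter values are assumptions for illustration, not the authors' pipeline.

import cv2
import numpy as np

def estimate_egomotion(img1, img2, K):
    """Sketch of the egomotion step: match point features between two
    frames, recover the essential matrix, and decompose it into the
    relative rotation R and (unit-scale) translation t of the camera."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Epipolar geometry: essential matrix with RANSAC, then pose recovery.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Sparse depth: triangulate the inlier correspondences; in the paper
    # such sparse points would seed the dense, optical-flow-based,
    # variational depth estimation.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    inl1, inl2 = pts1[mask.ravel() == 1], pts2[mask.ravel() == 1]
    pts4d = cv2.triangulatePoints(P1, P2, inl1.T, inl2.T)
    points3d = (pts4d[:3] / pts4d[3]).T
    return R, t, points3d
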