
Results found: 4

Search results
Searched in keywords: 3D perception
EN
In this paper, we propose a system for natural and intuitive interaction with a robot. Its purpose is to allow a person with no specialized knowledge of or training in robot programming to program a robotic arm. We utilize data from an RGB-D camera to segment the scene and detect objects. We also estimate the configuration of the operator's hand and the position of a visual marker to determine the intentions of the operator and the actions of the robot. To this end, we utilize trained neural networks and operations on the input point clouds. Voice commands are also used to define or trigger the execution of the motion. Finally, we performed a set of experiments to demonstrate the properties of the proposed system.
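The RGB-D processing described above starts from back-projecting the depth image into a 3D point cloud. A minimal sketch of that step using the standard pinhole camera model is shown below; the function name and intrinsic parameters (`fx`, `fy`, `cx`, `cy`) are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in metres) into an organised 3D point cloud
    with the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # shape (h, w, 3)
```

The resulting organised cloud keeps the image grid layout, which is convenient for the segmentation and hand-configuration estimation steps the abstract mentions.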
EN
This study investigates listeners' perceptual responses in audio-visual interactions concerning binaural spatial audio. Audio stimuli are presented to the listeners with or without accompanying visual cues. The subjective test participants are tasked with indicating the direction of the incoming sound while listening to the audio stimulus via loudspeakers or via headphones with a head-related transfer function (HRTF) plugin. First, the methodological assumptions and the experimental setup are described. Then, the results are presented and analysed using statistical methods. The results indicate that the headphone trials produced much higher perceptual ambiguity for the listeners than loudspeaker delivery. The influence of the visual modality dominates the audio-visual evaluation when loudspeaker playback is employed. Moreover, when a visual stimulus is present, the pattern of listener behaviour under headphone playback does not always match that observed with loudspeaker playback.
EN
In this paper we investigate methods for self-localisation of a walking robot equipped with the Kinect 3D active range sensor. The Iterative Closest Point (ICP) algorithm is considered as the basis for computing the robot rotation and translation between two viewpoints. As an alternative, a feature-based method for matching 3D range data is considered, using the Normal Aligned Radial Feature (NARF) descriptors. It is then shown that NARFs can be used to compute a good initial estimate for the ICP algorithm, resulting in convergent estimation of the sensor egomotion. Experimental results are provided.
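The ICP scheme summarised above alternates between nearest-neighbour correspondence search and a closed-form rigid alignment (Kabsch/SVD). A minimal point-to-point sketch is given below; this is a generic illustration with brute-force matching, not the authors' implementation, which additionally seeds ICP with a NARF-based initial estimate:

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t aligning P onto Q
    (Kabsch/SVD solution), as used inside each ICP iteration."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    # Sign correction guards against reflections.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def icp(P, Q, iters=20):
    """Point-to-point ICP: match each point of P to its nearest neighbour
    in Q, solve for the rigid motion, apply it, and repeat."""
    R_tot, t_tot = np.eye(3), np.zeros(3)
    for _ in range(iters):
        d = np.linalg.norm(P[:, None] - Q[None], axis=2)  # brute-force NN
        R, t = best_rigid_transform(P, Q[d.argmin(1)])
        P = P @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t  # accumulate the motion
    return R_tot, t_tot
```

Because nearest-neighbour matching is only reliable for small displacements, a good initial estimate (here, from NARF correspondences) is what makes the iteration converge to the true sensor egomotion.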
EN
The capability of a robot to autonomously follow a person greatly enhances its usability when humans and robots collaborate. In this paper we present a system for autonomous following of a walking person in outdoor environments while avoiding static and dynamic obstacles. The principal sensor is a 3D LIDAR with a resolution of 59x29 points. We present a combination of 3D features, motion detection and tracking with a sampling Bayesian filter, which results in reliable person detection for a low-resolution 3D LIDAR. The method is implemented on an outdoor robot with car-like steering, which incorporates the target's path into its own path planning around local obstacles. Experiments in outdoor areas validate the approach.
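The sampling Bayesian filter mentioned above is commonly realised as a particle filter cycling through predict, update, and resample steps. The sketch below shows one such cycle for tracking a 2D person position from a noisy LIDAR detection; the motion and measurement noise parameters are hypothetical placeholders, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, z, motion_std=0.1, meas_std=0.3):
    """One predict-update-resample cycle of a sampling Bayesian (particle)
    filter tracking a 2D position from a noisy detection z."""
    # Predict: diffuse particles with a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: weight each particle by the Gaussian measurement likelihood.
    d2 = ((particles - z) ** 2).sum(axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std**2)
    weights /= weights.sum()
    # Resample: draw a fresh, equally weighted particle set.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

Feeding each new 3D-feature detection through this cycle keeps a full posterior over the person's position, which is what makes the tracking robust at the low sensor resolution the abstract reports.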