Article title

Bayesian model for multimodal sensory information fusion in humanoid

Content
Identifiers
Title variants
Languages of publication
EN
Abstracts
EN
In this paper, a Bayesian model for bimodal sensory information fusion is presented. It is a simple, biologically plausible model of sensory fusion in the human brain. It is adopted in a humanoid robot to fuse the spatial information obtained from analyzing auditory and visual input, with the aim of increasing the accuracy of object localization. The Bayesian fusion model requires prior knowledge of the weights for the sensory systems. These weights can be determined from the standard deviation (SD) of the unimodal localization errors obtained in experiments. The performance of auditory and visual localization was tested under two conditions: fixation and saccade. The experimental results show that the Bayesian model did improve the accuracy of object localization. However, the fused position of the object is not accurate when both sensory systems are biased in the same direction.
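The fusion rule described in the abstract is the standard maximum-likelihood (flat-prior Bayesian) cue combination, in which each modality is weighted by the inverse of its error variance. A minimal sketch, assuming this standard formulation; the function name and the example numbers are illustrative, not taken from the paper:

```python
def fuse_estimates(x_aud, sd_aud, x_vis, sd_vis):
    """Fuse two unimodal position estimates (e.g. auditory and visual)
    by inverse-variance weighting. With a flat prior this is the
    maximum-likelihood Bayesian estimate of the object position."""
    w_aud = 1.0 / sd_aud ** 2          # weight = reliability of the auditory cue
    w_vis = 1.0 / sd_vis ** 2          # weight = reliability of the visual cue
    x_fused = (w_aud * x_aud + w_vis * x_vis) / (w_aud + w_vis)
    sd_fused = (1.0 / (w_aud + w_vis)) ** 0.5   # never larger than either input SD
    return x_fused, sd_fused

# The visual cue has the smaller SD, so the fused estimate lies
# closer to the visual position:
x, sd = fuse_estimates(x_aud=10.0, sd_aud=4.0, x_vis=12.0, sd_vis=2.0)
```

The fused SD is always smaller than either unimodal SD, which matches the reported accuracy improvement. The sketch also exposes the limitation noted in the abstract: the fused estimate is a weighted average of the two cues, so if both cues are biased in the same direction, the result inherits that bias.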
Keywords
Authors
  • Centre for Robotics and Electrical Systems, Multimedia University, Jalan Ayer Keroh Lama, 75450 Melaka, Malaysia, kin2031@yahoo.com
Bibliography
  • [1] Knill D.C., "Bayesian models of sensory cue integration". In: Kenji Doya, Shin Ishii, Alexandre Pouget, Rajesh P. N. Rao, Bayesian Brain: Probabilistic Approaches to Neural Coding, The MIT Press, Cambridge, 2007, pp. 189-206.
  • [2] Binda P., Bruno A., Burr D.C., Morrone M.C., "Fusion of visual and auditory stimuli during saccades: a Bayesian explanation for perisaccadic distortions". The Journal of Neuroscience, vol. 27, 2007, pp. 8525-8532.
  • [3] Sophie Deneve, Alexandre Pouget, "Bayesian multisensory integration and cross-modal spatial links". Journal of Physiology-Paris, vol. 98, 2004, pp. 249-258.
  • [4] Burr D.C., Alais D., "Combining visual and auditory information". Progress in Brain Research, vol. 155, 2006, pp. 243-258.
  • [5] Battaglia P.W., Jacobs R.A., Aslin R.N., "Bayesian integration of visual and auditory signals for spatial localization". Journal of the Optical Society of America, vol. 20, 2003, pp. 1391-1397.
  • [6] Bolognini N., Rasi F., Làdavas E., "Visual localization of sounds". Neuropsychologia, vol. 43, 2005, pp. 1655-1661.
  • [7] Sommer K.-D., Kuhn O., Puente Leon F., Bernd R.L. Siebert, "A Bayesian approach to information fusion for evaluating the measurement uncertainty". Robotics and Autonomous Systems, vol. 57, 2009, pp. 339-344.
  • [8] Hackett J.K., Shah M., "Multisensor fusion: a perspective". In: Proc. of IEEE International Conference on Robotics and Automation, vol. 2, 1990, pp. 1324-1330.
  • [9] Wei Kin Wong, Tze Ming Neoh, Chu Kiong Loo, Chuan Poh Ong, "Bayesian fusion of auditory and visual spatial cues during fixation and saccade in humanoid robot". Lecture Notes in Computer Science, vol. 5506, 2008, pp. 1103-1109.
  • [10] Yong-Ge Wu, Jing-Yu Yang, Ke Liu, "Obstacle detection and environment modeling based on multisensor fusion for robot navigation". Artificial Intelligence in Engineering, vol. 10, 1996, pp. 323-333.
  • [11] Lopez-Orozco J.A., de la Cruz J.M., Besada E., Ruiperez P., "An asynchronous, robust, and distributed multisensor fusion system for mobile robots". The International Journal of Robotics Research, vol. 19, 2000, pp. 914-932.
  • [12] Toyama K., Horvitz E., "Bayesian modality fusion: probabilistic integration of multiple vision algorithms for head tracking". In: Proc. of 4th Asian Conference on Computer Vision, 2000.
  • [13] Kobayashi F., Arai F., "Sensor fusion system using recurrent fuzzy inference". Journal of Intelligent and Robotic Systems, vol. 23, 1998, pp. 201-216.
  • [14] Pao L.Y., O'Neil S.D., "Multisensor fusion algorithms for tracking". American Control Conference, 1993, pp. 859-863.
  • [15] Beltran-Gonzalez C., Sandini G., "Visual attention priming based on crossmodal expectations". In: IEEE/RSJ International Conference on Intelligent Robots and Systems, 2005, pp. 1060-1065.
  • [16] Bothe H.-H., Persson M., Biel L., Rosenholm M., "Multivariate sensor fusion by a neural network model". In: Proc. 2nd International Conference on Information Fusion, 1999, pp. 1094-1101.
  • [17] Hiromichi Nakashima, Noboru Ohnishi, "Acquiring localization ability by integration between motion and sensing". In: Proc. of IEEE International Conference on Systems, Man, and Cybernetics, vol. 2, 1999, pp. 312-317.
  • [18] Kording K.P., Beierholm U., Ma W.J., et al., "Causal Inference in Multisensory Perception". PLoS ONE, vol. 2, 2007, e943.
  • [19] Mishra J., Martinez A., Sejnowski T.J., Hillyard S.A., "Early Cross-Modal Interactions in Auditory and Visual Cortex underlie a Sound-Induced Visual Illusion". The Journal of Neuroscience, vol. 27, 2007, pp. 4120-4131.
  • [20] Yoshiaki Sakagami, Ryujin Watanabe, Chiaki Aoyama, et al., "The intelligent ASIMO: system overview and integration". In: IEEE/RSJ International Conference on Intelligent Robots and System, vol. 3, 2002, pp. 2478-2483.
  • [21] Metta G., Sandini G., Vernon D., et al., "The iCub humanoid robot: an open platform for research in embodied cognition". Performance Metrics for Intelligent Systems Workshop, Gaithersburg, USA, 2008.
  • [22] Metta G., Gasteratos A., Sandini G., "Learning to track colored objects with Log-Polar vision". Mechatronics, vol. 14, 2004, pp. 989-1006.
  • [23] Berton F., A brief introduction to log-polar mapping. LIRA-Lab, University of Genova, 2006.
  • [24] Natale L., Metta G., Sandini G., "Development of auditory-evoked reflexes: visuoacoustic cues integration in a binocular head". Robotics and Autonomous Systems, vol. 39, 2002, pp. 87-106.
  • [25] Kee K.C., Loo C.K., Khor S.E., Sound localization using generalized cross correlation: Performance Comparison of Pre-Filter. Center of Robotics and Automation, Multimedia University, 2008.
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-article-BUJ5-0030-0029