Article title

Action based activities prediction by considering human-object relation

Identifiers
Title variants
Publication languages
EN
Abstracts
EN
For effective human-robot collaboration, it is important that an assistive robot is able to forecast human actions. A new action recognition method for anticipating human activities on the basis of visual observation is presented. The spatio-temporal human-object relation is analyzed taking into account so-called affordances, and the action features are defined. We also deliver an RGB-D activity dataset obtained using the new Senz3D vision sensor. To demonstrate the effectiveness of the proposed approach, we discuss experiments summarizing the anticipation results obtained using two different datasets.
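The record itself contains no code, but as a purely illustrative sketch of the kind of spatio-temporal human-object relation feature the abstract describes, the Python fragment below computes per-frame hand-object distance, approach speed, and a crude time-to-contact from a tracked hand trajectory. All names and numeric choices (hand_object_features, the toy trajectory, the 30 fps default) are hypothetical assumptions, not taken from the paper.

```python
import numpy as np

def hand_object_features(hand_traj, obj_pos, fps=30.0):
    """Per-frame hand-object relation features (illustrative only):
    Euclidean distance, signed approach speed, and time-to-contact."""
    hand_traj = np.asarray(hand_traj, dtype=float)  # shape (T, 3): tracked hand positions
    obj_pos = np.asarray(obj_pos, dtype=float)      # shape (3,): object position
    dist = np.linalg.norm(hand_traj - obj_pos, axis=1)  # distance at each frame
    speed = np.gradient(dist) * fps                     # d(dist)/dt; negative while approaching
    # Time-to-contact is only meaningful while the hand approaches the object.
    ttc = np.where(speed < 0.0, dist / np.maximum(-speed, 1e-6), np.inf)
    return np.stack([dist, speed, ttc], axis=1)         # shape (T, 3) feature matrix

if __name__ == "__main__":
    # Toy example: hand moving on a straight line toward a cup at (0.5, 0.0, 0.2).
    t = np.linspace(0.0, 1.0, 30)[:, None]
    hand = (1.0 - t) * np.array([0.0, 0.3, 0.4]) + t * np.array([0.5, 0.0, 0.2])
    feats = hand_object_features(hand, obj_pos=[0.5, 0.0, 0.2])
    print(feats[:3])  # shrinking distance with negative speed hints at a reaching action
```

In the cited works (e.g. [1], [8]) features of this kind are combined with object affordances and fed to a probabilistic model that anticipates the next activity; the sketch above stops at feature extraction.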
Year
Volume
Pages
343--352
Physical description
Bibliography: 15 items, figures, tables.
Authors
author
  • Faculty of Power and Aeronautical Engineering, Warsaw University of Technology, ul. Nowowiejska 24, 00-665 Warsaw, Poland
author
  • Faculty of Power and Aeronautical Engineering, Warsaw University of Technology, ul. Nowowiejska 24, 00-665 Warsaw, Poland
Bibliography
  • [1] H. S. Koppula and A. Saxena, "Anticipating human activities using object affordances for reactive robotic response," IEEE Trans. Patt. Analysis and Machine Intelligence, vol. 38, no. 1, pp. 14-29, 2016.
  • [2] M. Ryoo, T. J. Fuchs, L. Xia, J. K. Aggarwal, and L. Matthies, "Robot-centric activity prediction from first-person videos: What will they do to me?," in Proc. 10th Int. Conf. Human-Robot Interaction, pp. 295-302, ACM, 2015.
  • [3] F. Hoeller, D. Schulz, M. Moors, and F. E. Schneider, "Accompanying persons with a mobile robot using motion prediction and probabilistic roadmaps," in IEEE/RSJ Int. Conf. Intelligent Robots and Systems, pp. 1260-1265, 2007.
  • [4] V. Dutta and T. Zielinska, "Predicting the intention of human activities for real-time human-robot interaction (HRI)," in Int. Conf. Social Robotics, pp. 723-734, Springer, 2016.
  • [5] W. Choi, K. Shahid, and S. Savarese, "Learning context for collective activity recognition," in IEEE Conf. Computer Vision and Pattern Recognition, pp. 3273-3280, 2011.
  • [6] D. Han, L. Bo, and C. Sminchisescu, "Selection and context for action recognition," in 12th IEEE Int. Conf. Computer Vision, pp. 1933-1940, 2009.
  • [7] A. Gupta, A. Kembhavi, and L. S. Davis, "Observing human-object interactions: Using spatial and functional compatibility for recognition," IEEE Trans. Patt. Analysis and Machine Intelligence, vol. 31, no. 10, pp. 1775-1789, 2009.
  • [8] V. Dutta and T. Zielinska, "Predicting human actions taking into account object affordances," Journal of Intelligent & Robotic Systems, pp. 1-17, 2018.
  • [9] H. S. Koppula, R. Gupta, and A. Saxena, "Learning human activities and object affordances from RGB-D videos," The Int. J. Robotics Research, vol. 32, no. 8, pp. 951-970, 2013.
  • [10] J. C. Niebles, C.-W. Chen, and L. Fei-Fei, "Modeling temporal structure of decomposable motion segments for activity classification," in European Conf. Computer Vision, pp. 392-405, Springer, 2010.
  • [11] S. Sadanand and J. J. Corso, "Action bank: A high-level representation of activity in video," in IEEE Conf. on Computer Vision and Pattern Recognition, pp. 1234-1241, 2012.
  • [12] X. Yang and Y. L. Tian, "Eigenjoints-based action recognition using naive-bayes-nearest-neighbor," in IEEE Computer Society Conf. Computer Vision and Pattern Recognition Workshops, pp. 14-19, 2012.
  • [13] Y. Kim, J. Chen, M.-C. Chang, X. Wang, E. M. Provost, and S. Lyu, "Modeling transition patterns between events for temporal human action segmentation and classification," in 11th IEEE Int. Conf. and Workshops Automatic Face and Gesture Recognition, vol. 1, pp. 1-8, 2015.
  • [14] V. Dutta and T. Zielinska, "Action prediction based on physically grounded object affordances in human-object interactions," in 11th IEEE Int. Workshop Robot Motion and Control (RoMoCo), pp. 47-52, 2017.
  • [15] H. Wu, W. Pan, X. Xiong, and S. Xu, "Human activity recognition based on the combined SVM and HMM," in IEEE Int. Conf. Information and Automation, pp. 219-224, 2014.
Notes
Record compiled under agreement 509/P-DUN/2018 from funds of the Ministry of Science and Higher Education (MNiSW) allocated to science dissemination activities (2019).
Document type
YADDA identifier
bwmeta1.element.baztech-eda84515-2f23-43ef-88e7-ca879d90039d