Article title

A suite of tools supporting data streams annotation and its use in experiments with hand gesture recognition

Identifiers
Title variants
PL
Zestaw narzędzi wspomagających adnotowanie strumieni danych oraz jego zastosowanie w eksperymentach dotyczących rozpoznawania gestów dłoni
Publication languages
EN
Abstracts
EN
In this paper we present the concept and our implementation of a suite of tools supporting the annotation of sequential data. These tools are useful in experiments involving multimedia data sequences. We present two example usage scenarios of these tools in the process of building a gesture recognition system.
PL
In this paper we present the concept and our implementation of a suite of tools supporting the annotation of sequential data. The developed tools are useful in experiments involving multimedia data sequences. Two example usage scenarios of these tools in the process of building a hand gesture recognition system are presented.
Journal
Year
Pages
89-107
Physical description
Bibliography: 30 items
Authors
  • Rzeszow University of Technology, Faculty of Electrical and Computer Engineering, Department of Computer and Control Engineering
author
  • Rzeszów University of Technology, Department of Computer and Control Engineering, Faculty of Electrical and Computer Engineering W. Pola 2, 35-959 Rzeszów, Poland
Bibliography
  • 1. Aubert O., Prié Y., Schmitt D.: Advene as a tailorable hypervideo authoring tool: a case study. Proceedings of the 2012 ACM Symposium on Document Engineering, 2012, pp. 79-82.
  • 2. Aubert O., Prié Y., Canellas C.: Leveraging video annotations in video-based e-learning. Proceedings of International Conference on Computer Supported Education, 2014, pp. 479-485.
  • 3. Bhat M., Olszewska I.-J.: DALES: Automated tool for detection, annotation, labelling, and segmentation of multiple objects in multi-camera video streams. Proceedings of the Third Workshop on Vision and Language, 2014, pp. 87-94.
  • 4. Bradski G., Kaehler A.: Learning OpenCV: Computer vision with the OpenCV library. O'Reilly Media Inc., 2008.
  • 5. Busjahn T., Schulte C., Sharif B., Simon, Begel A., Hansen M., Bednarik R., Orlov P., Ihantola P., Shchekotova G., Antropova M.: Eye tracking in computing education. Proceedings of the Tenth Annual Conference on International Computing Education Research, 2014, pp. 3-10.
  • 6. Chang W.-L., Šabanović S., Huber L.: Situated analysis of interactions between cognitively impaired older adults and the therapeutic robot PARO. Social Robotics, Lecture Notes in Computer Science, 2013, vol. 8239, pp. 371-380.
  • 7. Cooperrider K.: Body-directed gestures: Pointing to the self and beyond. Journal of Pragmatics, 2014, vol. 71, pp. 1-16.
  • 8. Crasborn O., Sloetjes H.: Enhanced ELAN functionality for sign language corpora. Proceedings of Language Resources and Evaluation Conference (LREC'08), 2008, pp. 39-42.
  • 9. Dasiopoulou S., Giannakidou E., Litos G., Malasioti P., Kompatsiaris Y.: A survey of semantic image and video annotation tools. Knowledge-Driven Multimedia Information Extraction and Ontology Evolution, Lecture Notes in Computer Science, 2011, vol. 6050, pp. 196-239.
  • 10. Davidsen J., Vanderlinde R.: Researchers and teachers learning together and from each other using video-based multimodal analysis. British Journal of Educational Technology, 2014, vol. 35, no. 3, pp. 451-460.
  • 11. Gamma E., Helm R., Johnson R., Vlissides J.: Design patterns: Elements of reusable object-oriented software. Addison-Wesley Longman Publishing Co., Inc., Boston 1995.
  • 12. Heloir A., Neff M.: Exploiting motion capture for virtual human animation: Data collection and annotation visualization. Proceedings of the Workshop on Multimodal Corpora: Advances in Capturing, Coding and Analyzing Multimodality, 2010.
  • 13. Hyde J., Kiesler S.-B., Hodgins J.-K., Carter E.-J.: Conversing with children: Cartoon and video people elicit similar conversational behaviors. Proceedings of Conference on Human Factors in Computing Systems, 2014, pp. 1787-1796.
  • 14. Jongejan B.: Automatic annotation of face velocity and acceleration in Anvil. Proceedings of International Conference on Language Resources and Evaluation (LREC'12), 2012, pp. 201-208.
  • 15. Kipp M.: ANVIL - A generic annotation tool for multimodal dialogue. Conference of the International Speech Communication Association, 2001, pp. 1367-1370.
  • 16. Kipp M.: Gesture generation by imitation - from human behavior to computer character animation. Dissertation.com, Boca Raton 2004.
  • 17. Kipp M.: Spatiotemporal coding in ANVIL. Proceedings of Language Resources and Evaluation Conference (LREC'08), 2008, pp. 2042-2045.
  • 18. Kipp M., von Hollen L.-F., Hrstka M.-C., Zamponi F.: Single-person and multi-party 3D visualizations for nonverbal communication analysis. Proceedings of Language Resources and Evaluation Conference (LREC'14), 2014, pp. 3393-3397.
  • 19. MESA Imaging SR4000, http://www.adept.net.au/cameras/Mesa/SR4000.shtml (accessed 2017.05.09).
  • 20. Ng-Thow-Hing V., Pengcheng L., Okita S.: Synchronized gesture and speech production for humanoid robots. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2010, pp. 4617-4624.
  • 21. Ooko R., Ishii R., Nakano Y.-I.: Estimating a user's conversational engagement based on head pose information. Intelligent Virtual Agents, Lecture Notes in Computer Science, vol. 6895, 2011, pp. 262-268.
  • 22. Russell B.-C., Torralba A., Murphy K.-P., Freeman W.-T.: LabelMe: A database and web-based tool for image annotation. International Journal of Computer Vision, 2008, vol. 77, no. 1, pp. 157-173.
  • 23. Rusu R.-B., Cousins S.: 3D is here: Point Cloud Library (PCL). IEEE International Conference on Robotics and Automation (ICRA), 2011, pp. 1-4.
  • 24. Rusu R.-B., Bradski G., Thibaux R., Hsu J.: Fast 3D recognition and pose using the viewpoint feature histogram. Proceedings of the 23rd IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2010, pp. 2155-2162.
  • 25. Sargent G., Hanna P., Nicolas H.: Segmentation of music video streams in music pieces through audio-visual analysis. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2014, pp. 724-728.
  • 26. Sloetjes H., Wittenburg P.: Annotation by category - ELAN and ISO DCR. Proceedings of Language Resources and Evaluation Conference (LREC'08), 2008, pp. 816-820.
  • 27. Tseng B., Ching-Yung L., Smith J.: Video personalization and summarization system. IEEE Workshop on Multimedia Signal Processing, 2002, pp. 424-427.
  • 28. Uebersax D., Gall J., Van den Bergh M., Van Gool L.: Real-time sign language letter and word recognition from depth data. IEEE International Conference on Computer Vision Workshops, 2011, pp. 383-390.
  • 29. VIA - Video Image Annotation Tool, http://via-tool.sourceforge.net (accessed 2017.05.09).
  • 30. Wolfe R., Mcdonald J., Berke L., Stumbo M.: Expanding n-gram analytics in ELAN and a case study for sign synthesis. Proceedings of Language Resources and Evaluation Conference (LREC'14), 2014, pp. 1880-1885.
Notes
Record compiled under agreement 509/P-DUN/2018 from funds of the Ministry of Science and Higher Education (MNiSW) allocated to science-dissemination activities (2018)
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-687d8d7b-f796-470e-a570-da3408784bff