

Article title

An augmented reality platform for wearable assisted living systems

Identifiers
Title variants
Publication languages
EN
Abstracts
EN
Assisted Living systems aim to improve the quality of life of their users by giving context-relevant guidance according to their location and the time of day. Most commercial systems depend on complex and expensive infrastructure, and even then their positional information is not precise enough to take advantage of modern interfacing devices such as head-up displays or pico-projectors. In this paper we describe the design of an Assisted Living system capable of offering such guidance through on-site Augmented Reality, without introducing changes in the environment and using off-the-shelf equipment. The system can exploit wearable computing devices with an embedded camera, determining the user's position and orientation with Visual Odometry and SLAM techniques. With this data, guidance can be displayed automatically, merged seamlessly into the environment using augmented reality. To assess the efficacy of our system, we performed pilot tests in health care centres and residences using a prototype of a low-cost wearable device which displays the information through an embedded pico-projector.
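The pipeline the abstract describes, estimating the camera pose with visual odometry/SLAM and then overlaying guidance anchored in the environment, ultimately reduces to projecting world-anchored annotations into the camera image. The sketch below is not the authors' implementation; it is a minimal illustration of that final step, assuming a standard pinhole camera model with intrinsics `K` and a world-to-camera pose `(R, t)` such as a SLAM front end would provide.

```python
import numpy as np

def project_annotation(point_world, R, t, K):
    """Project a world-anchored 3D annotation into pixel coordinates.

    R, t: world-to-camera rotation (3x3) and translation (3,), e.g. the
          pose estimated by a visual-odometry/SLAM front end.
    K:    3x3 pinhole intrinsics matrix.
    Returns (u, v) pixel coordinates, or None if the point lies behind
    the image plane.
    """
    p_cam = R @ point_world + t        # world frame -> camera frame
    if p_cam[2] <= 0:                  # behind the camera: not visible
        return None
    uvw = K @ p_cam                    # perspective projection
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

# Example: with an identity pose, a virtual sign 2 m straight ahead
# lands at the principal point of a 640x480 image.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
uv = project_annotation(np.array([0.0, 0.0, 2.0]),
                        np.eye(3), np.zeros(3), K)
```

An AR display (head-up display or pico-projector) would then render the guidance at the returned pixel location each frame, so the annotation appears fixed in the environment as the user moves.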
Year
Pages
56–79
Physical description
Bibliography: 43 items; figures, tables
Authors
  • Technical Institute of Castilla y León, Burgos, Spain
  • Technical Institute of Castilla y León, Burgos, Spain
Bibliography
  • [1] Ekahau: Ekahau Vision. http://www.ekahau.com/real-time-location-system/technology/ekahau-vision. Accessed: 2014-02-11.
  • [2] Ubisense: Ubisense RTLS Hardware. http://www.ubisense.net. Accessed: 2014-05-12.
  • [3] Sonitor Technologies: Sonitor Sense RTLS. http://www.sonitor.com/. Accessed: 2014-05-12.
  • [4] Engineering System Technologies: MotionSTAR Wireless LITE. http://www.est-kl.com/products/motion-tracking/ascension/motionstar-wireless-lite.html. Accessed: 2014-02-11.
  • [5] Engineering System Technologies: PowerTRAK 360. http://www.est-kl.com/en/products/motion-tracking/polhemus/powertrak-360.html. Accessed: 2015-05-31.
  • [6] Aviles-Lopez, E., Villanueva-Miranda, I., Garcia-Macias, J., Palafox-Maestre, L.: Taking Care of Our Elders through Augmented Spaces. In: Web Congress, 2009. LA-WEB ’09. Latin American, pp. 16–21. 2009.
  • [7] Kurz, D., Fedosov, A., Diewald, S., Guttier, J., Geilhof, B., Heuberger, M.: [Poster] Towards mobile augmented reality for the elderly. In: Mixed and Augmented Reality (ISMAR), 2014 IEEE International Symposium on, pp. 275–276. 2014.
  • [8] Mistry, P., Maes, P.: SixthSense: A Wearable Gestural Interface. In: ACM SIGGRAPH, pp. 11:1–11:1. ACM, New York, NY, USA, 2009.
  • [9] Lowe, D. G.: Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vision, 60(2), pp. 91–110, 2004. ISSN 0920-5691.
  • [10] Bay, H., Tuytelaars, T., Van Gool, L.: SURF: Speeded Up Robust Features. In: Leonardis, A., Bischof, H., Pinz, A. (eds.), Computer Vision – ECCV 2006, volume 3951 of Lecture Notes in Computer Science, pp. 404–417. Springer Berlin / Heidelberg, 2006. ISBN 978-3-540-33832-1.
  • [11] Rublee, E., Rabaud, V., Konolige, K., Bradski, G.: ORB: An Efficient Alternative to SIFT or SURF. In: International Conference on Computer Vision. Barcelona, 2011.
  • [12] Alcantarilla, P. F., Bartoli, A., Davison, A. J.: KAZE Features. In: Computer Vision – ECCV 2012, volume 7577 of Lecture Notes in Computer Science, pp. 214–227. Springer Berlin Heidelberg, 2012. ISBN 978-3-642-33782-6.
  • [13] Fischler, M. A., Bolles, R. C.: Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Commun. ACM, 24(6), pp. 381–395, 1981. ISSN 0001-0782.
  • [14] Klein, G.: Visual tracking for augmented reality. Ph.D. thesis, University of Cambridge, 2006.
  • [15] Klein, G., Murray, D.: Parallel Tracking and Mapping for Small AR Workspaces. In: Proc. Sixth IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR'07). Nara, Japan, 2007.
  • [16] Castle, R., Klein, G., Murray, D.: Video-Rate Localization in Multiple Maps for Wearable Augmented Reality. In: IEEE International Symposium on Wearable Computers (ISWC), pp. 15–22. 2008. ISSN 1550-4816.
  • [17] Rosten, E., Drummond, T.: Fusing points and lines for high performance tracking. In: IEEE International Conference on Computer Vision, volume 2, pp. 1508–1511. 2005.
  • [18] Klein, G., Murray, D.: Parallel Tracking and Mapping on a Camera Phone. In: Proc. Eighth IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR'09). Orlando, 2009.
  • [19] Weiss, S., Achtelik, M., Lynen, S., Achtelik, M., Kneip, L., Chli, M., Siegwart, R.: Monocular Vision for Long-term Micro Aerial Vehicle State Estimation: A Compendium. Journal of Field Robotics, 30(5), pp. 803–831, 2013. ISSN 1556-4967.
  • [20] Minetto, R., Leite, N., Stolfi, J.: AffTrack: Robust tracking of features in variable-zoom videos. In: IEEE International Conference on Image Processing (ICIP), 2009 16th, pp. 4285–4288. 2009. ISSN 1522-4880.
  • [21] Newcombe, R. A., Lovegrove, S. J., Davison, A. J.: DTAM: Dense tracking and mapping in real-time. In: IEEE Int. Conf. on Computer Vision (ICCV), pp. 2320–2327. 2011. ISSN 1550-5499.
  • [22] Engel, J., Sturm, J., Cremers, D.: Semi-dense Visual Odometry for a Monocular Camera. In: IEEE International Conference on Computer Vision (ICCV 2013), pp. 1449–1456. 2013.
  • [23] Engel, J., Schöps, T., Cremers, D.: LSD-SLAM: Large-Scale Direct Monocular SLAM. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.), Computer Vision – ECCV 2014, volume 8690 of Lecture Notes in Computer Science, pp. 834–849. Springer International Publishing, 2014. ISBN 978-3-319-10604-5.
  • [24] Forster, C., Pizzoli, M., Scaramuzza, D.: SVO: Fast Semi-Direct Monocular Visual Odometry. In: IEEE International Conference on Robotics and Automation (ICRA), pp. 15–22. 2014.
  • [25] Newcombe, R. A., Izadi, S., Hilliges, O., Molyneaux, D., Kim, D., Davison, A. J., Kohli, P., Shotton, J., Hodges, S., Fitzgibbon, A.: KinectFusion: Real-time dense surface mapping and tracking. In: IEEE Int. Symp. on Mixed and Augmented Reality (ISMAR), pp. 127–136. 2011.
  • [26] Meilland, M., Barat, C., Comport, A.: 3D High Dynamic Range dense visual SLAM and its application to real-time object re-lighting. In: IEEE Int. Symposium on Mixed and Augmented Reality (ISMAR 2013), pp. 143–152. 2013.
  • [27] Henry, P., Krainin, M., Herbst, E., Ren, X., Fox, D.: RGB-D Mapping: Using Kinect-style Depth Cameras for Dense 3D Modeling of Indoor Environments. Int. Journal of Robotics Research, 31(5), pp. 647–663, 2012. ISSN 0278-3649.
  • [28] Whelan, T., Kaess, M., Fallon, M., Johannsson, H., Leonard, J., McDonald, J.: Kintinuous: Spatially Extended KinectFusion. Technical Report MIT-CSAIL-TR-2012-020, Massachusetts Institute of Technology, 2012.
  • [29] Whelan, T., Kaess, M., Johannsson, H., Fallon, M., Leonard, J., McDonald, J.: Real-time Large Scale Dense RGB-D SLAM with Volumetric Fusion. Intl. J. of Robotics Research, IJRR, 34(4-5), pp. 598–626, 2015.
  • [30] Google Inc.: Project Tango. https://www.google.com/atap/project-tango. Accessed: 2015-05-31.
  • [31] Intel: RealSense Technology. http://www.intel.com/content/www/us/en/architecture-and-technology/realsense-overview.html. Accessed: 2015-05-31.
  • [32] OGRE: Open Source 3D Graphics Engine. http://www.ogre3d.org. Accessed: 2015-05-31.
  • [33] Unity: Unity 3D Engine. http://unity3d.com. Accessed: 2015-05-31.
  • [34] Nister, D., Stewenius, H.: Scalable Recognition with a Vocabulary Tree. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, volume 2, pp. 2161–2168. 2006. ISSN 1063-6919.
  • [35] Shi, J., Tomasi, C.: Good features to track. In: Computer Vision and Pattern Recognition, 1994. Proceedings CVPR ’94., 1994 IEEE Computer Society Conference on, pp. 593–600. 1994. ISSN 1063-6919.
  • [36] Varadarajan, V. S.: Lie groups, Lie algebras, and their representations, volume 102. Prentice-Hall Englewood Cliffs, NJ, 1974.
  • [37] Rusu, R. B., Cousins, S.: 3D is here: Point Cloud Library (PCL). In: IEEE International Conference on Robotics and Automation (ICRA). Shanghai, China, 2011.
  • [38] Grisetti, G., Stachniss, C., Grzonka, S., Burgard, W.: A tree parameterization for efficiently computing maximum likelihood maps using gradient descent. In: Proc. of Robotics: Science and Systems. 2007.
  • [39] Wu, C., Agarwal, S., Curless, B., Seitz, S. M.: Multicore bundle adjustment. In: IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 3057–3064. 2011. ISSN 1063-6919.
  • [40] Wu, C.: SiftGPU: A GPU Implementation of Scale Invariant Feature Transform (SIFT). http://cs.unc.edu/~ccwu/siftgpu, 2007.
  • [41] Saracchini, R., Catalina-Ortega, C., Bordoni, L.: A Mobile Augmented Reality Assistive Technology for the Elderly. Comunicar, 23(45), pp. 65–74, 2015.
  • [42] Alcantarilla, P. F., Nuevo, J., Bartoli, A.: Fast Explicit Diffusion for Accelerated Features in Nonlinear Scale Spaces. In: British Machine Vision Conf. (BMVC). 2013.
  • [43] Ambient Assisted Living Joint Programme (ref. AAL-2010-3-116): NACODEAL - Natural Communication Device for Assisted Living. European Union Project. www.nacodeal.eu, 2014.
Document type
YADDA identifier
bwmeta1.element.baztech-69770c77-5cdd-4804-afb1-719a55998e1d