Article title

Wide-angle vision for road views

Identifiers
Title variants
Languages of publication
EN
Abstracts
EN
The field of view of a wide-angle image exceeds (say) 90 degrees, so such an image contains more information than a standard image. A wide field of view is more advantageous than standard input for understanding the geometry of 3D scenes and for estimating the poses of panoramic sensors within such scenes. Wide-angle imaging sensors and methodologies are therefore commonly used in applications such as road safety, street surveillance, virtual street touring, and 3D street modelling. The paper reviews related wide-angle vision technologies, focusing on mathematical issues rather than on hardware.
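The mathematical point behind the abstract can be illustrated with two standard projection models: the perspective (pinhole) model maps a ray at incidence angle θ to image radius r = f·tan θ, which diverges as θ approaches 90 degrees, while fisheye models such as the equidistant projection r = f·θ map even rays at and beyond 90 degrees to finite image points. This is a minimal sketch of that comparison, not code from the paper; the function names and the focal length f = 1 are illustrative assumptions.

```python
import math

def pinhole_radius(theta, f=1.0):
    """Perspective (pinhole) projection: r = f * tan(theta).
    Diverges as the incidence angle theta approaches 90 degrees,
    so it cannot represent a field of view of 180 degrees or more."""
    return f * math.tan(theta)

def equidistant_fisheye_radius(theta, f=1.0):
    """Equidistant fisheye projection: r = f * theta.
    The image radius grows linearly with the incidence angle, so
    rays at 90 degrees (and beyond) still map to finite image points."""
    return f * theta

# Compare the two models for increasing incidence angles (degrees):
# the pinhole radius blows up near 90 degrees, the fisheye radius does not.
for deg in (10, 45, 80, 89):
    th = math.radians(deg)
    print(f"{deg:3d}  pinhole: {pinhole_radius(th):8.3f}  "
          f"fisheye: {equidistant_fisheye_radius(th):6.3f}")
```

The equidistant model is only one of several fisheye projections (stereographic, equisolid-angle, and orthographic variants exist), but it suffices to show why wide-angle sensors require imaging models beyond the pinhole camera.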
Authors
author
  • Computer Science and Information Engineering, National Ilan University, 1, Sec. 1, Shen-lung Road, Yi-Lan, 260, Taiwan, R.O.C.
author
  • Vision Systems E-2, Technical University Hamburg-Harburg, 95, Schwarzenbergstraße, D-21073 Hamburg, Germany
author
  • Tamaki Innovation Campus, The University of Auckland, 261, Morrin Rd., St Johns, Auckland 1072, New Zealand
author
  • Tamaki Innovation Campus, The University of Auckland, 261, Morrin Rd., St Johns, Auckland 1072, New Zealand
Bibliography
  • 1. F. Huang, R. Klette, and K. Scheibe, Panoramic Imaging: Sensor-Line Cameras and Laser Range-Finders, Wiley, Chichester, 2008.
  • 2. K. Daniilidis and R. Klette, Imaging Beyond the Pinhole Camera, Springer, New York, 2007.
  • 3. S. Nayar, "Catadioptric omnidirectional camera", Proc. Conf. Comput. Vision Pattern Recogn., pp. 482–488, San Juan, Puerto Rico, 1997.
  • 4. D. G. Lowe, "Automatic panoramic image stitching using invariant features", Int. J. Comput. Vision 74, 59–73 (2006).
  • 5. S. Peleg, "Panoramic mosaics by manifold projection", Proc. Conf. Comput. Vision Pattern Recogn., pp. 338–343, San Juan, Puerto Rico, 1997.
  • 6. R. Szeliski, "Image alignment and stitching: A tutorial", Technical Report MSR-TR-2004-92, Microsoft Research, 2004.
  • 7. R. Szeliski and H.-Y. Shum, "Creating full view panoramic image mosaics and texture-mapped models", Proc. SIGGRAPH, ACM Press, pp. 251–258, Los Angeles, 1997.
  • 8. Y.-C. Liu, K.-Y. Lin, and Y.-S. Chen, "Bird's-eye view vision system for vehicle surrounding monitoring", Proc. Robot Vision 4931, pp. 207–218, Springer-Verlag, Heidelberg, 2008.
  • 9. T. Ehlgen, T. Pajdla, and D. Ammon, "Eliminating blind spots for assisted driving", IEEE Trans. Intell. Transp. 9(4), 657–665 (2008).
  • 10. S. Gehrig, C. Rabe, and L. Krueger, "6D vision goes fisheye for intersection assistance", Proc. Canadian Conf. Comput. Robot Vision, pp. 34–41, Windsor, 2008.
  • 11. M. Pollefeys, D. Nister, J.-M. Frahm, A. Akbarzadeh, P. Mordohai, B. Clipp, C. Engels, D. Gallup, S. J. Kim, P. Merrell, C. Salmi, S. Sinha, B. Talton, L. Wang, Q. Yang, H. Stewenius, R. Yang, G. Welch, and H. Towles, "Detailed real-time urban 3D reconstruction from video", Int. J. Comput. Vision 78, 143–167 (2008).
  • 12. G. Hartmann and R. Klette, "Cylinder sweep: Fisheye images into a bird's-eye view", Technical Report MI-tech-TR 69, The University of Auckland, New Zealand, 2011.
  • 13. S.-B. Kang, R. Szeliski, and M. Uyttendaele, "Seamless stitching using multi-perspective plane sweep", Technical Report MSR-TR-2001-48, Microsoft Research, 2001.
  • 14. F. Huang and R. Klette, "Stereo panorama acquisition and automatic image disparity adjustment for stereoscopic visualization", Multimed. Tools Appl. 47, 353–377 (2010).
  • 15. C. Frueh, S. Jain, and A. Zakhor, "Data processing algorithms for generating textured 3D building facade meshes from laser scans and camera images", Int. J. Comput. Vision 61(2), 159–184 (2005).
  • 16. M. Fleck, "Perspective projection: The wrong imaging model", Technical Report, Dep. Computer Science, University of Iowa, 1995.
  • 17. J. Kumler and M. Bauer, "Fisheye lens designs and their relative performance", Proc. SPIE 4093, pp. 360–369 (2000).
  • 18. K. Miyamoto, "Fish-eye lens", J. Opt. Soc. Amer. 54, 1060–1061 (1964).
  • 19. H. Bakstein and T. Pajdla, "Panoramic mosaicing with 180° field of view lens", Proc. IEEE Workshop Omnidirectional Vision, pp. 60–67, Copenhagen, 2002.
  • 20. J. Kannala and S. S. Brandt, "A generic camera model and calibration method for conventional, wide-angle, and fish-eye lenses", IEEE Trans. Pattern Anal. Machine Intell. 28, 1335–1340 (2006).
  • 21. D. Scaramuzza, A. Martinelli, and R. Siegwart, "A toolbox for easily calibrating omnidirectional cameras", Proc. IEEE/RSJ Int. Conf. Intell. Robots Systems, pp. 5695–5701, Beijing, 2006.
  • 22. H. Ishiguro, M. Yamamoto, and S. Tsuji, "Omni-directional stereo", IEEE Trans. Pattern Anal. Machine Intell. 14, 257–262 (1992).
  • 23. Y. Li, H. Y. Shum, C. K. Tang, and R. Szeliski, "Stereo reconstruction from multiperspective panoramas", IEEE Trans. Pattern Anal. Machine Intell. 26, 45–62 (2004).
  • 24. D. Murray, "Recovering range using virtual multicamera stereo", Computer Vision Image Understanding 61, 285–291 (1995).
  • 25. S. Peleg and M. Ben-Ezra, "Stereo panorama with a single camera", Proc. Conf. Comput. Vision Pattern Recogn., pp. 395–401, Fort Collins, 1999.
  • 26. F. Huang, A. Torii, and R. Klette, "Geometries of panoramic images and 3D vision", Machine Graphics & Vision 9, 463–477 (2010).
  • 27. J.-Y. Bouguet, "Camera calibration toolbox for MATLAB", http://www.vision.caltech.edu/bouguetj/calib_doc/, 2010.
  • 28. T.-H. Ho, C. C. Davis, and S. D. Milner, "Using geometric constraints for fisheye camera calibration", Proc. IEEE Workshop Omnidirectional Vision, pp. 17–21, Beijing, 2005.
  • 29. J.-Y. Bouguet, "Visual methods for three-dimensional modelling", PhD thesis, California Institute of Technology, 1999.
  • 30. S. Li, "Binocular spherical stereo", IEEE Trans. Intell. Transp. 9, 589–600 (2008).
  • 31. S. E. Chen, "QuickTime VR – An image-based approach to virtual environment navigation", Proc. SIGGRAPH, pp. 29–37, Los Angeles, 1995.
  • 32. S. B. Kang and P. Desikan, "Virtual navigation of complex scenes using clusters of cylindrical panoramic images", Proc. Graphics Interface, pp. 223–232, Vancouver, 1998.
  • 33. S. B. Kang and R. Szeliski, "3-D scene data recovery using omnidirectional multibaseline stereo", Int. J. Comput. Vision 25, 167–183 (1997).
  • 34. H. Li, R. I. Hartley, and J. H. Kim, "A linear approach to motion estimation using generalized camera models", Proc. IEEE Comput. Society Conf. Comput. Vision Pattern Recogn., pp. 1–8, Anchorage, 2008.
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-article-BWAD-0033-0001