Article title

Fusion of door and corner features for scene recognition

Content / Full text
Identifiers
Title variants
Conference
International Seminar on Computational Intelligence, held in Tijuana, Mexico, in January 2010
Publication languages
EN
Abstracts
EN
Scene recognition is a paramount task for autonomous systems that navigate in open scenarios. To achieve high scene recognition performance it is necessary to use correct information, so data fusion is becoming a central point in the design of scene recognition systems. This paper presents a scene recognition system that uses a hierarchical neural network approach based on information fusion in indoor scenarios. The system extracts relevant information with respect to color and landmarks: color information is related mainly to the localization of doors, while landmarks are related to corner detection. The corner detection method proposed in the paper, based on corner detection windows, detects 99% of real corners with 13.43% false positives. The hierarchical neural system consists of two levels: the first level is built with one neural network and the second level with two. The hierarchical neural system, based on feed-forward architectures, achieves 90% correct recognition in the first level during training and 95% in validation. The first ANN in the second level shows 90.90% correct recognition during training and 87.5% in validation; the second ANN achieves 93.75% and 91.66% during training and validation, respectively. The total performance of the system was 86.6% during training and 90% in validation.
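As a reading aid only, the sketch below illustrates the kind of two-level hierarchy the abstract describes: a fused door/corner feature vector is passed to a first-level feed-forward network, which routes it to one of two second-level networks that output the scene label. This is a minimal Python/NumPy sketch under assumed details, not the authors' code; the feature dimensions, layer sizes, scene classes and the routing rule are illustrative assumptions, and training is omitted.

import numpy as np

# Minimal sketch of a two-level hierarchical feed-forward classifier:
# level 1 routes the fused feature vector, level 2 produces the scene label.
rng = np.random.default_rng(0)

def init_mlp(n_in, n_hidden, n_out):
    # Random, untrained weights; training (e.g. backpropagation) is not shown here.
    return (rng.normal(0.0, 0.1, (n_hidden, n_in)), np.zeros(n_hidden),
            rng.normal(0.0, 0.1, (n_out, n_hidden)), np.zeros(n_out))

def mlp_forward(x, W1, b1, W2, b2):
    # One sigmoid hidden layer, linear output scores.
    h = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))
    return W2 @ h + b2

# Hypothetical fused input: door/colour cues concatenated with corner cues.
door_features = rng.random(8)      # assumed size of the colour/door descriptor
corner_features = rng.random(4)    # assumed size of the corner-window statistics
x = np.concatenate([door_features, corner_features])

# Level 1: a single network decides which second-level network should answer.
level1 = init_mlp(x.size, 10, 2)
route = int(np.argmax(mlp_forward(x, *level1)))

# Level 2: two networks, each specialised for an assumed subset of scene classes.
scene_labels = [["corridor", "hall"], ["office", "laboratory"]]
level2 = [init_mlp(x.size, 10, len(labels)) for labels in scene_labels]
scores = mlp_forward(x, *level2[route])
print("predicted scene:", scene_labels[route][int(np.argmax(scores))])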
Keywords
Authors
Bibliography
  • [1] Kemp C., Edsinger A., Torres-Jara D., “Challenges for Robot Manipulation in Human Environments”, IEEE Robotics & Automation Magazine, March 2007, pp. 20-29.
  • [2] Durrant-Whyte H., Bailey T., “Simultaneous Localization and Mapping (SLAM): Part I”, IEEE Robotics & Automation Magazine, June 2006, pp. 99-108.
  • [3] Bailey T., Durrant-Whyte H., “Simultaneous Localization and Mapping (SLAM): Part II”, IEEE Robotics & Automation Magazine, September 2006, pp. 108-117.
  • [4] Addison J., Choong K., “Image Recognition For Mobile Applications”. In: International Conference on Image Processing - ICIP 2007, 2007, pp. VI-177-VI-180.
  • [5] DeSouza G., Kak A., “Vision for Mobile Robot Navigation: A Survey”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 2, February 2002, pp. 237-267.
  • [6] Kelly A., Nagy B., Stager D., Unnikrishnan R., “An Infrastructure-Free Automated Guided Vehicle Based on Computer Vision”, IEEE Robotics & Automation Magazine, September 2007, pp. 24-34.
  • [7] Srinivasan M.V., Thurrowgood S., Soccol D., “Competent Vision and Navigation Systems”, IEEE Robotics & Automation Magazine, September 2009, pp. 59-71.
  • [8] Antoine Maintz J.B., Viergever M.A., “A Survey of Medical Image Registration”, Medical Image Analysis, 1998, vol. 2, no. 1, pp. 1-36.
  • [9] Zitova B., Flusser J., “Image Registration Methods: A Survey”, Image and Vision Computing, vol. 21, 2003, pp. 977-1000.
  • [10] Tissainayagam P., Suter D., “Assessing the Performance of Corner Detectors for Point Feature Tracking Applications”, Image and Vision Computing, vol. 22, 2004, pp. 663-679.
  • [11] Mokhtarian F., Suomela R., “Curvature Scale Space for Robust Image Corner Detection”. In: International Conference on Pattern Recognition, Brisbane, Australia, 1998.
  • [12] Andrade J., Environment Learning for Indoor Mobile Robots. PhD Thesis, Universitat Politècnica de Catalunya, 2003.
  • [13] Rangarajan K., Shah M., van Brackle D., “Optimal Corner Detector”. In: 2nd International Conference on Computer Vision, December 1988, pp. 90-94.
  • [14] Smith S.M., Brady J.M., “SUSAN - A New Approach to Low Level Image Processing”, International Journal of Computer Vision, vol. 23, no. 1, May 1997, pp. 45-78.
  • [15] Harris C.G., Stephens M., “A Combined Corner and Edge Detector”. In: Proceedings of the Alvey Vision Conference, Manchester, 1988, pp. 189-192.
  • [16] He X.C., Yung N.H.C., “Curvature Scale Space Corner Detector with Adaptive Threshold and Dynamic Region of Support”. In: Proceedings of the 17th International Conference on Pattern Recognition, vol. 2, August 2004, pp. 791-794.
  • [17] Aguilar G., Sanchez G., Toscano K., Salinas M., Nakano M., Perez H., “Fingerprint Recognition”. In: 2nd International Conference on Internet Monitoring and Protection - ICIMP 2007, July 2007, pp. 32-32.
  • [18] Leung W.F., Leung S.H., Lau W.H., Luk A., “Fingerprint Recognition Using Neural Network”. In: Neural Networks for Signal Processing, Proceedings of the 1991 IEEE Workshop, 30 Sept. - 1 Oct. 1991, pp. 226-235.
  • [19] Chen Z., Birchfield S.T., “Visual Detection of Lintel-Occluded Doors from a Single Image”. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops 2008, pp. 1-8.
  • [20] Schwarz M.W., Cowan W.B., Beatty J.C., “An Experimental Comparison of RGB, YIQ, LAB, HSV, and Opponent Color Models”, ACM Transactions on Graphics, vol. 6, no. 2, 1987, pp. 123-158.
  • [21] Cariñena P., Regueiro C., Otero A., Bugarín A., Barro S., “Landmark Detection in Mobile Robotics Using Fuzzy Temporal Rules”, IEEE Transactions on Fuzzy Systems, vol. 12, no. 4, August 2004, pp. 423-435.
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-article-BUJ5-0030-0035